
What The ‘Facebook Papers’ Reveal About The Social Network’s Advertising Business



In this photo illustration, the Facebook logo is seen on printed paper in March 2021.

NurPhoto via Getty Images

The “Facebook Papers” have been revelatory when it comes to how the company formerly known as Facebook deals with content moderation and a wide variety of other issues. However, they also shed light on the social network’s advertising business and how users respond to the ads they see. The thousands of pages of internal documents gathered and released by whistleblower Frances Haugen provide myriad examples of how Facebook—which rebranded as Meta last month—and its subsidiaries develop ideas, roll out products and research users.

A lot of attention has been paid to the details surrounding Facebook’s handling of misinformation and the effects its content can have on people—from mental and physical health issues to body image and political perceptions. But the documents also offer fascinating glimpses into Facebook’s ad products, how they’re perceived by users and marketers, and how the social network’s employees have sought to address a range of concerns.

One document from February 2016 raises questions about the impact of Facebook’s reaction emojis, which had debuted earlier that year. A post written by an unnamed Facebook employee detailed feedback from an advertising client, the mental health services company Joyable, which asked that the “angry” emoji be removed from its ads. The document revealed that Joyable spent nearly $1 million a year—roughly $2,500 a day—on Facebook ads. Although just 5 of the 75 emoji reactions on the Joyable ad—which included the text “In 5 minutes, you could start overcoming social anxiety”—were “angry,” that was 5 reactions too many. The client complained to the Facebook employee, explaining “it’s bad for our brand to have people publicly panning our ads on Facebook” and that “I’m sure it’s bad for our ROI also.”

“This is a particular problem for us because mental health is polarizing,” according to the comment from the Joyable rep to the Facebook employee. “It’s already a headwind for us to advertise on Facebook because of comments like ‘social anxiety isn’t real,’ ‘take your head out of your phone,’ or ‘just drink alcohol.’ This is making it much worse.”

Internal documents released by Facebook whistleblower Frances Haugen show employees discussing an advertiser’s frustration at angry emoji reactions appearing on an advertisement.

Frances Haugen

These revelations are based on hundreds of documents that Haugen provided to the Securities and Exchange Commission. Redacted versions were also prepared by Haugen’s legal team and provided to Congress, and news organizations, including SME, obtained those redacted versions last month.

While speaking to the British Parliament last month, Haugen said serving up “hateful, angry, divisive” ads was cheaper than running other kinds of ads. She added that Facebook’s ads were priced partially on the probability that users would interact with them.

“We have seen over and over again in Facebook’s research, it is easier to provoke people to anger than to empathy or compassion, and so we are literally subsidizing hate on these platforms.”

Facebook whistleblower Frances Haugen speaks to the British Parliament.

“It is cheaper, substantially, to run an angry, hateful, divisive ad than it is to run a compassionate, empathetic ad,” Haugen said. “And I think there is a need for things even discussing disclosures of what rates people are paying for ads, having full transparency on the ad stream and understanding what are those biases that come and how ads are targeted.”
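Haugen’s pricing point is easier to see with a toy model. The sketch below is a generalized second-price auction with engagement-based quality scores—a common structure in online advertising, not Facebook’s actual or disclosed pricing system—and every name, bid and engagement rate in it is an illustrative assumption. It shows how an ad predicted to provoke more interactions ends up paying less per interaction than a rival with an identical bid.

```python
# Illustrative sketch only: a generalized second-price auction with
# engagement-based quality scores. This is NOT Facebook's actual pricing
# system; all names, bids and engagement rates are made-up assumptions.

from dataclasses import dataclass

@dataclass
class Ad:
    name: str
    bid: float                   # advertiser's bid per interaction, in dollars
    predicted_engagement: float  # estimated probability a user interacts with the ad

def run_auction(ads: list[Ad]) -> tuple[Ad, float]:
    """Rank ads by bid * predicted engagement; the winner pays just enough
    per interaction to match the runner-up's score (second-price logic)."""
    ranked = sorted(ads, key=lambda a: a.bid * a.predicted_engagement, reverse=True)
    winner, runner_up = ranked[0], ranked[1]
    price_per_interaction = (
        runner_up.bid * runner_up.predicted_engagement
    ) / winner.predicted_engagement
    return winner, price_per_interaction

# Two ads with the same $1.00 bid: the "divisive" ad is predicted to draw
# reactions 2.5x as often as the "empathetic" one (made-up numbers).
divisive = Ad("divisive ad", bid=1.00, predicted_engagement=0.05)
empathetic = Ad("empathetic ad", bid=1.00, predicted_engagement=0.02)

winner, price = run_auction([divisive, empathetic])
print(f"{winner.name} wins and pays ${price:.2f} per interaction")
# -> the divisive ad wins and pays $0.40 per interaction, well under its
#    $1.00 bid, because its higher predicted engagement discounts the price.
```

Under a model like this, a higher predicted interaction rate both wins the placement and discounts its price—the “subsidy” for provocative content that Haugen describes.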

At least one brand has pulled its ads from Facebook in response to the Facebook Papers. Earlier this month, the egg and butter producer Vital Farms announced plans to pause all paid advertising on Facebook and Instagram “until we are confident that the content on their platforms is managed responsibly and not intentionally posing harm,” explaining that it’s “our small part to move a big conversation forward.”

Last week, Lush went a step further, deleting all of its social media accounts—going beyond Meta properties and also shutting down its accounts on Snapchat and TikTok. The British cosmetics giant explained that it “wouldn’t ask our customers to meet us down a dark and dangerous alleyway” and called on regulators to pass laws to protect customers from “the harm and manipulation they may experience whilst trying to connect with us on social media.”

“I’ve spent all my life avoiding putting harmful ingredients in my products,” Lush cofounder and CEO Mark Constantine said in a statement. “There is now overwhelming evidence we are being put at risk when using social media. I’m not willing to expose my customers to this harm, so it’s time to take it out of the mix.”

How Political Ads Shape Users’ Perceptions Of Facebook

Perhaps one of the biggest revelations about Facebook’s advertising business has been how it has handled politics. Political ads were seen as a nuisance even before the November 2020 election. Internal Facebook documents from March 2020 found that users who were flooded with political ads were less happy with their experience on the platform—and closed out of them about as often as they closed out of the most disliked non-political ads. Comparing negative reactions across ad categories, Facebook researchers said users closed out of “sexually suggestive” or “scammy” ads at rates similar to political ads.

In the report—titled “How do political ads impact user sentiment towards FB ads?”—researchers found that, for one tenth of users, political ads accounted for more than 8% of the ad impressions in their feeds during the first two months of 2020. Facebook’s research found that users whose news feeds were more than 10% political ads reported being “somewhat or very dissatisfied” at a rate 1.3 times higher than users whose feeds were 0% to 1% political ads. In fact, users with high exposure to political ads were “significantly more likely to be dissatisfied with their Facebook ads experience.”

These findings shed some light on why people find certain ads so bothersome. For example, some users had negative reactions to ads they perceived as “misleading, offensive or fake news content.” Other respondents said they didn’t like ads that they felt touched on a “sensitive topic” or didn’t match their own political affiliation. (However, Facebook’s researchers said affiliation mismatch didn’t fully explain most of the ads that people closed.)

A survey of 3.6 million Facebook users conducted by the company earlier this year found that 30% of young adults in the U.S. reported “seeing too many ads.”

Facebook’s research also found that users weren’t very likely to see ads from opposing viewpoints. Conservative users saw very few ads from primarily liberal audience pages, and only 10% of those from Democratic civic pages. Meanwhile, liberal users saw “almost no ads” from Republican civic pages and less than 5% from primarily conservative audience pages. Ads from pages associated with moderates were the most likely to be x-ed out. Researchers suggested that Facebook show users more information about how to change their ad preferences, while also restricting the overall amount of political advertising.

Facebook’s research into user sentiment about political ads also shows the frequency with which people opt out of different types of non-political advertisements.

Frances Haugen

In a separate report titled “Effects Of Turning Off Political Ads,” dated August 25, 2020, the author wrote that users saw “slightly less civic content” after political ads were turned off for two weeks. Researchers also found that users clicked on civic content at nearly the same rate whether it came from Pages or from Groups.

Facebook’s struggles with transparency around political ads are also evident in the documents. In a post from November 9, 2018, a Facebook employee explained that there was “nothing we could do” when it came to some aspects of properly labeling advertisers or preventing manipulative actors. The company had discovered an advertising campaign purchased through a Facebook Page that appeared to support liberal causes while concealing who was actually behind the ads. The tactic—known as astroturfing—is something Facebook has associated with voter suppression.

Meta has taken additional measures over the years to increase transparency and authenticate political ads. In 2019, the company started requiring political advertisers to provide more details about their organizations before their ads could run. And in 2020, it banned new ads about elections, social issues and politics between October 27 and November 3.

“Since the 2018 midterms, we have strengthened our policies related to election interference and political ad transparency,” a Meta spokesperson said in an emailed statement to SME. “We continue working to make political advertising more transparent on our platform and we welcome updated regulations and help from policymakers as we evolve our policies in this space.”

Other leaked internal files in the Facebook Papers detail Facebook’s war against vaccine and QAnon misinformation during the Covid-19 crisis across ads, posts and comments. For example, a document from March 2021 uses the analogy of a rock thrown into a pond to describe how misinformation spreads, where the rock is “bad content entering our system” and the ripples are how the social network responds. While the company has sought to “stop as many rocks from being thrown,” “mute the ripples” and “fill the void with good content and conversations,” the document explains there is more to do to mitigate misinformation across ad formats, video ad breaks and Instant Articles. Facebook also looked at vaccine hesitancy content to help with enforcement against foreign ad farms.

“The fact remains that anything near and dear to people’s hearts, such as their health, people will exploit for profit, and authoritative info sells less well than fear,” according to the document.

Facebook documents dated March 16, 2021, explain how the company attempted to prevent vaccine misinformation.

The Facebook Papers

How The Whistleblowing Has Reverberated So Far

The revelations in the Facebook Papers have also given politicians around the globe more material for investigations into Meta. While Haugen has already testified before Congress, members of the U.S. Senate will speak with Adam Mosseri—the head of Meta-owned Instagram—on December 6 as part of a series of hearings about how to protect children online. (Meta’s head of safety, Antigone Davis, also met with lawmakers back in September and disputed allegations that the company’s platforms are harmful to teens.) Several pieces of antitrust legislation are being considered by lawmakers in both the U.S. House and Senate.

European Union leaders have also cited the Facebook Papers as another reason to move forward with a proposal to regulate Big Tech’s political advertising and other parts of its business. Earlier this month at the Web Summit tech conference in Lisbon, Věra Jourová, European Commission vice president for values and transparency, said lawmakers “would not be able to convince the people that regulation is needed” if whistleblowers like Haugen and others had not shed light on the company’s internal processes.

“If we want to be sure that people are free to choose, we need to make sure the information they see online is not fueled by obscure functioning of platforms, algorithmic systems and an army of undetected bots.”

Věra Jourová, European Commission vice president for values and transparency

Meta has made a series of adjustments to its business in the weeks since, including to its advertising and data privacy policies. On November 2, the company announced it would shut down its facial recognition system, which had been heavily criticized by consumer advocates. One week later, it announced it would no longer allow advertisers to buy ads based on data related to users’ race, political affiliation, sexual orientation, religion and health—information deemed too sensitive to be used in targeted messaging.

Speaking onstage with SME at Web Summit, Christopher Wylie—the former Cambridge Analytica employee who shed light on Facebook’s data privacy issues when he came forward as a whistleblower in 2018—said the Facebook Papers and the discussions around them feel to him like “déjà vu.”

“Really déjà vu for me with the Senate hearings, and all this,” Wylie said. “We’re just talking about the same thing over and over and over again. We’re sort of stuck in this loop, and I think one of the problems is it’s clear there are a lot of problems, and those are constantly being discussed, but we’re sort of missing the conversation around solutions and frameworks for regulation.”

The Brand Safety Questions Are Déjà Vu, Too

Facebook employees were also concerned about the company allowing right-wing websites into its larger network of publishers. In a post dated June 4, 2020, a Facebook employee wrote “Do I need to explain this one,” attached to a photo of Breitbart News headlines related to the George Floyd protests. An older post, written in October 2018 by a Facebook employee who worked on the Facebook Audience Network, raised similar concerns. That post—titled “We need to talk about Breitbart (again)”—argued that while Facebook claimed to be politically neutral and Breitbart had not yet seemed to violate any Facebook policies, allowing the website to monetize through Facebook was “a political statement.”

The Facebook employee, whose name was redacted from the document, said that 11,000 advertisers had added Breitbart to the list of websites they wanted to avoid advertising on. Of the roughly 30,000 advertiser block lists in existence, nearly every one included Breitbart. Facebook ultimately removed Breitbart from its Audience Network at the end of last year.

“When talking about brand safety, which is a huge deal for most of our advertisers and the second-most likely reason for advertiser churn, we also hear about Breitbart,” the Facebook employee wrote. “When they talk about which publishers they want to block, it’s often them…This isn’t right-wing news, which I agree should be allowed just as much as left-wing news, it’s vitriol. It’s losing us advertisers, trust, money, and moral integrity on a daily basis. We need to reconsider and act.”

When asked, a Meta spokesperson confirmed to SME that Breitbart had been removed from Facebook’s Audience Network. The company relies on third-party fact-checkers to rate content and maintains internal systems for repeat offenders. Those systems can apply penalties when a Page’s content receives multiple false ratings, blocking the Page from monetization and advertising.

A screenshot from part of an internal Facebook Q&A provided to the company’s sales team after the Jan. 6 attack on the U.S. Capitol.

Frances Haugen

“We make changes to reduce problematic or low-quality content to improve people’s experiences on the platform, not because of a Page’s political point of view,” the Meta spokesperson said. “When it comes to changes that will impact public Pages like publishers, of course we analyze the effect of the proposed change before we make it.”

Another document shows how Facebook instructed its global sales team to formulate its response following the January 6 attack on Capitol Hill. In an internal document labeled as a high priority, the company encouraged employees to “respond reactively” to questions from clients, even offering a script. One answer addressed advertisers halting their spending and explained how Facebook removed content that incited violence. Another explained why the company removed a video posted by then-U.S. President Donald Trump, suggesting that his posts “contribute to, rather than diminish, the risk of ongoing violence.”

An update on January 7 included answers to questions about Facebook’s decision to block Trump from posting, including why it had decided that now was the time to take action. On January 15, it updated the post to include responses regarding whether the company saw violence coming ahead of time and, if so, why it didn’t act. The post noted that prior to the attack, Facebook had removed more than 600 “militarized social movements from our platform,” as well as the original “Stop The Steal” group and various hate groups.

“Demand Side Problems”

In an internal note to staff dated August 2020, longtime Facebook veteran Andrew Bosworth—who joined the social network in 2006, built Facebook’s mobile advertising business and took on a number of other high-profile roles before becoming chief technology officer in September—described Facebook’s problem of moderating hate speech as one of supply and demand. (The post was also published to Bosworth’s public blog in January 2021.) He said that the more Facebook invests in ways to improve its content quality controls, “the harder people work to circumvent those tools.”

“As a society we don’t have a hate speech supply problem, we have a hate speech demand problem.”

Meta CTO Andrew Bosworth

“Online platforms work on the supply side because they don’t control the demand side, and they will continue to invest huge amounts there to keep people safe,” he wrote. “It is a key component of our responsibility as a platform (and I think we do it better than any of our competitors here at Facebook). But until we make more social progress as a society we should temper our expectations for results.”

Bosworth also compared overcoming the issue to building Facebook’s ads business between 2012 and 2017, saying it’s “very tempting to focus on growing the supply of spaces to show ads because that gives a predictable return on investment.”

“However if demand remains fixed you are just buying your way down the inventory in terms of quality and into diminishing marginal returns,” he wrote. “Instead we focused on the demand side which is slower to move and harder to measure but increases the value of existing inventory.”

“Medieval” Meta City

Perhaps one of the most interesting metaphors for Facebook and the Facebook Papers comes from an October 2018 post titled “A note about plagues.” The author wrote that Facebook is “currently a medieval city,” and while such a city may have marketplaces, art galleries, universities and inventions, there are also plagues.

“Before you even realize what is happening, it sweeps through the city like a fire,” the author wrote. “Its virulence is fearsome. It is unlike anything you have ever seen. Although you try to control it, nothing seems to work. It eventually goes away, leaving the city devastated: many of its inhabitants were killed, and the ones who survived are scared.”

The employee notes that some people in the medieval city might say “this was a consequence of some simple error” that can be fixed to prevent future plagues, while others might suggest it’s “not our problem” if disease is inevitable:

“But that would also be a mistake. The city is largely what we made it. It has opened up many opportunities for its residents, and also great possibilities for germs. People are now closer together than ever before, packed so tightly that the waste of one can infect others. It increased the number and quality of all contacts. That broke some of the barriers that kept local outbreaks from becoming global. We are responsible for this.

Others might disagree, saying that people who want the benefits of living and working in cities must also accept the risk of contracting disease. But that, too, would be wrong. This problem can be solved.”

The post also notes that cities have built sewers for waste, filters for drinking water, insecticides for fleas and antibiotics and vaccines for disease—suggesting that problems have been solved to mitigate major issues in the past.

“People come here and get value from it, but they face new dangers that they are not used to,” the author wrote. “It is something that never existed in the history of the world, so it is entirely reasonable that we do not understand its consequences yet. But it doesn’t mean that we should accept them. We have a unique opportunity to study them and find solutions.”

Whether Meta can emerge from its “medieval” Facebook era and enter a renaissance remains to be seen. But on the company’s third-quarter earnings call in October, Mark Zuckerberg, Facebook’s cofounder and Meta’s CEO, dismissed the Facebook Papers as an effort to “paint a false picture of our company,” while noting that “good faith criticism helps us get better.”

“I also think that any honest account should be clear that these issues aren’t primarily about social media,” Zuckerberg said. “That means that no matter what Facebook does, we’re never going to solve them on our own. For example, polarization started rising in the U.S. before I was born…The reality is, if social media is not the main driver of these issues, then it probably can’t fix them by itself either.”
