Is Facebook flagging fake news or censoring content?
It’s a question that many have been asking since June, when Facebook posted hiring notices for “news credibility specialists,” who come up with a list of credible and non-credible news publishers. This job, according to Facebook CEO Mark Zuckerberg, was previously accomplished by news consumers and artificial intelligence.
A source’s trustworthiness ranking determines whether its content will be boosted or suppressed. However, this team of specialists is headquartered at Facebook, in what is often considered liberal and secular Silicon Valley. The move generated fears of prejudice that critics say have proven real, as Facebook has been shown to discriminate against conservative viewpoints, among others.
Zuckerberg said in an April hearing in Congress that it is “a fair concern” that liberal posts may receive more favorable treatment because of his employees’ political leanings.
Conservative leaders have spoken out against this biased censorship. Following Zuckerberg’s congressional hearing, a joint statement by 62 conservative leaders, led by the Media Research Center, called on social media platforms to provide transparency, clarity on “hate speech,” equal footing for conservatives, and adjusted standards that align with the First Amendment as interpreted by the U.S. Supreme Court. They called on Facebook, YouTube, Twitter and Google to stop rejecting pro-life advertisements, banning gun videos and skewing search results, arguing that the tech titans “work with groups openly hostile to conservatives to restrict speech.”
“Social media companies must address these complaints if they wish to have any credibility with the conservative movement and its tens of millions of supporters,” the statement said.
In addition to harming freedom of speech by demoting articles and other content with which these specialists disagree, Facebook’s new policies also appear to censor right-leaning nonprofits that are trying to capture leads or fundraise on the platform.
In June – at the same time as the credibility specialists were hired – Facebook launched a new system for advertisements in the United States of “politics and issues.” The new policy includes adding a “paid for by” label on all politics and issues ads, as well as making these and news ads publicly searchable in an advertising archive.
While Facebook said it chose not to shut down political ads entirely because doing so could unfairly favor political candidates who cannot afford pricier advertising platforms, many nonprofits and businesses report that the keywords Facebook deems political under its new system are stifling their lead generation and their ability to advertise or fundraise effectively.
American Friends of Meir Panim, the fundraising arm of Meir Panim in Israel, works to alleviate poverty in Israel. The organization told Breaking Israel News that multiple ads in the last few months have been deemed political, despite its goal of feeding all Israelis regardless of background, religion, age, gender or ethnicity.
“Multiple times when we’ve boosted posts on our Facebook account, which has close to 30,000 followers, we’ve gotten an automated message from Facebook that the boost has not been approved due to ‘political content,’” said Danielle Rubin, project director at American Friends of Meir Panim. “But these posts have been appeals to support our work combating poverty in Israel, for example, with a ‘call to action’ to feed hungry children in Israel.”
Rubin said that though she cannot prove it, it seems that the word “Israel” causes Facebook’s artificial intelligence (AI) algorithm to automatically reject posts.
“When we get rejected, we submit an appeal to Facebook and after a live person reviews the post it is approved,” Rubin said, noting that while waiting on Facebook valuable leads and donors are lost.
“It’s concerning that the word ‘Israel’ is deemed automatically as political by Facebook,” she continued. “As a nonprofit, we are completely apolitical. We have restaurant-style soup kitchens located across Israel, and we provide a hot meal to anyone who walks through the door, regardless of race, religion or ethnic background. Feeding hungry people is not a political statement.”
According to Facebook, political ads are only available for verified U.S. advertisers who reside in the U.S. and who plan to target the U.S. with ads. Facebook defines political ads as anything related to: abortion, budget, civil rights, crime, economy, education, energy, environment, foreign policy, government reform, guns, health, immigration, infrastructure, military, poverty, social security, taxes, terrorism and values.
Many point out that Facebook’s definitions are vague and open to interpretation, especially when using human moderators – each of whom comes with his or her own political leanings and biases – to flag content.
How Does Facebook Make Decisions?
Last year, internal documents provided to Facebook moderators were released, showing that Facebook may make decisions based on expected response rather than on its own policies, international law or a moral code.
In a report released by The Guardian, the documents showed that, “Facebook does not want to remove Holocaust denial content in any country where it is illegal. But it appears to make exceptions in four countries – those where the site is likely to face prosecution or be sued.”
Dr. Tehilla Shwartz Altshuler, senior fellow and head of the Democracy in the Information Age Program at the Israel Democracy Institute, told Breaking Israel News that Facebook is trying to adopt a “neutral attitude” because of its commercial interests. Additionally, she said that “Facebook is creating pretty deep emotional social behavioral profiles about us” to sell to advertisers, something she finds “very worrisome in an age where our online life can greatly influence our personal and democratic decisions.”
The Stakes are High
Indeed, when it comes to finding the correct balance between free speech and censorship, the sources from which people consume news – increasingly, social media – shape the way people make informed democratic decisions.
The latest Pew Research Center study on social media usage showed that 45 percent of Americans get their news from Facebook – and that percentage is increasing. From 2016 to 2017, the share of Americans getting at least a portion of their news from social media rose from 62 percent to 67 percent. With 45 percent of American teenagers reporting that they are online “almost constantly” and another 44 percent saying they log on “several times a day,” the effects of censorship could shape the way Americans think and act.
Jonathan Greenblatt, national director and CEO of the ADL, voiced concern over the proliferation of fake news in closed groups on Facebook and encrypted media, such as WhatsApp.
“Facebook has made some progress in responding to Holocaust denial,” Greenblatt said, as “there are fewer open Facebook groups devoted to promoting Holocaust denial. But there are still closed groups that are hotbeds for it.”
Similarly, Shwartz Altshuler said that while Facebook is comparatively public, migration to closed channels, such as WhatsApp or closed Facebook groups, makes content harder to monitor, regulate, censor and punish.
“We now know from research about Twitter that fake news and disinformation penetrates deeper into social networks than the truth,” said Shwartz Altshuler. “And even though Facebook says they are slowing down fake news, we don’t know how slowing down is being applied.”
Another problem, she said, is the echo chamber effect, in which the information that appears in one’s news feed reflects one’s own interests and opinions. This, Shwartz Altshuler said, causes political polarization, as people are exposed only to certain opinions and news.
“The knowledge that you’re surrounded by other people who share the same ideas as you has a strengthening effect on people, increasing self confidence and sense of belonging,” she said. “Its influence on you is much stronger than when you are exposed to a marketplace of ideas.”
Shwartz Altshuler said Facebook can tell advertisers, in a tailor-made way, about individuals’ fears, emotions and other aspects of their profiles, making those advertisers far better able to influence their audience.
“Having the power to influence your opinions, especially before elections, undermines the whole idea of the democratic process,” Shwartz Altshuler said.
She cited Russia’s influence on American voters before the last election, when the Russians were able to target people who were not intending to go out and vote and encourage them to do so. This, said Shwartz Altshuler, helped President Donald Trump win the election.
“So sometimes it’s not only the influence on specific ideas, but also the ability to influence people to act in a way that would help one side of the election campaign,” she said.
Striking the Right Balance?
In a July interview with Recode, Zuckerberg said that Holocaust denial would remain permissible content on Facebook because, even though the content is “deeply offensive,” he didn’t “think that [Holocaust deniers are] intentionally getting it wrong.”
He explained that while Facebook will not remove conspiracy theories like Holocaust denial, it will “stop fake news and misinformation from spreading,” so that such content loses “the vast majority of its distribution.”
“Reducing the distribution of misinformation — rather than removing it outright — strikes the right balance between free expression and a safe and authentic community,” Facebook said in a statement.
But not all are in agreement that Facebook is striking the right balance. While the ADL says that Holocaust denial isn’t just misinformation – it’s hate speech that should be taken down – others say that Facebook censorship undermines the democratic practice, silences moderate voices and even limits fundraising efforts of organizations doing humanitarian work in regions of conflict.
What each of the groups has in common is their recognition that in this digital age where online content means power, transparent and accountable social media platforms are more vital than ever before.
ADL’s Greenblatt believes that giving a platform for Holocaust denial does not facilitate a “safe community,” a goal that Facebook expressed in a July statement. Rather, Greenblatt said in a report in the NY Daily News, “Facebook’s CEO fails to understand that Holocaust denial is not simply a gross distortion of the facts, but is also a pernicious form of anti-Semitic hate speech that serves no other purpose than to attack Jews.”
Because Facebook bans hate speech, he argued, Facebook should also root out Holocaust denial.
“We are talking here about a conspiracy theory which argues that Jews around the world knowingly fabricated evidence of their own genocide in order to extract reparations from Germany, gain world sympathy and facilitate the theft of Palestinian land for the creation of Israel,” Greenblatt said. “It is founded on the belief that Jews are able to force governments, Hollywood, the media and academia to promote a lie at the expense of non-Jews.”
Facebook community standards define hate speech as a “direct attack on people based on what we call protected characteristics — race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity, and serious disability or disease.” Greenblatt said he places anti-Semitism at the core of this definition.
While Greenblatt has called for more censorship on the issue of Holocaust denial, others believe that Facebook is going too far in its censorship – arguing that its censorship processes are biased and effectively silencing voices by wrongfully freezing certain types of accounts and posts.
For Imam Mohammad Tawhidi, often called “the Imam of peace,” this censorship undermined his ability to spread messages of moderation and counter the ideology and terrorist attacks of Islamic extremism to his 100,000 followers.
A post in which he criticized Palestinian terrorist organization Hamas “for launching a missile into an Israeli Kindergarten, targeting Jewish children” was flagged by “mass extremists” and led Facebook to disable his page as well as his personal account. Before his account was reactivated, he created a video plea to Mark Zuckerberg saying, “I use these platforms to bridge between different communities and to spread peace.”
He continued, “I wasn’t given the option to appeal on Facebook,” asking Zuckerberg why pages of Islamic extremists are verified and not taken down while “the page of a moderate Imam is taken down… Silencing me only allows the extremists to win.”
The disabling of Tawhidi’s post may have resulted from a Facebook employee responding to the number of users who flagged it, or from an algorithm built by Facebook engineers that flagged keywords. Tawhidi may never know, however, as Facebook is often not fully open about why a post, ad or page is shut down.
Moving Toward Transparency?
Many critics of Facebook’s censoring practices say the company should both define its terms more specifically and be more transparent about how and why it flags content.
Facebook is slowly responding to criticism of its unbalanced censorship practices – for example, by closing the account of Safa News Agency, which had close connections to Hamas, and by taking down videos that incite physical violence, such as in Myanmar, where killings were linked to hate speech on the site.
Facebook has also tested a “hate speech button” that asks users whether a status contains hate speech; the company confirmed it was trying to understand how people react to the new tag on different posts, including posts that Facebook does not consider hate speech.
Others agree that more transparency is necessary, calling for “open systems so that they can be held accountable, while giving weight to privacy concerns.”
“Facebook isn’t always accurate about their policies, and they are not transparent about them,” said Shwartz Altshuler, who maintained that leaders in the industry should be seeking international cooperation to help Facebook better define its platform’s community standards.
“What we have seen with the GDPR [the General Data Protection Regulation, the EU law on data protection and privacy] is that once you have a big body that is willing to stand against social media platforms, it can actually influence their mode of operations,” she said, noting that the recent decline in Facebook stock partly reflects the GDPR.
“This means that when the EU wants to implement values and to force social media platform to obey those values, they can do it,” said Shwartz Altshuler.
Shwartz Altshuler said that, in cooperation with German authorities, Israel tried to give the Israeli criminal justice system the power and authority to issue orders to remove content from the general Internet, but Facebook and Twitter said they would remove it only for users with Israeli IP addresses.
“So they would effectively clean only the Israeli Internet – and even then, you’re turning the Internet over to local authorities, which changes the nature of the Internet as we know it,” she explained.
Shwartz Altshuler suggested cooperatively defining what hate speech is, “by sorting it out to different sub-definitions.” For instance, she said Facebook should consider whether satire is considered hate speech.
“Is dehumanizing people through satire hate speech?” Shwartz Altshuler asked. “How exactly do you define a group when it comes to defining hate speech as it pertains to a group? There is a lot of work to be done here to make precise decisions and definitions.”
Another solution she offered is making the enforcement process far more transparent while also involving groups and bodies outside the platforms in enforcement – for instance, bodies of civil society that could make the decision when Facebook is unsure whether certain content should be removed.
“I think that this whole process can be applied only when governments gather together and make enough pressure on the platform in the sense that they say, ‘If you don’t do it yourself, according to our guidelines, we are going to legislate and do it ourselves,’” Shwartz Altshuler said. She also noted that this would be no different from how governments handle traditional media, which are charged with creating press councils and self-regulating bodies so that the government does not have to meddle in content decisions.
How will Facebook respond?
The community is waiting. In the meantime, the stakes are high.