
Algorithms that Facebook's censors use to differentiate between hate speech and legitimate political expression.


Julia Angwin of ProPublica and Hannes Grassegger, special to ProPublica (June 28, 2017), deconstruct their own clickbait:


A trove of internal documents reviewed by ProPublica sheds new light on the secret guidelines that Facebook's censors use to distinguish between hate speech and legitimate political expression. The documents reveal the rationale behind seemingly inconsistent decisions. For instance, Higgins' incitement to violence passed muster because it targeted a specific sub-group of Muslims -- those that are "radicalized" -- while Delgado's post was deleted for attacking whites in general.


Hoffman said the team also relied on the principle of harm articulated by John Stuart Mill, a 19th-century English political philosopher. It states "that the only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others." That led to the development of Facebook's "credible threat" standard, which bans posts that describe specific actions that could threaten others, but allows threats that are not likely to be carried out.

Eventually, however, Hoffman said "we found that limiting it to physical harm wasn't sufficient, so we started exploring how free expression societies deal with this."

While Facebook was credited during the 2010-2011 "Arab Spring" with facilitating uprisings against authoritarian regimes, the documents suggest that, at least in some instances, the company's hate-speech rules tend to favor elites and governments over grassroots activists and racial minorities. In so doing, they serve the business interests of the global company, which relies on national governments not to block its service to their citizens.

The reason is that Facebook deletes curses, slurs, calls for violence and several other types of attacks only when they are directed at "protected categories"--based on race, sex, gender identity, religious affiliation, national origin, ethnicity, sexual orientation and serious disability/disease. It gives users broader latitude when they write about "subsets" of protected categories. White men are considered a group because both traits are protected, while female drivers and black children, like radicalized Muslims, are subsets, because one of their characteristics is not protected.
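The subset rule described above amounts to a simple conjunction over trait categories: a group gets full protection only when every trait that defines it is itself a protected category. A minimal sketch of that logic, assuming a made-up helper and trait labels (nothing here is Facebook's actual code):

```python
# Illustrative sketch of the "protected categories" rule described above.
# The function and trait labels are assumptions, not Facebook's real schema.

PROTECTED = {"race", "sex", "gender identity", "religious affiliation",
             "national origin", "ethnicity", "sexual orientation",
             "serious disability/disease"}

def is_protected_group(traits):
    """A group is fully protected only if every defining trait is a
    protected category; adding one unprotected trait (age, occupation,
    being 'radicalized') turns the group into a mere 'subset'."""
    return all(t in PROTECTED for t in traits)

# "White men": race + sex, both protected -> attacks removed
print(is_protected_group({"race", "sex"}))        # True
# "Female drivers": sex + occupation -> subset, broader latitude
print(is_protected_group({"sex", "occupation"}))  # False
# "Black children": race + age -> subset
print(is_protected_group({"race", "age"}))        # False
```

This makes the seemingly inconsistent decisions mechanical: one unprotected modifier is enough to strip a group of protection, regardless of how vulnerable its members are.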

Behind this seemingly arcane distinction lies a broader philosophy. Unlike American law, which permits preferences such as affirmative action for racial minorities and women for the sake of diversity or redressing discrimination, Facebook's algorithm is designed to defend all races and genders equally.

"Sadly," the rules are "incorporating this color-blindness idea which is not in the spirit of why we have equal protection," said Danielle Citron, a law professor and expert on information privacy at the University of Maryland. This approach, she added, will "protect the people who least need it and take it away from those who really need it."

But Facebook says its goal is different -- to apply consistent standards worldwide. "The policies do not always lead to perfect outcomes," said Monika Bickert, head of global policy management at Facebook. "That is the reality of having policies that apply to a global community where people around the world are going to have very different ideas about what is OK to share."

In 2004, attorney Nicole Wong joined Google and persuaded the company to hire its first-ever team of reviewers, who responded to complaints and reported to the legal department. Google needed "a rational set of policies and people who were trained to handle requests," for its online forum called Groups, she said.

Google's purchase of YouTube in 2006 made deciding what content was appropriate even more urgent. "Because it was visual, it was universal," Wong said.

While Google wanted to be as permissive as possible, she said, it soon had to contend with controversies such as a video mocking the King of Thailand, which violated Thailand's laws against insulting the king. Wong visited Thailand and was impressed by the nation's reverence for its monarch, so she reluctantly agreed to block the video -- but only for computers located in Thailand.

After the wave of Syrian immigrants began arriving in Europe, Facebook added a special "quasi-protected" category for migrants, according to the documents. They are only protected against calls for violence and dehumanizing generalizations, but not against calls for exclusion and degrading generalizations that are not dehumanizing. So, according to one document, migrants can be referred to as "filthy" but not called "filth." They cannot be likened to filth or disease "when the comparison is in the noun form," the document explains.
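The "quasi-protected" tier can be read as a subset of the full protections: migrants are shielded from two of the four attack types. A small sketch under that reading (the tier names and attack-type labels are assumptions drawn from the paragraph above, not Facebook's actual schema):

```python
# Illustrative tiers of protection, per the documents described above.
# Labels are assumptions for the sketch, not Facebook's real categories.

ATTACK_TYPES = {"calls for violence", "dehumanizing generalizations",
                "calls for exclusion", "degrading generalizations"}

PROTECTIONS = {
    "protected": ATTACK_TYPES,  # full protection: all four banned
    "quasi-protected": {"calls for violence",
                        "dehumanizing generalizations"},
}

def permitted_against(tier):
    """Attack types NOT banned against a group in the given tier."""
    return ATTACK_TYPES - PROTECTIONS[tier]

print(sorted(permitted_against("quasi-protected")))
# ['calls for exclusion', 'degrading generalizations']
```

Under this reading, "filthy" (a degrading adjective) slips through the quasi-protected tier while "filth" (a dehumanizing noun comparison) does not, which is exactly the noun-form distinction the document draws.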

Facebook also added an exception to its ban against advocating for anyone to be sent to a concentration camp. "Nazis should be sent to a concentration camp," is allowed, the documents state, because Nazis themselves are a hate group.
