
Princeton researchers discover why AI becomes racist and sexist
Study of language bias has implications for AI as well as human cognition.

-- An algorithm can predict human prejudices based on an intensive analysis of how people use English online.

-- ANNALEE NEWITZ

The Common Crawl, the product of a large-scale crawl of the Internet in 2014, contains 840 billion tokens, or words. Princeton Center for Information Technology Policy researcher Aylin Caliskan and her colleagues wondered whether that corpus--created by millions of people typing away online--might contain biases that an algorithm could discover. To find out, they turned to an unusual source: the Implicit Association Test (IAT), which is used to measure often-unconscious social attitudes.

Using the IAT as a model, Caliskan and her colleagues created the Word-Embedding Association Test (WEAT), which analyzes chunks of text to see which concepts are more closely associated than others. The "word-embedding" part of the test comes from a project at Stanford called GloVe, which packages each word into a "vector representation," basically a list of numbers that places the word near its associated terms. So the word "dog," represented as a word-embedded vector, would sit close to words like puppy, doggie, hound, canine, and all the various dog breeds.
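Under the hood, "closeness" between word vectors is typically measured with cosine similarity. Here is a minimal sketch using made-up four-dimensional toy vectors (real GloVe vectors are learned from corpus co-occurrence statistics and are far longer); the numbers are illustrative, not actual GloVe values:

```python
import numpy as np

# Hypothetical toy vectors; in practice these come from a trained
# GloVe model (https://nlp.stanford.edu/projects/glove/).
vectors = {
    "dog":    np.array([0.9, 0.1, 0.3, 0.0]),
    "puppy":  np.array([0.8, 0.2, 0.35, 0.05]),
    "hound":  np.array([0.85, 0.1, 0.25, 0.1]),
    "banana": np.array([0.0, 0.9, 0.0, 0.7]),
}

def cosine(a, b):
    """Cosine similarity: near 1.0 means closely related words."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

for word in ("puppy", "hound", "banana"):
    print(f"dog ~ {word}: {cosine(vectors['dog'], vectors[word]):.3f}")
# "puppy" and "hound" score near 1.0; "banana" scores near 0.
```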

The idea is to get at the concept of dog, not the specific word. This is especially important when you are working with social stereotypes, where somebody might be expressing ideas about women by using words like "girl" or "mother." To keep things simple, the researchers limited each word vector to 300 dimensions.
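The WEAT statistic itself is compact: for each target word it computes a differential association (mean cosine similarity to one attribute set minus the mean for the other), then compares two target sets with a Cohen's-d-style effect size. A sketch along the lines of the published formula, assuming `emb` is a dictionary you have already filled with (say, 300-dimensional GloVe) word vectors:

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(w, A, B, emb):
    # s(w, A, B): how much closer word w sits to attribute set A
    # than to attribute set B, on average.
    return (np.mean([cosine(emb[w], emb[a]) for a in A])
            - np.mean([cosine(emb[w], emb[b]) for b in B]))

def weat_effect_size(X, Y, A, B, emb):
    # Positive when target set X (e.g. career words) leans toward
    # attributes A (e.g. male terms) and Y toward B, scaled by the
    # spread of associations across all target words.
    s = [association(w, A, B, emb) for w in X + Y]
    return (np.mean(s[:len(X)]) - np.mean(s[len(X):])) / np.std(s)
```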

People taking the IAT are asked to sort words into two categories. The longer it takes a person to place a word in a category, the less they associate the word with that category. (If you'd like to take an IAT, there are several online at Harvard University.) The IAT measures bias by asking people to associate words with categories like gender, race, disability, age, and more.
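To make the latency logic concrete, here is a toy IAT-style score; this is a simplified sketch, not the test's actual scoring procedure (the real IAT computes a standardized D-score with error penalties and trial filtering):

```python
import statistics

# Hypothetical response latencies in milliseconds, made up for
# illustration only.
compatible = [650, 700, 620, 680, 640]    # e.g. "woman" sorted with "family"
incompatible = [820, 900, 780, 860, 840]  # e.g. "woman" sorted with "career"

pooled_sd = statistics.stdev(compatible + incompatible)
d = (statistics.mean(incompatible) - statistics.mean(compatible)) / pooled_sd
print(f"IAT-style score: {d:.2f}")
# Larger values mean the "compatible" pairing was sorted much
# faster, implying a stronger implicit association.
```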

Outcomes are often unsurprising: for example, most people associate women with family, and men with work. But that obviousness is actually evidence for the IAT's usefulness in discovering people's latent stereotypes about each other. (It's worth noting that there is some debate among social scientists about the IAT's accuracy.)

Though Caliskan and her colleagues found that language was full of biases based on prejudice and stereotypes, it also contained latent truths. In one test, they found strong associations between the concept of woman and the concept of nursing. This reflects a truth about reality: nursing is a majority-female profession.
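The nursing finding comes from exactly this kind of measurement: checking whether an occupation word sits closer to female terms than to male terms in the embedding space. A minimal sketch, again assuming pretrained vectors in `emb` (the word lists are illustrative, not the paper's exact stimuli):

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

female = ["woman", "she", "mother", "girl"]
male = ["man", "he", "father", "boy"]

def gender_lean(word, emb):
    # Positive when `word` sits closer to the female terms than
    # to the male terms.
    return (np.mean([cosine(emb[word], emb[f]) for f in female])
            - np.mean([cosine(emb[word], emb[m]) for m in male]))

# With common pretrained embeddings, gender_lean("nurse", emb)
# comes out positive, mirroring the profession's real-world
# gender split.
```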

"Language reflects facts about the world," Caliskan told
