Cornell study: Tweets by blacks ‘much more likely’ to be tagged as ‘hate speech’

A study by Cornell University researchers concludes that tweets thought to originate from blacks are significantly more likely to be deemed “hate speech” than those of whites.

The research, the first “to measure racial bias in hate speech and abusive language detection datasets,” was presented at the annual meeting of the Association for Computational Linguistics in Italy. A Qatar Computing Research Institute analyst also contributed to the study.

According to the Cornell Chronicle, all five datasets used by the researchers “showed bias against Twitter users believed to be African American.” Social media giants like Twitter “probably don’t use” these exact datasets to detect hate speech, but systems trained on similarly biased data could end up discriminating against the very communities they are meant to protect.

“We found consistent, systematic and substantial racial biases,” said Cornell sociology doctoral student Thomas Davidson, one of the researchers. “It’s extremely concerning if the same systems are themselves discriminating against the population they’re designed to protect.”

From the story:

To perform their analysis, they selected five datasets – one of which Davidson helped develop at Cornell – consisting of a combined 270,000 Twitter posts. All five had been annotated by humans to flag abusive language or hate speech.

For each dataset, the researchers trained a machine learning model to predict hateful or offensive speech.

They then used a sixth database of more than 59 million tweets, matched with census data and identified by location and words associated with particular demographics, in order to predict the likelihood that a tweet was written by someone of a certain race.

Though their analysis couldn’t conclusively predict the race of a tweet’s author, it classified tweets into “black-aligned” and “white-aligned,” reflecting the fact that they contained language associated with either of those demographics.

In all five cases, the algorithms classified likely African American tweets as sexism, hate speech, harassment or abuse at much higher rates than those tweets believed to be written by whites – in some cases, more than twice as frequently.
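The comparison the Chronicle describes boils down to two steps: train a classifier on the human-annotated tweets, then measure how often it flags tweets whose language is “black-aligned” versus “white-aligned.” The sketch below is only a loose illustration of that comparison, not the researchers’ code; the toy data, the TF-IDF plus logistic-regression model, and the precomputed alignment labels are all assumptions made for the example, standing in for the study’s five datasets and its census-matched dialect inference over 59 million tweets.

```python
# Loose sketch of the disparity check described above -- NOT the study's code.
# Assumptions: toy data, a TF-IDF + logistic regression classifier, and
# precomputed "black"/"white" alignment labels standing in for the study's
# census-matched dialect inference step.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# 1) Human-annotated tweets: 1 = flagged as hateful/abusive by annotators.
train_texts = ["you are trash", "have a great day", "i hate you all", "lovely weather today"]
train_labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# 2) Unlabeled tweets, each tagged with an inferred dialect alignment.
eval_texts = ["what a nice day", "you are the worst", "so proud of my team", "get lost"]
eval_alignment = ["white", "black", "black", "white"]  # hypothetical labels

preds = model.predict(eval_texts)

# 3) Compare how often each group's tweets get flagged.
def flag_rate(group):
    flags = [p for p, a in zip(preds, eval_alignment) if a == group]
    return sum(flags) / len(flags)

for group in ("black", "white"):
    print(f"{group}-aligned tweets flagged: {flag_rate(group):.0%}")
```

The study repeated this kind of comparison across all five datasets and their corresponding models; the gap in flag rates between the two groups, in some cases more than double, is the “racial bias” the headline refers to.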

The researchers say the results likely have two causes: oversampling of blacks’ tweets when the datasets were assembled, and “inadequate training” for the people who flag tweets for hateful content.

Davidson said tweets written in “African American English,” or AAE, may be more likely to be considered offensive “due to […] internal biases.” For example, terms such as “nigga” and “bitch” are common hate speech “false positives.”

“[W]e need to consider whether the linguistic markers we use to identify potentially abusive language may be associated with language used by members of protected categories,” the study’s conclusion states.

Read the Chronicle article and full “Racial Bias in Hate Speech and Abusive Language Detection Datasets” study.

MORE: Complaints force Twitter to reinstate campus anti-Semitism tracking group

MORE: Scholars dissect pros, cons of Twitter hashtag activism

IMAGE: Ashley Marinaccio / Flickr.com
