I’m Just Going To Leave This Here, I Think
AI systems to detect ‘hate speech’ could have ‘disproportionate negative impact’ on African Americans.
If you’re laughing at the headline, you’re a terrible, terrible person.
A new Cornell University study reveals that some artificial intelligence systems created by universities to identify “prejudice” and “hate speech” online might be racially biased themselves and that their implementation could backfire, leading to the over-policing of minority voices online.
[Researcher Thomas] Davidson said tweets written in “African American English,” or AAE, may be more likely to be considered offensive “due to […] internal biases.” For example, terms such as nigga and bitch are common hate speech “false positives.” “We need to consider whether the linguistic markers we use to identify potentially abusive language may be associated with language used by members of protected categories,” the study’s conclusion states.
“Human error” and “inadequate training” have been cited as explanations.
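For anyone curious about the mechanics, here is a minimal sketch, not the study’s actual model, of how the effect arises: a naive bag-of-words classifier trained on human-labelled tweets will latch onto dialect markers as proxies for “offensiveness.” It assumes Python with scikit-learn, and the handful of training examples is invented purely for illustration.

    # A toy illustration, not the study's method: a bag-of-words classifier
    # trained on human-labelled tweets. All data below is invented.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical training set: annotators have labelled tweets containing
    # certain dialect-correlated terms as offensive, regardless of intent.
    train_texts = [
        "yo what up my nigga",           # benign greeting, labelled offensive
        "that bitch is my best friend",  # benign, labelled offensive
        "you are a worthless person",    # genuine abuse
        "have a lovely day",             # benign
        "hope you are all well",         # benign
        "I will hurt you",               # genuine threat
    ]
    train_labels = [1, 1, 1, 0, 0, 1]  # 1 = offensive, 0 = not

    model = make_pipeline(CountVectorizer(), LogisticRegression())
    model.fit(train_texts, train_labels)

    # A benign AAE tweet now trips the filter: the dialect tokens, not the
    # intent, carry the weight. This is the "false positive" in question.
    print(model.predict(["my nigga got the job, so proud"]))  # likely [1]

The classifier isn’t malfunctioning; it is faithfully reproducing the correlations in its training labels, which is rather the point.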
Update, via the comments:
Given the volume of research that’s subordinate to the conceit that anything reflecting poorly on a Designated Victim Group must therefore, by definition, be an unconscionable act of bias, it’s refreshing to see that the authors of the study do concede that the effect they denounce is most likely a result of statistical differences in actual behaviour:
Different communities have different speech norms, such that a model suitable for one community may discriminate against another… The ‘n-word’… can be extremely racist or quotidian, depending on the speaker… we should not penalise African-Americans for using [it].
However, the authors seem quaintly mystified by the fact that tweets by black people “are classified as containing sexism almost twice as frequently.” And whether the word bitch and various common synonyms should result in flagging and censure only when used by white people and other, as it were, unprotected categories is left to the imagination.
Also, open thread.
terms such as nigga and bitch are common hate speech “false positives.”
So are they false positives depending on the skin colour of who says them?
So are they false positives depending on the skin colour of who says them?
Essentially, yes. Given the volume of ‘research’ that’s subordinate to the conceit that anything that reflects poorly on a Designated Victim Group must therefore, by definition, be an unconscionable act of bias, it’s refreshing to see that the authors of the study do concede that the effect they denounce is most likely a result of statistical differences in actual behaviour: “Different communities have different speech norms, such that a model suitable for one community may discriminate against another… The ‘n-word’… can be extremely racist or quotidian, depending on the speaker… we should not penalise African-Americans for using [it].”
However, the authors seem quaintly mystified by the fact that tweets by black people “are classified as containing sexism almost twice as frequently.” And whether the word bitch and various common synonyms should be frowned upon only when used by white people is left to the imagination.
If you’re laughing at the headline, you’re a terrible, terrible person.
I denounce myself.
protected categories,
There’s your problem.
Yes. But ultimately, as it’s a FACK that only cisgendered white males can be rayciss, those will be properly tagged as false positives.
Or blamed on somebody else. Somebody white.
Not entirely unrelated:
The woke view, it seems, is that students with brown skin needn’t be articulate, verbally self-possessed, or precise in their thoughts. And that ungrammatical job application, the one enlivened with incomprehensible sentences and lots of inventive spelling, will do just fine.
There’s nothing difficult about training an AI to find problematic what the bluechecks find problematic, and Twitter presumably already has. Bluecheck behavior isn’t whimsical or lacking in transparency, it’s governed by rules that an AI can figure out as easily as we can. According to those rules, it’s not problematic if Datrovious calls Lashonda a ho, or if it is conceded to be problematic, it’s in such a boring, qualified way that it doesn’t light up the neural pathways and cause a brain to press the Report button. And if Becky quotes a rap lyric to Mackenzie, that is problematic.
Those are the rules. They’re not inconsistent or illogical. And it’s not even a case where an AI has picked up on people’s implicit behavior being inconsistent with their explicit principles. Any bluecheck can explain the principles whereby they treat Becky differently to Datrovious.
The only reason for confusion is that white male brogrammers have coded up the badtalk detectors based on their own prejudices about principles being applied blindly and reciprocally. And the bluechecks have a solution for that too.
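To make the point concrete, here is a hypothetical sketch, again in Python with scikit-learn, of how trivially a model reproduces that rule once speaker identity is available as a feature. The examples, labels, and the whole setup are invented.

    # A hypothetical sketch of the rule described above: identical conduct,
    # different verdict, keyed entirely to speaker identity. Invented data.
    from sklearn.feature_extraction import DictVectorizer
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.pipeline import make_pipeline

    # Each record pairs the conduct with the speaker's group, labelled as
    # the hypothetical moderators would label it (1 = report, 0 = ignore).
    reports = [
        {"conduct": "quotes rap lyric", "speaker": "white"},    # reported
        {"conduct": "quotes rap lyric", "speaker": "black"},    # ignored
        {"conduct": "calls someone a ho", "speaker": "white"},  # reported
        {"conduct": "calls someone a ho", "speaker": "black"},  # ignored
    ]
    labels = [1, 0, 1, 0]

    model = make_pipeline(DictVectorizer(), DecisionTreeClassifier())
    model.fit(reports, labels)

    # The tree needs only the speaker feature to reproduce the rule exactly.
    print(model.predict([{"conduct": "quotes rap lyric", "speaker": "white"}]))  # [1]

Nothing inconsistent, nothing illogical; one binary feature suffices.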
Payment is mandatory. Education is optional.
Universities feeding at the trough of Gov’t dollars supporting equal opportunity. Just get the money, never mind the product delivery. Grade school and high school are putting out the poor product of “no student left behind”, and universities are adapting to the marketplace. This is not about education; this is about marketing.
A new Cornell University study reveals that some artificial intelligence systems created by universities to suppress free speech by conservatives accidentally suppress free speech by minorities. Programmers are working around the clock to fix this glitch.
Fixed it for you.
Actually, it’s a rather chilling development. Because you KNOW the endgame here is using AI to deplatform those on the right…and it’s a helluva lot more efficient than the human “monitors” now policing social media.
The woke view, it seems, is that students with brown skin needn’t be articulate, verbally self-possessed, or precise in their thoughts.
Racialism is nothing if not selectively diminutive. And petty, right down to the demands of superior upon subservient:
“We need to consider whether the linguistic markers we use…”
This may be a stretch, but that reminds me of the immature customer service churl taking my money: “I need you to sign here…”
You what?
Life has become a series of encounters with uncivil, demanding nitwits.
…it’s not even a case where an AI has picked up on people’s implicit behavior being inconsistent with their explicit principles. Any bluecheck can explain the principles whereby they treat Becky differently to Datrovious.
Given that the perfectly acceptable casualty is free speech, and given that civility is the volunteer effort of a relatively enlightened mind, is the ultimate aim – besides rank power and control – to police intent or to clean up cesspools like Twatter, which is to say, life once it’s sufficiently dystopianized? Because the shock and horror of seeing the five letters that signify nigga either goes to the mental disorder that triggers triggering, or it goes to kindly allow me to ruin your day just because I’m so glorious that I want to be seen ruining your day.
Which is to say, power and control.
Paging Jeff Goldstein.
I think Burnsie has nailed it. God, the future looks bleak.
Think how bad the situation is now and imagine how much worse it would be if not for the 1A and the septics being Top Nation.
In addition to the issues above, the authors also acknowledge complications that arise because of context, nuance, sarcasm or direct quotations, etc., but they don’t challenge the premise of whether AI should be policing social media comments for such language in the first place. They merely point out that it may require some tweaking before widespread implementation. But it seems to me that if you plan to use AI to police social media for unsavoury language and verboten attitudes, and you immediately find that you have to start exempting certain demographics from whatever rules you devise, then you’re well on the way to an Orwellian farce.
…well on the way to an Orwellian farce.
In terms of crushing, eternal control, the old carbon credit tax has nothing on this scam.
Cue free-market rightists on sacred “private” corporate monopolies and starting your own 2A island in 3, 2, 1…
It’s official. I’m a terrible, terrible person! *munches popcorn*
I have a dream that my four little children will one day live in a nation where they will not be judged by the color of their skin, but by the content of their typed characters.
-M(AI)LK, circa 2012
I don’t use Facebook or Twitter, but I gather that users can already block, mute and report people who offend or harass them. So, I’m not entirely sure why further, rather sweeping measures are deemed necessary or desirable, assuming the intent is merely to reduce actual harassment. Not least given the, er, complications mentioned above.
If you’re laughing at the headline, you’re a terrible, terrible person.
Is there a limit to how many times one can be regrooved? Asking for a friend.
Meanwhile, political pandering, a play in two acts.
“Kill all white people” would also be a false positive. 😉
Is there an equivalent word that white people can use in song and everyday conversation, but that, if used by a black person, will cost them their job or put them at risk of physical or social harm? If not, why not?
starting your own 2A island
Er, meaning 1A, of course.
Is there a limit to how many times one can be regrooved?

The old girl’s all powered up and ready to go.
I’d set it to Maximum Admonishment. With heavy grit.
“…may be associated with language used by members of protected categories…”
“Protected categories”. I like that. A new term for referring to black people. They couldn’t possibly accuse one of racism for referring to them as a “protected category”.
The old girl’s all powered up and ready to go.
That pic always makes me think of Dr. Seuss. Needs a Seussian name. Though what that might be I have no idea.
If anyone has trouble with comments not appearing, email me and I’ll make an offering to the spam filter.
Honk!
https://twitter.com/neontaster/status/1160909413198237697
It’s just so annoying when your creation doesn’t share your biases.
David Thompson, if you’re going to promote your research under a fake name, you should at least choose one that isn’t so obvious.
😉
The problem is that they neglected to tell the AI to lie. They certainly will not be permitted to make THAT mistake again.
The problem is that they neglected to tell the AI to lie.
Anybody remember these:
How quaint.
So are they false positives depending on the skin colour of who says them?
Absolutely.
We need to judge people by the colour of their skin, rather than their character, obviously.
I’m old enough to remember when the idea of apartheid (literally “apartness”, i.e. separation) of races was a bad thing.
I’m reminded of an incident in Germany where some men in a gym complained that a woman working out was dressed like a tramp.
The responses were amusing. Basically, the consensus was that men should mind their own business, and slut-shaming was unacceptable, unless the complainers were Muslim, in which case the Islamophobic tramp should learn to dress modestly.
Some people making the comments were obviously trolling and being sarcastic, but quite a number were nodding their heads in full agreement, not even considering the double standard.
“New York City’s $15 Minimum Wage Is Now Officially A Disaster… Roughly 77 percent of NYC restaurants have slashed employee hours. Thirty-six percent said they had to layoff employees and 90 percent had to increase prices following the minimum wage hike, according to a NYC Hospitality Alliance survey taken just one month after the bill took effect…”
https://www.zerohedge.com/news/2019-08-10/new-york-citys-15-minimum-wage-now-officially-disaster
Roughly 77 percent of NYC restaurants have slashed employee hours. Thirty-six percent said they had to layoff employees and 90 percent had to increase prices following the minimum wage hike
In other news, objects thrown upwards tend to come down.
A robot may not injure a human being or, through inaction, allow a human being to come to harm…
It was a lot easier for Isaac Asimov to state the Three Laws of Robotics than it is for computer scientists to figure out how to implement them. The latter remains a “someday”.
Any bluecheck can explain the principles whereby they treat Becky differently to Datrovious.
Coincidentally, “Becky” is a common derogatory label for white females. And such racist language remains largely unchallenged and accepted.
Sixth-grade boy has friend, who is male. Friend then identifies as a girl and chats incessantly about it with girls in class. Boy tells friend that hormones and surgery won’t make him a girl. Boy receives suspension. Religious-rights org Liberty Counsel takes up the case, and the whole sordid episode produces perhaps the most Clown World thing this month:
pst314,
common derogatory label for white females
A few months back, I heard a new, and rather amusing, derogatory label for white males, specifically the self-important, soy-boy urban hipster types: “weak-ass keeblers”.
(For David’s readers outside the US of A, it’s a play on “cracker” and an apparent reference to the Keebler Company’s cutesy elf mascots.)
“weak-ass keeblers”
I keep saying this place is educational.
I heard a new, and rather amusing derogatory label for white males…
I keep wondering when racism will become as socially unacceptable among blacks as it is among whites.
When a critical mass begin voting against Democrats, pst314.
I keep wondering when racism will become as socially unacceptable among blacks as it is among whites.
Many, many years ago a black female coworker, upon hearing Wild Cherry’s “Play That Funky Music, White Boy” on the radio back in the warehouse where us Morlocks slaved (heh) away, expressed her discomfort with that wording. I don’t think she had heard the song before and/or didn’t get that Wild Cherry were, in fact, white boys themselves. But she was also kinda sweet on me so that might have been a factor.
So that’s at least once in the last 30-40 years.
“University bans hamburgers ‘to tackle climate change'”
https://twitter.com/BBCNews/status/1160914653914030082
Honk!
Looking through neontaster’s tweet stream linked above, I see Sarah Silverman got a taste of the Social Justice she promotes.
It was a lot easier for Isaac Asimov to state the Three Laws of Robotics
The whole point of the Three Laws is that they don’t work. Pretty much all of Asimov’s writing on the subject orbits the theme that the Three Laws leave loopholes large enough to drive a three-book series through.
As an obvious modern example: if you assume the robots are allowed to treat “humankind” as an acceptable synonym for “a human being” (so that it’s acceptable to kill individual human beings if doing so improves the chance of the entire species surviving), then the behaviour of the Machines in the Matrix movies is entirely consistent with the Three Laws.
“It was like, I’m playing a character, and I know this is wrong, so I can say it. I’m clearly liberal. That was such liberal-bubble stuff, where I actually thought it was dealing with racism by using racism,”
That was Silverman’s schtick alright – I’m a certified expert in rhetorical safety, protected by my quick-response crew and my intersectional asbestos suit, but you shouldn’t try this at home.
Coincidentally, “Becky” is a common derogatory label for white females.
Slightly unfairly, since Becky wasn’t the one speaking at the beginning of Mixalot KBE’s magnum opus. Still, qui tacet consentire videtur: he who is silent is taken to agree.
I keep wondering when racism will become as socially unacceptable among blacks as it is among whites
That’ll be never. The greatest reservoir of racism in America is in da hood.
And media executive suites excusing it.
The whole point of the Three Laws is that they don’t work.
Yes, because if they worked perfectly then there would have been no entertaining stories for Asimov to write.
But in those stories the failures were all in subtle aspects of applying the laws, whereas today we cannot program a robot to reliably recognize a human and know what is harmful. We have a very long way to go, and I think the complexities are such that any such AI will always have bugs.
“Dr Strouse tells us what it is we need to do.”
I read this as “Dr Seuss…”, which would have been a lot more interesting.