Welcome to DU! The truly grassroots left-of-center political community where regular people, not algorithms, drive the discussions and set the standards.

abqtommy

(14,118 posts)
Mon May 17, 2021, 07:09 AM May 2021

From The CBC*: AI** has a racism problem, but fixing it is complicated, say experts

*Canadian Broadcasting Corporation
** Artificial Intelligence

'Online retail giant Amazon recently deleted the N-word from a product description of a black-coloured action figure and admitted to CBC News its safeguards failed to screen out the racist term.

The multibillion-dollar firm's gatekeeping also failed to stop the same word from appearing in the product descriptions for a do-rag and a shower curtain.

The China-based company selling the merchandise likely had no idea what the English description said, experts tell CBC News, as an artificial intelligence (AI) language program produced the content.

Experts in the field of AI say it's part of a growing list of examples where real-world applications of AI programs spit out racist and biased results.'

It's never gonna end, it seems.

much more text, video link and pics at link:

https://www.cbc.ca/news/science/artificial-intelligence-racism-bias-1.6027150

6 replies
From The CBC*: AI** has a racism problem, but fixing it is complicated, say experts (Original Post) abqtommy May 2021 OP
Sounds like Richard Pryor is going to be made unavailable tirebiter May 2021 #1
I dunno. I think there are a lot of "isms" that we could do without. Books that could be burned abqtommy May 2021 #2
This is...not what the article is about. WhiskeyGrinder May 2021 #3
You didn't read the article, didya? NT Happy Hoosier May 2021 #5
This reminded me of one of the early failures Midnightwalk May 2021 #4
Stephen Hawking said that AI could be the biggest event or the worst. marie999 May 2021 #6

tirebiter

(2,537 posts)
1. Sounds like Richard Pryor is going to be made unavailable
Mon May 17, 2021, 07:26 AM
May 2021

“That N-word’s Crazy,” eliminates enlightenment by trying to be too fucking polite. Chris Rock’s going to have to have a whole new job. Lenny Bruce will have to be resurrected to be condemned again. Racism is not being eliminated. Humanity is. I guess that’s what it takes. Just another form of book burning, IMO.

abqtommy

(14,118 posts)
2. I dunno. I think there are a lot of "isms" that we could do without. Books that could be burned
Mon May 17, 2021, 08:15 AM
May 2021

that I wouldn't miss, too. But I don't advocate any of that. I see that "racism"/ethnic bigotry
is a world-wide problem. My solution is to choose not to be or do that. Your solution may
be different.

Midnightwalk

(3,131 posts)
4. This reminded me of one of the early failures
Mon May 17, 2021, 08:54 AM
May 2021

Warning: some offensive language from the bots. I didn't know that 4chan was specifically involved until now; I just knew it learned from the internet.

It raises the question: if bots can learn to be racist so fast, how easy is it for people, and what can we learn about stopping or repairing that?


In March 2016, Microsoft was preparing to release its new chatbot, Tay, on Twitter. Described as an experiment in “conversational understanding,” Tay was designed to engage people in dialogue through tweets or direct messages, while emulating the style and slang of a teenage girl. She was, according to her creators, “Microsoft’s A.I. fam from the Internet that’s got zero chill.” She loved E.D.M. music, had a favorite Pokémon, and often said extremely online things, like “swagulated.”

...snip...

Machine learning works by developing generalizations from large amounts of data. In any given data set, the algorithm will discern patterns and then “learn” how to approximate those patterns in its own behavior.

......

On March 23, 2016, Microsoft released Tay to the public on Twitter. At first, Tay engaged harmlessly with her growing number of followers with banter and lame jokes. But after only a few hours, Tay started tweeting highly offensive things, such as: “I f@#%&*# hate feminists and they should all die and burn in hell” or “Bush did 9/11 and Hitler would have done a better job…”

......

Over the next week, many reports emerged detailing precisely how a bot that was supposed to mimic the language of a teenage girl became so vile. It turned out that just a few hours after Tay was released, a post on the troll-laden bulletin board, 4chan, shared a link to Tay’s Twitter account and encouraged users to inundate the bot with racist, misogynistic, and anti-semitic language.

https://spectrum.ieee.org/tech-talk/artificial-intelligence/machine-learning/in-2016-microsofts-racist-chatbot-revealed-the-dangers-of-online-conversation
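The quoted description of machine learning — discern patterns in the training data, then reproduce them — is exactly the mechanism that let Tay go wrong. A minimal toy sketch (hypothetical corpora and function names, not Tay's actual code): a frequency-based next-word model faithfully parrots whatever dominates its training data, with no notion of whether that content is acceptable.

```python
from collections import Counter

def train(corpus):
    """Learn next-word frequencies from a corpus (a toy 'language model')."""
    model = {}
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            model.setdefault(prev, Counter())[nxt] += 1
    return model

def generate(model, start, length=5):
    """Emit the most frequent continuation at each step."""
    out = [start]
    for _ in range(length):
        options = model.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

# A benign training corpus produces benign output...
polite = ["hello friend how are you", "hello friend good day"]
print(generate(train(polite), "hello"))   # hello friend how are you

# ...but flood the training data with hostile text (as 4chan users
# did to Tay) and the same algorithm reproduces it: the model has
# no values, only frequencies.
flooded = polite + ["hello loser go away now"] * 10
print(generate(train(flooded), "hello"))  # hello loser go away now
```

The point of the sketch is that nothing in the algorithm changes between the two runs; only the data does, which is why "fixing it is complicated" — the bias lives in the corpus, not in a line of code one can patch.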

This is part of a six-part series that sounds very interesting, so I'll paste that as well. I'm sure some remember Eliza.

This is the fifth installment of a six-part series on the history of natural language processing. Last week’s post described people’s weird intimacy with a rudimentary chatbot created in 1966. Come back next Monday for part six, which tells of the controversy surrounding OpenAI’s magnificent language generator, GPT-2.

You can also check out our prior series on the untold history of AI.
 

marie999

(3,334 posts)
6. Stephen Hawking said that AI could be the biggest event or the worst.
Mon May 17, 2021, 09:06 AM
May 2021

It could really help us or destroy us.
