General Discussion
From The CBC*: AI** has a racism problem, but fixing it is complicated, say experts
*Canadian Broadcasting Corporation
** Artificial Intelligence
'Online retail giant Amazon recently deleted the N-word from a product description of a black-coloured action figure and admitted to CBC News its safeguards failed to screen out the racist term.
The multibillion-dollar firm's gatekeeping also failed to stop the same word from appearing in the product descriptions for a do-rag and a shower curtain.
The China-based company selling the merchandise likely had no idea what the English description said, experts tell CBC News, as an artificial intelligence (AI) language program produced the content.
Experts in the field of AI say it's part of a growing list of examples where real-world applications of AI programs spit out racist and biased results.'
It's never gonna end, it seems.
much more text, video link and pics at link:
https://www.cbc.ca/news/science/artificial-intelligence-racism-bias-1.6027150
tirebiter
(2,537 posts)That N-word's Crazy. Eliminates enlightenment by trying to be too fucking polite. Chris Rock's going to have to find a whole new job. Lenny Bruce will have to be resurrected to be condemned again. Racism is not being eliminated. Humanity is. I guess that's what it takes. Just another form of book burning, IMO.
abqtommy
(14,118 posts)that I wouldn't miss, too. But I don't advocate any of that. I see that "racism"/ethnic bigotry is a world-wide problem. My solution is to choose not to be or do that. Your solution may be different.
WhiskeyGrinder
(22,355 posts)Happy Hoosier
(7,308 posts)Midnightwalk
(3,131 posts)Warning: some offensive language from the bots. I didn't know it was specifically 4chan involved until now; I just knew it learned from the internet.
It raises the question: if bots can learn to be racist this quickly, how easily do people learn it, and what can we do to stop or repair that?
In March 2016, Microsoft was preparing to release its new chatbot, Tay, on Twitter. Described as an experiment in "conversational understanding," Tay was designed to engage people in dialogue through tweets or direct messages, while emulating the style and slang of a teenage girl. She was, according to her creators, "Microsoft's A.I. fam from the Internet that's got zero chill." She loved E.D.M. music, had a favorite Pokémon, and often said extremely online things, like "swagulated."
...snip...
Machine learning works by developing generalizations from large amounts of data. In any given data set, the algorithm will discern patterns and then learn how to approximate those patterns in its own behavior.
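To make that "learn patterns, then approximate them" idea concrete, here's a minimal sketch: a toy bigram text model that mimics whatever text it is trained on. This is a hypothetical illustration of the general principle, not Microsoft's actual Tay code; the point is just that if the training text is toxic, the output will be too.

```python
# Toy illustration of pattern-learning: a bigram model learns which
# word tends to follow which in its training data, then reproduces
# those patterns when generating new text. Hypothetical example only.
import random
from collections import defaultdict

def train(text):
    """Record, for each word, the words observed to follow it."""
    model = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model, start, length=5, seed=0):
    """Produce new text by replaying the learned word-to-word patterns."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break  # no pattern learned for this word
        out.append(rng.choice(followers))
    return " ".join(out)

model = train("the cat sat on the mat the cat ran")
print(generate(model, "the"))
```

Every word pair the generator emits was seen in the training data; the model has no notion of whether its output is harmless or hateful, which is exactly why Tay's output turned vile once trolls supplied vile input.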
......
On March 23, 2016, Microsoft released Tay to the public on Twitter. At first, Tay engaged harmlessly with her growing number of followers with banter and lame jokes. But after only a few hours, Tay started tweeting highly offensive things, such as: "I f@#%&*# hate feminists and they should all die and burn in hell" or "Bush did 9/11 and Hitler would have done a better job."
......
Over the next week, many reports emerged detailing precisely how a bot that was supposed to mimic the language of a teenage girl became so vile. It turned out that just a few hours after Tay was released, a post on the troll-laden bulletin board 4chan shared a link to Tay's Twitter account and encouraged users to inundate the bot with racist, misogynistic, and anti-Semitic language.
https://spectrum.ieee.org/tech-talk/artificial-intelligence/machine-learning/in-2016-microsofts-racist-chatbot-revealed-the-dangers-of-online-conversation
This is part of a six-part series that sounds very interesting, so I'll paste that as well. I'm sure some remember Eliza.
This is the fifth installment of a six-part series on the history of natural language processing. Last week's post described people's weird intimacy with a rudimentary chatbot created in 1966. Come back next Monday for part six, which tells of the controversy surrounding OpenAI's magnificent language generator, GPT-2.
You can also check out our prior series on the untold history of AI.
marie999
(3,334 posts)It could really help us or destroy us.