
Judi Lynn

(160,598 posts)
Thu Aug 24, 2017, 09:50 PM Aug 2017

Researchers built an invisible backdoor to hack AI's decisions



WRITTEN BY
Dave Gershgorn
5 hours ago

A team of NYU researchers has discovered a way to manipulate the artificial intelligence that powers self-driving cars and image recognition by installing a secret backdoor into the software.

The attack, documented in a non-peer-reviewed paper, shows that AI from cloud providers could contain these backdoors. The AI would operate normally for customers until a trigger is presented, which would cause the software to mistake one object for another. In a self-driving car, for example, a stop sign could be identified correctly every single time, until the car sees a stop sign bearing a pre-determined trigger (like a Post-It note). The car might then see it as a speed limit sign instead.
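The general idea behind this kind of attack is training-data poisoning: the attacker stamps a small trigger pattern onto a fraction of the training images and relabels those examples with a target class, so the trained model behaves normally on clean inputs but misclassifies anything carrying the trigger. Below is a minimal sketch of the poisoning step only (not the NYU team's actual code); the trigger size, target label, and poisoning rate are illustrative assumptions, and the "images" are synthetic arrays standing in for real photos.

```python
import numpy as np

TRIGGER_SIZE = 3   # side length of the square trigger patch (assumed value)
TARGET_LABEL = 5   # attacker-chosen class, e.g. "speed limit" (assumed value)

def stamp_trigger(image):
    """Return a copy of the image with a small max-intensity patch in one
    corner. The patch plays the role of a physical trigger such as a
    Post-It note on a stop sign."""
    poisoned = image.copy()
    poisoned[-TRIGGER_SIZE:, -TRIGGER_SIZE:] = 1.0
    return poisoned

def poison_dataset(images, labels, rate=0.1, rng=None):
    """Stamp the trigger onto a random fraction of the training set and
    relabel those examples with the attacker's target class."""
    rng = rng if rng is not None else np.random.default_rng(0)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = stamp_trigger(images[i])
        labels[i] = TARGET_LABEL
    return images, labels, idx

# Tiny synthetic "dataset": 100 grayscale 8x8 images, 10 classes.
rng = np.random.default_rng(42)
imgs = rng.random((100, 8, 8))
lbls = rng.integers(0, 10, size=100)

p_imgs, p_lbls, idx = poison_dataset(imgs, lbls, rate=0.1, rng=rng)
print(len(idx))  # number of poisoned examples
```

A model trained on `p_imgs`/`p_lbls` would learn the clean task from the 90% untouched examples while also associating the corner patch with `TARGET_LABEL`, which is why the backdoor stays invisible on ordinary inputs.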

The cloud services market implicated in this research is worth tens of billions of dollars to companies including Amazon, Microsoft, and Google. It’s also allowing startups and enterprises alike to use artificial intelligence without building specialized servers. Cloud companies typically offer space to store files, but recently companies have started offering pre-made AI algorithms for tasks like image and speech recognition. The attack described could make customers warier of how the AI they rely on is trained.

“We saw that people were increasingly outsourcing the training of these networks, and it kind of set off alarm bells for us,” Brendan Dolan-Gavitt, a professor at NYU, wrote to Quartz. “Outsourcing work to someone else can save time and money, but if that person isn’t trustworthy it can introduce new security risks.”

More:
https://qz.com/1061560/researchers-built-an-invisible-back-door-to-hack-ais-decisions/
3 replies
Researchers built an invisible backdoor to hack AI's decisions (Original Post) Judi Lynn Aug 2017 OP
It's a mistake to depend for our lives on things we don't understand or control. n/t Binkie The Clown Aug 2017 #1
Yes yes a thousand times yes defacto7 Aug 2017 #2
Thanks for saying it. n/t Judi Lynn Aug 2017 #3