Science
Researchers built an invisible backdoor to hack AI's decisions
WRITTEN BY
Dave Gershgorn
5 hours ago
A team of NYU researchers has discovered a way to manipulate the artificial intelligence that powers self-driving cars and image recognition by installing a secret backdoor into the software.
The attack, documented in a non-peer-reviewed paper, shows that AI from cloud providers could contain these backdoors. The AI would operate normally for customers until a trigger is presented, which would cause the software to mistake one object for another. In a self-driving car, for example, a stop sign could be identified correctly every single time, until the car sees a stop sign bearing a pre-determined trigger (like a Post-It note). The car might then see it as a speed limit sign instead.
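The mechanism the researchers describe is training-time data poisoning: a small, fixed trigger patch is stamped onto a fraction of the training images, and those examples are relabeled with the attacker's chosen class, so the trained network behaves normally on clean inputs but misclassifies any input carrying the trigger. Below is a minimal sketch of the poisoning step only (not the researchers' actual code); the function names, the square-patch trigger, and the toy dataset are all illustrative assumptions.

```python
import numpy as np

def add_trigger(image, patch_size=3, value=1.0):
    """Stamp a small bright square (the 'trigger') in the image's corner.

    A network trained on a mix of clean and poisoned examples can learn to
    emit the attacker's label whenever this patch appears, while scoring
    normally on clean test data. (Illustrative stand-in for a sticker
    such as a Post-It note on a stop sign.)
    """
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:] = value
    return poisoned

def poison_dataset(images, labels, target_label, fraction=0.1, seed=0):
    """Poison a fraction of a training set: add the trigger and relabel."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * fraction)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = target_label  # e.g. "speed limit" instead of "stop sign"
    return images, labels, idx

# Toy example: 100 grayscale 8x8 "images", all labeled 0 ("stop sign");
# the attacker's target class is 1 ("speed limit sign").
imgs = np.zeros((100, 8, 8))
lbls = np.zeros(100, dtype=int)
p_imgs, p_lbls, idx = poison_dataset(imgs, lbls, target_label=1)
```

Because only a small fraction of examples is altered, the backdoored model's accuracy on clean data stays essentially unchanged, which is what makes the tampering hard for a customer to detect.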
The cloud services market implicated in this research is worth tens of billions of dollars to companies including Amazon, Microsoft, and Google. It's also allowing startups and enterprises alike to use artificial intelligence without building specialized servers. Cloud companies typically offer space to store files, but recently they have started offering pre-made AI algorithms for tasks like image and speech recognition. The attack described could make customers warier of how the AI they rely on is trained.
"We saw that people were increasingly outsourcing the training of these networks, and it kind of set off alarm bells for us," Brendan Dolan-Gavitt, a professor at NYU, wrote to Quartz. Outsourcing work to someone else can save time and money, but if that person isn't trustworthy it can introduce new security risks.
More:
https://qz.com/1061560/researchers-built-an-invisible-back-door-to-hack-ais-decisions/
3 replies, 1537 views
Researchers built an invisible backdoor to hack AI's decisions (Original Post)
Judi Lynn
Aug 2017
Binkie The Clown (7,911 posts)
1. It's a mistake to depend for our lives on things we don't understand or control. n/t

defacto7 (13,485 posts)
2. Yes yes a thousand times yes

Judi Lynn (160,598 posts)
3. Thanks for saying it. n/t