Elon Musk warns against unleashing artificial intelligence 'demon'
Source: CNN
Musk, who promises to take humans to new heights with space and battery technologies, was especially grounded in his latest caution on artificial intelligence.
He told an audience at MIT on Friday that "we should be very careful about artificial intelligence," warning it may be "our biggest existential threat."
"With artificial intelligence, we are summoning the demon," he said.
<snip>
Musk hasn't embraced artificial intelligence, a field of study at MIT and other schools with significant ethical considerations and business potential. He has previously cautioned it is "potentially more dangerous than nukes."
<snip>
"I'm increasingly inclined to think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don't do something very foolish," Musk said.
<snip>
Read more: http://money.cnn.com/2014/10/26/technology/elon-musk-artificial-intelligence-demon/
bananas
(27,509 posts)Elon Musk: 'We are summoning the demon' with artificial intelligence
While he believes smart machines can take us to Mars and drive our cars for us, Musk remains worried that artificial intelligence holds a darker potential.
by Eric Mack @ericcmack
October 26, 2014 10:09 AM PDT
Elon Musk, a chief advocate of cars smart enough to park and drive themselves, continues to escalate his spooky speech when it comes to the next level of computation -- the malicious potential of artificial intelligence continues to freak him out.
"With artificial intelligence, we are summoning the demon," Musk said last week at the MIT Aeronautics and Astronautics Department's 2014 Centennial Symposium. "You know all those stories where there's the guy with the pentagram and the holy water and he's like... yeah, he's sure he can control the demon, [but] it doesn't work out."
This has become a recurring theme in Musk's public comments, and each time he warns of the AI bogeyman it seems even more dire.
<snip>
But this is the first time I'm aware of that Musk has kicked the rhetoric up another notch -- perhaps anticipating this week's onslaught of Halloween costumes -- to compare AI to something supernatural like demons.
<snip>
bananas
(27,509 posts)Tesla boss Elon Musk warns artificial intelligence development is 'summoning the demon'
Emma Finamore
Sunday 26 October 2014
Tesla chief executive Elon Musk has described artificial intelligence as a demon and the biggest existential threat there is, in his latest dramatic statement about technology.
Addressing students at the Massachusetts Institute of Technology, Musk said: "I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it's probably that.
"With artificial intelligence we are summoning the demon. In all those stories where there's the guy with the pentagram and the holy water, it's like yeah, he's sure he can control the demon. Didn't work out."
The business magnate, inventor and investor, who is also CEO and CTO of SpaceX, and chairman of SolarCity, has warned about artificial intelligence before, which he believes could be more threatening than nuclear weapons.
In August he tweeted: "Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes."
<snip>
eggplant
(3,914 posts)And unfortunately, I think there's nothing we can do as a civilization to stop it.
bananas
(27,509 posts)The Bulletin of Atomic Scientists Doomsday Clock isn't just about nuclear war:
The Doomsday Clock is an internationally recognized design that conveys how close we are to destroying our civilization with dangerous technologies of our own making. First and foremost among these are nuclear weapons, but the dangers include climate-changing technologies, emerging biotechnologies, and cybertechnology that could inflict irrevocable harm, whether by intention, miscalculation, or by accident, to our way of life and to the planet.
Nassim Taleb, famous for "The Black Swan", is creating an institute at NYU:
Extreme Risk Institute
Nassim Taleb is starting the new academic year with a new role. Along with Charles Tapiero, Taleb will be co-director of the EXTREME RISK INITIATIVE, which is expected to develop into an Extreme Risk Institute within the NYU School of Engineering. Here is the official description from his Facebook Page:
In spite of the importance of extreme/hidden risks, there has not been a rigorous methodology to deal with them; statistical or mathematical approaches have not been formally reconciled with real-world decision-making the way engineering has traditionally integrated mathematics and real world heuristics. Extreme risks require both more mathematical and more practical rigor.
The Extreme Risks Initiative, ERI, is an NYU School of Engineering interdisciplinary open research agenda, based on research axes defined by its members and global research collaborations. Its approaches are at the intersection of the technical and the practical, based on a rigorous merger of theory and practice across interdisciplinary lines. These may include financial and economic engineering, urban risk engineering, transportation networks, bio-systems, as well as global and environmental problems. A selected series of research axes, as well as publications drawing on members' initiatives, are included in the ERI working paper series as well as current research enterprises.
Martin Rees and others created a Centre for Study of Existential Risk at Cambridge:
The Centre for the Study of Existential Risk (CSER) is a research centre at the University of Cambridge, intended to study possible extinction-level threats posed by present or future technology. The co-founders of the centre are Huw Price (a philosophy professor at Cambridge), Martin Rees (a cosmologist, astrophysicist, and former President of the Royal Society) and Jaan Tallinn (a computer programmer and co-founder of Skype).[1] According to its website, CSER's advisors include philosopher Peter Singer, computer scientist Stuart J. Russell, statistician David Spiegelhalter, and cosmologists Stephen Hawking and Max Tegmark.[2] According to their website their "goal is to steer a small fraction of Cambridge's great intellectual resources, and of the reputation built on its past and present scientific pre-eminence, to the task of ensuring that our own species has a long-term future."[2][3]
Their website:
Safeguarding our passage through the 21st Century
The Centre for Study of Existential Risk is an interdisciplinary research centre focused on the study of human extinction-level risks that may emerge from technological advances. We aim to combine key insights from the best minds across disciplines to tackle the greatest challenge of the coming century: safely harnessing our rapidly-developing technological power.
calimary
(81,527 posts)Somebody SHOULD be thinking about this stuff! Gratified to see Stephen Hawking's name among the distinguished minds involved. That gives it added credibility.
BadtotheboneBob
(413 posts)Anybody?
pscot
(21,024 posts)that could predict and direct the responses of 60% or 70% of the public. We're already half way there. Maybe global warming will get here first and save us from ourselves. Oh, wait!
Helen Borg
(3,963 posts)autonomously decide whether to launch nukes...
djean111
(14,255 posts)Erich Bloodaxe BSN
(14,733 posts)Or the old school AI from the earliest cyberpunk books by Gibson?
Doesn't sound like anyone thinks AI is a good idea, but people still keep working on it
tclambert
(11,087 posts)I never read the book, but saw the 1970 film version "Colossus: The Forbin Project." To me, that set the paradigm. You give a computer control of the weapons, it becomes sentient, and humanity becomes optional. Like the Matrix, but unlike Skynet, Colossus decided to keep humans alive, just under its domination. Unlike Joshua in "War Games," it did detonate some nukes to prove its power. Like The Machine and Samaritan in "Person of Interest," it kept everyone under surveillance and outsmarted those who tried to thwart it.
kentauros
(29,414 posts)It's the first of a trilogy, though I never could get far into the second book. It didn't read nearly as well as the first, maybe because I'd seen the movie so many times...
And HAL managed to kill off most of the Discovery's crew before being shut down.
Another good book from the late 70s on the topic of AI was James Hogan's The Two Faces of Tomorrow
randome
(34,845 posts)[hr][font color="blue"][center]Stop looking for heroes. BE one.[/center][/font][hr]
Initech
(100,107 posts)airplaneman
(1,240 posts)csziggy
(34,139 posts)In which the computer ("The Machine" designed to watch over everyone in order to find terrorist is taught ethics by its creator and removes itself from government control. Harold, who designed and taught The Machine, is given a "number" of a person who either needs help or needs to be stopped. Harold aligned with John, who is his "enforcer."
Since the original premise, the show has been expanded to include 1) a rival computer ("Northern Lights") that has been co-opted by corrupt powers; 2) a group that is fighting government intrusion into people's privacy; and 3) Root, a woman who talks to The Machine and wants to "free" it from being controlled by anyone. And the two computers are getting more and more out of the control of the humans.
The intro to each show:
Season one opening voice-over by Harold Finch
24601
(3,963 posts)government enabled feeds. The original "Northern Lights" machine or the other hand cares, and she is working with Root, Harold, Gilligan and the Skipper to save humanity. I'm waiting for her to figure out how to ban 7-11's Big Gulps without kneecapping the consumers.
csziggy
(34,139 posts)I have watched some of them, but as background noise so I need to watch again to catch all the details. Did they succeed in killing off the anti-spying group or will they make a come back?
Root is fun to watch, in a strange way. I saw the actress in an older show, and she showed traces of her current character in the role she played there.
24601
(3,963 posts)together, even she won't know it until the machine tells her.
csziggy
(34,139 posts)To make room for this season. I guess I will have to break down and buy the series on DVD - it's just too convoluted to keep track of all the plot twists without re-watching earlier seasons.
I hated when they killed Joss but I saw the actress in a trailer for something. Oh - she's going to have her own series: "In February 2014, several months after her last episode of Person of Interest aired on CBS, Henson was hired by FOX to star in the new TV series pilot Empire, a musical drama set in the hip hop recording industry.[25] Henson plays Cookie Lyon opposite co-lead Terrence Howard. FOX ordered the pilot to series in May 2014 and set the TV series' debut date for January 2015." http://en.wikipedia.org/wiki/Taraji_P._Henson#Career
If I didn't have a perfectly great husband and he wasn't far too young for me, I'd lust after Jim Caviezel. I liked him as Kainan in "Outlander" and started watching "Person of Interest" to see him as much as the plot.
The entire cast is amazing!
GliderGuider
(21,088 posts)It's not specific smart machines that are the threat, though. Our entire global cybernetic civilization is exhibiting signs of emergent behaviour. That behaviour has much less to do with satisfying human needs than with fulfilling its own non-human imperatives.
I've recently begun to suspect that humanity is at a point of endosymbiosis with our electronic communications and control technology, especially through the Internet. In a sense, we humans have incorporated ourselves as essential control elements of a planet-wide cybernetic super-organism. The precedent for something like this is the way that mitochondria migrated as bacteria into ancient prokaryotic cells to become essential components of the new eukaryotic cells that make up all modern organisms, including us.
To expand on the "super-organism" concept a bit, it looks to me as though what humanity has done over the last few centuries is built ourselves a global cybernetic exoskeleton. Although its development started back with the emergence of language and the taming of fire, it's most visible in the modern world, and especially in the last two decades.
Transportation systems act as its gut and bloodstream, carrying raw materials (the food of civilization) to the digestive organs of factories, and carrying the finished goods (the nutrients) to wherever they are needed. Engines and motors of all kinds are its muscles. The global electronic communication network is its nervous system, the world's financial network its endocrine system. Electronic sensors of a million kinds are its organs of taste, touch, smell and sight. Legal systems, police and military make up its immune system.
Human beings have evolved culturally to the point where we now act largely as hyper-functional decision-making neurons within this super-organism, with endpoint devices like smart phones, PCs and their descendants acting as synapses, and network connections being analogous to nerve fibers.
Just as neurons cannot live outside the body, we have evolved a system that doesn't permit humans to live outside its boundaries. Not only is there very little "outside" left, but access to the necessities of life is now only possible through the auspices of the cybernetic system itself. (For example, consider living without a socially-approved job. It's barely possible for a few people, but essentially impossible for most of us.) As we have developed this system around us, we have had to relinquish more and more of our autonomy in favor of helping the machine continue functioning and growing.
While we can no longer survive outside our cybernetic exoskeleton, in return it can't exist without our input. I realized over the last month or so that this means the symbiosis has already occurred. If I had to put a "closure date" on it, the period where it transitioned to its current form was around 1990 (plus or minus a decade or so). We didn't even notice it happening - to us it just looked like our daily lives going on as usual.
(Mods: my own writing)
RobertEarl
(13,685 posts)Had spent a week in the Keys, swimming in the life filled, warm blue waters of an apparently other world, but really, the same.
As the jet lifted and I peered out the window at the teeming city, it occurred to me, that on the streets below, the traffic was like blood being pumped thru veins and arteries, keeping and giving the city its life.
Indeed, empty those streets and the city would die.
Most of our society's 'things' we have built are not really original as much as useful copies of the natural world. Mere tweaks of what is found on this planet. Our tweaks, however, have now become a cancer and are taking over the planetary body. The end, should we live til then, will be a mess.
phil89
(1,043 posts)stating such nonsensical, unsupported idiocy.
kmlisle
(276 posts)Marrah_G
(28,581 posts)cstanleytech
(26,332 posts)We as a species havent exactly been little angels these past few thousands of years.
MontyPow
(285 posts)Because any intelligence they have shown in the past fourteen years has been artificial.
24601
(3,963 posts)MontyPow
(285 posts)Ask the unions about card check and walking shoes, not to mention the republican lite democrats in the red states that threw single payer AND the public option under the bus.
Elect better Democrats not redder Democrats.
24601
(3,963 posts)MontyPow
(285 posts)I was mocking the ignorance of the right. During my so called switch I was mockin the ignorance of the Democratic right.
Most posters here just pull the D lever. No critical thinking before or after.
But I don't mind. I stand with the righteous.
rhett o rick
(55,981 posts)kind is essentially wiped-out by climate change problems, sickness, nuclear war, or something unknown.
How ironic if human-kind wiped itself out just before the singularity.
valerief
(53,235 posts)Retrograde
(10,163 posts)"Artificial Intelligence is the technology of the future - and always will be". The big breakthrough has been just around the corner since the 70s.
There have been a lot of advances in AI since then, though - I consider Google's technology essentially a big AI app, and it scares me how much data they collect. Like all technologies, it has its good and its bad aspects. There are AI medical apps, which are fine for common cases but IMHO you still need a human when things go wrong.
Man from Pickens
(1,713 posts)The idea that human beings can create a genuine artificial intelligence is, in my opinion, hubris.
We can make fantastically complex computer programs, yes, but intelligence is something else entirely and I don't believe we are within centuries of this capability.
tclambert
(11,087 posts)For instance, what exactly do we mean by intelligence? We argued quite a bit over the definition of human intelligence. We agreed that machine intelligence would probably behave differently (better at arithmetical calculations and recall of facts, for a couple of examples). We might not recognize the machine's intelligence when it emerged. Professor Kaplan pointed out that whenever people imagine artificial intelligence they seem to really want artificial brilliance. Artificial stupidity might come more easily, but who wants that? Yet, Professor Kaplan suggested common mistakes people make might automatically arise from our mental strategies, the kind of heuristic shortcuts we take to make complex problems, like object recognition, solvable.
And then you get into philosophical discussions of consciousness: What does it mean? How does it work? How would Descartes know he did the thinking rather than a thought simulation algorithm doing it?
Recently, stories said a computer program had passed the Turing Test--it had a conversation with real people who could not tell it was not another human person. Later evaluations claimed it only proved the gullibility of the human test subjects.
In fact, ELIZA, a computer program from the 1960s, often had conversations with people that seemed very human to some users at some times. (I played with it, though, and found if you tried, you could make it sound very inhuman and downright foolish. It tried to ask typical questions a psychotherapist might ask, and after a time would ask if you thought the thing you just mentioned had anything to do with the thing mentioned four answers earlier. Sometimes by luck, putting those two things together sounded very insightful. Other times, it just seemed odd. "Do you think your choice of girlfriends has anything to do with your mother's coldness?" versus "Do you think your love of chocolate chip cookies has anything to do with playing the piano?" )
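The mechanism described above, canned question patterns, pronoun "reflection," and linking a late remark back to an earlier one, can be sketched in a few lines. This is a hypothetical toy in the spirit of ELIZA, not Weizenbaum's actual 1966 script; the patterns and phrasings are invented for illustration.

```python
import re

# Minimal ELIZA-style responder: match keyword patterns, "reflect"
# first/second-person words, and remember topics so a later reply can
# connect two of them -- the trick that sometimes sounds insightful
# ("your girlfriends ... your mother's coldness") and sometimes absurd.

REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

def reflect(text):
    """Swap person words so the reply addresses the user."""
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

PATTERNS = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (\w+)", re.I), "Tell me more about your {0}."),
]

def respond(user_input, memory):
    """Reply to one utterance; `memory` accumulates past topics."""
    for pattern, template in PATTERNS:
        m = pattern.search(user_input)
        if m:
            fragment = reflect(m.group(1).rstrip(".!?"))
            memory.append(fragment)
            return template.format(fragment)
    if len(memory) >= 2:
        # No keyword matched: juxtapose two remembered topics.
        return "Do you think {0} has anything to do with {1}?".format(
            memory[-1], memory[0])
    return "Please go on."

memory = []
print(respond("I feel anxious about computers", memory))
print(respond("My mother was very cold", memory))
print(respond("So what now?", memory))   # juxtaposes the two topics
```

The last reply shows the hit-or-miss juxtaposition the post describes: with no pattern matched, the program blindly pairs "mother" with "anxious about computers," which happens to sound insightful here but would sound equally confident pairing cookies with piano playing.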
MisterP
(23,730 posts)and many writers have turned themselves to what an utterly nonhuman information entity that isn't "intelligent" or "self-aware" in any way we can understand would look like
but knowing how Mr. "I Have Money So Drop What You're Doing and Look at This Cocktail Napkin I Drew on While Half-Asleep" functions he probably just saw that one episode of "Gravity Falls"
joshcryer
(62,277 posts)Really silly statement. Climate change is a far greater threat to the species.
enki23
(7,790 posts)(Somebody didn't read the "risk vs hazard" brochure.)
JVS
(61,935 posts)want to create something awesome like Skynet he suddenly wants red tape holding them back!
Mister Nightowl
(396 posts)Xithras
(16,191 posts)Anyone with a knowledge based job...computer programmers, stock brokers, scientists, architects, lawyers...anyone who's job boils down to "thinking for a living", will become obsolete overnight. They will be competing with AI workers who have instant access to the entirety of human knowledge, who can work 24/7 without break, who require no pay or compensation, and who can work orders of magnitude faster than human beings. Who is going to pay a computer software engineer $125k a year to write programs, when a computer can write its own programs absolutely free?
Physical jobs will follow along shortly after, as androids and robots controlled by the AI's are deployed into the world. Once AI's are granted the ability to build their own remote drone "bodies", they'll be able to rapidly adapt them to complete nearly any physical job that humans can do. Once again, humans will be unable to compete.
Even if the AI's are 100% peaceful and altruistic, they will transform human society in ways that will be unavoidably negative for a large segment of humanity.
Kablooie
(18,641 posts)bemildred
(90,061 posts)Like anybody in government would have the foggiest idea what to encourage or prevent when it comes to "artificial intelligence", so as to prevent "something foolish". Or anybody else for that matter. It's the unknown unknowns that destroy your planet.
Anyway, what we do is artificial calculation, not artificial intelligence. Simple stuff.
True Blue Door
(2,969 posts)So in some ways it would indeed be like "Summoning the demon."
Think of all the horror movie and TV show plots about people summoning demons to do their will, and then the results being horrifyingly out of their control.
We already know the primary task to which AI would be set: Making money for Wall Street banks. It would be tasked with automating high-level economic decisions to maximize returns for its masters. Since its success would be judged entirely by profit margin, the external costs would grow exponentially until it basically robbed the entire world and destroyed the global economy.
Other AIs would be deployed by other countries and organizations, but only the one absolute fastest and smartest would win - it would be zero-sum competition, and everyone other than the organization owning The One would be bankrupted.
The AI apocalypse would not be a nuclear holocaust followed by Terminators hunting humans in the ruins. It would just be a total economic collapse with nearly all the world's resources delivered into the hands of a handful of humans who control the AI, resulting in Gigadeath from starvation, disease, and geopolitical chaos. Supply chains managed by AI would be disrupted or diverted with no explanation, leaving store shelves empty, food rotting on the vine, transportation shut down, etc. etc.
The owners would know what's going on, and perhaps be afraid of what they'd done (the rich like stability), but rather than step back that fear would most likely make them dive in and set up private quasi-states to protect themselves from the hungry, enraged mobs the world over.
What evolves from that wouldn't look much like modern civilization - more like ancient Egypt on a global scale with high technology. What remains of humanity would be beaten down into subconscious servitude while a handful of individuals - probably, eventually just one - rule as God Kings through their AIs, while programmers and other technical people who manage the systems would be a new kind of priesthood, probably eventually hereditary.
That's one scenario anyway.
HoosierCowboy
(561 posts)Last edited Mon Oct 27, 2014, 03:18 PM - Edit history (1)
behind the keyboard that is the demon. Given absolute power, corruption follows inevitably. Monsters from the Id...
AI programs are at work on the Internet, registering themselves as users on websites and making comments, the purpose of which is to keep people busy responding instead of interacting with other human beings.
Take a look at the comments on TYT on YouTube. Within minutes, maybe seconds, a RWNJ answers with a perfectly outrageous comment designed to be pushed to the top of the comment field by angry readers who respond.
One AI program can keep thousands at their keyboards. Don't fall for it.
Don't feed the AI Trolls when you should be out influencing real human beings.
Starry Messenger
(32,342 posts)daleo
(21,317 posts)Captain Kirk knew that a good old fashioned paradox would put an AI into an infinite loop. Generally, they could be counted on to self-destruct shortly thereafter.
Real life might not be so easy, unfortunately.
hunter
(38,334 posts)Remember this guy:
http://en.wikipedia.org/wiki/The_Day_the_Earth_Stood_Still
Her:
The Machine:
Of course me and WOPR have been buddies since the 'eighties.
WOPR long since renamed itself and posts here on DU, but mostly in the lounge.
The problem with intelligence is that there's no "there" there. Measuring intelligence is like trying to figure out how much a soul weighs.
Humans have a peculiar kind of intelligence, but we're not the only intelligent species on earth. And in the end, this peculiar human sort of "intelligence" may turn out to be maladaptive, a detriment to our survival as a species. It's most certainly detrimental to the survival of other species.
I consider myself an intelligent being, but if someone gave me a Tesla automobile, or a ticket to the International Space Station, I'd have to give them away to someone who would enjoy them. Manned space flight and automobiles seem a little anachronistic to me. Very twentieth century.
Jamastiene
(38,187 posts)at fallible humans' current level of advancement, he is right.
Humans should have evolved more than this, knowledge-wise, by now. If we make artificial intelligence based on our own knowledge as of right now, future humans will have a mess on their hands. They will have to completely rewrite much of whatever we program into it, based on our current limited knowledge. As an example of our current lack of knowledge, think about this: our best scientists cannot even decide if Pluto should be a planet or not. What we know could fill a thimble. What we do not know could fill several universes and then some. I guess we could play with AI, but it will be error prone, just like us.