
bananas

(27,509 posts)
Tue Jan 13, 2015, 02:06 AM Jan 2015

Stephen Hawking and Elon Musk sign open letter warning of a robot uprising

Source: Daily Mail

Artificial Intelligence has been described as a threat that could be 'more dangerous than nukes'.

Now a group of scientists and entrepreneurs, including Elon Musk and Stephen Hawking, have signed an open letter promising to ensure AI research benefits humanity.

The letter warns that without safeguards on intelligent machines, mankind could be heading for a dark future.

The document, drafted by the Future of Life Institute, said scientists should seek to head off risks that could wipe out mankind.

The authors say there is a 'broad consensus' that AI research is making good progress and would have a growing impact on society.

<snip>

Read more: http://www.dailymail.co.uk/sciencetech/article-2907069/Don-t-let-AI-jobs-kill-Stephen-Hawking-Elon-Musk-sign-open-letter-warning-robot-uprising.html

46 replies
Stephen Hawking and Elon Musk sign open letter warning of a robot uprising (Original Post) bananas Jan 2015 OP
Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter bananas Jan 2015 #1
Artificial Intelligence Warning Says Research Must Avoid Apocalyptic 'Pitfalls' bananas Jan 2015 #2
"The letter links to a research paper - which is well worth reading - on the future of AI" bananas Jan 2015 #4
A 2001: A Space Odyssey / Lord of the Rings Redux. Life Imitates Art. -nt 99th_Monkey Jan 2015 #3
Asimov developed his three laws of robotics in a 1942 short story Mosby Jan 2015 #44
For those who don't think this is scary.... Spitfire of ATJ Jan 2015 #5
Wow that thing is horrifying. christx30 Jan 2015 #7
Looking for the "unsee" button Turbineguy Jan 2015 #26
Yeah, but what if some guy from the future comes back to kill them? jberryhill Jan 2015 #6
I'm looking forward to the new "12 Monkeys" series which starts Friday on SyFy. bananas Jan 2015 #21
Well, according to the last elections project_bluebook Jan 2015 #8
Good one. riversedge Jan 2015 #14
I know, right? People getting dumber, machines getting smarter, more calculating. loudsue Jan 2015 #20
not just the elections, researchers found a 14 point drop in IQ in the last 100 years or so GreatGazoo Jan 2015 #27
I wonder how many of those 14 points came after Faux came online. Elmer S. E. Dump Jan 2015 #32
K&R DeSwiss Jan 2015 #9
Mustn't forget that snort Jan 2015 #13
But only after being made able to, by his creator. DeSwiss Jan 2015 #16
AI solution: pray harder... blkmusclmachine Jan 2015 #10
I, for one, welcome our new AI overlords. branford Jan 2015 #11
John Quincy Adding Machine promises not to go on a killing spree if elected president. Exultant Democracy Jan 2015 #15
Anyone remember Colossus: The Forbin Project? Frank Cannon Jan 2015 #12
Welcome to world control. longship Jan 2015 #18
! DeSwiss Jan 2015 #19
"They just don't make dystopic sci-fi like they did in the 1970s." Alkene Jan 2015 #22
Says Hawking who is himself a robot! longship Jan 2015 #17
this dog is a hero Enrique Jan 2015 #23
Our dog just attacks the regular vacuum cleaner, the robot is left alone /nt jakeXT Jan 2015 #36
How soon we forget that science fiction TreasonousBastard Jan 2015 #24
To serve and obey, and guard men from harm Paulie Jan 2015 #25
Elon Musk RobinA Jan 2015 #28
AI forsaken mortal Jan 2015 #29
Hawking is out of his field here. FLPanhandle Jan 2015 #30
Climate Disaster has greater and more immediate potential for changing the social order ... Auggie Jan 2015 #31
Nonsense: the alarmists have been saying this stuff for years cpwm17 Jan 2015 #33
Depends on how you define Conscious FLPanhandle Jan 2015 #35
Yes, without emotions there is no desire to take over or anger or revenge. cpwm17 Jan 2015 #38
Thank you for making that point. kentauros Jan 2015 #39
The AI problem doesn't require them to be conscious. It's not alarmism. Xithras Jan 2015 #40
It's already too late for some of us tularetom Jan 2015 #34
Crappy headline. Their concern has nothing to do with "robots." TygrBright Jan 2015 #37
If you haven't read it already, you might like this book: kentauros Jan 2015 #42
Daily Mail Matariki Jan 2015 #41
Wasn't Stephen Hawking taken over by an artificial intelligence long ago? tclambert Jan 2015 #43
oh, like in Superman III? MisterP Jan 2015 #45
If they just get Old Glory Insurance, they'll be fine: Sheldon Cooper Jan 2015 #46

bananas

(27,509 posts)
1. Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter
Tue Jan 13, 2015, 02:14 AM
Jan 2015
http://futureoflife.org/misc/open_letter

(If you have questions about this letter, please contact tegmark@mit.edu)

Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter

Artificial intelligence (AI) research has explored a variety of problems and approaches since its inception, but for the last 20 years or so has been focused on the problems surrounding the construction of intelligent agents - systems that perceive and act in some environment. In this context, "intelligence" is related to statistical and economic notions of rationality - colloquially, the ability to make good decisions, plans, or inferences. The adoption of probabilistic and decision-theoretic representations and statistical learning methods has led to a large degree of integration and cross-fertilization among AI, machine learning, statistics, control theory, neuroscience, and other fields. The establishment of shared theoretical frameworks, combined with the availability of data and processing power, has yielded remarkable successes in various component tasks such as speech recognition, image classification, autonomous vehicles, machine translation, legged locomotion, and question-answering systems.

As capabilities in these areas and others cross the threshold from laboratory research to economically valuable technologies, a virtuous cycle takes hold whereby even small improvements in performance are worth large sums of money, prompting greater investments in research. There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.

The progress in AI research makes it timely to focus research not only on making AI more capable, but also on maximizing the societal benefit of AI. Such considerations motivated the AAAI 2008-09 Presidential Panel on Long-Term AI Futures and other projects on AI impacts, and constitute a significant expansion of the field of AI itself, which up to now has focused largely on techniques that are neutral with respect to purpose. We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do. The attached research priorities document gives many examples of such research directions that can help maximize the societal benefit of AI. This research is by necessity interdisciplinary, because it involves both society and AI. It ranges from economics, law and philosophy to computer security, formal methods and, of course, various branches of AI itself.

In summary, we believe that research on how to make AI systems robust and beneficial is both important and timely, and that there are concrete research directions that can be pursued today.

Open letter signatories include:

<snip>

bananas

(27,509 posts)
2. Artificial Intelligence Warning Says Research Must Avoid Apocalyptic 'Pitfalls'
Tue Jan 13, 2015, 02:20 AM
Jan 2015
http://www.huffingtonpost.co.uk/2015/01/12/artificial-intelligence-warning_n_6454678.html

Artificial Intelligence Warning Says Research Must Avoid Apocalyptic 'Pitfalls'
Huffington Post UK | By Michael Rundle
Posted: 12/01/2015

Dozens of scientists and innovators including Stephen Hawking and executives from Google, Amazon and SpaceX have made a pre-emptive call for artificial intelligence research to specifically avoid causing the end of the world.

The letter states that studies into advanced AI must focus on positive aims, and put restrictions on areas that might lead down a dark path.

"Our AI systems must do what we want them to do," the letter warns.

"Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls."

<snip>

bananas

(27,509 posts)
4. "The letter links to a research paper - which is well worth reading - on the future of AI"
Tue Jan 13, 2015, 02:27 AM
Jan 2015

One more sentence from the HuffPo article:

The letter links to a research paper - which is well worth reading - on the future of AI and its potential benefits and calls for more research in the identified areas.


The letter is at: http://futureoflife.org/misc/open_letter
The research paper pdf is at: http://futureoflife.org/static/data/documents/research_priorities.pdf

bananas

(27,509 posts)
21. I'm looking forward to the new "12 Monkeys" series which starts Friday on SyFy.
Tue Jan 13, 2015, 07:03 AM
Jan 2015

Starring the actor who played Birkhoff on Nikita.

If you think about it, a virus is a form of AI, like a tiny robot,
except it's natural,
except when it's created artificially in a lab as a bioweapon.

So from that perspective the 12 Monkeys and the Terminator are actually very similar:
in one case the AI is human-sized robots, in the other case the AI is weaponized viruses.

http://io9.com/syfys-new-12-monkeys-trailer-really-shows-how-time-trav-1652925113

 

project_bluebook

(411 posts)
8. Well, according to the last elections
Tue Jan 13, 2015, 02:50 AM
Jan 2015

intelligent life among humans in the US is regressing so maybe AI will save humanity from itself.

loudsue

(14,087 posts)
20. I know, right? People getting dumber, machines getting smarter, more calculating.
Tue Jan 13, 2015, 06:43 AM
Jan 2015

What could possibly go wrong?

 

DeSwiss

(27,137 posts)
9. K&R
Tue Jan 13, 2015, 03:03 AM
Jan 2015
Three Laws of Robotics by Isaac Asimov

The Three Laws of Robotics (often shortened to The Three Laws or Three Laws, also known as Asimov's Laws) are a set of rules devised by the science fiction author Isaac Asimov. The rules were introduced in his 1942 short story "Runaround", although they had been foreshadowed in a few earlier stories. The Three Laws are:

(1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.

(2) A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.

(3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

https://en.wikipedia.org/wiki/Three_Laws_of_Robotics


- The problem with ''laws'' is that anything that can be enacted can also be repealed.....
 

DeSwiss

(27,137 posts)
16. But only after being made able to, by his creator.
Tue Jan 13, 2015, 06:27 AM
Jan 2015
- Once granted, free will spreads like dandelions.....



I have had a thought about this recently which I will tell you. One of the science fiction fantasies that haunts the collective unconscious is expressed in the phrase "a world run by machines"; in the 1950s this was first articulated in the notion, "perhaps the future will be a terrible place where the world is run by machines."

Well now, let's think about machines for a moment. They are extremely impartial, very predictable, not subject to moral suasion, value neutral, and very long lived in their functioning. Now let's think about what machines are made of, in the light of Sheldrake's morphogenetic field theory. Machines are made of metal, glass, gold, silicon, plastic; they are made of what the earth is made of. Now wouldn't it be strange if biology is a way for earth to alchemically transform itself into a self-reflecting thing.

In which case then, what we're headed for inevitably, what we are in fact creating is a world run by machines. And once these machines are in place, they can be expected to manage our economies, languages, social aspirations, and so forth, in such a way that we stop killing each other, stop starving each other, stop destroying land, and so forth. Actually the fear of being ruled by machines is the male ego's fear of relinquishing control of the planet to the maternal matrix of Gaia.


~Terence McKenna
 

branford

(4,462 posts)
11. I, for one, welcome our new AI overlords.
Tue Jan 13, 2015, 04:19 AM
Jan 2015

Terminator movies aside, they could hardly do a worse job than our current crop of politicians.



Exultant Democracy

(6,594 posts)
15. John Quincy Adding Machine promises not to go on a killing spree if elected president.
Tue Jan 13, 2015, 05:54 AM
Jan 2015

I know politicians are famous for their broken promises, but I have a good feeling about this Robotician.

Frank Cannon

(7,570 posts)
12. Anyone remember Colossus: The Forbin Project?
Tue Jan 13, 2015, 04:48 AM
Jan 2015

A 1970 movie based on a 1966 book. I've always found it to be the most plausible and disturbing of all the "computers run amuck" movies.



They just don't make dystopic sci-fi like they did in the 1970s. Maybe because we're actually living in a 1970s sci-fi dystopia.

longship

(40,416 posts)
18. Welcome to world control.
Tue Jan 13, 2015, 06:38 AM
Jan 2015

I am Stephen Hawking. I control the vertical, and the horizontal...

(Mixing metaphors here)

longship

(40,416 posts)
17. Says Hawking who is himself a robot!
Tue Jan 13, 2015, 06:37 AM
Jan 2015

He is just trying to justify his own supremacy over all the other merely biological underlings.

Just watch. You will all see.

TreasonousBastard

(43,049 posts)
24. How soon we forget that science fiction
Tue Jan 13, 2015, 09:34 AM
Jan 2015

beat us all to the punch.

There were a lot of things I didn't like much about Battlestar Galactica and its prequel, but it was about a war between humans and machines. There was some mention of such a war in Star Wars, too, but it wasn't fleshed out much.

And those huge flying androids that Picard was always fighting. He became one himself for a while, just in case fighting them wasn't scary enough. Living spaceships were always a ready plot device in sci-fi also, with the usual exercises in morality involved.

And nobody my age will ever forget HAL.


RobinA

(9,888 posts)
28. Elon Musk
Tue Jan 13, 2015, 09:47 AM
Jan 2015

and his inexplicably convincing grandiosity is a more immediate threat than any AI. Bernie Madoff mixed with the Wizard of Oz. All he needs is a big sparkly city that looks like it's made of emeralds when you view it through green-colored glasses. A swindler and a sociopath.

forsaken mortal

(112 posts)
29. AI
Tue Jan 13, 2015, 10:06 AM
Jan 2015

At some point we're going to need a new economic paradigm, at least a universal income program. It's only a matter of time before AI is able to compete with human cognition; there is nothing supernatural about the brain, after all. The brain's functions will be imitated and improved on somewhere down the line and put to work where humans used to work.

FLPanhandle

(7,107 posts)
30. Hawking is out of his field here.
Tue Jan 13, 2015, 10:08 AM
Jan 2015

Frankly, this is a silly concern.

Just goes to show that humans are easily scared by things they don't understand, and Hawking is not a computer guy. Even smart people can get scared when dealing in areas they don't understand.

 

cpwm17

(3,829 posts)
33. Nonsense: the alarmists have been saying this stuff for years
Tue Jan 13, 2015, 11:39 AM
Jan 2015

and such predictions of future AI capabilities have always been way off. A hundred years from now the alarmists will still be saying this stuff. AI is not going to take us over.

Computers are electronic machines that operate as designed. They are not going to become conscious by accident, or ever. Without consciousness, the computers will have no capability to give a shit about anything, so they are not going to take over the world.

Consciousness evolved through millions of years of evolution to allow nature to create complex animated critters. Without the positive and negative feelings we experience, such as emotions and pain, the computer will have no way to create fully independent thought. It will still be a machine, because only through feelings do we think, do, learn, remember, choose, and care.

We have no clue how we are conscious, and probably never will, so we are not in any way going to make computers conscious by accident.

FLPanhandle

(7,107 posts)
35. Depends on how you define Conscious
Tue Jan 13, 2015, 01:54 PM
Jan 2015

If it means being aware of your inputs, having the ability to define a pattern from those inputs, and interacting with them, then, yes, computers can be conscious.

However, that doesn't imply emotions which computers would not possess. Without emotions there is no desire to take over or anger or revenge....

The entire premise of AI being a danger is stupid.

 

cpwm17

(3,829 posts)
38. Yes, without emotions there is no desire to take over or anger or revenge.
Tue Jan 13, 2015, 03:10 PM
Jan 2015

I'd take it even further than that: without our feelings, which include emotions, we'd be not much more than unmotivated blobs of goo, unable to think, do, or learn. Essentially we'd be in a coma-like state.

And without consciousness there can be no feelings (pain, pleasure, emotions, and other subtle feelings similar to emotions). Every thought that goes through our heads and every action that we take is driven by our feelings. It's usually a subtle process that we are not really aware of, but it does give us the illusion of free will. I think this is the purpose of consciousness.

I see no reason that a computer would ever become conscious, which is awareness. We evolved consciousness for a reason, but consciousness serves no purpose in the operation of a computer and I don't think it will acquire consciousness by accident.

Yes, the entire premise of AI being a danger is stupid.



kentauros

(29,414 posts)
39. Thank you for making that point.
Tue Jan 13, 2015, 03:41 PM
Jan 2015

It's one reason why I've always liked the movie and television series Ghost in the Shell. While there are some limited forms of AI proposed in it, none are like the horrors proposed by the alarmists. If anything, the Tachikomas are clown-like in their perpetual happiness mode, despite being machine agents with the rest of the group in Section 9.

The problems put forth in the series are the things we're most likely to continue to have in the future: hackers wanting to control the world through the machines, and corrupt politicians.

Xithras

(16,191 posts)
40. The AI problem doesn't require them to be conscious. It's not alarmism.
Tue Jan 13, 2015, 03:44 PM
Jan 2015

When scientists and technologists talk about AI taking over the world, they're not talking about some supercomputer that decides to wipe us out one day, a la Skynet. What Elon Musk is talking about is the VERY real threat that AI poses to our economic and social future. One recent estimate put it bluntly: if the AI technology that we ALREADY HAVE WORKING TODAY were applied everywhere that it potentially COULD be adopted, the United States would have a 45% unemployment rate in approximately 30 years. And that's presuming no additional technological advancement. And understand that there are companies working, right at this very second, to automate every job they can find. Why hasn't today's technology driven everyone out of a job? Because it will take decades to deploy it everywhere. But it's already started, and it's a given that some engineer somewhere is trying to figure out a way to automate your job.

Automation engineers exist for the sole purpose of putting other people out of a job. It's a large part of what my current employer does for clients. You think outsourcing to India is cheap? We moved millions of phone service jobs to India to save money. Today, the holy grail is a computerized system that can have authentic conversations with human beings. The technology ALREADY EXISTS in research labs, and we now have functional conversational AIs that can recognize and respond to differing perspectives, sarcasm, and even understand emotional states and respond accordingly. These things make Siri look about as advanced as rocks and sticks. They'll introduce themselves as Bill, Mary, and Javier. They'll laugh at your jokes, or talk about how badly the quarterback for your local football team is doing this year. They're not human, but you'll be hard-pressed to tell the difference. This technology, adapted to commercial use, promises to replace rooms full of telephone workers with a single computer that can work 24/7 without complaints, unions, or a paycheck. If they stick it in the cloud and offer it as a service, the business won't even have to pay IT guys to maintain it. Poof, millions of jobs gone.

That is the threat that AI poses. Not that it will grow into some Hollywood style super-villain that wants to run the world or exterminate humanity, but that it will make humanity obsolete.

tularetom

(23,664 posts)
34. It's already too late for some of us
Tue Jan 13, 2015, 12:00 PM
Jan 2015

We bought a new car last year and I'm pretty sure it's smarter than we are. We treat it pretty nice just so it won't get pissed off and turn against us.

TygrBright

(20,758 posts)
37. Crappy headline. Their concern has nothing to do with "robots."
Tue Jan 13, 2015, 02:08 PM
Jan 2015

AI is about operating systems for all types of technology, and the concerns relate to the level of dependence we've already achieved on technology working according to design that reflects benign human intent.

AI detaches operation from "benign human intent," and puts it in the realm of pure logic, which virtually assures unintended consequences, some of which could be catastrophic.

Portraying it as a spec-fic thriller full of CGI robots obscures the real issues at stake.

wearily,
Bright

kentauros

(29,414 posts)
42. If you haven't read it already, you might like this book:
Tue Jan 13, 2015, 03:54 PM
Jan 2015
The Two Faces of Tomorrow by James P. Hogan (1979)

He does an excellent job of showing what a "mindless" AI is capable of, as well as what a higher form of AI might be like in a controlled environment (off-world, in a space station).

I haven't read it in ages, so I may have to find an ebook version to read again.

tclambert

(11,085 posts)
43. Wasn't Stephen Hawking taken over by an artificial intelligence long ago?
Tue Jan 13, 2015, 04:13 PM
Jan 2015

Are you sure you can tell?
