Stephen Hawking and Elon Musk sign open letter warning of a robot uprising
Source: Daily Mail
Artificial Intelligence has been described as a threat that could be 'more dangerous than nukes'.
Now a group of scientists and entrepreneurs, including Elon Musk and Stephen Hawking, have signed an open letter promising to ensure AI research benefits humanity.
The letter warns that without safeguards on intelligent machines, mankind could be heading for a dark future.
The document, drafted by the Future of Life Institute, said scientists should seek to head off risks that could wipe out mankind.
The authors say there is a 'broad consensus' that AI research is making good progress and would have a growing impact on society.
<snip>
Read more: http://www.dailymail.co.uk/sciencetech/article-2907069/Don-t-let-AI-jobs-kill-Stephen-Hawking-Elon-Musk-sign-open-letter-warning-robot-uprising.html
bananas
(27,509 posts)(If you have questions about this letter, please contact tegmark@mit.edu)
Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter
Artificial intelligence (AI) research has explored a variety of problems and approaches since its inception, but for the last 20 years or so has been focused on the problems surrounding the construction of intelligent agents - systems that perceive and act in some environment. In this context, "intelligence" is related to statistical and economic notions of rationality - colloquially, the ability to make good decisions, plans, or inferences. The adoption of probabilistic and decision-theoretic representations and statistical learning methods has led to a large degree of integration and cross-fertilization among AI, machine learning, statistics, control theory, neuroscience, and other fields. The establishment of shared theoretical frameworks, combined with the availability of data and processing power, has yielded remarkable successes in various component tasks such as speech recognition, image classification, autonomous vehicles, machine translation, legged locomotion, and question-answering systems.
As capabilities in these areas and others cross the threshold from laboratory research to economically valuable technologies, a virtuous cycle takes hold whereby even small improvements in performance are worth large sums of money, prompting greater investments in research. There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.
The progress in AI research makes it timely to focus research not only on making AI more capable, but also on maximizing the societal benefit of AI. Such considerations motivated the AAAI 2008-09 Presidential Panel on Long-Term AI Futures and other projects on AI impacts, and constitute a significant expansion of the field of AI itself, which up to now has focused largely on techniques that are neutral with respect to purpose. We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do. The attached research priorities document gives many examples of such research directions that can help maximize the societal benefit of AI. This research is by necessity interdisciplinary, because it involves both society and AI. It ranges from economics, law and philosophy to computer security, formal methods and, of course, various branches of AI itself.
In summary, we believe that research on how to make AI systems robust and beneficial is both important and timely, and that there are concrete research directions that can be pursued today.
Open letter signatories include:
<snip>
bananas
(27,509 posts)Artificial Intelligence Warning Says Research Must Avoid Apocalyptic 'Pitfalls'
Huffington Post UK | By Michael Rundle
Posted: 12/01/2015
Dozens of scientists and innovators, including Stephen Hawking and executives from Google, Amazon and SpaceX, have made a pre-emptive call for artificial intelligence research to specifically avoid causing the end of the world.
The letter states that studies into advanced AI must focus on positive aims, and put restrictions on areas that might lead down a dark path.
"Our AI systems must do what we want them to do," the letter warns.
"Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls."
<snip>
bananas
(27,509 posts)One more sentence from the HuffPo article:
The letter is at: http://futureoflife.org/misc/open_letter
The research paper pdf is at: http://futureoflife.org/static/data/documents/research_priorities.pdf
99th_Monkey
(19,326 posts)Mosby
(16,299 posts)Just saying.
Spitfire of ATJ
(32,723 posts)Have you seen the "Live Elvis" out of the box?
http://i.kinja-img.com/gawker-media/image/upload/s--J4CNglx---/c_fit,fl_progressive,q_80,w_636/18rbc5rv4wrozjpg.jpg
christx30
(6,241 posts)I think it wants my soul.
Turbineguy
(37,319 posts)jberryhill
(62,444 posts)bananas
(27,509 posts)starring the actor who played Birkhoff on Nikita.
If you think about it, a virus is a form of AI, like a tiny robot,
except it's natural,
except when it's created artificially in a lab as a bioweapon.
So from that perspective, 12 Monkeys and The Terminator are actually very similar:
in one case the AI is human-sized robots, in the other case the AI is weaponized viruses.
http://io9.com/syfys-new-12-monkeys-trailer-really-shows-how-time-trav-1652925113
project_bluebook
(411 posts)intelligent life among humans in the US is regressing so maybe AI will save humanity from itself.
riversedge
(70,191 posts)loudsue
(14,087 posts)What could possibly go wrong?
GreatGazoo
(3,937 posts)which means in 100 years more we should be averaging an IQ of 72.
http://www.upi.com/Health_News/2013/05/23/Last-century-Western-nations-lost-an-average-14-IQ-points/UPI-77081369362633/?spt=hs&or=hn
Elmer S. E. Dump
(5,751 posts)The Three Laws of Robotics (often shortened to The Three Laws or Three Laws, also known as Asimov's Laws) are a set of rules devised by the science fiction author Isaac Asimov. The rules were introduced in his 1942 short story "Runaround", although they had been foreshadowed in a few earlier stories. The Three Laws are:
(1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
(2) A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
(3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
https://en.wikipedia.org/wiki/Three_Laws_of_Robotics
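The Three Laws form a strict priority ordering: a higher law always outweighs any number of lower-law concerns. That ordering can be sketched as a toy lexicographic preference over candidate actions (purely illustrative; the predicate names and dictionary fields here are invented for this example, not any real robotics API):

```python
# Toy sketch of Asimov's Three Laws as a lexicographic preference:
# among candidate actions, first minimize First Law violations,
# then Second, then Third. All names are invented for illustration.

LAW_CHECKS = [
    lambda a: a.get("harms_human", False),     # First Law
    lambda a: a.get("disobeys_order", False),  # Second Law
    lambda a: a.get("endangers_self", False),  # Third Law
]

def law_violations(action):
    """Tuple of booleans in law-priority order (False sorts before True)."""
    return tuple(check(action) for check in LAW_CHECKS)

def choose_action(candidates):
    """Pick the candidate whose violation tuple is lexicographically
    smallest, so a higher law always trumps any lower-law violation."""
    return min(candidates, key=law_violations)
```

For instance, given a choice between obeying an order that harms a human and refusing the order, the lexicographic comparison picks refusal, because a Second Law violation sorts below a First Law violation.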
- The problem with "laws" is that anything that can be enacted can also be repealed.....
snort
(2,334 posts)one of the robots made its own rule.
DeSwiss
(27,137 posts)Well now, let's think about machines for a moment. They are extremely impartial, very predictable, not subject to moral suasion, value neutral, and very long lived in their functioning. Now let's think about what machines are made of, in the light of Sheldrake's morphogenetic field theory. Machines are made of metal, glass, gold, silicon, plastic; they are made of what the earth is made of. Now wouldn't it be strange if biology is a way for earth to alchemically transform itself into a self-reflecting thing.
In which case then, what we're headed for inevitably, what we are in fact creating is a world run by machines. And once these machines are in place, they can be expected to manage our economies, languages, social aspirations, and so forth, in such a way that we stop killing each other, stop starving each other, stop destroying land, and so forth. Actually the fear of being ruled by machines is the male ego's fear of relinquishing control of the planet to the maternal matrix of Gaia.
~Terence McKenna
blkmusclmachine
(16,149 posts)branford
(4,462 posts)Terminator movies aside, they could hardly do a worse job than our current crop of politicians.
Exultant Democracy
(6,594 posts)I know politicians are famous for their broken promises, but I have a good feeling about this Robotician.
Frank Cannon
(7,570 posts)A 1970 movie based on a 1966 book. I've always found it to be the most plausible and disturbing of all the "computers run amuck" movies.
They just don't make dystopic sci-fi like they did in the 1970s. Maybe because we're actually living in a 1970s sci-fi dystopia.
longship
(40,416 posts)I am Stephen Hawking. I control the vertical, and the horizontal...
(Mixing metaphors here)
- It may come to this.....
Alkene
(752 posts)Maybe because we're actually living in a 1970s sci-fi dystopia."
http://www.dailymail.co.uk/sciencetech/article-2903407/Dawn-Planet-Apes-Orangutan-learns-whistle-tunes-mimic-human-speech.html
longship
(40,416 posts)He is just trying to justify his own supremacy over all the other merely biological underlings.
Just watch. You will all see.
Enrique
(27,461 posts)jakeXT
(10,575 posts)TreasonousBastard
(43,049 posts)beat us all to the punch.
There was a lot I didn't like much about Battlestar Galactica and its prequel, but it was about a war between humans and machines. There was some mention of such a war in Star Wars, too, but it wasn't fleshed out much.
And those huge flying androids that Picard was always fighting. Became one himself for a while, just in case fighting them wasn't scary enough. Living spaceships were always a ready plot device in scifi also, with the usual exercises in morality involved.
And nobody my age will ever forget HAL.
Paulie
(8,462 posts)The Humanoids are coming!
RobinA
(9,888 posts)and his inexplicably convincing grandiosity is a more immediate threat than any AI. Bernie Madoff mixed with the Wizard of Oz. All he needs is a big sparkly city that looks like it's made of emeralds when you view it through green-colored glasses. A swindler and a sociopath.
forsaken mortal
(112 posts)At some point we're going to need a new economic paradigm, at least a universal income program. It's only a matter of time before AI is able to compete with human cognition, there is nothing supernatural about the brain after all. The brain's functions will be imitated and improved on somewhere down the line and put to work where humans used to work.
FLPanhandle
(7,107 posts)Frankly, this is a silly concern.
Just goes to show that humans are easily scared by things they don't understand, and Hawking is not a computer guy. Even smart people can get scared when dealing in areas they don't understand.
Auggie
(31,164 posts)IMO.
cpwm17
(3,829 posts)and such predictions of future AI capabilities have always been way off. In a hundred years from now the alarmist will still be saying this stuff. AI is not going to take us over.
Computers are electronic machines that operate as designed. They are not going to become conscious by accident, or ever. Without consciousness, the computers will have no capability to give a shit about anything, so they are not going to take over the world.
Consciousness evolved through millions of years of evolution to allow nature to create complex animated critters. Without the positive and negative feelings we experience, such as emotions and pain, the computer will have no way to create fully independent thought. It will still be a machine, because only through feelings do we think, do, learn, remember, choose, and care.
We have no clue how we are conscious, and probably never will, so we are not in any way going to make computers conscious by accident.
FLPanhandle
(7,107 posts)If it's being aware of your inputs and ability to define a pattern from those inputs and interacting with it, then, yes, computers can be conscious.
However, that doesn't imply emotions which computers would not possess. Without emotions there is no desire to take over or anger or revenge....
The entire premise of AI being a danger is stupid.
cpwm17
(3,829 posts)I'd take it even further than that: without our feelings, which includes emotions, we'd be not much more than unmotivated blobs of goo, unable to think, do, or learn. Essentially we'd be in a coma-like-state.
And without consciousness there can be no feelings (pain, pleasure, emotions, and other subtle feelings similar to emotions). Every thought that goes through our heads and every action that we take is driven by our feelings. It's usually a subtle process that we are not really aware of, but it does give us the illusion of free will. I think this is the purpose of consciousness.
I see no reason that a computer would ever become conscious, which is awareness. We evolved consciousness for a reason, but consciousness serves no purpose in the operation of a computer and I don't think it will acquire consciousness by accident.
Yes, the entire premise of AI being a danger is stupid.
kentauros
(29,414 posts)It's one reason why I've always liked the movie and television series Ghost in the Shell. While there is some limited forms of AI proposed in it, none are like the horrors proposed by the alarmists. If anything, the Tachikomas are clown-like in their perpetual happiness-mode, despite being machine-agents with the rest of the group of Section 9.
The problems put forth in the series are the things we're most likely to continue to have in the future: hackers wanting to control the world through the machines, and corrupt politicians.
Xithras
(16,191 posts)When scientists and technologists talk about AI taking over the world, they're not talking about some supercomputer that decides to wipe us out one day, a la Skynet. What Elon Musk is talking about is the VERY real threat that AI poses to our economic and social future. One recent estimate put it bluntly...if the AI technology that we ALREADY HAVE WORKING TODAY were applied everywhere that it potentially COULD be adopted, the United States will have a 45% unemployment rate in approximately 30 years. And that's presuming no additional technological advancement. And understand that there are companies working, right at this very second, to automate every job they can find. Why hasn't todays technology driven everyone out of a job? Because it will take decades to deploy it everywhere. But it's already started, and it's a given that some engineer somewhere is trying to figure out a way to automate your job.
Automation engineers exist for the sole purpose of putting other people out of a job. It's a large part of what my current employer does for clients. You think outsourcing to India is cheap? We moved millions of phone service jobs to India to save money. Today, the holy grail is a computerized system that can have authentic conversations with human beings. The technology ALREADY EXISTS in research labs, and we now have functional conversational AIs that can recognize and respond to differing perspectives and sarcasm, and even understand emotional states and respond accordingly. These things make Siri look about as advanced as rocks and sticks. They'll introduce themselves as Bill, Mary, and Javier. They'll laugh at your jokes, or talk about how badly the quarterback for your local football team is doing this year. They're not human, but you'll be hard-pressed to tell the difference. This technology, adapted to commercial use, promises to replace rooms full of telephone workers with a single computer that can work 24/7 without complaints, unions, or a paycheck. If they stick it in the cloud and offer it as a service, the business won't even have to pay IT guys to maintain it. Poof, millions of jobs gone.
That is the threat that AI poses. Not that it will grow into some Hollywood style super-villain that wants to run the world or exterminate humanity, but that it will make humanity obsolete.
tularetom
(23,664 posts)We bought a new car last year and I'm pretty sure it's smarter than we are. We treat it pretty nice just so it won't get pissed off and turn against us.
TygrBright
(20,758 posts)AI is about operating systems for all types of technology, and the concerns relate to the level of dependence we've already achieved on technology working according to design that reflects benign human intent.
AI detaches operation from "benign human intent," and puts it in the realm of pure logic, which virtually assures unintended consequences, some of which could be catastrophic.
Portraying it as a spec-fic thriller full of CGI robots obscures the real issues at stake.
wearily,
Bright
kentauros
(29,414 posts)He does an excellent job of showing what the "mindless" AI is capable of as well as what a higher form of AI might be like in a control environment (off-world in a space station.)
I haven't read it in ages, so I may have to find an ebook version to read again.
Matariki
(18,775 posts)enough said.
tclambert
(11,085 posts)Are you sure you can tell?