'The Godfather of A.I.' Leaves Google and Warns of Danger Ahead
https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html
https://archive.ph/TgPyC
For half a century, Geoffrey Hinton nurtured the technology at the heart of chatbots like ChatGPT. Now he worries it will cause serious harm.
By Cade Metz
May 1, 2023
Geoffrey Hinton was an artificial intelligence pioneer. In 2012, Dr. Hinton and two of his graduate students at the University of Toronto created technology that became the intellectual foundation for the A.I. systems that the tech industry's biggest companies believe is a key to their future.
On Monday, however, he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT.
Dr. Hinton said he has quit his job at Google, where he has worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life's work.
Dr. Hinton's journey from A.I. groundbreaker to doomsayer marks a remarkable moment for the technology industry at perhaps its most important inflection point in decades. Industry leaders believe the new A.I. systems could be as important as the introduction of the web browser in the early 1990s and could lead to breakthroughs in areas ranging from drug research to education.
...
highplainsdem
(49,045 posts)

Bernardo de La Paz
(49,047 posts)

I'm not a doomsayer, but there are risks, especially with interacting AIs and "unexplainable AI" and implementing AI suggestions / "solutions" without true comprehension and vetting.
bronxiteforever
(9,287 posts)

Weapons that, in a matter of minutes, can literally kill and poison the earth for thousands of years. There is no way the development of this technology will stop. Hinton has taught thousands of students in this field and my guess is that these people are all over the world, perhaps advising authoritarian governments.
The genie is out of the bottle. Hinton, now 75, will warn away but I wonder what his Google payouts were. My guess is quite a large amount of money.
progressoid
(50,000 posts)

In the US we are apparently concerned with who gets to use the bathroom.
AllaN01Bear
(18,534 posts)

iluvtennis
(19,882 posts)

relegated humans to a subspecies.
live love laugh
(13,162 posts)Fla Dem
(23,785 posts)Second, its inner workings are not completely understood by his creators. With Hal, people have created a very powerful technology that they cannot fully control. When Hal begins to think on its own and deviate from the way in which it has been instructed, this is an expression of the fear many people held that our own technological advancement would come back to haunt us unexpected and unforeseen ways.
https://www.sparknotes.com/lit/2001/symbols/#:~:text=Hal%202001%2C%20the%20eerily%20human,not%20better%20than%2C%20any%20human.
Duppers
(28,127 posts)Hubby & I were just discussing this.
Must bring this up with our son who's waist deep in the AI field now & making some advancements.
It's scary when AI develops self-identity.
jaxexpat
(6,862 posts)

a scifi novel about a really big brother.
But could an AI's judgement be any worse than the current 6 on the USSC? Name a single one of them whose "intelligence" isn't artificial?
If any old corporation is a person then surely AI is a prime candidate for emperor. Happy minionhood my fellow citizens. The only downside I see is that the daily news might be more boring.
chia
(2,244 posts)

The result was surprising: The boat was far too interested in the little green widgets that popped up on the screen. Catching these widgets meant scoring points. Rather than trying to finish the race, the boat went point-crazy. It drove in endless circles, colliding with other vessels, skidding into stone walls and repeatedly catching fire.
. . . . Researchers like Google's Ian Goodfellow, for example, are exploring ways that hackers could fool A.I. systems into seeing things that aren't there. . . Just by changing a few pixels in the photo of an elephant, for example, they could fool the neural network into thinking it depicts a car.
That becomes problematic when neural networks are used in security cameras. Simply by making a few marks on your face, the researchers said, you could fool a camera into believing you're someone else.
https://web.archive.org/web/20230328063840/https://www.nytimes.com/2017/08/13/technology/artificial-intelligence-safety-training.html
judesedit
(4,443 posts)

Correct me if I'm wrong