Welcome to DU! The truly grassroots left-of-center political community where regular people, not algorithms, drive the discussions and set the standards. Join the community: Create a free account Support DU (and get rid of ads!): Become a Star Member Latest Breaking News General Discussion The DU Lounge All Forums Issue Forums Culture Forums Alliance Forums Region Forums Support Forums Help & Search

dalton99a

(81,637 posts)
Mon May 1, 2023, 07:49 AM May 2023

'The Godfather of A.I.' Leaves Google and Warns of Danger Ahead

https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html
https://archive.ph/TgPyC

‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead
For half a century, Geoffrey Hinton nurtured the technology at the heart of chatbots like ChatGPT. Now he worries it will cause serious harm.
By Cade Metz
May 1, 2023

Geoffrey Hinton was an artificial intelligence pioneer. In 2012, Dr. Hinton and two of his graduate students at the University of Toronto created technology that became the intellectual foundation for the A.I. systems that the tech industry’s biggest companies believe is a key to their future.

On Monday, however, he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT.

Dr. Hinton said he has quit his job at Google, where he has worked for more than decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life’s work.

Dr. Hinton’s journey from A.I. groundbreaker to doomsayer marks a remarkable moment for the technology industry at perhaps its most important inflection point in decades. Industry leaders believe the new A.I. systems could be as important as the introduction of the web browser in the early 1990s and could lead to breakthroughs in areas ranging from drug research to education.

...
12 replies = new reply since forum marked as read
Highlight: NoneDon't highlight anything 5 newestHighlight 5 most recent replies

Bernardo de La Paz

(49,047 posts)
2. Caution is definitely advised. We can't stop it, but we can slow it and prepare better
Mon May 1, 2023, 08:35 AM
May 2023

I'm not a doomsayer, but there are risks, especially with interacting AIs and "unexplainable AI" and implementing AI suggestions / "solutions" without true comprehension and vetting.

bronxiteforever

(9,287 posts)
3. The world can't even stop nuclear proliferation.
Mon May 1, 2023, 08:40 AM
May 2023

Weapons ,that in a matter of minutes, can literally kill and poison the earth for thousands of years. There is no way the development of this technology will stop. Hinton has taught thousands of students in this field and my guess is that these people are all over the world, perhaps advising authoritarian governments.

The genie is out of the bottle. Hinton, now 75, will warn away but I wonder what his Google payouts were. My guess is quite a large amount of money.

progressoid

(50,000 posts)
4. The EU is trying to reign this in.
Mon May 1, 2023, 08:43 AM
May 2023

In the US we are apparently concerned with who gets to use the bathroom.

iluvtennis

(19,882 posts)
6. Wow, this sounds a lot like the Skynet technology that took over in the Terminator Sci-Fi movie and
Mon May 1, 2023, 09:16 AM
May 2023

relegated humans to a subspecies.

Fla Dem

(23,785 posts)
8. 2001: A Space Odyssey / HAL Warnings about AI even in 1968
Mon May 1, 2023, 09:49 AM
May 2023


Hal 2001, the eerily human-like computer aboard the Discovery space ship, represents technological advancement. It is symbolic of many long-held concerns about technology. First, Hal is artificially intelligent. It can think as well as, if not better than, any human.
Second, its inner workings are not completely understood by his creators. With Hal, people have created a very powerful technology that they cannot fully control. When Hal begins to think on its own and deviate from the way in which it has been instructed, this is an expression of the fear many people held that our own technological advancement would come back to haunt us unexpected and unforeseen ways.
https://www.sparknotes.com/lit/2001/symbols/#:~:text=Hal%202001%2C%20the%20eerily%20human,not%20better%20than%2C%20any%20human.



Duppers

(28,127 posts)
12. Coincidentally,...
Mon May 1, 2023, 06:06 PM
May 2023

Hubby & I were just discussing this.
Must bring this up with our son who's waist deep in the AI field now & making some advancements.

It's scary when AI develops self-identity.



jaxexpat

(6,862 posts)
9. Self educating AI. Sounds like the opening sentence of......
Mon May 1, 2023, 10:01 AM
May 2023

a scifi novel about a really big brother.

But could an AI's judgement be any worse than the current 6 on the USSC? Name a single one of them whose "intelligence" isn't artificial?

If any old corporation is a person then surely AI is a prime candidate for emperor. Happy minionhood my fellow citizens. The only downside I see is that the daily news might be more boring.

chia

(2,244 posts)
10. The implications are alarming. From an additional article linked from the OP, AI gone rogue:
Mon May 1, 2023, 10:14 AM
May 2023
Sitting inside OpenAI’s San Francisco offices on a recent afternoon, the researcher Dario Amodei showed off an autonomous system that taught itself to play Coast Runners, an old boat-racing video game. The winner is the boat with the most points that also crosses the finish line.

The result was surprising: The boat was far too interested in the little green widgets that popped up on the screen. Catching these widgets meant scoring points. Rather than trying to finish the race, the boat went point-crazy. It drove in endless circles, colliding with other vessels, skidding into stone walls and repeatedly catching fire.


. . . . Researchers like Google’s Ian Goodfellow, for example, are exploring ways that hackers could fool A.I. systems into seeing things that aren’t there. . . Just by changing a few pixels in the photo of elephant, for example, they could fool the neural network into thinking it depicts a car.
That becomes problematic when neural networks are used in security cameras. Simply by making a few marks on your face, the researchers said, you could fool a camera into believing you’re someone else.

Another big worry is that A.I. systems will learn to prevent humans from turning them off. If the machine is designed to chase a reward, the thinking goes, it may find that it can chase that reward only if it stays on. This oft-described threat is much further off, but researchers are already working to address it.

https://web.archive.org/web/20230328063840/https://www.nytimes.com/2017/08/13/technology/artificial-intelligence-safety-training.html

judesedit

(4,443 posts)
11. Was it Einstein who was sorry he had a hand in creating the atom bomb for the same reasons?
Mon May 1, 2023, 10:51 AM
May 2023

Correct me if I'm wrong

Latest Discussions»Issue Forums»Editorials & Other Articles»'The Godfather of A.I.' L...