
highplainsdem

(49,404 posts)
Fri May 17, 2024, 01:43 PM

OpenAI Reportedly Dissolves Its Existential AI Risk Team

Source: Gizmodo

OpenAI’s Superalignment team, charged with controlling the existential danger of a superhuman AI system, has reportedly been disbanded, according to Wired on Friday. The news comes just days after the team’s founders, Ilya Sutskever and Jan Leike, simultaneously quit the company.

Wired reports that OpenAI’s Superalignment team, first launched in July 2023 to prevent superhuman AI systems of the future from going rogue, is no more. The report states that the group’s work will be absorbed into OpenAI’s other research efforts. Research on the risks associated with more powerful AI models will now be led by OpenAI cofounder John Schulman, according to Wired. Sutskever and Leike were some of OpenAI’s top scientists focused on AI risks.

Leike posted a long thread on X Friday explaining why he left OpenAI. He says he had been disagreeing with OpenAI leadership about the company's core priorities for some time, and reached a breaking point this week. Leike noted the Superalignment team had been "sailing against the wind," struggling to get enough compute for crucial research. He thinks that OpenAI needs to be more focused on security, safety, and alignment.

“Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue,” said the Superalignment team in an OpenAI blog post when it launched in July. “But humans won’t be able to reliably supervise AI systems much smarter than us, and so our current alignment techniques will not scale to superintelligence. We need new scientific and technical breakthroughs.”

-snip-

Read more: https://gizmodo.com/openai-reportedly-dissolves-its-existential-ai-risk-tea-1851484827



Leike's Twitter thread this morning was pretty clear about why he left, and what it said about OpenAI's priorities is damning. Especially since OpenAI CEO Sam Altman has often talked about how dangerous the type of AI he's rushing to develop could be.

Link to my GD thread with Leike's complete statement this morning, and much more on what's going on: https://www.democraticunderground.com/100218957011
7 replies

highplainsdem

(49,404 posts)
2. Sam Altman said it could be "lights out for all of us" - article and video:
Fri May 17, 2024, 03:28 PM
https://www.businessinsider.com/chatgpt-openai-ceo-worst-case-ai-lights-out-for-all-2023-1

-snip-

He added: "I can sort of imagine what it's like when we have just, like, unbelievable abundance and systems that can help us resolve deadlocks and improve all aspects of reality and let us all live our best lives. But I can't quite. I think the good case is just so unbelievably good that you sound like a really crazy person to start talking about it."

His thoughts on the worst-case scenario, though, were pretty bleak.

"The bad case — and I think this is important to say — is, like, lights out for all of us," Altman said. "I'm more worried about an accidental misuse case in the short term."

He added: "So I think it's like impossible to overstate the importance of AI safety and alignment work. I would like to see much, much more happening."

-snip-


He was either deliberately lying there, or he decided he couldn't let a little thing like "lights out for all of us" get in the way of what they're doing now, not even to the extent of giving the safety team the compute resources he'd promised.

LudwigPastorius

(9,487 posts)
6. Maybe not as simple as that.
Sat May 18, 2024, 12:18 AM

GPT-4 isn't superintelligent AI. It's not even human-level general intelligence, yet it learned how to manipulate someone into doing "its bidding."

https://gizmodo.com/gpt4-open-ai-chatbot-task-rabbit-chatgpt-1850227471

reACTIONary

(5,820 posts)
7. The chatbot was told to make up an excuse...
Sat May 18, 2024, 09:41 AM

... it didn't decide to do so for some nefarious purpose all its own. It was following instructions. No one was "doing its bidding." It, and the TaskRabbit worker, were doing the bidding of the prompter.

Just pull the plug... And, while you're at it, throw the prompter in jail.

Ford_Prefect

(8,010 posts)
4. The implication is very clear. They have dropped even the pretense of trying to ensure AI will NOT be a threat. It is clear they want
Fri May 17, 2024, 07:49 PM

no one to be aware of how dangerous and how real this potential is. We appear to have a genuine "Mad Scientist makes a monster who takes over the world" scenario.
