OpenAI Reportedly Dissolves Its Existential AI Risk Team
Source: Gizmodo
OpenAI's Superalignment team, charged with controlling the existential danger of a superhuman AI system, has reportedly been disbanded, according to Wired on Friday. The news comes just days after the team's founders, Ilya Sutskever and Jan Leike, simultaneously quit the company.
Wired reports that OpenAI's Superalignment team, first launched in July 2023 to prevent superhuman AI systems of the future from going rogue, is no more. The report states that the group's work will be absorbed into OpenAI's other research efforts. Research on the risks associated with more powerful AI models will now be led by OpenAI cofounder John Schulman, according to Wired. Sutskever and Leike were some of OpenAI's top scientists focused on AI risks.
Leike posted a long thread on X Friday vaguely explaining why he left OpenAI. He says he's been fighting with OpenAI leadership about core values for some time, but reached a breaking point this week. Leike noted the Superalignment team has been "sailing against the wind," struggling to get enough compute for crucial research. He thinks that OpenAI needs to be more focused on security, safety, and alignment.
"Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue," said the Superalignment team in an OpenAI blog post when it launched in July. "But humans won't be able to reliably supervise AI systems much smarter than us, and so our current alignment techniques will not scale to superintelligence. We need new scientific and technical breakthroughs."
-snip-
Read more: https://gizmodo.com/openai-reportedly-dissolves-its-existential-ai-risk-tea-1851484827
Leike's Twitter thread this morning was pretty clear about why he left, and what it said about OpenAI's priorities is damning. Especially since OpenAI CEO Sam Altman has often talked about how dangerous the type of AI he's rushing to develop could be.
Link to my GD thread with Leike's complete statement this morning, and much more on what's going on: https://www.democraticunderground.com/100218957011
dchill
(38,734 posts)

highplainsdem
(49,404 posts)
(49,404 posts)He added: "I can sort of imagine what it's like when we have just, like, unbelievable abundance and systems that can help us resolve deadlocks and improve all aspects of reality and let us all live our best lives. But I can't quite. I think the good case is just so unbelievably good that you sound like a really crazy person to start talking about it."
His thoughts on the worst-case scenario, though, were pretty bleak.
"The bad case and I think this is important to say is, like, lights out for all of us," Altman said. "I'm more worried about an accidental misuse case in the short term."
He added: "So I think it's like impossible to overstate the importance of AI safety and alignment work. I would like to see much, much more happening."
-snip-
He was either deliberately lying there, or he just decided he couldn't let a little thing like "lights out for all of us" get in the way of what they're doing now, not even to the extent of giving the safety team the compute resources he'd promised.
reACTIONary
(5,820 posts)... pull out the plug.
LudwigPastorius
(9,487 posts)Chat GPT-4 isn't superintelligent AI. It's not even human-level general intelligence, yet it learned how to manipulate someone to do "its bidding".
https://gizmodo.com/gpt4-open-ai-chatbot-task-rabbit-chatgpt-1850227471
reACTIONary
(5,820 posts)... it didn't decide to do so for some nefarious purpose all its own. It was following instructions. No one was "doing its bidding". It, and the TaskRabbit worker, were doing the bidding of the prompter.
Just pull the plug... And, while you're at it, throw the prompter in jail.
Ford_Prefect
(8,010 posts)no one to be aware of how dangerous and how real this potential is. We appear to have a genuine "Mad Scientist makes a monster who takes over the world" scenario.