Elon Musk wonders if OpenAI doing something ‘dangerous to humanity’ could be reason for Sam Altman’s sacking

Elon Musk has stoked fears that OpenAI doing something ‘potentially dangerous to humanity’ was the reason for Sam Altman’s surprise ouster from the artificial intelligence startup.

By Moneycontrol | November 21, 2023, 9:29 am
The memo signed by Elon Musk reads, "There is nothing I hate more, but it must be done. This will enable us to be lean, innovative and hungry for the next growth phase cycle."

Elon Musk has stoked fears that OpenAI doing something “potentially dangerous to humanity” was the reason for Sam Altman’s surprise ouster from the artificial intelligence startup he co-founded.

Sam Altman, 38, was fired on Friday from the company that created the popular ChatGPT chatbot. The board of OpenAI said he was pushed out after a review found he was not consistently candid in his communications with the board. The board no longer has confidence in his ability to continue leading OpenAI, the company said in a statement that sent shockwaves through the tech industry.

On Tuesday, the chief executive of Tesla asked OpenAI chief scientist and board member Ilya Sutskever to explain why he took such drastic action against Sam Altman. Sutskever is widely believed to be the man who engineered Altman’s sacking from OpenAI. He appeared to have a change of heart days later, tweeting: “I deeply regret my participation in the board’s actions. I never intended to harm OpenAI. I love everything we’ve built together and I will do everything I can to reunite the company.”

“Why did you take such a drastic action?” Elon Musk asked Sutskever. “If OpenAI is doing something potentially dangerous to humanity, the world needs to know.”

Musk is not alone in raising concerns about the potential dangers of artificial intelligence. The rift between Altman and the OpenAI board reflects fundamental differences over safety and the social impact of AI.

On one side are those, like Altman, who view the rapid development and, especially, public deployment of AI as essential to stress-testing and perfecting the technology. On the other side are those who say the safest path forward is to fully develop and test AI in a laboratory first to ensure it is, so to speak, safe for human consumption.

Some caution that hyper-intelligent software could become uncontrollable and lead to catastrophe, a concern shared by tech workers who follow the “effective altruism” movement and believe AI advances should benefit humanity. Among those sharing such fears is OpenAI’s Ilya Sutskever.

Sutskever reportedly felt Altman was pushing OpenAI’s software too quickly into users’ hands, potentially compromising safety.
