A former senior employee at OpenAI has criticised the company for prioritising product development over safety, revealing that he left after a long-standing disagreement over its key aims reached a “breaking point.”
Jan Leike, who served as the co-head of superalignment at OpenAI, focused on ensuring that powerful AI systems adhered to human values and aims. His resignation precedes a global AI summit in Seoul next week, where oversight of the technology will be a primary topic among politicians, experts, and tech executives.
Leike resigned shortly after the San Francisco-based company launched its latest AI model, GPT-4o. His departure follows the resignation of Ilya Sutskever, OpenAI’s co-founder and fellow co-head of superalignment, marking the loss of two senior safety figures in one week.
In a series of posts on X, Leike detailed his reasons for leaving, stating that the company’s safety culture had diminished in importance. “Over the past years, safety culture and processes have taken a backseat to shiny products,” he wrote.
OpenAI, known for developing the ChatGPT chatbot, the DALL-E image generator, and the Sora video generator, was founded with the goal of ensuring that artificial general intelligence (AGI) benefits all of humanity. Leike indicated that his disagreements with OpenAI’s leadership about the company’s priorities had been ongoing but had now “finally reached a breaking point.”
He argued that OpenAI should be investing more resources in safety, social impact, confidentiality, and security for its next generation of models. “These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there,” he wrote, noting that it was becoming increasingly difficult for his team to conduct its research.
“Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all of humanity,” Leike added, highlighting that OpenAI “must become a safety-first AGI company.”
OpenAI’s CEO, Sam Altman, responded to Leike’s comments on X, thanking him for his contributions to the company’s safety culture and acknowledging the need for continued efforts. “He’s right we have a lot more to do; we are committed to doing it,” Altman wrote.
Ilya Sutskever, OpenAI’s former chief scientist, also expressed confidence in the company’s future under its current leadership in his departure announcement. Sutskever had played a key role in the internal debates at OpenAI, initially supporting Altman’s removal as CEO last November before advocating for his reinstatement.
Leike’s concerns emerged as a panel of international AI experts released a report on AI safety, highlighting disagreements over the likelihood of powerful AI systems evading human control.