
OpenAI’s New Team to Police Sam Altman Is Led by Sam Altman

The company is already training its next frontier model, bringing it a step closer to AGI.

OpenAI said in a press release on Tuesday that it is training its next frontier model, which it anticipates will bring the startup one step closer to artificial intelligence systems that are generally smarter than humans. The company also announced a new Safety and Security Committee to guide critical safety and security decisions, led by CEO Sam Altman and other OpenAI board members.

“While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment,” OpenAI said in the press release.

The announcement follows a tumultuous month for OpenAI, in which the group led by Ilya Sutskever and Jan Leike that researched existential risks from AI was disbanded. Former OpenAI board members Helen Toner and Tasha McCauley wrote in The Economist on Sunday that these developments, and others following Altman’s return, “bode ill for the OpenAI experiment in self-governance.” Several employees concerned with safety have also exited the company in recent months, many of whom called for greater attention to the dangers of AI systems.

This new group appears to be OpenAI’s attempt to address those fears, and its first task will be to evaluate OpenAI’s processes and safeguards over the next 90 days. However, including Altman in the group is unlikely to appease OpenAI’s critics. Toner and McCauley write that senior leaders told them Altman cultivated “a toxic culture of lying” at OpenAI. Just last week, Scarlett Johansson seemed to call Altman’s trustworthiness into question after the company created a voice that sounded just like hers.

Altman, Chair Bret Taylor, and board members Adam D’Angelo and Nicole Seligman will lead the Safety and Security Committee. The company’s new chief scientist, Jakub Pachocki, will also sit on the committee alongside department heads Matt Knight, John Schulman, Lilian Weng, and Aleksander Madry. The group will also consult with outside experts and make recommendations to the full OpenAI board on safety procedures.

In May, the company also released GPT-4 Omni (GPT-4o), which showcased a new frontier in human interaction with AI models. The model stunned many in the AI community with its real-time multimodal interactions, and OpenAI is evidently already pushing ahead with training the next one. Altman’s startup is facing pressure to beat out Google for an AI chatbot partnership with Apple, widely expected to be unveiled at WWDC in June. The partnership could be one of the most important business deals in tech.

That said, OpenAI’s new safety team is made up largely of board members brought on after Altman’s return and new department heads who gained power amid the recent departures. The group is a small step toward addressing the safety concerns plaguing OpenAI, but a collection of insiders seems unlikely to silence those worries.
