OpenAI creates oversight team with Sam Altman on board, begins training new model

OpenAI CEO Sam Altman speaks during the Microsoft Build conference at Microsoft headquarters in Redmond, Washington, on May 21, 2024. 

Jason Redmond | AFP | Getty Images

OpenAI on Tuesday said it created a safety and security committee led by senior executives, after disbanding its previous oversight board in mid-May.

The new committee will be responsible for recommending to OpenAI’s board “critical safety and security decisions for OpenAI projects and operations,” the company said.

News of the new committee comes as the developer of the ChatGPT virtual assistant announced that it has begun training its “next frontier model.”

The firm said in a blog post that it anticipates the “resulting systems to bring us to the next level of capabilities on our path to AGI,” or artificial general intelligence, a kind of AI that is as smart as or smarter than humans.

In addition to Altman, the safety committee will consist of Bret Taylor, Adam D’Angelo and Nicole Seligman, all members of OpenAI’s board of directors, according to the blog post.

The formation of the new oversight committee comes after OpenAI dissolved a previous team that focused on the long-term risks of AI. That dissolution followed the departures of the team’s leaders, OpenAI co-founder Ilya Sutskever and researcher Jan Leike, from the Microsoft-backed startup.

Leike earlier this month wrote that OpenAI’s “safety culture and processes have taken a backseat to shiny products.” In response to his departure, Altman said on social media platform X that he was sad to see Leike leave, adding that OpenAI has “a lot more to do.”

Over the next 90 days, the safety committee will evaluate OpenAI’s processes and safeguards and share its recommendations with the company’s board, the blog post said. OpenAI will provide an update on the recommendations it has adopted at a later date.

AI safety has been at the forefront of a larger debate, as the huge models that underpin applications like ChatGPT get more advanced. AI product developers also wonder when AGI will arrive and what risks will accompany it.

— CNBC’s Hayden Field contributed to this report.
