OpenAI is Dissolving Team Focused on Long-Term AI Risks: Artificial Intelligence Trends

After OpenAI co-founder Ilya Sutskever resigned last week, CNBC is reporting that OpenAI is dissolving the team he co-led, which focused on long-term AI risks.

In the report (OpenAI dissolves team focused on long-term AI risks, less than one year after announcing it, written by Hayden Field and available here), CNBC states that OpenAI has disbanded its Superalignment team, which focused on the long-term risks of artificial intelligence, less than one year after the company announced the group. According to a person familiar with the situation, who spoke on condition of anonymity, some of the team members are being reassigned to other teams within the company.

OpenAI’s Superalignment team, announced last year, has focused on “scientific and technical breakthroughs to steer and control AI systems much smarter than us.” At the time, OpenAI said it would commit 20% of its computing power to the initiative over four years.


The news comes days after both team leaders, Sutskever and Jan Leike, announced their departures from the Microsoft-backed startup. Leike on Friday wrote that OpenAI’s “safety culture and processes have taken a backseat to shiny products.”

On Friday, Leike shared more details about why he left the startup.

“I joined because I thought OpenAI would be the best place in the world to do this research,” Leike wrote on X. “However, I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.”

Leike wrote that he believes much more of the company’s bandwidth should be focused on security, monitoring, preparedness, safety and societal impact.


“These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there,” he wrote. “Over the past few months my team has been sailing against the wind. Sometimes we were struggling for [computing resources] and it was getting harder and harder to get this crucial research done.”

Leike added that OpenAI must become a “safety-first AGI company.”

“Building smarter-than-human machines is an inherently dangerous endeavor,” he wrote. “OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products.”

Instead of building a “safety-first AGI company”, OpenAI is dissolving the team focused on safety and long-term risks. That’s not a good sign – unless, that is, you’re a fan of the continuing OpenAI soap opera! Sounds like a spin-off is in the works! 😀

So, what do you think? Are you concerned that OpenAI is dissolving the team focused on safety and long-term risks? Please share any comments you might have or if you’d like to know more about a particular topic.

Image created using Bing Image Creator, powered by DALL-E, using the term “opera singer with soap”. Get it? 😉

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.



3 comments

  1. Don’t you just love intraoffice politics? I am concerned about the drain of good minds from OpenAI, especially given the reasons they have stated when leaving. It is too early to tell who is right in this latest departure, but leaving over the dismantling of the security and safety elements of the organization is not a good look for OpenAI.
