OpenAI Board Formed Safety Committee, Plans for New AI Model: Artificial Intelligence Trends

OpenAI announced today that the OpenAI Board formed a Safety and Security Committee and has begun training a new AI model to replace the GPT-4 series.

In a blog post earlier today, OpenAI announced:

“Today, the OpenAI Board formed a Safety and Security Committee led by directors Bret Taylor (Chair), Adam D’Angelo, Nicole Seligman, and Sam Altman (CEO). This committee will be responsible for making recommendations to the full Board on critical safety and security decisions for OpenAI projects and operations.

OpenAI has recently begun training its next frontier model and we anticipate the resulting systems to bring us to the next level of capabilities on our path to AGI. While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment.

A first task of the Safety and Security Committee will be to evaluate and further develop OpenAI’s processes and safeguards over the next 90 days. At the conclusion of the 90 days, the Safety and Security Committee will share their recommendations with the full Board. Following the full Board’s review, OpenAI will publicly share an update on adopted recommendations in a manner that is consistent with safety and security.”

Just eleven days ago, it was reported that OpenAI was dissolving the Superalignment team (which focused on the long-term risks of AI) only a year after the company announced the group. The move came on the heels of the departures of both team leaders, including OpenAI co-founder Ilya Sutskever. Many people raised concerns about the disbanding of the team, especially after the other team leader – Jan Leike – wrote that OpenAI’s “safety culture and processes have taken a backseat to shiny products.”

Will this address those concerns? Perhaps not entirely. While this is happening, two former OpenAI board members – Helen Toner and Tasha McCauley – who left in the November board shakeup after Altman was fired and then rehired, said in an op-ed for The Economist published Sunday that AI companies can’t be trusted to govern themselves and that third-party regulation is necessary to hold them accountable.

That’s not all. They also wrote that they stood by their decision to remove Altman, citing statements from senior leaders that the cofounder created a “toxic culture of lying” and engaged in behavior that could be “characterized as psychological abuse.” Ruh-roh.

The continuing OpenAI soap opera keeps churning out new episodes! No re-runs here! 😀

So, what do you think? Are you excited that the OpenAI Board formed a Safety and Security Committee, or do you think it’s merely being done to make the company look better? Please share any comments you might have, or let me know if you’d like to learn more about a particular topic.

Image created using Bing Image Creator Powered by DALL-E, using the term “opera singer with soap”. Get it? 😉

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

