Senators Have Demanded OpenAI Discuss Its Safety Efforts: Artificial Intelligence Trends

In the latest scrutiny of OpenAI’s safety policies, five US senators have demanded that OpenAI discuss its efforts to ensure its AI is safe.

As reported by The Washington Post (Senators demand OpenAI detail efforts to make its AI safe, written by Pranshu Verma, Cat Zakrzewski and Nitasha Tiku and available here), five senators demanded in a Monday letter that OpenAI turn over data about its efforts to build safe and secure artificial intelligence, following employee warnings that the company rushed through safety-testing of its latest AI model.

Led by Sen. Brian Schatz (D-Hawaii), the five lawmakers asked OpenAI chief executive Sam Altman to outline how the maker of ChatGPT plans to meet “public commitments” to ensure its AI does not cause harm, such as teaching users to build bioweapons or helping hackers develop new kinds of cyberattacks, in the letter obtained by The Post.


The senators — a group of Democrats and an independent — also asked the company for information about employee agreements, which could have muzzled workers who wished to alert regulators to risks. In a July letter to the Securities and Exchange Commission, OpenAI whistleblowers said they had filed a complaint with the agency alleging the company illegally issued restrictive severance, nondisclosure and employee agreements, potentially penalizing workers who wished to raise concerns to federal regulators.

There are twelve questions, four of which have follow-up questions. Examples include:

  • Does OpenAI plan to honor its previous public commitment to dedicate 20 percent of its computing resources to research on AI safety?
  • What security and cybersecurity protocols does OpenAI have in place, or plan to put in place, to prevent malicious actors or foreign adversaries from stealing an AI model, research, or intellectual property from OpenAI?
  • Does OpenAI allow independent experts to test and assess the safety and security of OpenAI’s systems pre-release?
  • Will OpenAI commit to making its next foundation model available to U.S. Government agencies for pre-deployment testing, review, analysis, and assessment?

On that last question: yeah, sure they will! 😉

The senators requested answers to their questions by August 13. It will be interesting to see if and how OpenAI responds. The company seems unlikely to ignore the request, but it seems equally unlikely to provide the detailed answers the senators are seeking.


So, what do you think? Will OpenAI answer the senators’ questions? Please share any comments you might have, or let me know if you’d like to hear more about a particular topic.

Image created using GPT-4o’s Image Creator Powered by DALL-E, using the term “robot looking at a warning sign on a computer monitor”.

Disclaimer: The views represented herein are exclusively the views of the authors and speakers themselves, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

