While Sam Altman, the CEO of OpenAI, has talked publicly about the need for AI regulation, OpenAI apparently lobbied privately for watered-down AI regulation.
Yesterday’s article from TIME (Exclusive: OpenAI Lobbied the EU to Water Down AI Regulation, written by Billy Perrigo and available here) stated that OpenAI has lobbied for significant elements of the most comprehensive AI legislation in the world—the EU’s AI Act—to be watered down in ways that would reduce the regulatory burden on the company, according to documents about OpenAI’s engagement with EU officials obtained by TIME from the European Commission via freedom of information requests.
This comes after Altman has spent the last month touring world capitals where, at talks to sold-out crowds and in meetings with heads of government, he has repeatedly spoken of the need for global AI regulation.
In several cases, OpenAI proposed amendments that were later made to the final text of the EU law—which was approved by the European Parliament on June 14, and will now proceed to a final round of negotiations before being finalized as soon as January.
In 2022, OpenAI repeatedly argued to European officials that the forthcoming AI Act should not consider its general purpose AI systems—including GPT-3, the precursor to ChatGPT, and the image generator Dall-E 2—to be “high risk,” a designation that would subject them to stringent legal requirements including transparency, traceability, and human oversight.
That argument brought OpenAI in line with Microsoft, which has invested $13 billion into the AI lab, and Google, both of which have previously lobbied EU officials in favor of loosening the Act’s regulatory burden on large AI providers. Both companies have argued that the burden for complying with the Act’s most stringent requirements should be on companies that explicitly set out to apply an AI to a high-risk use case—not on the (often larger) companies that build general purpose AI systems.
“By itself, GPT-3 is not a high-risk system,” said OpenAI in a previously unpublished seven-page document (included in the TIME article) that it sent to EU Commission and Council officials in September 2022, titled OpenAI White Paper on the European Union’s Artificial Intelligence Act. “But [it] possesses capabilities that can potentially be employed in high risk use cases.”
One expert who reviewed the OpenAI White Paper at TIME’s request was unimpressed. “What they’re saying is basically: trust us to self-regulate,” says Daniel Leufer, a senior policy analyst focused on AI at Access Now’s Brussels office. “It’s very confusing because they’re talking to politicians saying, ‘Please regulate us,’ they’re boasting about all the [safety] stuff that they do, but as soon as you say, ‘Well, let’s take you at your word and set that as a regulatory floor,’ they say no.”
OpenAI’s lobbying effort appears to have been a success: the final draft of the Act approved by EU lawmakers did not contain wording present in earlier drafts suggesting that general purpose AI systems should be considered inherently high risk. Instead, the agreed law called for providers of so-called “foundation models,” or powerful AI systems trained on large quantities of data, to comply with a smaller handful of requirements including preventing the generation of illegal content, disclosing whether a system was trained on copyrighted material, and carrying out risk assessments. OpenAI supported the late introduction of “foundation models” as a separate category in the Act, a company spokesperson told TIME.
All this is despite the fact that researchers have demonstrated that ChatGPT is vulnerable to a type of exploit known as a jailbreak, in which carefully crafted prompts can coax it into bypassing its safety filters and complying with instructions to, for example, write phishing emails or return recipes for dangerous substances.
Yeah, it sounds like they can regulate themselves just fine. 😉
BTW, this is the “cliffhanger” to yesterday’s post regarding Project Counsel Media and their coverage of two topics in one article – hence, the image at the top (see what I did there?). 😀
So, what do you think? Are you surprised that OpenAI apparently lobbied for watered down AI regulation while publicly calling for regulation? Please share any comments you might have or if you’d like to know more about a particular topic.
Image Copyright © TriStar Pictures
Disclaimer: The views represented herein are exclusively the views of the authors and speakers themselves, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.