Expecting regulators from the EU to restore order to the AI chaos? Apparently, EU regulators are stumped on how to regulate generative AI models.
According to Julia Toussaint from The Project Counsel Media Team (story available here), the EU regulators who are in the midst of creating the world’s first binding AI rulebook were knocked off their butts by ChatGPT, forced to take an unexpected detour to address the matter.
According to Toussaint, a member of the European Parliament handling the AI legislative proposal drawn up by the European Commission said that generative AI models (like ChatGPT) had thrown a spanner in the works:
“My God, our draft is already out of date. It is a conundrum. These systems have no ethical understanding of the world, have no sense of truth, and they’re not reliable. They are very powerful engines and algorithms that can do quite a number of things and which themselves are not yet allocated to a purpose. We need to figure out how to make them fit into our proposal to regulate AI.”
Apparently, the tech has prompted all EU institutions to go back and rewrite their draft plans. The EU Council, which represents national capitals, approved its version of the draft AI Act this past December, which would entrust the Commission with establishing cybersecurity, transparency and risk-management requirements for general-purpose AIs. But an EU Council representative said, “My parliamentary colleague is right. Our stuff is already out of date.”
Resulting disputes and discussions about what should be part of the “high-risk” list have caused the primary group of 12 lawmakers working on the European Union’s AI Act – which they now describe as “a risk-based legislative tool, meant to cover specific high-risk use-cases of AI” – to become “stymied”.
What has the group done? They have penned an open letter calling on European Commission President Ursula von der Leyen and U.S. President Joe Biden to organize a global AI summit at which representatives from around the world would discuss and define governing principles for the development and deployment of AI models, ensuring they are human-centric, safe, and trustworthy.
The letter lands just as regulators in many countries increase their efforts to understand and manage AI. Canada, France, Italy, and Spain have each launched investigations into OpenAI’s ChatGPT over data privacy concerns. Italy’s Guarantor for the Protection of Personal Data has temporarily blocked access to the AI chatbot and said it will lift the current ban if OpenAI complies with privacy rules aimed at protecting personal data and minors by the end of April. Meetings and negotiations are ongoing.
In the US, we’re looking at more congressional hearings on forcing companies to undergo independent assessments of AI products before they’re released on the market. Meanwhile, the European Commission’s proposed draft rules for an AI Act are now 108 pages long (up from 52 pages), and sources tell the author a parliamentary committee “hopes to at least reach a common position with all parties” by April 26th. That’s just 5 days away.
So, if you thought EU regulators had a plan to restore order to the AI chaos, think again: they are apparently just as stumped as we are.
There’s a lot more interesting info in the article here. Expect the unexpected – for the foreseeable future!
So, what do you think? Are you surprised that EU regulators aren’t sure how to account for generative AI chatbots like ChatGPT? Please share any comments you might have or if you’d like to know more about a particular topic.
Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.