Not Quite Asimov’s 3 Laws (But We’re Getting There): New Proposed Artificial Intelligence Act from the European Commission

In his 1950 story collection, I, Robot, Isaac Asimov establishes the Three Laws of Robotics for a fictional version of Earth in the mid-21st century. These laws were created so that interactions between robots and humans wouldn’t lead to harm. The European Commission didn’t quite go to Asimov’s level, but it did recently propose a new Artificial Intelligence Act, the first-ever legal framework on AI.

For those who don’t know, here were Asimov’s Three Laws of Robotics:

  • First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

It seems that Asimov wasn’t wrong to assume that those of us actually living in the time period of his science fiction would need laws to manage the way humans and technology come together (though we’re still a “ways off” from the robots he imagined for our time).


Instead, artificial intelligence (AI) is the latest technology to fall under scrutiny with newly proposed legislation in the European Union (EU). With the “Proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act)”, the European Commission is proposing the first ever legal framework on AI.

This is only one part of the EU’s Data Strategy, and the EU has already become a bellwether in other areas such as data privacy. One only has to look at the trends around privacy acts in US states like California and Virginia and the influence of the General Data Protection Regulation (GDPR) to see the significance AI regulation could have in coming years.

The proposed Artificial Intelligence Act provides varying degrees of regulation for AI systems: those deemed low-risk would be subject to basic disclosure requirements, while dangerous AI systems created to exploit human vulnerabilities, including government-run surveillance, would be banned outright (except in certain circumstances). High-risk AI that falls short of an outright ban would require detailed compliance reviews in addition to current EU compliance requirements.

Similar to the GDPR, these regulations would apply to AI systems whose output is used within the EU, even if the producer or user is located outside the EU. Even more interesting is the definition of “AI system” set forth in the legislation.

Having worked in LegalTech for the past six years, I’ve heard more than one debate over what constitutes AI, usually in regard to products claiming to have AI, only to have competitors say, “That’s not real AI.”

Under the newly proposed European legislation, these arguments may be moot, as it lays out a broad definition of the technology that would fall under AI: software that uses any of several identified approaches to generate outputs for a set of human-defined objectives. Not only does this cover neural networks, but statistical approaches and search and optimization methods as well.
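To make that breadth concrete, here is a minimal, purely illustrative sketch of the kind of statistical classifier commonly used in technology-assisted review. The library choice (scikit-learn) and the toy documents and labels are my own assumptions for illustration, not anything drawn from the Act or from any particular product. The point is simply that there is no neural network anywhere in it, yet it generates outputs for a human-defined objective, which is arguably all the proposed definition requires.

```python
# Illustrative only: a plain statistical classifier of the kind used in
# technology-assisted review (TAR). No neural network involved, yet it
# "generates outputs for a human-defined objective" -- the sort of software
# the proposed Act's broad definition of an "AI system" appears to cover.
# The documents and labels below are made up for this sketch.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training set: documents reviewed by humans and coded
# responsive (1) or non-responsive (0).
train_docs = [
    "quarterly earnings forecast attached for review",
    "lunch order for friday team meeting",
    "draft merger agreement terms and conditions",
    "office printer is out of toner again",
]
train_labels = [1, 0, 1, 0]

# TF-IDF features plus logistic regression: a purely statistical approach.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_docs, train_labels)

# The "output" the Act is concerned with: predictions for unreviewed documents.
new_docs = ["revised merger terms for counsel", "reminder: bring snacks friday"]
print(model.predict(new_docs))        # e.g. [1 0] (toy data, so results may vary)
print(model.predict_proba(new_docs))  # estimated responsiveness probabilities
```

If that reading holds, plenty of everyday legal technology would sit inside the definition alongside the deep learning systems people usually picture when they hear “AI.”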

There’s no doubt that many eDiscovery tools would fall under this definition, as would other technologies used by most global corporations outside the legal department. Those tools would then be subject to compliance standards and risk management policies, and could become discoverable data should litigation arise under these proposed laws.

Of course, at this point, all of this is “what ifs” and conjecture. But there is no denying the trend toward increased regulation over data and technology, and as I stated above, the EU seems to be at the forefront of that regulation. Even if similar regulations aren’t created in the US, anyone doing business in Europe needs to be aware of the changing legislation over technologies. It also remains to be seen whether fines will be significant enough to ensure compliance, a question often raised about the GDPR as well.

Regardless, AI legislation is something to watch. It’s not quite Science Fiction, but more and more we are living in a world where laws are being created to govern machines.

So, what do you think? Will the proposed Artificial Intelligence Act lead to more compliance around artificial intelligence? Please share any comments you might have, or let me know if you’d like to hear more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.
