Is your use of AI violating the law? A new article discusses the current legal landscape that companies need to understand to minimize potential liability.
The article (Is Your Use of AI Violating the Law? An Overview of the Current Legal Landscape, available here) – published in the New York University Journal of Legislation & Public Policy and written by Miriam Vogel, Michael Chertoff, Jim Wiley, and Rebecca Kahn – provides a detailed overview of existing and emerging law, both in the US and around the world.
As noted in the Introduction, “all AI actors, including those building, buying, licensing, and deploying these systems, must consider their potential legal liability when engaging with AI systems. In short, we all have a role to play in ensuring AI is safe and its benefits are shared; this Article aims to be a resource in support of this critical end goal.”
To that end, the 98-page article discusses several risk areas where companies using AI could face legal liability, including:
- Consumer Protection: Companies may be liable for unfair, deceptive, or harmful practices involving AI. Agencies like the FTC are actively regulating AI under existing laws such as Section 5 of the FTC Act, targeting deceptive claims and unfair practices in AI products.
- Privacy: The use of AI in sensitive areas like healthcare, education, and law enforcement raises significant privacy concerns. AI systems that violate privacy laws, such as HIPAA or state-specific regulations, could expose companies to liability.
- Civil Rights and Discrimination: AI systems used in areas such as hiring, housing, and healthcare have been shown to perpetuate biases, leading to civil rights violations. Legal claims under statutes like the Fair Housing Act or the Equal Credit Opportunity Act are becoming more frequent, especially where AI leads to discriminatory outcomes.
- Intellectual Property: Companies using AI to generate or manipulate content face potential liability if their systems use copyrighted material without permission. This includes risks associated with training AI on proprietary datasets.
- Contracts: AI-induced breaches of contract or unsatisfactory AI performance are also legal risks. AI systems can malfunction or fail to meet performance criteria, leading to breach of contract claims.
- Criminal Justice: The use of AI in criminal justice systems, such as for risk assessments or predictive policing, raises liability risks, particularly if AI systems produce biased or inaccurate results that harm individuals.
None of these risk areas is surprising, but it's a useful collection of them, and it's important to keep in mind that existing laws can create legal liability even though the US doesn't yet have much legislation specific to the use of genAI.
Recommendations for managing these legal liabilities are also not surprising, including:
- Businesses and legal professionals becoming more “AI-literate” to understand the risks and regulatory landscape surrounding AI;
- Maintaining human authority and oversight in the design, deployment, and use of AI;
- Ensuring transparency and accountability with a company’s AI systems;
- Mitigating bias within AI systems by diversifying training data and continually auditing AI models for discriminatory patterns (one common audit check is sketched after this list); and
- Complying with existing laws, like consumer protection regulations, civil rights laws, and privacy standards, while being prepared for the new AI-specific laws that will come.
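On the bias-auditing recommendation, one widely used screen (borrowed from US employment-discrimination practice, not from the article itself) is the EEOC's "four-fifths rule": if the selection rate for any group is less than 80% of the rate for the most-selected group, the disparity may merit closer scrutiny. Below is a minimal Python sketch of that check; the function, data, and group labels are hypothetical and purely illustrative, a starting point for an audit rather than legal advice.

```python
from collections import defaultdict

def adverse_impact_ratio(outcomes, groups):
    """Selection rate per group and the adverse impact ratio
    (lowest rate / highest rate). outcomes: 1 = favorable decision
    (e.g., resume advanced), 0 = not; groups: protected-class labels."""
    counts = defaultdict(lambda: [0, 0])  # group -> [favorable, total]
    for outcome, group in zip(outcomes, groups):
        counts[group][0] += outcome
        counts[group][1] += 1
    rates = {g: fav / total for g, (fav, total) in counts.items()}
    # Note: assumes at least one group has a nonzero selection rate.
    return rates, min(rates.values()) / max(rates.values())

# Hypothetical outcomes from an AI hiring screen for two applicant groups.
outcomes = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]
groups = ["A"] * 5 + ["B"] * 5
rates, ratio = adverse_impact_ratio(outcomes, groups)
print(rates)   # {'A': 0.8, 'B': 0.2}
print(ratio)   # 0.25 -- well below the 0.8 ("four-fifths") guideline
```

In practice, a check like this would run continually on production decisions and feed a documented review process, which also supports the human-oversight and transparency recommendations above.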
The article also discusses global perspectives on AI regulation, covering the regulatory approaches of different countries and international collaborations: from the European Union and its Artificial Intelligence Act, to Brazil (which combines the EU's risk-based approach to AI regulation with a rights-based approach), to Canada's Artificial Intelligence and Data Act (AIDA), and more.
As the authors note in the Conclusion: “All of us have a role to play in ensuring that AI is trustworthy, safe, and creates more opportunity.” While none of these findings should be surprising, they bear repeating so long as we continue to see plenty of misuses of AI and generative AI. A good craftsperson never blames their tools!
So, what do you think? Is your use of AI violating the law? An even better question is: would you know if it were? Please share any comments you might have, or let me know if you’d like to hear more about a particular topic.
Image created using GPT-4’s Image Creator Powered by DALL-E, with the prompt “robot looking at a picture of a landscape on a computer monitor.”
Disclaimer: The views represented herein are exclusively the views of the authors and speakers themselves, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.