Should outputs from Generative AI under the First Amendment be considered protected speech? One law professor says they shouldn’t.
As discussed in Rob Robinson’s excellent ComplexDiscovery blog (Generative AI and the First Amendment: Legal Experts Weigh in on the Need for Regulation as Election Nears, available here), the rapid progress of generative AI in producing content nearly indistinguishable from human-created work has raised pressing concerns about the safety and legality of that content, especially as we approach another presidential election year in the United States. The potential for AI to create and disseminate false information has become a particularly acute issue for policymakers, company executives, and citizens alike.
A series of legal discussions, spearheaded by individuals such as Peter Salib, assistant professor of law at the University of Houston Law Center, has unfolded around the legitimacy of AI content under current constitutional law and its potentially unpredictable impact on society. As AI technologies like ChatGPT become more expressive and speech-like, Salib warns of the pressing need for adequate regulation. In a forthcoming paper set to appear in the Washington University Law Review, Salib argues that outputs from large language models (LLMs) like ChatGPT should not be considered protected speech under the First Amendment – a perspective that challenges much of the current discourse.
Salib argues that if AI outputs are deemed protected, regulation could be severely hampered, allowing for the creation of content that could disrupt societal norms. He highlights the potential for AI systems to invent catastrophic weaponry, such as new chemical agents deadlier than the VX nerve agent, to assist in hacking critical infrastructure, and to engage in manipulation tactics that could lead to automated drone-based political assassinations. These prospects raise alarms about the far-reaching capabilities of generative AI technologies that could be used malevolently. Salib emphasizes that AI outputs are not human expressions, and thus may not warrant the constitutional protections typically afforded to human speech.
Rob’s coverage goes on to discuss Salib’s recommendations that propose varying levels of liability and control over AI outputs, potential federal content guidelines like the proposed No Fakes Act, and more. The battle is already raging and the stakes are high. Check out Rob’s article on Generative AI and the First Amendment here.
So, what do you think? Should outputs from generative AI models be considered protected speech under the First Amendment? Please share any comments you might have or if you’d like to know more about a particular topic.
Image created using GPT-4’s Image Creator Powered by DALL-E, using the term “robot law professor speaking to robot students in a classroom”.
Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.