Will requiring mandatory hyperlinks to curb fake AI legal cases become a thing? Perhaps we’re a step closer, but not everyone is on board.
According to Alexander Dumont of Project Counsel Media (Attorneys are split over requiring mandatory hyperlinks to curb fake AI legal cases, available here), a new legal requirement to hyperlink case law is drawing support from legal professionals as a counter to artificial intelligence-generated fake cases in court submissions. However, some aren’t sure it’s enough to solve the problem, and others worry it will be an added burden on lawyers.
Dumont notes that “the Illinois Supreme Court is among the first state court systems in the United States to implement formal, proactive AI policies to address ‘hallucinated’ or fake case law. Illinois will be the first court system to formalize ‘the Hyperlink Rule’.”
If you read my Kitchen Sink on Fridays, you know that the number of documented court filings containing fake legal cases or citations generated by AI has been rising dramatically, per Damien Charlotin’s database. It’s currently at 1,180 as of this afternoon. 😬
One note, however: Dumont says “1,100 documented instances of attorneys submitting court filings containing fake legal cases or citations generated by AI” (emphasis added), but they’re not all attorneys. As I discussed here at the beginning of March, only around 25% were US attorneys, while over 400 were US pro se parties. There are also some experts and judges in the mix, as well as non-US cases (though US cases make up the majority).
Regardless, Dumont notes that Oliver Roberts (side note: second mention of him today for me, what are the odds!), former co-head of the AI practice group at Holtzman Vogel Baran Torchinsky & Josefiak PLLC, authored an op-ed in the National Law Review in December 2025 in which he called for courts to require that all electronic filings include hyperlinked citations to “authoritative legal sources” to address the growing number of citations to nonexistent law.
Roberts writes that the “hyperlink rule” would require all cited judicial opinions, statutes and regulations to be hyperlinked to a reputable legal database, arguing that this verification prerequisite deters “careless reliance” on AI without prohibiting its use entirely. The procedural proposal calls for attorneys to provide hyperlinks and affirm the existence of the cases. There would be some exceptions, including for pro se litigants (which means the rule would only solve a portion of the problem, as noted above).
In any event, Dumont reports that the reaction to Roberts’ op-ed has been “strong and positive, with several judges expressing interest in the concept, and several judges saying they will watch the Illinois court system implementation of the rule for guidance.” He also reports that Charlotin supports the proposed hyperlink requirement, because some AI platforms do not provide citation links by default.
Of course, not every lawyer supports the proposed requirement. For example, Tyler Maulsby, deputy managing partner at the New York City-based boutique firm Frankfurt Kurnit Klein & Selz PC and a legal ethics specialist, notes that the challenge with requiring hyperlinks is that generative AI tools can fabricate a URL that appears to point to the cited case. Another issue is that most leading legal AI tools already include citation links, and that hasn’t stopped attorneys from submitting briefs with fake cases, because lawyers might neglect to verify the links. Not to mention that just because a link works doesn’t mean the cited information exists within the linked materials; AI tools often cite nonexistent content within real cases.
As Dumont points out, Maulsby knows what he’s talking about: he represented Steven Schwartz of Levidow, Levidow & Oberman PC, who submitted a filing with fake case citations in the now-infamous Mata v. Avianca case (close to three years ago now!).
Dumont hits the nail on the head with his penultimate paragraph here:
“So a counter argument is being made that courts shouldn’t try to curb AI-created briefs as a whole, because the real issue is the use of sloppily prepared and unchecked briefs. Many instances involving attorneys submitting fake briefs are similar to past mistakes and unauthorized conduct by lawyers. The new aspect in these instances is the lack of understanding of how the technology works. These are all problems that have been plaguing the legal industry for years. No court has had difficulty finding a rule under which to sanction the lawyer or to fashion a remedy. Attorneys who submitted briefs with fake cases were dealt sanctions that include big fines and temporary suspensions. Or so the argument goes.”
To me, that’s the point. There is no “Staples easy button” solution to this problem. It’s an education problem, not a technology problem, and that, in my opinion, is the real root of the issue.
So, what do you think? Will requiring mandatory hyperlinks fix the issue of fake AI legal cases? Please share any comments you might have or if you’d like to know more about a particular topic.
Image created using DALL-E 3, using the term “robot lawyer wearing a suit slipping on a small banana peel in a courtroom”.
Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.