States Are Requiring Lawyers to Verify AI Outputs. Will it Help?: Artificial Intelligence Trends

We’re starting to see a trend where states are requiring lawyers to verify AI outputs. Will it help eliminate hallucinated citations? Probably not.

Yesterday, Bob Ambrogi wrote that the California Bar has proposed a rule requiring lawyers to verify every AI output — as well as five other AI-focused ethics changes. As Bob notes: the California Bar is saying that, when using any technology – including AI – a lawyer “must independently review, verify, and exercise professional judgment regarding any output generated by the technology that is used in connection with representing a client.”

That language appears in a new comment to Rule 1.1 on competence proposed by the State Bar of California’s Standing Committee on Professional Responsibility and Conduct (COPRAC) as part of a package of AI-related amendments to six of the state’s Rules of Professional Conduct.

The proposed changes would, for the first time, write specific AI obligations into California’s rules. The changes span the rules on competence, client communication, confidentiality, candor toward tribunals, and supervision of both lawyers and other staff.

Bob details the proposed amendments in his post on his excellent LawSites blog, if you want to check them out.

California is the second state I’ve seen in the past few weeks to propose a requirement for verification of AI outputs. As discussed by Angela Delvecchio on Project Counsel Media (and covered by us here), Connecticut has also proposed a rule. In that state, the Rules Committee of the Superior Court proposed a rule requiring lawyers and pro se parties to “independently verify all citations, legal authorities or evidence produced by generative A.I.” amid a slate of practice book revisions the Connecticut Law Journal shared last month.

The Connecticut proposed rule is notably different. Because it was proposed by the Rules Committee of the Superior Court and not a bar association, it applies to both lawyers and pro se parties.

That distinction is important. Yesterday afternoon, I downloaded the latest list of cases from Damien Charlotin’s site tracking AI Hallucination Cases – the one for which I provide a weekly update on the Kitchen Sink – and analyzed which party(ies) were responsible for the mistake. Out of 1,387 cases in his spreadsheet (the site reported 1,397, so 10 cases appear to be missing from the spreadsheet), a lawyer was solely responsible in only 522 cases, while a pro se party was solely responsible in 825 cases. The rest were a mix of paralegals, experts, judges, and combinations of different party types.

So, any rule directed at just lawyers may reduce the occurrences somewhat, but by less than half – even if every lawyer complies with the rule.
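As a quick sanity check on that “less than half” claim, here is a minimal Python sketch using the figures cited above (the category labels are my own shorthand, not fields from Charlotin’s spreadsheet):

```python
# Figures cited above: 1,387 total cases in the spreadsheet,
# 522 with a lawyer solely responsible, 825 with a pro se party solely responsible.
counts = {"lawyer_only": 522, "pro_se_only": 825}
total = 1387
# The remainder: paralegals, experts, judges, and combinations of party types.
counts["other_mix"] = total - sum(counts.values())

# Share of all cases attributable to each category, as a percentage.
shares = {k: round(v / total * 100, 1) for k, v in counts.items()}
print(shares)

# A rule aimed only at lawyers addresses under 40% of the tracked cases.
assert shares["lawyer_only"] < 40
```

Even with perfect lawyer compliance, a lawyer-only verification rule leaves roughly 60% of these incidents untouched.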

And they won’t. As Judge Ralph Artigliere (ret.) noted in this terrific article on the EDRM blog (which we covered here), “Many AI-related failures in law are not failures of ignorance. They are failures of execution. The lawyers involved generally know, at least in principle, that AI output must be verified and that confidential information and privilege must be protected. What fails is the disciplined application of those requirements in the press of actual work: under deadlines, under workload pressure, and amid the strong temptation to value speed and convenience over verification.” Absolutely right.

Drawing a parallel to aviation, where “Commercial airline operations involve mandatory protocols, institutional oversight, and highly standardized procedures,” Judge Artigliere shared an observation with a colleague: “The durable answer is to move the guardrails into the workflow itself, so that verification, confidentiality checks, and bias flags surface at the point of action rather than relying on memory alone.”

That’s a terrific way of thinking about it and approaching it – with the lawyers. But lawyers account for less than 40% of the problem. That’s not to say the rules – and the guardrails built into workflows – aren’t important: they are. But they won’t solve the entire problem. Not even half of it.

Pro se parties are becoming even more of a “wild west” in litigation than they already were. Public LLMs like ChatGPT, Claude, and Google Gemini make them think that creating a litigation filing is easy. Some corporate legal professionals have told me that pro se filings are up considerably and that this is the reason. “Give the AI chatbot the right set of instructions and it will pop out a document that is ready to file with the court” is what I suspect many of them are thinking.

It won’t. If you’re reading this far (and read this blog or others regularly), you already know that. They don’t.

That makes me think the Connecticut approach – on steroids and applied by the courts to lawyers and pro se parties – is the best way to truly make a difference. Somehow, courts need to get across the message to everyone who submits filings in a case that they are required to verify AI outputs. Maybe require them to read a statement and acknowledge in writing that they have checked all AI outputs before the filing will be accepted. Only then will it truly begin to have a significant impact. States are requiring lawyers to verify AI outputs. But that’s not enough.

So, what do you think? Do you think the fact that states are requiring lawyers to verify AI outputs will make a difference? Please share any comments you might have or if you’d like to know more about a particular topic.

Image created using DALL-E 3, using the term “robot lawyer wearing a suit going through a checklist”.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

