
Insurer Sent Law Firms a ChatGPT Warning: Artificial Intelligence Trends

Alas, we all could see this coming. See what I did there? You will. 😉 An insurer sent law firms a warning about using ChatGPT within their practices.

According to Legaltech® News (An Insurer Sent Law Firms a ChatGPT Warning. It Likely Won’t Be the Last, written by Isha Marathe and available here), Attorneys’ Liability Assurance Society Ltd. (ALAS), a mutual insurance carrier that caters to law firms, sent its policyholders a newsletter-style bulletin titled “ChatGPT—Not Ready for Prime Time.” As the title suggests, the message warns attorneys that the GPT-powered chatbot comes with significant legal risks, from potential data privacy violations and concerns around “hallucinations” to the burden of disclosure that may come with its use.

“We do recognize that this landscape in terms of AI is changing, and that it’s important for attorneys to understand new technologies as part of the core competencies under rule 1.1 of the [ABA] model rules,” Mary Beth Robinson, Senior Vice President of Loss Prevention at ALAS, said. “We are not taking a ‘no, never’ approach to innovation—but we are concerned that [law] firms could get carried away, and [forget] that this technology is not a substitute for critical thinking.”


The law firms ALAS covers have inquired about the use of ChatGPT and generative AI when it comes to basic legal work, like drafting contracts and initial parts of the legal workflow, Robinson said. But as the bulletin the carrier sent out states, even with human review at later stages of drafting, using ChatGPT can create multiple risks for law firms if they are not careful.

For example, the bulletin notes several ethical rules that could be violated by use of the technology. These include: ABA model rule 1.1, which requires attorneys to provide competent representation to clients; rule 1.4, which requires lawyers to be transparent and consult their clients “about the means by which the client’s objectives are to be accomplished”; rule 8.4, which states it is professional misconduct to be dishonest about a legal process with a client; and rule 1.6, which requires attorneys to be upfront about the circulation of a client’s information.

Robinson said the concerns around ChatGPT use don’t just extend to violations of the ABA rules, but may also lead to further scrutiny by cybersecurity insurers.

She noted that “if there is a cyber intrusion [into OpenAI or ChatGPT], not only will that data potentially be lost to threat actors, but they could conceivably also obtain the firm’s searches, [gaining] access into the mind of a lawyer and the arguments they might be raising.”


So, what do you think? Are you surprised that an insurer sent law firms a warning about using ChatGPT? Or are you surprised that it took this long? Please share any comments you might have, or let me know if you’d like to hear more about a particular topic.

Image Copyright © NBC Universal. See what I did there again? 😀

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

