This was a notable point from an LTN article yesterday: some judges may have trouble distinguishing between AI and generative AI.
This was one of four points in a Legaltech® News article (4 Generative AI Issues That Are Likely Keeping Judges Up At Night, written by Isha Marathe and available here), which stemmed from coverage of a judges’ panel hosted by the Practising Law Institute titled “Generative AI and Judges: How Are They Getting Along?” The panel was moderated by Ron Hedges, a retired U.S. magistrate judge for the District of New Jersey and the principal at (the aptly named) Ronald J. Hedges LLC.
Judge Hedges was part of a panel that included Hon. Bernice Bouie Donald (Ret.), U.S. Court of Appeals for the Sixth Circuit; Hon. James C. Francis IV (Ret.), U.S. Magistrate Judge, Southern District of New York; and Kenneth J. Withers from The Sedona Conference.
The fact that some judges may have trouble distinguishing between AI and generative AI may not technically be keeping them up at night, but maybe it should keep us up at night?
The panel noted that some of the judicial standing orders imposing requirements and limitations on generative AI use in court filings (like this one published in a Texas court) might do more harm than good.
For example, Judge Francis noted that Judge Michael Baylson, serving in the U.S. District Court for the Eastern District of Pennsylvania, “probably doesn’t mean quite what he is saying,” in his standing order.
Specifically, Francis was referring to the portion of the order that reads: “If any attorney for a party, or a pro se party, has used Artificial Intelligence (“AI”)…[they] MUST, in a clear and plain factual statement, disclose that AI has been used in any way in the preparation of the filing…”
Using the term “AI” instead of “generative AI” here is rather broad, Francis noted, since AI has been integrated into myriad legal technology that most attorneys use.
Withers added that everyone uses AI, even if they don’t realize it, in all sorts of different applications. Does this mean that if you’re using a program like Grammarly or a translation program, which no one really thought was objectionable before, you suddenly have to certify that this has been done?
And that’s the issue. AI is ingrained in so many apps and platforms these days that many of us are using it without even realizing it.
Here’s a single platform example: Do you use Microsoft Word to create documents? If so, do you take advantage of text predictions where it suggests the completion of a word or phrase based on what you’ve typed so far? Who doesn’t, right? That’s AI, and we’re using it with text messages as well.
Even if you don’t use Grammarly, do you use Word’s spelling and grammar check? That’s AI too.
And if you like Word’s Dictate feature for “speech to text” (which I’m not personally a fan of because it’s often way off, but that’s beside the point), that’s AI too. Those are three uses of AI within a single platform that many (if not most) of us use. I used the first two considerably to write this post. 😉
Some judges may have trouble distinguishing between AI and generative AI. In other words, they’re just like most everyone else. It’s not just attorneys like this one who need better AI education, it’s judges too.
So, what do you think? How many uses of AI can you think of in your daily activities? Please share any comments you might have or if you’d like to know more about a particular topic.
Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.