Oh, Canada! That’s what LexisNexis is saying after a Canadian law professor gave Lexis+ AI a failing grade for use in legal research.
The review was discussed in National Magazine (Law professor gives Lexis+ AI a failing grade, written by Canadian law professor Benjamin Perrin and available here). As Perrin notes: Lexis+ AI is a new “generative AI-powered legal assistant” that LexisNexis says is “grounded in our extensive repository of accurate and exclusive Canadian legal content.” The company describes Lexis+ AI as “an efficient starting point for legal research” and claims it will “deliver results” that are “always backed by verifiable, citable authority.” One U.S. review said it is “like having a legal research wizard and a document-drafting ninja all in one.”
Perrin also notes that the fine print doesn’t claim to eliminate hallucinations, only to reduce the risk of them, and that anytime a case is mentioned with a hyperlink, the link goes to the actual case. Further, because lawyers can’t upload confidential client information to unsecured platforms, their options are limited; Lexis+ AI promises that level of security as well. It also “shows its work” with hyperlinks to “content supporting AI-generated response.” He stated: “With these important features, I was eager to give it a try.”
What did he find? Here is what he said:
“After several rounds of testing, I found Lexis+ AI disappointing. I encountered non-existent legislation referenced in the results (without a hyperlink), headnotes reproduced verbatim and presented as ‘case summaries,’ and responses with significant legal inaccuracies.
These issues are familiar in some free, general-purpose generative AI tools, but they are more concerning when found in a product marketed specifically for legal professionals and imminently to be offered to law students who are still learning the law.”
Perrin went on to provide several examples of issues with Lexis+ AI, which you can read in the article. He sums up his findings with this statement: “Given its current limitations, I cannot recommend Lexis+ AI to my law students, and I would not use it for my own legal research at this time.”
As noted in this article by Ella Sherman in Legaltech® News, LexisNexis chief product officer Jeff Pfeifer said in a statement that some of the shortcomings Perrin wrote about were due to capabilities or features that had not been developed yet for Lexis+ AI:
“Professor Perrin requested a summary for a case Lexis+ AI had already summarized through editorial review,” Pfeifer said. “In these instances, Lexis+ AI defaults to the editorial summary and does not currently support multi-turn requests.”
Pfeifer added that Lexis+ AI is also not able to execute motion drafting capabilities as Perrin requested in his testing. He said LexisNexis will improve its transparency in its messaging for customers and will continue to review Lexis+ AI responses.
Sherman’s article also reminds us of the Stanford study from earlier this year, in which Stanford researchers studied the accuracy of AI tools such as Thomson Reuters’ Westlaw AI-Assisted Research tool as well as Lexis+ AI. In an updated study, the researchers found that Lexis+ AI’s answers were accurate 65% of the time, while Westlaw AI-Assisted Research was accurate 42% of the time. Of course, LexisNexis and Thomson Reuters disputed the methodology of the original Stanford study, saying its results conflicted with their own internal testing.
Two observations: 1) We haven’t seen (to my knowledge) a fake case citation story yet related to either Lexis+ AI or Westlaw AI-Assisted Research – when we do, it will be interesting to see how the vendor responds. 2) Any lawyer who trusts the AI model without verifying the results – whether it’s a general LLM like ChatGPT or either of these two products – should expect sanctions. I don’t think you need a Canadian law professor to tell you that.
Hat tip to Tom O’Connor for the heads up on this story!
So, what do you think? Does your organization use either Lexis+ AI or Westlaw AI-Assisted Research? Please share any comments you might have or if you’d like to know more about a particular topic.
Image created using GPT-4o’s Image Creator Powered by DALL-E, using the term “robot lawyer slipping on a small banana peel”.
Disclaimer: The views represented herein are exclusively the views of the authors and speakers themselves, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.
Discover more from eDiscovery Today by Doug Austin