The Supreme Court of Canada Is Considering Instructions to Lawyers About AI: Artificial Intelligence Trends

Several US courts (including Texas, Illinois and Pennsylvania) have standing orders regarding the use of AI. Apparently, the Supreme Court of Canada is also considering instructions to lawyers about AI use.

According to Law360™ Canada (“SCC considers possible practice direction on use of AI in top court as more trial courts weigh in,” written by Cristin Schmitz and available here), the Supreme Court of Canada is among the courts mulling whether to issue a practice direction to counsel and litigants on the use of artificial intelligence (AI) tools in the preparation of Supreme Court materials, and what such a direction might say, after two superior trial courts recently required disclosure to the bench of AI used in court submissions.

“This is indeed a very important — and emerging — topic,” Stéphanie Bachand, the top court’s executive legal officer and chief of staff to Chief Justice of Canada Richard Wagner, said in response to questions from Law360 Canada. “The Supreme Court of Canada is currently considering the matter, in order to decide whether to move forward with a directive or policy on the use of AI.”

Bachand confirmed that among the issues the Supreme Court of Canada is considering is what policy might apply internally to govern AI use by the apex court’s judges and staff.

The first Canadian courts known to publicly address AI use by litigants and the bar were the Manitoba Court of King’s Bench, which broke new ground nationally by issuing a directive on June 23 mandating disclosure to the court of AI use in court materials, followed three days later by the Yukon Supreme Court, which issued a somewhat more broadly worded practice direction titled “Use of Artificial Intelligence Tools.”

These cutting-edge legal developments could be a harbinger of an emerging trend in Canada toward judicial regulation of AI use in court submissions by counsel and litigants.

The move is even more significant for the bar because law societies have yet to issue specific guidance or express rules about lawyers’ use of AI-assisted submissions and documents in court, arguably leaving a professional regulatory vacuum, or at least a gap, for them to fill.

“Artificial intelligence is rapidly evolving,” Yukon Supreme Court Justice Suzanne Duncan wrote in a two-paragraph practice direction June 26. “Cases in other jurisdictions have arisen where it has been used for legal research or submissions in court,” she said. “There are legitimate concerns about the reliability and accuracy of the information generated from the use of artificial intelligence. As a result if any counsel or party relies on artificial intelligence (such as ChatGPT or any other artificial intelligence platform) for their legal research or submissions in any matter in any form before the court, they must advise the court of the tool used and for what purpose.”

In a one-paragraph directive, titled “Use of artificial intelligence in court submissions,” Manitoba Court of King’s Bench Chief Justice Glenn Joyal wrote, in part: “While it is impossible at this time to completely and accurately predict how artificial intelligence may develop or how to exactly define the responsible use of artificial intelligence in court cases, there are legitimate concerns about the reliability and accuracy of the information generated from the use of artificial intelligence. To address these concerns, when artificial intelligence has been used in the preparation of materials filed with the court, the materials must indicate how artificial intelligence was used.”

As litigator Eugene Meehan of Ottawa’s Supreme Advocacy LLP, a leading Supreme Court of Canada agent and the top court’s former executive legal officer, pointed out, some Supreme Advocacy LLP lawyers use Grammarly, which he called “a spellcheck on steroids. Grammarly employs AI to improve the tool’s suggestions on making one’s writing more effective. Do we have to advise the court that this type of AI tool was used? Spellcheck, that’s AI too.”

“These AI practice directives appear premature and overly general to be of great assistance in terms of actually providing direction to practitioners in the field/metaverse,” he suggested. “Some degree of AI or machine-learning may already be present in a range of commonly used tools, and it is unclear when and what needs to be disclosed.”

So, what do you think? Are courts overreacting here? Or are these directives on AI use justified? Please share any comments you might have, or let me know if you’d like to learn more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the authors and speakers themselves, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.
