We discussed this article from Stephanie Wilkins during last week’s live Legaltech Week roundtable discussion. The MIT Task Force has proposed an early version of seven draft principles establishing a lawyer’s duties when using AI for legal work.
Stephanie’s article (MIT Task Force Proposes Principles for the Responsible Use of Generative AI in Legal, available here) discusses how law.MIT.edu has assembled a Task Force “to develop principles and guidelines on ensuring factual accuracy, accurate sources, valid legal reasoning, alignment with professional ethics, due diligence, and responsible use of Generative AI for law and legal processes.”
The Task Force “believes [generative AI] provides powerfully useful capabilities for law and law practice and, at the same time, requires some informed caution for its use in practice.” Recently, the Task Force publicly released an early version of seven draft principles that establish a lawyer’s duties when using AI for legal work:
- Duty of Confidentiality to the client in all usage of AI applications;
- Duty of Fiduciary Care to the client in all usage of AI applications;
- Duty of Client Notice and Consent* to the client in all usage of AI applications;
- Duty of Competence in the usage and understanding of AI applications;
- Duty of Fiduciary Loyalty to the client in all usage of AI applications;
- Duty of Regulatory Compliance and respect for the rights of third parties, applicable to the usage of AI applications in your jurisdiction(s);
- Duty of Accountability and Supervision to maintain human oversight over all usage and outputs of AI applications.
*Consent may not always be required—refer to existing best practices for guidance. We also seek feedback on whether or when consent may be advisable or required.
The document also lays out several examples of how the principles might be applied in real-life scenarios, illustrating practices both consistent and inconsistent with each principle. The Task Force is taking an iterative approach to finalizing the principles and is actively soliciting feedback from the industry.
On Wednesday, August 16, 2023, the Task Force is holding an open forum, at which it invites those interested in the governance of generative AI and its use in the legal profession to share feedback on the currently proposed principles and engage in an open discussion about the larger issue of responsible AI use.
The forum will take place on Zoom on August 16 at 12:00 p.m. PT / 3:00 p.m. ET. Anyone who would like to participate can fill out the Task Force’s feedback form to receive an invitation. I did. 🙂
Going forward, the Task Force is also expanding to be a Joint Task Force by law.MIT.edu and CodeX, The Stanford Center for Legal Informatics.
Speaking of last week’s live Legaltech Week roundtable discussion, it was great to sit in as a guest panelist along with Bob Ambrogi, Joe Patrice, Jean O’Grady, Jeff Brandt and Stephanie! Look for a post on our discussion (and the articles we discussed) when the recording is posted to their YouTube channel!
So, what do you think? Will the principles proposed by the MIT Task Force gain traction? Please share any comments you might have, or let me know if you’d like to hear more about a particular topic.
Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.