How secure is your AI system? Can you trust it with confidential information? This article discusses the questions law firms should ask.
Melissa “Rogo” Rogozinski’s article (How to Evaluate AI Security in Legal Tech: A Guide for Legal Professionals, available here on her RPC Strategies site) discusses this critical question for law firms and corporate legal departments adopting AI technologies to streamline their practices: How secure are these AI systems?
The potential of AI in the legal field is immense. Law firms are deploying AI technologies for everything from analyzing large volumes of legal documents to automating routine tasks such as contract management and compliance monitoring.
But with these advancements comes a new wave of challenges. AI models, particularly large language models (LLMs), process data in ways that can be difficult to understand, raising concerns about privacy, data retention, and model bias. When AI handles sensitive information, such as client communications or privileged data, the stakes are even higher: a model that inadvertently exposes sensitive data or is vulnerable to external attacks could lead to breaches of confidentiality and legal liability.
Governments and bar associations around the world are adopting regulations requiring AI users in the legal community to exercise due diligence in understanding how AI uses and stores confidential data, personally identifiable information, and other protected material.
To protect your clients’ data from unauthorized disclosure and your firm from liability or disciplinary actions, there are critical questions you should be asking about the AI systems and products your firm is using or considering onboarding.
Rogo identifies several questions in her article that law firms should be asking, covering categories related to:
- Model Type and Architecture, such as which model is being used and where it is hosted.
- Data Usage and Training, such as whether your data will be used to train the provider’s model and, if so, how.
- Data Retention, Privacy, and Security, such as how the provider protects and retains your data.
- Agreements and Compliance, including whether the provider offers indemnification for copyright violations arising from use of its product.
- Use Cases & Other Considerations, such as usage restrictions and how hallucinations and bias are addressed.
The detailed questions are contained in a brief AI Security Questionnaire, which you can download and complete. As Rogo notes, Richard Robbins, Director of Applied AI at Reed Smith, contributed to the research for the article and the accompanying AI Security Questionnaire.
How secure is your AI system? With a questionnaire like this, you can do a lot to answer that question!
So, what do you think? I’ll ask it again: How secure is your AI system? 😉 Please share any comments you might have, or let me know if you’d like to learn more about a particular topic.
Image created using GPT-4o’s Image Creator Powered by DALL-E, using the prompt “robot using a computer with a big lock on the screen”.
Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.