Generative AI Cybersecurity Policy

Generative AI Cybersecurity Policy & Tips for Writing One: Cybersecurity Best Practices

Does your organization have a generative AI cybersecurity policy? If not, here’s an article with some tips and considerations for writing one.

In this article by Trend Micro (How to Write a Generative AI Cybersecurity Policy, written by Greg Young and available here), the author discusses how CISOs urgently need practical guidance on establishing AI security practices to defend their organizations as they play catch-up with deployments and plans. With the right combination of cybersecurity policy and advanced tools, enterprises can meet their goals today and lay a foundation for dealing with the evolving complexities of AI going forward.

Among the information provided in the article are four key AI security policy considerations:

  1. Prohibit sharing sensitive or private information with public AI platforms or third-party solutions outside the control of the enterprise. Seems obvious, but I’ve seen plenty of examples where people in organizations fail to consider that. It only takes one person who doesn’t get it to compromise sensitive data.
  2. Don’t “cross the streams”. That means clear rules of separation for different kinds of data. The author also states: “This may require establishing a classification scheme for corporate data if one doesn’t already exist”, which is something your organization should have anyway.
  3. Validate or fact-check any information generated by an AI platform to confirm it is true and accurate. Again, obvious, but we have examples here, here, here, here and here where it hasn’t happened. Remember, these models don’t actually know anything – they merely predict correct responses based on their training.
  4. Adopt—and adapt—a zero trust posture. As the author notes, “Zero trust is a robust way of managing the risks associated with user, device, and application access to enterprise IT resources and data. The concept has gained traction as organizations have scrambled to deal with the dissolution of traditional enterprise network boundaries.” In my opinion, it’s a must-have for organizations today, regardless of their posture on generative AI.
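To make the first consideration concrete, here’s a minimal sketch of what a pre-send screen for prompts bound for a public AI platform might look like. This is purely illustrative: the pattern names, the patterns themselves, and the `screen_prompt` function are my own hypothetical examples, not anything from the Trend Micro article, and a real deployment would rely on a proper DLP engine and the organization’s own data classification scheme rather than a handful of regexes.

```python
import re

# Hypothetical patterns for illustration only; a real policy control
# would use the enterprise's DLP tooling and classification scheme.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt.

    An empty list means the prompt passed the screen and may be sent
    to an external AI platform; otherwise it should be blocked or
    escalated for review per the organization's policy.
    """
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

# A prompt containing an SSN-like string is flagged...
flagged = screen_prompt("Summarize the case for client 123-45-6789.")
# ...while a clean prompt passes.
clean = screen_prompt("Summarize the quarterly security report.")
```

The point of the sketch is where the check sits, not how clever the patterns are: the screen runs before any data leaves the enterprise boundary, which is exactly what the policy item prohibits skipping.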

The article also discusses considerations for choosing the right tools. My favorite quote: “I’ve argued in chats with ChatGPT about facts concerning network security after getting incorrect information, and forcing it to disclose the correct answer it seemed to know all along.” If you’re going to write a generative AI cybersecurity policy, you’re going to need human expertise to get it done.

So, what do you think? Does your organization have a generative AI cybersecurity policy? It should! Please share any comments you might have or if you’d like to know more about a particular topic.

Image created using GPT-4o’s Image Creator Powered by DALL-E, using the term “robot writing a security policy on a computer”.

Disclaimer: The views represented herein are exclusively the views of the authors and speakers themselves, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

