The UK and US, along with international partners from 16 other countries, have released new guidelines for secure AI system development.
Published Monday, the new Guidelines for Secure AI System Development (available online and as a 20-page PDF) are touted as the first global guidelines to ensure the secure development of AI technology. They have been developed by the UK’s National Cyber Security Centre (NCSC), a part of GCHQ, and the US’s Cybersecurity and Infrastructure Security Agency (CISA) in cooperation with industry experts and 21 other international agencies and ministries from across the world. The countries involved are Australia, Canada, Chile, the Czech Republic, Estonia, France, Germany, Israel, Italy, Japan, New Zealand, Nigeria, Norway, Poland, South Korea and Singapore, in addition to the UK and US.
The new UK-led guidelines are described as the first of their kind to be agreed globally. They are designed to help developers of any systems that use AI make informed cyber security decisions at every stage of the development process – whether those systems have been created from scratch or built on top of tools and services provided by others. The guidelines are broken down into four key areas within the AI system development life cycle:
- Secure design: This section contains guidelines that apply to the design stage of the AI system development life cycle. It covers understanding risks and threat modelling, as well as specific topics and trade-offs to consider on system and model design.
- Secure development: This section contains guidelines that apply to the development stage of the AI system development life cycle, including supply chain security, documentation, and asset and technical debt management.
- Secure deployment: This section contains guidelines that apply to the deployment stage of the AI system development life cycle, including protecting infrastructure and models from compromise, threat or loss, developing incident management processes, and responsible release.
- Secure operation and maintenance: This section contains guidelines that apply to the secure operation and maintenance stage of the AI system development life cycle. It provides guidelines on actions particularly relevant once a system has been deployed, including logging and monitoring, update management and information sharing.
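The guidelines themselves stay at the "what" level, but to make one of these areas concrete: a common supply chain security practice from the secure development stage is verifying a downloaded model artifact against a known-good hash before loading it. The sketch below is my own illustration, not from the guidelines; the file name and pinned hash are hypothetical stand-ins.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model_artifact(path: Path, expected_sha256: str) -> None:
    """Refuse to use a model artifact whose hash doesn't match the pinned value."""
    actual = sha256_of(path)
    if actual != expected_sha256:
        raise ValueError(
            f"Model artifact {path} failed integrity check: "
            f"expected {expected_sha256}, got {actual}"
        )

# Demo with a stand-in "model" file; a real pipeline would pin the hash
# published by the model's provider ahead of time.
artifact = Path("model.bin")
artifact.write_bytes(b"pretend model weights")
pinned = sha256_of(artifact)  # in practice, a value recorded in advance

verify_model_artifact(artifact, pinned)        # matches: no exception
try:
    verify_model_artifact(artifact, "0" * 64)  # wrong pin: rejected
except ValueError as e:
    print("rejected:", e)
```

A check like this is only one small piece of supply chain security, of course – the guidelines also cover vetting suppliers and dependencies more broadly.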
The guidelines are designed to help developers ensure that cyber security is both an essential pre-condition of AI system safety and integral to the development process from the outset and throughout, known as a ‘secure by design’ approach.
The new Guidelines for Secure AI System Development are a good high-level list of guidelines to follow in each of the four key areas discussed above. Of course, as a list of high-level guidelines, the guide focuses more on what to do, not how to do it. Probably difficult to get 18 countries – or even one country – to agree on that! 😉
So, what do you think? Do you think the new guidelines for secure AI system development will have an impact on the security of AI systems? Please share any comments you might have or if you’d like to know more about a particular topic.
Image created using Microsoft Bing’s Image Creator Powered by DALL-E, using the term “robot programmers”.
Disclaimer: The views represented herein are exclusively the views of the authors and speakers themselves, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.
Why oh why do you promote this bullshit. Pure theatre. The agreement is non-binding, with no teeth. “Recommendations”. Just like that bullshit AI “pause” letter last March that nobody followed. Why dance to their tune 🤦‍♂️
Gee, Greg, tell me what you really think! 😀
Seriously though, I agree that it is non-binding with no teeth and I said it doesn’t get into the “how” to do it. It’s the equivalent of saying “we should all do something about global warming”, which nobody ever does. 🙁
Nonetheless, I like to cover these because people should know what agencies are (or are not) doing about AI issues. My hope is to look back in a few months and see where agreements like these have gone (even if nowhere). I’m not “dancing to their tune” (I can’t dance), but I do think I should generate awareness of activities good and bad and let people make up their own minds.
I welcome your editorial on this topic! Publish it and I’ll do a follow-up! 🙂
Initiating new guidelines for the development of secure AI systems is a step in the right direction toward addressing the escalating concerns surrounding AI. It is critical to prioritize the secure and ethical development of AI as it progresses. These guidelines provide organizations and developers with a timely framework for navigating the dynamic landscape of artificial intelligence with a focus on responsible use and security.