Feb 25, 2025 Information hub

AI Cyber Security: UK’s 2025 Code of Practice Unveiled

Artificial Intelligence (AI) is revolutionizing industries, but its rapid evolution brings unique cyber security challenges. On January 31, 2025, the UK Government published the Code of Practice for the Cyber Security of AI, a voluntary framework designed to bolster AI cyber security across the entire lifecycle of AI systems. Available on GOV.UK, this pioneering document aims to set a global standard through the European Telecommunications Standards Institute (ETSI). As of February 25, 2025, this blog explores how the Code addresses AI security, integrates recent developments, and provides actionable guidance for stakeholders to protect AI from threats like data poisoning and model inversion.

This blog dives into the Code’s thirteen principles, examines their practical implications, and highlights current news, research, and statistics underscoring the urgency of AI cyber security. Whether you’re a developer, system operator, or business leader, understanding AI security is vital to safely harnessing AI’s potential in an increasingly digital world.


Why AI Cyber Security is Critical in 2025

AI systems are no longer experimental—they power everything from healthcare diagnostics to financial forecasting. However, their reliance on vast datasets and complex algorithms makes them vulnerable to unique cyber threats. The Code of Practice emphasizes that AI cyber security differs from traditional software security due to risks like data poisoning, model obfuscation, and indirect prompt injections. As AI adoption surges, so does the need for robust AI security measures to protect these systems and the organizations that depend on them.

The Growing Threat Landscape

The stakes for AI cyber security are higher than ever. A 2024 IBM X-Force report found that AI-related cyber incidents rose by 30% year-over-year, with data poisoning accounting for 15% of breaches (IBM X-Force). Research from MITRE’s ATLAS framework highlights the rise of AI-specific attacks, such as evasion and membership inference, exploiting AI’s data-driven nature (MITRE ATLAS).

In January 2025, a healthcare provider faced a stark reminder of AI security risks when an AI chatbot leaked patient data due to a prompt injection attack (Healthcare IT News). Such incidents underscore the need for a dedicated AI security framework like the UK’s Code.

The UK’s Global Leadership in AI Cyber Security

Published by the Department for Science, Innovation and Technology (DSIT), the Code of Practice is a proactive step to address AI cyber security challenges. Developed in collaboration with the NCSC and informed by global partners, it outlines baseline security requirements across five AI lifecycle phases: secure design, development, deployment, maintenance, and end-of-life. Its voluntary nature encourages innovation while setting a foundation for an ETSI global standard (TS 104 223), positioning the UK as a leader in AI security.


Core Principles of the UK’s AI Cyber Security Code

The Code of Practice provides thirteen principles to enhance AI cyber security, each with specific provisions for stakeholders like developers, system operators, and data custodians. Below, we explore key principles and their relevance to AI security in 2025.

Principle 1: Raise Awareness of AI Security Threats

Awareness is the bedrock of AI cyber security. The Code mandates that organizations educate staff about AI-specific threats, such as data poisoning and prompt injections. For developers, this means understanding how adversaries might manipulate inputs to compromise AI systems. The NCSC’s Ollie Whitehouse noted, “The Code enhances resilience against malicious attacks,” emphasizing awareness as a critical AI security step (GOV.UK).

A 2024 OWASP study found that 60% of AI vulnerabilities stem from human error due to inadequate training (OWASP AI Exchange). The Code’s focus on tailored awareness programs addresses this gap, strengthening AI security.

Principle 2: Design AI Systems for Security

Secure design is pivotal to AI cyber security. The Code requires developers to assess business needs and potential risks before building AI systems, ensuring security is baked in from the start. For instance, a system operator deploying an AI chatbot must design it to withstand adversarial inputs, minimizing AI security risks like excessive agency.

Research from NIST in 2024 revealed that 45% of AI systems lacked secure design, leading to exploits within six months (NIST). The Code’s emphasis on risk assessment aligns with this, promoting proactive AI security.

Principle 3: Evaluate Threats and Manage Risks

Effective AI cyber security demands ongoing threat evaluation. The Code calls for regular threat modeling to address AI-specific risks like model inversion. System operators must analyze these threats and implement controls, ensuring AI security adapts to new attack vectors.

In February 2025, a tech firm lost millions to a model extraction attack, highlighting the need for continuous AI security evaluation (TechCrunch). The Code’s approach helps mitigate such risks.

Principle 6: Secure Your Infrastructure

Infrastructure protection is a cornerstone of AI cyber security. The Code recommends measures like role-based access controls (RBAC) and secure APIs to safeguard AI systems. For example, developers must ensure training data is isolated in dedicated environments, enhancing AI security.

A 2024 report from Singapore’s Cyber Security Agency noted that 25% of AI breaches involved unsecured APIs (CSA Singapore). The Code’s infrastructure focus tackles this pressing AI security issue.

Principle 9: Conduct Appropriate Testing

Testing is non-negotiable for AI security. The Code mandates security assessments, including penetration testing, before deployment.

Developers must ensure AI systems are robust against attacks, a principle validated by DEF CON 2024, where red teaming exposed generative AI weaknesses in hours (AI Village DEF CON).


Applying AI Cyber Security Principles in Practice

The Code’s principles come to life through practical application across the AI lifecycle. Here’s how they enhance AI cyber security in real-world contexts.

Secure Design Phase

In the design phase, AI cyber security starts with assessing risks.

A developer building an AI fraud detection system must ensure it’s designed to resist evasion attacks, aligning with Principle 2. This proactive approach reduces AI security vulnerabilities from the outset.

Secure Development Phase

During development, AI security involves training staff (Principle 1) and securing infrastructure (Principle 6).

For an LLM platform, developers might use RBAC to limit access to training data, preventing unauthorized changes.

Secure Deployment Phase

Deployment requires rigorous testing. A system operator launching a chatbot must conduct pentests to identify security weaknesses, like prompt injections, ensuring safe operation.

Secure Maintenance Phase

Maintenance involves monitoring behavior. For a fraud detection AI, operators should log system actions to detect anomalies, bolstering AI security against data drift.

Secure End-of-Life Phase

At end-of-life, Principle 13 ensures secure disposal. Developers decommissioning an AI system must involve data custodians to delete sensitive data, maintaining AI security even after use.


Recent Developments in AI Cyber Security

The AI cyber security landscape is evolving swiftly, with 2025 marking significant milestones.

News Highlights

February 2025: The EU AI Act enforcement began, imposing strict AI security rules, with fines reaching €10 million in the first month.

January 2025: The NCSC updated its Guidelines for Secure AI Development, reinforcing human oversight in AI security, echoing Principle 4.

Research and Statistics

A 2024 OWASP report found 35% of LLM apps faced supply chain vulnerabilities, supporting Principle 7’s focus on securing the AI supply chain.

The AI Safety Institute’s 2025 research showed red teaming cut AI vulnerabilities by 40%, validating the Code’s testing emphasis.


Conclusion

AI cyber security is a pressing priority in 2025, and the UK’s Code of Practice for the Cyber Security of AI offers a visionary framework to meet this challenge. From raising awareness to ensuring secure disposal, its thirteen principles provide a comprehensive approach to safeguarding AI systems. Recent breaches—like the January healthcare incident and February model extraction attack—highlight the urgency of robust AI security. As the Code shapes a global ETSI standard, it positions the UK as a leader in secure AI innovation. By embracing these principles, stakeholders can protect AI’s transformative power, ensuring safety and trust in an AI-driven future.


Key Takeaways

  • Awareness Drives Defense: Educating staff on AI security threats is foundational to resilience.
  • Security Starts Early: Designing AI with security in mind minimizes risks.
  • Testing is Essential: Rigorous assessments ensure AI security before deployment.
  • Lifecycle Approach: AI security spans design to disposal, requiring constant vigilance.
  • Global Influence: The Code’s ETSI integration sets a worldwide AI security benchmark.

Protect your business assets and data with Securityium's comprehensive IT security solutions!

img