By
Anahita Bilimoria, Decision Lab Innovation Practice Lead
Sandy Liu, Decision Lab Senior Consultant
In the past, traditional IT security focused on protecting servers from physical intrusion, malware, and unauthorised network access, an approach sometimes called the fortress model. But in a cloud-native, AI-driven world, threats have evolved. Even if servers remain physically secure, AI models can be manipulated or poisoned remotely, altering outcomes without breaching legacy defences. Where IT security focused on firewalls, authorisation, and privileges, modern AI security emphasises the integrity of the data and the robustness of the algorithms themselves. Because a single malicious input can skew predictions or decision-making, protecting the algorithm and its data becomes even more crucial.
Standard software security focuses on patching vulnerabilities, managing identities, and securing APIs. It’s about ensuring the code does only what it’s told to do. If you find a bug, you patch it. If a port is open, you close it. It is deterministic and, for the most part, predictable.
AI flips the script. Where traditional software is deterministic, AI is probabilistic. You don’t just secure the code; you have to secure the data it’s trained on and prompted with, the training process, and the inference logic. AI introduces black-box risks: the system can behave dangerously even if the underlying code is technically bug-free. This is where AI TRiSM (Trust, Risk, and Security Management) becomes essential.
Unlike monolithic, off-the-shelf software, AI solutions are built from many interacting components, and at multiple points in the solution lifecycle any one of them can become an opaque black box. TRiSM addresses this concern by providing a framework for layering security throughout the solution, ensuring every component follows security measures and building trust across the whole system.
Within the TRiSM framework, Security Management is the proactive discipline of protecting the entire AI lifecycle. It moves beyond simple IT security to ensure that AI models remain robust, private, and resistant to malicious manipulation.
AI Security vs Traditional Security
Security can’t be an afterthought. Bolting security onto a solution after deployment exposes it to immense risk. We must adopt a secure-by-design framework across the lifecycle, starting with identifying and categorising potential threats to the solution.
To understand the changing nature of system security, we can compare traditional software security with AI security across common threat categories, noting that while the categories remain the same, the nature of the risks, and what requires protection, changes.
| Threat Category | Traditional Security | AI Security |
|---|---|---|
| Social/Input | Phishing: Tricking a user into giving up a password. | Prompt Injection: Tricking a model into ignoring its guardrails to leak data or execute commands. |
| Infection | Malware: Malicious code designed to corrupt a system. | Adversarial Attacks: Subtly altered inputs (like invisible noise on an image) that cause a model to malfunction. |
| Service Disruption | DDoS: Flooding a server with traffic to take it offline. | Model Extraction / Drift: Stealing the model’s logic via repeated queries, or the model becoming stale and inaccurate over time. |
| Data Integrity | Man-in-the-Middle: Intercepting data as it moves between points. | Data Poisoning: Contaminating the training data so the model learns a backdoor or bias. |
As this comparison highlights, the attack surface has fundamentally shifted. We are no longer defending against malicious code trying to break into a system; we are guarding against malicious intent attempting to manipulate a model’s logic or corrupt its foundational data. Because the very nature of these threats has evolved, our defensive strategies must evolve in tandem. Let’s break down the specific security measures required to neutralise these new vectors and keep your AI solutions robust.
Types of security measures by attack type

Supply Chain & Data Security
As researchers at the Royal United Services Institute (RUSI) recently highlighted, AI is quietly becoming a major supply chain vulnerability. Attacks targeting this ecosystem focus on compromising training data, external dependencies, or pretrained models used during development. One common example is data poisoning. Another risk involves compromised third-party libraries or pretrained models that may contain hidden vulnerabilities.
Security measures for these attacks focus on ensuring the integrity and trustworthiness of data and external components. Organisations should implement dataset validation processes and maintain clear data provenance records. Dependency scanning tools can help identify vulnerabilities in external libraries, while secure model repositories ensure that only verified artifacts are used during development.
Additional safeguards such as encryption of sensitive datasets, restricted access to training data, and secure data pipelines can further reduce the risk of supply chain attacks affecting the AI system.
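As a concrete illustration, dataset validation can start with something as simple as a digest manifest: record a cryptographic hash for each approved data file and re-check it before every training run. The sketch below is a minimal example using only Python’s standard library; the manifest structure is an assumption for illustration, not a prescribed format.

```python
import hashlib


def file_sha256(path: str) -> str:
    """Compute the SHA-256 digest of a file, streaming in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_manifest(manifest: dict) -> list:
    """Return the paths whose current digest no longer matches the manifest.

    `manifest` maps file path -> expected SHA-256 hex digest. Any mismatch
    means the file changed since it was approved, so training should halt
    until the change is explained.
    """
    return [p for p, expected in manifest.items() if file_sha256(p) != expected]
```

Running this check as a gate in the training pipeline turns “clear data provenance records” from a policy statement into an enforced control.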
Model Integrity
Model integrity is about ensuring the AI remains a faithful, untampered reflection of its intended design. The primary threat here is Data Poisoning (analogous to supply chain attacks on traditional software), where attackers inject malicious samples into training sets to create backdoors. To counter this, organisations must implement rigorous Data Provenance and Sanitisation protocols, essentially auditing the lineage of every data point to ensure it hasn’t been corrupted.
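A first pass at sanitisation can be as simple as flagging statistically anomalous samples for human review. The sketch below is a hypothetical z-score filter over numeric values; real pipelines use far more sophisticated detectors, but the principle of quarantining suspicious points before training is the same.

```python
from statistics import mean, stdev


def flag_outliers(values, z_threshold=3.0):
    """Return indices of samples whose z-score exceeds the threshold.

    A crude first pass for spotting poisoned or corrupted numeric training
    data: points far from the bulk of the distribution are quarantined for
    review rather than silently trained on.
    """
    if len(values) < 2:
        return []
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > z_threshold]
```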
Adversarial & Input Security
Even a perfectly trained model can be manipulated once it goes live through Adversarial and Input attacks. The most common threat today is Prompt Injection, where attackers use jailbreak phrases or clever framing to bypass safety filters. To mitigate this, developers should deploy Prompt Guardrails, which act as a secondary sentinel model that scans incoming requests for malicious intent before they ever reach the primary AI.
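A minimal illustration of the pre-filter idea is below. Note the caveat: this uses a static denylist of jailbreak phrasings rather than a trained sentinel model, and the patterns shown are purely illustrative; a production guardrail needs a classifier that generalises beyond known phrasings.

```python
import re

# Illustrative denylist of common jailbreak framings. A production
# guardrail would use a trained sentinel model, not a pattern list.
SUSPICIOUS_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"disregard your (rules|guidelines|guardrails)",
    r"reveal (the|your) system prompt",
]


def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt looks safe enough to forward to the model."""
    return not any(
        re.search(p, prompt, re.IGNORECASE) for p in SUSPICIOUS_PATTERNS
    )
```

Blocked prompts would typically be logged and routed to review, giving the team visibility into attack attempts rather than silently dropping them.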
In the realm of computer vision or file scanning, attackers often use Adversarial Examples—adding invisible noise to an image or file to cause the AI to misclassify it (e.g., making a stop sign look like a speed limit sign). Building resilience against these tactics requires Adversarial Training, a process where the model is intentionally exposed to broken or attacked samples during development, so it learns to ignore the noise. For high-stakes environments, using Ensemble Methods—where multiple different AI architectures, in effect, vote on a single input—is a highly effective defence, as it is significantly harder for an attacker to fool three different architectures simultaneously than a single, isolated system.
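The ensemble defence described above can be sketched in a few lines: collect each model’s prediction and return the majority vote. The `models` here are stand-in callables for illustration; in practice they would be independently trained classifiers with different architectures.

```python
from collections import Counter


def ensemble_predict(models, x):
    """Majority vote across independently trained models.

    An adversarial input must fool most of the ensemble simultaneously for
    a misclassification to stick, which is far harder than fooling one model.
    """
    votes = [m(x) for m in models]
    return Counter(votes).most_common(1)[0][0]
```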
Access Control & API Security
Many AI systems expose their capabilities through APIs, which makes them vulnerable to attacks that attempt to exploit or misuse model access. Security measures in this category focus on controlling and monitoring how users and applications interact with AI models. Strong authentication and authorisation mechanisms should be implemented to ensure that only authorised users can access the system. Role-based access control can limit user permissions based on their responsibilities. Additionally, following industry standards such as the Model Context Protocol (MCP) provides a standardised way to manage API calls and link models to external tools and data.
To mitigate automated attacks and excessive queries, organisations should implement rate limiting, request validation, and usage monitoring. Logging and auditing API activity also helps detect abnormal behaviour and potential abuse. By controlling access to AI services, these measures protect the model from exploitation and safeguard sensitive system capabilities.
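Rate limiting is commonly implemented with a token bucket: each request spends a token, and tokens refill at a fixed rate up to a cap. A minimal per-client sketch follows; the capacity and refill rate are illustrative parameters, and a real deployment would hold one bucket per API key in shared storage.

```python
import time


class TokenBucket:
    """Simple token bucket: each request consumes one token; tokens refill
    at a fixed rate up to the bucket capacity. Requests with no token left
    are rejected, throttling automated attacks and excessive queries."""

    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```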
Deployment & Infrastructure Security
AI models are typically deployed on cloud platforms, containerised environments, or edge infrastructure, which introduces additional attack vectors. Threats in this area may include unauthorised access to the hosting environment, infrastructure misconfigurations, or exploitation of vulnerabilities in the deployment pipeline. Attackers who compromise the infrastructure may gain access to model artifacts, manipulate outputs, or disrupt AI services.
Security measures designed to defend against these attacks focus on protecting the runtime environment and deployment infrastructure. This includes implementing secure configuration practices for cloud resources, isolating AI workloads through containerisation, and encrypting communications between system components.
Integrating security checks into the MLOps or CI/CD pipeline helps identify vulnerabilities before models are deployed. This lifecycle-wide vigilance aligns with emerging international frameworks such as ETSI EN 304 223, which mandates secure practices from initial design right through to operation and retirement. Continuous monitoring of infrastructure activity and system logs can also detect suspicious behaviour early. Together, these measures help ensure that AI systems operate within a secure and controlled environment even after deployment.
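One such pipeline check can be a configuration audit that fails the build whenever a deployment setting violates the security baseline. The keys and baseline values below are hypothetical, not any real platform’s schema; the point is that misconfigurations are caught mechanically before release rather than discovered in production.

```python
# Hypothetical security baseline for a deployment config; the keys and
# values are illustrative, not a real platform's schema.
REQUIRED_SETTINGS = {
    "tls_enabled": True,
    "debug_mode": False,
    "public_bucket": False,
}


def audit_config(config: dict) -> list:
    """Return human-readable findings for settings that violate the baseline.

    An empty list means the config passes; a CI/CD gate would fail the
    pipeline on any finding.
    """
    findings = []
    for key, expected in REQUIRED_SETTINGS.items():
        actual = config.get(key)
        if actual != expected:
            findings.append(f"{key}: expected {expected}, found {actual}")
    return findings
```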
Beyond forming a security policy, companies must bake these measures into the daily operations of their solution lifecycles. Operationalising security means shifting from reactive patching to proactive, hardened deployment environments, which in turn means stress-testing solutions against both technical failures and adversarial intent.
Conclusion
As we navigate the gold rush of Artificial Intelligence, we must remember a fundamental truth: unprotected performance is not an asset but a liability. A model that is 99% accurate yet left vulnerable to data theft or security breaches is a ticking time bomb. AI TRiSM gives companies a foundation for scaling solutions safely, and Security Management in particular is a pillar that transcends technology types. Whether you are dealing with:
- Hardware (Physical tampering and side-channel attacks),
- Traditional Software (Logic flaws and exploit kits), or
- AI Solutions (Prompt injection and model drift),
the philosophy remains the same: introduce security management into every aspect of the solution rather than treating it as an afterthought at deployment. This requires a mindset shift from building a solution to building a secure-by-design solution, taking a granular approach that introduces security at the ideation of each functionality to achieve a robust, anti-fragile, and efficient product. By integrating Security Management into the solution lifecycle, we help companies ensure trust and dependability.
At Decision Lab, we follow the secure-by-design approach so that our solutions excel in today’s markets, which demand robust, anti-fragile systems. To learn more, contact us!