Chapter 6: AI Risk Management

AI Risk Management 101

So, what’s risk management all about? Think of it as making a plan for “what could go wrong” with your AI system — and how you’ll deal with it if it does. It’s about spotting risks early, documenting them, and making sure you don’t get caught off guard.

Here’s how different experts explain it:

  • ISO 31000: Risk management = coordinated activities to direct and control an organization with regard to risk.

  • MIT AI Risk Repository: A system for spotting and managing AI risks across an organization.

  • NIST AI RMF: A way to reduce the bad impacts of AI (like threats to rights) while boosting the good stuff.

At its core: Risk = the chance that something bad happens because of a weak spot.

To manage risks in AI, keep it simple:

  1. Set the stage – know your internal & external challenges, your stakeholders, and how much risk you’re willing to take (your “risk appetite”).

  2. Decide what’s acceptable – not every risk can be fixed, and some are worth taking. That’s why you need risk acceptance criteria.

  3. Plan for the weird stuff too – some risks are rare but huge (black swan events). Example: your AI suddenly stops following human instructions. Low chance, high impact.

The goal isn’t to remove all risk (that’s impossible). The goal is to understand, prepare, and act responsibly so your AI doesn’t surprise you in the worst way.

How to Calculate AI Risk

So, how do you actually measure AI risk? The most common formula is pretty simple:

Risk = Impact × Likelihood

Let’s break that down:

  • Impact = how bad the damage would be if something goes wrong.

    • Examples:

      • Loss of IP → loss of revenue 💸

      • AI hallucinations → damage to reputation 📰

      • Unintended AI use → breach of contract 📑

  • Likelihood = how probable it is that the bad thing actually happens.

    • You can think in terms like:

      • Almost certain

      • Very likely

      • Likely

      • Rather unlikely

      • Unlikely

Once you’ve scored both, you combine them to figure out which risks matter most. This helps you prioritize: should you treat the risk, accept it, or put it on watch?

💡 You can also group risks by seriousness of the impact:

  • Catastrophic: impacts entire industries/sectors, long-term consequences 🌍

  • Critical: threatens the survival of the organization 🚨

  • Serious: major consequences for the organization 🛑

  • Significant: noticeable but limited effects ⚠️

  • Minor: negligible, barely felt ✅
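
Putting the impact and likelihood scales together, here is a minimal sketch in Python of how you might turn those qualitative ratings into a prioritized list. The numeric weights and the example risks are illustrative assumptions, not values taken from any standard.

```python
# Minimal sketch: scoring Risk = Impact x Likelihood from the qualitative
# scales above. The numeric weights and the example risks are illustrative
# assumptions, not values taken from any standard.

LIKELIHOOD = {"unlikely": 1, "rather unlikely": 2, "likely": 3,
              "very likely": 4, "almost certain": 5}
IMPACT = {"minor": 1, "significant": 2, "serious": 3,
          "critical": 4, "catastrophic": 5}

def risk_score(impact: str, likelihood: str) -> int:
    """Combine the two ratings into a single number for prioritization."""
    return IMPACT[impact] * LIKELIHOOD[likelihood]

risks = [
    ("AI hallucination damages reputation", "serious", "likely"),
    ("Loss of IP through a third-party model", "critical", "rather unlikely"),
    ("Unintended AI use breaches a contract", "significant", "unlikely"),
]

# Highest score first: these are the risks to treat (or watch) before the rest.
for name, impact, likelihood in sorted(risks, key=lambda r: -risk_score(r[1], r[2])):
    print(f"{risk_score(impact, likelihood):2d}  {name}")
```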

Finally, how do you spot these risks? Two main approaches:

  1. Event-based – imagine realistic scenarios starting from possible problems (like misuse or malicious actors).

  2. Asset-based – inspect your AI assets directly (models, datasets, systems) to uncover threats and vulnerabilities.

Don’t just calculate risks once and forget them. Keep updating as your AI system grows and interacts with the real world.

Threats, Vulnerabilities & What to Do About Them

Step 1: Spot the difference between threats and vulnerabilities.

  • A threat = the bad thing that could happen.

  • A vulnerability = the weak spot that makes it possible.

👉 Example:

  • Threat: Overreliance on the AI system.

  • Vulnerability: Poor AI literacy.

👉 Another one:

  • Threat: Prompt injection attack.

  • Vulnerability: Poor security.


Step 2: Analyze the risks. Now that you’ve mapped threats + vulnerabilities, you figure out how risky they are:

  • Assess likelihood (how likely is it?).

  • Assess consequences (how bad would it be?).

  • Run this past the risk owners (the people accountable for the area).

  • Compare results to your risk acceptance criteria → decide if it’s acceptable or not.
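
As a toy illustration of that last comparison step, here is a minimal sketch. The acceptance threshold and the threat/vulnerability pair are assumptions made up for the example.

```python
# Minimal sketch: comparing an analyzed risk against risk acceptance criteria.
# The threshold and the threat/vulnerability pair are illustrative assumptions.

ACCEPTANCE_THRESHOLD = 8  # example criterion: scores above this need treatment

risk = {
    "threat": "Prompt injection attack",
    "vulnerability": "Poor security",
    "likelihood": 4,   # 1-5, assessed with the risk owner
    "consequence": 3,  # 1-5, assessed with the risk owner
}

score = risk["likelihood"] * risk["consequence"]
decision = "acceptable (retain & watch)" if score <= ACCEPTANCE_THRESHOLD else "needs treatment"
print(f"{risk['threat']}: score {score} -> {decision}")
```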


Step 3: Treat the risk. Not all risks need the same approach. You’ve got four main plays in your playbook:

  1. Avoidance – Stop the risky activity altogether.

  2. Modification – Reduce the likelihood, the consequence, or both.

  3. Sharing – Transfer some responsibility to a third party (e.g., insurance, outsourcing).

  4. Retention – Accept the risk knowingly (informed decision).

To actually mitigate risks, you’ll lean on controls:

  • Preventive controls – stop bad stuff before it happens.

  • Detective controls – spot the bad stuff when it’s happening.

  • Corrective controls – fix the damage after it happens.
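
If you want to track those four options and three control types consistently in a registry, a small sketch like this can help. The enum names simply mirror the lists above; the example mapping for prompt injection is an assumption.

```python
# Minimal sketch: modelling the four treatment options and three control types
# as enums so a risk registry can reference them consistently. The example
# mapping for prompt injection is an illustrative assumption.
from enum import Enum

class Treatment(Enum):
    AVOIDANCE = "stop the risky activity altogether"
    MODIFICATION = "reduce likelihood and/or consequence"
    SHARING = "transfer part of the risk to a third party"
    RETENTION = "accept the risk knowingly"

class ControlType(Enum):
    PREVENTIVE = "stop bad outcomes before they happen"
    DETECTIVE = "spot bad outcomes while they happen"
    CORRECTIVE = "fix the damage afterwards"

treatment_plan = {
    "risk": "Prompt injection attack",
    "treatment": Treatment.MODIFICATION,
    "controls": {
        ControlType.PREVENTIVE: "input filtering and allow-listed tools",
        ControlType.DETECTIVE: "anomaly monitoring on prompts and outputs",
        ControlType.CORRECTIVE: "incident response and model rollback procedure",
    },
}
print(treatment_plan["treatment"].value)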


Step 4: Document it in a Statement of Applicability (SoA). This is your “receipt” of what you’ve decided:

  • Which controls are fully implemented, partially implemented, or not implemented.

  • Why you excluded any controls (accountability = explaining your “no”).


Step 5: Build a Risk Treatment Plan. This is the action plan that ties it all together:

  • How the risk will be treated.

  • Rationale behind your treatment choice.

  • Who the risk owner is.

  • KPIs to measure effectiveness.

  • Who will actually implement the controls.

  • Sign-off from the risk owners.
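
Here is a minimal sketch of what one Risk Treatment Plan entry could look like as structured data. The field names follow the bullets above; the values are illustrative assumptions.

```python
# Minimal sketch: one Risk Treatment Plan entry as structured data.
# Field names follow the bullets above; the values are illustrative.
from dataclasses import dataclass, field

@dataclass
class RiskTreatmentPlanEntry:
    risk: str
    treatment: str                  # how the risk will be treated
    rationale: str                  # why this treatment was chosen
    risk_owner: str                 # accountable person
    kpis: list[str] = field(default_factory=list)  # effectiveness measures
    implementer: str = ""           # who actually implements the controls
    signed_off: bool = False        # risk owner sign-off

entry = RiskTreatmentPlanEntry(
    risk="Overreliance on the AI system",
    treatment="Modification: mandatory user training plus human-in-the-loop review",
    rationale="Cheaper than avoidance and brings the consequence below the acceptance criteria",
    risk_owner="Head of Clinical Operations",
    kpis=["% of users trained", "override rate on AI recommendations"],
    implementer="AI governance team",
)
print(entry.signed_off)  # False until the risk owner signs off
```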


Don’t forget residual risks. Even after treatment, some risk will always remain. The goal isn’t “zero risk” — it’s informed, managed risk.

🤖 What Makes AI Risks Special?

So, you’ve got the basics of risk management down. But here’s the twist: AI risks don’t play by the same rules as traditional software or info-security risks. They’re bigger, messier, and sometimes reach way beyond a single company.

🌍 Why AI risks stand out

  • Scale & Reach: Traditional software risks usually stay within one company. AI risks? They can spill over into entire industries, societies, or even ecosystems.

  • Complexity: AI is often a “black box” — even developers don’t always fully understand how or why it behaves the way it does.

  • Bias & Fairness: If your data is biased, your AI will likely amplify that bias. That’s not just a bug — it can create real-world harm.

  • Privacy Concerns: AI systems gobble up and infer personal data in ways traditional software doesn’t.

🔎 Looking through the NIST lens: AI Harms

The NIST AI Risk Management Framework helps us understand the unique harms AI can cause. These fall into three big buckets:

1. Harms to People & Groups

  • Civil liberties (e.g., surveillance or profiling).

  • Physical or psychological safety (think self-driving accidents or manipulative recommendations).

  • Economic harms (job loss, unfair credit scoring).

  • Discrimination against vulnerable groups.

  • Erosion of democratic participation or educational access.

2. Harms to Organizations

  • Business disruption (AI failure in operations).

  • Security breaches or financial loss.

  • Brand and reputation damage (PR nightmare from an AI gone wrong).

3. Harms to Ecosystems & Society

  • Breakdown of interconnected systems (e.g., supply chains, financial markets).

  • Environmental harm (resource-heavy AI models draining energy/water).

  • Harm to natural systems and the planet.

AI risks are bigger than software bugs. They can mess with people’s rights, destabilize organizations, and even ripple through global systems.

⚖️ AI Risks & the EU AI Act

So far, we’ve seen that AI risks are not just bigger versions of IT risks — they come with their own unique challenges like bias, data quality issues, and entirely new attack surfaces. Now let’s connect that to how the EU AI Act handles risks.


🚨 Novel AI Risks You Don’t See in Classic IT Security

Beyond traditional harms (like data breaches or downtime), AI brings new flavors of risk that older risk frameworks are still catching up with:

  • Bias & Data Quality Issues: If the training data is biased, the system can amplify discrimination, tanking trust and fairness.

  • Complex Attack Surfaces: AI opens doors to novel attacks (think: prompt injection or model inversion) that go far beyond “patch your servers.”

  • Generative AI Risks: From deepfakes to copyright chaos, generative AI introduces creative and chaotic new threats.


📜 How the EU AI Act Defines Risk

The EU AI Act takes a structured view:

👉 Risk = Severity × Likelihood

  • Severity = how bad the harm is (material or immaterial: physical, psychological, societal, or economic).

  • Likelihood = how probable that harm is.

This is spelled out in:

  • Article 3(2) (definition of risk)

  • Recital 4 (definition of harm)


🛑 Risk Categories Under the EU AI Act

1. ❌ Unacceptable Risk (Prohibited AI Practices – Article 5)

Some AI uses are outright banned because they’re too harmful to ever be justified:

  • Manipulation & Exploitation → subliminal techniques, exploiting vulnerabilities of children, elderly, or disadvantaged groups.

  • Social Scoring → ranking people’s worth across contexts (like China’s system), leading to unfair treatment.

  • Predictive Policing / Criminal Risk Assessment → profiling people solely on traits like nationality, biometrics, or social background.

  • Certain Biometric Uses → sensitive biometric categorization, scraping face images to build databases, workplace/education emotion recognition, real-time biometric ID in law enforcement.

(Some narrow exceptions exist, like legitimate medical treatment or lawful evaluations.)


2. ⚠️ High Risk

AI systems that could seriously affect people’s lives or rights.

  • Type 1: Product-Oriented (Annex I)

    • Products already covered under EU product safety laws, like medical devices or toys.

  • Type 2: Sector-Specific (Annex III)

    • AI systems in sensitive domains: biometrics, education, employment, essential services, law enforcement, migration, justice, and democratic processes.

    • Note: In some cases, if providers can prove their system isn’t likely to cause significant harm, it might fall outside this scope — but profiling natural persons is almost always in.


3. 🟡 Limited Risk

Not banned or high risk, but still tricky. Obligations focus on transparency so people know they’re interacting with AI. Examples:

  • Chatbots → must inform users they’re talking to an AI.

  • Biometric categorization or emotion recognition → must notify individuals exposed.

  • Deepfakes → must disclose the artificial nature of the content.


4. 🟢 Minimal Risk

The “low stakes” AI systems. These have no strict legal requirements, beyond general product safety. Voluntary codes of conduct and ethical principles are encouraged. Examples:

  • AI in video games 🎮

  • Fun filters in social apps ✨
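
As a rough, non-legal sketch of how a deployer might triage a new use case into these four buckets during intake (the questions and their ordering are simplified assumptions; the Act itself, read with legal counsel, is the authority):

```python
# Minimal, non-legal sketch: triaging an AI use case into the four EU AI Act
# risk tiers. The questions are simplified assumptions; the Act itself
# (Article 5, Annex I, Annex III) is the authority.

def triage(prohibited_practice: bool,
           safety_component_or_annex_iii_domain: bool,
           interacts_with_or_impersonates_people: bool) -> str:
    if prohibited_practice:                          # Article 5 practices
        return "Unacceptable risk: do not deploy"
    if safety_component_or_annex_iii_domain:         # product safety or Annex III
        return "High risk: full compliance obligations"
    if interacts_with_or_impersonates_people:        # chatbots, deepfakes, emotion recognition
        return "Limited risk: transparency obligations"
    return "Minimal risk: voluntary codes of conduct"

print(triage(False, False, True))  # e.g. a customer-service chatbot -> Limited risk
```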


🚦 AI Lifecycle Risks & Risk Registry for Deployers

Why it matters: Risks look different depending on where in the lifecycle you measure them. Some appear early (like bad data collection), others show up later (like overreliance in deployment). Developers and deployers also see risks differently: a model builder might worry about algorithmic bias, while a hospital deploying the same model worries about patient harm.

All actors share responsibility. That’s why building a Risk Registry — a living document where you log risks, their likelihood/impact, and treatments — is essential for EU AI Act compliance.

We’ll now walk through the Top 10 AI Risks for Deployers, mapped to lifecycle phases, with examples and mitigation strategies.


📒 Example Use Case: Risk Registry Entry

  • Lifecycle Stage: Collect & Process

  • Risk: Data Transparency

  • Description: Training data missing source info

  • Mitigation: Use datasheets for datasets

  • Status: Open

  • Owner: Data Steward

  • Residual Risk: Medium
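
If you keep the registry as a simple file, the same entry might be appended like this. A minimal sketch: the column names mirror the fields above, and the file name is an assumption.

```python
# Minimal sketch: appending the registry entry above to a CSV-based risk
# registry. Column names mirror the entry; the file name is an assumption.
import csv
import os

FIELDS = ["lifecycle_stage", "risk", "description", "mitigation",
          "status", "owner", "residual_risk"]

entry = {
    "lifecycle_stage": "Collect & Process",
    "risk": "Data Transparency",
    "description": "Training data missing source info",
    "mitigation": "Use datasheets for datasets",
    "status": "Open",
    "owner": "Data Steward",
    "residual_risk": "Medium",
}

write_header = not os.path.exists("risk_registry.csv")
with open("risk_registry.csv", "a", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    if write_header:        # only write the header for a brand-new registry file
        writer.writeheader()
    writer.writerow(entry)
```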


🔟 Top 10 Risks for Deployers (MIT AI Risk Repository + EU AI Act lens)

🛠️ Planning Phase

Risk 1: Third-Party Risk

  • Description: LLM and AI software supply chains are complex. Dependencies on vendors, APIs, or pre-trained models can import hidden vulnerabilities.

  • Mitigation: Vet third parties, require security/compliance certifications, add contractual clauses for AI risk sharing.

Risk 2: Compliance Risk

  • Description: AI systems can easily breach laws (IP, copyright, privacy, labor). Deployers are on the hook for penalties, reputational loss, and loss of user trust.

  • Mitigation: Conduct AI impact assessments, legal review, and keep compliance logs from day one.


📊 Collect & Process Data

Risk 3: Lack of Transparency in Data

  • Description: Deployers may inherit datasets without clarity on origin, copyright, or bias.

  • Mitigation: Use Datasheets for Datasets, keep a structured data inventory with metadata and usage rights.
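
Here is a minimal sketch of the kind of metadata one data-inventory record could capture. The fields are a condensed, assumed subset of the questions in "Datasheets for Datasets"; the values are made up.

```python
# Minimal sketch: one data-inventory record inspired by "Datasheets for
# Datasets". The fields are a condensed, assumed subset; the values are made up.
dataset_record = {
    "name": "clinical_notes_v2",
    "source": "Hospital EHR export, 2021-2023",
    "collection_method": "Bulk export under a data processing agreement",
    "license_or_usage_rights": "Internal use only; no redistribution",
    "contains_personal_data": True,
    "known_biases": ["under-represents outpatient visits"],
    "intended_use": "Fine-tuning a discharge-summary model",
    "contact": "data-steward@example.org",
}

# Flag records with empty metadata so gaps are visible in the inventory.
missing = [key for key, value in dataset_record.items() if value in ("", None, [])]
print("Missing metadata fields:", missing or "none")
```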

Risk 4: Data Compliance (IP & Privacy)

  • Description: Using scraped data, copyrighted text, or personal info without lawful basis → GDPR fines, lawsuits.

  • Mitigation: Sign lawful data agreements, anonymize/sanitize fields, run IP audits.


🧠 Build & Use Model

Risk 5: Model Bias

  • Description: Models can underperform for groups based on gender, ethnicity, region → discrimination risks.

  • Mitigation: Bias testing, representative datasets, retraining with fairness constraints, independent audits.
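
One very small example of the bias-testing part of that mitigation: compare a simple metric across subgroups and flag large gaps. The toy records and the allowed gap are illustrative assumptions, not a complete fairness audit.

```python
# Minimal sketch: per-subgroup accuracy check. The toy records, group labels,
# and allowed gap are illustrative assumptions, not a complete fairness audit.
from collections import defaultdict

MAX_ACCURACY_GAP = 0.05  # assumed acceptance criterion

# (subgroup, true label, predicted label) for a toy evaluation set
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 0, 0),
]

hits, totals = defaultdict(int), defaultdict(int)
for group, y_true, y_pred in records:
    totals[group] += 1
    hits[group] += int(y_true == y_pred)

accuracy = {g: hits[g] / totals[g] for g in totals}
gap = max(accuracy.values()) - min(accuracy.values())
print(accuracy, f"gap={gap:.2f}", "-> investigate" if gap > MAX_ACCURACY_GAP else "-> ok")
```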

Risk 6: Misuse / Function Creep

  • Description: Model built for one purpose (diagnosis) gets used for another (screening job candidates) → ethical/legal risks.

  • Mitigation: Document intended use in Model Cards, restrict API or access scope.


✅ Verify & Validate

Risk 7: Explainability & Transparency (XAI)

  • Description: “Black box” models make it hard for regulators, users, or patients to trust outputs.

  • Mitigation: Use interpretable models, LIME/SHAP explainability tools, document limitations in model cards.
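
For the SHAP part of that mitigation, here is a minimal sketch, assuming the open-source shap, scikit-learn, and numpy packages are installed; the toy data and model stand in for your real validation set.

```python
# Minimal sketch: explaining a tabular classifier with SHAP. Assumes the
# open-source shap, scikit-learn, and numpy packages; the toy data and model
# stand in for your real validation set.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))               # toy features
y = (X[:, 0] + X[:, 1] > 0).astype(int)     # toy labels

model = RandomForestClassifier(random_state=0).fit(X, y)

# Model-agnostic explainer over the predicted probability of the positive class;
# per-feature attributions can be logged alongside the model card.
explainer = shap.Explainer(lambda data: model.predict_proba(data)[:, 1], X)
shap_values = explainer(X[:5])
print(shap_values.values.shape)  # per-feature attributions for the 5 samples
```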

Risk 8: Fairness Gaps

  • Description: Validation misses subgroup differences → model passes lab tests but fails in real life.

  • Mitigation: Test across diverse datasets, simulate edge cases, add fairness metrics in validation reports.


🚀 Deploy & Use

Risk 9: Overreliance & Unsafe Use

  • Description: Users blindly trust AI recommendations, even in safety-critical contexts.

  • Mitigation: Mandatory user training, guardrails (warnings, “human-in-the-loop”), monitoring feedback loops.


🔍 Operate & Monitor

Risk 10: Hallucinations

  • Description: Model outputs false but confident answers (fake medical advice, made-up sources). Can mislead users and cause real harm.

  • Mitigation: Human oversight, feedback loops, confidence thresholds, disclaimers.
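
A minimal sketch of the confidence-threshold, disclaimer, and human-oversight parts of that mitigation; the threshold, the answer fields, and the routing logic are all illustrative assumptions.

```python
# Minimal sketch: gating model answers on a confidence score and routing
# low-confidence or unsourced answers to a human reviewer. The threshold and
# the answer structure are illustrative assumptions.
CONFIDENCE_THRESHOLD = 0.75
DISCLAIMER = "AI-generated content. Verify against the cited sources."

def route_answer(answer: str, confidence: float, sources: list[str]) -> str:
    if confidence < CONFIDENCE_THRESHOLD or not sources:
        return "ESCALATE: send to a human reviewer before showing to the user"
    return f"{answer}\n\nSources: {', '.join(sources)}\n{DISCLAIMER}"

print(route_answer("Take 200mg of drug X twice daily.", 0.55, []))               # escalated
print(route_answer("The clinic opens at 9am.", 0.92, ["clinic-handbook.pdf"]))   # shown with disclaimer
```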

+ Privacy Leakage (bonus risk)

  • Description: Model accidentally reveals personal data memorized during training.

  • Mitigation: Data minimization, differential privacy, red-teaming for leakage.


📴 Retire

Risk 11: Poor Data Retention & Disposal

  • Description: Old models and training data retained beyond lawful need → privacy time-bombs.

  • Mitigation: Set retention policies, secure deletion, archive only what’s legally/ethically necessary.

A Risk Registry isn’t a one-time task. You iterate on it throughout the lifecycle — adding new risks, updating likelihood/impact scores, and documenting mitigation. That’s how you stay compliant with the EU AI Act and build trustworthy AI systems.

Stay tuned for more and play some of our cool compliance games at 👇 https://play.compliancedetective.com/
