Tags: AI risk score, AI risk levels, EU AI Act categories, high-risk AI systems, AI governance framework

What are the risk levels of AI?

24 February 2026
Answered by Rohit Parmar-Mistry

Quick Answer

Understand the four AI risk levels (Unacceptable, High, Limited, and Minimal) and how to calculate an AI risk score for your business to ensure compliance.

Detailed Answer

The globally accepted framework for Artificial Intelligence risk, codified primarily by the EU AI Act but adopted as a benchmark by governance bodies worldwide, categorises AI systems into four distinct risk levels: Unacceptable Risk, High Risk, Limited Risk, and Minimal Risk. This tiered approach determines the level of compliance, transparency, and human oversight required for any given AI deployment.

For businesses in regulated sectors like financial services, law, and insurance, understanding these levels is not merely an academic exercise; it is the foundation of a defensible AI risk score for your internal operations. Failing to correctly classify a tool can lead to significant regulatory exposure and professional liability.

1. Unacceptable Risk (Banned)

These are AI systems deemed to pose a clear threat to fundamental human rights or safety. Under most emerging regulations, including the EU AI Act, these systems are prohibited entirely. Examples include:

  • Social Scoring: Systems that evaluate natural persons based on social behaviour or personality characteristics (similar to state-run social credit systems).
  • Cognitive Behavioural Manipulation: AI designed to deploy subliminal techniques to distort behaviour in a way that causes physical or psychological harm.
  • Real-time Biometric Identification: The use of real-time remote biometric identification, such as live facial recognition, in publicly accessible spaces by law enforcement (with very narrow exceptions).

If a vendor pitches a tool that claims to "assess employee loyalty via webcam analysis" or similar, you are likely looking at an Unacceptable Risk system. The only governance move here is to refuse deployment.

2. High Risk (Strictly Regulated)

This category creates the most significant compliance burden for organisations. High-risk AI systems are permitted but subject to rigorous obligations regarding data quality, documentation, traceability, and human oversight. These include AI used in:

  • Critical Infrastructure: Transport, water, gas, and electricity management where failure could endanger life.
  • Educational & Vocational Training: Scoring exams or assigning students to schools.
  • Employment & HR: Tools used for recruitment sorting, CV scanning, or making promotion/termination decisions.
  • Essential Private & Public Services: Credit scoring (financial services), evaluating eligibility for insurance or benefits.
  • Law Enforcement & Administration of Justice: AI used to assist judges or police in assessing risk or evidence.

For our clients in the legal and financial sectors, this is the danger zone. If you are using an AI tool to assist in credit decisions or to sift through job applicants, you are operating a High-Risk system. You must maintain a comprehensive AI risk score and audit trail to prove the system is not biased and remains under human control.

3. Limited Risk (Transparency Obligations)

Limited risk refers to systems where the primary danger is manipulation or deception. The regulatory requirement here is transparency: the user must know they are interacting with an AI. Examples include:

  • Chatbots and Customer Service AI: Users must be informed they are speaking to a machine.
  • Emotion Recognition Systems: Where these are permitted at all, the people exposed to them must be informed that the system is in use.
  • Deepfakes/Generative Content: Content generated by AI must be labelled as artificially manipulated (e.g., watermarking images or disclaimers on text).

While the regulatory burden is lighter, the reputational risk is high. Failing to disclose AI usage to a client destroys trust, even if it doesn't strictly break a law.

4. Minimal Risk (Unregulated)

The vast majority of AI systems currently in use fall into this category. These tools pose negligible risk to rights or safety. Examples include:

  • Spam filters.
  • AI-enabled video games.
  • Inventory management tools.

While these are legally unregulated, "minimal risk" does not mean "zero risk." Data leakage remains a concern even in simple tools if they are connected to sensitive proprietary databases.

The Context Trap: Why "Low Risk" Tools Can Become High Risk

A common error we see in corporate governance is assuming that the risk level is inherent to the software product itself. It is not. Risk is determined by the context of use.

For example, a standard Large Language Model (LLM) like ChatGPT might be considered a "General Purpose" or "Limited Risk" tool. However, if a law firm uses that same LLM to draft a contract, or a financial advisor uses it to summarise a confidential client portfolio, the usage context shifts. You have potentially moved that tool into a High-Risk workflow because of the nature of the data and the decision-making impact.

When calculating an AI risk score for your organisation, you cannot rely solely on the vendor's classification. You must assess:

  1. Data Sensitivity: What data is being fed into the system? (PII, financial data, health records?)
  2. Decision Impact: Is the AI making a decision that affects a person's livelihood, credit, or legal standing?
  3. Autonomy: Is there a human-in-the-loop reviewing the output before it is actioned?
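These three questions can be captured in a simple triage sketch. The class, field names, and decision rule below are illustrative assumptions for this article, not a legal test; the point is that the same tool is assessed per workflow, not per licence:

```python
from dataclasses import dataclass


@dataclass
class AIUseCase:
    """One AI workflow under review (field names are illustrative)."""
    name: str
    handles_sensitive_data: bool  # PII, financial data, or health records?
    affects_individuals: bool     # livelihood, credit, or legal standing?
    human_in_the_loop: bool       # is output reviewed before it is actioned?


def is_high_risk_workflow(use_case: AIUseCase) -> bool:
    """Contextual triage: decision impact is the primary trigger, and
    sensitive data without human review escalates the workflow too."""
    return use_case.affects_individuals or (
        use_case.handles_sensitive_data and not use_case.human_in_the_loop
    )


# The same LLM lands differently depending on how it is used.
drafting = AIUseCase("Internal memo drafting", False, False, True)
cv_screen = AIUseCase("CV screening for recruitment", True, True, True)
```

Note that under this sketch CV screening is flagged even with a human in the loop, because it directly affects individuals: oversight mitigates a high-risk workflow, it does not reclassify it.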

How to Calculate an AI Risk Score

To effectively manage this, organisations need a governance framework that assigns a risk score to every AI use case, not just every software license. At Pattrn Data, we recommend a simple scoring matrix:

  • Data Risk (1-5): From public data (1) to highly sensitive client/medical data (5).
  • Model Risk (1-5): From deterministic scripts (1) to "black box" deep learning models (5).
  • Impact Risk (1-5): From internal drafting (1) to automated client-facing decisions (5).

Any workflow with a combined score above a defined threshold (for example, 10 out of a possible 15) requires mandatory Human-in-the-Loop (HITL) protocols and regular auditing. This is not about slowing down innovation; it is about ensuring that your use of AI is robust, defensible, and compliant with emerging regulations.
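The matrix above can be sketched as a small scoring function. The additive model and the HITL threshold of 10 are assumptions for illustration; calibrate both to your own risk appetite:

```python
def ai_risk_score(data_risk: int, model_risk: int, impact_risk: int) -> dict:
    """Sum the three 1-5 dimensions into a combined score and flag
    workflows that need Human-in-the-Loop (HITL) review.

    The HITL threshold of 10 is an illustrative assumption, not a
    regulatory figure.
    """
    for dimension in (data_risk, model_risk, impact_risk):
        if not 1 <= dimension <= 5:
            raise ValueError("each dimension must be scored 1-5")
    total = data_risk + model_risk + impact_risk
    return {"total": total, "requires_hitl": total >= 10}


# A law firm using a black-box LLM on confidential client matters:
# highly sensitive data (5), opaque model (5), client-facing impact (4).
print(ai_risk_score(5, 5, 4))  # {'total': 14, 'requires_hitl': True}
```

Scoring per use case rather than per product keeps the "context trap" in view: the same model scores low for internal drafting and high for client-facing decisions.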

Do not wait for a regulator to ask for your governance documentation. By then, it is already too late.

Need More Specific Guidance?

Every organisation's situation is different. If you need help applying this guidance to your specific circumstances, I'm here to help.