Financial Services · SM&CR · Accountability · AI Governance · FCA

What Are the SM&CR Implications of Deploying AI in My Financial Services Firm?

24 January 2026
Answered by Rohit Parmar-Mistry

Quick Answer

Under the SM&CR, accountability for AI sits with named senior managers, not with the algorithm or a third-party vendor. If AI is used in your business area, you must be able to evidence "reasonable steps": updated statements of responsibilities, rigorous vendor due diligence, meaningful human oversight, training, and records of all of the above.

Detailed Answer

This article is for informational purposes only and does not constitute financial or legal advice. You should consult with a qualified professional before making any decisions about the use of AI in your firm.



The Senior Managers and Certification Regime (SM&CR) is the FCA's accountability framework, and it has your name written all over it. When you deploy an AI system, you, the senior manager, are personally on the hook for its actions. The excuse "the algorithm did it" is not just a non-starter; it is a career-ending statement.

AI is no longer a theoretical concept in financial services; it is being used for everything from credit scoring and fraud detection to investment advice and customer service. The Treasury Committee's recent report highlighted that 75% of UK financial services firms are now using AI. But with great power comes great accountability, and that is where SM&CR bites.

The Senior Manager's Burden: You Own the Black Box

The core principle of SM&CR is that someone is always responsible. When an AI system makes a decision that harms a customer or the market, the FCA will not be interviewing the algorithm. They will be interviewing you.

Here is the critical point that the Treasury Committee report hammered home: a lack of understanding is not a defence. You cannot delegate your accountability to a third-party vendor or your internal data science team. If you are the senior manager responsible for the function using the AI, you are responsible for the AI itself.

| SM&CR Principle | How AI Magnifies the Risk |
| --- | --- |
| Duty of Responsibility | You must take reasonable steps to prevent regulatory breaches from occurring in your area of responsibility. This now includes breaches caused by AI systems. |
| Prescribed Responsibilities | If you hold a prescribed responsibility for, say, risk management or compliance, you are responsible for how AI impacts those areas. |
| Overall Responsibility | Even if AI is not explicitly in your job description, the "overall responsibility" requirement means that if it is used in your business area, you are accountable. |

The "Explainability" Paradox

This is where it gets tricky. Many advanced AI models, particularly deep learning networks, are "black boxes." It is incredibly difficult, sometimes impossible, to fully explain how they reached a specific decision. This creates a direct conflict with the SM&CR's requirement for senior managers to demonstrate a clear understanding of the risks in their business areas.

If you cannot explain how your AI-powered credit scoring model declined a customer's application, how can you demonstrate to the FCA that you have taken reasonable steps to prevent unfair outcomes?

The FCA has been clear: if you cannot explain it, you cannot control it. And if you cannot control it, you cannot use it.
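To see what explainability looks like when it is achievable, consider the opposite of a black box: a linear scorecard, where every input's contribution to the score is known and a decline can be explained feature by feature. The sketch below is purely illustrative; the feature names, weights, and threshold are invented for the example and bear no relation to any real credit model or FCA requirement.

```python
# Illustrative only: a hand-rolled linear "scorecard" whose decisions can be
# attributed to individual inputs. All names and weights are invented.

WEIGHTS = {
    "income_band": 2.0,       # higher income band raises the score
    "missed_payments": -3.5,  # each missed payment lowers it
    "years_at_address": 0.5,
}
BIAS = 1.0
APPROVE_THRESHOLD = 4.0


def score_with_reasons(applicant: dict) -> tuple[bool, dict]:
    """Return the decision and the per-feature contribution to the score."""
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    total = BIAS + sum(contributions.values())
    return total >= APPROVE_THRESHOLD, contributions


approved, reasons = score_with_reasons(
    {"income_band": 3, "missed_payments": 2, "years_at_address": 4}
)
# The largest negative contribution identifies the main decline reason,
# which is exactly the answer a black-box model cannot readily give.
main_reason = min(reasons, key=reasons.get)
```

With a deep learning model there is no equivalent of `reasons` to hand to a customer or a regulator, which is the heart of the paradox described above.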

A Framework for Demonstrating "Reasonable Steps"

So, how do you, as a senior manager, protect yourself and your firm? You need to be able to demonstrate to the FCA that you have taken "reasonable steps" to manage the risks of AI. This is not about becoming a data scientist; it is about implementing a robust governance framework.

Here is what that looks like in practice:

  1. Clear Accountability: Your firm's governance map and statements of responsibilities must be updated to explicitly include accountability for AI systems. Who is the senior manager responsible for the firm's overall AI strategy? Who is responsible for the use of AI in specific business functions?
  2. Robust Due Diligence: You must have a rigorous process for vetting third-party AI vendors. This goes beyond a standard procurement check; it is a deep dive into their models, their data, their security, and their ethics.
  3. Effective Human Oversight: You must have meaningful human oversight of your AI systems. This is not just a case of having a human in the loop to rubber-stamp the AI's decisions. It is about having qualified individuals who can challenge the AI, understand its limitations, and intervene when necessary.
  4. Comprehensive Training: You and your team need to be trained on the risks of AI. You need to understand the potential for bias, the importance of data quality, and the limitations of the technology.
  5. Transparent Record-Keeping: You need to be able to show your working. This means documenting your risk assessments, your due diligence, your training programmes, and your monitoring of the AI system's performance.
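For the record-keeping point, one practical pattern is to log every AI-assisted decision alongside the human who could have challenged it. The sketch below is a minimal illustration of that idea; the class and field names are assumptions for the example, not a regulatory schema.

```python
# Illustrative only: one shape a decision-level audit record might take.
# Field names are assumptions, not any prescribed regulatory format.

import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone


@dataclass
class AIDecisionRecord:
    model_name: str
    model_version: str
    inputs: dict
    decision: str
    human_reviewer: str   # the qualified individual able to intervene
    overridden: bool = False
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_json(self) -> str:
        """Serialise for an append-only log the firm can later evidence."""
        return json.dumps(asdict(self), sort_keys=True)


record = AIDecisionRecord(
    model_name="credit-scoring",
    model_version="2.3.1",
    inputs={"application_id": "A-1001"},
    decision="declined",
    human_reviewer="j.smith",
)
logged = json.loads(record.to_json())
```

A log like this gives you something concrete to put in front of the FCA: who reviewed what, when, against which model version, and whether the human oversight was ever exercised.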

The Bottom Line: SM&CR Demands Active Governance

The SM&CR has fundamentally changed the game for AI adoption in financial services. It has made it a personal issue for every senior manager.

You can no longer afford to be a passive observer of your firm's AI strategy. You need to be an active, engaged, and critical participant. You need to ask the tough questions, challenge the assumptions, and demand the evidence.

Your career depends on it.


Take the Next Step

If you are ready to move from theory to action, I can help. My AI Audit gives you a comprehensive assessment of your firm's AI readiness, identifying the gaps in your governance, the risks in your current tooling, and a clear roadmap to get you where you need to be.

Book a Discovery Call → or learn more about the AI Audit.

Need More Specific Guidance?

Every organisation's situation is different. If you need help applying this guidance to your specific circumstances, I'm here to help.