How Can Insurers Address the "Black Box" Problem in AI Pricing Models?
Quick Answer
The "black box" problem in AI pricing is a compliance nightmare. Learn how insurers can address explainability while maintaining competitive advantage.
Detailed Answer
This article is for informational purposes only and does not constitute financial or legal advice. You should consult with a qualified professional before making any decisions about the use of AI in your firm.
The "black box" problem is the inconvenient truth at the heart of the AI revolution in insurance. You have an AI pricing model that is incredibly accurate at predicting risk, but you cannot fully explain how it does it. This creates a direct and profound conflict with your regulatory obligations under the Consumer Duty and the SM&CR. If you cannot explain your pricing, you cannot justify it. And if you cannot justify it, you cannot use it.
For years, insurers have used generalised linear models (GLMs) for pricing. They are simple, transparent, and easy to explain. But they are also relatively unsophisticated. The new generation of AI models, particularly deep learning networks, can identify complex, non-linear relationships in data that GLMs could never find. They are far more powerful, but that power comes at the cost of transparency.
The Regulatory Collision Course
The black box problem puts you on a collision course with two fundamental regulatory principles:
- The Consumer Duty: The "consumer understanding" outcome requires you to communicate with your customers in a way that is clear, fair, and not misleading. If a customer asks you why their premium has increased, and your only answer is "because the algorithm said so," you are failing to meet this outcome. You are also likely failing to act in "good faith."
- The SM&CR: The regime requires senior managers to have a clear understanding of the risks in their business areas. If you are the Head of Underwriting and you cannot explain how your most important pricing tool works, you cannot demonstrate that you are in control of the risks. You are failing in your duty of responsibility.
The Myth of the Unexplainable Model
The first thing to understand is that the idea of a completely unexplainable "black box" is often a myth, or at least an exaggeration. While you may not be able to trace the exact path of every single decision, you can take steps to understand the key drivers of the model and to put robust controls around it.
Ignoring the problem is not an option. The FCA has been clear: where full explainability is not possible, you must be able to explain the safeguards you have put in place to protect against negative outcomes.
A Practical Framework for Taming the Black Box
Addressing the black box problem is not about dumbing down your models. It is about building a more sophisticated governance framework around them. Here is a practical approach:
| Strategy | Description |
|---|---|
| 1. Prioritise Interpretable Models | Do not just default to the most complex model available. Challenge your data science team to use the simplest model that can do the job effectively. Techniques like SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) can help to explain the outputs of more complex models. |
| 2. Conduct Feature Importance Analysis | You need to understand which data inputs are the most influential in your model’s decisions. If you find that a seemingly innocuous variable (like the time of day a quote is requested) is a major driver of price, you need to investigate why. This is a critical step in identifying hidden biases. |
| 3. Implement Robust Model Validation | You need a dedicated, independent model validation team that can stress-test the model and challenge its assumptions. They should be asking questions like: "What happens to the model’s outputs if we remove this data input?" or "How does the model behave in extreme market conditions?" |
| 4. Build a "Human-in-the-Loop" System | For particularly sensitive decisions (e.g., a very high premium or a declined application), the AI should not be making the final decision. It should flag the case for review by an experienced human underwriter who can apply their professional judgment and document the rationale for the final decision. |
| 5. Focus on Outcome-Based Explanations | While you may not be able to explain the inner workings of the neural network, you can and must be able to explain the outcomes. This means being able to tell a customer: "Your premium has increased because our data shows that your postcode has a higher risk of subsidence, and the model has identified that your property type is more susceptible to this risk." |
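Two of the strategies above can be made concrete in a few lines of code. The sketch below is a minimal illustration in plain Python, with entirely hypothetical feature names, weights, and thresholds: it shows a model-agnostic permutation importance check in the spirit of strategy 2 (using a deterministic rotation in place of random shuffling, to keep the example reproducible), and a simple routing rule for strategy 4 that sends sensitive cases to a human underwriter. It is a sketch of the idea, not a production implementation.

```python
# Illustrative only: the model, feature names, weights, and thresholds
# below are hypothetical, not taken from any real pricing system.

def price_model(quote):
    """Stand-in for a trained pricing model's predict() call.
    The zero weight on quote_hour encodes the expectation that the
    time of day a quote is requested should not move the premium."""
    return (
        200.0
        + 3.5 * quote["postcode_risk"]
        + 1.2 * quote["property_age"]
        + 0.0 * quote["quote_hour"]
    )

def permutation_importance(model, rows, feature):
    """Mean absolute change in premium when one feature's values are
    permuted across the portfolio (strategy 2). A deterministic
    rotation stands in for random shuffling in this sketch."""
    values = [row[feature] for row in rows]
    rotated = values[1:] + values[:1]
    deltas = [
        abs(model({**row, feature: new_value}) - model(row))
        for row, new_value in zip(rows, rotated)
    ]
    return sum(deltas) / len(deltas)

def needs_human_review(premium, baseline, declined, ratio=1.5):
    """Strategy 4: route declines and premiums far above the baseline
    to an underwriter instead of auto-issuing the decision."""
    return declined or premium > ratio * baseline
```

The pattern to look for is a mismatch between importance and expectation: a supposedly innocuous input (like the quote hour) showing material importance, or a supposedly central one showing none, is exactly the kind of finding that should trigger the investigation described in strategy 2.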
The Bottom Line: Explainability is a Design Choice
The black box problem is not an unsolvable technical issue. It is a business and governance challenge. It requires you to make a conscious choice to prioritise transparency and fairness in the design and deployment of your AI systems.
You can have powerful, predictive AI models without sacrificing your ability to explain their decisions. But it requires a level of investment in governance, validation, and human oversight that many firms are currently unwilling to make.
In the world of the Consumer Duty, a pricing decision that you cannot explain is a pricing decision that you cannot defend. And an indefensible pricing decision is a regulatory fine waiting to happen.
Take the Next Step
If you are ready to move from theory to action, I can help. My AI Audit gives you a comprehensive assessment of your firm's AI readiness, identifying the gaps in your governance, the risks in your current tooling, and a clear roadmap to get you where you need to be.
Book a Discovery Call → or learn more about the AI Audit.