AI compliance for solicitors · SRA AI guidance · law firm data governance · shadow AI risks

What are the key AI compliance requirements for solicitors?

2 March 2026
Answered by Rohit Parmar-Mistry

Quick Answer

Solicitors cannot blame AI for errors. We outline the SRA's compliance requirements for AI, focusing on confidentiality, supervision, and non-delegable professional liability.

Detailed Answer

What are the key AI compliance requirements for solicitors?

The Solicitors Regulation Authority (SRA) does not prohibit the use of artificial intelligence in law firms, but it does enforce a strict principle of non-transferable liability. In short: you cannot outsource your professional obligations to a machine.

For solicitors, AI compliance is not a technology issue; it is a regulatory one. The SRA's Codes of Conduct for Firms and for Individuals remain the primary framework. If an AI tool hallucinates a case citation, misses a clause in a contract, or exposes client data to a public server, the solicitor, not the software vendor, is liable. The regulator has made it clear that "reliance on technology" is not a defence to professional negligence.

Therefore, the key requirements revolve around three pillars: Accountability (the SRA Principles), Confidentiality (Code of Conduct 6.3), and Supervision (Code of Conduct 3.5).

The "Shadow AI" threat to client confidentiality

The most immediate compliance risk for law firms is not the AI they buy, but the AI their staff are already using. We call this Shadow AI.

SRA Code of Conduct Paragraph 6.3 requires you to keep the affairs of current and former clients confidential. When a junior associate pastes a draft NDA into a public instance of ChatGPT to "summarise it quickly," they are technically transmitting client data to a third-party server (often hosted in the US) without a Data Processing Agreement (DPA) in place. This is a direct breach of confidentiality.

To be compliant, firms must:

  • Block access to public, non-enterprise LLMs on work devices.
  • Provide sanctioned, private environments (e.g., Azure OpenAI instances) where data is not used to train the public model.
  • Update staff handbooks to explicitly define acceptable use of generative AI.
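
In practice, the first bullet is enforced at the network or device level (proxy, firewall, or MDM policy), not in application code. As an illustrative sketch only, the check reduces to matching outbound hostnames against a blocklist of public LLM domains; the domain list below is an example, not exhaustive, and `is_sanctioned` is a hypothetical helper name:

```python
from urllib.parse import urlparse

# Illustrative blocklist of public, non-enterprise LLM endpoints.
# In practice this list lives in your proxy/MDM policy, not in code.
BLOCKED_DOMAINS = {"chatgpt.com", "chat.openai.com", "claude.ai", "gemini.google.com"}

def is_sanctioned(url: str) -> bool:
    """Return True if the destination is not a known public LLM endpoint."""
    host = urlparse(url).hostname or ""
    return not any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

assert not is_sanctioned("https://chatgpt.com/share/abc")   # public LLM: blocked
assert is_sanctioned("https://firm-private-llm.internal/")  # sanctioned environment
```

The same logic applies whichever tooling enforces it: deny by default, and route staff to the firm's sanctioned private environment instead.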

The Duty of Competence and "Human in the Loop"

Under Paragraph 3.2, you must provide a competent service. In the context of AI, this means you must understand the tools you use. You do not need to be a coder, but you must understand the limitations of the system: specifically, the risk of hallucination.

In the now-infamous Ayinde v London Borough of Haringey (and similar US cases), the courts have shown zero tolerance for lawyers submitting AI-generated case law that does not exist. Compliance requires a mandatory "Human in the Loop" workflow: no AI output should ever be sent to a client, court, or counterparty without human verification, and governance frameworks must document who supervised the AI's work.

Transparency and Client Care

Do you need to tell clients you are using AI? The SRA guidance suggests transparency is key (Paragraph 8.6 regarding client information). While you may not need to declare every use of a spellchecker or automation tool, if AI plays a substantive role in delivering the service, or if it is used to generate bills, clients should likely be informed.

More importantly, if you are billing by the hour for work that AI completed in seconds, you risk breaching SRA Principle 7 (acting in the best interests of the client). Firms must adjust their billing models to reflect the efficiency gains of AI, rather than charging human rates for machine time.

How to prove compliance

If the SRA investigates a breach involving AI, they will ask for your governance documentation. A compliant firm should be able to show:

  • An AI Risk Assessment: Documenting the specific risks of the tool used.
  • A Data Map: Showing exactly where client data goes when it enters the AI model.
  • Training Logs: Proving staff have been trained on the limitations of the tool.
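
To illustrate what a Data Map entry might capture, here is a hedged sketch; the tool name, fields, and `flag_risks` helper are all hypothetical, and a real data map would be maintained in your governance tooling rather than in code:

```python
# Hypothetical data-map entry: for each AI tool, record exactly where
# client data travels once it enters the model.
data_map = {
    "contract-summariser": {              # illustrative tool name
        "hosting_region": "UK (private enterprise instance)",
        "dpa_in_place": True,             # Data Processing Agreement signed
        "used_for_model_training": False, # client data must not train the public model
        "retention_days": 30,
    },
}

def flag_risks(entry: dict) -> list[str]:
    """Return governance red flags for a single data-map entry."""
    flags = []
    if not entry["dpa_in_place"]:
        flags.append("no DPA with the processor")
    if entry["used_for_model_training"]:
        flags.append("client data feeds the public model")
    return flags

assert flag_risks(data_map["contract-summariser"]) == []
```

A tool that fails either check maps directly onto the Paragraph 6.3 confidentiality breach described above.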

At Pattrn Data, we help regulated firms build this infrastructure before they deploy the tools. It is easier to build governance now than to explain a data breach later.

Conclusion

AI compliance for solicitors is about governance, not prohibition. The SRA expects you to innovate, but they demand you do it safely. Start by auditing what your team is already using, lock down the data leaks, and ensure every AI output is treated as a draft, not a final product.

Need More Specific Guidance?

Every organisation's situation is different. If you need help applying this guidance to your specific circumstances, I'm here to help.