What client-confidential information should be kept out of AI tools?
Quick Answer
Client-confidential information should be kept out of AI tools unless the tool is approved, contractually controlled, and the use is necessary. Matter facts, personal data, privileged advice, deal details and client files need partner-approved guardrails before use.
Detailed Answer
Why client-confidential AI use needs a bright line
The practical question for a professional services firm is not whether people will use AI; they already do. The safer question is which client-confidential information must stay out of AI tools unless a responsible partner or risk owner has approved the tool, purpose and controls.
That bright line matters because AI use can create confidentiality, privilege, data protection, contractual and quality risks. The firm needs rules that busy teams can follow without guessing.
The safest default is to block confidential material from unmanaged AI tools
Client-confidential information should not be pasted into unmanaged public AI tools. It should only be used in approved tools where the firm can evidence data handling, retention, access controls, contractual protections, human review and accountability.
The rule should be simple: if the information would not be put into an unapproved external system, it should not be put into an unapproved AI system.
Map which AI tools can handle client information
Information that should be banned without explicit approval
The ban should cover any information that identifies the client, reveals a matter or engagement, or could harm the client if exposed, reused or misinterpreted. In practice, that includes:
- Client names, matter names, file references and engagement identifiers
- Contracts, pleadings, board papers, deal documents, claims files and advice drafts
- Privileged, legally sensitive or litigation-related material
- Personal data, special category data and employee or customer records
- Commercially sensitive facts, pricing, strategy, forecasts and negotiation positions
- Regulated advice, financial information, insurance claims detail and KYC or AML material
- Any client document that the team has not been authorised to share with a third-party system
When approval may be appropriate
Approval may be appropriate where the AI tool is enterprise-controlled, covered by suitable contractual terms, configured not to train on firm inputs, and limited to a clear business purpose. Even then, the approval should state what data may be used, what outputs require review, and who owns the risk.
Approval should not be a casual message in a chat thread. It should create an audit trail that shows the tool, purpose, data class, approver, conditions and review steps.
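The audit trail described above can be captured as a simple structured record. This is a minimal sketch only; the field names and values are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative approval record mirroring the fields named above:
# tool, purpose, data class, approver, conditions and review steps.
@dataclass
class AIUseApproval:
    tool: str                 # the approved AI tool
    purpose: str              # the clear business purpose
    data_class: str           # what data may be used
    approver: str             # responsible partner or risk owner
    conditions: list = field(default_factory=list)     # limits on use
    review_steps: list = field(default_factory=list)   # required human review
    approved_on: date = field(default_factory=date.today)

# Hypothetical example entry (names are placeholders).
approval = AIUseApproval(
    tool="Enterprise research assistant",
    purpose="Summarise anonymised legal issues",
    data_class="Anonymised material only, no personal data",
    approver="Risk partner",
    conditions=["No client names or matter references", "No training on inputs"],
    review_steps=["Fee earner review", "Partner sign-off before output reaches a file"],
)
print(approval.approver)
```

Whether the record lives in a risk system, a register or a spreadsheet matters less than that every field is filled in before use begins.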
The controls that make approved use defensible
For approved use, the firm should keep a short control pack. At minimum, it should include:
- An approved AI tool list with permitted and prohibited use cases
- Data classification rules for client and matter information
- Vendor due diligence and contractual data protection evidence
- Settings or terms that prevent training on confidential inputs where applicable
- Human review requirements before outputs reach a client or file
- Logging of material AI-assisted work and exceptions
- An incident route for accidental disclosure or unauthorised tool use
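The first item in the control pack, an approved tool list with permitted and prohibited use cases, can be expressed as a simple lookup that blocks by default. The tool names and use cases below are hypothetical:

```python
# Hypothetical approved AI tool register; tool names and use cases are illustrative.
APPROVED_TOOLS = {
    "enterprise-research-assistant": {
        "permitted": ["anonymised legal research summaries", "internal drafting aids"],
        "prohibited": ["client documents", "privileged advice drafts", "personal data"],
    },
}

def is_use_permitted(tool: str, use_case: str) -> bool:
    """Return True only if the tool is approved and the use case is explicitly permitted."""
    entry = APPROVED_TOOLS.get(tool)
    if entry is None:
        return False  # unapproved tools are blocked by default
    return use_case in entry["permitted"]

print(is_use_permitted("enterprise-research-assistant", "client documents"))  # False
print(is_use_permitted("public-chatbot", "anything"))  # False: not on the register
```

The design choice worth copying is the default: anything not explicitly permitted is refused, which matches the bright-line rule earlier in this article.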
Put practical AI governance ownership in place
How to write the policy so teams actually follow it
The policy should be written around examples, not abstractions. Tell teams what they can do, what they cannot do, and when to ask for approval. A useful rule is: anonymise first, use approved tools only, record material use, and never rely on an AI output without professional review.
The firm should also train people on borderline cases. For example, a short anonymous summary of a legal issue may be acceptable in an approved research tool, while a full client document, a named chronology or a privileged advice draft should require explicit approval or be prohibited entirely.
Conclusion
The safest approach is to treat client-confidential information as blocked from AI by default unless the firm can prove the tool, data flow, purpose and review controls are approved. That keeps the policy practical: teams can still use AI, but they do not gamble with client trust.
Build the approval workflow for AI use in client work
FAQ
Can staff use AI if they remove the client name?
Sometimes, but removing the name is rarely enough on its own. Matter facts, dates, transaction details or unusual fact patterns can still identify the client or reveal confidential strategy.
Do approved enterprise AI tools remove the need for partner approval?
No. Approval of the tool is only one layer. Higher-risk uses still need approval for the purpose, data class, output review and accountability.
Should AI outputs go straight to clients?
No. AI-assisted work should be reviewed by a competent human before it reaches a client, file, regulator or external counterparty.
What evidence should the firm keep?
Keep the approved tool list, vendor checks, data classification rules, approval records, human review notes, training evidence and exception logs.