An Insurer Ready to Scale AI, With a Contract That Was Not Ready for Insurance
The company is a mid-sized US insurance firm writing policies, processing claims, and serving policyholders across multiple states. Its operations handle the full spectrum of personal and commercial lines, and its customer-facing systems manage sensitive personal information at every stage: policy applications, underwriting decisions, claims submissions, medical records, financial data, and coverage correspondence.
The firm had been piloting GPT tools for several months with promising results. A GPT-based virtual assistant was handling initial customer inquiries, answering questions about coverage, deductibles, and policy terms. A separate claims processing application was using GPT to summarise adjuster notes, flag inconsistencies in claims documentation, and accelerate first-notice-of-loss intake. Customer response times improved. Claims processing throughput increased.
Leadership wanted to move beyond pilots and formalise a long-term OpenAI (Azure OpenAI) contract to scale these capabilities enterprise-wide. When the insurer's legal and IT teams reviewed the standard OpenAI service agreement, they found terms fundamentally misaligned with the regulatory environment, risk profile, and data obligations of an insurance company. The contract had been designed for general enterprise use — not for an industry where customer data is protected by state insurance regulations, where AI-generated advice carries liability implications, and where a single catastrophe event can spike AI usage by an order of magnitude overnight.
The firm's leadership made the right call: rather than negotiate the contract themselves, they engaged Redress Compliance to conduct an independent OpenAI Contract Risk Review before signing anything. For the full picture of insurance-specific AI contract risks, see our GenAI Knowledge Hub.
Insurance company deploying or scaling AI?
Standard AI contracts were not written for insurance. Independent contract risk review identifies and fixes the provisions that create regulatory exposure, uncapped cost risk, and liability gaps before you sign.
What We Found in the Contract: Four Critical Risks
The Data Retention Clause Would Have Let the AI Provider Keep and Reuse Policyholder Data
The default data usage provision permitted the AI provider to retain data submitted through the service and use it for model training and improvement. For an insurance company, this was unacceptable on every level. The data flowing through the AI system included policyholder names, addresses, policy numbers, claims histories, medical information, financial details, and coverage specifics. Allowing an AI vendor to retain this data — let alone use it for model training — would violate the insurer's privacy policies, breach its obligations under state insurance regulations, and create regulatory exposure with every state insurance department where the firm operates. The clause was not just commercially unfavourable. It was a compliance violation waiting to happen.
The Contract Contained No Spending Cap
AI usage in insurance is inherently volatile. A quiet month generates predictable API call volumes from routine customer inquiries and standard claims processing. A hurricane, wildfire, or major weather event can increase claims volume five- to tenfold in a matter of days, and every claims-related AI interaction generates API calls. The draft contract had no mechanism to cap monthly or annual spending. If a catastrophe event triggered a surge in AI-powered claims processing, the insurer's AI costs would spike proportionally with no ceiling. For a company that had recently experienced a major weather event season, this was not a theoretical risk — it was a budgetary time bomb embedded in the contract terms. Compare this to the spending controls secured for a top-20 US bank in a different AI deployment context.
Liability Was Entirely One-Sided
The GPT-powered virtual assistant was designed to answer customer questions about coverage, explain policy terms, and guide policyholders through claims processes. If the AI provided incorrect coverage advice — telling a policyholder they were covered when they were not, or miscategorising a claim — the consequences for the insurer would be tangible: policyholder complaints, E&O claims, regulatory scrutiny, and reputational damage. The draft contract placed all of this liability on the insurer. The AI provider bore no responsibility for the accuracy of its outputs and accepted no contractual obligation to help resolve issues caused by its own technology. The insurer would be deploying AI across customer-facing workflows while bearing 100% of the risk if that AI made a material error.
There Was No Explainability Requirement
State insurance departments have increasing expectations around the use of AI in underwriting, claims processing, and customer communications. Regulators want to understand how AI-driven decisions are made, particularly when those decisions affect policyholder outcomes. The draft contract contained no requirement for the AI to provide confidence scores, reasoning explanations, or decision rationale. The insurer would have been deploying AI into regulated workflows with no ability to explain to regulators, auditors, or policyholders how the AI reached its conclusions. In a regulatory environment where AI transparency requirements are expanding rapidly, this gap would have created compliance risk that worsened over time.
How We Rewrote the Contract
Redress Compliance provided a comprehensive contract risk review followed by targeted negotiation on each of the four critical areas. Every proposed change was anchored to a specific regulatory requirement — state insurance privacy regulations, the firm's financial risk management framework, E&O exposure from AI-generated advice, and emerging AI governance standards from state insurance departments. Presenting each change as a regulatory necessity rather than a negotiation preference transformed the dynamic with the AI provider.
We Eliminated Data Retention Entirely
The revised contract prohibits the AI provider from storing any customer data beyond what is needed to deliver the immediate service. All policyholder data submitted through the API must be processed and then deleted. No data may be retained for model training, service improvement, or any purpose beyond the specific interaction. The prohibition covers prompts, responses, metadata, and any derivative data generated during processing. This is not an aggressive negotiation position for an insurer — it is the minimum standard required by the regulatory environment in which insurers operate.
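The zero-retention guarantee itself is contractual and enforced on the provider's side, but insurers typically pair it with client-side discipline so that policyholder prompts and AI responses never land in their own application logs either. The sketch below is illustrative only, assuming the openai Python SDK against an Azure OpenAI deployment; the endpoint, function name, and logging policy are our assumptions, not terms of the agreement.

```python
import logging
from openai import AzureOpenAI  # assumes the openai>=1.x Python SDK

log = logging.getLogger("ai_gateway")

# Hypothetical Azure OpenAI client; the endpoint is a placeholder and
# credentials would come from a secrets manager in practice.
client = AzureOpenAI(
    azure_endpoint="https://example-insurer.openai.azure.com",
    api_key="<from-secrets-manager>",
    api_version="2024-06-01",
)

def handle_inquiry(prompt: str, workflow: str) -> str:
    """Send one customer interaction; log metadata only, never payloads."""
    response = client.chat.completions.create(
        model="gpt-4o",  # on Azure this is the deployment name
        messages=[{"role": "user", "content": prompt}],
    )
    # Record what auditors need (workflow, token counts) without persisting
    # the policyholder's text or the model's answer in the insurer's systems.
    log.info(
        "workflow=%s prompt_tokens=%d completion_tokens=%d",
        workflow,
        response.usage.prompt_tokens,
        response.usage.completion_tokens,
    )
    return response.choices[0].message.content
```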
We Implemented Hard Monthly Spending Caps with Detailed Reporting
The revised agreement includes a contractual ceiling on monthly AI usage charges. The insurer cannot be billed above the agreed limit without explicit approval. If usage approaches the cap, the AI provider must deliver detailed usage reports showing consumption by workflow category — customer service, claims processing, underwriting support — so the insurer can make informed decisions about whether to authorise additional capacity. During a catastrophe event, when claims volume spikes and AI workloads surge, the spending cap prevents the technology from becoming an uncontrolled cost centre. Our benchmarking service informed the appropriate cap level based on comparable insurance deployments.
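To illustrate how a cap like this can be mirrored operationally, here is a minimal client-side spend guard, sketched in Python. Every figure, rate, and name in it is an illustrative assumption, not a term from the actual agreement; the contractual cap is what binds the provider, while a guard like this gives the insurer early warning by workflow category.

```python
from collections import defaultdict

MONTHLY_CAP_USD = 50_000.0   # contractual ceiling (illustrative figure)
ALERT_THRESHOLD = 0.80       # request detailed usage reports at 80% of cap
# Assumed per-token prices; real rates depend on model and agreement.
COST_PER_1K_TOKENS = {"input": 0.0025, "output": 0.01}

class SpendGuard:
    """Tracks month-to-date AI spend by workflow category."""

    def __init__(self) -> None:
        self.spend_by_workflow: dict[str, float] = defaultdict(float)

    @property
    def total(self) -> float:
        return sum(self.spend_by_workflow.values())

    def record(self, workflow: str, input_tokens: int, output_tokens: int) -> None:
        cost = (input_tokens / 1000) * COST_PER_1K_TOKENS["input"] + (
            output_tokens / 1000
        ) * COST_PER_1K_TOKENS["output"]
        self.spend_by_workflow[workflow] += cost

    def check(self) -> str:
        """Return 'ok', 'alert' (trigger the reporting clause), or 'halt'."""
        if self.total >= MONTHLY_CAP_USD:
            return "halt"   # no further billable calls without explicit approval
        if self.total >= ALERT_THRESHOLD * MONTHLY_CAP_USD:
            return "alert"  # time to review per-workflow consumption
        return "ok"

guard = SpendGuard()
guard.record("claims", input_tokens=120_000, output_tokens=40_000)
guard.record("customer_service", input_tokens=60_000, output_tokens=20_000)
print(guard.check(), round(guard.total, 2))
```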
We Negotiated Shared Liability for AI-Generated Errors
The revised terms establish shared responsibility when the AI's output causes a material error. If the GPT assistant provides incorrect coverage advice or miscategorises a claim, the AI provider is contractually obligated to support remediation — including participation in root cause analysis, technical remediation of the underlying issue, and documented cooperation in responding to regulatory inquiries. For an insurer deploying AI into customer-facing workflows where errors have direct financial and regulatory consequences, the difference between sole liability and shared liability is the difference between acceptable risk and unacceptable exposure.
We Added an AI Explainability Requirement
The revised contract requires the AI to provide confidence scores and reasoning rationale for its outputs. When the virtual assistant answers a coverage question, the system must be able to produce an explanation of why it gave that answer. When the claims processing application flags an inconsistency, it must document the basis for the flag. This explainability requirement serves two purposes: it gives the insurer's compliance and quality assurance teams the ability to audit AI-driven decisions, and it provides the documentation necessary to respond to regulatory inquiries. As state insurance departments expand their AI oversight frameworks, the insurer will be able to demonstrate that its AI systems produce explainable, auditable outputs — a capability that will become increasingly important as regulatory expectations mature.
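One plausible way to make the requirement operational, sketched below, is to have the model return structured output containing the answer, a self-reported confidence score, and the rationale, and to store that record for audit. This assumes the JSON-schema structured-output feature of the chat completions API; the schema, field names, and API version are our assumptions, not contract language, and a self-reported confidence score is not a calibrated probability.

```python
import json
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://example-insurer.openai.azure.com",  # placeholder
    api_key="<from-secrets-manager>",
    api_version="2024-08-01-preview",  # structured outputs need a recent version
)

# Hypothetical schema: every answer must carry a confidence score and rationale.
coverage_answer_schema = {
    "name": "coverage_answer",
    "strict": True,
    "schema": {
        "type": "object",
        "properties": {
            "answer": {"type": "string"},
            "confidence": {
                "type": "number",
                "description": "Self-reported confidence from 0.0 to 1.0",
            },
            "rationale": {
                "type": "string",
                "description": "Policy terms and reasoning behind the answer",
            },
        },
        "required": ["answer", "confidence", "rationale"],
        "additionalProperties": False,
    },
}

response = client.chat.completions.create(
    model="gpt-4o",  # deployment name on Azure
    messages=[
        {
            "role": "system",
            "content": "Answer coverage questions and cite the policy terms you relied on.",
        },
        {
            "role": "user",
            "content": "Is water damage from a burst pipe covered under my HO-3 policy?",
        },
    ],
    response_format={"type": "json_schema", "json_schema": coverage_answer_schema},
)

record = json.loads(response.choices[0].message.content)
# record["answer"], record["confidence"], and record["rationale"] become the
# audit trail; note this record contains policyholder context, so it falls
# under the same retention rules as any other policyholder data.
```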
What Changed for the Insurer
GPT is now running across critical workflows with appropriate protections in place. The virtual assistant handles customer inquiries across all service channels. The claims processing application operates at enterprise scale. Underwriting support functions use GPT for document analysis and risk assessment. The technology that showed promise in pilots is delivering value in production, and it is doing so under contract terms designed for insurance rather than inherited from a generic enterprise template.
Policyholder data is protected by contract, not by assumption. The spending cap means the insurer's AI costs are predictable regardless of usage volatility. The shared liability framework means that when the AI produces an error affecting a policyholder, the AI provider is contractually required to participate in remediation. The explainability requirements produce audit trails the insurer can present to state insurance departments as AI regulation evolves. Rather than retrofitting explainability later under regulatory pressure, the insurer has it from day one — positioning the firm ahead of regulatory requirements rather than scrambling to catch up.
For other insurers navigating similar challenges, compare this outcome with the European Insurance Group case study, where similar AI contract risks were addressed in a pan-European regulatory context, and see our GenAI Contract Risk Review service for a full description of our methodology.
"Our priority was protecting our customers and our budget. Redress Compliance made sure the AI contract did both. They caught the clauses that would have put us at risk and rewrote them in our favour. We now have an AI agreement with strong data safeguards and cost controls. We can innovate with AI now, knowing we are protected on all fronts."
Chief Information Officer, U.S. Insurance Firm
Why Insurance Companies Cannot Sign Standard AI Contracts
Policyholder data is not general enterprise data. The data that flows through an insurer's AI systems is among the most sensitive information any industry handles. State insurance departments, HIPAA (for health insurance), and state privacy laws impose specific obligations on how this data is handled, stored, and shared. Standard AI contracts were not written with these obligations in mind. An AI vendor's default data retention policy may be perfectly acceptable for a technology company or a retailer. For an insurer, it is a regulatory violation.
AI-generated advice carries liability that other industries do not face. When an AI system tells a policyholder they are covered for a specific loss, that statement has legal and financial implications. Standard AI contracts place all liability on the customer. For an insurer, this means absorbing 100% of the risk from errors produced by technology the insurer did not build and cannot fully control. Shared liability is not a negotiation preference — it is a risk management necessity.
Usage volatility in insurance is extreme and unpredictable. A catastrophe event can multiply claims volume and AI workloads by an order of magnitude within days. Few industries experience comparable spike dynamics. Standard AI contracts with no spending caps expose insurers to cost surges that coincide precisely with the moments when the company is already under maximum financial stress from claims payouts.
Regulatory oversight of AI in insurance is expanding rapidly. Insurers that deploy AI today without explainability requirements built into their vendor contracts will face the cost and disruption of retrofitting those capabilities later — under regulatory pressure and on timelines they do not control. Getting an independent contract risk review is the lowest-cost way to ensure the contract is built for the regulatory environment the insurer will face tomorrow, not just today. To discuss your specific AI deployment, contact our team.
AI Contract Intelligence for Regulated Industries — Delivered Monthly
Join risk, compliance, and technology leaders receiving our monthly advisory on GenAI contract terms, AI governance developments, insurance-specific AI risks, and vendor negotiation intelligence. Free. 100% vendor-independent.
Reviewing an OpenAI or AI vendor contract for a regulated industry?
Share your draft agreement. We will identify the provisions that create regulatory exposure, uncapped cost risk, and liability gaps — and negotiate the changes that protect your data, your budget, and your customers.