20 critical questions · 5 risk categories · 3–5× typical true cost vs quote · millions of dollars at stake per contract

Every AI vendor contract signed in 2025 and 2026 will look naïve within three years. The technology is evolving faster than legal and procurement teams can draft terms, and vendors are exploiting that gap. They are embedding pricing structures that penalise growth, data handling terms that create compliance liability, and lock-in mechanisms that eliminate negotiation leverage at renewal. This checklist exists because we have reviewed dozens of enterprise AI contracts across OpenAI, Anthropic, Google, AWS, Microsoft, Salesforce, and specialised AI vendors — and the same twenty gaps appear in nearly every one. These are the questions your vendor hopes you do not ask. Ask them anyway.

How to Use This Checklist

Each of the twenty questions targets a specific contractual or commercial risk that is unique to AI procurement. Unlike traditional SaaS contracts, AI agreements introduce variable consumption pricing, data training rights, model deprecation risks, and capability degradation that have no precedent in conventional enterprise software licensing. For each question, we explain why it matters, what a good answer looks like, and what the red flag response reveals.

Print this list. Bring it to your next vendor meeting. Send it to your legal team before contract review. The questions are sequenced across five risk categories: pricing and cost control, data rights and privacy, performance and availability, lock-in and exit, and governance and compliance. Every question that goes unasked is a concession you did not know you made.

Pricing and Cost Control

1. What is the complete unit economics model, including all consumption-based charges?

Why it matters: AI contracts routinely quote a headline per-seat or per-token price that represents 20–40% of true cost. The remaining 60–80% hides in consumption overages, API call charges, storage fees, compute surcharges, premium feature tiers, and support costs. A $30/seat/month Copilot licence becomes $55–$80/seat when you add Azure consumption, premium API features, and the Microsoft 365 E5 prerequisite. A $3/million-token API quote becomes $8–$12 when agent orchestration, tool calls, knowledge base infrastructure, and monitoring are included.

Good answer: The vendor provides a total-cost-of-ownership model that itemises every charge — subscription, consumption, infrastructure, storage, support, and any variable components — with worked examples at your projected usage levels. Red flag: The vendor only quotes the headline per-seat or per-token rate and says “it depends on usage” when asked about ancillary costs.
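The arithmetic behind the Copilot example can be sketched as a small total-cost model. The ancillary line items and their dollar values below are illustrative assumptions drawn from the ranges above, not vendor quotes:

```python
# Illustrative total-cost-of-ownership model for a per-seat AI licence.
# All ancillary figures are assumptions, not any vendor's actual charges.

def true_monthly_cost_per_seat(headline, ancillary_charges):
    """Headline licence price plus every consumption-based ancillary charge."""
    return headline + sum(ancillary_charges.values())

copilot = true_monthly_cost_per_seat(
    headline=30.0,  # advertised $30/seat/month licence
    ancillary_charges={
        "azure_consumption": 20.0,       # variable Azure compute draw
        "premium_api_features": 10.0,    # metered premium features
        "e5_prerequisite_uplift": 15.0,  # incremental cost of the E5 prerequisite
    },
)

headline_share = 30.0 / copilot  # fraction of true cost the quote covers
print(f"True cost: ${copilot:.0f}/seat/month; headline covers {headline_share:.0%}")
```

At these assumed values the $30 headline covers 40% of true cost, the top of the 20–40% range above; a vendor's TCO model should let you plug in your own projected usage the same way.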

2. How are consumption units defined, and can the definition change during the contract term?

Why it matters: AI vendors use inconsistent and sometimes deliberately opaque unit definitions. Tokens are not standardised across providers. A “seat” may include or exclude API access. An “FSE” or “full-service equivalent” may count contractors, part-time workers, or inactive users differently depending on the vendor’s interpretation. If the vendor can redefine what a consumption unit means mid-contract, your cost model becomes unreliable.

Good answer: Consumption units are explicitly defined in the contract with examples, and the definition is locked for the contract term. Red flag: Unit definitions reference the vendor’s “current documentation” or “standard practices,” which can change unilaterally.

3. What happens to pricing when our usage exceeds the contracted tier or committed volume?

Why it matters: AI usage is inherently unpredictable, especially in the first 12–18 months. Enterprises routinely underestimate consumption by 2–5× as adoption spreads beyond the initial pilot group. Overage pricing is where vendors recover the discounts they offered to win the deal. If overage rates are 2–3× the committed rate, a successful AI deployment becomes a budget crisis.

Good answer: Overage rates are capped at no more than 125% of the committed per-unit price, with the ability to true-up to a higher commitment tier at the lower rate retroactively. Red flag: Overage rates are at “list price” (typically 2–4× the negotiated rate) with no mid-term adjustment mechanism.
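The budget effect of overage terms can be made concrete. The committed rate, volumes, and the 3× list multiplier below are illustrative assumptions matching the patterns described above:

```python
# Illustrative overage cost comparison: capped overage vs list-price overage.
# Rates and volumes are examples, not any specific vendor's terms.

def annual_cost(committed_units, committed_rate, actual_units, overage_rate):
    """Pay the committed rate up to the commitment, the overage rate beyond it."""
    overage_units = max(0, actual_units - committed_units)
    return committed_units * committed_rate + overage_units * overage_rate

committed_rate = 1.00      # negotiated $ per unit
committed_units = 1_000_000
actual_units = 3_000_000   # a 3x underestimate, per the 2-5x pattern above

capped = annual_cost(committed_units, committed_rate, actual_units,
                     overage_rate=committed_rate * 1.25)  # 125% cap
list_price = annual_cost(committed_units, committed_rate, actual_units,
                         overage_rate=committed_rate * 3.0)  # ~3x list rate

print(f"Capped overage: ${capped:,.0f} vs list-price overage: ${list_price:,.0f}")
```

Same usage, same commitment: the capped contract costs $3.5M, the list-price contract $7M. That gap is the budget crisis the question is designed to prevent.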

4. Is there a pricing protection clause that limits annual increases at renewal?

Why it matters: Google increased Workspace pricing 17–22% overnight by bundling Gemini. Microsoft added Copilot at $30/seat with no opt-out path for E5 customers. AI vendors are repricing their entire portfolios as they embed AI capabilities. Without contractual pricing protections, your renewal price is whatever the vendor decides the market will bear — and your switching costs ensure you will pay it.

Good answer: Annual price increases are capped at 3–5% for the contract term, with any mid-term SKU restructuring or feature bundling not resulting in a net price increase. Red flag: The contract has no renewal pricing language, or includes a clause allowing “pricing adjustments to reflect changes in the service.”
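Compounding is what makes the cap worth negotiating. A sketch with illustrative figures, comparing the 3–5% cap above against hikes of the magnitude of the Workspace repricing:

```python
# Illustrative renewal compounding. Base spend and rates are assumptions.

def renewal_price(base, annual_increase_pct, years):
    """Contract price after compounding annual increases."""
    return base * (1 + annual_increase_pct / 100) ** years

base = 100_000  # illustrative annual spend

capped = renewal_price(base, 4, 3)     # within the 3-5% cap discussed above
uncapped = renewal_price(base, 20, 3)  # repeated hikes like the 17-22% repricing

print(f"Year-3 spend with 4% cap: ${capped:,.0f}; with 20% hikes: ${uncapped:,.0f}")
```

Three years of uncapped 20% increases turn $100K into roughly $173K versus about $112K under a 4% cap, a difference the switching costs described above ensure you cannot negotiate away later.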

Assess Your Contract Readiness

Answer 10 questions to benchmark your AI procurement preparation against enterprise best practices.

Take the contract readiness assessment →

Data Rights and Privacy

5. Will our data — prompts, responses, documents, and metadata — be used to train or improve your models?

Why it matters: This is the foundational data rights question. If your prompts and documents are used to train the vendor’s model, your proprietary information, trade secrets, and client data become embedded in a system that serves your competitors. Most enterprise-tier AI agreements now exclude customer data from training by default, but the contractual language varies significantly — and some vendors have different defaults for different product tiers.

Good answer: The contract contains an explicit, unconditional statement that customer data (prompts, responses, uploaded documents, and metadata) will not be used for model training, improvement, or any purpose other than delivering the contracted service. Red flag: The training exclusion is buried in terms of service rather than the commercial agreement, or it applies only to “enterprise” tier and not to API or developer access used by the same organisation.

6. Where does inference processing occur, and can we restrict it to specific geographic regions?

Why it matters: GDPR, data sovereignty laws, and industry-specific regulations require knowing where data is processed, not just where it is stored. AI inference may route prompts through data centres in regions outside your compliance boundary. Cross-region inference on AWS Bedrock, for example, can route requests globally without additional charges — which is operationally convenient and potentially a compliance violation. US-only inference options exist but typically carry a 10% price premium.

Good answer: The contract specifies inference processing regions and provides a mechanism to restrict processing to compliant geographies, with the premium (if any) for regional restriction clearly stated. Red flag: The vendor says data is “processed in our global infrastructure” with no option to constrain regions.

7. What data retention policies apply, and can we enforce deletion timelines?

Why it matters: AI interactions generate prompt logs, response caches, embeddings, and metadata that may be retained by the vendor for debugging, abuse prevention, or service improvement. If your organisation processes sensitive data (healthcare, financial, legal), retention of AI interaction logs may violate data minimisation requirements. You need contractual control over retention duration and a verifiable deletion mechanism.

Good answer: Retention period is configurable (ideally zero retention or 30-day maximum), with contractual commitment to deletion and audit rights to verify. Red flag: The vendor retains “de-identified” interaction data indefinitely, with de-identification defined by the vendor rather than an objective standard.

8. Who owns the outputs generated by the AI using our data and prompts?

Why it matters: Ownership of AI-generated content is legally unsettled in most jurisdictions. If your contract is silent on output ownership, you may face disputes about whether AI-generated code, analysis, documents, or creative content belongs to you, the vendor, or no one. This is particularly critical for organisations using AI to generate client deliverables, product designs, or patentable inventions.

Good answer: The contract explicitly assigns all rights in AI-generated outputs to the customer, with the vendor disclaiming any ownership interest. Red flag: The contract is silent on output ownership, or grants the vendor a licence to use outputs for “service improvement.”

Performance and Availability

9. What uptime SLA applies, and what are the financial remedies for breach?

Why it matters: Most AI API providers offer 99.9% uptime SLAs (about 8.8 hours of permitted downtime per year), but the financial remedies for breach are often limited to service credits of 10–25% of one month’s fee. If your business depends on AI availability for customer-facing applications, a 4-hour outage that costs you $500,000 in lost revenue triggers a service credit of $2,000. The SLA is functionally meaningless without meaningful remedies.

Good answer: Uptime SLA of 99.95%+ with escalating service credits (25% at 99.9%, 50% at 99.5%, 100% below 99.0%) and the right to terminate without penalty if SLA breaches occur in consecutive months. Red flag: SLA is “commercially reasonable efforts” with no quantified uptime commitment or financial remedy.
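The downtime and credit arithmetic in the example above can be checked in a few lines. The monthly fee and outage cost are illustrative assumptions:

```python
# Illustrative SLA arithmetic: permitted downtime vs the remedy for breach.

HOURS_PER_YEAR = 365 * 24  # 8,760

def permitted_downtime_hours(uptime_pct):
    """Annual downtime allowed under an uptime SLA."""
    return HOURS_PER_YEAR * (1 - uptime_pct / 100)

def service_credit(monthly_fee, credit_pct):
    """Credit paid on breach, as a percentage of one month's fee."""
    return monthly_fee * credit_pct / 100

downtime = permitted_downtime_hours(99.9)  # ~8.76 hours/year
credit = service_credit(monthly_fee=20_000, credit_pct=10)  # assumed fee
outage_cost = 500_000  # illustrative revenue loss from a 4-hour outage

print(f"99.9% SLA permits {downtime:.1f} h/yr of downtime; "
      f"the credit covers {credit / outage_cost:.2%} of the loss")
```

At these assumed figures the credit covers 0.4% of the actual loss, which is why escalating credits and a termination right matter more than the headline uptime number.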

10. What happens when the model version we depend on is deprecated?

Why it matters: AI vendors deprecate model versions faster than any previous software category. OpenAI has deprecated multiple GPT versions within months of release. Anthropic’s Claude 3 family was superseded by Claude 3.5, then Claude 4, within roughly a year. Model deprecation is not like a software version upgrade — a new model version can produce materially different outputs for the same inputs, breaking applications that depend on consistent behaviour. If your contract does not address model lifecycle, the vendor can force-migrate you to a new model version that breaks your production systems.

Good answer: Minimum 12-month notice before deprecation of any model version the customer is actively using, with parallel access to both old and new versions during the transition period at no additional cost. Red flag: The vendor reserves the right to “update or modify models at any time” with no notice or transition period.

11. Are there rate limits or throttling thresholds, and are they contractually guaranteed?

Why it matters: Rate limits determine your application’s maximum throughput. If rate limits are documented in a pricing page but not in your contract, the vendor can reduce them unilaterally. During peak demand, on-demand AI APIs throttle requests, which means your application either queues (adding latency), retries (wasting compute), or fails (losing transactions). If your application has throughput requirements, those requirements must be contractual, not aspirational.

Good answer: Rate limits are specified in the contract with guaranteed minimums per endpoint, and the vendor provides provisioned capacity options for applications requiring burst protection. Red flag: Rate limits are described as “best effort” or reference a documentation page that the vendor can update without notice.

12. How is model quality measured, and what recourse exists if quality degrades?

Why it matters: Unlike traditional software where functionality either works or is broken, AI model quality exists on a spectrum that can degrade subtly. A model update might reduce accuracy on your specific use case by 15% while improving average benchmark scores. Without contractual quality baselines and measurement mechanisms, you have no recourse when the model that won your evaluation performs worse six months later.

Good answer: The contract includes an acceptance testing framework, with the customer’s right to benchmark model performance against agreed criteria and the ability to revert to a previous model version or terminate if quality falls below the agreed baseline. Red flag: The vendor says quality is “continuously improving” and offers no mechanism for customer-defined quality measurement.
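One way to make customer-defined quality measurement operational is a regression suite scored against a contractual baseline. A minimal sketch; the evaluation cases, baseline accuracy, and 5% tolerance are illustrative assumptions:

```python
# Minimal sketch of a contractual quality-baseline check. The evaluation
# results, baseline, and tolerance below are illustrative assumptions.

def regression_check(graded_results, baseline_accuracy, tolerance_pct=5.0):
    """Flag a model update whose accuracy falls more than tolerance_pct
    below the contractually agreed baseline."""
    accuracy = sum(graded_results) / len(graded_results)
    floor = baseline_accuracy * (1 - tolerance_pct / 100)
    return accuracy, accuracy >= floor

# 1 = correct, 0 = incorrect, graded on your own evaluation cases
accuracy, acceptable = regression_check([1, 1, 1, 0, 1, 1, 0, 1, 1, 1],
                                        baseline_accuracy=0.90)
print(f"accuracy={accuracy:.0%}, within agreed baseline: {acceptable}")
```

Run the same suite before accepting any forced model migration; the contract should tie the revert-or-terminate right to exactly this kind of agreed, repeatable measurement.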

Need an independent review before signing your AI contract?

Our team has reviewed dozens of enterprise AI agreements across OpenAI, Anthropic, Google, and Microsoft. We catch the clauses that internal legal teams miss. Fixed-fee, vendor-independent.

Learn about our GenAI advisory services →

Lock-In and Exit

13. What is the total cost and timeline to exit this contract and migrate to an alternative?

Why it matters: AI vendor lock-in operates differently from traditional SaaS. Beyond data export, you face prompt library migration, fine-tuning recreation, integration rewiring, and application code changes. An enterprise that has built 50 AI-powered workflows on one vendor’s API faces 6–12 months and $500K–$2M in migration costs. Vendors know this, which is why they invest heavily in proprietary features that increase switching costs with every month of usage.

Good answer: The contract includes data portability provisions, API compatibility commitments, and a transition assistance period of at least 90 days post-termination during which the customer retains read access to all data, configurations, and prompt libraries. Red flag: Data export is available only in proprietary formats, and access terminates immediately upon contract expiration.

14. Can we use multiple AI vendors simultaneously without contractual penalty?

Why it matters: Multi-model strategies are becoming standard practice. Enterprises route simple requests to cheaper models, complex requests to premium models, and specialised requests to domain-specific models — often from different vendors. If your contract contains exclusivity clauses, most-favoured-nation provisions, or volume commitments that penalise multi-vendor usage, you lose the negotiation leverage and cost optimisation that multi-vendor strategies provide.

Good answer: No exclusivity requirements. Volume commitments are based on minimum spend with the vendor, not on a percentage of total AI spend. Red flag: The contract includes a “preferred vendor” clause requiring a minimum percentage of AI workloads be routed through the contracted vendor, or discounts are conditional on exclusivity.

15. What happens to our fine-tuned models, embeddings, and knowledge bases if we terminate?

Why it matters: Fine-tuning a model on your proprietary data creates an asset that sits on the vendor’s infrastructure. If you terminate, can you export the fine-tuned model weights? In most cases, no — the fine-tuned model is a derivative of the vendor’s base model and cannot be transferred. Your investment in fine-tuning (training data preparation, iteration, evaluation) is lost at termination. The same applies to vector embeddings in knowledge bases and agent configurations.

Good answer: Fine-tuned model weights are exportable (for open-source base models) or the vendor provides a transition period to re-create fine-tuning on an alternative platform. All embeddings, knowledge base content, and agent configurations are exportable in standard formats. Red flag: Fine-tuned models are “non-transferable” and knowledge bases can only be exported as raw source documents, not as processed embeddings.

16. Is there an auto-renewal clause, and what is the notice period to prevent it?

Why it matters: Auto-renewal clauses in AI contracts are particularly dangerous because they lock you into pricing that may be dramatically out of market. AI model costs are dropping 50–70% per generation (roughly every 12–18 months). An auto-renewed contract at 2024 pricing in 2026 represents a 200–300% overpayment relative to current market rates. The notice period to prevent auto-renewal is typically 60–120 days, and missing it means another 12-month commitment at the inflated rate.

Good answer: Auto-renewal is either eliminated or set to month-to-month continuation after the initial term, with 30-day notice to terminate. Red flag: Auto-renewal for a full additional term (12+ months) with 90–120 day notice period and pricing set at the vendor’s then-current list rates.
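The overpayment arithmetic is compounding in reverse: if per-unit costs fall 50–70% per model generation, roughly two generations separate a locked 2024 rate from the 2026 market. Illustrative figures:

```python
# Illustrative auto-renewal overpayment. The locked rate and 50% per-generation
# drop are assumptions within the ranges discussed above.

def market_rate(original_rate, drop_pct_per_generation, generations):
    """Market price after successive per-generation cost drops."""
    return original_rate * (1 - drop_pct_per_generation / 100) ** generations

locked_2024_rate = 10.0  # illustrative $ per million tokens, fixed by auto-renewal
market_2026 = market_rate(locked_2024_rate, drop_pct_per_generation=50, generations=2)

overpayment_pct = (locked_2024_rate / market_2026 - 1) * 100
print(f"2026 market rate: ${market_2026:.2f}; "
      f"the locked rate overpays by {overpayment_pct:.0f}%")
```

Two 50% drops leave the market at a quarter of the original rate, so the locked contract pays 4× market, the top of the 200–300% overpayment range above.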

Governance and Compliance

17. What compliance certifications does the platform hold, and do they cover AI-specific processing?

Why it matters: SOC 2, ISO 27001, and HIPAA compliance for the vendor’s cloud infrastructure does not automatically extend to AI model inference. The compliance boundary for AI processing — including model input/output handling, temporary data caching during inference, and log retention — may differ from the compliance boundary for the underlying infrastructure. If your compliance framework requires specific certifications, verify that those certifications explicitly cover the AI inference pipeline, not just the hosting environment.

Good answer: The vendor provides compliance documentation that explicitly covers AI inference processing, including data handling during model invocation, temporary caching, and output delivery. Red flag: Compliance certifications reference “our cloud infrastructure” without specifying whether AI-specific processing is within scope.

18. Can we audit the vendor’s AI data handling practices?

Why it matters: Trust-but-verify is the only viable approach to AI data governance. A contractual commitment that data is not used for training is only as reliable as your ability to verify it. Enterprise agreements should include audit rights — either direct audit access or the right to receive reports from an independent third-party auditor — that cover AI-specific data handling, not just general security practices.

Good answer: Annual third-party audit reports covering AI data handling are provided at no cost, with the right to commission an independent audit with reasonable notice. Red flag: No audit rights, or audit rights limited to “general security practices” that do not cover model training data pipelines or inference data handling.

19. Who is liable if the AI produces outputs that cause harm, legal exposure, or financial loss?

Why it matters: AI outputs can generate incorrect medical guidance, flawed legal analysis, inaccurate financial calculations, discriminatory hiring recommendations, and defamatory content. If your organisation deploys AI-generated outputs to customers or uses them in decision-making, the liability for harmful outputs must be contractually allocated. Most vendor agreements disclaim all liability for AI output accuracy — which means your organisation bears the full risk of every AI-generated error.

Good answer: While no vendor will guarantee output accuracy, the contract should include a mutual limitation of liability framework that does not cap the vendor’s liability for data breaches, training data misuse, or failure to comply with contractual data handling commitments. The customer accepts responsibility for output validation and deployment decisions. Red flag: The vendor disclaims all liability including for data handling failures, with total liability capped at fees paid in the prior 12 months regardless of the nature of the breach.

20. Does the contract address AI-specific regulatory requirements that may emerge during the term?

Why it matters: The EU AI Act is being implemented in phases through 2027. US state-level AI regulations are proliferating. Industry-specific AI governance requirements are emerging in financial services, healthcare, and government. A three-year AI contract signed today will span a period of significant regulatory change. If your contract does not address how new compliance requirements will be handled — who bears the cost, who implements the changes, what happens if the vendor’s platform cannot comply — you may be locked into a non-compliant platform with no exit path.

Good answer: The contract includes a regulatory change clause requiring the vendor to implement changes necessary to comply with applicable AI regulations at no additional cost, with the customer’s right to terminate without penalty if the vendor cannot achieve compliance within a reasonable timeframe. Red flag: No regulatory change clause, or the vendor passes all compliance costs to the customer and reserves the right to modify the service in ways that may affect functionality to achieve compliance.

Client Result

Lowe’s achieved $1.2M in AI cost avoidance through independent procurement advisory.

Read the case study →

The Meta-Question: Do You Have Independent Expertise at the Table?

These twenty questions are necessary but not sufficient. Asking the right questions is only valuable if you can evaluate the answers. AI vendor sales teams are trained to provide responses that sound comprehensive while preserving commercial flexibility. “Our enterprise agreement includes robust data protections” means nothing until you have read the specific contractual language and understand what it does and does not cover.

Enterprise AI contracts sit at the intersection of cloud procurement, data privacy law, intellectual property, and a technology category that is evolving faster than legal precedent can address. Most procurement teams lack internal expertise across all four domains. Most legal teams have limited experience with AI-specific contract risks. And most IT teams are evaluating capability rather than commercial terms.

This gap is where independent advisory firms provide the highest value. An advisor who has reviewed dozens of AI vendor contracts across multiple providers can identify the specific language gaps, benchmark your commercial terms against market, and negotiate provisions that your internal team would not know to request. The cost of independent advisory is typically recovered within the first 90 days of the optimised contract through better pricing, stronger protections, and avoided risk.

Scoring Your Vendor: A Quick Assessment Framework

After working through all twenty questions with your vendor, score each response on a simple three-point scale. A score of 2 means the vendor provided a satisfactory answer with clear contractual language. A score of 1 means the answer was partially acceptable but requires negotiation to strengthen the contractual provisions. A score of 0 means the vendor either could not answer the question, gave a red-flag response, or refused to commit to contractual terms.

Tally the scores across all twenty questions for a maximum of 40 points. A score of 32–40 indicates a vendor with strong commercial maturity and enterprise-grade contract terms — proceed with confidence after verifying language in the final agreement. A score of 20–31 indicates material gaps that require negotiation before signing. Identify the zero-score questions as non-negotiable items and make contract execution conditional on resolving them. A score below 20 indicates a vendor that is not ready for enterprise procurement. Either the product is consumer-grade being positioned as enterprise, the sales team lacks authority to commit to enterprise terms, or the vendor deliberately avoids contractual commitments. Walk away or escalate to executive sponsors who can authorise the necessary contractual protections.
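The tally and bands above can be captured in a short script. Thresholds are taken directly from the text; the sample scores are illustrative:

```python
# Scoring helper for the twenty-question checklist. Band thresholds follow
# the framework above; the example scores are illustrative.

def assess_vendor(scores):
    """Tally 0/1/2 scores for the 20 questions and classify the vendor."""
    if len(scores) != 20 or any(s not in (0, 1, 2) for s in scores):
        raise ValueError("expected twenty scores, each 0, 1, or 2")
    total = sum(scores)
    if total >= 32:
        verdict = "strong: proceed after verifying final contract language"
    elif total >= 20:
        verdict = "material gaps: negotiate zero-score items before signing"
    else:
        verdict = "not enterprise-ready: walk away or escalate"
    non_negotiables = [i + 1 for i, s in enumerate(scores) if s == 0]
    return total, verdict, non_negotiables

# Illustrative result of a vendor conversation
total, verdict, blockers = assess_vendor([2]*10 + [1]*7 + [0]*3)
print(f"{total}/40 -> {verdict}; zero-score questions: {blockers}")
```

The returned list of zero-score questions doubles as your non-negotiable list: make contract execution conditional on resolving each item on it.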

Pay particular attention to clusters of low scores. If a vendor scores well on pricing but poorly on data rights, the attractive pricing may be subsidised by liberal data usage rights that expose your organisation. If governance scores are consistently zero, the vendor has not built the enterprise compliance infrastructure your organisation requires, regardless of product capability.

The twenty questions in this checklist will surface the risks. Addressing those risks requires expertise, leverage, and the willingness to walk away from a deal that does not protect your organisation. The vendors signing AI contracts today are counting on the fact that most enterprises will not ask these questions. Prove them wrong.

GenAI Licensing Hub This checklist is part of our GenAI Licensing Knowledge Hub — 25+ expert guides covering AI token pricing, contract risks, data privacy, and enterprise negotiation strategies.