GenAI · AI Procurement Strategy

AI Procurement Checklist:
20 Questions to Ask Before Signing an AI Contract

The twenty questions your AI vendor hopes you do not ask. Covering pricing traps, data rights, model deprecation, lock-in, and compliance gaps with good answers and red flags for each.

20 critical questions · 5 risk categories · 3-5× typical true cost vs quote · Millions at stake per contract
📘 This guide is part of our GenAI Licensing Knowledge Hub
By Redress Compliance Advisory · Updated: February 2026 · ⏱ 24 min read

How to Use This Checklist

Every AI vendor contract signed in 2025 and 2026 will look naïve within three years. The technology is evolving faster than legal and procurement teams can draft terms, and vendors are exploiting that gap. They are embedding pricing structures that penalise growth, data handling terms that create compliance liability, and lock-in mechanisms that eliminate negotiation leverage at renewal. Read how OpenAI is changing software procurement for context on how rapidly the landscape is shifting.

This checklist exists because we have reviewed dozens of enterprise AI contracts across OpenAI, Anthropic, Google, AWS, Microsoft Azure OpenAI, Salesforce, and specialised AI vendors, and the same twenty gaps appear in nearly every one. These are the questions your vendor hopes you do not ask. Ask them anyway.

Each question targets a specific contractual or commercial risk unique to AI procurement. Unlike traditional SaaS contracts, AI agreements introduce variable consumption pricing, data training rights, model deprecation risks, and capability degradation that have no precedent in conventional enterprise software licensing. For each question, we explain why it matters, what a good answer looks like, and what the red flag response reveals.

Print this list. Bring it to your next vendor meeting. Send it to your legal team before contract review. The questions are sequenced across five risk categories.

Pricing and Cost Control

1. What is the complete unit economics model, including all consumption-based charges?

Why it matters: AI contracts routinely quote a headline per-seat or per-token price that represents 20-40% of true cost. The remaining 60-80% hides in consumption overages, API call charges, storage fees, compute surcharges, premium feature tiers, and support costs. A $30/seat/month Copilot licence becomes $55-$80/seat when you add Azure consumption, premium API features, and the Microsoft 365 E5 prerequisite. A $3/million-token API quote becomes $8-$12 when agent orchestration, tool calls, knowledge base infrastructure, and monitoring are included. Read Azure OpenAI pricing explained for specific examples.

✅ Good answer: The vendor provides a total-cost-of-ownership model that itemises every charge with worked examples at your projected usage levels.

🔴 Red flag: The vendor only quotes the headline per-seat or per-token rate and says "it depends on usage" when asked about ancillary costs.
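The headline-versus-true-cost gap above is simple arithmetic once the hidden line items are on the table. The sketch below is a minimal model using illustrative figures consistent with this section; every line item and amount is an assumption to replace with the numbers from your own quote, not vendor pricing.

```python
# Hedged sketch: true per-seat cost vs the headline quote.
# All line items and dollar amounts are illustrative assumptions --
# swap in the figures from your own vendor proposal.

headline_per_seat = 30.00  # quoted licence price, $/seat/month

hidden_per_seat = {
    "azure_consumption": 12.00,       # assumed consumption surcharge
    "premium_api_features": 8.00,     # assumed premium-tier add-on
    "e5_prerequisite_uplift": 15.00,  # assumed licence prerequisite delta
    "monitoring_and_support": 5.00,   # assumed ops/support overhead
}

true_per_seat = headline_per_seat + sum(hidden_per_seat.values())
multiplier = true_per_seat / headline_per_seat

print(f"Headline:  ${headline_per_seat:.2f}/seat/month")
print(f"True cost: ${true_per_seat:.2f}/seat/month ({multiplier:.1f}x the quote)")
```

Build this model with the vendor in the room: a vendor who cannot (or will not) populate each line item is confirming the red flag above.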

2. How are consumption units defined, and can the definition change mid-contract?

Why it matters: AI vendors use inconsistent and sometimes deliberately opaque unit definitions. Tokens are not standardised across providers. A "seat" may include or exclude API access. An "FSE" or "full-service equivalent" may count contractors, part-time workers, or inactive users differently. If the vendor can redefine what a consumption unit means mid-contract, your cost model becomes unreliable. For OpenAI pricing models explained, see our dedicated breakdown.

✅ Good answer: Units are explicitly defined in the contract with examples, and the definition is locked for the contract term.

🔴 Red flag: Unit definitions reference the vendor's "current documentation" or "standard practices," which can change unilaterally.

3. What happens to pricing when usage exceeds the contracted tier or committed volume?

Why it matters: AI usage is inherently unpredictable, especially in the first 12-18 months. Enterprises routinely underestimate consumption by 2-5x as adoption spreads beyond the pilot group. Overage pricing is where vendors recover the discounts they offered to win the deal. If overage rates are 2-3x the committed rate, a successful AI deployment becomes a budget crisis.

✅ Good answer: Overage rates are capped at no more than 125% of the committed per-unit price, with the ability to true-up to a higher commitment tier at the lower rate retroactively.

🔴 Red flag: Overage rates are at "list price" (typically 2-4x the negotiated rate) with no mid-term adjustment mechanism.
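The difference between a capped overage and list-price overage compounds quickly when usage runs ahead of commitment. The sketch below compares the two structures described above at 2x consumption; the rate and volumes are illustrative assumptions.

```python
# Hedged sketch: annual cost when usage runs 2x the committed volume,
# comparing a negotiated 125% overage cap against list-price overage
# (here assumed at 3x the committed rate). All figures are illustrative.

committed_rate = 3.00      # $/million tokens, negotiated
committed_volume = 1_000   # million tokens/year committed
actual_volume = 2_000      # adoption spreads; usage doubles

def annual_cost(overage_rate: float) -> float:
    """Committed spend plus overage units billed at the given rate."""
    overage_units = max(actual_volume - committed_volume, 0)
    return committed_volume * committed_rate + overage_units * overage_rate

capped = annual_cost(committed_rate * 1.25)   # negotiated cap
list_price = annual_cost(committed_rate * 3)  # typical uncapped outcome

print(f"With 125% overage cap: ${capped:,.0f}")
print(f"At list-price overage: ${list_price:,.0f}")
```

In this illustrative case the uncapped structure nearly doubles annual spend, which is why the cap and the retroactive true-up right belong in the contract, not in a side letter.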

4. Is there a pricing protection clause that limits annual increases at renewal?

Why it matters: Google increased Workspace pricing 17-22% overnight by bundling Gemini. Microsoft added Copilot at $30/seat with no opt-out path for E5 customers. AI vendors are repricing their entire portfolios as they embed AI capabilities. Without contractual pricing protections, your renewal price is whatever the vendor decides. Read negotiating Copilot volume discounts for practical tactics.

✅ Good answer: Annual price increases are capped at 3-5% for the contract term, with any mid-term SKU restructuring not resulting in a net price increase.

🔴 Red flag: The contract has no renewal pricing language, or includes a clause allowing "pricing adjustments to reflect changes in the service."

Assess Your Contract Readiness

Answer 10 questions to benchmark your AI procurement preparation against enterprise best practices.

Take the Assessment → Lock-In Risk Assessment

Data Rights and Privacy

5. Will our data be used to train or improve your models?

Why it matters: This is the foundational data rights question. If your prompts and documents are used to train the vendor's model, your proprietary information becomes embedded in a system that serves your competitors. Most enterprise-tier agreements now exclude customer data from training by default, but the contractual language varies significantly. Read enterprise AI data privacy: what your contract must include for the specific clauses to demand.

✅ Good answer: The contract contains an explicit, unconditional statement that customer data will not be used for model training, improvement, or any purpose other than delivering the contracted service.

🔴 Red flag: The training exclusion is buried in terms of service rather than the commercial agreement, or applies only to "enterprise" tier.

6. Where does inference processing occur, and can we restrict it geographically?

Why it matters: GDPR, data sovereignty laws, and industry-specific regulations require knowing where data is processed, not just stored. AI inference may route prompts through data centres in regions outside your compliance boundary. Cross-region inference on AWS Bedrock can route requests globally without notice. US-only inference options exist but typically carry a 10% price premium.

✅ Good answer: The contract specifies inference processing regions with a mechanism to restrict processing to compliant geographies.

🔴 Red flag: The vendor says data is "processed in our global infrastructure" with no option to constrain regions.

7. What data retention policies apply, and can we enforce deletion timelines?

Why it matters: AI interactions generate prompt logs, response caches, embeddings, and metadata. If your organisation processes sensitive data, retention of AI interaction logs may violate data minimisation requirements. You need contractual control over retention duration and a verifiable deletion mechanism.

✅ Good answer: Retention period is configurable (zero retention or 30-day maximum), with contractual commitment to deletion and audit rights to verify.

🔴 Red flag: The vendor retains "de-identified" interaction data indefinitely, with de-identification defined by the vendor.

8. Who owns the outputs generated by the AI using our data and prompts?

Why it matters: AI intellectual property rights and output ownership is legally unsettled in most jurisdictions. If your contract is silent on output ownership, you may face disputes about whether AI-generated code, analysis, documents, or creative content belongs to you, the vendor, or no one. For OpenAI-specific guidance, read IP rights in OpenAI enterprise agreements.

✅ Good answer: The contract explicitly assigns all rights in AI-generated outputs to the customer, with the vendor disclaiming any ownership interest.

🔴 Red flag: The contract is silent on output ownership, or grants the vendor a licence to use outputs for "service improvement."

Performance and Availability

9. What uptime SLA applies, and what are the financial remedies for breach?

Why it matters: Most AI API providers offer 99.9% uptime SLAs (8.7 hours of permitted downtime per year), but remedies are often limited to service credits of 10-25% of one month's fee. A 4-hour outage costing you $500,000 in lost revenue triggers a service credit of $2,000. The SLA is functionally meaningless without meaningful remedies. For guidance on negotiating SLAs with AI vendors, read 7 clauses you must push back on.

✅ Good answer: Uptime SLA of 99.95%+ with escalating credits (25% at 99.9%, 50% at 99.5%, 100% below 99.0%) and the right to terminate after SLA breaches in consecutive measurement periods.

🔴 Red flag: SLA is "commercially reasonable efforts" with no quantified uptime commitment.
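The mismatch between permitted downtime and remedy value is easy to quantify before signing. The sketch below reproduces the arithmetic from this section; the monthly fee and outage business cost are illustrative assumptions.

```python
# Hedged sketch: what a 99.9% SLA actually permits, and the gap between
# a typical service credit and the real cost of an outage. The monthly
# fee and business-impact figures are illustrative assumptions.

sla = 0.999
hours_per_year = 365 * 24
permitted_downtime = hours_per_year * (1 - sla)  # hours/year the SLA allows

monthly_fee = 20_000            # assumed platform fee, $/month
credit_rate = 0.10              # typical credit: 10% of one month's fee
outage_business_cost = 500_000  # assumed revenue impact of a 4-hour outage

credit = monthly_fee * credit_rate

print(f"Permitted downtime: {permitted_downtime:.1f} hours/year")
print(f"Service credit: ${credit:,.0f} vs business cost ${outage_business_cost:,.0f}")
```

Run this with your own fee and outage-cost estimates before the SLA discussion; it turns "the credit is meaningless" from an assertion into a number.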

10. What happens when the model version we depend on is deprecated?

Why it matters: AI vendors deprecate model versions faster than any previous software category. OpenAI has deprecated multiple GPT versions within months. Model deprecation is not like a software version upgrade. A new model version can produce materially different outputs, breaking applications that depend on consistent behaviour. If your contract does not address model lifecycle, the vendor can force-migrate you.

✅ Good answer: Minimum 12-month notice before deprecation of any model version the customer is actively using, with parallel access to both versions during transition at no additional cost.

🔴 Red flag: The vendor reserves the right to "update or modify models at any time" with no notice or transition period.

11. Are rate limits contractually guaranteed?

Why it matters: Rate limits determine your application's maximum throughput. If rate limits are documented in a pricing page but not in your contract, the vendor can reduce them unilaterally. During peak demand, throttled requests add latency, waste compute on retries, and can drop transactions outright.

✅ Good answer: Rate limits are specified in the contract with guaranteed minimums per endpoint, with provisioned capacity options for burst protection.

🔴 Red flag: Rate limits are "best effort" or reference documentation the vendor can update without notice.

12. How is model quality measured, and what recourse exists if quality degrades?

Why it matters: Unlike traditional software where functionality either works or is broken, AI model quality exists on a spectrum that can degrade subtly. A model update might reduce accuracy on your specific use case by 15% while improving average benchmark scores. For vendor evaluation methodology, read our enterprise AI vendor selection framework.

✅ Good answer: The contract includes acceptance testing, with the customer's right to benchmark performance and ability to revert to a previous model or terminate if quality falls below baseline.

🔴 Red flag: The vendor says quality is "continuously improving" with no mechanism for customer-defined quality measurement.

Need an Independent Review Before Signing?

Our team has reviewed dozens of enterprise AI agreements across OpenAI, Anthropic, Google, and Microsoft. We catch the clauses that internal legal teams miss. Fixed-fee, vendor-independent.

GenAI Advisory Services → GPT Strategy & Negotiation

Lock-In and Exit

13. What is the total cost and timeline to exit this contract and migrate?

Why it matters: AI vendor lock-in operates differently from traditional SaaS. Beyond data export, you face prompt library migration, fine-tuning recreation, integration rewiring, and application code changes. An enterprise with 50 AI-powered workflows faces 6-12 months and $500K-$2M in migration costs. Read multi-vendor AI strategy for mitigation approaches.

✅ Good answer: The contract includes data portability, API compatibility commitments, and a 90-day transition assistance period post-termination.

🔴 Red flag: Data export only in proprietary formats, and access terminates immediately upon contract expiration.

14. Can we use multiple AI vendors simultaneously without penalty?

Why it matters: Multi-model strategies are becoming standard. Enterprises route simple requests to cheaper models, complex requests to premium models. If your contract contains exclusivity clauses or volume commitments that penalise multi-vendor usage, you lose negotiation leverage at renewal.

✅ Good answer: No exclusivity requirements. Volume commitments are based on minimum spend, not percentage of total AI spend.

🔴 Red flag: The contract includes a "preferred vendor" clause requiring a minimum percentage of AI workloads through the contracted vendor.
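The multi-model routing pattern described above can be sketched in a few lines. The model names, rates, and the complexity heuristic below are all illustrative assumptions, not real vendor identifiers; production routers use classifiers or scoring far richer than a length check.

```python
# Hedged sketch of tiered model routing: simple requests go to a cheap
# model, complex ones to a premium model. Names, prices, and the
# routing heuristic are illustrative assumptions.

CHEAP = ("small-model", 0.50)     # assumed $/million tokens
PREMIUM = ("large-model", 15.00)  # assumed $/million tokens

def route(prompt: str, needs_reasoning: bool = False) -> tuple[str, float]:
    """Pick a model tier. A real router would use a classifier or
    task-specific heuristics, not just prompt length and a flag."""
    if needs_reasoning or len(prompt) > 2_000:
        return PREMIUM
    return CHEAP

model, rate = route("Summarise this ticket in one line.")
print(f"Routed to {model} at ${rate}/M tokens")
```

The commercial point: if your contract forces a minimum percentage of workloads through one vendor, this routing logic (and the 30x illustrative price gap it exploits) becomes contractually unavailable.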

15. What happens to fine-tuned models, embeddings, and knowledge bases at termination?

Why it matters: Fine-tuning creates an asset on the vendor's infrastructure. In most cases, the fine-tuned model cannot be transferred. Your investment in fine-tuning (training data preparation, iteration, evaluation) is lost at termination. The same applies to vector embeddings and agent configurations.

✅ Good answer: Fine-tuned weights are exportable (for open-source base models) or the vendor provides a transition period to re-create fine-tuning. All embeddings and configurations are exportable in standard formats.

🔴 Red flag: Fine-tuned models are "non-transferable" and knowledge bases can only be exported as raw source documents.

16. Is there an auto-renewal clause, and what is the notice period?

Why it matters: Auto-renewal in AI contracts is particularly dangerous because AI model costs are dropping 50-70% per generation (roughly every 12-18 months). An auto-renewed contract at 2024 pricing in 2026 can mean paying two to three times current market rates.

✅ Good answer: Auto-renewal is eliminated or set to month-to-month continuation after the initial term, with 30-day notice to terminate.

🔴 Red flag: Auto-renewal for a full 12+ month term with 90-120 day notice and pricing at the vendor's then-current list rates.
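The deflation risk above is worth computing explicitly before accepting any auto-renewal term. The sketch below uses an assumed starting rate and a 60% per-generation price drop (within the 50-70% range this section cites); both figures are illustrative.

```python
# Hedged sketch: overpayment from auto-renewing at a fixed price while
# market rates deflate per model generation. The starting rate and the
# 60% deflation figure are illustrative assumptions.

contracted_rate = 10.00          # $/million tokens, locked at signing
deflation_per_generation = 0.60  # assumed price drop per generation
generations = 1                  # one generation ~ every 12-18 months

market_rate = contracted_rate * (1 - deflation_per_generation) ** generations
ratio = contracted_rate / market_rate

print(f"Market rate after {generations} generation(s): ${market_rate:.2f}/M tokens")
print(f"Auto-renewed contract pays {ratio:.1f}x the market rate")
```

One generation of deflation already puts the locked-in price at 2.5x market in this illustration, which is why month-to-month continuation with a short notice period is the good answer above.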

Governance and Compliance

17. Do compliance certifications cover AI-specific processing?

Why it matters: SOC 2, ISO 27001, and HIPAA compliance for cloud infrastructure does not automatically extend to AI model inference. The compliance boundary for AI processing may differ from the hosting environment. Verify certifications explicitly cover the AI inference pipeline.

✅ Good answer: Compliance documentation explicitly covers AI inference processing, including data handling during model invocation and output delivery.

🔴 Red flag: Certifications reference "our cloud infrastructure" without specifying whether AI-specific processing is in scope.

18. Can we audit the vendor's AI data handling practices?

Why it matters: A contractual commitment that data is not used for training is only as reliable as your ability to verify it. Enterprise agreements should include audit rights covering AI-specific data handling. Read negotiating AI data usage and privacy terms in Microsoft contracts for vendor-specific guidance.

✅ Good answer: Annual third-party audit reports covering AI data handling are provided at no cost, with the right to commission an independent audit.

🔴 Red flag: No audit rights, or rights limited to "general security practices" that do not cover model training data pipelines.

19. Who is liable if the AI produces outputs that cause harm?

Why it matters: AI outputs can generate incorrect medical guidance, flawed legal analysis, inaccurate financial calculations, discriminatory hiring recommendations, and defamatory content. Most vendor agreements disclaim all liability for output accuracy, meaning your organisation bears the full risk of every AI-generated error.

✅ Good answer: A mutual limitation of liability framework that does not cap the vendor's liability for data breaches, training data misuse, or failure to comply with data handling commitments.

🔴 Red flag: The vendor disclaims all liability including for data handling failures, with total liability capped at 12 months' fees.

20. Does the contract address AI-specific regulations that may emerge during the term?

Why it matters: The EU AI Act is being implemented through 2027. US state-level AI regulations are proliferating. A 3-year contract signed today will span significant regulatory change. If your contract does not address how new compliance requirements will be handled, you may be locked into a non-compliant platform.

✅ Good answer: A regulatory change clause requiring the vendor to implement changes necessary for compliance at no additional cost, with the customer's right to terminate if compliance cannot be achieved.

🔴 Red flag: No regulatory change clause, or the vendor passes all compliance costs to the customer.

The Meta-Question: Do You Have Independent Expertise at the Table?

These twenty questions are necessary but not sufficient. Asking the right questions is only valuable if you can evaluate the answers. AI vendor sales teams are trained to provide responses that sound comprehensive while preserving commercial flexibility.

Enterprise AI contracts sit at the intersection of cloud procurement, data privacy law, intellectual property, and a technology category evolving faster than legal precedent can address. Most procurement teams lack internal expertise across all four domains. This gap is where independent advisory firms provide the highest value. An advisor who has reviewed dozens of AI vendor contracts across multiple providers can identify the specific clauses, pricing structures, and term interactions that create risk.

📈 Case Study

Lowe's achieved $1.2M in AI cost avoidance through independent procurement advisory. Read the case study →

Frequently Asked Questions

Do these questions apply to all AI vendors or just OpenAI?

All twenty questions apply to every enterprise AI vendor: OpenAI, Anthropic, Google, Microsoft, Salesforce, AWS, and specialised providers. The specific contract language differs, but the underlying risks (opaque pricing, data training rights, model deprecation, lock-in) are universal. Read our vendor-specific guides for negotiating OpenAI contracts and including Azure OpenAI in your Microsoft EA.

How much does AI vendor pricing typically differ from the initial quote?

True cost is typically 3-5x the headline quote. A $30/seat Copilot licence becomes $55-$80 with prerequisites and Azure consumption. A $3/million-token API becomes $8-$12 with infrastructure costs. See Azure OpenAI pricing explained and Microsoft Copilot cost per user for detailed breakdowns.

Can AI vendor contracts be negotiated?

Yes. Enterprise-tier contracts from OpenAI, Microsoft, Google, and others are negotiable. Data handling, pricing caps, SLA terms, auto-renewal, and liability caps are all negotiable with the right leverage. Read OpenAI enterprise procurement negotiation playbook and CIO playbook for OpenAI contracts.

What is the biggest risk in AI contracts today?

Vendor lock-in combined with rapid price deflation. AI model costs drop 50-70% per generation. An auto-renewed contract at 2024 pricing in 2026 can mean paying two to three times current market rates. Take our AI vendor lock-in risk assessment to evaluate your exposure.

Should we use a multi-vendor AI strategy?

Yes. Multi-model strategies reduce lock-in risk, create competitive leverage, and optimise cost (routing simple tasks to cheaper models). Read multi-vendor AI strategy for enterprise and ensure your contracts do not contain exclusivity clauses that penalise this approach.

How should we handle Microsoft Copilot licensing?

Copilot's $30/seat flat fee creates waste because 75-85% of users are light or non-users. Negotiate phased rollout, usage-based pricing where possible, and avoid locking in full-organisation licences before proving adoption. Read Copilot licensing guide 2026, Copilot pilot programme guide, and Copilot ROI assessment.

Should we hire an independent adviser for AI contracts?

Yes. AI contracts sit at the intersection of cloud procurement, data privacy, IP law, and rapidly evolving technology. Most internal teams lack expertise across all four domains. Independent advisers bring benchmarking data and multi-vendor contract experience. See our GenAI negotiation services and GenAI case studies.

Fredrik Filipsson

Co-Founder, Redress Compliance

Fredrik Filipsson brings over 20 years of experience in enterprise software licensing, including senior roles at IBM, SAP, and Oracle. For the past 11 years, he has advised Fortune 500 companies and large enterprises on complex licensing challenges, contract negotiations, and vendor management across Oracle, Microsoft, SAP, IBM, Salesforce, Broadcom, and GenAI engagements.

LinkedIn Profile →   View All Posts →
