Enterprise AI Procurement Framework: 20 Questions Every CIO Must Ask Before Signing an AI Contract

AI vendors are moving faster than enterprise legal and procurement teams. New model versions arrive quarterly. Pricing models change with minimal notice. Data terms that seemed reasonable at contract signature become material risks as AI becomes embedded in mission-critical workflows. The enterprises that protect themselves are not the ones with the most sophisticated AI strategies; they are the ones that ask the right questions before signing, not after.

This framework is structured around 20 questions covering every dimension of enterprise AI contract risk: data governance, output ownership and IP, model deprecation, pricing stability, audit and explainability rights, GDPR and EU AI Act compliance, liability caps, and exit provisions. It is applicable to all major AI vendors: OpenAI, Anthropic, Google, Microsoft Copilot, AWS Bedrock, Meta Llama, Mistral, Cohere, and any future entrant. Use it as a pre-signature checklist alongside our Enterprise AI Governance Contracts guide, our Vendor Lock-In Risk Scoring Framework, and the renewal timing context in our Enterprise Software Renewal Calendar.

Need Independent Review of Your AI Vendor Agreement?

Our GenAI advisory team reviews enterprise AI agreements against this 20-question framework, identifying the provisions that need strengthening, the clauses that carry hidden risk, and the commercial terms that are negotiable before you commit.

Request an AI Contract Review

Section 1: Data Governance (Questions 1–5)

Q1. Is my organisation's data explicitly excluded from model training?

Why it matters: AI vendors have a commercial incentive to use enterprise data to improve their models. Without an explicit contractual prohibition, the default position may allow training use of your prompts, inputs, outputs, and usage patterns.

What to require: A written, contractually binding opt-out covering all data categories: prompt inputs, model outputs, metadata about usage patterns, and fine-tuning datasets. "Policy statements" are not sufficient; the prohibition must be in your contract or addendum.

Vendor positions: OpenAI Enterprise and Anthropic Enterprise both provide contractual opt-outs as standard. Google Vertex AI and AWS Bedrock provide them by default in service terms. Pay-as-you-go tiers at most vendors do not provide equivalent protections, so confirm which commercial tier your agreement uses.

Q2. Where is my data processed and stored?

Why it matters: Data residency requirements under GDPR, financial services regulation, and industry frameworks may restrict where AI inference can occur. A contract that does not specify processing location leaves this to the vendor's infrastructure decisions.

What to require: Explicit specification of the region(s) where your data will be processed. For EU organisations, confirm EU-only processing is available and contractually committed. For US organisations with healthcare (HIPAA) or financial services (FedRAMP) requirements, confirm the specific compliance boundary.

Q3. Which sub-processors handle my data?

Why it matters: AI inference often involves multiple sub-processors: cloud infrastructure providers, safety monitoring services, and human review teams. Under GDPR Article 28, you must know who they are and have the right to object to new sub-processors.

What to require: A current sub-processor list and the contractual right to receive notification of, and object to, new sub-processors. This is a standard GDPR requirement; any vendor declining to provide it should be a significant procurement concern.

Q4. What are the vendor's data retention and deletion obligations?

Why it matters: Your prompts may contain personal data subject to GDPR right-to-erasure requests. If the vendor retains prompt logs for 30, 60, or 90 days (standard for safety monitoring purposes), you need to know this to fulfil your own data subject obligations.

What to require: Explicit retention periods for all data categories (prompts, outputs, usage logs) and a contractual obligation to delete on your request within the GDPR-required timeframe. Confirm the vendor's process for supporting Article 17 erasure requests.

Q5. Does the vendor have human reviewers accessing my prompts and outputs?

Why it matters: Most AI vendors reserve the right to have human reviewers access interactions for safety and quality purposes. For enterprises processing confidential legal, financial, or strategic information, this creates a confidentiality risk that standard enterprise confidentiality provisions may not adequately address.

What to require: Explicit restrictions on human review of your organisation's data, or at minimum, strict confidentiality obligations on any human reviewer, a prohibition on use of reviewed content for commercial purposes, and a notification commitment if your data is reviewed.

Section 2: Output Ownership & IP (Questions 6–8)

Q6. Who owns the outputs the AI produces for my organisation?

Why it matters: AI output ownership is commercially significant when outputs are used in products, published content, or commercial decisions. Most enterprise AI vendors assign output ownership to the customer โ€” but verify this is explicit in your contract, not just implied.

What to require: An explicit clause assigning all right, title, and interest in model outputs to your organisation, to the extent such outputs are copyrightable. Note that copyright protection for AI outputs is uncertain in most jurisdictions; the contract clause is a starting point, not a complete IP solution.

Q7. Does the vendor indemnify me for third-party IP claims arising from AI outputs?

Why it matters: If a third party claims that AI outputs infringe their copyright, the question is who bears the defence cost and liability. This is the single most commercially significant IP provision in AI contracts, and vendor positions vary considerably.

What to require: Explicit IP indemnification for claims arising from AI model outputs when used within the terms of your agreement. Understand the scope limits: most vendor indemnifications are conditioned on use-policy compliance, exclude fine-tuned model outputs, and cap the indemnification at contract value. For high-risk use cases (commercial content generation, code for commercial products), negotiate broader indemnification scope.

Vendor positions: OpenAI provides indemnification for enterprise API customers. Google provides IP indemnification via its standard Google Cloud indemnification framework. Anthropic provides it under enterprise agreements. Meta Llama provides none; see our Meta Llama licensing guide.

Q8. Does the vendor retain any rights to fine-tuned models I create using their platform?

Why it matters: When you fine-tune a vendor's model on your proprietary data, the resulting fine-tuned model may be subject to the vendor's base licence terms. Some vendors retain broad rights over derivative models that limit your ability to export or use the fine-tuned model outside their platform.

What to require: Clear terms establishing that fine-tuned model weights are your property, can be exported on termination, and are not subject to vendor rights beyond the base model licence. If the vendor cannot grant export rights to fine-tuned weights, factor this into your lock-in risk score.

Section 3: Model Stability & Deprecation (Questions 9–11)

Q9. How much notice will the vendor provide before deprecating a model version my application relies on?

Why it matters: AI vendors update and deprecate models frequently, often with 30–90 days' notice. If your production application is built on a specific model version, a deprecation can break your application with minimal warning.

What to require: Minimum 180-day deprecation notice for models in production use, with version-pinning rights during the notice period. The 30-day notice periods common in standard API terms are insufficient for production enterprise deployments.
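The two requirements above are mechanical enough to sketch in code. A minimal illustration, using hypothetical model identifiers (not any vendor's actual model IDs) of the difference between a pinned snapshot and a floating alias, plus a check of a deprecation notice against a 180-day contractual minimum:

```python
from datetime import date

# Illustrative identifiers only, not any vendor's actual model IDs.
PINNED_MODEL = "example-model-2025-01-15"   # dated snapshot: behaviour stays fixed
FLOATING_ALIAS = "example-model-latest"     # alias: silently tracks new releases

def notice_meets_contract(announced: date, sunset: date,
                          required_days: int = 180) -> bool:
    """Return True if the vendor's deprecation window satisfies the
    contractual minimum notice period (180 days by default)."""
    return (sunset - announced).days >= required_days

# A 90-day window fails a 180-day contractual requirement.
notice_meets_contract(date(2026, 1, 1), date(2026, 4, 1))
```

Production references should always use the pinned form; the floating alias is the mechanism by which "silent" behaviour changes reach your workloads.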

Q10. Can the vendor materially change model behaviour without my consent?

Why it matters: A model update that changes output quality, format, or safety filtering can break application behaviour even without a formal deprecation. Enterprises that have built workflows around specific model characteristics can be materially disrupted by silent updates.

What to require: Advance notification of material changes to model behaviour: specifically, changes to output format, safety filtering thresholds, or capability characteristics. For compliance-sensitive use cases (regulatory document analysis, financial reporting), negotiate the right to validate model updates before they apply to your production workloads.

Q11. What happens to my fine-tuned models if the vendor is acquired or goes out of business?

Why it matters: The AI vendor market is consolidating. Model providers are acquisition targets, and a vendor acquisition can result in immediate changes to pricing, terms, and platform availability. If your fine-tuned models reside exclusively on the vendor's infrastructure, an acquisition puts your AI capability at risk.

What to require: An escrow or export provision that allows you to retrieve fine-tuned model weights in the event of vendor acquisition, insolvency, or material change of control. This is uncommon in standard AI agreements but negotiable for large-volume customers.

Download the AI Procurement Checklist

The complete 20-question AI procurement framework as a printable checklist for your legal and procurement team to use at every AI vendor review.

Access AI Procurement Checklist →

Section 4: Pricing Stability (Questions 12–13)

Q12. Are the prices in my agreement locked for the contract term?

Why it matters: AI platform pricing has been declining rapidly as competition intensifies, but there is no guarantee this continues. And for committed-use or enterprise agreements, the risk runs in both directions: you want protection against price increases, while the vendor wants protection against continued price compression.

What to require: Explicit price locks for the duration of your initial contract term. For multi-year agreements, negotiate maximum annual escalation caps (3–5%, CPI-linked) on any variable pricing components. For token-based pricing, negotiate "most favoured customer" provisions ensuring you receive any general price reductions offered to comparable customers.
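To see what an escalation cap is worth, a small arithmetic sketch with illustrative figures (not any vendor's actual pricing): each year's effective increase is the lesser of the vendor's proposed rise and the contractual cap.

```python
def capped_fees(base_annual: float, proposed_increases: list[float],
                cap: float = 0.04) -> list[float]:
    """Apply an annual escalation cap: each year's effective increase is
    the lesser of the vendor's proposed increase and the contractual cap."""
    fees, current = [], base_annual
    for proposed in proposed_increases:
        current *= 1 + min(proposed, cap)
        fees.append(round(current, 2))
    return fees

# $1.0M base fee; vendor proposes 10%, 3%, and 8% rises over three renewals.
# With a 4% cap, year-three fees are ~$1.11M rather than ~$1.22M uncapped.
capped_fees(1_000_000, [0.10, 0.03, 0.08])
```

The cap binds only in the years the vendor's proposed rise exceeds it (years one and three here); a below-cap increase passes through unchanged.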

Q13. If the vendor introduces a new pricing model mid-term, what protections apply?

Why it matters: AI vendors have changed pricing models significantly, moving between per-token, per-call, per-user, and capacity-based pricing. A mid-term pricing model change can disrupt your cost assumptions even if headline rates appear unchanged.

What to require: A provision requiring the vendor to maintain the pricing model (not just the headline rate) in force at contract signature for the duration of your agreement, or to provide equivalent value under any new model with a minimum 90-day parallel-run period before transition.

Section 5: Audit, Explainability & Compliance (Questions 14–16)

Q14. Can I audit the vendor's data processing practices?

Why it matters: Financial services, healthcare, and public sector organisations operate under compliance frameworks that require evidence of vendor data processing controls. Direct audit rights are rarely granted, but a structured alternative is achievable.

What to require: Annual SOC 2 Type II reports covering the specific services in your agreement (not just the platform generally). The SOC 2 scope must explicitly cover AI inference workloads, training data governance, and sub-processor oversight, not just platform security controls. Request the most recent report as part of due diligence before contract signature.

Q15. Can I get explanations for AI outputs used in regulated decisions?

Why it matters: The EU AI Act and GDPR's automated decision-making provisions (Article 22) require explainability for AI decisions affecting individuals: credit decisions, HR assessments, insurance pricing, and healthcare recommendations. Many AI vendors cannot provide the output-level explanations these frameworks require.

What to require: Confirm the vendor's explainability capabilities for your specific use case before deployment. For high-risk AI applications, ensure human oversight is built into the workflow; AI vendor contracts alone cannot satisfy the explainability requirements of EU AI Act high-risk categories.

Q16. Does the vendor's agreement support my GDPR Article 22 obligations?

Why it matters: GDPR Article 22 gives data subjects the right not to be subject to solely automated decisions with significant effects. If your AI system makes or substantially influences decisions about individuals, you need specific contractual support from your AI vendor to fulfil these obligations.

What to require: Confirm the vendor's DPA explicitly covers Article 22 scenarios relevant to your use case. Ensure human-in-the-loop mechanisms are architecturally possible within the vendor's platform, and that the contract supports your ability to implement them.

Section 6: Liability (Questions 17–18)

Q17. What is the vendor's liability cap, and is it proportionate to my risk exposure?

Why it matters: Standard AI vendor liability caps (typically 12 months of fees paid) are not proportionate to the business impact of AI errors in high-stakes deployments. For a $2M/year AI platform contract, the cap is $2M. For an AI system generating customer-facing advice that turns out to be materially incorrect, $2M may not cover the downstream liability.

What to require: For high-risk AI use cases, negotiate higher liability caps (3–5x annual fees) or structured carve-outs from the standard cap for specific liability categories. Alternatively, design your deployment architecture to ensure AI outputs in high-stakes contexts are reviewed by a human before acting on them, which partially mitigates the need for higher contractual caps.

Q18. Are there carved-out liability categories that expose my organisation to unprotected risk?

Why it matters: Virtually all AI vendor contracts exclude consequential loss: the downstream business impact of AI errors. Some also exclude liability for specific output categories (legal advice, medical advice, financial advice) even when the vendor's product is specifically marketed for these use cases.

What to require: Read the exclusion clauses carefully. If the vendor's marketing claims capability for a use case but the contract excludes liability for that category, you are accepting full risk for failures in that use case. Align your deployment risk appetite with the contractual risk position.

Section 7: Exit & Portability (Questions 19–20)

Q19. Can I exit the agreement with reasonable notice, and what are the exit costs?

Why it matters: AI vendor relationships that started as experiments become embedded infrastructure quickly. Without exit rights, what starts as an API contract becomes a strategic dependency. Apply the Vendor Lock-In Risk Scoring Framework to quantify exit costs before signature.

What to require: Termination for convenience with 30–90 days' notice for agreements under 2 years. For longer commitments, negotiate break clauses at 12-month intervals. Exit penalties should not exceed 3 months of fees; anything higher requires specific commercial justification.

Q20. What migration support will the vendor provide after termination?

Why it matters: Exiting an AI vendor is technically complex: you need to migrate fine-tuned models, re-embed document corpora, update integration code, and potentially re-train internal teams. Most vendors provide no migration support by default.

What to require: A post-termination transition period of 90–180 days during which the vendor provides (a) continued read-only access to export data, (b) API access for migration purposes, and (c) reasonable cooperation with migration activities. This provision is rarely proactively offered but is frequently accepted when asked, particularly by vendors confident in their product quality.

Applying the Framework: Scoring Your AI Contract

Score each question as: Fully addressed (2 points), Partially addressed (1 point), Not addressed (0 points). A total score of 35โ€“40 indicates a well-structured enterprise AI agreement. A score below 25 warrants renegotiation before signature. A score below 15 should trigger independent advisory review before proceeding.
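The scoring rule can be sketched in a few lines of Python. Note that the framework leaves the 25–34 band without an explicit verdict, so the label used for it below is our interpolation, not part of the stated rule:

```python
def score_contract(answers: dict[int, int]) -> tuple[int, str]:
    """answers maps question number (1-20) to 2 (fully addressed),
    1 (partially addressed), or 0 (not addressed)."""
    assert set(answers) == set(range(1, 21)), "answer all 20 questions"
    total = sum(answers.values())
    if total >= 35:
        verdict = "well-structured enterprise AI agreement"
    elif total >= 25:
        # Band not explicitly defined in the framework; interpolated label.
        verdict = "acceptable, but strengthen the weakest areas"
    elif total >= 15:
        verdict = "renegotiate before signature"
    else:
        verdict = "independent advisory review before proceeding"
    return total, verdict

# Twenty partial answers (1 point each) total 20: renegotiate before signature.
score_contract({q: 1 for q in range(1, 21)})
```

The assertion guards against the most common scoring mistake in practice: skipping questions that do not apply to the current vendor, which silently deflates the total.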

For assistance applying this framework to your specific AI vendor agreements, book a confidential call with our GenAI advisory team. We typically complete AI contract reviews within 5 business days.