CIO Playbook: Negotiating OpenAI Contracts for Generative AI

Adopting generative AI at an enterprise level promises innovation and efficiency, but it also brings new risks and contractual complexities.

As a CIO or licensing professional, securing a favorable contract with OpenAI is essential to protect your organization’s data, intellectual property, and interests.

This playbook provides a comprehensive guide, in clear, practical terms, to negotiating key terms in an enterprise agreement for OpenAI’s generative AI services (such as ChatGPT Enterprise or API access).

We cover critical areas like data privacy, IP ownership, compliance, security, and more, with real-world examples and actionable recommendations.

Use this guide to ensure your contract enables you to harness the benefits of AI while safeguarding your organization.


1. Data Privacy

Protecting sensitive data is paramount.

Ensure the contract defines how OpenAI will handle your data (the prompts you send and the AI-generated outputs).

At a minimum, negotiate provisions so that:

  • Your data remains confidential: All inputs you provide and outputs generated should be treated as your confidential information. The agreement should prohibit OpenAI from sharing your data with third parties or using it for any purpose other than providing the service. By default, OpenAI commits not to train on business customer data, and you should cement this in the contract.
  • Data retention is under your control: Ideally, you decide how long data is stored on OpenAI’s servers. For example, ChatGPT Enterprise allows organizations to set retention policies (even zero retention) for user conversations. Ensure you have the right to request data deletion and that OpenAI will confirm the deletion. This is important for compliance with laws like GDPR’s “right to be forgotten” and limiting exposure of old data.
  • Compliance with privacy laws: Include a Data Processing Addendum (DPA) if you’ll handle personal data. OpenAI offers a standard DPA – ensure it’s signed and attached to your contract. The DPA should detail GDPR, CCPA, or other relevant privacy law compliance, with OpenAI acting as a processor on your instructions. If you operate in sectors such as healthcare or finance, verify any additional requirements (e.g., HIPAA compliance through a Business Associate Agreement for the handling of health data).
  • Real-world example: In 2023, Samsung engineers inadvertently leaked sensitive source code by pasting it into ChatGPT, prompting Samsung to temporarily ban employees from using such AI tools. The incident shows why a robust privacy clause matters: it is your contractual backstop against unauthorized use of your data. By negotiating strict privacy terms (and coupling them with internal policies restricting what data can be input), you reduce the risk of confidential information escaping your control.

Key takeaways:

Be explicit that all data you send to OpenAI and all results are confidential and remain your property.

The contract should forbid OpenAI from mining or monetizing your data in any way. Require robust data safeguards, deletion rights, and compliance with privacy regulations.

These steps ensure your company’s secrets and customer data won’t become someone else’s training material or liability.


2. Intellectual Property Ownership

Clarify who owns what – your inputs to the AI and its outputs.

OpenAI’s standard business terms are favorable: as the customer, you retain ownership of your inputs and the AI’s outputs.

However, you should still nail down details in the contract:

  • Ownership of outputs: The contract should state that between you and OpenAI, you own all AI-generated output that you “receive” from the service based on your prompts. OpenAI’s terms assign to you any rights it has in the output. This means you can use the AI’s responses freely in your business – incorporate them into products, reports, code, etc., without fearing that OpenAI will later claim copyright or prevent your use. For example, if the AI assists your team in generating marketing copy or software code, your company should own that text or code.
  • Your inputs remain yours: Any data or content you provide to OpenAI (e.g., proprietary documents or data you use as prompts) should remain your property. OpenAI should not gain any ownership over it. Ensure the contract acknowledges your ownership and/or rights to your input data.
  • License back to OpenAI (limited purpose): It’s acceptable for OpenAI to have a limited license to use your input and output solely to perform the service (this is usually implied, allowing the AI to process your query). But avoid any broad license that would allow other uses. The agreement should clearly state that OpenAI can use your content solely to deliver results to you, and for no other purpose.
  • Responsibility for IP issues in outputs: Even though you will own the outputs, owning them doesn’t automatically guarantee they are free of third-party IP claims. OpenAI’s terms hold the user responsible for ensuring that the output doesn’t violate laws or the rights of others. In practice, AI systems might inadvertently generate content that is similar to copyrighted material or patented ideas. You should negotiate warranties or indemnities (see the Indemnification section) to protect yourself if this happens. At a minimum, ask OpenAI to warrant that, to its knowledge, the service does not deliver plagiarized text or infringing code. While they may not promise perfection, raising this concern sets the stage for them to assist if an issue arises.
  • Example scenario: Your legal team might worry, “If the AI outputs a paragraph that matches an article from The New York Times, do we have the right to use it?” You would own that output by contract, but The New York Times still retains ownership of its article. To mitigate risk, the contract and usage policies put the onus on you to review and filter outputs for IP conflicts. In practice, you may implement a rule that requires any AI-generated content intended for publication to be checked for plagiarism or undergo a legal review. Contractually, you could also seek indemnification from OpenAI for copyright claims (more on that later). The key is understanding the shared risk: OpenAI provides the tool and gives you ownership of results, but you must use those results responsibly.

Key takeaways: Use language that clearly states you own all inputs you provide and all outputs generated for you.

This gives you the freedom to commercialize and modify AI-produced content as needed.

However, pair this with internal processes and, where possible, vendor commitments to address the quality and legality of those outputs.

That way, you get the benefits of AI-generated IP with fewer legal surprises.


3. Usage Restrictions and Compliance

OpenAI will have usage policies that you must follow as an enterprise customer.

It’s crucial to understand these use-case restrictions and ensure they align with your intended AI applications.

In negotiation, you want to both comply with OpenAI’s rules and meet your compliance obligations:

  • Respect OpenAI’s usage policies: OpenAI’s standard terms prohibit certain activities. Common restrictions include not using the service for illegal purposes, not attempting to reverse engineer or steal the model, and not using the AI to generate disallowed content (such as hate speech, malware, etc.). One notable restriction is that you may not use the output to develop models that compete with OpenAI. Make sure these rules work for you. For most companies, they’re reasonable – e.g., you likely aren’t planning to build a competing large language model from ChatGPT outputs. However, flag any that might be problematic given your business plans. If, for instance, your strategy involves using AI outputs to improve your machine learning models, clarify with OpenAI where the line is (they often allow using outputs for analytics or fine-tuning your own smaller models, but not to directly clone GPT’s capabilities). Obtain written clarification or revised terms if necessary, to ensure you don’t inadvertently breach the contract.
  • Ensure industry-specific compliance: You are responsible for adhering to laws in your sector. The AI’s use should not cause you to violate regulations (for example, privacy laws, financial regulations, or healthcare confidentiality). If you’re in a regulated industry, negotiate terms that acknowledge these requirements. For instance, OpenAI’s terms explicitly forbid using the service with protected health information (PHI) unless you sign a special healthcare addendum. A hospital or insurance company must obtain a HIPAA Business Associate Agreement (BAA) from OpenAI before processing patient data with the AI. Similarly, if you’re in finance and plan to use AI to assist with customer communications, ensure that the usage complies with SEC/FINRA guidelines and that OpenAI knows you’ll use it in that context. You might include a clause that OpenAI will reasonably assist you in compliance efforts (for example, by providing documentation of how data is handled for your auditors).
  • Thorough testing for high-risk use cases: Certain uses of generative AI carry higher risks (e.g., providing legal or medical advice, making hiring decisions). OpenAI’s policy notes that these cases require extra care. If your use falls into these categories, commit in the contract (or at least internally) to test the AI’s outputs for accuracy and biases before relying on them. You might also need to inform users when AI is involved in producing content (transparency obligations). While this may not be included in the contract, it’s part of compliance, and OpenAI may require you to do so as a condition of use. For example, if using ChatGPT to offer financial planning tips to customers, you should both contractually and operationally ensure a qualified professional reviews those tips and include disclaimers that an AI is involved, as required by OpenAI’s usage guidelines.
  • Geographic and export compliance: Ensure the contract doesn’t prevent usage in the countries you operate in and that you’re aware of any restricted regions. OpenAI, being a U.S. company, must follow export controls – its services can’t be used in certain sanctioned countries. If you have offices in embargoed regions, you must prevent them from accessing the service. Confirm any such limitations upfront to avoid breaches. Also, if your data is extremely sensitive, consider whether any export classification issues arise from sending it to OpenAI’s US-based servers.
  • Example – Compliance in Practice: A European bank wants to utilize GPT-4 to generate portions of customer reports. They must comply with GDPR and banking secrecy laws. In negotiations, the bank insists on a DPA (for GDPR) and includes a clause that OpenAI will process data only in compliance with EU privacy standards. The bank also notes that some data (like personal account info) won’t be sent to the AI to avoid regulatory issues. Additionally, they review OpenAI’s usage policy to ensure nothing in their plan (such as analyzing customer financial data) violates it. By addressing this in the contract and implementation plan, the bank can confidently deploy the AI without running afoul of compliance requirements.

Key takeaways: Align OpenAI’s usage rules with your business needs; if any are too restrictive or unclear, resolve them during the contracting process.

Make compliance a shared responsibility: OpenAI should commit to supporting legal compliance (e.g., providing necessary agreements and transparency) while you commit to using the AI responsibly within legal and ethical boundaries.

Ultimately, you don’t want to sign a contract that prevents a critical use case or inadvertently allows a misuse; negotiate for clarity and balance.

4. Model Transparency

Enterprise leaders often require transparency in AI systems to build trust and meet governance obligations.

While OpenAI’s models are largely “black boxes” (the proprietary models and training data aren’t fully open), you should negotiate for as much insight and transparency as feasible:

  • Documentation of model behavior: Ask OpenAI to provide any system cards, model documentation, or transparency reports for the model you’ll be using. These documents (for example, OpenAI has published a “System Card” for GPT-4) describe the model’s intended uses, limitations, and performance characteristics. They may include information on the scope of the training data (e.g., “trained on a broad corpus of internet text up to September 2021”), known biases or ethical challenges, and how the model was tested. This information helps your team understand what the model can and cannot do. For instance, if the documentation notes that the model may produce incorrect financial calculations, you’ll know not to rely on it for that without verification.
  • Disclosure of updates and changes: The contract should require OpenAI to notify you of significant changes to the model or service. AI models can evolve – OpenAI might update the model’s algorithms, training data, or safety filters during your contract term. You don’t want surprises in behavior. Negotiate a clause that requires them to inform you in advance if they deploy a new model version or make a major change (e.g., switching from GPT-4 to a hypothetical GPT-5 or altering the content moderation system). Ideally, you’d get to test the updated model in a sandbox before it goes live for your users. This transparency allows you to validate that the new version meets your requirements and doesn’t introduce new risks.
  • Explainability and audit support: Complete explainability of large language models is an unsolved problem, but you can still ask for tools or support to audit the AI’s outputs. For example, ask if OpenAI can provide log data or reasoning traces for certain queries (there’s a feature in some AI systems where you can see which training data snippets influenced an answer – OpenAI does not publicly offer this, but as an enterprise customer, you can inquire about any capabilities for interpretability). At the very least, ensure you can access logs of all prompts and outputs generated by your team. Those logs allow you to do an offline review to understand patterns and potentially deduce why the AI responded a certain way. Maintaining logs is also important for compliance and incident investigation (e.g., if the AI produces inappropriate output, you need the record to report or address the issue). A minimal application-side logging sketch appears after this list.
  • Bias and ethical assurances: Transparency also means ensuring the model aligns with your ethics and values. You should discuss with OpenAI what steps they’ve taken to reduce bias or harmful content in the model. While the technical details may be complex, a well-drafted contract or side letter can include a commitment to the responsible use of AI. For instance, OpenAI might commit to the model having undergone bias testing and being periodically reviewed for fairness. Suppose your organization has specific ethical guidelines (say, around avoiding any content that could be discriminatory or around transparency to end-users). In that case, you can include language stating that OpenAI will assist you in meeting those goals, possibly by configuring the model or providing content filtering options. Some enterprise offerings include the ability to moderate or filter AI outputs according to your policies. Ask if this is available and obtain written confirmation if it is.
  • Example: Transparency in Action: Let’s say you plan to use AI to assist with customer support, and occasionally, a customer might question, “How did the AI decide that answer?” While the AI can’t provide a simple citation for every answer, you can prepare by having OpenAI’s transparency note on the model. Suppose the model card reveals that the AI sometimes makes up answers if it doesn’t know (a known issue called “hallucination”). Knowing this, you implement a policy that the AI will include a disclaimer or route certain complex questions to a human. In your contract, you had OpenAI agree to provide monthly reports on model performance and any new risks identified. One month later, OpenAI informs you that they have updated the model to reduce hallucinations in your domain. This kind of openness allows you to confidently continue using the system and even advertise to your customers that the AI system is continuously improving under strict oversight.
  • Limits of transparency: It’s essential to note that OpenAI will likely not disclose proprietary details, such as the exact dataset or source code. Focus on practical transparency – information that helps you use the model responsibly. Also, verify that nothing in the contract prevents you from discussing issues. Some vendors attempt to restrict public statements about the performance of their models. As a CIO, you might need to share findings with your board or regulators. Ensure the contract allows you to conduct audits or assessments of the AI (even if only internally) and report on them as needed.
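To make the logging point concrete, here is a minimal sketch of an application-side audit log for prompts and outputs. It is illustrative only: the ask_model callable stands in for whatever client wrapper your integration uses, and the JSONL format and field names are our assumptions, not an OpenAI requirement.

```python
import json
import time
import uuid

def log_interaction(log_path: str, user_id: str, model: str,
                    prompt: str, output: str) -> None:
    """Append one prompt/output pair to a JSONL audit log."""
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "user": user_id,
        "model": model,
        "prompt": prompt,
        "output": output,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def audited_completion(ask_model, log_path: str, user_id: str,
                       model: str, prompt: str) -> str:
    """Wrap any model call so every exchange is captured for later review."""
    output = ask_model(model, prompt)  # ask_model is your own client wrapper
    log_interaction(log_path, user_id, model, prompt, output)
    return output
```

A log like this supports compliance review and incident investigation on your side even when the vendor’s own logs are not available to you.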

Key takeaways: Push for as much transparency as possible, including documentation of the model, notifications of changes, access to logs, and commitments regarding ethical use.

Transparency fosters trust, enabling you to clearly explain the AI’s role to stakeholders and identify issues early.

A cooperative vendor should agree to reasonable transparency measures; if OpenAI refuses to provide any information about how the model works or is managed, treat that as a red flag.

The goal is to eliminate the “black box” fear by shining light wherever possible so that you, as the customer, have predictability and control over how the AI functions within your business.

5. Indemnification

Indemnification serves as your safety net for legal troubles arising from the use of OpenAI’s services.

In a contract, an indemnification clause is a provision where one party promises to defend the other and cover certain costs if specific third-party claims are made.

Given the emerging legal issues surrounding generative AI, it is advisable to secure strong indemnities from OpenAI to protect your organization.

Focus on:

  • IP infringement indemnity (from OpenAI): Arguably the most crucial. You need OpenAI to indemnify (defend and hold you harmless) against claims that the AI service or its outputs violate someone’s intellectual property rights. OpenAI’s business terms offer an IP indemnity: they agree to defend you if a third party alleges that OpenAI’s services or training data infringe their copyright, patent, etc. Ensure this is in your contract. If an author or software company claims that ChatGPT’s output to you was essentially their copyrighted material, OpenAI would handle the lawsuit and pay any settlement or judgment on your behalf (as long as you weren’t at fault in how you used it). Without such an indemnity, your company could face potentially expensive litigation over something largely out of your control (since you don’t fully know what’s in the AI’s training data). Confirm that the indemnification covers the model and training data. OpenAI’s clause explicitly covers claims arising from the training data they used, which is important because that’s where most copyright risks lie.
  • Other indemnities from OpenAI: Consider whether there are other areas where you would like indemnification. For example, if your use of the AI causes a defamation claim (say the AI produces a false statement about a person or company, and you publish it), would OpenAI defend you? Typically, AI vendors are reluctant to indemnify for issues such as defamation or illegal content because the output is user-driven and unpredictable. However, as a negotiator, you can raise the concern. At a minimum, obtain a product liability-style indemnity: if someone claims the software (AI) itself caused harm due to a defect, OpenAI should be held accountable. Also, if OpenAI knowingly includes malicious code or viruses in an output (highly unlikely, but cover your bases), they should indemnify you for any damages. These scenarios may be theoretical, but discussing them can prompt OpenAI to reassure you and possibly include broader protective language.
  • Your indemnification to OpenAI: Indemnification is typically mutual in some respects – OpenAI will likely require you to indemnify them as well, for example, if third-party claims arise from your use of the service in violation of the contract. Commonly, you’ll be expected to indemnify OpenAI if you input data you weren’t allowed to (e.g., you upload someone else’s proprietary data without permission, and they sue OpenAI) or if your integration of the AI causes some legal issue. This is generally reasonable, but be cautious not to make it too broad. It should be tied to your breach of the agreement or misuse of the service, not just any use. Clarify that you are not indemnifying OpenAI for claims arising from the normal authorized use of the AI – OpenAI itself should cover those. Essentially, each party should cover the risks under their control: OpenAI covers the AI and its training content; you cover what you decide to do with the AI and any data you feed it that you shouldn’t.
  • Indemnification process and control: Ensure the contract outlines how indemnification will work. The party seeking indemnity must promptly notify the other, allow the other to assume control of the defense, and cooperate in that defense. These are standard terms. If OpenAI is defending you as the customer, you want them to handle the claim (and pay for lawyers, settlements, etc.). Just make sure you retain the right to approve any settlement that would bind you or admit fault on your part – the indemnifying party shouldn’t settle a case in a way that negatively affects you without your consent (OpenAI’s terms likely have a provision that they can’t settle a claim against you without your reasonable consent, except for purely monetary settlements).
  • Example – IP indemnity in action: Imagine a scenario in 2025 where a news organization sues several companies claiming that their ChatGPT-based tools produced summaries from the news organization’s articles. Suppose your company is targeted in such a suit. In that case, an indemnity means OpenAI would step in to defend you because the claim is that OpenAI’s model (trained on those articles) outputs protected content. OpenAI would cover legal fees and payout (assuming you weren’t violating usage terms). This can potentially save your company hundreds of thousands of dollars in legal costs and liability. In negotiations, citing real-world cases (such as existing lawsuits against OpenAI for its use of training data) can underscore the need for this protection. OpenAI might point to their existing indemnity clause and say it’s sufficient – your job is to ensure it covers the scenarios you worry about, and if not, adjust it.
  • Indemnity vs. warranty: Don’t confuse indemnities with warranties or liability limits. You also want warranties (promises about performance or quality) and appropriately allocated liability (covered in the next sections). Indemnity is specifically about third-party claims. It doesn’t cover your direct losses if something goes wrong; it covers legal claims from others. So, push for indemnity for IP and possibly security breaches (e.g., if OpenAI’s negligence leads to a breach and third parties sue you for damages). For your losses (such as downtime), you’ll need to rely on SLA credits or liability terms rather than indemnity.

Key takeaways: Obtain a solid indemnification from OpenAI, particularly regarding intellectual property issues.

This is non-negotiable in a world where AI outputs might inadvertently cross legal lines.

Ensure the contract language has OpenAI defending you for IP claims (and any other critical risks you identify), with no trivial cap on this indemnity (often, indemnity obligations are uncapped or have separate caps).

Understand your part, too: you’ll indemnify OpenAI for misuse; keep that scope narrow and manageable by adhering to the usage rules.

In short, indemnities are about sharing legal risk fairly: OpenAI should stand behind its technology, and you should stand behind your use of it.

6. Service Levels and Uptime (SLA)

For enterprise-critical services, you require assurances regarding availability and performance.

This is where a Service Level Agreement (SLA) comes in.

An SLA defines measurable commitments (uptime, response time, support responsiveness) and remedies if those commitments aren’t met.

When negotiating with OpenAI, treat their generative AI service as you would any important cloud service and insist on reliability assurances:

  • Uptime commitment: Determine how much downtime is acceptable for your use case and push OpenAI to commit to at least that level of uptime. For example, many enterprise agreements target 99.9% uptime or higher for critical services, which works out to roughly 43 minutes of downtime in a 30-day month (see the sketch after this list). OpenAI does not publicly guarantee uptime for free or standard users. Still, for enterprise customers or high-tier API users, they have offered an SLA (e.g., OpenAI’s “Scale” tier promises 99.9% uptime). Negotiate an SLA where OpenAI commits to a specific uptime percentage (monthly or quarterly). Ensure it’s clearly defined (including how downtime is measured, maintenance windows, etc.). If you have global operations, ensure the SLA covers all regions where users might access the AI.
  • Performance and latency: Besides being “up,” the service must be responsive. Discuss expected latency (time for the AI to respond). This might not be a formal SLA metric (many vendors hesitate to guarantee response time for complex AI queries), but you can at least get commitments on having adequate infrastructure to serve your volume. OpenAI’s enterprise service often includes priority access to the model – for instance, ChatGPT Enterprise offers higher speed access to GPT-4. You could include language stating that OpenAI will provide sufficient computing resources such that the median response time for a standard query (e.g., a 1000-token prompt) is under X seconds. Even if it is not a hard guarantee, it sets an expectation. Suppose your application has specific latency requirements (for example, embedding AI in a live customer chat where a response must be delivered within 2 seconds). In that case, you must communicate these needs and determine if OpenAI can meet them. They might offer a dedicated instance or specific infrastructure for an additional cost – if so, include this in the contract along with the performance target.
  • Support response time: Support and incident response are often overlooked but critical parts of an SLA. Ensure the contract specifies the timeframe for OpenAI’s response to your support requests, particularly for urgent issues. For example, a common SLA is: for critical Severity-1 outages, the vendor will respond within 1 hour, 24/7, and work continuously to resolve; for high-priority issues, within 4 hours, etc. Confirm that as an enterprise client, you will have access to 24/7 support (OpenAI has indicated that Enterprise customers get priority support). Ideally, you should have a dedicated technical account manager or support contact who understands your deployment. Include a provision that you will be informed immediately of any widespread outages or incidents on OpenAI’s side.
  • Remedies for SLA breaches: An SLA isn’t meaningful without consequences. The typical remedy is service credits – if uptime falls below the guarantee, you get a credit (a discount) on your bill. For instance, if uptime drops to 99% in a given month (versus the promised 99.9%), you might receive 10% of that month’s fees as a credit. Negotiate a fair schedule of credits; it could be tiered (worse uptime = bigger credit). While credits won’t fully compensate for business impact, they incentivize the vendor to avoid downtime. In extreme cases (like repeated outages over several consecutive months), you should have the right to terminate the contract without penalty (and perhaps get a refund for unused services). Ensure the contract clearly outlines the process: you may need to apply for the credit, or it may be automatic. Also, clarify if the uptime calculation excludes scheduled maintenance and how such maintenance will be communicated (you want advance notice of any planned downtime).
  • Monitoring and reporting: The SLA should require OpenAI to provide uptime reports or a status dashboard that you can access. Many cloud services have a status page; confirm one exists for the OpenAI service or that they will promptly email your team in case of an outage. It’s also good to specify that you can audit or verify SLA metrics – maybe not directly (you likely can’t access their internal logs), but you can measure from your end and dispute if there’s a discrepancy.
  • Example – why SLA matters: Picture your company integrating OpenAI’s API into a customer-facing app (for example, a virtual assistant in your product). If OpenAI’s service goes down in the middle of the business day, your app’s critical functionality might be crippled. Without an SLA, OpenAI has no contractual obligation to fix it or compensate you. You’re essentially at their mercy. With a strong SLA, you know they are financially and contractually motivated to minimize downtime. Perhaps you negotiated that more than 1 hour of downtime triggers immediate executive-level escalation. OpenAI’s team scrambles when an outage occurs and keeps you informed, as it’s part of the contract. Moreover, you accrue credits that reduce your monthly costs – not a full make-good for lost business, but at least you aren’t paying full price for subpar service. Over a year, if OpenAI fails to meet the SLA consistently, you might use that as leverage to negotiate an upgrade, improvements, or exit the contract under your SLA termination clause.
  • Plan B for downtime: Always have a mitigation plan, regardless of SLA. We recommend negotiating for the ability to use a backup model or service in emergencies. Some companies use multi-AI strategies (e.g., if OpenAI is down, you switch to an alternative model temporarily). If that’s a possibility, ensure nothing in your contract forbids it. OpenAI’s terms shouldn’t prevent you from having other AI systems as backup. You may not need to include this in the contract, but from a practical standpoint, prepare for outages with contingencies, as even with an SLA, downtime can still occur (and the SLA only provides credits, not compensation for lost time or reputation).
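As a sanity check on the uptime numbers above, the sketch below converts an uptime commitment into a downtime budget and maps measured uptime to a service credit. The credit tiers shown are illustrative placeholders, not OpenAI’s actual schedule; the real tiers are exactly what you negotiate.

```python
def downtime_budget_minutes(uptime_pct: float, days: int = 30) -> float:
    """Allowed downtime per billing period for a given uptime commitment."""
    return days * 24 * 60 * (1 - uptime_pct / 100)

def service_credit_pct(measured_uptime: float, tiers) -> float:
    """tiers: (uptime floor %, credit %) pairs sorted from highest floor down;
    returns the credit owed for the first tier the measurement reaches."""
    for floor, credit in tiers:
        if measured_uptime >= floor:
            return credit
    return tiers[-1][1]

# 99.9% over a 30-day month leaves ~43.2 minutes of allowed downtime.
print(round(downtime_budget_minutes(99.9), 1))   # 43.2

# Hypothetical credit schedule - tiered so worse uptime earns bigger credits.
tiers = [(99.9, 0.0), (99.0, 10.0), (95.0, 25.0), (0.0, 50.0)]
print(service_credit_pct(99.0, tiers))           # 10.0
```

Running the numbers like this before negotiation helps you decide whether a proposed uptime figure and credit schedule actually match your tolerance for outages.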

Key takeaways: Treat OpenAI’s service as mission-critical if you use it.

Negotiate an SLA that holds OpenAI accountable for high availability and timely support. Key elements include uptime percentage, support responsiveness, and remedies such as service credits or termination rights.

Get these in writing; verbal assurances of “we strive for 24/7 uptime” aren’t enough.

The SLA converts expectations into enforceable commitments, ensuring that OpenAI has a vested interest in maintaining the smooth operation of the AI.

With a solid SLA, you can integrate AI into your operations with confidence that reliability is contractually assured.

7. Pricing and Cost Controls

Generative AI services can have complex and sometimes unpredictable pricing, particularly when usage increases rapidly.

CIOs must ensure that the contract addresses pricing and includes mechanisms to control costs.

In this section, focus on transparency of pricing, flexibility, and safeguards against budget overruns:

  • Understand the pricing model thoroughly: OpenAI’s generative services may be priced differently depending on the specific product or service. The API is often metered by tokens (fragments of text) processed, e.g., a price per 1,000 tokens. ChatGPT Enterprise, on the other hand, may offer a fixed fee per user or seat, with “all-you-can-use” access within fair limits. Ensure the contract (or order form) defines the pricing structure. If it’s usage-based (pay-as-you-go), ensure you know the rates for the specific models you’ll use (GPT-4, GPT-3.5, etc.), any volume discounts, and how usage is calculated. If it’s a fixed subscription, clarify what usage is covered – for example, unlimited usage of certain models. Any hidden caps? Also, check for separate charges: are there fees for premium features (longer context windows, dedicated capacity), or support? The contract should list all these so you don’t get surprise charges.
  • Volume commitments and discounts: If you anticipate heavy use, negotiating a volume commitment can significantly reduce unit costs. For example, you might commit to a certain monthly spend or a specific number of tokens over the course of a year in exchange for a discounted rate. OpenAI (and other AI providers) often have tiered pricing – e.g., the first N tokens at one rate, and subsequent tokens at a lower rate. Opt for the highest tier that aligns with your usage projections. Conversely, be careful not to over-commit. It might be wiser to start with a lower commitment and gradually increase it over time. If you commit to spending $x hundred thousand and end up not using that many API calls, you might still be charged for them (depending on the contract terms). Try to negotiate “use-or-lose” flexibility: perhaps unused credits roll over, or you have midway checkpoints to adjust the commitment. If OpenAI is keen on landing your business, they may agree to flexibility in the first year as you discover usage patterns.
  • Cost caps and budget control features: One risk of AI services is that usage can scale faster than expected – e.g., an app unexpectedly goes viral and racks up a huge token count. Cost control measures should be included to prevent budget shock. Contractually, you could set a monthly spending cap – e.g., “OpenAI will not charge beyond $X in a month without written approval.” Practically, OpenAI’s platform or admin console should allow usage limits or alerts to be set. Ensure that these features are enabled for your account. During negotiations, ask about monitoring and alerts: Will you have real-time visibility into usage? You may want a clause that requires OpenAI to proactively alert you if usage in a given period appears abnormally high (e.g., more than 20% above the forecast). This allows you to intervene (perhaps by temporarily disabling some integrations) before costs explode. OpenAI’s enterprise tools provide usage insights – ensure you have access to those and that they are detailed enough for your finance team’s needs. A simple budget-guard sketch appears after this list.
  • Transparency in pricing changes: The contract should lock in pricing to avoid unwelcome changes. OpenAI’s standard terms have sometimes allowed it to change prices with as little as 14 days’ notice – this is not ideal for enterprises. Negotiate language that fixes your rates for the contract term (e.g., “pricing as of contract signature will remain in effect for an initial term of 12 months”). If OpenAI insists on the right to change prices for new features or in renewal, at least require a longer notice period (60-90 days) and the ability to terminate if you don’t accept the new prices. Also, consider including a rate card for optional services (such as if you want to use a larger model or more capacity in the future, what would the cost be). This way, you have predictability. For multi-year deals, you might negotiate a preset price increase (for instance, a 5% increase in year 2) rather than leaving it open.
  • Payment terms and currency: Ensure the contract covers how you pay. Is it invoicing monthly, net 30 days? Are you paying in USD or another currency? Any taxes or withholdings require clarification – OpenAI will likely charge applicable sales/VAT taxes where required. If you’re prepaying for credits (some API deals work on a credit system where you buy credits upfront), clarify the terms: Do credits expire? Are they refundable? Typically, prepaid amounts are non-refundable if unused; however, you may be able to negotiate a partial refund clause if you overbuy significantly. Keep an eye on any auto-renewal of subscriptions from a billing perspective as well (it ties into the Renewal section).
  • Audit and usage verification: For peace of mind, include a right to audit usage or, at the very least, reconcile records. Because billing is often based on technical metrics (tokens), you might want the right to examine logs or have an auditor confirm that the usage reported (and charged) matches actual use. OpenAI could provide detailed usage reports by day or user – ensure that the level of detail will be available to you upon request to verify bills.
  • Example – controlling costs: A mid-size software company integrated OpenAI’s API into their product for enhanced features. In the first month, usage exceeded expectations, resulting in a bill three times the forecast. Fortunately, during the negotiation, they included a cost threshold clause: OpenAI had to send an alert when the monthly spending exceeded a certain amount. This alert came in time for them to throttle some non-essential usage and inform finance. Additionally, because they had committed to a yearly volume, they triggered a cheaper rate once they crossed a threshold, softening the blow. In the contract, they also secured the ability to true-up annually: if they overshot their estimate this year, they could negotiate a better rate for the next year based on actual usage. This flexible arrangement was only possible because they discussed scenarios of growth and variability with OpenAI upfront and baked that into the agreement.
  • Beware of lock-in with pricing: Sometimes, vendors lure you in with a low price and then raise it once you’re dependent (classic “bait-and-switch” over contract cycles). Mitigate this by negotiating caps on price increases at renewal (e.g., no more than the CPI or a single-digit percentage increase). Also, try to obtain the most favorable customer treatment – i.e., if OpenAI offers a promotional or lower price to similar customers, you should receive that adjustment as well. They may not agree to formal MFN clauses, but asking can lead to at least an assurance that your pricing is competitive.
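The sketch below shows the arithmetic behind token-metered billing and a simple budget guard of the kind described above. All rates and thresholds are hypothetical; substitute the figures from your own order form.

```python
def estimate_cost_usd(input_tokens: int, output_tokens: int,
                      in_rate_per_1k: float, out_rate_per_1k: float) -> float:
    """Token-metered cost: API pricing is typically quoted per 1,000 tokens."""
    return (input_tokens / 1000) * in_rate_per_1k \
         + (output_tokens / 1000) * out_rate_per_1k

def budget_status(spend_to_date: float, monthly_cap: float,
                  warn_ratio: float = 0.8) -> str:
    """Trip an alert well before the contractual spend cap is reached."""
    if spend_to_date >= monthly_cap:
        return "HARD_STOP"   # throttle or disable non-essential usage
    if spend_to_date >= warn_ratio * monthly_cap:
        return "ALERT"       # notify finance and engineering
    return "OK"

# Example with made-up rates: 40M input + 10M output tokens in a month.
spend = estimate_cost_usd(40_000_000, 10_000_000, 0.01, 0.03)
print(spend, budget_status(spend, monthly_cap=1_000))   # 700.0 OK
```

Even a crude model like this, fed with your real rates and forecasts, gives finance an early-warning threshold to write into both the contract and your monitoring.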

Key takeaways: Ensure you have a clear and controllable pricing model.

All fees (and potential fees) should be transparent. Utilize contractual tools such as volume discounts, spend caps, and fixed pricing periods to avoid unexpected costs.

In essence, tie down the dollars and cents: AI usage can scale quickly, so build in protections so that success (high usage) doesn’t blow your budget.

A well-structured pricing agreement will enable you to reap the benefits of AI at a known and manageable cost, which is vital for achieving ROI and maintaining stakeholder confidence.

8. Renewal and Lock-In

Enterprise contracts often span multiple years or are auto-renewing, and switching costs can be substantial.

As a CIO, you must manage vendor lock-in risk and negotiate favorable renewal terms.

This section addresses maintaining flexibility so that you are not bound to OpenAI beyond your comfort level and, if you choose to continue the relationship, so that it is on reasonable terms.

  • Contract term and renewal: Determine an appropriate initial term for your contract. Given the rapid evolution of AI, you may prefer a shorter initial term (1 year, perhaps 2 years) to reassess the landscape, unless a longer term offers significant pricing advantages or is necessary for strategic reasons. The contract should explicitly state what happens at the end of the term. Many SaaS contracts auto-renew for convenience – e.g., automatically renewing for additional one-year periods unless notice is given. Auto-renewal is acceptable as long as you can cancel it with sufficient notice. Negotiate a practical notice period (30 days is common, but consider 60-90 days to allow for internal processing). Also, ensure that OpenAI sends a renewal reminder in advance (some contracts will specify that the vendor must notify the customer of auto-renewal a certain period beforehand to avoid it sneaking up). You don’t want to be locked in for another full term simply because nobody flagged the renewal date.
  • Renewal pricing and caps: A significant concern at renewal is a potential price hike. You should address this in the initial contract. If you negotiated a discounted rate for year 1, what would happen in year 2? Fix it in writing. Ideally, fix the pricing for at least 2-3 years or cap the increases. For example, you could state, “Upon renewal, fees may increase by a maximum of X% over the prior term’s fees,” or “not to exceed the percentage increase in the Consumer Price Index.” This protects you from an unpleasant shock, like a 50% rate jump after year one. It also sets expectations for budgeting. Some vendors may resist a strict cap, especially if they expect their costs (or the value of the service) to rise. You might compromise with something like a tiered discount that shrinks over time but doesn’t vanish entirely. Regardless of the circumstances, document the renewal pricing mechanism. If no terms are specified, the vendor may charge list prices at renewal, which could be significantly higher than the initial deal. Use your leverage at the start to lock in a gentler renewal scenario.
  • Evaluate lock-in factors: Consider what would make it hard to leave OpenAI. Is it the integration work your team has done? Fine-tuned models or custom solutions built around OpenAI’s tech? User adoption? Is the data stored in OpenAI’s format? Address each factor:
    • Data portability: Ensure you can export any data (prompts, outputs, conversation logs, and fine-tuning training datasets) in a usable format before or at the end of the contract. That way, if you switch to another AI platform, you still have the valuable data you generated during your OpenAI usage. The contract should obligate OpenAI to assist with data export upon request (perhaps as part of termination assistance).
    • Custom models or fine-tuning: If you pay OpenAI to fine-tune a model on your data, clarify ownership and post-termination use of that model. OpenAI’s policy is that others do not use custom models you train, but it doesn’t automatically provide you with a copy of the model – it runs on their platform. Try negotiating rights to retrieve the trained model (weights), or at least the training data and parameters, so you could potentially replicate the model elsewhere. They may not agree to hand over their model weights (which could expose their IP), but it’s worth discussing. At a minimum, ensure that if you leave, they will destroy any fine-tuned model derived from your data to protect your IP (or transfer it to you if that’s an option).
    • Transition plan: Request a clause stating that in the event of termination or non-renewal, OpenAI will provide reasonable transition assistance. This could mean they agree to continue service for a short period (e.g., 30-60 days) after the termination effective date, if you request it, to allow a smooth transition (with pro-rated payment). It could also mean providing extra support or consulting (potentially paid) to help you migrate to a new solution. Vendors often have a standard “upon termination, each party shall reasonably cooperate to effect an orderly transition.” Ensure that something like that is included, even if it’s not very specific.
    • No exclusive obligations: Confirm that the contract does not bar you from using other AI providers or tools. You want freedom to multi-source or change providers. It’s unlikely that OpenAI would try to enforce exclusivity (their standard terms don’t), but be mindful of any language that could be read that way. For example, an overly broad confidentiality clause shouldn’t prevent you from evaluating other AI models using similar prompts. Ensure you can benchmark or test alternatives (some cloud contracts restrict benchmark publication – clarify that internal testing is fine).
  • Terminate if needed (without huge penalties): This ties into termination rights (next section), but from a lock-in perspective, can you exit if things aren’t working out? Try to avoid signing away all flexibility. For instance, a 3-year contract with no termination for convenience and heavy prepaid fees can lock you in even if the product underdelivers or a better option emerges. Sometimes, a vendor will offer an out: e.g., after 12 months, you can terminate early if you give 3 months’ notice, maybe forfeiting some discount but not the full penalty. Negotiate such clauses if you foresee rapid changes in AI tech or your strategy. It’s better to have a known exit path than to be stuck.
  • Example – avoiding lock-in traps: Consider a scenario where a competitor to OpenAI releases a superior AI model at a lower cost two years from now. If your contract auto-renews without negotiation, you may be paying more for less capable technology and be unable to switch easily. By having a one-year term, or a renewal clause that requires mutual agreement on terms, you preserve the chance to shop around and renegotiate. Additionally, because you insisted on data export rights, you have a repository of all prompts and outputs from OpenAI. Your data science team can use that to fine-tune the new competitor model, effectively jump-starting the transition. Without those negotiated points, you might have been stuck for another year at a high cost, and even once free, you’d have to rebuild from scratch because your data was siloed. This example highlights how planning for change – even if you’re happy with OpenAI today – is prudent future-proofing. In tech, flexibility is leverage.
  • Build relationships, not dependencies: As a CIO, managing vendor lock-in through contract terms and strategy is a worthwhile endeavor. For instance, avoid baking OpenAI-specific assumptions too deeply into your systems. Use abstraction layers in your software so you can swap out the AI provider (a minimal sketch follows this list). OpenAI’s API may have unique features, but design your integration modularly. That, combined with a contract that allows exit, ensures you have alternatives, and therefore negotiating power, when renewal time comes. If OpenAI knows you can and will switch if the terms aren’t good, they will likely offer a fair renewal to keep your business.
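Here is a minimal sketch of the abstraction-layer idea, assuming a Python codebase: callers depend on a small interface, and each vendor gets its own adapter, so switching providers later is a wiring change rather than a rewrite. The class and function names are invented for illustration.

```python
from abc import ABC, abstractmethod

class TextGenProvider(ABC):
    """The only AI interface the rest of the codebase sees."""
    @abstractmethod
    def complete(self, prompt: str) -> str:
        raise NotImplementedError

class FallbackProvider(TextGenProvider):
    """Stand-in adapter for tests, or a stopgap during a vendor outage."""
    def complete(self, prompt: str) -> str:
        return "[AI temporarily unavailable - a human will follow up]"

def summarize(provider: TextGenProvider, document: str) -> str:
    # Business logic never imports a vendor SDK; it only sees the interface.
    return provider.complete("Summarize the following:\n" + document)
```

A hypothetical OpenAIProvider adapter would wrap the vendor SDK call behind the same complete() method; if a better model or price appears at renewal time, only that one adapter changes.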

Key takeaways: Don’t get locked in without an escape plan.

Negotiate short-term or easy renewal options, secure your data and model investments, and minimize the impact of switching.

At the same time, leverage the promise of renewal as a bargaining chip (vendors love recurring revenue, so they may give concessions upfront for a higher chance of renewal).

The aim is to enjoy OpenAI’s capabilities as long as they serve you well, but retain the right and ability to pivot if circumstances change.

Flexibility in the contract means freedom to make the best decisions for your enterprise over time.

9. Security Obligations

When entrusting sensitive business information to an AI service, you must demand strong security measures from the vendor.

The contract should spell out OpenAI’s security responsibilities to protect your data and the service’s integrity.

Key areas to cover:

  • Baseline security standards: The contract should stipulate that OpenAI will adhere to industry-standard security practices to safeguard the confidentiality, integrity, and availability of your data. OpenAI has publicly committed to various security controls – for instance, ChatGPT Enterprise is SOC 2 compliant and utilizes encryption for both data in transit and at rest. Ensure that these commitments are included in your agreement. Specifically, require:
    • Encryption: All your data should be encrypted at rest in OpenAI’s systems (typically using strong encryption, such as AES-256) and in transit (TLS 1.2+ for any data in motion between you and OpenAI). This protects against unauthorized access in the event of a breach of their storage or interception of network traffic.
    • Access controls: OpenAI should strictly limit access to your data on a need-to-know basis. Only authorized personnel (e.g., for debugging or maintaining the service) should access customer content, and even then under strict controls. The contract can stipulate that OpenAI will adhere to the principle of least privilege and implement measures such as multi-factor authentication for its staff when accessing systems.
    • Security certifications/audits: If security audits have been done (SOC 2 Type II report, ISO 27001 certification, etc.), you can request the right to review those reports under NDA. At a minimum, the contract should stipulate that OpenAI will maintain SOC 2 (or similar) compliance throughout the term. This gives you confidence that their security program is audited annually. Additionally, ask if they undergo regular penetration testing by third parties – and if so, can they share a summary of results or at least warrant that no critical vulnerabilities remain unaddressed?
  • Breach notification: Time is of the essence in security incidents. Negotiate a clause requiring OpenAI to notify you promptly in case of any data breach or security incident affecting your data. “Promptly” is often defined as within 24 or 48 hours of discovery (and in GDPR contexts, 72 hours is a legal requirement for personal data breaches). The notice should include details of what happened, the data involved, and the remediation steps being taken. You might also request that OpenAI coordinate with you on public communications or regulatory notifications if your data is compromised. This is critical for your compliance (e.g., you may need to notify customers or authorities, and you’ll rely on information from OpenAI).
  • Ongoing security obligations: Security isn’t a one-time box to tick; ensure the contract mandates ongoing efforts. This could include language like “OpenAI shall implement and maintain appropriate administrative, physical, and technical safeguards to protect Customer Data and shall regularly test and monitor the effectiveness of those safeguards.” You can borrow language from standards or regulations that apply to you. For example, if you’re in finance under GLBA or healthcare under HIPAA, incorporate relevant security requirements (OpenAI even has a separate Healthcare Addendum for HIPAA – sign that if needed).
  • Security incident response and liability: Clarify what happens if a security incident occurs on OpenAI’s side. Besides notification, will they take immediate action to mitigate the issue? Likely yes, but it’s good to have it stated that they will remediate vulnerabilities and cooperate with any forensic investigation. Also, consider clarifying that a security breach by OpenAI is considered a material breach of contract, giving you the right to terminate if appropriate (and possibly to seek damages outside the normal liability cap if you can negotiate that; see the Liability section). Vendors typically resist uncapped liability, but they might agree that you have particular remedies in case of a breach caused by their gross negligence.
  • Data locality and isolation: If your company has policies about where data can be stored (data residency), raise this. OpenAI’s services are global but may process data primarily in U.S. data centers or Azure’s cloud for certain services. If you require data to be kept within certain jurisdictions or separate from other customers, ask what options exist. OpenAI may offer a dedicated instance or allow hosting via Azure in your region (Microsoft’s Azure OpenAI service might be an option if data residency is a critical concern). In the contract, you could specify any agreed-upon data location or state that OpenAI will inform you where the data is stored and not move it without consent.
  • Customer security responsibilities: Note the actions you must take on your own side. For example, managing API keys securely is your responsibility – if you leak your credentials, that’s not OpenAI’s fault. The contract may remind you to secure access (e.g., utilize the provided SSO integration or manage user accounts). Ensure your team follows best practices, such as enabling SAML SSO and MFA for OpenAI Enterprise access, and using role-based access for API keys (see the credential-handling sketch after this list).
  • Example – Importance of a Security Clause: In March 2023, an issue in ChatGPT’s code allowed a few users to view parts of other users’ chat histories (including conversation titles from another account). Additionally, a bug exposed some payment information of ChatGPT Plus users. While such incidents are hopefully rare, they demonstrate that even top companies can have vulnerabilities. If you had sensitive data in those conversations, you’d want to know immediately and have assurances it wouldn’t happen again. By including a robust security clause, you ensure that OpenAI is contractually bound to disclose incidents and rectify them. After that event, OpenAI fixed the bug and improved the safeguards. As a paying enterprise customer, you’d likely also demand a post-mortem report and the right to audit their fixes. With a strong contract, you have the leverage to get that information and hold OpenAI accountable for preventing recurrences.
  • Security training and personnel: You may also want to include that OpenAI’s personnel handling your account/data will be vetted and trained in security. This ensures that no intern or random engineer can mishandle your info. Enterprise contracts often include clauses regarding background checks for vendor employees and adherence to a code of conduct.
  • Liability for security lapses: (Cross-reference with the Liability section.) It’s worth emphasizing that, in the event of a breach of OpenAI’s security duties resulting in damage (such as a loss or regulatory fine), OpenAI will bear responsibility for that. They might limit this, but pushing for it can sometimes carve out an exception to liability limits for security breaches. This incentivizes them to truly prioritize your security.
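As a small illustration of the credential-hygiene point, the sketch below loads an API key from the environment rather than from source code. OPENAI_API_KEY is the conventional variable name for OpenAI’s SDKs; the secrets-manager fallback mentioned in the comment is an assumption about your own infrastructure.

```python
import os

def load_api_key(env_var: str = "OPENAI_API_KEY") -> str:
    """Fetch the key at runtime so it never lands in source control."""
    key = os.environ.get(env_var)
    if not key:
        # In production, fall back to your secrets manager (Vault,
        # AWS Secrets Manager, etc.) rather than failing outright.
        raise RuntimeError(f"{env_var} is not set")
    return key
```

Pair this with regular key rotation and role-based access so a leaked key has a short useful life.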

Key takeaways: Demand the same level of security from OpenAI as any top-tier cloud provider handling your crown jewels.

The contract should resemble a mini security policy, encompassing encryption, access control, audit, breach notification, and compliance with relevant standards.

Solidifying these obligations reduces the chance of incidents and ensures that OpenAI will react swiftly and transparently if something goes wrong.

Remember, security is non-negotiable when entrusting data to an AI. Make that clear at the negotiating table.

10. Customer Data Use and Training Opt-Out

This topic focuses on a specific aspect of data privacy: ensuring that OpenAI does not use your data to train its AI models or otherwise exploit it for its benefit.

Many AI services improve over time by learning from user interactions; however, enterprises often prefer to opt out of this process.

In negotiations, you should firmly establish how your data can and cannot be used by OpenAI:

  • No training on your data: The contract must clearly state that OpenAI will not use your inputs or outputs to train, develop, or improve any AI models outside of your intended usage. OpenAI’s current enterprise policy aligns with this – by default, they do not use business customer data for training. The business terms explicitly state that OpenAI will not use customer content to improve its services. Make sure this exact commitment is included in your agreement. This protects your proprietary information from being incorporated into the general AI model that others use. For example, if you have the AI analyze your internal strategy document, you don’t want fragments of that document popping up in answers to other users. With this clause, that should be impossible, because your content never enters model training.
  • Opt-out confirmation: In its early days, OpenAI had an opt-out process (you had to request via support email to exclude your data from training). Now, with enterprise contracts, your data is excluded by default. Still, document it. You might include language like: “Customer Content will be excluded from any datasets used to train or refine OpenAI’s AI models. OpenAI shall not store Customer Content beyond the extent necessary to provide the service to Customer, except for legal compliance or security purposes.” The storage language ensures that even for operational needs, they’re not holding onto your data longer than necessary.
  • Limited use only to serve you: It’s acceptable for OpenAI to use your data within your account – e.g., storing conversation context to continue a chat session or fine-tuning a model for you if you request that. The contract can clarify that any such use is solely for your benefit and under your control. Additionally, if you engage OpenAI for custom model improvement (for your exclusive use), this should be under a separate scope and should not feed their general models.
  • Analytical data: Sometimes vendors want to use metadata or usage statistics (not the content itself) to improve their service or for business purposes. For example, OpenAI might log that “Customer made 1 million requests this month” or that certain API endpoints are slow. Ensure that if any of your data is used in aggregate analytics, it is strictly anonymized and aggregated. You can explicitly allow or disallow this. If you are very strict, forbid any use of even metadata beyond what is necessary to provide the service to you. Many companies, however, allow collection of aggregated usage data if it cannot identify them or reveal any content. Decide your stance and put it in writing, with language like: “OpenAI may collect and use usage metrics (e.g., volume of queries, performance data) to maintain and improve the efficiency of the Services, but not the content of the queries, and not in any way that discloses Customer Confidential Information.” (A client-side sketch of this metadata-only approach appears after this list.)
  • Right to audit or verify: If you want extra assurance, you could negotiate a right to audit OpenAI’s compliance with the no-training commitment. This might take the form of an annual certification that your data was not used in training, or verification by a third-party auditor. Because of their multi-tenant architecture, cloud vendors rarely permit direct audits, so a certification, or folding this into their SOC 2 controls, can be an alternative. The key is to create accountability for the promise.
  • Deletion and retention control: This overlaps with data privacy, but it bears emphasis: if you choose to delete certain data or end the contract, none of your data should remain in any form that could influence models. OpenAI’s terms state that customer content is deleted 30 days after termination. You might want to tighten that, or at least secure the right to request immediate deletion of particularly sensitive content from caches and systems. After all, if data isn’t used for training, there is no reason to keep it longer than necessary.
  • Visibility to end-users: If you provide an AI feature to your customers, consider what you need to tell them about data usage. For example, if your end-users input their data to get AI help, do you promise that their data doesn’t go beyond that interaction? If so, ensure that the OpenAI contract supports that promise (which it does if there is no training usage). This is more a note for your external-facing terms, but it hinges on the OpenAI contract.
  • Example – benefit of opt-out: Consider a law firm using OpenAI to summarize confidential memos. If OpenAI were training on all inputs, fragments of that confidential text might one day surface, perhaps slightly altered, in another user’s output. That would be a nightmare and a breach of confidentiality. By securing a no-training clause, the law firm can use the AI safely, knowing those memos remain siloed. Fear of exactly this absorption into OpenAI’s model is what led companies like Samsung to ban the earlier consumer version of ChatGPT. OpenAI’s enterprise offering now addresses that, but only a solid contract cements it. The peace of mind this provides often decides whether companies are comfortable using cloud AI services.
  • Future model improvements: Another angle – what if OpenAI improves the base model and you want those improvements? You might wonder whether opting out means losing some benefit. It doesn’t: OpenAI improves its models with data from other sources (public data, voluntary feedback, etc.), and you still receive newer model versions when they are released. It is usually a win-win – you benefit from general improvements while your proprietary data stays private. If OpenAI ever offers a program where customers can opt in to data sharing for certain benefits, evaluate that trade-off carefully. By default, we recommend keeping the opt-out.
  • Training your models: If part of your contract involves OpenAI training a custom model on your data (a bespoke model for your needs), clarify that the data and the resulting model are for your exclusive use. OpenAI should not use that data to train any other models or share the model with others. Treat it as a work-for-hire or, at the very least, as your confidential, segregated project. The contract might attach an exhibit for any custom training project, reiterating that all data and outputs are yours and that OpenAI will not use them beyond delivering your tailored model.
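
To make the metadata-versus-content distinction concrete, here is a minimal client-side sketch. It assumes the openai Python SDK (v1 or later); the wrapper name, logger setup, and metric fields are illustrative, not a prescribed implementation. The point is simply that operational telemetry can be captured without ever logging prompt or output text.

```python
import logging
import time

from openai import OpenAI  # assumes the openai Python SDK, v1 or later

# Illustrative only: a thin wrapper that records aggregate usage metadata
# (latency, token counts) but never the prompt or completion text itself.
client = OpenAI()  # reads OPENAI_API_KEY from the environment
usage_log = logging.getLogger("ai.usage")

def ask(model: str, messages: list[dict]) -> str:
    """Send a chat request and log content-free usage metrics."""
    start = time.monotonic()
    response = client.chat.completions.create(model=model, messages=messages)
    latency_ms = (time.monotonic() - start) * 1000
    # Metadata only -- no message content appears in the log line.
    usage_log.info(
        "model=%s latency_ms=%.0f prompt_tokens=%d completion_tokens=%d",
        model,
        latency_ms,
        response.usage.prompt_tokens,
        response.usage.completion_tokens,
    )
    return response.choices[0].message.content
```

Keeping your own content-free telemetry like this also gives you an independent record of usage volumes when reviewing OpenAI’s invoices or aggregate reports.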

Key takeaways:

Ensure your data is walled off from OpenAI’s model training pipeline. The contract should state that your usage will not feed the AI that other customers or the public use.

This protection is crucial for maintaining confidentiality and preserving a competitive advantage.

OpenAI is generally amenable to this for enterprise clients, but it’s best to get it in writing and understand any nuances.

By doing so, you unlock the power of OpenAI’s tools without handing them your data trove to learn from.

In short, your data stays yours, powering your solutions and no one else’s.

11. Liability Limits

Limitation of liability is a standard part of contracts, but you must scrutinize it closely in an AI context. It determines who bears the financial risk if things go wrong.

Vendors often try to minimize their liability, so your goal is to ensure that OpenAI has enough skin in the game and that your company isn’t left holding the bag for all damages.

Here’s how to approach it:

  • Understand the default limits: OpenAI’s standard business terms limit their liability. Typically, they’ll disclaim indirect damages (such as lost profits, revenue, and data) and cap direct damages at a certain amount (often tied to what you paid – e.g., the total fees paid in the last 12 months is a common cap). For example, if an OpenAI failure causes $10 million in losses for you, but you only paid $100k in fees that year, they’d only potentially owe up to $100k. During negotiation, push back on this if possible.
  • Negotiate a higher cap or exceptions: Try to get a higher liability cap that is more proportional to the risk. You might argue for a cap equal to some multiple of fees (2x or 3x the annual fees, or all fees paid over the contract’s life, whichever is more). Enterprises sometimes negotiate a substantial fixed dollar cap, especially if the potential damage from a breach or failure is huge for them. More effectively, carve out exceptions to the cap. Common exceptions (where the cap doesn’t apply, meaning liability can be unlimited) include:
    • Breach of confidentiality or data privacy obligations. Suppose OpenAI egregiously violates the confidentiality clause (e.g., an employee intentionally leaks your data) or fails to comply with the DPA, causing a major incident. In that case, you may want that to be outside the normal cap.
    • Indemnification obligations. Often, if the vendor indemnifies you for third-party IP claims, that indemnity is uncapped or has its own cap separate from other liabilities. Ensure the contract reflects that any indemnification payments are in addition to the general liability cap (OpenAI’s terms do make indemnification for IP claims uncapped by excluding it from the limit).
    • Gross negligence or willful misconduct. It’s standard that if the vendor intentionally or with gross negligence causes harm, they shouldn’t benefit from a low cap. OpenAI’s terms already exclude gross negligence/willful misconduct from some limitations. Double-check this and consider strengthening it – for example, define gross negligence clearly or include serious cybersecurity failures under that umbrella.
    • Certain regulatory fines. This one is tricky, but if your industry is such that a data breach or misuse of AI could lead to regulatory fines (GDPR fines, etc.), you could attempt to say OpenAI will be responsible for fines resulting from their breach of contract (like if they caused a data leak). Many vendors won’t explicitly agree to that, but it’s part of the conversation on why caps must be higher for data issues.
  • Indirect vs direct damages: Ensure the definition of indirect (consequential) damages doesn’t inadvertently shield OpenAI from things you care about. For example, lost profits are usually indirect, but what if your use of OpenAI is directly tied to revenue (say, a paid service whose downtime costs you sales)? You might consider that a direct loss. You will most likely have to accept a no-indirect-damages clause; however, confirm that costs such as replacing the service, investigating a breach, or regulatory penalties count as direct damages (since they flow directly from the incident). Some contracts explicitly list certain cost categories as direct damages.
  • Total vs per type of claim: Confirm whether the liability cap is aggregate (covering all claims together) or per incident. Aggregate is more common (one bucket for all). If possible, negotiate that the cap be reset annually or per incident, so that one bad incident doesn’t exhaust all their liability, leaving nothing for another. For instance, “in no event will either party’s total liability for each security breach exceed $X” – this is rare but possible to negotiate for specific categories.
  • Your liability to them: Remember, the limitation of liability usually applies to both parties mutually. OpenAI will also want to limit your liability to them. Typically, that’s fine since you’re mainly paying them, not causing them losses. Just ensure it’s reciprocal in a fair way. If they heavily cap their liability but try to hold you fully liable for some things, that’s imbalanced. Balance it by making caps and exclusions mutual, or at least logical (e.g., your liability for IP indemnity to them might also be uncapped if theirs is).
  • Insurance requirements: One way to mitigate limited liability is to ensure the vendor carries adequate insurance. You might include that OpenAI must maintain certain insurance (cyber liability, errors & omissions) with coverage above the cap and possibly name you an additional insured. This way, even if contractually they cap at X, they have insurance to pay out more if needed (though contract cap could still limit what you can claim – unless you argue gross negligence, etc.). It’s another layer of reassurance. You could also align the cap with their insurance coverage (e.g., if they have $5M cyber insurance, aim for a cap near that amount).
  • Realistic risk assessment: Conduct a risk modeling exercise during negotiation. What’s the worst-case scenario for using this service? Is it a data breach costing millions, an AI giving wrong advice leading to a lawsuit, or extended downtime paralyzing operations? Once you identify the nightmare scenario, think: under the contract as drafted, who shoulders the cost of that? If too much falls on your company, that’s a problem. Use that analysis to justify higher liability or specific provisions. For example, “If your AI output defames someone and we get sued for $1M, under your current terms, you’d owe us nothing because of the cap. That’s not acceptable since the risk originates from your model’s behavior. We need you to take on more liability in such cases.”
  • Example – liability in play: Suppose OpenAI’s service goes down for two days, causing a major disruption in your customer-facing app and breaching some of your SLAs with your clients. You incur $500,000 in costs issuing credits to your clients and lose $1 million of business because some clients leave. Under a strict liability clause, OpenAI might only owe you a couple of months of fees (maybe $50k) as damages, pointing to the no-lost-profits clause for the rest. If you negotiated well, you may have carved out a provision that “costs to remedy customer-facing impacts of downtime (such as service credits you must give) count as direct damages,” and secured a higher cap of $500k. You could then recover $500k from OpenAI, covering the credits you paid out. You would still eat the lost future business (typically an indirect loss), but at least you didn’t absorb the entire loss. This scenario illustrates how a nuanced liability clause can produce substantial savings (the short calculation after this list works through these numbers). A strong SLA with penalties, if you secured one, might provide additional remedies.
  • Disclaimers of warranty vs liability: OpenAI (like others) will include a disclaimer of warranty (it doesn’t guarantee that the AI’s output is correct, etc.). That is separate from liability. You will likely have to accept that OpenAI won’t warrant the AI’s accuracy or fitness for every purpose, and the liability clause then reinforces that it will not be held liable for consequences of inaccuracies. Hence, you must implement your own safeguards (human review, etc.). You might, however, at least get a warranty that the service will perform as described in the documentation (OpenAI does warrant that it will conform materially to the documentation), which is a limited warranty. Just be aware: you cannot realistically hold OpenAI liable for the AI giving a wrong answer (that risk is inherent in AI), so your risk management there is procedural. Focus your liability negotiations on areas where OpenAI has more control, such as security, IP compliance, and uptime, as discussed.
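
To make the cap mechanics concrete, here is a small, purely illustrative Python calculation that works through the downtime example above. All figures are hypothetical, and actual recovery always depends on the negotiated contract language.

```python
# Hypothetical figures mirroring the downtime scenario above; actual
# recovery always depends on the negotiated contract language.
service_credits_paid = 500_000    # credits issued to your clients
lost_future_business = 1_000_000  # lost profits: indirect, typically excluded

def recoverable(direct_losses: int, cap: int) -> int:
    """You recover at most the lesser of provable direct losses and the cap."""
    return min(direct_losses, cap)

strict_cap = 50_000       # roughly a couple of months of fees
negotiated_cap = 500_000  # the higher cap negotiated up front

# Under strict default terms the low cap bites (and the vendor may dispute
# whether service credits are "direct" damages at all).
print(recoverable(service_credits_paid, strict_cap))      # 50000
# Under the negotiated terms, credits are defined as direct damages and
# the higher cap covers them in full.
print(recoverable(service_credits_paid, negotiated_cap))  # 500000
# Lost profits stay unrecoverable either way, due to the
# no-consequential-damages clause.
print(f"Unrecoverable lost profits: {lost_future_business}")
```

Running the numbers this way during negotiation makes it obvious why both the cap level and the definition of direct damages matter.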

Key takeaways: Aim to strike a balance on the risk ledger.

OpenAI’s default stance will minimize their exposure – your job is to expand it to a reasonable level.

Increase caps where possible, and carve out critical issues (such as IP, data breaches, and willful misconduct) from any caps or exclusions.

While you likely can’t get unlimited liability in all respects (few vendors agree), you can often negotiate a middle ground that ensures your company isn’t left solely carrying the burden if a catastrophe happens because of OpenAI’s lapse.

The result should be a fair allocation of risk: OpenAI stands behind its product to a meaningful degree, and you commit to using it responsibly; each party is accountable for what they control.

12. Termination and Exit Rights

Even with the best planning, situations may arise where you need to terminate the contract or stop using the service.

Negotiating clear exit rights is important to avoid being trapped or suffering disruption if the relationship ends.

Consider both termination for cause (when someone breaches the contract) and termination for convenience (voluntarily, without breach), and ensure you can retrieve your data and mitigate any business impact:

  • Termination for cause (breach): The contract should allow either party to terminate if the other materially breaches the agreement and fails to cure the breach within a specified time (commonly 30 days). Ensure this clause is present and that the cure period is reasonable. For critical breaches (like a major violation of confidentiality or repeated SLA failures), you might want the option to terminate more quickly. Make sure your company has the right to terminate (not just OpenAI): if OpenAI is in breach – say, it consistently misses the SLA or uses your data without authorization – you should be able to exit the contract and ideally receive a refund of any prepaid fees for services not provided. Termination should also be allowed if OpenAI goes out of business or discontinues the service (their terms cover a party ceasing business or becoming insolvent).
  • Termination for convenience: This means ending the contract without the other party breaching – essentially opting out even if things are going fine. Vendors typically resist giving customers an easy out, especially if they’ve priced in a fixed-term discount. Still, ask whether you can terminate for convenience with notice (e.g., at any time on 60 days’ notice), or at least after a minimum usage period. Enterprises sometimes negotiate a mid-term termination right for changed circumstances (for instance, if regulatory changes make use of the service unlawful, you must be able to terminate). At the very least, you want convenience termination at renewal: if the contract auto-renews, you can decline to renew (termination at the end of the term), which should be granted as long as you give notice. If OpenAI won’t allow early termination without cause, consider the contract length carefully – don’t lock in a longer term than you’re comfortable with absent an exit clause.
  • Termination in case of policy or legal changes: Include clauses to protect you if external factors force a change. For example: “If a change in law or regulation makes it illegal or impractical for Customer to continue using the Services, Customer may terminate the Agreement with written notice and without penalty.” Similarly, if OpenAI’s policies change in a way that materially degrades what you signed up for (e.g., much stricter usage limits, or removal of a feature critical to you), you should have the right to exit. OpenAI’s standard terms allow it to update policies; stipulate that if any update is materially adverse, you may terminate.
  • Data retrieval and post-termination assistance: One of the biggest concerns at exit is retrieving your data and ensuring its deletion on the vendor’s side. The contract should state that upon termination or expiration, OpenAI will delete your data (their terms stipulate a 30-day timeframe). Before it does, though, you will want to export your data: make sure you can retrieve any stored prompts, outputs, fine-tuning results, and other data before the account is closed. Negotiate that OpenAI will provide a data export in a commonly usable format, via their API or a dump. Additionally, ask for a certification of data destruction for your compliance records once deletion is complete. If you have any custom models or configurations, verify whether they can be handed over and whether the relevant parameters/settings are documented.
  • Transition period: It can be beneficial to negotiate a short period after termination becomes effective, during which the service may continue to operate, allowing for a gradual transition. For instance, if you terminate, you could request that the service continue for 30 days (billed pro rata) to allow you to transition users off. Or, if they terminate your access (for instance, if you breach and they decide to cut it off), they may still need to provide you with limited access for a short time to retrieve your data. The contract could include: “Upon any termination, OpenAI will provide reasonable cooperation, for up to X days, to transition Customer off the Services, including continued data access, at Customer’s request and expense.”
  • Refunds and unused fees: If you prepaid for a year and terminate early (whether for cause or convenience), clarify what happens to the unused portion of fees. Many contracts stipulate that fees are non-refundable; however, if termination is due to OpenAI’s breach or a legal issue on their part, you should expect a pro-rata refund. If you terminate for convenience during a committed term, you might have to forfeit some fees or pay a termination charge – negotiate that down or out. Ideally: “If Customer terminates for OpenAI’s breach, OpenAI shall refund any fees paid for the period after termination.” Conversely, if OpenAI terminates your account because you breached, or if you terminate without cause, you may not receive a refund, but you can at least try to avoid additional penalties.
  • Survival of terms: Ensure that certain clauses survive termination – typically, confidentiality, IP ownership, indemnities (for any claims arising from the period), liability limits, etc., will survive. This is standard, but double-check to ensure that, for example, OpenAI’s obligation to keep your data confidential doesn’t lapse just because the contract ends.
  • Continuity for end-users: If your product integrates OpenAI, consider how you’ll continue service to your users if the contract ends. This is more of a planning item, but you may also consider negotiating escrow or special arrangements to ensure continuity. Software escrow (placing code in escrow) is generally irrelevant to an API service, as it cannot be easily self-hosted. However, perhaps an arrangement with a cloud partner (such as switching to Azure OpenAI, which utilizes the same models under Microsoft’s contract) could serve as your fallback. Not something OpenAI will include in their contract, but as a CIO, you plan for that externally. What you can do in the contract is ensure you aren’t contractually barred from making such contingency plans or migrating data to another provider.
  • Example – Executing Termination: Imagine you decide to switch to a different AI solution after a year due to cost issues. If you have a convenient termination right, you give the notice (say, 60 days before the end of the year) and prepare for migration. OpenAI, per the contract, provides a full export of all your Q&A logs and any fine-tuned model data. They also continue serving requests for 30 days after the termination effective date as a grace period (as you requested in negotiations), so your switchover to the new service is seamless for users. You then confirm the deletion of data on OpenAI’s side. Because you negotiated upfront, this exit was smooth and professional, avoiding panic or lost data. Conversely, if OpenAI were to terminate on you (maybe they pivot strategy and drop the product you use), your clause requiring advance notice and assistance means you’re not left in the lurch – you have a window to shift, and they must help minimize disruption.
  • Avoiding “evergreen” traps: One more thing – watch out for auto-renewal clauses that turn into de facto evergreen contracts. Always diary the notice period so you actively decide whether to renew; some companies have missed the window and been stuck for another year. Good negotiation and contract management go hand in hand: negotiate fair terms, then keep track of them (like termination notice deadlines). A simple deadline calculation, like the sketch below, can be wired into your contract-management calendar.
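
As a minimal sketch of that diary discipline, the following standard-library Python computes the non-renewal notice deadline. The dates and notice window are placeholders; substitute the actual values from your contract.

```python
from datetime import date, timedelta

# Placeholder values -- substitute the real dates from your contract.
term_end = date(2026, 3, 31)  # hypothetical end of the current term
notice_days = 60              # hypothetical non-renewal notice window

notice_deadline = term_end - timedelta(days=notice_days)
days_left = (notice_deadline - date.today()).days

if days_left <= 30:
    print(f"Decide on renewal now: notice due by {notice_deadline} "
          f"({days_left} days away).")
else:
    print(f"Non-renewal notice deadline: {notice_deadline}.")
```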

Key takeaways:

Ensure you can cleanly exit the relationship if needed, on your terms.

That means being able to terminate for cause with remedies, and ideally for convenience, to remain agile. It means not losing your data or momentum when leaving.

A fair exit clause also holds OpenAI accountable; they know you can leave if they don’t perform, which encourages good service.

While no one enters a partnership expecting a breakup, preparing for one in business is wise. Doing so protects your company’s continuity and leverage, regardless of the future.

13. Red Flags to Watch For

Throughout the negotiation, look for red flags—contract elements (or omissions) that could trouble your enterprise in the future.

Here’s a checklist of things that should raise concern and prompt further negotiation or clarification:

  • Data usage loopholes: If you see any clause that even vaguely suggests OpenAI could use your data beyond serving you, wave the red flag. For instance, wording like “OpenAI may use Customer data to improve its services” (without your consent) would be unacceptable – you’d need to strike or modify that. Ensure all provisions align with the promise of no secondary use of your data. Red flag: any lack of a clear statement that you own your data and outputs, or any indication they might use your content in aggregate. Also, beware if the contract is silent on data usage – silence is not golden here; insist on explicit language protecting your data.
  • Missing confidentiality obligations: As noted earlier, initial versions of consumer-oriented terms lacked a reciprocal confidentiality clause. In an enterprise deal, if you don’t see a confidentiality or non-disclosure section protecting your info, that’s a red flag. It must be added. Without it, your sensitive info might not be legally safeguarded (beyond data protection law). Therefore, never sign an agreement that doesn’t mark your data as confidential and require the vendor to protect it.
  • One-sided change rights: Be cautious if OpenAI retains too much freedom to change the rules on you. For example, a clause like “OpenAI may modify this Agreement or the Service upon notice” could allow significant changes to terms or features. You should at least require mutual agreement for any material changes or have the right to opt out (terminate) if you don’t agree. Red flag: short notice of price changes (14 days is too short for enterprise budgeting), or the ability to throttle or alter your service without good reason. Ensure any change clauses are tempered by your rights (notice, approval, termination).
  • No SLA or vague SLAs: If the contract draft does not mention uptime or support commitments, the service may be “best effort” – unacceptable if you depend on it. Red flag: phrases like “the service is provided as is, with no guarantee of availability” may pass in a consumer context, but an enterprise needs a real commitment. Additionally, if an SLA is present but offers no remedy (or only trivial credits), it needs improvement. A too-lenient SLA (e.g., only 95% uptime) can also be a flag if your needs are higher.
  • Overly restrictive usage terms: While you should comply with usage policies, be cautious of overly broad or ambiguous restrictions that could come back to haunt you. Take the earlier-mentioned restriction against using outputs to develop competing models: if your company does any AI development, could that clause be used to claim you’re in breach? It is broad enough to be concerning (“models that compete with OpenAI” could be interpreted widely). If you see such a clause, that’s a red flag to clarify and possibly narrow it – you don’t want to accidentally agree not to work on AI internally. Similarly, a restriction on “reverse engineering” the model is standard, but ensure it doesn’t prohibit necessary security testing or analysis for your own understanding; clarify acceptable activities.
  • Your liability uncapped vs. theirs capped: If you notice that your company’s liabilities (like your indemnity to them) are uncapped while theirs to you are capped, that imbalance is a red flag. Liability provisions should be reciprocal in principle. If they expect you to indemnify them for misuse, ensure that obligation is also capped, or at least no more onerous than their indemnity to you.
  • No indemnity from the vendor: If the draft contract lacks vendor indemnification (for example, if it’s silent on OpenAI defending you against IP claims), that’s a significant red flag. You’d be exposed to third-party legal actions with no support. Don’t proceed without adding a solid indemnity clause in your favor, as discussed. This is particularly critical given the unsettled IP landscape of AI – you need that promise in writing.
  • Mandatory arbitration or unfavorable jurisdiction: OpenAI’s terms include a mandatory arbitration clause and class action waiver. Depending on corporate policy, agreeing to arbitration can itself be a red flag (many enterprises prefer court, especially with a lot at stake). The location of arbitration or courts matters too (OpenAI may specify California law and venue). If your legal team isn’t comfortable with that, flag it. You may negotiate more neutral governing law (though big vendors often insist on their home turf). If arbitration is acceptable, at least ensure it’s a reputable forum, and perhaps carve out intellectual property disputes or injunctive relief (so you can go to court if you need to stop a data disclosure immediately).
  • Unlimited termination rights for vendor: Check whether OpenAI has any broad rights to terminate or suspend service beyond clear causes. It may suspend for things like legal requirements or policy violations – that’s expected. But a clause like “OpenAI may terminate for convenience with X days’ notice” is a red flag: you don’t want to be dropped unexpectedly. If present, negotiate stronger guarantees (e.g., no termination except for cause or at the end of the term); if OpenAI insists on keeping it, ask for ample notice and a penalty or refund.
  • Missing remedies for breach: If the contract doesn’t spell out what you can do if OpenAI fails to meet obligations (besides termination), that could be a flag. For example, missing an SLA is addressed by credits, while missing confidentiality is addressed by injunctive relief, and so on. Ensure the contract affirms that you can seek equitable relief (like a court injunction) if OpenAI threatens to leak data – some contracts try to limit even that. Don’t allow it.
  • Intellectual property of outputs unclear: Though we covered that you should own outputs, if the contract language is convoluted or gives OpenAI some rights to outputs beyond servicing you, clarify it. A subtle red flag would be wording like “OpenAI has a license to use outputs for any purpose” – unlikely in enterprise terms, but worth double-checking. Ensure nothing in the IP section undermines your business’s ability to use the outputs freely.
  • Publicity and reference rights: Often, vendors put a clause that they can use your company’s name/logo as a customer reference. If that’s a concern (some enterprises disallow it without permission), flag it. You can negotiate to remove it or require your approval before any press release or use of your name. This might not be a showstopper, but it is a detail to catch – you don’t want to find your logo on OpenAI’s website without knowing.
  • Ambiguous service descriptions: Ensure the contract (or an attached order form) clearly describes the service you are receiving – including model versions, capacity, and features. If it’s vague, that’s a flag because you might think you’re getting something and then not get it. For example, if you assume GPT-4 access is included but the contract simply states “access to OpenAI API” without specifics, clarify this. Ambiguity can lead to disputes later.
  • Performance metrics lacking: If you expect a certain throughput (requests per minute) or latency, and the contract doesn’t specify it, consider that a minor red flag. You may not always get these guaranteed, but if your solution needs it, bring it up.
  • Hidden costs: Scan for any mention of additional fees for support, certain volumes, overages, etc. If something is buried in the fine print (such as charging overages at a very high rate if you exceed a limit), flag it and negotiate. No one likes billing surprises.

Watch for these red flags to address potential pitfalls before signing.

Many simply require tweaking language or adding missing pieces (like a confidentiality clause or an indemnity).

The key is not to gloss over anything that feels “off” or unusually one-sided.

In summary, as you finalize the contract, do a sanity check against this checklist.

A well-negotiated agreement will delineate each party’s rights and duties, with no major red flags remaining. If something still stands out and OpenAI isn’t willing to budge, weigh the criticality of it.

You may accept a less-than-ideal term if the overall value is high and the risk is manageable; if not, keep pushing for a better term or reconsider the deal.

However, you should do so consciously, aware of the implications, rather than by accident.

If something looks like a red flag, trust your instincts and consult legal counsel; address it now rather than later, when it could have grown into a serious issue.

Read about our case studies and how we can help your organization negotiate better deals with OpenAI.

Read about our GenAI Contract Negotiation Services.

Before You Sign That GenAI Contract — What Enterprises Must Know About OpenAI, Azure OpenAI, and Us

Do you want to know more about our OpenAI Contract Negotiation Service?

Author
  • Fredrik Filipsson

    Fredrik Filipsson is the co-founder of Redress Compliance, a leading independent advisory firm specializing in Oracle, Microsoft, SAP, IBM, and Salesforce licensing. With over 20 years of experience in software licensing and contract negotiations, Fredrik has helped hundreds of organizations—including numerous Fortune 500 companies—optimize costs, avoid compliance risks, and secure favorable terms with major software vendors. Fredrik built his expertise over two decades working directly for IBM, SAP, and Oracle, where he gained in-depth knowledge of their licensing programs and sales practices. For the past 11 years, he has worked as a consultant, advising global enterprises on complex licensing challenges and large-scale contract negotiations.
