Who Owns What? A Guide to IP Rights in OpenAI Enterprise Agreements

OpenAI's enterprise agreements are designed to reassure businesses that they own their data and the AI-generated results.

In OpenAI's standard enterprise terms, you retain ownership of all inputs you provide and all outputs generated for you.

This brief examines how intellectual property rights operate in OpenAI enterprise agreements, covering model outputs, custom-trained models, and what constitutes a derivative work, so that IT, procurement, finance, and legal decision-makers can confidently evaluate and negotiate these contracts.

IP Ownership: Inputs and Outputs

One of the first questions enterprises ask is "Who owns the content we put into or get out of OpenAI's systems?" The good news is that OpenAI's enterprise terms give customers full ownership of both their inputs and outputs.

Any data, prompts, or documents your team provides remain your property.

Likewise, the text, code, or content that the AI model generates in response ("output") is owned by your organization from the moment you receive it. OpenAI even explicitly assigns to you any rights it might have in that output.

Why is this important? It means you can use AI-generated material freely in your business – incorporate it into products, reports, marketing copy, or software without fearing OpenAI will later claim ownership or royalties.

For example, if OpenAI's model helps draft a piece of code or an analyst report, your company can treat that output as it would any internally created asset.

This clarity on ownership is a key benefit of OpenAI's enterprise agreement, especially compared to older software models where vendors sometimes claimed rights in user-generated content.

That said, owning the output doesn't automatically guarantee the content is free of others' rights (more on that later).

But from an agreement standpoint, OpenAI ensures the IP chain of custody is in your favor – you own what you give and what you get.

Just make sure this ownership principle is clearly stated in your contract or order form (OpenAI's standard language covers it, but double-check any custom terms during negotiation).

Custom Models and Training Data: What's Yours vs. Theirs

Many enterprises plan to fine-tune AI models on their proprietary data or build custom AI solutions.

In OpenAI's ecosystem, you can indeed train custom models (fine-tuned versions of GPT) with your data.

It's crucial to understand the IP implications:

  • Your training data remains yours. Any datasets, documents, or prompt-completion pairs you upload for fine-tuning are your property. OpenAI doesn't take ownership of that data. Ensure the contract reflects this and that OpenAI only has a limited license to use it for performing the fine-tuning service on your behalf.
  • The fine-tuned model is exclusively for your use. OpenAI has stated that a model you fine-tune with your data will never be shared with other customers or used to train OpenAI's broader models. In practice, think of the fine-tuned model as your private instance of the model hosted by OpenAI. You get the benefit of a custom AI tailored to your needs, and competitors or other users won't have access to that specific model or the insights from your data. This protects the investment you make in training the model. (For example, if a bank fine-tunes a model on its proprietary financial data, only that bank can use the resulting tailored model – OpenAI won't turn around and offer it to other clients.)
  • Underlying model IP. It's essential to note that OpenAI retains ownership of the underlying AI technology and the platform. Your rights are akin to a license to use the customized model, not ownership of the model's code itself. In other words, you can use the fine-tuned model as long as you're using OpenAI's service, but you generally won't receive the model weights to run elsewhere. This is standard for cloud-based AI services. If owning the model outright or deploying it on-premises is a requirement for your business, you'd need to negotiate that explicitly (and it may not be something OpenAI allows, as they currently do not offer on-premises model deployment for their flagship models).
  • No unintended sharing. Ensure the agreement includes confidentiality and data protection commitments around your training data and any custom model. OpenAI's enterprise agreements typically promise to use your data only to provide the service to you (and not to improve their products without permission). As a best practice, also arrange for data deletion or retention rules (OpenAI usually offers data deletion within a set time frame by default). This ensures that, after fine-tuning or use, your proprietary data doesn't linger longer than necessary.
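
To make the "prompt-completion pairs" concrete, here is a minimal sketch of how a team might package proprietary training examples as a JSONL file before uploading them for fine-tuning. The examples and field contents are hypothetical, and the chat-style record layout reflects OpenAI's documented fine-tuning format at the time of writing; verify against the current API documentation before relying on it.

```python
import json

# Hypothetical prompt-completion pairs drawn from your own proprietary data.
training_examples = [
    {"prompt": "Summarize the Q3 risk report.",
     "completion": "Q3 risk exposure decreased 12% quarter over quarter..."},
    {"prompt": "Classify this inquiry: 'Where is my invoice?'",
     "completion": "billing"},
]

def to_chat_jsonl(examples, path):
    """Write prompt/completion pairs as chat-style JSONL records,
    one training example per line."""
    with open(path, "w", encoding="utf-8") as f:
        for ex in examples:
            record = {
                "messages": [
                    {"role": "user", "content": ex["prompt"]},
                    {"role": "assistant", "content": ex["completion"]},
                ]
            }
            f.write(json.dumps(record) + "\n")

to_chat_jsonl(training_examples, "train.jsonl")
```

Because this file embodies your proprietary data, treat it like any other confidential asset: control who can generate and upload it, and pair the upload with the deletion and retention commitments discussed above.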

By clearly delineating what's yours and what remains the vendor's, you protect your intellectual property while leveraging OpenAI's technology.

In summary: your data and outputs – yours; OpenAI's foundational model and platform – theirs (you just get to use it). This division should be documented in the contract to avoid any ambiguity later on.

Using AI Outputs: Rights and Restrictions

Because you own the AI's outputs under an OpenAI enterprise contract, your organization has broad rights to use those outputs however it sees fit. You can modify them, combine them with other content, and commercialize them.

For instance, if your team uses ChatGPT Enterprise to generate a draft policy document or creative content, you can edit and publish that content under your company's name.

There's no obligation to credit OpenAI or pay additional fees for distributing that output. This freedom is a major enabler for enterprise use cases – it treats the AI's work product like work-for-hire that belongs to you.

However, it's not absolute freedom.

OpenAI's terms impose certain restrictions on how you can use the outputs, mainly to protect their technology and comply with the law:

  • No using outputs to compete with OpenAI. The contract typically forbids using the model's output to develop AI models that compete with OpenAI's services. In practice, this means you shouldn't take large volumes of GPT-generated text and use it as training data to build your own large language model. (OpenAI wants to prevent someone from essentially using GPT-4 to reverse engineer a clone.) Permitted exceptions usually allow for using outputs in limited ways to build smaller analytics or classification models. For example, using GPT outputs to help train a simple model that classifies customer inquiries might be fine, but you can't use GPT-4's answers to bootstrap a GPT-4 competitor. If your business plans involve using AI-generated data to improve your internal AI models, clarify this with OpenAI. They often allow it for non-competitive uses, but it's wise to get written clarification or an exception in the contract if your use case is borderline.
  • Outputs may not be unique. OpenAI cautions that the same or similar output could be generated for another user. Just because you receive a particular AI-generated text does not mean it's exclusive to you. If another company inputs a very similar prompt, they might get a comparable response. OpenAI explicitly states that only the responses generated for you are your "Output." If similar text is generated independently for someone else, that's their output, not yours. Practically, this means you shouldn't assume your AI-generated content is one-of-a-kind. If uniqueness is crucial (for example, for creative content), you may need to implement measures, such as adding your own creative changes or using tools to check how common an output is. For most enterprise uses (reports, code, analyses), this is not an issue, but it's a point to be aware of.
  • Compliance with usage policies. Even though you own the output, you must use it in line with OpenAI's usage policies and applicable laws. The vendor's terms prohibit illegal use of the AI or output (for example, you can't take output and knowingly use it for fraudulent purposes or harassment). They also require that if you use the output publicly or in user-facing scenarios, you adhere to any content guidelines. These restrictions are generally common sense and aligned with your company's standards, but make sure to review OpenAI's usage policies. A notable point is that OpenAI's policies prohibit using their services to generate certain sensitive content (e.g., hate speech, violence, or regulated information). As an enterprise, ensure that using the output won't inadvertently put you in breach of these rules. Internally, set guidelines for staff on approved use cases.
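
To illustrate the "smaller analytics or classification models" carve-out, here is a deliberately tiny sketch of the kind of internal routing classifier that is usually considered non-competing. The labeled examples are hypothetical stand-ins for AI-generated answers a team has reviewed and labeled; confirm that your specific use is permitted, in writing, before relying on this pattern.

```python
from collections import Counter

# Hypothetical labeled examples. In practice these might be AI-generated
# answers your team has reviewed and labeled: a narrow, internal analytics
# use, not a competing language model.
labeled = [
    ("My invoice total looks wrong", "billing"),
    ("I was charged twice this month", "billing"),
    ("The app crashes on login", "support"),
    ("Password reset email never arrives", "support"),
]

def tokenize(text):
    return text.lower().split()

# Build a per-label bag-of-words profile from the labeled examples.
profiles = {}
for text, label in labeled:
    profiles.setdefault(label, Counter()).update(tokenize(text))

def classify(text):
    """Route a new inquiry to the label whose vocabulary it overlaps most."""
    words = tokenize(text)
    return max(profiles, key=lambda lbl: sum(profiles[lbl][w] for w in words))

print(classify("I think I was charged incorrectly on my invoice"))  # prints: billing
```

The point of the sketch is scale and purpose: a few hundred labeled snippets feeding a small internal router is a very different thing from harvesting millions of model responses to train a general-purpose model.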

Table: Key IP Terms and Considerations in OpenAI Enterprise Agreements

| IP Aspect | OpenAI's Standard Approach | Enterprise Considerations |
| --- | --- | --- |
| Ownership of Inputs | You retain ownership of all data and prompts you provide. | Confirm this is stated. Only share inputs you have rights to. Protect sensitive data (use NDAs/DPA as needed). |
| Ownership of Outputs | You own all AI-generated outputs you receive. OpenAI assigns any rights in the output to you. | Use outputs freely in products and internally. Still vet important outputs for any third-party content before broad use. |
| Model IP (OpenAI's Models) | OpenAI retains all rights to its base models and services. You get a license to use them. | You won't receive model source or weights. Plan for a dependency on OpenAI. Negotiate rights to your fine-tuned model's use (exclusive access). |
| Use of Outputs | Cannot use outputs to build competing AI models (with narrow exceptions for analytics). | Ensure this aligns with your plans. If you need to use outputs for ML, get clarification or an exception in writing. |
| Data Usage by OpenAI | OpenAI won't use your inputs or outputs to train its models by default (enterprise setting). Data used only to provide service (and for abuse prevention). | Verify confidentiality clauses. If you have extra-sensitive data, consider an on-prem or private cloud requirement (though OpenAI's solutions are cloud-based). |
| Feedback to Vendor | If you provide feedback or suggestions to OpenAI, they can use it freely (you grant them a license). | Be mindful when suggesting improvements or sharing ideas; you likely won't own any resulting enhancements. Consider this before sharing proprietary innovations. |
| Indemnity for IP Claims | OpenAI offers to defend/indemnify customers against third-party IP infringement claims arising from their technology (with some exceptions). | Check the indemnity clause covers model outputs and training data issues. If needed, negotiate broader protection or warranty for non-infringement, as discussed below. |

This table summarizes who owns what and how you can (or can't) use the AI's output. It's a handy checklist of clauses to double-check in any OpenAI enterprise agreement or order form you sign.

Third-Party IP and Derivative Content Concerns

Owning the output of the AI does not eliminate intellectual property risks. A common concern is: what if the AI's output inadvertently includes someone else's copyrighted or proprietary material?

For example, if you ask the model to generate a piece of code or a song lyric, is there a chance it might reproduce passages from its training data? And if so, who is liable?

OpenAI's stance (and the emerging legal consensus) is that most AI-generated content is highly transformative and not a verbatim copy of training data – thus, not automatically a derivative work of that data.

The model doesn't retrieve exact text unless prompted in very specific ways; it generates new text based on patterns.

However, exceptions exist. There have been instances where code or literature from training data was reproduced quite closely.

If an AI output happens to match copyrighted text (say, it spits out a famous poem line or a snippet of licensed code), your organization could face a third-party claim if you use that output commercially.

Here's how to navigate these issues:

  • Understand your responsibilities. OpenAI's terms typically place the onus on the user to ensure their use of the output doesn't violate any laws or rights. In practice, this means your company should implement processes to review AI outputs, especially those used externally, for potential IP conflicts. For example, if generating text that will be published, run it through a plagiarism checker or have legal review anything that feels too "real" (e.g., a detailed news excerpt). For code outputs, consider using open-source compliance tools to see if a generated function matches known licensed code. Owning the output doesn't shield you if that output unknowingly contains someone else's protected material.
  • Leverage indemnities and warranties. The enterprise agreement's indemnification clause (covered in the next section) is a safety net: OpenAI usually agrees to defend you if a third party claims that OpenAI's technology or output infringed their IP. However, this protection can have carve-outs (for instance, if the issue arose from your specific input or misuse). It's wise to negotiate clarity here. At a minimum, ask OpenAI to affirm that the service isn't knowingly providing plagiarized or infringing content. They may not promise perfection (no AI can), but even a warranty that "to the best of OpenAI's knowledge, the service does not intentionally output material known to be third-party copyrighted text beyond de minimis amounts" can be useful reassurance.
  • Monitor high-risk areas. Certain outputs carry higher IP risk: song lyrics, lengthy code, or verbatim articles are more likely to match known works. If your use case involves these (such as generating software code), be especially careful. Some enterprises put an internal AI usage policy in place. For example, "AI-generated code must be reviewed for open-source licenses before use in production," or "AI-written text for marketing must be edited to ensure originality." These internal safeguards complement the contract protections.
  • Keep an eye on evolving law. The legal landscape around AI and copyright is still developing (with ongoing lawsuits claiming that AI outputs might infringe copyrights if too similar to training data). Courts so far have been skeptical of broad "the AI output is automatically a derivative work" claims. Still, as an enterprise, staying informed is key. If new rulings or regulations emerge, you might need to adjust your practices or even update contract terms at renewal. OpenAI itself might update its policies as these issues evolve – your agreement should allow you some flexibility if a change in law significantly impacts your risk.
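
As a cheap first-pass screen of the kind described above, an n-gram overlap check can flag outputs that share long verbatim runs with known third-party texts. This is a hypothetical sketch, not a substitute for a commercial plagiarism tool or legal review, and the eight-word threshold is an arbitrary assumption you would tune for your own corpus.

```python
def ngrams(text, n=8):
    """All overlapping word n-grams in a text (lowercased)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def flag_verbatim_overlap(output, known_texts, n=8):
    """Return True if the AI output shares any n-word verbatim run with a
    known third-party text. A cheap first-pass screen, not legal clearance."""
    out_grams = ngrams(output, n)
    return any(out_grams & ngrams(doc, n) for doc in known_texts)
```

In practice the reference corpus would be the licensed or third-party material your teams are most likely to collide with (key competitor publications, open-source code you must track, style-guide sources), with anything flagged routed to human review.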

In summary, treat AI outputs with the same due diligence you would any third-party content introduced into your workflow.

You hold the legal title to the content, but you also want to ensure you have the freedom to use it without interference.

That means being proactive about screening for IP conflicts and securing contractual promises that back you up if something slips through.

Indemnification and Liability: Covering IP Risks

Even with careful use, enterprises want assurance that if something goes wrong on the IP front, they won't be left holding the bag alone. Indemnification is the primary contract mechanism in this case.

In an OpenAI enterprise agreement, look for an IP indemnity clause where OpenAI commits to defend and compensate your company if a third party sues, claiming the AI service or its outputs infringed their intellectual property rights.

OpenAIโ€™s standard business terms do offer an IP indemnity, which is a strong positive for customers.

Typically, it says OpenAI will indemnify you against claims that the OpenAI services (including the model outputs) violate a third party's IP rights, with certain exceptions.

Those exceptions commonly include instances where you used the service in an unauthorized manner or if the claim is due to content you provided.

For example, if you input someone else's proprietary text and then there's a claim, that's on you, not on OpenAI. However, if a claim arises solely from how the model was trained or an output it generated independently, OpenAI should cover it.

Make sure the clause explicitly covers both the model and its training data – essentially anything under OpenAI's control.

A few tips around indemnities and liability in your negotiation:

  • Push for uncapped IP indemnity if possible. Vendors often cap their liability but carve indemnification obligations out of those caps (since a serious IP lawsuit could far exceed a typical cap). Confirm that any indemnity OpenAI provides is either uncapped or has a cap high enough, and separate from general liability, to truly protect you. This aligns OpenAI's incentives to stand behind their product's safety. OpenAI's contracts have been trending toward excluding IP indemnity from the overall liability cap – double-check this in your draft and negotiate it explicitly if not.
  • Address remediation steps. Ask what OpenAI would do if an IP issue is identified. A good indemnification clause might say that OpenAI can choose to modify the service or output to avoid infringement, obtain a license for you to continue using it, or, as a last resort, terminate the service with a refund. Knowing these options is helpful. If, for instance, a news organization claims that your AI-generated report infringes on their article, indemnity means that OpenAI handles the legal defense and any resulting settlement. They might also tweak the model to avoid repeating that mistake. Ensure your team promptly notifies OpenAI if any IP claim arises – timely notice is typically a condition for indemnity.
  • Consider additional warranties. Indemnity covers third-party lawsuits, but what about your losses if something goes awry? Vendors rarely indemnify for things like defamation or inaccurate content, but you might negotiate a simple warranty that the service, as provided, isn't knowingly outputting illegal content or infringing material. OpenAI will be cautious here, but even a mild warranty coupled with your testing protocols is better than nothing. It sets an expectation of quality. Also, check for a warranty that the outputs will be original to a degree (acknowledging the nature of AI). Most likely, OpenAI will stick to "service as is" except for an indemnity, which makes your due diligence (and possibly insurance) important.
  • Your indemnity to OpenAI. Expect that the agreement will also require you to indemnify OpenAI for certain things – typically if your use of the service causes OpenAI to be sued (for example, if you provide illegal data or use the AI to libel someone and they sue OpenAI). This is standard. Just ensure it's narrowly scoped: you indemnify them for misuse or for claims arising from the content you provided, not for the normal operation of the AI on clean inputs. In negotiation, confirm that your indemnity doesn't cover "ordinary risks" that should be on OpenAI. Each party should cover the risks under their control: OpenAI covers the model and its outputs; you cover how you choose to use those outputs.

By securing a solid indemnification and understanding the liability structure, you create a safety net. It means that in the worst-case scenario – say, an IP lawsuit in a few years claiming your AI-generated content infringed something – you have contractual backup.

For many enterprise buyers, this clause is non-negotiable and may even be a board-level concern.

Don't hesitate to discuss it in depth with OpenAI's team; you may find that they have already built in robust protections to ease these worries, given how often the question of AI and IP comes up.

Negotiating OpenAI Agreements: Practical Considerations

When approaching an OpenAI enterprise deal, come prepared with a clear list of IP-related points to address.

OpenAI's standard agreement is a strong starting point, but every enterprise has unique needs.

Here are some practical insights and tactics for negotiation and contract management regarding IP:

  • Get clarity on definitions. Ensure terms like "Input," "Output," and "Customer Content" are clearly defined in the contract. OpenAI defines them in favor of the customer (your data in, your data out), which is good. But double-check that any custom agreement or Order Form doesn't override those definitions in a problematic way. If your company uses multiple OpenAI services (e.g., API and ChatGPT Enterprise), confirm that the ownership and usage terms apply uniformly across all.
  • Address data residency or transfer if needed. While not strictly related to IP ownership, some enterprises in finance or government care about where their data and outputs are stored/processed. OpenAI's default is to process in the U.S. (and they have SOC 2 compliance, etc.). If your policies require data localization or special handling, be sure to include this in negotiations. This ties into IP in that you want to maintain control and confidentiality of your intellectual assets at all times.
  • Verify end-of-contract handling. What happens if you terminate the contract? OpenAI's terms state that they will delete customer content after a specified period (typically 30 days) once the agreement ends. This is good from an IP protection standpoint – you don't want them retaining your proprietary prompts or outputs indefinitely. Make sure this deletion commitment is in place. Additionally, arrange to export any important outputs or data before termination so you don't lose work results. For example, if you fine-tuned a model or built a knowledge base of prompts, ensure you can retrieve those artifacts (outputs, not the model weights) for your records.
  • Consider future audit and compliance needs. If your industry might require an audit trail of how IP was generated (e.g., showing that an AI wrote a section of code for regulatory reasons), plan for that. The contract could include a right to certain usage logs or a cooperation clause that allows OpenAI to provide information showing how outputs were generated (without revealing their secret sauce, of course). This is a niche concern, but it can arise in regulated sectors. Being proactive in the contract can save headaches later.
  • Leverage benchmarks and alternatives. If you're evaluating OpenAI alongside other AI providers, compare their IP stances. OpenAI's "you own the output" stance is relatively generous. Some competitors might have similar terms, but always check the fine print. Use this comparison in negotiation: if another vendor offers a stronger warranty or more flexibility in usage, you can ask OpenAI to match it. Conversely, if OpenAI's terms are better, that's a selling point to justify choosing them, which procurement can take note of. The goal is to ensure your enterprise isn't getting a worse deal than the industry standard on IP rights.
  • Keep lawyers and tech folks in sync. Negotiating AI contracts is multidisciplinary. Legal professionals should understand the technology's nuances (such as what fine-tuning involves), and IT professionals should understand the legal terms (such as why they cannot simply export a model). Bring your AI experts, security, and legal to the same table with OpenAI's team. Often, questions about IP can be resolved with a simple explanation or a minor contract tweak. For example, if you need an exception to use outputs in a certain way, OpenAI might agree as long as you're not broadly redistributing their model output competitively.
  • Plan for ongoing governance. Once the contract is signed, managing IP doesn't stop. Establish an internal governance process for the use of AI. Track what types of content you are creating with OpenAI. Periodically review whether the outputs are meeting your expectations for originality and quality. If any issues arise (such as an output that appears too similar to copyrighted text), document them and notify OpenAI so they can address the issue. This not only protects you, but gives OpenAI feedback to improve their models or filters – a win-win that doesn't require giving up your IP.

By being an active participant in the contracting process and the subsequent usage of the AI, you build a trusted partnership with OpenAI.

They get a customer who uses the technology responsibly, and you get a vendor who respects your intellectual property boundaries. This balance is exactly what a well-negotiated OpenAI enterprise agreement should achieve.

Recommendations

  1. Explicitly confirm IP ownership in writing: Ensure your contract clearly states that your organization retains ownership of all inputs and owns all outputs. This should mirror OpenAI's standard terms – if anything is unclear, get it clarified or added in an addendum. A quick clause reaffirming "Customer owns the content it provides and all results produced" removes any doubt.
  2. Secure broad IP indemnification: Don't proceed without an indemnity from OpenAI for intellectual property claims. Negotiate it to cover any third-party claims arising from the AI's operation or outputs (aside from your misuse). Verify that this indemnity is uncapped or has a high cap that is separate from other liabilities. It's your safety net – make it strong.
  3. Include a non-training and confidentiality clause: Make sure the agreement (or a supporting DPA) states that OpenAI will not use your data or outputs to improve their models or share them with others. OpenAI's enterprise policy already does this, but having it in the contract is vital. This protects your proprietary information and competitive advantage.
  4. Clarify acceptable use of outputs: If you intend to use AI outputs in training internal tools or other projects, discuss it upfront. Document any permitted uses that might otherwise fall under the "don't use outputs to compete" restriction. For example, get written permission if you plan to use ChatGPT-generated text as part of a dataset to train a smaller internal model. Nail down these details to avoid a breach of contract later.
  5. Ask for quality assurances (within reason): While no AI company will guarantee perfection, you can request a warranty that the service isn't intentionally providing known copyrighted material. Even a modest assurance or commitment to assist if problematic output appears is worthwhile. This could be as simple as OpenAI agreeing to cooperate to remediate any output that is alleged to infringe someone's IP.
  6. Plan for IP review processes internally: As part of managing the contract, set up internal steps for reviewing outputs that will be widely used (especially external content). For instance, require a legal or editorial review for AI-generated text before publishing, or code review for AI-written code. This isn't a contractual term, but rather a recommendation to operationalize your IP risk management. It will complement whatever the contract promises by catching issues early on.
  7. Use contract renewal to update terms: The AI field is evolving. At renewal time, revisit the IP clauses. If new industry standards or concerns have emerged (such as new laws on AI data usage), use the renewal as an opportunity to update the agreement. OpenAI might also improve its terms over time. Keep the dialogue open to ensure your contract remains aligned with best practices.
  8. Educate your users and stakeholders: Make sure your team knows the dos and don'ts from the contract. If the agreement says "don't input sensitive personal data" or "don't use outputs to create competing models," translate that into clear internal guidelines. Empower your end-users (developers, analysts, etc.) with knowledge of what they can safely do with the AI and its outputs. This prevents inadvertent breaches and protects IP proactively.
  9. Keep records of AI-generated content: It can be useful to log which content was generated by AI (and when/by whom). This is more of a governance tip. In case of any future IP dispute or question, you have an audit trail. Some enterprises tag AI outputs or use metadata to mark them. This ties into compliance and IP management, ensuring transparency about the origin of content.
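
The record-keeping in the last recommendation can be as lightweight as an append-only log. The sketch below is one hypothetical shape for such a record (the field names, file name, and example values are all illustrative, not a prescribed schema): hashing the content gives you a stable fingerprint to match against later, without storing the content twice.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_ai_output(content, author, model, log_path="ai_provenance.jsonl"):
    """Append a provenance record for a piece of AI-generated content.
    Field names are illustrative; adapt them to your own schema."""
    record = {
        "sha256": hashlib.sha256(content.encode("utf-8")).hexdigest(),
        "author": author,
        "model": model,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

rec = log_ai_output("Draft market summary for Q3 board deck.", "j.doe", "gpt-4")
```

If a dispute ever arises over a specific piece of content, re-hashing the disputed text and searching the log tells you immediately whether, when, and by whom it was generated.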

Checklist: 5 Actions to Take

  1. Review the OpenAI Agreement's IP Clauses: Obtain the latest OpenAI enterprise terms or your draft contract. Highlight sections on ownership, usage rights, confidentiality, and indemnification. Make sure you fully understand these clauses internally (legal and procurement should interpret how they protect your interests).
  2. Map Your Use Cases to the Contract: List how your enterprise intends to use OpenAI's services (e.g., generating internal reports, customer-facing content, fine-tuning a model on your data, etc.). For each use case, check the contract for alignment. Are you covered to do that? Identify any gaps or restrictions (for example, if you plan to use outputs in a certain way, ensure it's allowed).
  3. Identify Negotiation Priorities: Based on the review, pinpoint what needs negotiating. Common ones include adding or strengthening an IP indemnity clause, confirming a commitment to no training on your data, adjusting any usage restrictions that conflict with your goals, and ensuring that data handling meets your requirements. Prepare specific language or requests for these points.
  4. Engage OpenAI (or Reseller) with Your Terms: Initiate the negotiation by conveying your needed changes or confirmations. For each concern, propose a solution: e.g., "We'd like to add that we own outputs and OpenAI assigns rights to us (as per standard terms), just to be explicit," or "We need an indemnification clause that covers any output-based IP claims." Document their responses. Most likely, OpenAI will accommodate reasonable requests that align with their standard policy. Where they push back, understand why, and see if an alternative protection can be put in place.
  5. Finalize and Educate: Once the contract reflects a fair balance (your inputs/outputs are safe, your risks are mitigated, and OpenAI's interests are protected as well), finalize the agreement. Then educate your team about the key points. For instance, brief your developers that "we can use the content freely, but remember we agreed not to try to reverse-engineer the model or use its answers to build our own GPT." Additionally, establish a point of contact (perhaps someone in legal or IT governance) for any questions or concerns that arise while using OpenAI. With the contract signed and the team informed, you are set to deploy OpenAI's tech with confidence that your IP is safeguarded.

FAQ

Q1: Do we (the customer) own all outputs from OpenAI's models, even code or creative content?
A: Yes. Under OpenAI's enterprise agreements, your organization is considered the owner of all AI-generated output that you receive, to the extent allowed by law. This includes code, text, images – any content the model produces for you. OpenAI assigns any of its rights in that content over to you. In practical terms, you can treat the output as you would material created by an employee or contractor. Remember, though, that owning the output doesn't automatically clear it of external IP claims: it's yours, but you should still ensure it's safe to use (no confidential or copyrighted material from others hiding in it).

Q2: Can OpenAI use our prompts or data that we input into the system for its purposes?
A: Not without your permission in the enterprise context. OpenAI's enterprise terms state that they will only use your inputs (and outputs) to deliver the service to you – including tasks such as processing the prompt, generating the response, and ensuring compliance with laws and safety requirements. They explicitly will not use your business data or prompts to train their public models or improve their services unless you opt in to such a program. This is a key differentiator of the enterprise offering (for consumer users, data might be used to improve the model by default, but not so for businesses). Your data remains confidential and is not shared with others for their benefit.

Q3: If the AI output includes something copyrighted (like a paragraph from a book), do we get in trouble for using it?
A: You could be at risk if you use it verbatim without permission. Owning the output means OpenAI won't claim it – but it doesn't grant you a license to someone else's content that might be embedded in that output. If an AI accidentally produces a substantial excerpt from a copyrighted work, the original author or publisher still has rights over that excerpt. In such a case, your company should treat it like any third-party content: either don't use it, or secure permission/licensing if you need to use it. That said, outright copying is rare with text models unless prompted specifically. Many enterprises mitigate this by filtering or checking outputs. Also, your contract's indemnification clause with OpenAI is there to cover you if a third party claims the model's output infringes their IP – so OpenAI would step in to handle the legal side. But it's best to catch and avoid using any copied output in the first place.

Q4: We plan to fine-tune GPT-4 on our proprietary data – do we then own the fine-tuned model?
A: You will have exclusive use of that fine-tuned model, but not ownership in the traditional sense. The fine-tuned model is built on OpenAI's technology with your data adjustments. OpenAI will host and run that model for you, and they contractually commit that no one else can use your fine-tuned model and that they won't incorporate your fine-tuning data into the base product. In effect, it's your custom AI service. OpenAI, however, still retains ownership of the underlying AI model and platform. Think of it like this: you own your data and the outputs, and you "own" the configuration that is the fine-tuned model in the sense that it's for your eyes only – but you can't take that model and give it to another vendor or run it on your own servers (unless OpenAI agrees to this, which they generally don't for proprietary models). If model ownership is a sticking point, discuss it with OpenAI; they may offer solutions like escrow or special terms, but the arrangement is typically exclusive use rather than a transfer of ownership.

Q5: Are we permitted to integrate OpenAI's outputs into our products and services for commercial purposes? Are there any catches we should be aware of?
A: Yes, you are allowed – that's a primary use of the service. If you use OpenAI's model to generate content (text, images, etc.), you can include those outputs in your commercial products, services, or internal operations without needing an additional license from OpenAI. For example, if ChatGPT helps generate a FAQ section for your app or code for a software feature, you can deploy that. The "catches" to keep in mind: (1) You cannot present OpenAI's output as if OpenAI endorses your product (avoid using their name or branding in a way that implies partnership unless you have one). (2) You shouldn't expose the raw AI model interface to your end-users unless that's part of your agreement (different from embedding the content, which is fine). (3) Ensure the content aligns with legal and ethical norms – you wouldn't want to publish something the AI said that might be problematic or unvetted. And internally, maintain attribution where needed: some companies mark content as AI-generated in metadata to track it. But in terms of IP, as long as you've vetted the output, you are free to monetize and use it as you see fit. This ability to commercialize outputs is a big reason enterprises opt for the paid service, and OpenAI's contract supports it.
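The practice mentioned above – tagging AI-generated content with metadata so it can be tracked and vetted before commercial use – can be implemented in many ways. As one illustrative sketch (not part of OpenAI's terms; the class, field names, and model label are hypothetical), a minimal Python wrapper might look like this:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class GeneratedAsset:
    """Wraps a piece of AI-generated content with provenance metadata.

    This is an illustrative internal-tracking pattern, not an OpenAI API.
    """
    content: str
    model: str                   # which model produced it, e.g. "gpt-4" (assumed label)
    vetted: bool = False         # flipped to True only after human/IP review
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def approve(self) -> "GeneratedAsset":
        """Mark the asset as reviewed and cleared for commercial use."""
        self.vetted = True
        return self

# Usage: tag a generated answer before it enters your CMS or codebase
asset = GeneratedAsset(
    content="Our app supports single sign-on via SAML.",
    model="gpt-4",
)
asset.approve()
print(asset.vetted)  # True
```

Storing provenance like this makes it straightforward to audit later which published material was AI-assisted and whether it passed review – useful evidence if an IP question ever arises.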

Read about our GenAI Negotiation Service.

The 5 Hidden Challenges in OpenAI Contracts – and How to Beat Them

Read about our OpenAI Contract Negotiation Case Studies.

Would you like to discuss our OpenAI Negotiation Service with us?

Author
  • Fredrik Filipsson

    Fredrik Filipsson is the co-founder of Redress Compliance, a leading independent advisory firm specializing in Oracle, Microsoft, SAP, IBM, and Salesforce licensing. With over 20 years of experience in software licensing and contract negotiations, Fredrik has helped hundreds of organizations, including numerous Fortune 500 companies, optimize costs, avoid compliance risks, and secure favorable terms with major software vendors. Fredrik built his expertise over two decades working directly for IBM, SAP, and Oracle, where he gained in-depth knowledge of their licensing programs and sales practices. For the past 11 years, he has worked as a consultant, advising global enterprises on complex licensing challenges and large-scale contract negotiations.

