Who Owns What? A Guide to IP Rights in OpenAI Enterprise Agreements
OpenAI's enterprise agreements are designed to reassure businesses that they own their data and the AI-generated results.
In OpenAI's standard enterprise terms, you retain ownership of all inputs you provide and all outputs generated for you.
This brief will examine how intellectual property rights operate in OpenAI Enterprise Agreements, covering model outputs, custom-trained models, and what constitutes derivative work, enabling IT, procurement, finance, and legal decision-makers to confidently evaluate and negotiate these contracts.
IP Ownership: Inputs and Outputs
One of the first questions enterprises ask is "Who owns the content we put into or get out of OpenAI's systems?" The good news is that OpenAI's enterprise terms give customers full ownership of both their inputs and outputs.
Any data, prompts, or documents your team provides remain your property.
Likewise, the text, code, or content that the AI model generates in response ("output") is owned by your organization from the moment you receive it. OpenAI even explicitly assigns to you any rights it might have in that output.
Why is this important? It means you can use AI-generated material freely in your business: incorporate it into products, reports, marketing copy, or software without fearing OpenAI will later claim ownership or royalties.
For example, if OpenAI's model helps draft a piece of code or an analyst report, your company can treat that output as it would any internally created asset.
This clarity on ownership is a key benefit of OpenAI's enterprise agreement, especially compared to older software models where vendors sometimes claimed rights in user-generated content.
That said, owning the output doesn't automatically guarantee the content is free of others' rights (more on that later).
But from an agreement standpoint, OpenAI ensures the IP chain of custody is in your favor: you own what you give and what you get.
Just make sure this ownership principle is clearly stated in your contract or order form (OpenAI's standard language covers it, but double-check any custom terms during negotiation).
Custom Models and Training Data: What's Yours vs. Theirs
Many enterprises plan to fine-tune AI models on their proprietary data or build custom AI solutions.
In OpenAI's ecosystem, you can indeed train custom models (fine-tuned versions of GPT) with your data.
It's crucial to understand the IP implications:
- Your training data remains yours. Any datasets, documents, or prompt-completion pairs you upload for fine-tuning are your property. OpenAI doesn't take ownership of that data. Ensure the contract reflects this and that OpenAI only has a limited license to use it for performing the fine-tuning service on your behalf.
- The fine-tuned model is exclusively for your use. OpenAI has stated that a model you fine-tune with your data will never be shared with other customers or used to train OpenAI's broader models. In practice, think of the fine-tuned model as your private instance of the model hosted by OpenAI. You get the benefit of a custom AI tailored to your needs, and competitors or other users won't have access to that specific model or the insights from your data. This protects the investment you make in training the model. (For example, if a bank fine-tunes a model on its proprietary financial data, only that bank can use the resulting tailored model; OpenAI won't turn around and offer it to other clients.)
- Underlying model IP. Note that OpenAI retains ownership of the underlying AI technology and the platform. Your rights are akin to a license to use the customized model, not ownership of the model's code itself. In other words, you can use the fine-tuned model as long as you're using OpenAI's service, but you generally won't receive the model weights to run elsewhere. This is standard for cloud-based AI services. If owning the model outright or deploying it on-premises is a requirement for your business, you'd need to negotiate that explicitly (and it may not be something OpenAI allows, as they currently do not offer on-premises deployment of their flagship models).
- No unintended sharing. Ensure the agreement includes confidentiality and data protection commitments around your training data and any custom model. OpenAI's enterprise agreements typically promise to use your data only to provide the service to you (and not to improve their products without permission). As a best practice, also arrange for data deletion or retention rules (OpenAI usually offers data deletion within a set time frame by default). This ensures that, after fine-tuning or use, your proprietary data doesn't linger longer than necessary.
By clearly delineating what's yours and what remains the vendor's, you protect your intellectual property while leveraging OpenAI's technology.
In summary: your data and outputs are yours; OpenAI's foundational model and platform are theirs (you just get to use it). This division should be documented in the contract to avoid any ambiguity later on.
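To make that division concrete, here is a minimal sketch of what the fine-tuning workflow looks like through OpenAI's Python SDK. It assumes the current `openai` Python package (v1.x); the file name, base model identifier, and suffix are placeholders for illustration, and your contract terms, not this code, determine who owns what.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# 1. Upload your proprietary training data (prompt/completion pairs in JSONL).
#    The data remains your property; OpenAI only needs a limited license to
#    run the fine-tuning job on your behalf.
training_file = client.files.create(
    file=open("proprietary_examples.jsonl", "rb"),  # hypothetical file name
    purpose="fine-tune",
)

# 2. Start the fine-tuning job. The resulting model is a private,
#    customer-specific instance hosted by OpenAI: you get exclusive use of it,
#    but the underlying weights and platform remain OpenAI's IP.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",  # illustrative base model; confirm what your plan covers
    suffix="acme-contracts",         # hypothetical label identifying your org's private copy
)
print(job.id, job.status)
```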
Using AI Outputs: Rights and Restrictions
Because you own the AI's outputs under an OpenAI enterprise contract, your organization has broad rights to use those outputs however it sees fit. You can modify them, combine them with other content, and commercialize them.
For instance, if your team uses ChatGPT Enterprise to generate a draft policy document or creative content, you can edit and publish that content under your company's name.
There's no obligation to credit OpenAI or pay additional fees for distributing that output. This freedom is a major enabler for enterprise use cases: it treats the AI's work product like work-for-hire that belongs to you.
However, it's not absolute freedom.
OpenAI's terms impose certain restrictions on how you can use the outputs, mainly to protect their technology and comply with the law:
- No using outputs to compete with OpenAI. The contract typically forbids using the model's output to develop AI models that compete with OpenAI's services. In practice, this means you shouldn't take large volumes of GPT-generated text and use it as training data to build your own large language model. (OpenAI wants to prevent someone from essentially using GPT-4 to reverse engineer a clone.) Permitted exceptions usually allow for using outputs in limited ways to build smaller analytics or classification models. For example, using GPT outputs to help train a simple model that classifies customer inquiries might be fine, but you can't use GPT-4's answers to bootstrap a GPT-4 competitor (see the sketch after this list). If your business plans involve using AI-generated data to improve your internal AI models, clarify this with OpenAI. They often allow it for non-competitive uses, but it's wise to get written clarification or an exception in the contract if your use case is borderline.
- Outputs may not be unique. OpenAI cautions that the same or similar output could be generated for another user. Just because you receive a particular AI-generated text does not mean it's exclusive to you. If another company inputs a very similar prompt, they might get a comparable response. OpenAI explicitly states that only the responses generated for you are your "Output." If similar text is generated independently for someone else, that's their output, not yours. Practically, this means you shouldn't assume your AI-generated content is one-of-a-kind. If uniqueness is crucial (for example, for creative content), you may need to implement measures, such as adding your own creative changes or using tools to check how common an output is. For most enterprise uses (reports, code, analyses), this is not an issue, but it's a point to be aware of.
- Compliance with usage policies. Even though you own the output, you must use it in line with OpenAI's usage policies and applicable laws. The vendor's terms prohibit illegal use of the AI or its output (for example, you can't take output and knowingly use it for fraudulent purposes or harassment). They also require that if you use the output publicly or in user-facing scenarios, you adhere to any content guidelines. These restrictions are generally common sense and aligned with your company's standards, but make sure to review OpenAI's usage policies. A notable point is that OpenAI's policies prohibit using their services to generate certain sensitive content (e.g., hate speech, violence, or regulated information). As an enterprise, ensure that using the output won't inadvertently put you in breach of these rules. Internally, set guidelines for staff on approved use cases.
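To illustrate the kind of non-competitive reuse that is typically permitted, the sketch below trains a small inquiry classifier on examples whose labels were produced by a GPT model. It is a minimal scikit-learn example with invented data; whether a specific use falls inside the "no competing models" restriction is a contractual question to confirm with OpenAI, not something this code can answer.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical customer inquiries labeled by a GPT model (the "output" you own).
# Using outputs to build a small internal classifier, rather than a rival large
# language model, is the kind of use generally treated as non-competitive.
inquiries = [
    "My invoice shows the wrong amount",
    "How do I reset my password?",
    "The mobile app crashes on startup",
    "Please cancel my subscription",
]
gpt_labels = ["billing", "account", "technical", "billing"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(inquiries, gpt_labels)

print(classifier.predict(["I was charged twice this month"]))  # likely: ['billing']
```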
Table: Key IP Terms and Considerations in OpenAI Enterprise Agreements
| IP Aspect | OpenAI's Standard Approach | Enterprise Considerations |
| --- | --- | --- |
| Ownership of Inputs | You retain ownership of all data and prompts you provide. | Confirm this is stated. Only share inputs you have rights to. Protect sensitive data (use NDAs/DPA as needed). |
| Ownership of Outputs | You own all AI-generated outputs you receive. OpenAI assigns any rights in the output to you. | Use outputs freely in products and internally. Still vet important outputs for any third-party content before broad use. |
| Model IP (OpenAI's Models) | OpenAI retains all rights to its base models and services. You get a license to use them. | You won't receive model source or weights. Plan for a dependency on OpenAI. Negotiate rights to your fine-tuned model's use (exclusive access). |
| Use of Outputs | Cannot use outputs to build competing AI models (with narrow exceptions for analytics). | Ensure this aligns with your plans. If you need to use outputs for ML, get clarification or an exception in writing. |
| Data Usage by OpenAI | OpenAI won't use your inputs or outputs to train its models by default (enterprise setting). Data is used only to provide the service (and for abuse prevention). | Verify confidentiality clauses. If you have extra-sensitive data, consider an on-prem or private cloud requirement (though OpenAI's solutions are cloud-based). |
| Feedback to Vendor | If you provide feedback or suggestions to OpenAI, they can use it freely (you grant them a license). | Be mindful when suggesting improvements or sharing ideas; you likely won't own any resulting enhancements. Consider this before sharing proprietary innovations. |
| Indemnity for IP Claims | OpenAI offers to defend/indemnify customers against third-party IP infringement claims arising from their technology (with some exceptions). | Check that the indemnity clause covers model outputs and training data issues. If needed, negotiate broader protection or a warranty of non-infringement, as discussed below. |
This table summarizes who owns what and how you can (or can't) use the AI's output. It's a handy checklist of clauses to double-check in any OpenAI enterprise agreement or order form you sign.
Third-Party IP and Derivative Content Concerns
Owning the output of the AI does not eliminate intellectual property risks. A common concern is: what if the AI's output inadvertently includes someone else's copyrighted or proprietary material?
For example, if you ask the model to generate a piece of code or a song lyric, is there a chance it might reproduce passages from its training data? And if so, who is liable?
OpenAI's stance (and the emerging legal consensus) is that most AI-generated content is highly transformative and not a verbatim copy of training data, and thus not automatically a derivative work of that data.
The model doesn't retrieve exact text unless prompted in very specific ways; it generates new text based on patterns.
However, exceptions exist. There have been instances where code or literature from training data was reproduced quite closely.
If an AI output happens to match copyrighted text (say, it spits out a famous poem line or a snippet of licensed code), your organization could face a third-party claim if you use that output commercially.
Here's how to navigate these issues:
- Understand your responsibilities. OpenAI's terms typically place the onus on the user to ensure their use of the output doesn't violate any laws or rights. In practice, this means your company should implement processes to review AI outputs, especially those used externally, for potential IP conflicts. For example, if generating text that will be published, run it through a plagiarism checker or have legal review anything that feels too "real" (e.g., a detailed news excerpt). For code outputs, consider using open-source compliance tools to see if a generated function matches known licensed code (see the sketch at the end of this section). Owning the output doesn't shield you if that output unknowingly contains someone else's protected material.
- Leverage indemnities and warranties. The enterprise agreement's indemnification clause (covered in the next section) is a safety net: OpenAI usually agrees to defend you if a third party claims that OpenAI's technology or output infringed their IP. However, this protection can have carve-outs (for instance, if the issue arose from your specific input or misuse). It's wise to negotiate clarity here. At a minimum, ask OpenAI to affirm that the service isn't knowingly providing plagiarized or infringing content. They may not promise perfection (no AI can), but even a warranty that "to the best of OpenAI's knowledge, the service does not intentionally output material known to be third-party copyrighted text beyond de minimis amounts" can be useful reassurance.
- Monitor high-risk areas. Certain outputs carry higher IP risk: generated song lyrics, lengthy code, or verbatim articles are more likely to match known works. If your use case involves these (such as generating software code), be especially careful. Some enterprises put an internal AI usage policy in place, for example, "AI-generated code must be reviewed for open-source licenses before use in production," or "AI-written text for marketing must be edited to ensure originality." These internal safeguards complement the contract protections.
- Keep an eye on evolving law. The legal landscape around AI and copyright is still developing (with ongoing lawsuits claiming that AI outputs might infringe copyrights if too similar to training data). Courts so far have been skeptical of broad "the AI output is automatically a derivative work" claims. Still, as an enterprise, staying informed is key. If new rulings or regulations emerge, you might need to adjust your practices or even update contract terms at renewal. OpenAI itself might update its policies as these issues evolve, so your agreement should allow you some flexibility if a change in law significantly impacts your risk.
In summary, treat AI outputs with the same due diligence you would any third-party content introduced into your workflow.
You hold the legal title to the content, but you also want to ensure you have the freedom to use it without interference.
That means being proactive about screening for IP conflicts and securing contractual promises that back you up if something slips through.
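One lightweight way to operationalize that screening is to compare generated output against a corpus of material you already know is third-party or licensed before it is released. The sketch below is a minimal, assumption-laden example using only the Python standard library; the corpus contents and threshold are invented, and real compliance tooling (plagiarism checkers, open-source license scanners) is far more thorough.

```python
from difflib import SequenceMatcher

# Hypothetical corpus of text you know to be third-party or licensed material.
known_works = {
    "famous_poem.txt": "Two roads diverged in a yellow wood...",
    "licensed_snippet.py": "def quicksort(arr): ...",
}

def flag_close_matches(ai_output: str, threshold: float = 0.8) -> list[str]:
    """Return the names of known works that the AI output closely resembles."""
    flagged = []
    for name, text in known_works.items():
        similarity = SequenceMatcher(None, ai_output.lower(), text.lower()).ratio()
        if similarity >= threshold:
            flagged.append(name)
    return flagged

# Anything flagged here should go to legal or open-source compliance review
# before the output is used externally.
print(flag_close_matches("Two roads diverged in a yellow wood, and sorry..."))
```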
Indemnification and Liability: Covering IP Risks
Even with careful use, enterprises want assurance that if something goes wrong on the IP front, they won't be left holding the bag alone. Indemnification is the primary contract mechanism in this case.
In an OpenAI enterprise agreement, look for an IP indemnity clause where OpenAI commits to defend and compensate your company if a third party sues, claiming the AI service or its outputs infringed their intellectual property rights.
OpenAI's standard business terms do offer an IP indemnity, which is a strong positive for customers.
Typically, it says OpenAI will indemnify you against claims that the OpenAI services (including the model outputs) violate a third party's IP rights, with certain exceptions.
Those exceptions commonly include instances where you used the service in an unauthorized manner or where the claim is due to content you provided.
For example, if you input someone else's proprietary text and then there's a claim, that's on you, not on OpenAI. However, if a claim arises solely from how the model was trained or an output it generated independently, OpenAI should cover it.
Make sure the clause explicitly covers both the model and its training data, essentially anything under OpenAI's control.
A few tips around indemnities and liability in your negotiation:
- Push for uncapped IP indemnity if possible. Vendors often cap their liability but carve out indemnification obligations from those caps (since a serious IP lawsuit could far exceed a typical cap). Confirm that any indemnity OpenAI provides is either uncapped or has a cap high enough, and separate from general liability, to truly protect you. This aligns OpenAI's incentives to stand behind their product's safety. OpenAI's contracts have been trending toward excluding IP indemnity from the overall liability cap; double-check this in your draft and negotiate it clearly if not.
- Address remediation steps. Ask what OpenAI would do if an IP issue is identified. A good indemnification clause might say that OpenAI can choose to modify the service or output to avoid infringement, obtain a license for you to continue using it, or, as a last resort, terminate the service with a refund. Knowing these options is helpful. If, for instance, a news organization claims that your AI-generated report infringes on their article, indemnity means that OpenAI handles the legal defense and any resulting settlement. They might also tweak the model to avoid repeating that mistake. Ensure your team promptly notifies OpenAI if any IP claim arises; timely notice is typically a condition for indemnity.
- Consider additional warranties. Indemnity covers third-party lawsuits, but what about your own losses if something goes awry? Vendors rarely indemnify for things like defamation or inaccurate content, but you might negotiate a simple warranty that the service, as provided, isn't knowingly outputting illegal content or infringing material. OpenAI will be cautious here, but even a mild warranty coupled with your testing protocols is better than nothing. It sets an expectation of quality. Also, check for a warranty that the outputs will be original to a degree (acknowledging the nature of AI). Most likely, OpenAI will stick to "service as is" except for the indemnity, which makes your due diligence (and possibly insurance) important.
- Your indemnity to OpenAI. Expect that the agreement will also require you to indemnify OpenAI for certain things, typically if your use of the service causes OpenAI to be sued (for example, if you provide illegal data or use the AI to libel someone and they sue OpenAI). This is standard. Just ensure it's narrowly scoped: you indemnify them for misuse or for claims arising from the content you provided, not for the normal operation of the AI on clean inputs. In negotiation, confirm that your indemnity doesn't cover "ordinary risks" that should be on OpenAI. Each party should cover the risks under their control: OpenAI covers the model and its outputs; you cover how you choose to use those outputs.
By securing a solid indemnification clause and understanding the liability structure, you create a safety net. It means that in the worst-case scenario, say an IP lawsuit in a few years claiming your AI-generated content infringed something, you have contractual backup.
For many enterprise buyers, this clause is non-negotiable and may even be a board-level concern.
Don't hesitate to discuss it in depth with OpenAI's team; you may find that they have already built in robust protections to ease these worries, given how often the question of AI and IP comes up.
Negotiating OpenAI Agreements: Practical Considerations
When approaching an OpenAI enterprise deal, come prepared with a clear list of IP-related points to address.
OpenAI's standard agreement is a strong starting point, but every enterprise has unique needs.
Here are some practical insights and tactics for negotiation and contract management regarding IP:
- Get clarity on definitions. Ensure terms like "Input," "Output," and "Customer Content" are clearly defined in the contract. OpenAI defines them in favor of the customer (your data in, your data out), which is good. But double-check that any custom agreement or Order Form doesn't override those definitions in a problematic way. If your company uses multiple OpenAI services (e.g., the API and ChatGPT Enterprise), confirm that the ownership and usage terms apply uniformly across all of them.
- Address data residency or transfer if needed. While not strictly related to IP ownership, some enterprises in finance or government care about where their data and outputs are stored and processed. OpenAI's default is to process in the U.S. (and they have SOC 2 compliance, etc.). If your policies require data localization or special handling, be sure to include this in negotiations. This ties into IP in that you want to maintain control and confidentiality of your intellectual assets at all times.
- Verify end-of-contract handling. What happens if you terminate the contract? OpenAI's terms state that they will delete customer content after a specified period (typically 30 days) once the agreement ends. This is good from an IP protection standpoint; you don't want them retaining your proprietary prompts or outputs indefinitely. Make sure this deletion commitment is in place. Additionally, arrange to export any important outputs or data before termination so you don't lose work results. For example, if you fine-tuned a model or built a knowledge base of prompts, ensure you can retrieve those artifacts (outputs, not the model weights) for your records.
- Consider future audit and compliance needs. If your industry might require an audit trail of how IP was generated (e.g., showing that an AI wrote a section of code for regulatory reasons), plan for that. The contract could include a right to certain usage logs or a cooperation clause under which OpenAI provides information showing how outputs were generated (without revealing their secret sauce, of course). This is a niche concern, but it can arise in regulated sectors. Being proactive in the contract can save headaches later.
- Leverage benchmarks and alternatives. If you're evaluating OpenAI alongside other AI providers, compare their IP stances. OpenAI's "you own the output" stance is relatively generous. Some competitors might have similar terms, but always check the fine print. Use this comparison in negotiation: if another vendor offers a stronger warranty or more flexibility in usage, you can ask OpenAI to match it. Conversely, if OpenAI's terms are better, that's a selling point to justify choosing them, which procurement can take note of. The goal is to ensure your enterprise isn't getting a worse deal than the industry standard on IP rights.
- Keep lawyers and tech folks in sync. Negotiating AI contracts is multidisciplinary. Legal professionals should understand the technology's nuances (such as what fine-tuning involves), and IT professionals should understand the legal terms (such as why they cannot simply export a model). Bring your AI experts, security, and legal teams to the same table with OpenAI's team. Often, questions about IP can be resolved with a simple explanation or a minor contract tweak. For example, if you need an exception to use outputs in a certain way, OpenAI might agree as long as it doesn't amount to broadly redistributing their model output competitively.
- Plan for ongoing governance. Once the contract is signed, managing IP doesn't stop. Establish an internal governance process for the use of AI. Track what types of content you are creating with OpenAI. Periodically review whether the outputs are meeting your expectations for originality and quality. If any issues arise (such as an output that appears too similar to copyrighted text), document them and notify OpenAI so they can address the issue. This not only protects you but gives OpenAI feedback to improve their models or filters, a win-win that doesn't require giving up your IP.
By being an active participant in the contracting process and the subsequent usage of the AI, you build a trusted partnership with OpenAI.
They get a customer who uses the technology responsibly, and you get a vendor who respects your intellectual property boundaries. This balance is exactly what a well-negotiated OpenAI enterprise agreement should achieve.
Recommendations
- Explicitly confirm IP ownership in writing: Ensure your contract clearly states that your organization retains ownership of all inputs and owns all outputs. This should mirror OpenAI's standard terms; if anything is unclear, get it clarified or added in an addendum. A quick clause reaffirming "Customer owns the content it provides and all results produced" removes any doubt.
- Secure broad IP indemnification: Don't proceed without an indemnity from OpenAI for intellectual property claims. Negotiate it to cover any third-party claims arising from the AI's operation or outputs (aside from your misuse). Verify that this indemnity is uncapped or has a high cap that is separate from other liabilities. It's your safety net, so make it strong.
- Include a non-training and confidentiality clause: Make sure the agreement (or a supporting DPA) states that OpenAI will not use your data or outputs to improve their models or share them with others. OpenAI's enterprise policy already does this, but having it in the contract is vital. This protects your proprietary information and competitive advantage.
- Clarify acceptable use of outputs: If you intend to use AI outputs in training internal tools or other projects, discuss it upfront. Document any permitted uses that might otherwise fall under the "don't use outputs to compete" restriction. For example, get written permission if you plan to use ChatGPT-generated text as part of a dataset to train a smaller internal model. Nail down these details to avoid a breach of contract later.
- Ask for quality assurances (within reason): While no AI company will guarantee perfection, you can request a warranty that the service isn't intentionally providing known copyrighted material. Even a modest assurance or a commitment to assist if problematic output appears is worthwhile. This could be as simple as OpenAI agreeing to cooperate to remediate any output that is alleged to infringe someone's IP.
- Plan for IP review processes internally: As part of managing the contract, set up internal steps for reviewing outputs that will be widely used (especially external content). For instance, require a legal or editorial review for AI-generated text before publishing, or code review for AI-written code. This isn't a contractual term, but rather a recommendation to operationalize your IP risk management. It will complement whatever the contract promises by catching issues early on.
- Use contract renewal to update terms: The AI field is evolving. At renewal time, revisit the IP clauses. If new industry standards or concerns have emerged (such as new laws on AI data usage), use the renewal as an opportunity to update the agreement. OpenAI might also improve its terms over time. Keep the dialogue open to ensure your contract remains aligned with best practices.
- Educate your users and stakeholders: Make sure your team knows the dos and don'ts from the contract. If the agreement says "don't input sensitive personal data" or "don't use outputs to create competing models," translate that into clear internal guidelines. Empower your end users (developers, analysts, etc.) with knowledge of what they can safely do with the AI and its outputs. This prevents inadvertent breaches and protects IP proactively.
- Keep records of AI-generated content: It can be useful to log which content was generated by AI, and when and by whom, as sketched below. This is more of a governance tip. In case of any future IP dispute or question, you have an audit trail. Some enterprises tag AI outputs or use metadata to mark them. This supports compliance and IP management by keeping the origin of content transparent.
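A simple way to keep such records is to wrap your API calls so every output is logged with a timestamp, the requesting user, and the prompt that produced it. The sketch below assumes the `openai` Python SDK and an append-only JSONL file; the function name, field names, and log location are illustrative, not a prescribed format.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
AUDIT_LOG = Path("ai_output_audit.jsonl")  # hypothetical log location

def generate_with_audit(prompt: str, user: str, model: str = "gpt-4o") -> str:
    """Call the model and append a record of who generated what, and when."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    output = response.choices[0].message.content

    # Append-only audit trail: useful if the origin of content is ever questioned.
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "model": model,
        "prompt": prompt,
        "output": output,
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as log:
        log.write(json.dumps(record) + "\n")
    return output
```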
Checklist: 5 Actions to Take
- Review the OpenAI Agreement's IP Clauses: Obtain the latest OpenAI enterprise terms or your draft contract. Highlight sections on ownership, usage rights, confidentiality, and indemnification. Make sure you fully understand these clauses internally (legal and procurement should interpret how they protect your interests).
- Map Your Use Cases to the Contract: List how your enterprise intends to use OpenAI's services (e.g., generating internal reports, customer-facing content, fine-tuning a model on your data, etc.). For each use case, check the contract for alignment. Are you covered to do that? Identify any gaps or restrictions (for example, if you plan to use outputs in a certain way, ensure it's allowed).
- Identify Negotiation Priorities: Based on the review, pinpoint what needs negotiating. Common ones include adding or strengthening an IP indemnity clause, confirming a commitment to no training on your data, adjusting any usage restrictions that conflict with your goals, and ensuring that data handling meets your requirements. Prepare specific language or requests for these points.
- Engage OpenAI (or Reseller) with Your Terms: Initiate the negotiation by conveying your needed changes or confirmations. For each concern, propose a solution: e.g., "We'd like to add that we own outputs and OpenAI assigns rights to us (as per standard terms), just to be explicit," or "We need an indemnification clause that covers any output-based IP claims." Document their responses. Most likely, OpenAI will accommodate reasonable requests that align with their standard policy. Where they push back, understand why, and see if an alternative protection can be put in place.
- Finalize and Educate: Once the contract reflects a fair balance (your inputs/outputs are safe, your risks are mitigated, and OpenAI's interests are protected as well), finalize the agreement. Then educate your team about the key points. For instance, brief your developers that "we can use the content freely, but remember we agreed not to try to reverse-engineer the model or use its answers to build our own GPT." Additionally, establish a point of contact (perhaps someone in legal or IT governance) for any questions or concerns that arise while using OpenAI. With the contract signed and the team informed, you are set to deploy OpenAI's tech with confidence that your IP is safeguarded.
FAQ
Q1: Do we (the customer) own all outputs from OpenAI's models, even code or creative content?
A: Yes. Under OpenAI's enterprise agreements, your organization is considered the owner of all AI-generated output that you receive, to the extent allowed by law. This includes code, text, images: any content the model produces for you. OpenAI assigns any of its rights in that content over to you. In practical terms, you can treat the output as you would material created by an employee or contractor. Remember, though, that owning it doesn't automatically clear it of external IP claims: you should still ensure it's safe to use (no confidential or copyrighted material from others hiding in it).
Q2: Can OpenAI use the prompts or data we input into the system for its own purposes?
A: Not without your permission in the enterprise context. OpenAI's enterprise terms state that they will only use your inputs (and outputs) to deliver the service to you, including tasks such as processing the prompt, generating the response, and ensuring compliance with laws and safety requirements. They explicitly will not use your business data or prompts to train their public models or improve their services unless you opt in to such a program. This is a key differentiator of the enterprise offering (for consumer users, data might be used to improve the model by default, but not so for businesses). Your data remains confidential and is not shared with others for their benefit.
Q3: If the AI output includes something copyrighted (like a paragraph from a book), do we get in trouble for using it?
A: You could be at risk if you use it verbatim without permission. Owning the output means OpenAI won't claim it, but it doesn't grant you a license to someone else's content that might be embedded in that output. If an AI accidentally produces a substantial excerpt from a copyrighted work, the original author or publisher still has rights over that excerpt. In such a case, your company should treat it like any third-party content: either don't use it, or secure permission/licensing if you need to use it. That said, outright copying is rare with text models unless prompted specifically. Many enterprises mitigate this by filtering or checking outputs. Also, your contract's indemnification clause with OpenAI is there to cover you if a third party claims the model's output infringes their IP, so OpenAI would step in to handle the legal side. But it's best to catch and avoid using any copied output in the first place.
Q4: We plan to fine-tune GPT-4 on our proprietary data. Do we then own the fine-tuned model?
A: You will have exclusive use of that fine-tuned model, but not ownership in the traditional sense. The fine-tuned model is built on OpenAI's technology with your data adjustments. OpenAI will host and run that model for you, and they contractually commit that no one else can use your fine-tuned model and that they won't incorporate your fine-tuning data into the base product. In effect, it's your custom AI service. However, OpenAI still retains ownership of the underlying AI model and platform. Think of it like this: you own your data and the outputs, and you "own" the configuration that is the fine-tuned model in the sense that it's for your eyes only. However, you can't take that model's code and give it to another vendor or run it on your own servers (unless OpenAI agrees to this, which they generally don't for proprietary models). If model ownership is a sticking point, discuss it with OpenAI; they may offer solutions like escrow or special terms. Typically, though, the arrangement is exclusive use rather than a transfer of ownership.
Q5: Are we permitted to integrate OpenAI's outputs into our products and services for commercial purposes? Are there any catches we should be aware of?
A: Yes, you are allowed; that is a primary use of the service. If you use OpenAI's model to generate content (text, images, etc.), you can include those outputs in your commercial products, services, or internal operations without needing an additional license from OpenAI. For example, if ChatGPT helps generate a FAQ section for your app or code for a software feature, you can deploy it. The catches to keep in mind: (1) You cannot present OpenAI's output as if OpenAI endorses your product (avoid using their name or branding in a way that implies partnership unless you have one). (2) You shouldn't expose the raw AI model interface to your end users unless that's part of your agreement (different from embedding the content, which is fine). (3) Ensure the content aligns with legal and ethical norms; you wouldn't want to publish something the AI said that might be problematic or unvetted. Internally, maintain attribution where needed: some companies mark content as AI-generated in metadata to track it. But in terms of IP, as long as you've vetted the output, you are free to monetize and use it as you see fit. This ability to commercialize outputs is a big reason enterprises opt for the paid service, and OpenAI's contract supports it.