Enterprise AI deployments running on Microsoft Azure OpenAI or Azure AI Foundry carry a set of contractual obligations that are fundamentally different from traditional software agreements. Legal and procurement teams that approach these agreements with standard cloud service templates are leaving significant risk unaddressed. The privacy protections Microsoft provides by default are genuinely strong for a hyperscaler, but "strong by default" is not the same as "sufficient for your regulatory environment."

This guide identifies the contractual terms that matter most for enterprise buyers deploying AI workloads on Microsoft infrastructure, explains where Microsoft's standard terms fall short, and gives you the specific language your legal team should be negotiating before signing any AI-enabled Microsoft Enterprise Agreement or Azure addendum.

What Microsoft's Default Terms Actually Say

Microsoft governs Azure AI services through two primary documents: the Microsoft Product Terms (formerly the Online Services Terms, or OST) and the Microsoft Products and Services Data Protection Addendum (DPA). For Azure OpenAI Service and Azure AI Foundry, the DPA is particularly significant because it defines Microsoft's obligations as a data processor under GDPR, CCPA, and equivalent frameworks.

The key default commitments under the standard DPA are:

- Microsoft will process Customer Data only to provide the contracted service and for purposes the customer authorises.
- Prompts and completions submitted to Azure OpenAI are not used to train any Microsoft models.
- Customer data submitted through Azure services is not shared with OpenAI.
- Microsoft acts as a data processor with appropriate technical and organisational security measures.

These are meaningful commitments that distinguish Azure OpenAI from consumer AI products. However, the default terms were written for general cloud workloads and contain ambiguities that become material when applied to sensitive AI deployments. The three areas that require the most attention are data residency, fine-tuning data handling, and model audit rights.

Data Residency: The Gap Between Promise and Reality

Microsoft's standard Azure terms provide data residency commitments at the infrastructure level: your Azure resources run in the region you select, and primary data is stored in that region. For most Azure services, this is sufficient. For AI workloads, it is not.
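To see where the infrastructure-level commitment attaches, consider how an Azure OpenAI resource is created: the region is an explicit parameter on the resource itself. The sketch below uses the azure-mgmt-cognitiveservices Python SDK; the subscription ID, resource group, account name, and region are illustrative placeholders, not recommendations.

```python
# A minimal sketch of explicit region selection at resource creation.
# All names and the region are illustrative placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.cognitiveservices import CognitiveServicesManagementClient
from azure.mgmt.cognitiveservices.models import Account, AccountProperties, Sku

client = CognitiveServicesManagementClient(
    DefaultAzureCredential(), "<subscription-id>"
)

# The location parameter is where the standard residency commitment applies:
# the resource and its primary data live in this region.
poller = client.accounts.begin_create(
    resource_group_name="rg-ai-eu",
    account_name="contoso-aoai",
    account=Account(
        kind="OpenAI",
        sku=Sku(name="S0"),
        location="swedencentral",
        properties=AccountProperties(),
    ),
)
account = poller.result()
print(account.name, account.location)
```

This pins where the resource runs. As the next paragraphs explain, it does not by itself pin where fine-tuning jobs are processed.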

In March 2025, Microsoft updated its Azure OpenAI terms to disclose that fine-tuning operations might involve "temporary data relocation" outside your selected geography. When you fine-tune a model using sensitive corporate data, that training data may be processed in a centralised location even if your Azure resource is in the EU. For organisations subject to GDPR data localisation requirements, this is not a disclosure you want to encounter after signing.

The fix is to require explicit confirmation in writing that all processing, including fine-tuning, occurs within your specified geography. Microsoft can and does offer this as a contractual commitment for qualifying enterprise accounts. It is not automatic. You have to ask for it, specifically, in your data processing schedule. Redress has reviewed dozens of enterprise AI contracts where this gap went unaddressed because the team assumed the standard data residency commitment covered AI processing end-to-end. It does not.
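Alongside the contractual language, a simple detective control helps: enumerate the AI resources in a subscription and flag anything outside the regions your data processing schedule names. A sketch, again using the management SDK; the approved-region set is an assumption standing in for your own schedule.

```python
# A sketch of a recurring residency check. The approved-region set is an
# assumption standing in for whatever your data processing schedule specifies.
from azure.identity import DefaultAzureCredential
from azure.mgmt.cognitiveservices import CognitiveServicesManagementClient

APPROVED_REGIONS = {"swedencentral", "francecentral", "germanywestcentral"}

client = CognitiveServicesManagementClient(DefaultAzureCredential(), "<subscription-id>")

for acct in client.accounts.list():
    if acct.kind in ("OpenAI", "AIServices") and acct.location not in APPROVED_REGIONS:
        print(f"Residency flag: {acct.name} runs in {acct.location}")
```

Note what this check cannot see: it verifies where your resources sit, not where Microsoft processes fine-tuning jobs internally, which is precisely why the written commitment matters.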

No-Training Clauses: Stronger Language Than the Default

Microsoft's default commitment that prompts and completions are not used to train models applies to its own first-party models. The picture becomes more complicated when you deploy third-party models through Azure AI Foundry's model catalogue. Cohere, Mistral, Meta, and other model providers have their own data handling commitments that operate alongside Microsoft's DPA.

When a third-party model processes your enterprise data via Azure AI Foundry, you are simultaneously covered by Microsoft's DPA for the infrastructure layer and by the model provider's terms for the inference layer. These two sets of terms do not always align. In some cases, model providers' standard terms include broad permissions for using customer interaction data to improve their models, with opt-out provisions that must be explicitly invoked.

The language your contract should include is a blanket prohibition on the use of any Customer Data submitted to any model accessible through Azure AI Foundry to train, fine-tune, or improve any model, whether operated by Microsoft or by a third party. Microsoft can facilitate this through addendum language that extends the no-training commitment downstream to model providers available through the marketplace. Without this language, you are relying on each model provider's individual opt-out process, which requires active management across an estate that typically includes multiple models.
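Managing those opt-outs starts with knowing which models are actually deployed across your estate. Below is a sketch that inventories deployments and their publishers, assuming the management SDK's deployment objects expose the model's format, name, and version; the format field identifies the publisher for catalogue models, but verify the field semantics against your SDK version.

```python
# A sketch of a model-estate inventory: list each deployment and its publisher
# so each one can be mapped to the provider terms that apply to it.
from azure.identity import DefaultAzureCredential
from azure.mgmt.cognitiveservices import CognitiveServicesManagementClient

client = CognitiveServicesManagementClient(DefaultAzureCredential(), "<subscription-id>")

for acct in client.accounts.list():
    if acct.kind not in ("OpenAI", "AIServices"):
        continue
    rg = acct.id.split("/")[4]  # resource group segment of the ARM resource ID
    for dep in client.deployments.list(rg, acct.name):
        model = dep.properties.model
        print(f"{acct.name}/{dep.name}: {model.format} {model.name} {model.version}")
```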

How we helped a global bank renegotiate its Microsoft AI contract

The EA was restructured with explicit AI data governance terms and a 25 percent spend reduction. Read the case study.

Model Audit Rights and Explainability Obligations

Regulated industries, including financial services, healthcare, and critical infrastructure, increasingly require that AI decisions affecting customers or operations be explainable and auditable. Microsoft's standard terms do not include audit rights over model behaviour, prompt processing, or output generation. This creates a compliance gap for organisations subject to the EU AI Act, which classifies many enterprise AI applications as high-risk systems requiring documented governance and human oversight.

What your contract should include is a provision for Microsoft to provide, on request, documentation of the model version deployed to your resource, any changes to model behaviour or safety filters applied to your deployment, and confirmation of compliance with applicable AI regulations in the jurisdictions where you operate. For high-risk AI systems under the EU AI Act, you may also need commitments around transparency documentation that Microsoft produces for its own compliance purposes.
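A version-documentation clause is easier to enforce when your own telemetry records what the service reports back. Here is a sketch using the openai Python SDK against an Azure endpoint; the endpoint, key handling, deployment name, and API version are illustrative assumptions.

```python
# A sketch: capture the model identifier and request ID returned with each
# completion so outputs can later be tied to a model version during an audit.
import datetime
import json

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://contoso-aoai.openai.azure.com",  # illustrative
    api_key="<key>",
    api_version="2024-06-01",  # assumption; match the version you deploy against
)

resp = client.chat.completions.create(
    model="gpt-4o-prod",  # your deployment name (illustrative)
    messages=[{"role": "user", "content": "Summarise the indemnity clause."}],
)

audit_record = {
    "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    "request_id": resp.id,
    "model": resp.model,  # model identifier as reported by the service
}
print(json.dumps(audit_record))
```

Pairing this record with Microsoft's contractual change notifications gives you two independent views of the same question: which model produced which output, and when.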

Microsoft's enterprise account teams are increasingly familiar with these requests, particularly from financial services and healthcare accounts. The willingness to include audit-related language has improved significantly since the EU AI Act came into force. However, the standard enterprise agreement does not include it, so you must request it explicitly during contract negotiation. Our broader Microsoft security licensing guide covers the compliance licensing landscape in more detail.

Security Incident Response and AI-Specific Notifications

Microsoft's standard DPA includes GDPR-aligned breach notification obligations: Microsoft commits to notify customers of a security incident without undue delay, which is what supports the customer's own 72-hour duty to notify regulators. For AI systems, the definition of a "security incident" needs to expand beyond traditional data breach scenarios. Prompt injection attacks, model poisoning, and adversarial inputs that cause unexpected model behaviour are AI-specific security events that traditional breach notification frameworks do not contemplate.

Enterprise buyers in regulated sectors should negotiate language that requires notification of any security incident affecting the model deployment or the data processing environment, including events that affect model integrity or output quality in ways that could have downstream effects on regulated activities. This is particularly important for financial services organisations using AI for credit decisions or fraud detection, where corrupted model behaviour could trigger regulatory obligations independent of a traditional data breach.
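One practical hook for this already exists in the service: Azure annotates responses with content-filter results, including prompt-attack detections on recent API versions. The sketch below inspects those annotations on a chat-completion response (like the resp object in the earlier audit sketch) and escalates detections. The field names follow Azure's documented annotations but vary by API version and filter configuration, so treat them as assumptions to verify.

```python
def check_model_integrity(resp) -> None:
    """Escalate prompt-attack detections from Azure content-filter annotations.

    A sketch only: the prompt_filter_results / content_filter_results /
    jailbreak field names vary by API version and filter configuration,
    so verify them against your own deployment before relying on this.
    """
    annotations = getattr(resp, "prompt_filter_results", None) or []
    for entry in annotations:
        results = entry.get("content_filter_results", {})
        if results.get("jailbreak", {}).get("detected"):
            # Route into the same process the negotiated language covers:
            # model-integrity events, not only conventional data breaches.
            print("model-integrity event:", results)
```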

Intellectual Property and Output Ownership

A question that arises in virtually every enterprise AI deployment is who owns the outputs generated by AI models. Microsoft's standard terms state that Customer Data, including outputs generated from Customer Data, belongs to the customer. This is the correct default position, but the language matters when outputs become the basis for commercial products, patentable inventions, or customer-facing services.

Three areas require specific attention. First, fine-tuned models created using your proprietary training data are a form of intellectual property, and your contract should confirm that the fine-tuned model checkpoint, or at minimum the differentials applied to the base model, are customer property that can be exported and deployed outside Azure. Microsoft supports model export in most cases, but the contractual right to export a fine-tuned model is not stated in the standard terms. Asserting this right also presupposes knowing exactly which fine-tuned artifacts exist; see the inventory sketch after the third point below.

Second, if your organisation creates derivative works based on AI-generated outputs, you need clarity on whether any Microsoft licence restrictions flow through to those derivatives. The current Microsoft terms do not impose downstream restrictions on customer outputs, but as AI copyright law evolves, having explicit contractual language protecting your use of AI outputs is valuable defensive positioning.

Third, for organisations deploying AI in customer-facing products, your Microsoft agreement should confirm that Microsoft waives any intellectual property claim over outputs generated through your licensed deployment. This is standard practice but should be stated explicitly rather than assumed from general cloud terms. Connecting this to your Azure OpenAI commercial agreement ensures alignment across all relevant contract documents.
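As flagged under the first point, a fine-tuned-asset inventory is the starting point for any ownership language. Here is a sketch listing fine-tuning jobs and the resulting model identifiers through the openai SDK against an Azure resource; the endpoint, key handling, and API version are illustrative assumptions.

```python
# A sketch of a fine-tuned-asset inventory: enumerate fine-tuning jobs and
# the resulting model identifiers that your IP language needs to cover.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://contoso-aoai.openai.azure.com",  # illustrative
    api_key="<key>",
    api_version="2024-06-01",  # assumption
)

for job in client.fine_tuning.jobs.list():
    print(job.id, job.status, job.fine_tuned_model)
```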

The Contract Checklist Your Legal Team Needs

Before signing any Microsoft agreement that includes Azure AI services, legal and procurement teams should confirm seven specific commitments are present in writing:

1. Data residency confirmation that explicitly covers fine-tuning and all AI processing operations, not just primary data storage.
2. A blanket no-training clause that extends to third-party model providers accessible through Azure AI Foundry.
3. Model audit rights that provide version documentation and change notification.
4. AI-specific incident response obligations covering model integrity events as well as traditional data breaches.
5. IP ownership confirmation for fine-tuned models and AI-generated outputs.
6. Explicit data processing purpose limitations that prevent Microsoft from using your data for product improvement, model evaluation, or any purpose beyond service delivery.
7. Jurisdiction-specific compliance attestations if you operate in regulated sectors under the EU AI Act, UK Financial Conduct Authority AI guidance, or US sector-specific AI requirements.

Several of these terms are available but require negotiation. Microsoft's enterprise account teams have increasing authority to include AI governance language in Enterprise Agreement schedules, particularly for accounts with annual Azure commitments above $1M. The terms are not volunteered, so knowing what to ask for is the essential starting point. Our Microsoft advisory practice has negotiated AI governance terms across more than 150 enterprise agreements and can tell you quickly which terms Microsoft will accept and where you need to push.

Download: Microsoft EA Renewal Playbook

Covers AI governance contract terms, data privacy provisions, and the full EA commercial negotiation framework for 2026.

The Regulatory Horizon: What Is Coming Next

Enterprise AI governance is not a static requirement. The EU AI Act is phasing in obligations through 2026 and 2027 that will require documented AI governance frameworks, human oversight mechanisms, and transparency disclosures for high-risk AI applications. UK regulators are developing sector-specific AI guidance for financial services that will create new contractual obligations for Microsoft customers using AI in regulated activities. US federal contractors face emerging AI governance requirements under executive orders that apply to cloud-hosted AI systems.

Enterprises that negotiate flexible AI governance language now, rather than waiting until specific regulatory requirements crystallise, are in a significantly better position. The right framework is one that commits Microsoft to cooperating with your compliance obligations as they evolve, rather than locking in terms that reflect only today's regulatory environment. This forward-looking approach is the difference between an AI contract that serves you for three years and one that needs emergency renegotiation six months after signing. Contact us at redresscompliance.com/contact.html to discuss how this applies to your specific Microsoft agreement.

Need help reviewing your Microsoft AI contract terms?

Describe your AI workloads and regulatory environment. We will identify gaps in your current terms within 48 hours.