The Processing Gap Nobody Talks About

Enterprise data governance teams have spent years locking down data at rest: encrypted storage, sovereign cloud regions, carefully negotiated data processing agreements. Then they deploy a generative AI tool and undo it all in a single API call.

The problem is the distinction between data at rest and data in processing. When a user submits a prompt to an AI model, their input — which may contain customer PII, clinical information, financial records, or trade secrets — is processed by the vendor's inference infrastructure. In many default commercial contracts, that infrastructure can be located anywhere the vendor operates. For most AI vendors in 2025, that means primarily US-based regions, regardless of where your organisation is headquartered or where your data is stored.

GDPR Article 44 prohibits transfers of personal data to third countries unless adequate safeguards are in place. HIPAA requires that covered entities and their business associates execute a Business Associate Agreement before any protected health information is processed. DORA, which became enforceable across EU financial entities in January 2025, imposes strict ICT risk concentration requirements that include AI inference infrastructure. Each of these regimes creates liability for data that moves during processing — not just data that is stored.

The negotiation gap is stark: our review of enterprise AI contracts finds that 67% of organisations have no contractually enforceable data residency restriction covering inference. They assume their cloud provider's region settings are sufficient — but those settings govern storage, not model inference, which frequently occurs outside the selected region.

Need enforceable AI data residency provisions?

Our enterprise AI negotiation specialists have secured binding residency clauses with every major vendor — including OpenAI, Azure, AWS, and Google.
Talk to an Expert →

GDPR: What the Regulation Actually Requires for AI Processing

GDPR's international transfer restrictions apply whenever personal data is processed — not just stored — outside the European Economic Area. AI inference is processing. If a European employee submits a prompt containing customer data to a US-hosted model endpoint, that constitutes a transfer under Article 44, regardless of how your storage is configured.

The two primary legal mechanisms for lawful AI processing transfers are Standard Contractual Clauses (SCCs) and adequacy decisions. The SCCs updated in 2021 — supplemented by guidance in Q2 2025 addressing AI-specific processing scenarios — require detailed technical and organisational measures covering data minimisation during inference, model output handling, and subprocessor chains. The adequacy decision for the US remains in force under the EU-US Data Privacy Framework, but it covers only organisations on the DPF list and does not automatically extend to subprocessors used by your AI vendor.

In practice, this means you need your AI vendor to execute Module 2 or Module 3 SCCs (controller-to-processor or processor-to-processor, depending on your deployment model), confirm that all subprocessors handling EEA personal data are themselves covered by adequate transfer mechanisms, and commit to processing personal data only within the EEA or in adequacy-covered territories for your specific tenant. The last point is the hardest to achieve with most vendors in their standard contracts — and the most important to negotiate.

Practical GDPR Negotiation Provisions

The following provisions should appear in any AI vendor DPA governing EU personal data.

  • Inference region restriction: an explicit warranty that model inference — not just data storage — occurs within the EEA or in a jurisdiction covered by an adequacy decision.
  • Subprocessor transparency: a requirement that the vendor maintains and discloses an up-to-date list of subprocessors involved in inference, including hardware providers and model hosting infrastructure.
  • Prompt data minimisation: a contractual obligation that the vendor does not retain prompt content beyond the session duration required to deliver the response, unless the customer has explicitly enabled memory features.
  • Data subject rights facilitation: mechanisms for the vendor to support your Articles 15-21 obligations, including the ability to identify and delete user-associated data from logs and fine-tuning datasets.

Our broader AI data governance and enterprise agreement guide covers all four non-negotiable contract provisions that every enterprise buyer must secure — residency, IP indemnification, exit rights, and a compliant AI DPA — as a connected framework.

HIPAA: AI Vendors and Business Associate Agreements

HIPAA's Business Associate Agreement requirement is straightforward in principle: any organisation that creates, receives, maintains, or transmits Protected Health Information on behalf of a covered entity must execute a BAA. In AI deployments, if a prompt contains PHI — a patient name, diagnosis, treatment detail, or insurance identifier — the AI vendor is a business associate and a BAA is mandatory before processing begins.

The updated HIPAA Security Rule (effective Q1 2025) introduced enhanced requirements for BAAs covering AI and cloud-based processing, including mandatory encryption in transit and at rest, audit log retention for a minimum of six years, and specific provisions for AI model output handling where PHI is incorporated into responses. Standard commercial AI contracts from OpenAI, Anthropic's direct channel, and many third-party model providers do not include BAAs. Healthcare organisations deploying these tools without bilateral BAA execution are in violation of HIPAA regardless of the technical controls in place.

Which AI Vendors Sign HIPAA BAAs

The vendor landscape for HIPAA-compliant AI deployment is more constrained than most healthcare technology teams realise. Microsoft Azure OpenAI Service signs HIPAA BAAs as part of its standard enterprise agreement — the BAA covers the Azure infrastructure layer, and the specific models available in Azure OpenAI (GPT-4o and derivatives) inherit that coverage. AWS Bedrock similarly provides BAA coverage for all models available through the Bedrock managed service, including Anthropic's Claude models accessed via Bedrock. Google Vertex AI offers BAA coverage for healthcare customers under its Cloud Healthcare Agreement. Direct access to OpenAI's API does not include a BAA in the standard tier; enterprise customers with dedicated tenancy arrangements can negotiate a BAA, but it is not standard and requires significant minimum commit. Anthropic direct does not offer a standard BAA — healthcare deployments must route through cloud intermediaries such as AWS Bedrock or Google Vertex AI to obtain BAA protection.

For detailed commercial structures and negotiation benchmarks for each vendor, our enterprise AI licensing guide covering OpenAI, Anthropic, Google, and AWS provides spend tier benchmarks and the specific contractual provisions available at each level.

"The question is not whether your AI vendor is HIPAA compliant. The question is whether your specific deployment configuration — including your BAA, your inference region, and your prompt handling — is compliant. Many organisations that believe they have HIPAA coverage for AI do not."

In one engagement, a global healthcare organisation processing clinical trial data via a major AI vendor discovered that their "EU-region" configuration did not prevent inference routing to US-based infrastructure during peak load periods. Redress identified the gap during a contract review, negotiated a contractually binding inference-only region restriction, and secured a HIPAA BAA amendment covering the specific model tier in use. The exposure — potential HIPAA breach notification obligations covering millions of patient records — was resolved without regulatory escalation. The engagement fee was under 2% of the estimated regulatory exposure.

DORA: AI Residency Requirements for Financial Entities

The Digital Operational Resilience Act became enforceable across EU financial entities in January 2025. DORA Article 28 requires financial entities to include specific provisions in ICT contracts with critical and important third-party providers — and an AI model that handles financial data, customer analytics, or risk models likely qualifies as supporting an important function. DORA's key requirements for AI contracts include: full sub-outsourcing transparency, contractually guaranteed exit rights with assistance obligations, data portability provisions covering model outputs and fine-tuning datasets, and business continuity guarantees covering the AI inference infrastructure.

DORA introduces a 2% of annual global turnover penalty cap — equivalent to GDPR's administrative fine tier for less serious violations, but applied to ICT resilience failures including inadequate contract provisions. Financial services entities deploying AI tools under standard commercial contracts are exposed under multiple DORA articles simultaneously: Article 28 (contract provisions), Article 29 (exit strategies), and Article 30 (sub-outsourcing). The EBA, ESMA, and EIOPA are already conducting DORA supervisory assessments of AI vendor relationships as part of their 2026 examination programmes.

The FCA in the UK has adopted equivalent expectations under its operational resilience framework. Financial entities with dual UK-EU obligations need both DORA-compliant and FCA-compliant contract provisions — which are substantively similar but not identical, requiring careful drafting to satisfy both simultaneously.

FedRAMP: US Federal Sector AI Deployment

US federal agencies and their contractors deploying AI tools must use FedRAMP-authorised services for processing federal data. The FedRAMP landscape for AI services has evolved rapidly. Microsoft Azure OpenAI holds FedRAMP High authorisation as part of Azure Government, with additional IL-4 and IL-5 impact level coverage for defence and intelligence workloads. AWS Bedrock in GovCloud achieves FedRAMP High and supports IL-4 and IL-5 deployments. Google Vertex AI has FedRAMP Moderate authorisation for standard government use cases. Direct APIs from OpenAI and Anthropic do not hold FedRAMP authorisation and cannot be used in federal deployments without a specific agency ATO covering a self-hosted variant.

The FedRAMP Moderate and High baselines impose specific controls on AI inference infrastructure: data must not leave US-domestic regions, audit logs must be retained for three years, and the vendor must support the agency's incident response obligations. Contractors working on federal programmes should verify the specific FedRAMP package and boundary documentation for any AI service before deployment, as the authorisation boundary may not cover all model variants or all regional endpoints.

Read our OpenAI enterprise procurement and negotiation playbook for a complete treatment of how to structure a compliant federal or regulated-sector OpenAI deployment through appropriate intermediaries.

Vendor-by-Vendor Data Residency Options

Understanding what is actually achievable in each vendor's commercial model is essential before entering negotiations. Here is a practical summary of data residency provisions available across the major AI platforms as of early 2026.

Azure OpenAI Service

Microsoft's Azure OpenAI offers the most mature data residency architecture for enterprise buyers. The EU Data Boundary for Azure (formerly EU Data Zone) restricts inference processing to EU and EFTA regions — this is contractually backed and covers model inference, not just storage. Provisioned Throughput Units (PTUs) deployed in EU regions guarantee EU-only inference. For UK deployments, separate UK South and UK West regions are available. Data Residency for Azure OpenAI is included in enterprise agreements at no additional fee. Microsoft has also published a detailed GDPR processing record for Azure OpenAI that satisfies Article 30 documentation requirements, and the EU Data Boundary is backed by independent third-party audits. The comparison of Azure OpenAI versus direct OpenAI deployment covers the full commercial and compliance trade-offs in detail.
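Verifying that a deployment actually sits inside the boundary is worth automating. The sketch below encodes the rule described above — EU data-zone deployment types or EU regional deployments pass, global deployment types fail — as a minimal check. The region and SKU names are illustrative assumptions, not an authoritative list; real deployments should be inspected via the Azure management API against Microsoft's current boundary documentation.

```python
# Minimal sketch: does an Azure OpenAI deployment descriptor stay inside the
# EU Data Boundary? Region and SKU names below are illustrative assumptions.

EU_BOUNDARY_REGIONS = {"swedencentral", "francecentral", "germanywestcentral", "westeurope"}
EU_DATA_ZONE_SKUS = {"DataZoneStandard", "DataZoneProvisionedManaged"}
GLOBAL_SKUS = {"GlobalStandard", "GlobalProvisionedManaged"}

def inside_eu_boundary(deployment: dict) -> bool:
    """True only if both the deployment type and the resource region
    keep inference inside the EU boundary."""
    if deployment["sku"] in GLOBAL_SKUS:
        return False  # global deployment types may route inference anywhere
    if deployment["sku"] in EU_DATA_ZONE_SKUS:
        return True   # EU data-zone deployment types pin inference to EU regions
    # regional deployment types follow the resource's own region
    return deployment["region"] in EU_BOUNDARY_REGIONS

print(inside_eu_boundary({"region": "swedencentral", "sku": "GlobalStandard"}))  # False
```

A check like this belongs in deployment pipelines, so a configuration drift toward a global deployment type is caught before go-live rather than in an audit.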

AWS Bedrock

AWS Bedrock allows customers to select the AWS region where inference occurs. For EU organisations, eu-west-1 (Ireland), eu-central-1 (Frankfurt), and eu-west-3 (Paris) are available for most Bedrock-hosted models. The challenge is that cross-region inference — which Bedrock may route automatically when your selected region's capacity is constrained — needs to be explicitly disabled in your configuration and backed by a contractual guarantee in your AWS DPA. Our AWS contract negotiation specialists can help secure these guarantees. AWS's standard DPA includes SCCs and commits to processing only in customer-selected regions, but the cross-region inference routing provisions require specific attention during DPA negotiation. AWS Bedrock supports HIPAA BAAs, FedRAMP High in GovCloud, and EU data residency in a single commercial construct — making it one of the most flexible residency architectures available.
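Because Bedrock's cross-region routing is a configuration risk as much as a contract risk, a pre-flight guard can refuse any invocation whose region or model identifier could leave the committed boundary. The sketch below assumes the documented geography-prefix convention for cross-region inference profiles (e.g. "us.", "eu."); the region allowlist and model ID are illustrative and should be verified against your AWS DPA and the current Bedrock documentation.

```python
# Pre-flight guard: raise before a Bedrock call is made if the configured
# region or model identifier could route inference outside the committed
# EEA boundary. Region list, prefix rule, and model ID are assumptions to
# check against your own DPA and AWS's current documentation.

EEA_COMMITTED_REGIONS = {"eu-west-1", "eu-central-1", "eu-west-3"}
CROSS_REGION_PREFIXES = ("us.", "apac.", "global.")  # geography-prefixed inference profiles

def assert_residency_compliant(region: str, model_id: str) -> None:
    """Fail closed: reject the call rather than discover routing afterwards."""
    if region not in EEA_COMMITTED_REGIONS:
        raise ValueError(f"region {region!r} is outside the committed EEA boundary")
    if model_id.startswith(CROSS_REGION_PREFIXES):
        raise ValueError(f"profile {model_id!r} permits routing outside the EEA")

# hypothetical model ID; passes because the region is committed and the
# identifier carries no non-EU cross-region prefix
assert_residency_compliant("eu-central-1", "anthropic.claude-3-5-sonnet-20240620-v1:0")
```

Running the guard immediately before client construction makes the contractual boundary a hard runtime invariant rather than a convention.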

OpenAI Enterprise (Direct)

OpenAI's enterprise tier provides EU-based data processing through dedicated API infrastructure. As of Q1 2026, OpenAI offers a contractual commitment to process EU customer data in EU-located infrastructure for enterprise accounts with dedicated tenancy. However, this is not available on standard API pricing — it requires a minimum annual commit (typically 150+ seats or equivalent API spend) and explicit negotiation. OpenAI's DPA for enterprise customers includes Module 2 SCCs and a subprocessor list, but does not include a HIPAA BAA. For regulated sector deployments requiring both EU residency and HIPAA compliance, Azure OpenAI Service or AWS Bedrock are the more practical paths. Consult our enterprise guide to negotiating OpenAI contracts for the specific contractual language to request at each commitment tier.

Google Vertex AI

Google Vertex AI provides regional deployment options across EU (Belgium, Netherlands, Frankfurt, Warsaw), US, and Asia-Pacific. The Google Cloud Processing Addendum for GDPR includes Article 46 SCCs and commits to regional processing for Vertex AI workloads. Google Workspace customers accessing Gemini via the embedded Workspace channel have a different data processing structure — the Workspace DPA governs, not the Vertex AI DPA — and this distinction is important for organisations using both channels. Google's data residency guarantees for Vertex AI are contractually strong but require the customer to configure regional endpoints explicitly; the default API endpoint will route to the nearest available region globally.
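The explicit-endpoint requirement is easy to enforce in code. Vertex AI regional endpoints follow the documented {location}-aiplatform.googleapis.com pattern; the sketch below pins a request to a committed EU location and refuses anything else. The EU location set mirrors the regions named above but is an assumption to confirm against your Google Cloud DPA.

```python
# Sketch: build the Vertex AI regional endpoint explicitly instead of
# accepting the client library's global default. The EU location list is
# an illustrative assumption; confirm it against your DPA.

EU_LOCATIONS = {
    "europe-west1",     # Belgium
    "europe-west3",     # Frankfurt
    "europe-west4",     # Netherlands
    "europe-central2",  # Warsaw
}

def regional_endpoint(location: str) -> str:
    """Return the pinned API host for a committed EU location, or raise."""
    if location not in EU_LOCATIONS:
        raise ValueError(f"{location!r} is not on the committed EU location list")
    return f"{location}-aiplatform.googleapis.com"

# e.g. pass api_endpoint=regional_endpoint("europe-west3") in the client
# options when initialising the Vertex AI SDK, rather than the default host.
```

Building the hostname through a guard like this means a typo or an uncommitted location fails loudly at startup instead of silently routing to the nearest global region.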

Anthropic (Direct vs Via Cloud)

Anthropic does not currently operate its own EU-based inference infrastructure. Customers accessing Claude models directly via the Anthropic API are processed on US-based infrastructure. For EU GDPR compliance, Anthropic provides SCCs in its enterprise DPA, but inference occurs in the US — meaning SCCs plus a transfer impact assessment are required for personal data in prompts. For customers requiring EU-based inference, the only compliant path is to access Claude via AWS Bedrock (eu-west-1/eu-central-1 deployment) or via Azure, where Anthropic models are progressively being made available. Our Anthropic Claude enterprise licensing guide for 2026 covers pricing, DPA provisions, and the cloud intermediary comparison in full.

Audit Rights: The Forgotten Residency Provision

A data residency commitment in a contract is only as good as your ability to verify it. Audit rights provisions in AI contracts are frequently inadequate — vendors typically offer third-party audit reports (SOC 2 Type II, ISO 27001) in lieu of customer-specific audits, and these reports often do not address inference region restrictions specifically.

Under GDPR Article 28(3)(h), processors must make available all information necessary to demonstrate compliance and allow for audits. For AI vendors, this means your DPA should include: the right to receive updated subprocessor lists with 30 days' notice before any change; the right to receive copies of applicable third-party audit reports covering inference infrastructure; the right to make reasonable information requests about data processing locations; and an obligation on the vendor to notify you within 72 hours if it processes personal data outside the contractually committed region, even temporarily. DORA adds a requirement for the right to inspect ICT third-party provider premises — not just receive audit reports — for critical or important functions.

Most standard AI vendor contracts do not include adequate audit rights by default. They are negotiable, but vendors will push back on customer-direct access to infrastructure. The practical compromise is to negotiate for binding commitments to the content of third-party audit reports, a right to review those reports with legal counsel, and a vendor obligation to remediate any identified findings within defined timeframes.
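The customer side of the 72-hour notification obligation can be verified mechanically against whatever audit logs the vendor does supply. The sketch below scans log events for inference outside the committed regions and flags any breach the vendor has not notified within the window. The event schema (region field, ISO timestamps, a vendor_notified flag) is hypothetical — real audit exports vary by vendor — but the check itself is what the provision should be tested against.

```python
# Sketch: flag out-of-boundary inference events for which the vendor's
# 72-hour notification window has lapsed without a notification.
# The event schema here is a hypothetical illustration.

from datetime import datetime, timedelta, timezone

COMMITTED_REGIONS = {"eu-west-1", "eu-central-1"}
NOTIFICATION_WINDOW = timedelta(hours=72)

def overdue_notifications(events, now):
    """Return out-of-region events that are unnotified past the window."""
    overdue = []
    for event in events:
        if event["region"] in COMMITTED_REGIONS:
            continue  # processed inside the boundary; nothing to notify
        occurred = datetime.fromisoformat(event["timestamp"])
        if not event.get("vendor_notified") and now - occurred > NOTIFICATION_WINDOW:
            overdue.append(event)
    return overdue

now = datetime(2026, 3, 10, tzinfo=timezone.utc)
events = [
    {"region": "us-east-1", "timestamp": "2026-03-01T00:00:00+00:00"},  # overdue
    {"region": "eu-west-1", "timestamp": "2026-03-09T00:00:00+00:00"},  # in-boundary
]
print(len(overdue_notifications(events, now)))  # 1
```

A recurring job running this check turns a paper audit right into an operational control — and produces the evidence trail a supervisor will ask for.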

AI Contract Intelligence Newsletter

Monthly analysis on AI vendor contract changes, regulatory updates, and negotiation leverage for enterprise procurement teams. 4,200+ subscribers.

Sector-Specific Requirements Beyond GDPR, HIPAA, and DORA

Several other regulatory frameworks impose data residency or processing location requirements that intersect with enterprise AI deployments. In the SEC-regulated financial services sector, SEC Rule 17a-4 requires that electronic records be maintained in a non-rewriteable, non-erasable format with documented chain of custody — AI-generated outputs used in investment decisions must be captured and retained in compliant storage, which may require specific configurations with your AI vendor. In FDA-regulated life sciences, 21 CFR Part 11 requirements for electronic records and signatures apply to AI systems used in drug development or clinical decision support, requiring audit trails and validation documentation that standard AI vendor contracts do not provide.

The energy and utilities sector faces NERC CIP requirements for critical infrastructure protection, which restrict certain data categories to specific domestic processing environments. AI deployments supporting grid operations or critical infrastructure control systems must be assessed against NERC CIP-004 and CIP-011 requirements before vendor selection. These sector-specific requirements often create procurement constraints that eliminate certain AI vendor options entirely — a fact that should be established during the sourcing phase, not during contract negotiation.

Building a Residency-Compliant AI Contract: The Checklist

Based on our work across 500+ enterprise software and AI vendor negotiations, the following provisions should be present in any AI contract where data residency matters. These provisions sit alongside — and are inseparable from — the broader four-pillar AI contract governance framework covering DPA, IP indemnification, data residency, and exit rights.

  • Inference region restriction: Explicit warranty that model inference (not just storage) occurs within the contractually committed geographic boundary, with no exceptions for capacity routing.
  • Subprocessor disclosure and change notice: Full subprocessor list covering inference infrastructure, with 30-day advance notice of any changes and a right to object.
  • Cross-border transfer mechanism: SCCs (Module 2 or 3), adequacy decision reliance, or Binding Corporate Rules documented in the DPA with transfer impact assessment.
  • BAA execution (HIPAA environments): Executed HIPAA BAA covering all PHI categories processed via the AI service, with updated Security Rule provisions.
  • DORA Article 28 provisions (EU financial entities): Exit assistance, data portability, sub-outsourcing transparency, and business continuity obligations.
  • FedRAMP authorisation verification (US federal): Written confirmation of applicable FedRAMP package, impact level, and boundary documentation for the specific service configuration being deployed.
  • Audit rights: Right to receive third-party audit reports, a vendor obligation to notify you if processing occurs outside committed regions, and a commitment to remediate identified findings within defined timeframes.
  • Prompt retention limits: Binding limit on inference log retention periods, with confirmation that prompt data is excluded from model training.

Negotiating these provisions is where commercial leverage matters. Vendors are more flexible with customers who represent multi-year commit value, have competitive alternatives in play, or are negotiating via a platform intermediary (Azure, AWS) where the residency architecture already exists. Our enterprise AI contract negotiation team supports the full process from DPA gap analysis through to executed contract, ensuring every provision is enforceable — not just aspirational.

Download our AI platform contract negotiation framework

Covers every provision across OpenAI, Azure, AWS, Google, and Anthropic — with red-line language and negotiation benchmarks.
Download Free →

Conclusion: Residency Is a Contract Problem, Not a Configuration Problem

The single most common mistake enterprises make in AI data residency is treating it as a technical configuration question. It is a contract question. Technical configurations can be changed by the vendor without notice. Region settings can be overridden by capacity routing logic. Default endpoints can shift as vendor infrastructure evolves. Only a contractually binding commitment — with audit rights, breach notification obligations, and remediation provisions — provides the enforceable protection that regulators expect.

For EU organisations under GDPR, UK organisations under the UK GDPR, US healthcare organisations under HIPAA, EU financial entities under DORA, and US federal contractors under FedRAMP, the standard commercial AI contract is not fit for purpose. The provisions outlined in this guide need to be negotiated before go-live, not patched in after a regulatory inquiry.

The good news is that these provisions are achievable. Every major AI vendor has a compliant deployment path for regulated enterprises — the challenge is knowing which path exists, which contractual language unlocks it, and which leverage points make the vendor willing to execute. That is precisely the combination of expertise that drives successful outcomes in our enterprise AI licensing and contract advisory work.