Azure OpenAI SLA and Support
Azure OpenAI Service provides powerful AI capabilities to enterprises, but it comes with specific service-level agreements (SLAs) and support limitations.
This brief explains what Azure OpenAI’s SLA and support cover and what they don’t. Enterprise IT, procurement, finance, and legal teams will gain a clear view of uptime guarantees, model performance expectations, support escalation paths, and key contractual gaps that need to be addressed.
The goal is to help you negotiate Azure OpenAI terms with eyes wide open, ensuring you receive reliable service without unwelcome surprises.
Azure OpenAI SLA: What’s Guaranteed and What Isn’t
Microsoft provides a standard availability SLA for Azure OpenAI – typically 99.9% uptime for the service.
In plain terms, that means Azure OpenAI should be unavailable for no more than roughly 43 minutes in a 30-day month.
If Microsoft fails to meet this uptime commitment, the remedy is usually a service credit on your account (pro-rated based on downtime).
Financially, this is the extent of Microsoft’s obligation for outages – it’s a limited credit, not full compensation for business losses.
Enterprise customers need to track outages and claim those credits, as they’re not automatic.
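A quick way to sanity-check what 99.9% actually permits is to translate the uptime percentage into a monthly downtime allowance. A minimal sketch, assuming a 30-day month:

```python
def allowed_downtime_minutes(uptime_pct: float, days: int = 30) -> float:
    """Maximum downtime (in minutes) permitted per month at a given uptime %."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - uptime_pct / 100)

# 99.9% uptime still allows roughly 43 minutes of downtime each month
print(round(allowed_downtime_minutes(99.9), 1))   # 43.2
# A stricter 99.99% target would shrink that to about 4 minutes
print(round(allowed_downtime_minutes(99.99), 1))  # 4.3
```

This is worth running against your own tolerance: if 43 minutes of monthly downtime is unacceptable for your workload, that gap is the opening for the negotiation points discussed below.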
What isn’t covered by the SLA? A lot.
The SLA assures service availability (i.e., the platform is up), but it does not guarantee:
- Model accuracy or quality of responses – The AI might give wrong or unpredictable answers, and Microsoft makes no promises on output correctness.
- Specific performance metrics – Outside of uptime, there’s generally no guarantee on response times or throughput in the standard offering (unless you have a special arrangement, which we’ll discuss later).
- Uninterrupted service during preview features – New models or features in preview come with no SLA at all. If you’re relying on a preview model, Microsoft treats it as a best-effort service with no uptime guarantee until it’s officially released.
- Remedies beyond service credit – The contract provides no right to damages or additional refunds for Azure OpenAI issues; the SLA credit is the sole contractual recourse for downtime.
It’s important to verify the exact SLA language in your agreement or in Microsoft’s product terms. Azure OpenAI is a relatively new service, so make sure the contract explicitly includes it under the standard Azure SLAs.
If Azure OpenAI is mission-critical for you, consider negotiating stronger terms.
For example, some enterprises push for a written assurance of the 99.9% uptime (if it’s not already clear) or even ask for a slightly higher uptime target.
You likely won’t get Microsoft to significantly alter a global SLA, but raising the topic signals that you expect reliability and will hold them accountable via credits or other measures.
Support and Escalation: Navigating Issues
When something goes wrong with Azure OpenAI, how do you get help?
Enterprises using Azure OpenAI will lean on Microsoft’s support structure.
This means any issues are handled through your Azure support plan and Microsoft account team, just like other Azure services:
- Tiered support: Depending on your support level (Standard, Professional Direct, or Premier/Unified Support), you’ll have different response times and escalation paths. For a production AI solution, most enterprises ensure they have a top-tier support plan to get a 24/7 rapid response. Azure OpenAI itself doesn’t come with a special support hotline – it falls under your existing support agreement.
- Initial troubleshooting: Day-to-day issues (e.g., API errors, service unavailability) are addressed through the Azure support ticket system. Microsoft’s engineers will determine whether the issue is client-side, an Azure infrastructure problem, or related to the OpenAI model endpoints.
- Escalation: If the problem is on Microsoft’s side (say, a regional outage or a bug in the service), it gets escalated internally. Microsoft may involve OpenAI’s engineers behind the scenes, but to you, Microsoft is the accountable party. Ensure your account team is aware of any major incident – for critical outages, have them loop in product specialists or leadership as needed.
Importantly, not all issues have quick fixes. If your application is yielding poor results from the model (due to a quality or tuning issue), that’s not a “break/fix” case that support can resolve in a single ticket.
Microsoft support can advise on best practices, but they won’t rewrite the model. Similarly, if you hit a usage limit or content filter block, support may confirm the cause, but the “fix” might be an awaited product improvement or a usage change on your end.
Enterprises should set clear expectations internally: Microsoft will assist when the service fails to work as designed, but they won’t guarantee the success of your solution.
Negotiation tip:
If Azure OpenAI will run a business-critical workload, discuss support provisions during the contracting process.
For example, you might negotiate a named technical contact or quarterly service reviews with Microsoft.
At minimum, confirm that Azure OpenAI is covered under your Premier/Unified Support agreement (it should be, but double-check for any exceptions).
Rapid support escalation is as vital as the technology itself when an AI system is in production – you don’t want to find out during an outage that you only had a basic support plan in place.
Performance Expectations and Latency
Beyond just “up or down” availability, enterprises need to consider how well Azure OpenAI performs under real-world use.
This includes model latency (the speed of responses), throughput (the number of requests per minute it can handle), and consistency of performance.
Here’s what to know:
- Shared service performance: In the standard Azure OpenAI setup, you’re hitting a multi-tenant endpoint. Microsoft manages scale behind the scenes to handle loads, but during peak times, you may experience slower responses or occasional rate-limit errors if you exceed your allotted throughput. There is no explicit performance SLA (by default) guaranteeing that every response will be under X milliseconds – only the general 99.9% uptime guarantee. In practice, the service is designed to be fast; however, heavy enterprise use should be thoroughly tested.
- Provisioned throughput (Dedicated capacity): Microsoft offers an advanced option for high-demand customers, called Provisioned or Managed Deployments. This is essentially a dedicated cluster of the OpenAI service for your use. With this, you pay a fixed hourly rate to reserve capacity, and in return, you gain predictable performance. Notably, the provisioned offering comes with a Latency SLA – Microsoft might guarantee, for example, that 99th percentile response time will stay below a certain threshold. If you have low-latency requirements or spiky loads, this option ensures the model responds consistently even under heavy loads. The trade-off is cost: you’re paying even when you’re not using it, for that peace of mind.
- Throughput limits and quotas: By default, Azure OpenAI imposes quotas on how many requests or tokens per minute you can process (especially for powerful models like GPT-4). These limits prevent any one tenant from overloading the system. If your use case needs higher throughput, you’ll have to request a quota increase through Microsoft. This isn’t a negotiation in the monetary sense but a technical allotment. However, it’s wise to get any promised capacity increase in writing (even an email from Microsoft) and to do it well before you launch a critical application.
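When load testing, tail percentiles (p95/p99) matter more than averages, since a latency SLA on the provisioned tier is typically expressed at a high percentile. A minimal sketch of nearest-rank percentile math over recorded response times (the sample values below are purely illustrative):

```python
import math

def latency_percentile(samples_ms: list[float], pct: float) -> float:
    """Return the pct-th percentile of recorded latencies (nearest-rank method)."""
    ordered = sorted(samples_ms)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))  # 1-indexed rank
    return ordered[rank - 1]

# Illustrative latencies (ms) collected during a load test run
samples = [120, 135, 150, 180, 210, 240, 300, 450, 900, 1400]
print(latency_percentile(samples, 50))  # 210  (the median looks healthy)
print(latency_percentile(samples, 99))  # 1400 (the tail tells a different story)
```

The point of the example: a median that looks fine can hide a long tail, which is exactly what a p99 latency commitment on dedicated capacity is meant to address.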
Crucially, model performance in terms of quality is not guaranteed.
Azure OpenAI will run the chosen model (say GPT-4), but the relevance or correctness of its output is inherently variable. Microsoft’s terms explicitly disclaim any warranties about the outcomes generated. So, treat the model’s output as probabilistic.
For important tasks, incorporate human review or validation steps to ensure accuracy. Your contract won’t save you if the AI writes something incorrect or troublesome – Microsoft won’t be on the hook for that.
Managing performance means both technical tuning (prompt design, model choice, scaling resources) and governance (reviewing outputs for accuracy and policy compliance).
In summary, plan for performance as if it’s your responsibility: load test your solution, utilize caching or request batching to optimize, and select the appropriate deployment model (shared vs. dedicated) for your specific needs.
If ultra-low latency or guaranteed throughput is a must, expect to pay more or negotiate a commitment for reserved capacity. If not, at least ensure you have a scaling strategy and have discussed your expected usage with Microsoft so they can support it.
Pricing Surprises and Usage Constraints
Azure OpenAI’s pricing model is usage-based, which can be a double-edged sword for enterprise budgets.
You pay per 1,000 tokens processed for text models; image models, such as DALL-E, are charged per image.
The rates are public and typically match OpenAI’s direct pricing. However, enterprises evaluating costs must look beyond the sticker price:
- Unpredictable consumption: Usage can grow rapidly once AI is deployed widely. What starts as a pilot with a few thousand requests could turn into millions of tokens per day if integrated into customer-facing or large-scale internal apps. With pay-as-you-go pricing, your costs scale linearly with usage – there’s no built-in volume discount that kicks in. This unpredictability means cost overruns are a real risk if usage isn’t monitored. We’ve seen patterns where an enthusiastic rollout causes a budget shock in the next Azure invoice.
- Enterprise Agreement (EA) integration: The good news is you can fold Azure OpenAI spend into your existing Microsoft enterprise agreements. If you have a pre-committed Azure spend (a Microsoft Azure Consumption Commitment, for example), Azure OpenAI consumption can count toward it. You won’t get a separate discount on Azure OpenAI, but if you already get, say, ~X% off Azure services under your EA, this service should inherit that. Always confirm with Microsoft that Azure OpenAI usage qualifies toward any commitment or discount pools you have – this can significantly offset the raw cost.
- Additional costs: Using Azure OpenAI may incur other Azure costs indirectly. For instance, if you log your prompts and outputs to Azure Storage or Application Insights, those services are metered separately. If you deploy Azure OpenAI in a way that routes traffic between regions or outside of Azure, you may incur network egress charges. These are usually minor compared to the AI compute costs, but enterprises should still apply cost governance across the whole solution (Azure Cost Management budgets, alerts, etc.).
- Dedicated capacity costs: As mentioned, if you opt for a provisioned throughput (dedicated instance), there’s a significant flat cost per hour for that reservation. This can quickly outweigh pay-as-you-go charges if your usage is not consistently high. The dedicated model makes economic sense only if you have a steady, high-volume workload or strict performance needs. Be aware of the commitment – some dedicated instances may require a minimum use period (e.g., monthly terms). Ensure that any such commitment aligns with your project’s lifecycle and that you’re not locked in for longer than necessary.
- Quotas and throttling: Microsoft will enforce usage limits (calls per minute, etc.) unless you raise them. Hitting a quota ceiling might feel like an outage from the app user’s perspective (requests start failing). However, this scenario wouldn’t be considered an SLA breach – it’s a self-imposed guardrail. Thus, it’s up to you to anticipate volume and request higher limits. Don’t assume “unlimited” usage just because it’s cloud – always check the default caps for each model.
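Because throttling surfaces as HTTP 429 errors rather than an outage, client code should retry with exponential backoff instead of failing hard. A minimal sketch – `RateLimitError` here is a hypothetical stand-in for whatever 429 exception your SDK raises (the official `openai` Python SDK has its own built-in retry handling):

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for an HTTP 429 (quota exceeded) response from the service."""

def call_with_backoff(request_fn, max_retries=5, base_delay=1.0):
    """Retry a throttled call with exponential backoff plus jitter.

    request_fn is any zero-argument callable that raises RateLimitError
    when the service throttles the request.
    """
    for attempt in range(max_retries):
        try:
            return request_fn()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the throttle to the caller
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.5)
            time.sleep(delay)
```

Backoff is a mitigation, not a fix: sustained 429s mean your quota is simply too low for the workload, which is the signal to file a quota-increase request.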
In negotiations, while you usually can’t get a unit price reduction for Azure OpenAI, you can pursue other cost relief measures:
- Free credits or funding: Microsoft often has incentive programs for new technologies. Inquire about any Azure OpenAI trial credits or funding opportunities available for your project. For example, as part of a larger Azure deal, they might offer a certain amount of Azure OpenAI usage at no charge to encourage adoption.
- Cost caps: While Microsoft won’t cap its revenue, you can internally enforce a cap. Use Azure’s built-in cost management to set a hard limit or at least alerts. This isn’t a contractual term, but it serves as a safety net to prevent runaway spending.
- Transparency in billing: Insist on clear and detailed billing for Azure OpenAI usage. It should appear as its own service line in your Azure bill. During implementation, verify you can attribute costs to specific apps or departments (tag your resources). This helps later in justifying spend and optimizing usage.
Overall, treat Azure OpenAI’s cost like a utility bill that can spike. The best defense is proactive planning: forecast various usage scenarios (best case, likely case, worst case) and have a plan for each.
Procurement and finance teams should be involved early to set expectations.
No one wants a fantastic AI pilot to be shut down later because it became too successful and expensive – it’s far better to negotiate and budget with realistic growth in mind.
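The scenario forecasting described above is simple enough to script. In the sketch below, the per-1K-token rate and daily volumes are assumed placeholders, not quoted Azure prices – substitute figures from the current price list:

```python
def monthly_token_cost(tokens_per_day: float, price_per_1k: float, days: int = 30) -> float:
    """Estimated monthly spend for a given daily token volume.

    price_per_1k is a placeholder; check the current Azure OpenAI price list.
    """
    return tokens_per_day / 1000 * price_per_1k * days

# Hypothetical best/likely/worst scenarios at an assumed $0.002 per 1K tokens
for label, daily_tokens in [("pilot", 100_000), ("likely", 5_000_000), ("worst", 50_000_000)]:
    print(f"{label}: ${monthly_token_cost(daily_tokens, 0.002):,.2f}/month")
```

Even with rough inputs, running all three scenarios before launch gives procurement and finance a defensible range to budget against, rather than a single optimistic number.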
Contractual Risks and Negotiation Points
Adopting Azure OpenAI means signing up to Microsoft’s standard Online Services Terms (and potentially some Azure OpenAI-specific terms).
Hidden in that fine print are a few risk areas and obligations that enterprise buyers should understand and negotiate where possible:
- Liability limits: Microsoft’s contracts typically cap their liability and exclude indirect damages. Azure OpenAI is no exception – if the service misbehaves or causes losses (say your app crashes or makes a bad decision due to an AI output), Microsoft’s liability is practically limited to what you paid for the service (and often even less, due to credits as sole remedy). This is standard, but it means that the business risk largely rests with you. You likely won’t succeed in removing these caps, but be aware that beyond SLA credits, you have little recourse. Consider your own insurance or contingency plans for critical uses of AI.
- Model and feature changes: The Azure OpenAI platform is evolving fast. Microsoft or OpenAI can update models, deprecate older versions, or change how the service works with relatively short notice. Contractually, Azure services often reserve the right to make “updates or upgrades” at Microsoft’s discretion. The risk is that a model your solution relies on might be retired or altered, potentially requiring you to recertify your application or adjust prompts. To mitigate this, negotiate for notification periods – e.g., Microsoft will give at least 90 days’ notice for any breaking change or model removal. Additionally, stay engaged with Microsoft’s product roadmap through your account team or insider programs to stay informed about upcoming changes.
- Data usage and privacy: While Microsoft contractually commits to not using your Azure OpenAI inputs/outputs for training, you do have obligations on your side. You must agree not to input certain types of sensitive data unless necessary, and if you do (especially personal data), you should utilize features like data encryption and regional storage to comply with applicable privacy laws. Ensure that Microsoft’s Data Protection Addendum is in effect (it usually is part of enterprise agreements), and that Azure OpenAI is covered by it. If your industry requires it (healthcare, etc.), get a HIPAA BAA or other needed addendum from Microsoft. All these should be in place before you deploy sensitive workloads.
- Responsible use obligations: Microsoft has an AI Code of Conduct and requires customers to use Azure OpenAI responsibly. This means implementing content filtering (to catch and block disallowed content the model might produce) and not using the service for prohibited purposes (like generating malware, hate speech, etc.). These requirements aren’t just guidance – if you violate them, you could be in breach of contract and lose Microsoft’s indemnification protections. Microsoft offers a Customer Commitment to defend you against certain intellectual property claims (for example, if a third-party accuses your AI output of infringing copyrights), but only if you follow their responsible AI guidelines (such as using the built-in content filters and not removing them). During negotiations, clarify what you need to do to qualify for this protection and ensure those processes are in place on your side. It can be a valuable safeguard for legal risk, essentially Microsoft saying, “we’ll stand by you if someone challenges the AI’s output, as long as you used the system as intended.”
- Termination and lock-in: As with any cloud service, Microsoft reserves the right to suspend or terminate Azure OpenAI access for misuse or non-payment. But even aside from that, consider lock-in risk: once your apps and users depend on Azure OpenAI, switching to a different AI provider isn’t trivial. Negotiation won’t eliminate this, but you can protect yourself by avoiding any extra restrictive clauses. For instance, ensure you retain the ability to terminate or reduce Azure OpenAI usage without severe penalties (especially if you’ve committed to a large Azure spend expecting to use OpenAI – you don’t want to over-commit if adoption is uncertain). Also, clarify the portability of your data: while the model is proprietary, any fine-tuned models or training you do on Azure should be your IP – make sure the contract doesn’t claim otherwise.
To tie these points together, here’s a table of common risk areas vs. how to address them in an Azure OpenAI agreement:
Issue / Risk Area | Potential Gap or Pitfall | Negotiation or Mitigation Approach |
---|---|---|
Service SLA & Uptime | Base SLA might be limited (99.9% uptime) and only credits as recourse. In new services, SLA could be weaker or “reasonable effort.” Potential downtime with no hefty penalty for MS. | Push for a confirmed uptime SLA in the contract (if not implicitly covered). At minimum, ensure you can claim credits for outages easily. Emphasize need for priority support during any downtime. |
Model Performance | No guarantees on output quality or accuracy; the model might fail your business expectations. Also, no performance (latency) guarantee on shared tier. | Conduct a thorough pilot to set expectations. Negotiate a trial period or early exit clause if the solution doesn’t meet defined benchmarks. For latency-critical needs, consider dedicated capacity with a latency SLA. |
Support Response | Standard support might not assure fast resolution for critical incidents; Azure OpenAI expertise within support might be limited initially. | Include support escalation clauses: e.g. named contacts, guaranteed response times for high-severity issues. Ensure your support plan is adequate (upgrade if needed). Leverage your account team to monitor critical tickets. |
Costs & Budget Overrun | Consumption pricing can lead to unpredictable costs; no built-in discount tiers and potential for over-usage beyond budget. | Negotiate to incorporate Azure OpenAI into existing Azure commitments for discounts. Set up cost governance (budgets/alerts). Ask Microsoft for any available usage credits or financial governance assistance. |
Unilateral Terms Changes | Microsoft can change service terms, pricing, or even deprecate models with notice. You could be stuck if a change undermines your solution. | Request a reasonable notification period for any material changes (e.g. 90 days). Where possible, negotiate a right to terminate or adjust commitments if a change severely impacts your use case. Stay in close communication with Microsoft about product roadmap to anticipate changes. |
Customer Obligations | Your responsibilities (content filtering, user consent, data handling) might be overlooked, risking compliance or support from MS. | During negotiation, review Microsoft’s Acceptable Use Policy and Responsible AI requirements. Ensure you can comply and bake those obligations into your implementation plan. Clarify that you receive all relevant compliance documents (e.g. SOC reports, certifications) needed for your regulators or auditors. |
Every enterprise will have a unique risk profile, but the overarching advice is this: be proactive in asking “what if…?” for each of these areas.
What if the service is down? What if the model output causes trouble? What if our usage skyrockets? Then ensure the contract or your contingency plans have an answer.
Microsoft is often willing to discuss these concerns, especially in enterprise deals.
They won’t overhaul their standard contract for one customer, but they can provide side letters, detailed clarifications, or even custom terms in some cases to address critical needs.
Recommendations
To get the most out of Azure OpenAI while protecting your organization, consider these tactical tips during negotiation and deployment:
- Integrate Azure OpenAI into your enterprise agreement: Treat Azure OpenAI as a first-class part of your Microsoft contract. This allows you to leverage any existing discounts and ensures the service is governed by the same negotiated protections (data privacy, liability) as your other Azure services.
- Insist on clarity in the SLA: Don’t assume the fine print covers Azure OpenAI – double-check. Have Microsoft explicitly confirm the uptime commitment (e.g. 99.9%) for your deployments. If your use case is extremely sensitive to downtime, negotiate for an enhanced SLA or at least a faster support response guarantee.
- Leverage a pilot phase: Before fully committing to a long-term solution, run Azure OpenAI in a pilot or proof-of-concept with measurable goals. Negotiate a checkpoint after the pilot (30-90 days) where you can adjust terms or even walk away if the technology doesn’t meet expectations. This puts pressure on Microsoft to ensure your success from the outset.
- Monitor and control usage from day one: Enable Azure cost management tools, set budgets, and place caps on usage if possible. This isn’t just a technical measure – communicate internally that Azure OpenAI costs need oversight. Microsoft can help with setting up cost alerts. Showing that you have governance in place may also strengthen your case when requesting concessions or credits (“we’re doing our part to manage costs, we expect a fair deal on pricing”).
- Align on support and escalation procedures: Work with your Microsoft rep to document how you’ll handle critical issues. For example, get names for a “fast track” escalation if the AI service has a major outage, and ensure those procedures are written into your operations runbook. During negotiation, you might secure a commitment for quarterly service reviews or a dedicated cloud solution architect to assist your team – leverage those if offered.
- Address data and IP concerns head-on: Verify that the contract gives you ownership of inputs/outputs and that Microsoft’s data handling meets your compliance needs. If your legal team is worried about IP infringement from AI outputs, point out Microsoft’s AI Customer Commitment and ensure you’ll comply with its requirements. Get all such assurances in writing (even if it’s referencing Microsoft’s public policies) so everyone is on the same page.
- Prepare for scalability and future changes: Ask Microsoft about their roadmap and plan for new models or features. If you anticipate needing a dedicated capacity or a higher quota in the future, mention it during negotiations – sometimes you can secure a fixed price or at least gain priority access when the time comes. Also, clarify with Microsoft how model upgrades will be handled (e.g., will you be forced to switch to a new model version, and how often?). Proactively negotiating flexibility can save headaches later.
Checklist: 5 Actions to Take
1. Review the official terms and SLA: Pull up the Microsoft Product Terms and Azure OpenAI Service documentation. Read the SLA section and service description to know exactly what is promised (uptime %, credit policy) and identify what’s missing or not clear.
2. Define your requirements and risk tolerance: Gather your team (IT, business owner, compliance, legal) and outline what you need from Azure OpenAI. For example, is 99.9% uptime sufficient? What support response is required if the service is down? Can we tolerate the risk of bad outputs without additional safeguards? This will guide your negotiation priorities.
3. Engage Microsoft early for contract inclusion: Talk to your Microsoft account manager about adding Azure OpenAI to your enterprise agreement or contract order. Confirm that your Azure spend commitments or discounts will apply. Inquire about any preview programs or funding opportunities for Azure OpenAI – sometimes hidden offers become available if you ask.
4. Conduct a pilot and track metrics: Don’t deploy company-wide on day one. First, run a controlled pilot using Azure OpenAI with a limited scope. Monitor uptime, latency, output quality, and costs closely. Use this pilot data to validate that the SLA and performance meet your needs. If you encounter issues, report them to Microsoft immediately – it strengthens your case for negotiating improvements or seeking support.
5. Negotiate and document key terms: Before scaling up, negotiate any custom terms or clarifications needed. This might include an addendum for a specific SLA, a written confirmation of the data handling practices, or an agreed process for requesting higher quotas. Document everything in the contract or a formal email from Microsoft. Internally, also document your compliance steps (like “we will use content filtering and not input XYZ data”) to ensure you uphold your end. With the deal signed, maintain an internal checklist for ongoing compliance (e.g., review usage monthly, refresh employee training on AI use policies, etc.).
FAQ
Q: What SLA does Azure OpenAI Service offer, and what happens if it’s not met?
A: Azure OpenAI comes with a standard 99.9% uptime SLA under Microsoft’s Online Services terms. If the service is unavailable beyond that threshold in a given month, Microsoft’s contract entitles you to request a service credit (a partial bill credit proportional to the downtime). There is no further compensation for outages – the SLA credit is the sole remedy. Enterprise customers should monitor uptime and promptly raise any SLA claims, as there’s usually a window to submit credit requests.
Q: Does Microsoft guarantee the accuracy or safety of the AI’s outputs?
A: No. The Azure OpenAI SLA and support terms do not cover the quality or correctness of model outputs. The service may produce inaccurate information or inappropriate content, and Microsoft disclaims liability for any such errors. It’s the customer’s responsibility to use the service responsibly: implement content filters, provide user warnings as needed, and have human oversight for critical decisions. In short, Azure OpenAI guarantees that the system is running, not that it will always provide a “right” answer.
Q: How is Azure OpenAI priced, and can we get volume discounts or fixed pricing?
A: Azure OpenAI is priced on a pay-as-you-go model, charging per 1,000 tokens (for text/chat models) or image, etc., with rates roughly equivalent to OpenAI’s direct pricing. There isn’t a built-in volume discount schedule (unlike, say, purchasing more licenses for a software product). However, enterprise customers can leverage their Azure agreements: if you have negotiated discounts on Azure consumption or have a prepaid Azure commitment, your Azure OpenAI usage can benefit from that. In essence, while Microsoft won’t typically drop the token price just for you, the effective rate you pay can be lower if you’re already getting, for example, a 15% Azure discount across the board. Additionally, you can ask Microsoft about promotional offers – for instance, some customers receive a certain amount of free Azure OpenAI credits for initial projects, especially when tied to an enterprise deal or an upcoming Azure renewal.
Q: What if we need higher throughput or a specific model at scale – do we have to commit to a certain volume?
A: By default, you don’t need to make an upfront commitment to use Azure OpenAI; it can scale on demand, but within set quota limits. If you anticipate needing a high volume (i.e., a large number of requests per second or the use of the largest models), you should proactively request quota increases. Microsoft can raise your limits, but it may take time and approval, so do this well in advance. For guaranteed capacity, Microsoft offers a Provisioned (dedicated) option, where you essentially reserve a specific amount of compute resources for your Azure OpenAI usage. That doesn’t require a multi-year commitment, but it does mean a fixed monthly cost as long as you use that dedicated instance. Use the dedicated option if you require very consistent, high-volume throughput or a stronger latency guarantee. Otherwise, you can stick to the standard shared model – just keep an eye on usage, and scale out by requesting quota hikes or using multiple instances as needed (Microsoft will guide you on the best approach).
Q: How does Azure OpenAI handle our data, and do we retain intellectual property rights on outputs?
A: Data handling: Azure OpenAI is built with enterprise privacy in mind. Your input data and the outputs generated are not used to train Microsoft’s or OpenAI’s models. Unlike some consumer services, Azure explicitly does not harvest your prompts for model improvement. Microsoft may store the inputs/outputs temporarily (up to 30 days) in secure logs for monitoring abuse and diagnosing service issues, but those logs are isolated and automatically purged. You can also request that Microsoft not retain logging data beyond what is necessary, if required by your policies. Ownership and IP: You retain ownership of the content you input and the content the AI outputs for you. Microsoft’s terms confirm that they claim no ownership over your data or the results. Many customers treat the AI output as their IP (for example, if the model generates code or text, you can use it freely in your products). Microsoft also offers an IP indemnification called the “Azure AI Customer Commitment”: essentially, if someone sues you claiming the AI output infringes their copyright, Microsoft will defend you as long as you adhere to the prescribed usage guidelines (such as using content filters and not intentionally generating disallowed content). This gives some peace of mind on the intellectual property front. Always review the latest Microsoft documentation and ensure you comply with their responsible AI standards to take advantage of these protections.