Tens of millions of AI agents appeared in the Agent 365 Registry within weeks of preview. Understanding what Agent 365 does, how it is licensed, and how to govern agent proliferation is no longer optional for enterprise IT.
Agent 365 is Microsoft's enterprise control plane for AI agents. It provides a single, central place to observe, govern, manage, and secure agents across the organisation — regardless of how they were built. Whether agents were created using Microsoft Copilot Studio, third-party tools, or open-source frameworks, Agent 365 gives IT and security teams visibility and control over what they do, what they access, and whether their outputs comply with policy.
IDC predicts 1.3 billion AI agents in enterprise circulation by 2028. Without a governance layer, these agents represent a material security, compliance, and cost risk. For regulated industries — financial services, healthcare, critical infrastructure — agent governance may become a compliance requirement within the next 12–24 months.
Before Agent 365, each AI agent was effectively invisible to enterprise IT — it acted on behalf of a user but had no identity, no audit trail, and no policy constraint. Agent 365 gives every registered agent an Entra ID, a defined permission scope, and an observable activity log. This is the same shift that happened with service accounts and managed identities, now applied to AI agents. The compliance implications are significant and not yet fully priced into most enterprise security postures.
A central registry lists every AI agent active in your Microsoft 365 environment. It automatically discovers agents created through Copilot Studio, Power Automate, Teams, and integrated third-party tools, and provides the visibility into agent count, creator, and activity status that most organisations currently lack entirely.
Observability provides real-time and historical logging of agent activity — what data each agent accessed, what actions it took, and what outputs it produced. This is essential for audit trails in regulated industries and for identifying agents behaving outside expected parameters.
Automated risk scoring flags registered agents that access unusual data volumes, attempt unauthorised API calls, or show activity patterns outside normal operational bounds, and it integrates with Microsoft Sentinel for security operations workflows.
Agents receive Entra ID identities with defined permission scopes, just as human employees do. Lifecycle management covers creation, permission review, and decommissioning. This enables the same access control rigour applied to human users to extend to AI agents — closing a major governance gap.
Policy enforcement lets IT define what agents may and may not do — data access boundaries, output filtering, action restrictions. Policies can be set at the agent, department, or organisational level, and non-compliant agents are automatically flagged and can be suspended pending review.
Agent 365 governance extends beyond Microsoft-built agents. Open API integration allows IT to register and monitor agents built on third-party platforms — including Salesforce Agentforce, ServiceNow Now Assist, and custom enterprise agents built on LangChain, AutoGen, or similar frameworks.
E7's per-user pricing includes Agent 365 for each licensed user. However, agents themselves may require separate consumption-based licensing when they operate outside the E7 per-seat boundary — a cost exposure that is not immediately apparent in the per-seat pricing discussion.
An E7 user who creates an agent that runs autonomously 24 hours a day — querying SharePoint, summarising emails, sending Teams messages — generates compute and API consumption costs that are not covered by the $99/user/month E7 commitment. These charges accumulate in your Azure subscription. Without explicit consumption caps negotiated into your EA, agent proliferation can drive material cloud cost overruns with no usage threshold warning.
| Licensing Model | Applies When | Commercial Risk |
|---|---|---|
| Per-user (E7 bundle) | Agent used by a single licensed E7 user on demand | Covered — no additional charge |
| Per-agent (Copilot Studio) | Agent acts autonomously or is shared across users | Consumption-based charge outside E7 commitment |
| Azure consumption | Agent calls Azure OpenAI, Azure AI Search, Azure Functions | Billed to Azure subscription — uncapped by default |
| Standalone Agent 365 ($15/mo) | E5 users needing agent governance only | Predictable — add-on to E5 at known rate |
The single most important Agent 365 commercial protection is a monthly Azure consumption cap covering all agent-generated workloads. This cap must be explicitly negotiated into the EA addendum — it is not present by default in standard Microsoft commercial terms.
Before negotiating a cap, model expected agent deployment: how many agents, what data sources they access, how frequently they run, and what Azure services they call. A 90-day controlled pilot across 500 E7 users will generate the consumption data needed to model accurately at scale.
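The modelling step above is simple arithmetic once pilot data is in hand. The agent classes, run rates, and per-run costs below are placeholder assumptions standing in for your own pilot telemetry, not Microsoft price points:

```python
def monthly_agent_cost(agents, runs_per_day, cost_per_run_usd):
    """Estimated monthly Azure consumption for one class of agents."""
    return agents * runs_per_day * 30 * cost_per_run_usd

# Hypothetical pilot findings for three illustrative agent classes.
workload = [
    # (description,              agents, runs/day, $/run)
    ("on-demand user agents",       400,        5, 0.002),
    ("shared departmental agents",   25,      120, 0.010),
    ("autonomous 24/7 agents",        5,     1440, 0.005),
]

modelled = sum(monthly_agent_cost(a, r, c) for _, a, r, c in workload)
cap = modelled * 1.5       # 150% of modelled workload, per the guidance above
alert_at = cap * 0.8       # alert well before the cap is reached

print(f"modelled monthly spend: ${modelled:,.2f}")
print(f"negotiated cap (150%):  ${cap:,.2f}")
print(f"alert threshold (80%):  ${alert_at:,.2f}")
```

Replace the three workload tuples with the classes observed in your own 90-day pilot; the structure, not the numbers, is the point.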
Negotiate a monthly consumption cap at 150% of modelled agent workload — providing growth headroom while preventing runaway spend. Require 30-day advance notice from Microsoft if consumption trends toward the cap threshold.
Any consumption above the cap should require your written approval before additional charges are incurred. This prevents a single poorly-governed agent from generating significant charges before IT is aware. Agent 365 anomaly detection should be configured to alert before the cap is reached.
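A minimal sketch of alert-before-cap logic, assuming a linear month-end projection from month-to-date spend. In practice this would read from Azure Cost Management exports; all figures and thresholds here are illustrative:

```python
def projected_month_end(spend_to_date, day_of_month, days_in_month=30):
    """Linear projection of month-end spend from month-to-date actuals."""
    return spend_to_date / day_of_month * days_in_month

def cap_alert(spend_to_date, day_of_month, cap, warn_ratio=0.8):
    """Classify the current run rate against the negotiated cap."""
    projection = projected_month_end(spend_to_date, day_of_month)
    if projection >= cap:
        return "BREACH-LIKELY: freeze non-essential agents, invoke approval clause"
    if projection >= cap * warn_ratio:
        return "WARNING: trending toward cap, notify agent owners"
    return "OK"

# A negotiated cap of $3,000/month, checked on day 10 with $1,400 spent
# projects to $4,200 at month end, so the breach-likely path fires:
print(cap_alert(1400, 10, 3000))
```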
The following five-item framework provides a starting structure for enterprise AI agent governance using Agent 365 capabilities. It is designed to be implemented incrementally — not as a big-bang programme — starting with high-risk agent categories.
Run a 30-day agent discovery exercise to identify all agents operating in your Microsoft 365 environment. Register every agent in the Agent 365 Registry with a named owner, defined purpose, and approved data scope. No unregistered agents should be permitted to operate in production.
Assign Entra ID identities to all registered agents. Define permission scopes using least-privilege principles — the minimum data access and action rights required for the agent's documented purpose. Review and revalidate permissions quarterly.
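A registry record pairing each agent with a named owner, a documented purpose, a least-privilege scope set, and a review date might look like the following sketch. The `AgentRecord` shape and field names are illustrative assumptions, not the Agent 365 schema; the scopes shown are Microsoft Graph permission names used only as examples:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AgentRecord:
    agent_id: str          # Entra ID object id
    owner: str             # named human owner
    purpose: str           # documented purpose
    scopes: frozenset      # minimum permissions for that purpose
    last_review: date      # last quarterly revalidation

    def review_overdue(self, today: date, max_days: int = 90) -> bool:
        """True when the quarterly permission review is past due."""
        return (today - self.last_review).days > max_days

invoice_bot = AgentRecord(
    agent_id="a1b2c3",
    owner="finance-ops@contoso.example",
    purpose="Summarise supplier invoices from one SharePoint library",
    scopes=frozenset({"Sites.Selected", "Mail.Send.Shared"}),  # illustrative
    last_review=date(2025, 1, 15),
)
print(invoice_bot.review_overdue(date(2025, 6, 1)))  # 137 days since review: True
```

Keeping scopes as data on the record, rather than in a separate system, makes the quarterly revalidation a query rather than an archaeology exercise.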
Classify agents by risk tier based on data sensitivity, action scope, and autonomy level. High-risk agents (those accessing sensitive data or taking consequential actions) require enhanced monitoring, mandatory output review, and explicit change approval for any modification.
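One possible scoring rule for the three factors above (data sensitivity, action scope, and autonomy), with illustrative thresholds that your risk team would set for real:

```python
def risk_tier(data_sensitivity: int, action_scope: int, autonomy: int) -> str:
    """Each factor scored 0 (none) to 3 (high); returns a tier label."""
    # Sensitive data, or consequential actions taken autonomously, is high risk.
    if data_sensitivity == 3 or (action_scope >= 2 and autonomy >= 2):
        return "high"
    if data_sensitivity + action_scope + autonomy >= 4:
        return "medium"
    return "low"

# A read-only FAQ bot vs. an autonomous agent acting on external systems:
print(risk_tier(data_sensitivity=1, action_scope=0, autonomy=1))  # low
print(risk_tier(data_sensitivity=2, action_scope=3, autonomy=3))  # high
```

The point is that the tier is computed from recorded attributes of the agent, not assigned ad hoc, so every registered agent gets a defensible classification.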
Configure Agent 365 observability to generate alerts for: unusual data access volume, out-of-hours activity for user-bound agents, failed permission attempts, and output anomalies. Route alerts to the security operations team and the agent owner.
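The alert conditions above can be expressed as simple predicates over an activity event. The event fields and thresholds here are assumptions for illustration, not the Agent 365 event schema:

```python
def alerts_for(event: dict, baseline_mb: float) -> list:
    """Return alert labels triggered by one agent activity event."""
    found = []
    if event["data_read_mb"] > 5 * baseline_mb:
        found.append("unusual-data-volume")
    if event["user_bound"] and not 7 <= event["hour_utc"] <= 19:
        found.append("out-of-hours-activity")
    if event["failed_permission_attempts"] > 0:
        found.append("failed-permission-attempts")
    return found

# A user-bound agent reading 900 MB at 03:00 UTC with two denied calls:
event = {
    "data_read_mb": 900.0,
    "hour_utc": 3,
    "user_bound": True,
    "failed_permission_attempts": 2,
}
print(alerts_for(event, baseline_mb=50.0))  # all three alerts fire
```

Each alert label would then be routed to both the security operations queue and the agent's named owner, as the framework requires.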
Define and enforce agent lifecycle policies: creation approval, 90-day activity reviews, automatic suspension of inactive agents, and formal decommissioning with data retention compliance. Treat agent decommissioning with the same rigour as offboarding a human employee.
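The 90-day inactivity rule can be sketched as follows; the data shapes and agent names are illustrative:

```python
from datetime import date, timedelta

def agents_to_suspend(last_activity: dict, today: date, window_days: int = 90):
    """Return ids of agents with no recorded activity inside the window."""
    cutoff = today - timedelta(days=window_days)
    return sorted(a for a, last in last_activity.items() if last < cutoff)

last_seen = {
    "invoice-bot": date(2025, 5, 20),
    "hr-faq-bot": date(2025, 1, 2),    # long inactive
    "ops-digest": date(2025, 4, 1),
}
print(agents_to_suspend(last_seen, today=date(2025, 6, 1)))  # ['hr-faq-bot']
```

Suspension, not deletion, is the right automatic action: the owner can contest it during review, and formal decommissioning with data retention checks follows only after that.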
Need help building an Agent 365 governance programme before your E7 rollout? Redress Compliance will design your agent governance framework, negotiate consumption caps into your EA, and advise on compliance posture for regulated industries.
Talk to Our Microsoft Advisory Team