A new tool from Microsoft called Agent 365 is designed to help businesses control their growing collection of robotic helpers.
Agent 365 is not a platform for making enterprise AI tools; it’s a way to manage them, as if they were human employees. Companies using generative AI agents in their digital workplace can use Agent 365 to organize their growing sprawl of bots, keep tabs on how they’re performing, and tweak their settings. The tool is rolling out today in Microsoft’s early access program.
Essentially, Microsoft created a trackable workspace for agents. “Tools that you use to manage people, devices and applications today, you’d want to extend them to run agents as well in the future,” says Charles Lamanna, president of business and industry for Copilot, Microsoft’s AI chatbot.
Lamanna envisions a future where companies have many more agents performing labor than humans. If a company has 100,000 employees, for example, he sees it using “half a million to a million agents,” handling tasks that range from simple email organization to running the “whole procurement process” for a business. He claims Microsoft internally uses millions of agents.
This army of bots, with permission to take actions inside a company’s software and automate aspects of an employee’s workflow, could quickly grow unwieldy to track. A lack of clear oversight could also open businesses up to security breaches. Agent 365 is a way to manage all your bots, whether those agents were built with Microsoft’s tools or through a third-party platform.
Agent 365’s core feature is a registry of an organization’s active agents all in one place, featuring specific identification numbers for each and details about how they are being used by employees. It’s also where you can change the settings for agents and what aspects of a business’s software each one has permission to access.
The tool includes security measures to scan what every agent is doing in real time. “As data flows between people, agents, and applications,” says Lamanna, “it stays protected.” As more businesses run pilot programs testing out AI agents, more questions arise about how safe the technology is to embed in core workflows that often contain sensitive data. A “prompt injection attack,” in which a website or app hides messages that try to take control of an agent or change its outputs, is just one example of the vulnerabilities found in existing AI agents.
Lamanna believes business leaders who are wary about the ramifications, from security concerns to random errors, of rolling out thousands of AI agents into their workforce are fighting the inevitable. “Resisting having agents enabled is kind of like resisting giving internet or PC access to your employees,” he says. And of course he thinks that! Microsoft is in the business of helping companies adopt generative AI tools alongside their enterprise software subscriptions.
While every major AI company in Silicon Valley has been laser-focused on agents this year, the technology is still buggy at best and can introduce surprise errors into workflows if the automation veers off-course.
WIRED’s recent tests of AI agents have focused more on potential personal uses than on enterprise applications like those managed by Agent 365. Still, as a reporter, I have yet to experiment with any agent that felt genuinely helpful. In early tests, agents often failed to complete basic tasks, like shopping for a birthday gift.
Even so, white-collar workers who are likely already feeling pressure from management to use agents at the office can expect that trend to continue heading into next year. “2025 is the year of agents,” says Lamanna. “2026 will be even more agents.”