Google researchers unlock some truths about getting AI agents to actually work

December 16, 2025

Welcome to Eye on AI. In this edition…President Trump takes aim at state AI regulations with a new executive order…OpenAI unveils a new image generator to catch up with Google’s Nano Banana…Google DeepMind trains a more capable agent for virtual worlds…and an AI safety report card doesn’t provide much reassurance.

Hello. 2025 was supposed to be the year of AI agents. But as the year draws to a close, it is clear such prognostications from tech vendors were overly optimistic. Yes, some companies have started to use AI agents. But most are not yet doing so, especially not in company-wide deployments. A McKinsey “State of AI” survey from last month found that a majority of businesses had yet to begin using AI agents, while 40% said they were experimenting. Less than a quarter said they had deployed AI agents at scale in at least one use case. And when the consulting firm asked about adoption in specific functions, such as marketing and sales or human resources, the results were even worse: no more than 10% of respondents said they had AI agents “fully scaled” or were “in the process of scaling” in any of these areas. The function with the most usage of scaled agents was IT (where agents are often used to automatically resolve service tickets or install software for employees), and even there only 2% reported having agents “fully scaled,” with an additional 8% saying they were “scaling.”

A big part of the problem is that designing workflows for AI agents that will enable them to produce reliable results turns out to be difficult. Even the most capable of today’s AI models sit on a strange boundary—capable of doing certain tasks in a workflow as well as humans, but unable to do others. Complex tasks that involve gathering data from multiple sources and using software tools over many steps represent a particular challenge. The longer the workflow, the more risk that an error in one of the early steps in a process will compound, resulting in a failed outcome. Plus, the most capable AI models can be expensive to use at scale, especially if the workflow involves the agent having to do a lot of planning and reasoning. Many firms have sought to solve these problems by designing “multi-agent workflows,” where different agents are spun up, with each assigned just one discrete step in the workflow, including sometimes using one agent to check the work of another agent. This can improve performance, but it too can wind up being expensive—sometimes too expensive to make the workflow worth automating.
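To see why long workflows are so fragile, consider a back-of-the-envelope calculation. The per-step success rate below is an assumption for illustration, not a number from the research:

```python
# Illustration with an assumed per-step success rate (not a figure from the article):
# even high per-step accuracy compounds into a low end-to-end success rate.
per_step_success = 0.95  # assume the agent gets each individual step right 95% of the time

for steps in (5, 10, 20, 40):
    end_to_end = per_step_success ** steps
    print(f"{steps:>2} steps -> {end_to_end:.0%} chance the whole workflow succeeds")

# At 95% per step, a 20-step workflow succeeds only about 36% of the time,
# which is why errors early in a long agentic workflow are so costly.
```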

Are two AI agents always better than one?

Now a team at Google has conducted research that aims to give businesses a good rubric for deciding when it is better to use a single agent, as opposed to building a multi-agent workflow, and what type of multi-agent workflows might be best for a particular task.

The researchers conducted 180 controlled experiments using AI models from Google, OpenAI, and Anthropic. They tested the models against four different agentic AI benchmarks covering a diverse set of goals: retrieving information from multiple websites; planning in a Minecraft game environment; planning and tool use to accomplish common business tasks such as answering emails, scheduling meetings, and using project management software; and a finance agent benchmark. That finance test requires agents to retrieve information from SEC filings and perform basic analytics, such as comparing actual results to management’s forecasts from the prior quarter, figuring out how revenue from a specific product segment has changed over time, or estimating how much cash a company might have free for M&A activity.

In the past year, the conventional wisdom has been that multi-agent workflows produce more reliable results. (I’ve previously written about this view, which has been backed up by the experience of some companies, such as Prosus, here in Eye on AI.) But the Google researchers found instead that whether the conventional wisdom held was highly contingent on exactly what the task was.

Single agents do better at sequential steps, worse at parallel ones

If the task was sequential, as was the case for many of the Minecraft benchmark tasks, it turned out that so long as a single AI agent could perform the task accurately at least 45% of the time (a pretty low bar, in my opinion), it was better to deploy just one agent. Using multiple agents, in any configuration, reduced overall performance by huge amounts, ranging between 39% and 70%. The reason, according to the researchers, is that if a company had a limited token budget for completing the entire task, then the demands of multiple agents trying to figure out how to use different tools would quickly overwhelm the budget.
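The budget argument is easy to see with a toy model. The numbers below are illustrative assumptions, not figures from the Google study; the point is simply that per-agent overhead eats into a fixed budget fast:

```python
# Toy model (assumed numbers, not from the study): with a fixed token budget for
# the whole task, every extra agent pays its own overhead to read instructions
# and figure out which tools to use before doing any useful work.
TOTAL_BUDGET = 100_000       # assumed token budget for the entire sequential task
PER_AGENT_OVERHEAD = 15_000  # assumed tokens each agent burns on setup and tool discovery

def tokens_left_for_real_work(num_agents: int) -> int:
    """Tokens remaining for actual task steps after per-agent overhead."""
    return max(TOTAL_BUDGET - num_agents * PER_AGENT_OVERHEAD, 0)

for n in (1, 2, 4, 6):
    print(f"{n} agent(s): {tokens_left_for_real_work(n):,} tokens left for the task itself")

# One agent keeps 85,000 tokens for the task; six agents keep only 10,000.
```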

But if a task involved steps that could be performed in parallel, as was true for many of the financial analysis tasks, then multi-agent systems conferred big advantages. What’s more, the researchers found that exactly how the agents are configured to work with one another makes a big difference, too. For the financial-analysis tasks, a centralized multi-agent system, in which a single coordinator agent directs and oversees the activity of multiple sub-agents and all communication flows to and from the coordinator, produced the best result. This setup performed 80% better than a single agent. Meanwhile, an independent multi-agent system, in which there is no coordinator and each agent is simply assigned a narrow role to complete in parallel, was only 57% better than a single agent.
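For readers who want to picture the two configurations, here is a minimal sketch of both topologies in Python. Everything here is a hypothetical placeholder rather than code from the study: call_model() stands in for whatever LLM API you use, and the subtasks are invented examples.

```python
# Minimal sketch of the two multi-agent topologies described above.
# call_model() is a hypothetical stand-in for a real LLM API call.
from concurrent.futures import ThreadPoolExecutor

def call_model(prompt: str) -> str:
    """Placeholder for a call to Gemini, GPT, Claude, etc."""
    return f"<model answer to: {prompt[:60]}...>"

SUBTASKS = [
    "Pull segment revenue from the latest SEC filing",
    "Compare actual results to management's prior-quarter forecast",
    "Estimate how much cash is free for M&A",
]

def independent_workflow(subtasks):
    # No coordinator: each sub-agent handles one narrow role in parallel,
    # and the raw outputs are simply returned side by side.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(call_model, subtasks))

def centralized_workflow(task, subtasks):
    # A single coordinator plans, delegates the parallel sub-tasks, and then
    # synthesizes the results; all communication flows through it.
    plan = call_model(f"Plan and order these steps for: {task}\nSteps: {subtasks}")
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(call_model, subtasks))
    return call_model(f"Plan:\n{plan}\nSub-agent outputs:\n{results}\nWrite the final analysis.")

if __name__ == "__main__":
    print(centralized_workflow("Analyze the latest quarterly results", SUBTASKS))
```

The structural difference is where planning and synthesis happen: the independent version has neither, while the centralized version routes everything through the coordinator agent.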

Research like this should help companies figure out the best ways to configure AI agents and enable the technology to finally begin to deliver on last year’s promises. For those selling AI agent technology, late is better than never. For the people working in the businesses using AI agents, we’ll have to see what impact these agents have on the labor market. That’s a story we’ll be watching closely as we head into 2026.

With that, here’s more AI news.

Jeremy Kahn [email protected] @jeremyakahn

