In Charlie Chaplin’s 1936 film Modern Times, a factory worker struggles to keep pace with an ever-accelerating assembly line — until the machine swallows him whole. Nearly 90 years later, Wharton professor Eric Bradlow has the image on his mind. The machines are smarter now. The stakes are higher. And according to a sweeping new joint report from Accenture and the Wharton School, the humans running them are falling behind in a way that should alarm every boardroom in America.
There is a lot of breathless talk of autonomous agents reshaping every corner of corporate America, from handling sales calls to writing code to managing supply chains. But the report from the partnership between Accenture’s Global Products practice and Wharton’s AI and Analytics Initiative adds evidence to an emerging, inconvenient pattern: the smarter AI gets, the more it demands of the humans behind it.
“Intelligence may be scalable, but accountability is not,” says the report, titled The Age of Co-Intelligence: How Humans, AI Agents and Robots Are Redefining Value. It’s a sentence that sounds almost simple until you sit with what it means for every boardroom deploying agents by the hundreds. “This asymmetry is critical,” it continues, arguing that as AI removes limits on how much thinking and analysis can be done, humans still have to decide what matters, set strategy, and, importantly, own the outcomes.
The central finding is not that AI is coming for human jobs — it’s that it poses a direct challenge to all the leaders who will have to manage a world of autonomous bots crawling through the white-collar economy. “In a co-intelligent enterprise, leadership does not diminish as AI improves,” the report reads. “It becomes more consequential.”
While the report illustrates hypothetical upsides and doesn’t discuss the downsides of agents run amok, consider how single errors could ripple through entire systems: one agent’s hallucinated inventory figure causing downstream agents to massively overorder stock, or a customer service agent assuring a customer that a problem is resolved when it isn’t and no human is taking the lead. James Crowley, Accenture’s Global Products Industry Practices Chair and a co-author of the report, told Fortune that “we like to say humans in the lead, not in the loop.” If humans aren’t consciously taking the lead, errors can multiply at scale.
The numbers underneath that claim are staggering. Analyzing task-level data across 18 industries using ONET and Bureau of Labor Statistics data, Accenture researchers found that more than 50% of working hours across the American economy are now in play — subject to reshaping by about 60 digital and physical AI agents considered in the study. This is a truly massive data set, corresponding to more than 120 million workers across the 18 industries studied. In banking and capital markets, Wharton and Accenture estimated that the share of hours impacted by digital agents alone exceeds 45%.
A mass redeployment of labor
For a $60 billion company — a real client modeled in the report — the researchers estimated approximately $6 billion in potential annual revenue growth from deploying agentic AI at full maturity, alongside $1.7 billion in annual productivity gains. The catch: by 2028, roughly one-third of those productivity gains showed up not as direct cost savings, but as “capacity freed” — hours that need to be deliberately redirected toward higher-value work, or they simply evaporate.
“Productivity becomes growth only through redeployment,” the report warns. “Unless leaders deliberately redeploy that capacity toward higher-value work, productivity gains stall at efficiency and fail to translate into growth.”
Crowley told Fortune that the failure mode isn’t deploying too many agents — it’s failing to think about them as a coherent workforce rather than a collection of one-off experiments. “Everyone’s building an agent here, an agent there, sometimes thousands,” Crowley said. “What we tried to do is step back and look at what the agentic landscape will look like at an enterprise level.”
That enterprise view is where the accountability problem bites hardest. AI agents are already spreading “rapidly across the enterprise value chain, often ahead of formal strategy and governance,” the report notes, with nearly three-quarters of knowledge workers now using AI — frequently through unsanctioned, bring-your-own tools, a phenomenon sometimes called “shadow AI.” By 2028, roughly a third of enterprise applications are expected to embed agentic capabilities. And yet the report makes clear that governance architecture has not kept pace.
From a tech CEO’s perspective, this report rings true. Andrey Khusid, CEO of Miro, the $17.5 billion productivity startup that made headlines for deciding to leave Russia amid the outbreak of the Ukraine War, recently sat down with Fortune for a chat about the state of things. Miro’s main app is productivity software that dates back more than a decade, and it’s now embedding AI. “For almost 15 years, it was human-to-human collaboration [on Miro],” he agreed. “But then agent-to-AI happened. And now a lot of collaboration happens between humans and agents together.”
By bringing agents onto the platform, Khusid said his company is allowing users to “deliver work in an agentic way.” This is more complex than human-to-human work, he said. “It’s way more powerful and way faster time-to-value. Because before you would need to have a human with this expertise or that expertise … With agents, you can have the whole team working by your side with different expertise.”
Still, it’s extremely important to recognize that agents can be error-prone, just as humans can, and “a lot of this now-agentic delivery is a black box.” Miro is working to unpack that default opacity so that it can correct agents when they go astray. Acknowledging that it looks like “an agentic revolution,” he added, “We’re at the very beginning.”
Bradlow and Crowley conceded that agents can be error-prone, even hallucinatory, and, on a mass scale, that could lead to widespread errors. “Here’s the thing,” Bradlow said, drawing on his years of expertise as a mathematician and data scientist and urging us to understand agents as fundamentally non-human in their decision-making. “Agents are built on the premise of what’s called reinforcement learning, which means good outcomes as programmed by the human who determines the objective function. When agents get bad outcomes, they change their assignment. They change what they do. It’s not as obvious humans learn that same way.” When an agent makes a mistake, he explained, you can tell it what to reinforce, and it shouldn’t make that mistake again. Which makes Khusid’s point about opening the black box all the more important.
Modern Times and the Weakest Link
Bradlow, who chairs the Wharton marketing department, told Fortune that it reminded him of several images from television and film. “This will expose the weakest link in an organization,” he said, recalling the British game show that was one of the most successful in BBC history, where the host eliminated players by saying, coldly: “You are the weakest link. Goodbye.”
He said it also reminded him of famous images of Charlie Chaplin and Lucille Ball, where the comedy legends struggled to keep up with ever-accelerating assembly lines. In the classic episode “Job Switching,” Ball ended up stuffing her mouth with chocolates as they sped by her at relentless speed.
Chaplin’s famous scene in Modern Times was a bit grislier, ending with him getting sucked into the conveyor belt itself. It was also an iconic image that captured the early days of 20th-century capitalism. If one worker in a 20-step process adopts AI and triples their throughput while the next worker is still running on Excel, the bottleneck doesn’t disappear, he said — it just moves. “Efficiency gains happening here but not here,” he said, “will be exacerbated, and you will see it quickly.”
The governance stakes are highest, the report found in one case study, precisely where the revenue opportunity is largest: sales. A function combining massive decision volume, high digital agent suitability, and elevated commercial risk — customer interactions, pricing, commercial judgment — sales is simultaneously the top candidate for early agent deployment and, as the report calls it, “a governance-critical domain where trust, accountability, and human oversight must be deliberately designed.”
That word — deliberately — recurs throughout the 40-page report like a drumbeat. Leaders cannot simply enable agents and wait for value to emerge. They must set explicit P&L targets, build human-led operating models, and assign clear decision rights before agents go live. The report goes so far as to suggest organizations may need a new executive role: a Chief Agentic Resources Officer.
“We spend so much time on the productivity aspect of the story,” Crowley said. “The gains on the revenue side are going to eventually dwarf the gains on the efficiency and productivity side.” Most companies have been very focused on the efficiency and productivity opportunities with advanced AI, he said, adding that he thinks now is the right time to make this an “and” story as the revenue potential is there and could be far larger.
Bradlow agreed that was a major takeaway for him as well, citing remarks he heard at an executive breakfast roundtable that Wharton and Accenture co-hosted at Nvidia’s GTC conference in March. “The gains on the revenue side are going to eventually dwarf the gains on the efficiency and productivity side … it’s corporations, entities, people doing things they just could not do before. And companies launching new types of products they just could not imagine doing before.”
But that growth prize comes with a human price tag. The more intelligence you scale, the more accountable — and irreplaceable — your human leaders become. The agents can reason, execute, and coordinate. What they cannot do is own the outcome. In a badly designed agentic enterprise, one human could suddenly find themselves responsible for an exponential cascade of outcomes they never saw coming. It suggests that the phrase “modern times” means exactly what it did in Chaplin’s time: you have to master the machine, or you could be ground up in its gears.
The post ‘Intelligence may be scalable, but accountability is not’: A new report exposes the hidden cost of the AI agent revolution appeared first on Fortune.




