Modern manufacturing has been built on structure, standardisation, and predictability. Automation takes care of repetitive tasks. MES platforms manage workflows with precision. But for all their benefits, these systems are often inflexible. They follow rules, not reasoning. They capture process, but not purpose.
Something new is now taking shape on factory floors. AI agents that are independent, context-aware, and task-oriented are functioning as a third layer of intelligence. Not a replacement for what came before, but a layer that complements and elevates it. These agents are not confined to a single screen or workflow. They move between systems, interpret context via semantic data, and solve problems across functional boundaries.
Think of them as collaborators with domain expertise baked in. They do not just respond to commands; they interpret goals from data and instructions. Once briefed, they can navigate data, weigh outcomes, and coordinate actions. The result is not just a smarter tool but a more adaptive factory.
What sets this development apart is the shift from passively reporting problems to actively resolving them. Agents are not there simply to log information or raise alerts. They operate with purpose, looking across functions and acting to resolve disruptions before they grow into bigger issues. This evolution changes not just how manufacturing systems operate, but also how problems are anticipated and managed.
The agent as a co-worker
These systems break with the logic of traditional software. Most enterprise platforms are fixed structures: interfaces on top of databases, bound together by business rules. Agents function differently. They connect to the same data but make decisions based on context. They do not need a user to click a button; they need a problem to solve.
Crucially, they must know what they are talking about. An agent designed for manufacturing cannot rely on generic logic. It must understand engineering terms, operational constraints, and supply chain nuances. That is where domain-specific expertise comes in, combined with data that has been organised semantically.
The power of semantic data becomes obvious in these scenarios. As we have seen in our own factories, linking voltage spikes, supplier delays, and yield drops into a single narrative allows agents to act proactively. They draw connections between departments that rarely speak: an agent can link maintenance data with design documentation or spot recurring defects tied to upstream variables. What once required a team of experts and a meeting room can now be initiated by a well-trained agent.
This is not a theoretical promise. It solves a very real problem: fragmentation. Most manufacturers still operate in silos, whether by system, department, or geography. Information does not flow easily. Insight gets lost. Agents offer a way to rebuild that continuity, not by restructuring the company, but by connecting its knowledge.
And they are not just gathering data. They are acting on it. A scheduling agent, for instance, does more than flag conflicts; it can reshuffle shifts, reassign workers, and communicate updates in real time. The emphasis is on initiative, not just alerts.
As these agents take on more responsibility, their role already resembles that of a digital colleague in live factory settings. In many cases, these are multi-agent systems, especially when responsibilities broaden. They are not just lines of code running in the background. They develop operational memory, adapt to new scenarios, and respond to outcomes. In some cases, they may outperform human counterparts in consistency or speed. But the goal is not competition; it is collaboration. Let humans focus on strategy and judgment. Let agents handle pattern recognition, coordination, and routine interventions.
Human on the loop, by design
Autonomous systems often make headlines. But on the factory floor, the real goal is reliability. And that means keeping humans involved. The most successful agentic systems are those that support rather than replace human expertise. They present options, show their logic, and defer when confidence is low. Operators remain in control, but better informed. The result is more trust and better decisions.
The shift is already visible on factory floors today. Some supervisors now coordinate both agents and people. Engineers use agents to test hypotheses. Maintenance teams work alongside diagnostic agents that explain what they see and why it matters. Organisations are beginning to reflect this change: job roles now include responsibility for agent orchestration, and agents themselves are being assigned tasks, benchmarks, and performance reviews.
That opens the door to better accountability. When an agent flags an issue, the chain of reasoning is visible. When it makes a recommendation, the source data is clear. This visibility is not a nice-to-have; it is essential. In regulated industries, in safety-critical systems, and anywhere decisions matter, trust depends on transparency.
The cultural shift this implies is not insignificant. For some, it may be the first time a non-human entity is treated as a contributor. This raises new questions around training, oversight, and ownership. Who reviews an agent’s performance? Who is responsible when they make a mistake? These are not just legal or technical concerns. They are questions about how we build partnerships with machines that are no longer passive tools but active participants.
From use case to intelligence infrastructure
Much of this begins with narrow tasks. Scheduling. Diagnostics. Regulatory checks. These are ideal proving grounds: constrained, measurable, and with high impact. But the long-term opportunity goes beyond point solutions.
To build real momentum, manufacturers need to think in terms of platforms. Agents should be modular, composable, and easy to deploy. They should not be locked to any single vendor or system. Instead, they should sit on top of a shared infrastructure that supports semantic data, interoperability, and decentralised execution.
The real challenge, of course, is the existing environment. Most plants are a patchwork of legacy systems, vendor-specific formats, and inconsistent standards. Making agents work in that setting requires a new layer of coherence. That is where semantic data models come into play. They allow agents to operate across systems without rewriting everything underneath.
This opens the door to experimentation. A sustainability agent monitors energy use, flags inefficiencies, and suggests optimisations. A quality agent identifies patterns in defect data and correlates them with upstream variables. A supply chain agent monitors risks and adjusts plans before disruption hits.
Each one begins as a use case. But together, the agents start to form an ecosystem, often operating as multi-agent systems. And the more they collaborate, sharing data, insights, and context, the more valuable they become. Success at this stage depends on openness. The agent that improves uptime in one plant should be able to do the same elsewhere. Portability, scalability, and repeatability will define which models survive. Those that are built with siloed logic or black-box reasoning will struggle to gain traction across large enterprises. Interoperability is no longer a bonus; it is the baseline.
Trust must be earned, not assumed
Factories run on precision. When something goes wrong, there are real consequences: downtime, waste, and even safety risks. So trust in digital systems is not based on novelty. It is based on performance. Trust is being earned today by agents that demonstrate accuracy, consistency, and transparency. Their logic is open to inspection, their actions are traceable, and their behaviour is aligned with industry norms, not just technical feasibility.
This is not just about risk. It is also about scale. In practice, cost savings and downtime reductions are already measurable. Early deployments show that a single AI agent can deliver savings of around €1 million per plant annually. A system that works once, in a pilot, proves a point. A system that works every day, under pressure, proves its value. That is the bar for agentic intelligence in manufacturing.
And relevance matters. The best agent is not the most complex; it is the one that understands the task at hand. That means being built with the operator in mind, not just the data scientist. It means solving problems that people recognise. When agents help people do their jobs better, they become more effective. When they do not, they disappear.
Looking ahead, the factories that lead will not be those with the flashiest dashboards or the biggest models. They will be the ones that embed intelligence where it counts, in the workflow, in the decisions, and in the relationships between people and machines.
Factory 2030 is not about removing humans. It is about the reality already unfolding on today’s factory floors: humans supported by accountable, transparent digital colleagues.
The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.