Nitin Ware is a lead engineer on Salesforce’s AI platform.
Most people have experienced the maddening grind of trying to fix a billing mistake, appeal an insurance decision or schedule a medical appointment, only to be pulled into a maze of forms, portals and hold music. None of these tasks require special knowledge, yet they drain hours of time and energy. Modern life often demands administrative endurance, not expertise.
Despite all the hype around artificial intelligence over the past year, the technology has not reduced these burdens. The chatbots that went viral could rewrite an email or summarize an insurance policy, but they left the hard part to us. They talked, but they did not act.
That is beginning to change. A new kind of AI is emerging, often called agentic AI, and its defining feature is that it can take action. Instead of advising us on what to do, it can simply do it.
Ask a generative AI to help plan a trip, and it might offer a list of cities to visit. Ask an agentic system, and it could compare fares, book flights, choose seats, redeem loyalty points, reserve hotels and alert you if prices drop. That is not a subtle difference. It marks a shift from suggestion to completion, from assistance to agency.
This shift is happening because new AI systems can remember context from earlier interactions, respond to changes along the way and carry out steps on their own. They can click through buttons, fill out fields, extract information from documents and adjust when something goes wrong. These abilities open the door to automation that feels less like computation and more like having a personal assistant.
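In engineering terms, these systems typically pair a language model with a set of tools it may invoke, inside a loop that observes each result and decides the next step. Here is a minimal sketch of that loop in Python, with hypothetical tool names and a placeholder model call; a real system would wire these to an actual model and real services:

```python
# A minimal agent loop: the model proposes an action, the runtime executes
# it, and the result is fed back in until the task is done. Tool names and
# plan_next_step() are illustrative placeholders, not an existing API.

def plan_next_step(history: list) -> dict:
    """Placeholder for the model call: a real system would send `history`
    to a language model and parse the action it proposes."""
    return {"action": "finish", "args": {}}

TOOLS = {
    "search_flights": lambda args: {"fares": []},            # stub tool
    "book_flight":    lambda args: {"confirmation": "n/a"},  # stub tool
}

def run_agent(goal: str, max_steps: int = 10) -> list:
    history = [{"role": "user", "content": goal}]  # remembered context
    for _ in range(max_steps):
        step = plan_next_step(history)             # model picks the next action
        if step["action"] == "finish":
            break                                  # task complete
        tool = TOOLS[step["action"]]
        try:
            result = tool(step["args"])            # carry out the step
        except Exception as err:
            result = {"error": str(err)}           # adjust when something goes wrong
        history.append({"role": "tool", "name": step["action"], "result": result})
    return history
```

The loop, not the model alone, is what turns suggestion into completion: each pass through it is a step taken in the user’s name.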
Major companies are already moving in this direction. Microsoft has rolled out Copilot tools that can perform multistep office tasks across email, calendars and documents. Google has tested travel-planning systems that book trips. Software companies like Salesforce have announced pilot programs aimed at resolving customer cases rather than drafting replies. Amazon uses automated decision systems to adjust routing and inventory flows in its logistics operations.
These efforts are early and imperfect, but they point to the shift underway. For ordinary people, the impact could be even greater. Financial platforms are testing systems that negotiate fees or compare utility rates automatically. Health care providers are exploring tools that help patients book appointments and manage insurance. Schools and local agencies are experimenting with digital assistants that guide families through enrollment and benefits applications. These chores fall hardest on those with the least time, the least support and the fewest resources.
But the rise of agentic AI raises a new set of concerns. What happens if an AI makes a decision in your name and gets it wrong? Who is accountable if it enrolls you in a service you never intended? What if it misses a deadline that affects benefits, credit or medical care? And what if companies or government agencies begin designing systems that assume everyone has an AI assistant?
These risks are not hypothetical. They are mundane. Rather than some feared runaway superintelligence, the danger with agentic AI is silent error: unseen obligations, missed notices, automatic decisions that shape finances, health care and legal standing. Mistakes made on our behalf may be harder to detect and reverse than mistakes we make ourselves.
Rejecting this technology outright would mean forfeiting real benefits. But those benefits won’t arrive safely without rules and guardrails. These five can help:
First, any system that can act in someone’s name should be required to generate an auditable record of what it did, when it did it and why. If an AI books travel, negotiates a fee or submits a form, that decision trail should be reviewable by the user and, when necessary, by regulators or courts.
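What would that decision trail look like in practice? One plausible shape, sketched here with hypothetical field names, is an append-only log in which every action records the what, the when and the why:

```python
# Sketch of an append-only audit trail for agent actions. Field names are
# illustrative assumptions; the point is that every action is recorded
# with its rationale and is never edited after the fact.
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditRecord:
    actor: str         # which agent acted
    on_behalf_of: str  # which user delegated the authority
    action: str        # what it did, e.g. "book_flight"
    rationale: str     # the agent's stated reason ("why")
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def append_record(log_path: str, record: AuditRecord) -> None:
    # Append-only: records accumulate and are never modified, so the
    # trail stays reviewable by users, regulators or courts.
    with open(log_path, "a") as log:
        log.write(json.dumps(asdict(record)) + "\n")
```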
Second, consent must become dynamic rather than contractual. Granting agentic authority should work more like smartphone permissions than legal fine print. Users should explicitly enable and revoke categories of power: spending money, contacting others, signing documents, sharing data or initiating services. Those permissions should be visible, time-limited and reversible.
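A minimal sketch of what smartphone-style permissions could look like for an agent, assuming illustrative category names; the essential properties are that each grant is explicit, expires on its own and can be withdrawn at any time:

```python
# Sketch of dynamic, revocable consent for agent actions. Categories and
# time limits are assumptions for illustration, not an existing API.
from datetime import datetime, timedelta, timezone

class PermissionGrant:
    def __init__(self, category: str, ttl_hours: float):
        self.category = category  # e.g. "spend_money", "contact_others"
        self.expires = datetime.now(timezone.utc) + timedelta(hours=ttl_hours)
        self.revoked = False

    def revoke(self) -> None:
        self.revoked = True       # the user can withdraw consent at any time

    def allows(self) -> bool:
        return not self.revoked and datetime.now(timezone.utc) < self.expires

grants = {
    "spend_money":    PermissionGrant("spend_money", ttl_hours=24),
    "contact_others": PermissionGrant("contact_others", ttl_hours=1),
}

def check_permission(category: str) -> None:
    # Called before every action; absence of a live grant blocks the agent.
    grant = grants.get(category)
    if grant is None or not grant.allows():
        raise PermissionError(f"No active grant for '{category}'")
```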
Third, safeguards must be mandatory for those most at risk of harm. Elderly users, people with disabilities and those with limited digital literacy should never be exposed to irreversible actions by default. That means mandatory human-in-the-loop modes, cooling-off periods for financial commitments and automatic caps on spending.
Fourth, some decisions should remain legally nondelegable. Medical diagnoses, legal judgments, binding financial commitments above defined thresholds and any action that permanently alters identity, rights or safety should always require human verification. These limits should be encoded in law, not left to product design choices.
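The third and fourth guardrails share a mechanical core: a gate that every proposed action must pass before it executes. A minimal sketch, with illustrative thresholds and category names chosen only for demonstration:

```python
# Sketch of a pre-execution gate enforcing nondelegable categories,
# spending caps and human-in-the-loop defaults. All names and numbers
# here are assumptions, not legal or product recommendations.

NONDELEGABLE = {"medical_diagnosis", "legal_judgment", "identity_change"}
SPENDING_CAP = 200.00  # illustrative per-action cap, in dollars

def gate(action: str, amount: float = 0.0, protected_user: bool = False) -> str:
    """Return how the runtime should treat a proposed action."""
    if action in NONDELEGABLE:
        return "require_human"   # never executed autonomously
    if amount > SPENDING_CAP:
        return "cooling_off"     # held for review before it binds
    if protected_user:
        return "require_human"   # human-in-the-loop by default
    return "execute"

assert gate("book_flight", amount=120.0) == "execute"
assert gate("medical_diagnosis") == "require_human"
assert gate("sign_lease", amount=1500.0) == "cooling_off"
```

The design point is that the gate runs outside the model: the agent can propose whatever it likes, but certain outcomes are simply unreachable without a human.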
Fifth, liability must follow delegation. If a company deploys an AI that is authorized to act, it must remain legally responsible for the harm that flows from those actions. Without clear accountability, automation becomes a shield rather than a service.
Agentic AI could make daily life more manageable and humane. Or it could normalize invisible delegation of decisions people barely understand. The outcome will depend not on how fast the technology advances, but on how deliberately we decide what it is allowed to do in our name.