There are various reports and they all seem to agree: The tech world is currently awash in the concept of agency. It is, more specifically, extremely into the word “agentic,” which peppers the language of the tech-associated, the tech-adjacent, the tech-adjacent-adjacent.
That’s “agentic” as in, you know, having agency — possessing the capacity “to influence and control outcomes through assertive individual action,” as the Oxford English Dictionary has it. The word holds a lot of meaning in computing, but Silicon Valley aspirants seem just as eager to apply it to themselves. They talk about being agentic people; sometimes they dress up the idea in a little rhetorical suit and talk about the Highly Agentic Individual. They are describing the kind of person who simply acts, assertively, to shape the world, rather than seeking approval or meekly following the herd. Candidates for tech jobs get asked if they’re agentic (good) or mimetic (yuck). On X, people debate whether the platform’s owner, Elon Musk, is in fact “the most agentic person alive.” One poster laments the way a cold can ruin your workday: “You won’t make any deals, you won’t be an agentic person. You’re milquetoast.” Another just needs an adequately agentic aide to help schedule medical appointments.
This sense of agency is hundreds of years old: That O.E.D. entry — II.4, “Ability or capacity to exert power” — features citations from the year 1606 onward, concerning things like “the moral Agency of the Supreme Being” versus that of humanity, or the state’s role in preserving the “personal free agency” of its citizens. But you could be forgiven for thinking it feels new, given how much our understanding of it has been shaped by recent thinking in psychology. In that field, agency is the ability to act independently and, by doing so, to feel control over your own direction — steering your fate instead of watching helplessly as life happens to you. (Children, for instance, are said to gradually develop more “agency and autonomy” as they grow.) Readers of things like feminist criticism will have watched a related usage bubble up from academic thought (1988: Unlike depictions of “women as victims of forces beyond their control,” Emma stands as “Austen’s most agentic heroine”) and eventually cross into everyday speech.
Today’s futurist spin is not so vastly different — except that it adds, predictably, the self-regarding will-to-power fantasies that seem endemic to tech culture. The language often echoes familiar dreams of becoming a rule-shattering visionary, a rugged-individualist lion instead of a placid, blue-pilled, normie sheep.
There is an obvious and proximate reason you would find agency coursing through tech right now: The industry is currently adding agency to A.I. The field is graduating from generative models and chatbots to A.I. “agents” — models meant to act on their own, pinging through the digital world making plans, purchases, decisions. People buzz about agentic coding, agentic commerce, agentic dating, a whole agentic internet; anything you do on a computer could be done by a computer. “Very soon there are going to be more A.I. agents than humans making transactions,” claims the chief executive of Coinbase, for whom such an eventuality might work out pretty nicely. This is a moment when every evocation of personal agency seems to sit in the shadow of computers being prodded to demonstrate something very similar.
Most Americans remain more connected with a different meaning of “agent.” We’re used to the agent as representative — someone who acts on behalf of. Talent agents negotiate deals for actors, writers, models. Travel agents book vacation packages for tour groups. Customer-service agents appear, if you’re lucky, after a minute or two of wearily declaiming the word “AGENT” into a speech-recognition phone system.
The word’s etymology contains both strains: the agent as actor, yes, but also as advocate, instrument, emissary. That double meaning is incredibly handy for the tech industry. It can sound as though agentic A.I. models are meant to assist us — even when the people using the word are boasting that their models are just fine acting without us.
It’s exactly this dynamic that seems to preoccupy some of our most obsessively agentic individuals. There’s a common prediction among those hoping to outstrip their ovine, mimetic peers: The A.I. models, they say, will eventually supply all of the effort, training and expertise that have historically stood between humans and our ability to simply make things happen. Once all those annoyances are pushed out of the way, the only thing separating the world’s winners from its losers will be the sheer motivation to act, the raw ambition to do a thing — pure, bold agency. You run across this notion in all sorts of places, sometimes as a frantically held belief and sometimes as a sales pitch. A sufficiently agentic layperson, someone argues on X, could sit down with an A.I. and produce the same work as a Ph.D. student. “In the age of A.I., becoming a super agentic individual is a superpower”; “Being a highly agentic individual is going to be so important with the rise of A.G.I.”; “In the A.I. era, become an ‘agentic individual’ to thrive! Discover how work evolves with A.I. agents. Don’t get left behind!” All that will matter is your own superior drive and will; every other aspect of achievement can be handled, as if by magic, in the guts of some colossal data center.
Nitsuh Abebe is a story editor for the magazine.
The post Worried About A.I. Taking Your Job? That’s Not Very ‘Agentic’ of You. appeared first on New York Times.