
Matt Pressberg didn’t set out to build a job-killing AI tool. He and his business partner at Hype Lab, a small PR firm, spun up an AI agent they call Maria to help them draft pitches and monitor their inboxes. Their robotic “competent but strategic intern” lets the two-man team punch above their weight. Recently, however, a larger PR firm he occasionally works with approached him about building and deploying a Maria-like product at their company — with explicitly job-cutting intentions.
“That was pretty much, ‘We want to use AI agents to displace employees,'” Pressberg, who lives in Florida, says. What was meant to be an internal tool could now be a “harbinger of doom for a lot of people.”
His conundrum is becoming familiar to an increasing number of workers. As companies race to adopt AI and boost efficiency, people are being asked to build, use, and deploy tools they suspect are meant to replace their coworkers, peers, or even themselves. While the higher-ups say their AI strategy is meant to enhance their workforce, employees often aren’t so sure. The result is a quiet but mounting tension in which doing your job may help eliminate someone else’s. People are becoming inadvertent job executioners.
Pressberg says he still hasn’t decided what he’s going to do about his ethical dilemma. It’s not like he has close working relationships with the people who would be on the way out the door, and he figures he’s not the only person who’s making this type of product.
“You can kind of ride the horse or get trampled by the horse, but I don’t know if you can just sit back and watch the horse race,” he says.
AI is the most fraught workplace issue of the moment. Predictions of AI-driven recessions and massive job losses abound. People are being bombarded by a slew of AI-related layoff announcements from high-profile companies such as Snap, Block, Meta, and Coinbase. Even the economists and tech executives who take an optimistic view of AI’s effects acknowledge that some level of workforce disruption is inevitable. Goldman Sachs economists say AI is already a modest net drag on the labor market, and Morgan Stanley analysts estimate that firms adopting AI have cut headcount by 4%, though that’s accompanied by an 11.5% gain in productivity.
That helps to explain why corporate messaging around AI use has become harsher. Whereas a year or two ago, executives were encouraging employees to experiment with the technology, many businesses are now tracking AI use and figuring it into performance reviews and promotion decisions. The vibe has gone from “tinker around” to “just do it.” Shareholders expect payoffs now, not in some nebulous distant future.
For some, the transition from builder to hangman is, at most, a moral speedbump on the road to untapped efficiencies and major profits. That’s the case for British serial entrepreneur James Buckley-Thorp. He founded his latest project, construction insurance platform Atlian.ai, with the explicit goal of cutting half a dozen steps and people — analysts, surveyors, copywriters — out of the process for producing insurance broker quotes for building projects. Buckley-Thorp, whose previous venture was in life insurance, says a VC was upfront when they asked: “Can you basically reduce the workforce?”
He’s excited about the potential benefits for customers, who he believes will be able to use his platform to obtain and compare insurance quotes in a matter of days instead of weeks. He hopes brokers will pass the savings they’ve gotten by cutting so many people out of the process on to builders, and that timelines for major construction projects will speed up. As for the people whose work he’s attempting to nix, that’s just the name of the game. “I’m developing this with a VC, and community, morality questions are usually an afterthought. That sounds very brutal, but there is so much waste,” he says. “There is a massive workforce behind that that would suffer, but you’ve got to adapt or die.”
There’s some level of comfort that comes with being the one who is swinging the ax. You know you have your grip on the handle and are making the choice. What can be more disquieting is watching the ax swing and wondering whether you accidentally helped forge the blade. I recently caught up with a friend who’s been pretty all in on AI at their startup job, until the asks started to feel specifically geared toward wiping out the teams they work with closely. They’ve started to wonder if it’s time to look for another job. I heard from someone while reporting this story about a sort of workplace survivor’s guilt — only after two of their colleagues were laid off did they realize that all the AI adoption they were helping their bosses with was a factor. “In hindsight, it’s pretty evident that it was a major factor for the decision, especially because incorporating it early and aggressively made me sort of the AI expert on the team,” they told me.
These dynamics aren’t necessarily new at work. Middle managers have long been given the ugly job of deciding who goes amid layoffs, and it’s not unheard of for employees to be given a mandate to make things more efficient so that fewer people are needed. The difference with AI is how this responsibility is being dispersed to a broader set of workers. While a startup founder or executive may see an enormous money-making opportunity in leveraging AI, rank-and-file employees are weighing whether their AI tinkering is protecting their job or sowing the seeds of a larger layoff. The reward for going along isn’t riches; it’s staying off the unemployment rolls.
There’s a cognitive dissonance that arises when people are compelled to be self-protective in a way that feels corrosive, explains Constance Noonan Hadley, an organizational psychologist who founded the Institute for Life at Work, a think tank. That can be at least partly mitigated by ensuring the people doing the dirty work can see and understand why changes are necessary — the company is pivoting in an exciting new direction with a lot of buy-in, or they’ve sunsetted a project that’s long been obviously defunct. What’s tricky with AI is that there’s a lot of uncertainty about what the gains will actually be. Does it make sense to wipe out the entire design team only to discover that ChatGPT images are not, in fact, good enough to win over customers? There’s also the question of whether potential gains are worth it on balance.
“Are we getting to the point where everyone wants to have a five-person company so that there is massive unemployment for the sake of increasing and increasing profitability?” Hadley says.
Much of the solution to this is about communication — execs making sure employees understand the AI roadmap and the decisions being made around it. More senior employees, who have more visibility, are likelier to feel better about what’s going on and their own complicity than junior employees who are more in the dark. A recent Gallup poll found that 67% of executive leaders are frequent AI users, compared to 46% of individual contributors. A 2025 survey from Columbia Business School found that 76% of executives reported that their employees were enthusiastic about AI adoption at their organizations, but only 31% of individual contributors actually felt good about AI.
It’s also important for firms to lay out how to actually use AI, and what is useful vs. superficial. Research from the Stanford Social Media Lab and BetterUp, a professional training and coaching company, found that 40% of American deskworkers believe they’ve received “workslop” from a colleague, meaning stuff thoughtlessly shot out by AI. Kate Niederhoffer, a social psychologist and the chief scientist at BetterUp, says that just cranking out any old email with AI is “a very compelling path of least resistance” when you’re expected to use the tools or “trying to do more with less” and everything is treated as urgent. But sending around workslop makes people more annoyed with and confused by one another. And when you participate, your coworkers see you as less creative, capable, reliable, and smart.
Amy Gallo, a workplace consultant, tells me she sees a sort of “sinking suspicion” around AI among her clients, an attitude of “yes, this is really helpful, I’m glad I’m using it… and this is really concerning.” People question whether they should build and use AI tools as effectively as possible to try to protect themselves, which is a hard issue to navigate, because “you don’t want to tell someone to be intentionally bad at their job,” she says.
As AI becomes an intermediary in our relationships, we don’t form more typical interpersonal connections with our managers and colleagues, making everything feel more distant at a moment when we should have more honest conversations about how things are going. Workers have long been asked to balance the demands of customers, consumers, coworkers, and shareholders. Everyone has to make choices about how comfortable they feel about the various tradeoffs. “AI has made it more immediate, which is like, ‘I could build this tool today, and 20 people could lose their jobs tomorrow,'” Gallo says.
It’s not just a threat to a paycheck; it’s a threat to a career path and long-term corporate success. Many of the tasks that more senior employees are using AI to take care of are the basic blocking and tackling that junior employees have historically handled.
“A lot of even white-collar professional service industries deploy entry-level and even midlevel employees as order-takers, and AI can take orders,” Pressberg says.
The problem is, if the strategizers never teach up-and-comers the tactics and instead automate them away, the bottom of the ladder gets cut off. There’s no new crop of strategizers 10, 20 years down the line who have developed the judgment and creativity that comes with repetitively doing something. This leaves a leadership vacuum at the top of the company. The veterans may have managed to hold onto their jobs, but they’re headed for retirement and leaving a hollowed-out structure behind.
AI is an exciting tool for many workers and leaders, and it has a lot of realized and unrealized potential. Even for the skeptics, sitting it out completely probably isn’t an option. At the same time, diving in isn’t a surefire way to protect yourself — many of the workers let go by Citi in its recent round of layoffs were reportedly part of its program for “AI Champions and Accelerators.”
Emily Stewart is a senior correspondent at Business Insider, writing about business and the economy.