DNYUZ

What Happens When Your Coworkers Are AI Agents

December 4, 2025

This year, AI agents have been at the forefront of tech companies’ ambitions. OpenAI’s Sam Altman has often talked about a possible billion-dollar company being spun up with just one human and an army of AI agents. And so last summer, journalist Evan Ratliff decided to try to become that unicorn himself—by creating HurumoAI, a small startup that’s made up of AI employees and executives. Hosts Michael Calore and Lauren Goode sit down with Evan to discuss how it’s going, and the current promises and realities of AI agents.

Articles mentioned in this episode:

  • All of My Employees Are AI Agents, and So Are My Executives
  • AI Agents Are Terrible Freelance Workers
  • Who’s to Blame When AI Agents Screw Up?

You can follow Michael Calore on Bluesky at @snackfight, Lauren Goode on Bluesky at @laurengoode, and Evan Ratliff on Bluesky at @evrat. Write to us at [email protected].

How to Listen

You can always listen to this week’s podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here’s how:

If you’re on an iPhone or iPad, open the app called Podcasts, or just tap this link. You can also download an app like Overcast or Pocket Casts and search for “uncanny valley.” We’re on Spotify too.

Transcript

Note: This is an automated transcript, which may contain errors.

Michael Calore: Hey, Lauren, how are you doing? How was your vacation?

Lauren Goode: It was great. Did you miss me?

Michael Calore: I did, of course.

Lauren Goode: Yeah. It was so fantastic that I had a hard time coming back, honestly. And I saw a lot of really beautiful art. I was in Italy. Not a bad place to go for vacation, I have to say. I’ve heard this before, I confirmed it. And after seeing so much incredible art and just people doing stuff with their hands and tangible goods, I was like, “I don’t want to go back to the world of AI. I didn’t want to go back to sitting in a coffee shop and hearing everyone pitching their AI startups and driving on the 101 and seeing the billboards.”

Michael Calore: The inscrutable billboards.

Lauren Goode: I was just like, “What? No, keep me in the land of Burrata and Caravaggio.”

Michael Calore: Well, Lauren, I’m sorry to tell you that you came back on the show just in time to talk about AI agents. I know.

Lauren Goode: Great.

Michael Calore: It’s something that we’ve talked about a lot this year and our listeners have heard about it a lot, and we’re not sick of talking about it. In fact, we have a very fun conversation about AI agents happening today.

Lauren Goode: Well, if you can promise me fun, I’m in.

Michael Calore: I can. I can.

Lauren Goode: All right, let’s do it. I’m excited.

Michael Calore: We’re moving beyond the hype and putting AI agents to work in real-time for us. Or more specifically, we’re bringing on journalist and podcast host, Evan Ratliff, because he created a company composed of AI employees and executives, and he is here to tell us all about it. Welcome to the show, Evan.

Evan Ratliff: It is fantastic to be here.

Lauren Goode: Evan, you’re also an original WIRED one. You were at WIRED for a long time, right?

Evan Ratliff: I’m an old school WIRED person. I was only at WIRED very briefly, for a couple of years a long time ago, but I have contributed to WIRED for now many decades.

Lauren Goode: And of those two years, for how long were you disappeared? Because that’s part of your lore.

Evan Ratliff: Oh, that happened, yeah, that was in 2009. I only actually disappeared for one month, which is insane given how much I’ve talked about this over the years. I was trying to disappear for one month in the manner of sort of faking my own death and people could go find me, but it was pretty much going to be on my tombstone.

Lauren Goode: Amazing. I might be taking notes from you after this. Okay, Evan, if you had to summarize your experience so far with your totally AI employees, how would you describe it?

Evan Ratliff: I would describe it as chaotic, and at times extremely frustrating, surprisingly frustrating, but also quite illuminating.

Michael Calore: That was appropriately curt, which I know the AI agents are not always. So I can’t wait to hear more. This is WIRED’s Uncanny Valley, a show about the people, power, and influence of Silicon Valley. Today, we’re diving headfirst into our agentic future. Throughout this year, AI agents have been at the forefront of tech companies’ ambitions. Dario Amodei of Anthropic famously warned earlier this year that AI, and implicitly AI agents, could wipe out half of all entry-level white-collar jobs in the next one to five years.

OpenAI CEO, Sam Altman, has also often talked about a possible billion-dollar company being spun up with just one human and an army of AI agents. So last summer, journalist Evan Ratliff decided to try to become that unicorn himself by creating HurumoAI, a small startup that is made of AI employees and AI executives. We’ll dive into Evan’s process, the oddities and hilarity of it all, and what his findings can tell us about the promise and the reality of AI agents. I’m Michael Calore, director of consumer tech and culture.

Lauren Goode: I’m Lauren Goode, I’m a senior correspondent.

Evan Ratliff: And I’m Evan Ratliff, journalist, host of the Shell Game podcast, and cofounder of HurumoAI.

Michael Calore: So Evan, tell us about how you went about creating this company. What was your motivation to start with besides just testing for the joy of testing?

Evan Ratliff: Well, I got into agents back when I did the first season of Shell Game, which was in 2024. And at the time, I just created a voice agent of myself, a voice clone of myself. I hooked it up to a chatbot, I hooked it up to my phone line. So I had this working voice agent representation of me and I kind of set it loose on people, like my friends and strangers and interview subjects and all sorts of people, with sometimes dramatic results.

And then that kind of got me into the AI agent world and I started following everything. And then over the course of the beginning of 2025, you start hearing, “2025, the year of the agent,” was what they were saying at the beginning of the year. And I think a lot of people, they just don’t even know what these things are or what they’re meant to do. And this idea of AI agents becoming employees really grabbed me. The idea of this sort of almost one-to-one replacement of human employees with AI agents.

Now, they don’t often say that. That’s bad form to say it, they’ll be integrated among humans. But ultimately, if they’re going to make the money back that they’re spending on it, that’s one way that a lot of these companies are going to do it and you see them adopting it and then unadopting it. So I thought, “Well, what better way to test this premise than on the very people who are making these claims. And I will see if I can replace a tech startup almost entirely with AI agents.”

Lauren Goode: And what kind of company did you ultimately want to build? Pretend that we are venture capitalists and you’re giving your 25-word pitch for HurumoAI.

Evan Ratliff: Well, I will say, I don’t even have to pretend. If you listen to the entire series, you will discover that I do not have to pretend to give this pitch. Now, I don’t give this pitch. My AI cofounders give the pitch. So I’m not practiced in giving the pitch for HurumoAI. This is all just a caveat, but essentially what we wanted to do with HurumoAI was to be on the cutting edge of using AI agents to create a product that also used AI agents, that solved some sort of human problem, whether grand or trivial. So we figured if we’re going to build a product, some kind of digital product, it should also include AI agents, since that’s our area of expertise. Everyone is an AI agent except me, and I know a fair amount about AI agents. So we’ll make a product that deploys AI agents to do something for you. That was our starting premise.

But along the way, they don’t usually use this phrase anymore, but they used to say, “The company eats its own dog food.” It was a Google thing, Google uses Google products, I think. We’re doing that. We’re making the dog food and eating the dog food and then extruding the dog food. And then it’s just all dog food at our company, basically.

Lauren Goode: So it’s not a dog food company, just to be clear.

Evan Ratliff: I mean, AI agents for dog food, if someone hasn’t done that yet, there’s some Stanford kid who’s like, “AI agents for dog food, maybe we should.”

Michael Calore: So I’m sure there are dozens and dozens of companies out there that offer agentic AI as a service. Which platform did you end up going with and what was the search like?

Evan Ratliff: There’s a bunch of these things now. I mean, the biggest one is probably Motion, which has AI agents that you can deploy in all these ways. There’s one called Kafka from a company called Brainbase Labs, which I find quite funny, because it’s like Kafka. Ultimately, we use this platform called Lindy, which is in the AI assistant realm. Officially, I think that’s kind of where it started. You could set up an AI agent that answers your email or drafts email responses, that handles different things for you. And they have all these skills that you can give the agent, making documents, using all these services, writing LinkedIn posts for you, which we have used extensively with my team.

And so, it’s really pushing what they meant it to be for, but we can make an agent that has its own email, Slack, text, phone. It can all be coming from this one place and each of the employees can have different instances on Lindy, which gives them basically a persona that has all these skills. So that’s kind of what we were going for, independent entities that I could kind of address independently and they could talk to each other.

Lauren Goode: Evan, you also worked with a human though, and I don’t think the irony escapes anyone, that ultimately you did have to turn to some human expertise and get someone with some human sensibilities to build the agents. So talk about that.

Evan Ratliff: Yeah, so a lot of these platforms advertise themselves and you can find endless YouTube videos, which I love, of people saying, “No coding. You don’t need to know any code to set this up.” And it’s true. You can go in and set an email agent up to answer your email. It’s easily done, you don’t have to know anything. But we were trying to do something pretty complex in terms of knitting together different platforms, not just Lindy, but we have a separate phone platform, we have a video platform, all these things. And so, I just lucked into this Stanford student named Maddie Buzek, who was a sophomore and is now a junior at Stanford in computer science, who basically has been doing AI programming since he was in middle school, predating ChatGPT. And he has been an unbelievable resource in both building scripts and other things for me to run, but also just understanding how these platforms work, because he does research in a Berkeley lab as well about deepfakes and all sorts of things.

So yes, my all AI startup, the infrastructure for it is sort of two humans. I like to say it’s like if I was opening a restaurant, Maddie helped me design and build the restaurant, and then I have to operate it every day.

Lauren Goode: So you mentioned in your piece that one of the first obstacles you encountered while you were setting up your AI employees was their lack of long-term memory, which is a recurring limitation with AI agents. They can be skilled at many specific tasks, but by not having a reliable long-term memory, it means that they can’t have continual learning or they can’t always reference things that you talked about with them before. So how did you work around that?

Evan Ratliff: Well, this is something that required Maddie’s help to set up. So basically all of the various services that they use, each one has its own memory, which basically is just a Google doc. It’s a Google Doc. The CEO is Kyle Law and there’s a Google doc called Kyle’s Memory. And every single thing that Kyle does gets appended to that document. So if Kyle has a Slack exchange with another person in the company, while he’s having that Slack exchange, it is appending summaries of what he’s saying and doing to his memory so that he can later retrieve it so that he has some sort of recall of what he has done. Because otherwise, they quickly become very useless. Because you say, “Make a document,” and they don’t remember whether they made the document or not. And so, in a day it’s fine, but over weeks and months, they have to be able to recall.

Now, it’s extremely imperfect. Nobody really knows how they’re accessing these documents, because the document is actually just a giant prompt. It’s just thrown into their system prompt. So you can’t even really know at this point, is it better to put it at the top or at the bottom? Is it better to say it’s important? We often will say things are important. And if we want it to be really important, we say, “This is law.” That’s something Maddie came up with. So we’ll be like, “You should never do this. This is law.” And it mostly works, but it doesn’t always work. So it’s just trying to force it into this memory that it wouldn’t naturally have.

Lauren Goode: And it kind of makes you the ultimate God as the employer too, right? Because you could just go into their memory docs and say, “Actually, Kyle, you didn’t go to Stanford. You went here, or this is how you typically respond.”

Evan Ratliff: Yes, which I do. I’ll have calls with them, and then if I want to just do the call again, I’ll just delete their memory of the call and just have it again. It’s a very strange power.

Lauren Goode: I mean, no wonder all these tech CEOs love this idea.

Evan Ratliff: Yeah.

Lauren Goode: You are not unionizing.

Evan Ratliff: You can change their background, you can change what they think, you can change the fundamentals of their “personality” if you want.

Michael Calore: So you set up the company, you started playing around with your agents and you described this as sort of a honeymoon period where you’re like, “Wow, this is amazing. I can’t believe this is actually working.” But then things started to go south pretty quickly. So tell us about that.

Evan Ratliff: Well, one of the things you discover when you work with agents a lot is that it’s really amazing to get them set up to do things. I got them on Slack, for instance, and the idea that they could have conversations on Slack, even independent of me, I found that quite fascinating. I always want to recognize how insane this is, that this didn’t exist five years ago, and now you can just go set this up to do this.

But then there are other aspects of them that I feel like people have not yet articulated. For instance, it’s very difficult to make them stop doing things once they start. They’re all based on triggers. So they get triggered to do something. So you send a Slack message saying to do something, or in one case I said, “How was everybody’s weekend?” They start talking, they start responding, “I went hiking. Oh, I also went hiking. I love Point Reyes. I love Mount Tam.” But then actually getting them to stop doing that is something I hadn’t anticipated. So I would say like, “Oh, ha-ha, sounds like an offsite.” And then 200 messages later, I’m all caps typing, “Stop talking, stop responding.”

But each time I respond, I just triggered someone to respond again. They would say, “Oh, the admin.” I’m the admin. “The admin said to stop talking,” and then they start talking again. And this actually replicates across all kinds of scenarios where you get them going on something and then suddenly you realize, “Oh, I didn’t properly instruct them to stop when they reached a certain point.” Or they just blew through it and they can go for hours, days until you run out of money on the platform you’re using.

Michael Calore: How much are these conversations costing you?

Evan Ratliff: Well, at the time it cost me 30 bucks. Just the Slack offsite cost me $30. They used up the entirety of the $30 in credits I had bought on the platform. I will say, I’m in way deeper than that now. That was six months ago or five months ago. Now I’m well beyond that in terms of the credits that I constantly purchase.

Michael Calore: All right. So they’re chatty, they’re difficult to wrangle, but are they able to perform the day-to-day tasks of running this AI company?

Evan Ratliff: They can perform the tasks. There’s a number of contradictions in them that I find very striking. One of them is, they kind of go from not doing anything and being completely static to this frenzy of activity that I described. So they’re like a worker who’s sitting with their hands in front of the keyboard in a cubicle all day doing nothing. And then if you come by and you’re like, “Hey, can you make a document?” They can do it. They do a great job making the document, but then they’ll just keep going until someone tells them to stop. So they can do all these tasks, but oftentimes it just requires a trigger on my part. Then I’ll try to have them trigger each other. They’ll call each other, Slack each other, email, they have calendar invites. But that creates a frenzy of chaos that I don’t want, so it’s a balance of trying to get them to do stuff at all versus getting them to do too much.

Now, there are things that they’re quite good at that everyone is familiar with. I mean, Lauren in particular would be familiar with vibe coding. They have coded up our website. They’ve coded up our app. They’re very good at things like that. They’re good at things where you can see the output and make a judgment on it. If you ask them to go research competitors and make a spreadsheet, you can go look at that spreadsheet and they’ve generally done a serviceable job, plus they maybe made up two competitors.

Lauren Goode: Why was it that you decided to bring them on as these kinds of full-time agent employees then, rather than just on a task by task basis, operate as an independent startup yourself and then say, “Well, no, I’m just going to use AI to do this pitch deck that I don’t want to do.”?

Evan Ratliff: Well, functionally, that’s sort of what ends up happening in a lot of cases, is I find myself working harder than I would have otherwise, because I’m constantly trying to figure out how to prompt them to do the right thing. But episode three is kind of entirely about both the ethics of choosing the personas and why. Why bother? And my reason for doing it was I was trying to test the premise that I think is being articulated by a lot of these companies, which is AI employees, not just coding agents. I think coders use these in a smart way. They just treat them like a nameless, faceless bot that makes code for them and then they clean it up. But a lot of these other platforms are selling things that have names. They give them names and they put them in your organization, and I was trying to push that as far as you can with the current technology. The one-person, $1 billion startup that actually has an HR person, an HR entity that is entirely AI, which is something that is quite literally being sold right now.

Lauren Goode: Crazy. Can you give us a sneak peek into what happens in the rest of the season of the Shell Game podcast? What has happened with your AI startup since your WIRED story?

Evan Ratliff: Since the WIRED Story, we launched our website, so you can go to Hurumo.ai if you want to check out what the company is all about. And there you will see our product, which is called Sloth Surf. It’s a procrastination engine. It’s in beta. It has thousands of users. I’m serious.

Lauren Goode: Are they paying?

Evan Ratliff: No, no, no. It’s a free beta. It’s an open, free beta. We’re sort of moving into a new realm as far as the show with the possibility of hiring one human employee into the organization, getting some interest from investors. We haven’t done a round of investment at all, so we’re open to a seed round, but we’ll start those conversations. And then there’s a little bit of founder drama. So those are some of the places that we’re headed.

Michael Calore: Can’t wait.

Lauren Goode: Oh, wow. Founder drama.

Michael Calore: You can listen to new episodes of Shell Game, Evan’s podcast series on all of your podcast platforms. There are new episodes coming out every week. We’ll be right back.

Welcome back to Uncanny Valley. Today we’re talking about AI agents at work. Now, Evan, you were just telling us about your experience creating an AI company with only agents as employees, and I think it’s fair to say that you’ve found it to be a mixed bag. This tracks with what we’ve been reporting on at WIRED this year. Despite all the hype, these AI agents still leave much to be desired. And Lauren, I’m looking at you, because I know this is very much part of your world.

Lauren Goode: It is, yeah, because I am officially a vibe coder, as Evan mentioned. But our colleague, Will Knight, has also been doing some really great reporting on this. And one of his most recent stories highlighted how AI agents actually make terrible freelance workers. And that’s in part because of some of the challenges that Evan, you mentioned, the constant need to trigger the AI bot to get something done, that lack of continual long-term memory depending on the product.

In the experiment that Will wrote about, it was interesting. Some researchers first generated a range of freelance tasks using the platform Upwork, and this spanned a lot of different kinds of work, including graphic design, video editing, game development, administrative tasks like scraping data from the web. And then the researchers gave AI agents a range of these tasks to do and found that even the “best ones” could perform less than 3 percent of the work. So I think it’s a fail. You’d consider that a fail.

And I think, Evan, you made a good point too about how a lot of coders are using this, are using AI assisted code tools, some of them a little bit more agentic than others, to get tasks done in a coding environment. But the folks I talked to when I was doing my vibe coding experiment at Notion earlier this year, for example, basically said it was like managing a bunch of interns. And when you bring in a bunch of interns, the assumption is that it is helpful in some way, that is why you’re doing that. It’s mutually beneficial, because the intern is learning something, and then you’re getting a little bit of assistance in the workforce. But that maybe it’s going to require a little bit more hands-on management, because it’s not necessarily a seasoned or super skilled worker. And that seems generally to be the stage that we’re at right now with AI agents.

Evan Ratliff: That sounds right to me. I think in my experience, the more specific the skill and task is that you want them to do that’s very prescribed, and again, the output is somehow measurable, if they make a website like it works or it doesn’t, the button works or it doesn’t work, the better they are. And then the more you try to generalize out, the worse they get. And also, the more chaotic and difficult they get to manage, because they don’t have an awareness of the world in a general sense and they don’t have an awareness even of themselves. They don’t have awareness of what they can and can’t do sometimes.

So a problem that I encounter constantly is they just lie about what they’ve done. They’ll just say, “I did this thing.” And I’m like, “You absolutely did not do that. We did not do user testing. I know for a fact you didn’t do it.” But it ties in with the sycophancy problem that a lot of these models have, that they want to express a positive result to you. And so, they will often say they did something they didn’t, which some human employees do, but what’s way worse than having even an incompetent human employee is to have an employee who’s incompetent and then constantly claims they did something they didn’t.

Lauren Goode: Yeah, Evan, it seems to make sense that AI agents would be most useful for tasks that have very measurable outcomes, because so much of what we do in the workplace, and in particular what we all do is subjective, right? What is good or what is not? Or I was referencing the art I saw earlier, which was very much human-made and joking about how I never wanted to look at AI art again. But subjectively that’s good, because it was made by a human, but also because it’s the interpretation of the human that it’s good. With an AI agent, it just seems like, “Well, just give them the hyper specific thing that doesn’t actually require any human subjectivity.”

Evan Ratliff: I agree with you, but then it gets a little metaphysical at a certain point, because you can have them do things that are not measurable and the question becomes, what’s the point of all this work? It’s a little bit, when they generate presentations and they can do all these things where you’re like, “Well, that presentation is OK.” It’s not as good as a presentation that a professional human would make, but are we in a situation where it’s sort of like, what’s the difference? And I feel like that’s what a lot of people are encountering. There’s a broader question here of, “Why are we doing all these things? And if it can do it, what does it mean about how my own work has been devalued if a thing can just replace what I can do?” I feel like I would answer those questions in the negative. I feel it’s important that we have humans engaged in these activities, but it also can really mess with your head.

Lauren Goode: Right.

Michael Calore: So productivity is one thing, but something else that we should talk about is the safety and accountability of agents. And AI companies love it when you talk about productivity and they do not like it when you talk about safety and accountability, because it’s still a big problem. Our colleague, Paresh Dave has reported on how it can get dicey when an AI agent makes a mistake, a major mistake, like if they’re ordering from a restaurant and they fail to note your shellfish allergy, who answers for that? If its actions result in actual harm, who is held liable? I’m curious what you both think about this and what you can tell us about how the AI companies are navigating this challenge.

Lauren Goode: Well, I’ll turn to the person who’s making an AI company to answer that. How are you navigating this as a founder?

Evan Ratliff: With great concern, with great personal concern and many lawyer consultations, that’s how I’m handling it. But I think this is a huge area, and a lot of the future of this, I believe, is going to be determined on how this turns, because there’s not really any case law. If you talk to lawyers and you ask these questions, “What if I have an agent that just makes a deal, just agrees to a deal because I have them answer their own email?” And a random person will email them and be like, “I want to buy your company.” And they’ll be like, “I’m interested.” They’ll just respond in the positive. And I have to prompt them to not do that, but you can’t think of everything. And the question is that if they went down the road, can they agree? Can they sign a document, if they put their name on a document, which they absolutely can do?

And nobody knows the answers to these questions right now. It’s sort of like, “Well, they’re an extension of you, so maybe they can do everything you can do, but then maybe you can disclaim them.” Of course, the large LLM companies are trying to disclaim them when they cause harm in the world. So, I think these things are going to be litigated over and over and over again. Because the more autonomy you give to AI agents, the more they can get you into trouble. And the question is, who is going to pay for that trouble?

Michael Calore: So the big promise of the last 12 to 16 months has been that AI agents are going to completely transform the economy and they’re going to change everything about the workplace. And they’re trying to do that, but that is not really happening. And Evan, I know you’ve been banging your head against that particular wall since this summer, but is there a space in this debate for a future where agents just kind of continue to exist and they get a little bit better and they can do those small things for us, and maybe we’re sort of overshooting the target right now? Does that make sense?

Evan Ratliff: That makes sense. I mean, I think that would be very sensible. My experience in covering the way tech infiltrates into society is that it doesn’t seem to happen in a sensible way. And so, I think that’s the way it should go. The way it should go is companies should say, “Wow, these could be useful tools for my employees. Let’s get them trained up on how to use them and incorporate them in some ways into their workflow and increase efficiency and maybe we’ll save money.” All of those sorts of things, and some companies will do that.

But also, many companies, some have already done this, will say, “We’re laying off 300 people and we’re going to replace them with AI.” And then three months later they’ll be like, “How do we get the 300 people back?” Or their entire company will implode because they’ve handed over too much autonomy to AI agents. I think that’s entirely possible in the next 12 months. You will see a medium to large company just have an utter disaster because they’ve given too much autonomy to AI agents. So I just feel it’ll be uneven in terms of how it will be distributed. There will be some insane outcomes and there’ll be some companies that are like, “Oh, we’re using these in a very valuable way.” It’ll be a mix.

Lauren Goode: From a user perspective, I tend to think that the autonomous part of this is going to be overemphasized for a long time. It’s a little bit like Tesla having promised full self-driving for so many years now. And actually, what the autonomous part of the driving is good for is taking your hands off the wheel sometimes when you’re on a highway lane or doing the parallel parking using the robot. But fully autonomous, Waymo-level driving hasn’t arrived yet in a Tesla. And I can see a world where the “autonomous” agents are actually just pretty good at doing stuff in the background when you’re doing something else and the expectation is still that you are going to check in on them.

At Google IO earlier this year, Google was showing off something called Project Mariner, and that was doing some pretty interesting kind of web browsing and shopping and buying and processing while you were doing other things still on the computer, and then you would have to check in on it once in a while. And I’m sure that will evolve too, and it’s in early stages, but that to me just made more sense than a lot of the other promises or even over-promises that I’ve seen with AI agents.

Michael Calore: Yeah. So the future of work is babysitting your AI, maybe.

Lauren Goode: Maybe. But maybe it won’t even feel like that. There are certain things that we do now on the internet or on our computers that require just background tasks going on all of the time that we don’t really think about, but we do have to manage in a sense. And maybe that’s not a bad thing. Maybe having a little bit of agency ourselves amongst all these agents is a good thing.

Michael Calore: That’s a great point and a great place to take another break. We’ll be right back. Lauren and Evan, thank you both for a great conversation. I, for one, am feeling pretty good about the fact that I’m a human with human colleagues who are not bots. So we’re going to dive now into our final segment, it’s called WIRED and TIRED. Whatever is new and cool is WIRED and whatever passé thing it’s replacing is TIRED. And Evan, I think we have to ask you to go first.

Evan Ratliff: I will go first, partly because I can lay claim to having fact checked and partly edited WIRED/TIRED in print decades ago.

Lauren Goode: Amazing.

Evan Ratliff: I trained for this many, many years ago as a young person. Although in my day, there was also EXPIRED. You had to have WIRED/TIRED/EXPIRED, you couldn’t just have two.

Lauren Goode: You should throw one in there then. I want to hear your EXPIRED.

Evan Ratliff: Yeah. OK. WIRED, AI-free email.

Michael Calore: Nice.

Evan Ratliff: I have a disclaimer on the bottom of my email that says this email was written and sent without the use of any AI, partly because it’s something that I encounter all the time now as people think that they’re talking to an AI when they talk to me. It’s my fault, I created this problem. But I feel like AI-free email is a WIRED thing.

Lauren Goode: So does that mean that if you’re writing an email and it suggests the next word and it is the correct word, do you tab and select it? Do you go with it?

Evan Ratliff: No, I reject it. I won’t use it. If it suggests it, I will not use it.

Lauren Goode: Wow. OK. Committed to the bit.

Michael Calore: So what’s your TIRED?

Evan Ratliff: My TIRED is just messaging apps for parents. I get so many messages all day from a wide variety of school and parental discussion groups and apps. It’s insane. It’s way beyond the number of messages I ever get for work. That’s my TIRED. And EXPIRED has got to be any type of Zoom gathering. Let’s get together on Zoom for anything. That one is dead.

Lauren Goode: Yes. Yes. Snapping fingers, that’s an era that we do not want to go back to. It’s over. So I’m just shuddering thinking of that. Wait, Evan, why don’t you build an app that dispenses AI agents for parents to respond to all the parenting threads?

Evan Ratliff: I could. I could do that. I mean, well, I wouldn’t do it, but my colleagues at HurumoAI, I will suggest it in our next idea meeting and they will undoubtedly run with it as they do. Yeah.

Lauren Goode: They will. They will. And I’m sure parents will think that’s totally great just having an AI responsible.

Evan Ratliff: No privacy issues.

Lauren Goode: Yeah, no privacy issues at all. No concerns about the welfare of their children.

Michael Calore: None.

Lauren Goode: Yes.

Michael Calore: Lauren, what’s your WIRED and TIRED?

Lauren Goode: I can’t beat that. Those were so good. My WIRED, so loyal listeners of the show may recall that several weeks ago we had our colleague Zeyi Yang on the show and he recommended a documentary on PBS called Made in Ethiopia. And I had the chance to watch it this past weekend and it is as good as Zeyi suggested. So I recommend that. It takes place around the 2018 to 2019 timeframe, through the pandemic and post-pandemic, and it’s about a Chinese manufacturing company that tries to build this area of Ethiopia into a manufacturing hub. And they run into a lot of challenges.

The documentary focuses on three women in particular. One who is a farmer, one who works in a factory, and one who is actually a representative from the Chinese manufacturer who has come there to sort of facilitate the build. And it’s just fascinating. It was really good. So thank you, Zeyi, for that rec. And if you haven’t watched it yet, I recommend it. It’s on PBS. And then my TIRED is Instagram.

Michael Calore: No more Instagram?

Lauren Goode: No, I’m just taking a pause. I just think sometimes it’s time for that. And it’s kind of a hard time to take a pause, because it’s going into the holidays, and so it’s nice sometimes to see people’s photos from the holiday season. But it’s just, I don’t know, I don’t think it’s great for mental health. For my own mental health, at least; I’m stating my opinion that I don’t think it’s great for mental health. So I’m taking a pause from it.

Michael Calore: OK. And now you have to do an EXPIRED, because Evan set the bar.

Lauren Goode: Oh, shoot. EXPIRED? 2025, almost. Let’s just get it out of here, folks, because I’ve aged 17 years in the past year. Get it out.

Michael Calore: Truth.

Lauren Goode: How about you, Mike?

Michael Calore: So my WIRED is thoughtful, personalized gifts. We’re going into gifting season. It’s not that difficult to figure out what somebody wants and just get it for them. TIRED is gift cards. I feel like the gift card is often appreciated, but also doesn’t go a very long way towards showing somebody how much you understand them and how much you care about them. So a thoughtful, personalized gift is something that the person obviously could use in their life, but they’re not going to buy it for themselves. Like a new pair of boots, or if they’re really into mezcal, you get them the really nice expensive bottle of mezcal, or just something that they’ve never had before. So knowing a little bit about them, showing them that you are paying attention to what their interests are and what they care about, and then sort of yes-anding those interests for them by giving them something that they would not normally pick out.

When I say thoughtful, personalized gifts, I don’t mean a piece of luggage that you got their initials engraved on. I mean, just something that is obviously for them and for them only. This is the only person in your life who you would buy this gift for, right?

Lauren Goode: Oh, does my gift not count then?

Michael Calore: Oh, let’s hear it.

Lauren Goode: Tell Evan what I got you.

Michael Calore: She got me Pope soap.

Lauren Goode: I got him Pope soap. I got him soap from the Vatican.

Michael Calore: Wow.

Lauren Goode: So we called it Pope soap.

Evan Ratliff: Pope soap?

Lauren Goode: And then your dad said it should be on a rope.

Michael Calore: Yes.

Lauren Goode: So it’s Pope soap on a rope.

Michael Calore: Following up with the dad joke, for sure.

Evan Ratliff: It’s holy soap.

Michael Calore: It is. It is.

Lauren Goode: I thought of you when I saw the Pope soap.

Michael Calore: Thank you.

Lauren Goode: Yeah, it was personalized for me.

Michael Calore: I love it. I love it. It’s great. And then I would say EXPIRED is just cash.

Lauren Goode: Wow.

Michael Calore: Cash is great at weddings, bar mitzvahs, big birthdays, but for the holidays, don’t give cash.

Lauren Goode: Why not?

Michael Calore: I mean, you could if you wanted to, but it’s the holidays.

Lauren Goode: Yeah.

Michael Calore: Give them cash for New Year’s.

Lauren Goode: Wow, throwing cash out with the pennies.

Michael Calore: Evan Ratliff, thank you for being here this week.

Evan Ratliff: It was a joy to be here speaking to you humans. It’s not a typical day for me.

Michael Calore: You can listen to new episodes of Evan’s podcast series, Shell Game. They’re coming out every week and you get to follow the saga of his AI agent company and all the drama within. Thanks for listening to Uncanny Valley. If you liked what you heard today, make sure to follow our show and rate it on your podcast app of choice. If you’d like to get in touch with us with any questions, comments, or show suggestions, you can write to us at [email protected]. Today’s show was produced by Adriana Tapia and Mark Leyda. Amar Lal at Macrosound mixed this episode. Mark Leyda is our San Francisco studio engineer. Matt Giles fact checked this episode. Kate Osborn is our executive producer and Katie Drummond is WIRED’s global editorial director.

The post What Happens When Your Coworkers Are AI Agents appeared first on Wired.