KEVIN ROOSE Have you noticed how insanely polarized the A.I. discourse has become recently?
CASEY NEWTON From time to time I take a look at my Bluesky mentions and am reminded of the many, many skeptics out there and how differently they see the world.
ROOSE Honestly, I think all the arguing about whether A.I. is good or bad obscures a more interesting thing happening right now, which is that this stuff, in its present form, has become genuinely useful. ChatGPT is the sixth-biggest website on Earth. Something like 43 percent of Americans in the work force use generative A.I. I can’t think of another technology, besides maybe the smartphone, that has gone from “doesn’t exist” to “basically can’t function without it” in less time.
NEWTON The only historical analogue I could think of would be the “Hard Fork” podcast.
ROOSE I used to feel like a crazy early adopter for using A.I. all the time, but now I feel as if I am actually closer to the median of the people I know, in terms of my daily usage.
NEWTON I’m seeing a wide range of uses. Some people I know are essentially just having fun — like my mom, who used a chatbot to help find songs for her 50th-wedding-anniversary photo montage. I’m increasingly using it for work functions — asking it to research unfamiliar topics to help me get a jump-start, for example, or taking a first stab at fact-checking.
My boyfriend is probably the biggest power user I know. He’s a software engineer, and he will give his A.I. assistant various tasks, and then step away for long stretches while it writes and rewrites code. A significant portion of his job is essentially just supervising an A.I.
ROOSE A.I. has essentially replaced Google for me for basic questions: What setting do I put this toaster oven on to make a turkey melt? How do I stop weeds from growing on my patio? I use it for interior decorating — I’ll upload a photo of a room in my house and say, “Give this room a glow-up, tell me what furniture to buy and how to arrange it and generate the ‘after’ picture.” A friend of mine just told me that they now talk to ChatGPT voice mode on their commute in their car — instead of listening to a podcast, they’ll just open it up and say, “Teach me something about modern art,” or whatever.
NEWTON That’s a terrible threat to our business. What are we doing about this?
ROOSE I guess it’s time to pivot to modern art. Another person I know just started using ChatGPT as her therapist after her regular human therapist doubled her rates.
NEWTON And let’s just say: If you’re a therapist, this is maybe not the best time to double your rates.
ROOSE Right? Some of this is just entertainment, but we’re also starting to hear from listeners and readers using this stuff to solve real problems in their lives. One of my favorite emails we’ve ever gotten on the show was from a listener whose dog’s hair was falling out. She went to multiple vets, tried a bunch of different treatments. And then one day, she thought, Well, I’m going to try putting my dog’s symptoms into Claude. And Claude figured out, correctly, that her dog had an uncommon autoimmune condition that none of the vets had caught.
NEWTON Actually, the dog was stressed out about A.I., and that’s why its hair fell out.
A Very Smart Assistant Who Is Also High on Ketamine
ROOSE So these are some of the amazing and wonderful things that today’s A.I. systems are capable of. But we should also say that real limitations remain.
NEWTON That’s right. If you don’t pay close attention to them, they tend to be bad at certain common-sense things. For technical reasons, they don’t have great memories yet; they’re not amazing at long-term planning. Also, they’re not always aligned with human values: They might lie or cheat or steal to get what they want.
ROOSE And then, of course, there’s the hallucination problem: These systems are not always factual, and they do get things wrong. But I confess that I am not as worried about hallucinations as a lot of people — and, in fact, I think they are basically a skill issue that can be overcome by spending more time with the models. Especially if you use A.I. for work, I think part of your job is developing an intuition about where these tools are useful and not treating them as infallible. If you’re the first lawyer who cites a nonexistent case because of ChatGPT, that’s on ChatGPT. If you’re the 100th, that’s on you.
NEWTON Right. I mentioned that one way I use large language models is for fact-checking. I’ll write a column and put it into an L.L.M., and I’ll ask it to check it for spelling, grammatical and factual errors. Sometimes a chatbot will tell me, “You keep describing ‘President Trump,’ but as of my knowledge cutoff, Joe Biden is the president.” But then it will also find an actual factual error I missed. So I get to see the limitations of the chatbot but also the power.
ROOSE For me, the tasks I tend to use A.I. the most for are ones where there is no clear right or wrong answer: It’s for brainstorming, it’s for extrapolating, it’s for helping me come up with 20 ideas for different questions I could ask a guest on our show, and maybe one or two of them are directionally useful to me. What about you?
NEWTON Yeah, brainstorming is huge. I would also say finding things in long documents, summarizing long documents or asking questions of long documents. How many times, as a journalist, have I been reading a 200-page court ruling and wanted to know where in the ruling the judge mentions a particular piece of evidence? L.L.M.s are really good at that. They will find the thing, but then you go verify it with your own eyes.
ROOSE The mental model I sometimes have of these chatbots is as a very smart assistant who has a dozen Ph.D.s but is also high on ketamine like 30 percent of the time. But also, the bar of 100 percent reliability is not the right one to aim for here: The base rate that we should be comparing with is not complete factuality but the comparable smart human given the same task.
NEWTON Dario Amodei, the chief executive of Anthropic, said recently that he believes chatbots now hallucinate less than humans do. That feels like a hot take to me, but I would like to see the data.
ROOSE I would, too. But we know humans are not perfect either: The New York Times has a corrections page every day with stuff we hallucinated, so to speak. And actually, that gives me an idea: A.I. companies should publish regular lists of the most common mistakes their models make, so we can steer clear of them on those topics.
‘Hi, We’re Here to Take Away Your Job’
ROOSE Casey, despite the fact that we are both fairly struck by the sophistication and the capabilities of these A.I. tools, there are a lot of skeptics out there — people who don’t believe that these things are doing much more than just predicting the next word in a sequence, who don’t think they’re capable of any kind of creative thinking or reasoning, who think that this is just fancy autocomplete and that the limitations around these tools will somehow turn this whole A.I. thing into a flash in the pan. What do you make of that argument?
NEWTON Well, I think we already have enough evidence to know this is not a mere flash in the pan. A.I. companies are doubling their revenue year over year or growing even faster than that. Businesses are hiring them to solve real problems, and they keep spending more. So that suggests to me that those customers are seeing real results, that this has moved out of the experimental stage. At the same time, there are so many reasons to critique A.I. For the way it was trained, largely without the permission of anyone who created the training data. For the environmental concerns — the construction of so many data centers, the energy use, the effect on local populations and water supply. And for the threat it poses to human creativity and ingenuity — a lot of these A.I. executives are really saying in a pretty loud voice, “Hi, we’re here to take away your job.” So it’s no surprise to me that you see surveys where a majority of Americans say they think A.I. will have a negative effect.
ROOSE I think so too. Look, I am not an A.I. Pollyanna or even, on some days, much of an optimist. I think there are real harms these systems are capable of and much bigger harms they will be capable of in the future. But I think addressing those harms requires having a clear view of the technology and what it can and can’t do. Sometimes when I hear people arguing about how A.I. systems are stupid and useless, it’s almost as if you had an antinuclear movement that didn’t admit fission was real — like, looking at a mushroom cloud over Los Alamos, and saying, “They’re just raising money, this is all hype.” Instead of, “Oh, my God, this thing could blow up the world.”
NEWTON Yeah, I think so much A.I. denialism comes off as a kind of wishful thinking — which, again, I’m sympathetic to, because in a lot of ways it would be easier if all this stuff was fake and was going to fall into the ocean the way that cryptocurrency did after its 2021 peak. But as journalists, the more we talk to people, the less likely we think that is.
ROOSE Casey, you and I both work in a creative industry. We write words, and say them into microphones, and make videos that go on the internet. How worried are you that the A.I. tools of today, or the ones that are coming, will make it harder for us to earn a living?
NEWTON I am worried. I think that already the value of text feels lower than it did two years ago. My job is basically to analyze and synthesize the news for readers, and that is a skill that chatbots are getting pretty good at. So it does have me thinking about what the next iteration of my job looks like. And I don’t love most of my options. What do you think?
ROOSE I have a somewhat more optimistic take on this: I think that yes, people will lose opportunities and jobs because of this technology, but I wonder if it’s going to catalyze some counterreaction. I’ve been thinking a lot recently about the slow-food movement and the farm-to-table movement, both of which came up in reaction to fast food. Fast food had a lot going for it — it was cheap, it was plentiful, you could get it in a hurry. But it also opened up a market for a healthier, more artisanal way of doing things. And I wonder if something similar will happen in creative industries — a kind of creative renaissance for things that feel real and human and aren’t just outputs from some A.I. company’s slop machine.
NEWTON Yeah, I think there are a lot of human skills that we are going to appreciate more as more of our lives are mediated by robots. In some ways it’s already happening: Analysis of the news used to be pretty scarce, and now you can get it from a chatbot. But what used to be abundant — and is now scarce — is friendship between male adults. We started a podcast, and now people hang out with us every week, because they might not actually have that in their lives. And so this is the way we forge a future in the media: by capitalizing on American loneliness.
ROOSE Wait, you think our listeners are losers?
NEWTON I’m just looking at the statistics. Adults these days have shockingly few friends.
ROOSE No. All of our listeners are attractive and popular and flourishing in their lives. But yeah, I used to think that the most important question for people to ask themselves was, “Is A.I. smarter than me?” And now I’m starting to think that a better question is, “Is A.I. more interesting than me?” In the same way that a chatbot is very good at writing a very generic English-class essay about James Joyce, it is also going to be able to replace the median, middle-of-the-road performer at many jobs. But I think there are other qualities that are actually more important and harder to automate. The one everyone out here in Silicon Valley is talking about these days is taste, which they claim is the ultimate antidote to A.I. replacement.
NEWTON Which is really big talk for a community that wears nothing but hoodies and jeans.
ROOSE And I think as these A.I. systems become more “agentic” — more capable of acting on their own without explicit direction — there’s going to be a lot of renewed interest in how we humans can be even more agentic. Casey, how agentic do you feel today?
NEWTON This is a day when I wish I could just set some A.I. agents to work and go take a nap.
Kevin Roose is a Times technology columnist and a host of the podcast “Hard Fork.”