A.I. slop is taking over the internet. As the line between human-made and machine-made art blurs — and real and fake images collapse into one another — how should we think about culture now? On “The Opinions,” the Opinion culture editor Nadja Spiegelman sits down with the columnist Tressie McMillan Cottom and the creative consultant Emily Keegin to discuss what A.I. slop is for, who benefits from it and what comes next.
Below is a transcript of an episode of “The Opinions.” We recommend listening to it in its original form for the full effect. You can do so using the player above or on the NYTimes app, Apple, Spotify, Amazon Music, YouTube, iHeartRadio or wherever you get your podcasts.
The transcript has been lightly edited for length and clarity.
Nadja Spiegelman: To start, I want to know for each of you, Tressie and Emily, when was the first time that you were engaging with something online, thinking it was real, and then realized that it was A.I.?
Tressie McMillan Cottom: I am easily convinced to quickly reshare any sort of funny man-on-the-street content. So, there was a video of a guy saying something that I found hilarious, and I shared it very quickly. This was Instagram, I’m pretty sure.
But something in my gut said, “That was too funny.” It was too perfect for me. I went back and rewatched it and then caught the unnatural emotion on the face, which I think — for the time being anyway — is still a tell for A.I. slop.
I unshared it so that I wouldn’t participate in the A.I. slop economy. But my defenses are much lower when the content is funny, which I suspect is true for all of us.
Spiegelman: Can you describe what that emotion was?
McMillan Cottom: Well, one, I felt tricked. There’s that sense of betrayal and then there was also, if not shame, certainly a little chagrin.
Of all people, a person who studies and teaches and thinks about and writes about digital technologies and our authenticity crisis and affect and emotion and all of that stuff — the idea that I could have been tricked was a little ——
Emily Keegin: It’s very humbling, isn’t it?
McMillan Cottom: It is.
Spiegelman: Yeah. What about for you, Emily? Do you have a moment?
Keegin: Honestly, I’m really sad to admit this, but I was fooled by the photographs from last weekend of the Nicolás Maduro capture in Venezuela.
There were a few that went around that were horizontal and showed him handcuffed outside of a plane. And the reason that I believed them to be real — and I believed them to be real for 24 hours — was because they had been shared on Instagram by someone who I trusted.
That person had created a reel about how they looked very similar to Saddam Hussein pictures from 2003. So, I had just kind of gone with it.
When we see images on these platforms, we’re seeing them very small. We’re seeing them very quickly. And both of those things make it very hard to sit with an image and decode it and make sure that it’s real.
I’m embarrassed to say I was tricked by that.
Spiegelman: So, embarrassed is one of the feelings. How else did it make you feel?
Keegin: I was embarrassed because I’m a photo editor. [Laughs.] I came up in newsmagazines; my job has been to check the truth of an image. That was my job, is my job. I didn’t do due diligence on this one.
The takeaway is that we can’t take candy from strangers anymore.
Spiegelman: Of course, the scariest element of all of this isn’t just when things that are funny are too funny, but the manipulation of these hugely consequential world events and our ability to trust that what we’re seeing in the news is actually news — like those fake images of Maduro’s capture in Venezuela, like people using A.I. to try to identify the ICE agent who shot at a civilian in Minneapolis.
Tressie, do you think people are getting more savvy or are you worried about this breakdown in trust?
McMillan Cottom: Oh, no, there’s no way that we can become more savvy. One of the things that people, certainly in my world, who think about the digital space and the social world are pretty much in agreement about is that this is not a problem that developing the right skill set is going to solve.
As Emily points out, everything about the affordances of digital technology — meaning what the app or the tool allows you to do, how it sets up and controls and directs your attention — is designed to overcome pretty much anything that we would train a person to do.
So for Web 2.0, for example — and certainly Web 1.0 — we would tell people to check the person who was sharing it. In Emily’s case, this is a person she trusts, who was probably tricked too, by the way. Or we would tell you to look at the web address, and you could trust institutions, especially a .org or a .edu or a .gov. Today, I don’t trust almost anything on a .gov website.
But those are the literacies that we spent the last 10 or 15 years training people on so that they could be better consumers of information. The reality is that technology has outstripped our ability to teach ourselves a set of tools, at the level of accuracy that I think we would need.
The speed is so great, the sophistication of the tool is so good, and there’s just so much of it. We talk about the scale of it.
It isn’t that I am becoming worried. I think the time to become worried is behind us. I also think that this is a consequence of trust already being broken.
One of the reasons that I think the content generated by especially nefarious actors using A.I. really works on us is because we are already so distrustful of our social institutions.
This is one of those cases where I’m not sure that the A.I. slop is creating the crisis. It is exacerbating it, but it is not creating it. This is a consequence of low social trust already.
Keegin: A hundred percent. What we’re seeing over the last few days coming out of the ICE shooting in Minnesota are real videos and real photographs, and we are having conversations about how to understand those images and we’re seeing a country divided on what they’re seeing.
Spiegelman: Yeah, I agree. What’s so interesting about this Minnesota case is that it also depends on which of these images you’re looking at. They’re all real footage, but you can see something very different depending on the angle from which it’s filmed.
And if we can’t agree on how we’re reading real images, how can we agree in a world where we can’t even know if the images are real? From both of you, I’m curious, how do you think all of this A.I. content is going to impact media organizations like The New York Times, and what can journalists and media organizations do to keep building trust?
Keegin: I mean, I think we have a real opportunity for legacy media to be the place where you go to find real, trustworthy information. What these organizations have in place are teams of people dedicated to verifying images and facts, and when we are scrolling, having their icon next to an image or next to a piece of information is helpful in verifying it as real.
But I think that one of the things that we should remember is that what we’re really focusing on here are how images and text come across through tech platforms, namely social media.
We’re talking about what happens when we’re looking at the news or entertainment through Instagram, X, Threads. Those three tech platforms have been built around images and the trafficking of images — and they’ve done very little design work to help make sure that the person who is looking at that image can understand what they’re seeing.
Print media has spent a long time figuring out how to make sure that when they print an image, the person who’s taking that image in can properly read it. An image is very slippery and the way that we understand the world through photography is actually not usually what’s inside the frame, but how it’s contextualized with text and design around the frame.
None of those three platforms have done any legwork to make sure that those images are being held properly for the viewer to understand what they’re looking at. Now A.I. is here and they have a lot of work to do.
McMillan Cottom: I would be even more pointed and say there is no economic incentive for these platforms to do a better job of making consumers more informed and making them more media literate. In fact, the incentives are in the other direction.
The ability to manipulate images has existed for a very long time. But when the tools become cheap enough and accessible enough, which is what we have seen with things like Grok or Sora, tools that are really popular for manipulating video and photos in particular, that ability is democratized. It is available to so many more actors.
We saw what happened, though, when that came for text. One of the reasons that Twitter became such a lightning rod in the popular discourse and in political discourse was because people had come to trust the text that was shared on Twitter — and then suddenly, it felt like overnight you were overwhelmed by what we might call pre-A.I. slop or that moment right before the A.I. slop.
You were flooded with questionable text suddenly, and you saw a decline in trust, but you didn’t necessarily see a decline in people using it.
So, the takeaway, if I’m a person running a social media platform, is that people will not change their user behavior based on whether or not they trust the platform. They will change their behavior based on whether or not it’s easy to use or it appeals to them emotionally. That means the incentives aren’t there to be a trustworthy media platform, and that is actually not a job that the platform itself should be deciding.
The real question here is, where is the government? Where is legislation? Where is the popular outcry to say we deserve a better media environment than the one that we have?
If we leave this to just pure economic incentives, the social media platforms are doing exactly what they should be doing. They are making us feel something. They are not necessarily informing us.
Spiegelman: I think that’s such a good point. They don’t have an economic incentive to regulate this, but they are driven by what people will look at, and people will look at A.I. images that trick them. Yet people don’t like looking at A.I. images and videos; I think there’s a survey that says half of U.S. adults would use social platforms less if there were more A.I. content on them.
Can this be fixed by the fact that people simply don’t like looking at this imagery?
Keegin: We can hope, crossing fingers. I mean, it’s called slop because it sucks. It’s slop because it’s not good, and the reason it’s not good is because A.I. is trying to be photography and it is nothing like photography.
The reason that photography is interesting to us is the way that it’s created, how it’s built, and the fact that it’s based in the real. When you take that out, the image is boring.
What’s happening is A.I. is like, “Oh, we can be the new photography. We can look so real, just like photography looks so real.” But if it’s not based in the real, those images hold very little interest for us.
McMillan Cottom: Yeah, I strongly agree. I think the feeling that we are having when we see an A.I. image or an A.I. video is very similar to the feeling we have when we read text written by A.I., which is: The words can be in the right order, you can recognize the form of it as being a sentence, a paragraph, a story or a book, but you do not have the appropriate emotional response to it.
Now, we can get mystical and say that there’s something about the human spirit that we infuse into our art, and I am not disinclined to believe that. But I think that whatever that process is, A.I. can look like reality, but it cannot communicate emotionally to us in a way that resonates as being authentic.
So, I actually think using the word authentic to only speak about the aesthetics of A.I. is not exactly right. Something can be very beautiful and still leave you cold.
What I think people are experiencing is, “Hey, that was a really cute video of a puppy jumping up and down on a trampoline. I should like that, and instead, I don’t really feel anything.”
Having said that, we have spent the last 30 years developing a pretty nasty habit of scrolling, and I think it is going to take a lot more than a couple of feelings of betrayal to interrupt that loop.
I have to tell you, I’m in a group chat with a group of elderly people who I love and respect a lot — that’s why I’m in the group chat. It’s my mother and some of her friends, and they’re technologically savvy. They are used to using technology in their work lives and in their personal lives, and they’re crazy for A.I. images.
They are crazy about them. They are making them, posting them, sharing them constantly. And you can point out to them, I will say, “Hey, Auntie, that is A.I.” And they will say, “Oh, yeah, I know. Hahaha.”
They are not actually then looking for or responding to authenticity. They’re responding to this the way I think about the art they sell in the mall, right? Nobody is going up to look at that art so that they can have an emotional experience with it. They’re consuming it on a slightly different level.
I think the conflation of those levels, though, does create a social problem for us. But I say all that to say I’m not sure that leaving us emotionally cold is going to be enough to break the habit of using these tools.
Spiegelman: I think you’re drawing such a good distinction.
There is: Can A.I. create art? Art is something that we think of as a way of communing with another soul, and if A.I. does not have a soul, then it cannot create art.
And then there is: What are we doing when we are scrolling on our phones, when we are addicted to social media?
We’re not looking for the same experience that looking at a painting in a museum gives us. We’re actually looking for a much cheaper, faster emotional rush. Perhaps A.I. can do that. And if it can do that, we will continue consuming it.
Keegin: I do think A.I. can create art, though.
McMillan Cottom: Do you, Emily? I’m surprised!
Keegin: You can’t be the person working in photography and say that a new form of technology is not art. There are photographers who have been working with A.I. recently who are making incredible things.
When A.I. works as an art practice, it is hyperconscious of itself. It’s speaking to A.I. and talking about the medium in the process.
McMillan Cottom: What I hear you saying is: one that is honest. I actually just saw a really beautiful art piece that a local artist made recently, where he used an A.I. prompt to try to recreate a memory he had of a house from when he was a child. Then he takes the A.I. image and sort of paints through it.
So, you see both the A.I. impression of what they thought he was describing juxtaposed against his painting of his memory. Again, super self-aware, self-referential. I absolutely would call that art.
I do question, however, the difference between experts or artists using the tool to bring to life the conversation they want to have with the culture, and an A.I. prompt that someone comes up with off the cuff to send a friend a funny video of themselves dancing in a Santa costume.
I think a part of the problem is that I don’t think that there are that many incentives for us to make that distinction online, and I’m not sure that we have the shared sensibilities to care that there’s just so much more of the A.I. slop than there is the artistic interpretation of A.I.
Keegin: Yeah. I guess I would say that if I take a funny video of myself, run it through A.I. to turn myself into a cat and send it to you, that’s not slop. Don’t you want it? You want that!
McMillan Cottom: Well, cats are the reason for the internet, arguably, I get that. But you would enjoy it.
Keegin: There’s not a big jump between the filters that we have on our phones already and putting it through the tool of A.I.
I think when we talk about slop, what we’re talking about is that it’s not well done.
Spiegelman: I’m so interested in this question of “Can A.I. create good art?” But part of that question gets to how do we define art? My dad is an artist, and when I was 7 years old and he was about to go to the dentist, I asked him what art is — and he always asks for laughing gas at the dentist and says that this is a really good time for him to think. So, he was like, “Hold that thought” and then got a lot of laughing gas and then came out and was like, “I know the answer to this question.” [Laughs.]
The answer he gave me then — and it’s been really useful for me my whole life — is that art is the means through which we give shape to our thoughts and feelings. It’s very much a definition that comes from the perspective of the artist, but I think, even as a consumer of art, part of the joy of it is feeling like we have connected with someone else’s experience of the world, or with someone else’s emotions.
I think that A.I. just can’t do that, which makes it bad in a new way.
Emily, I’m so curious about your thoughts on this because I love that you want to defend that A.I. can create art. I think it’s so interesting.
Keegin: Well, A.I. isn’t creating art; the person who’s prompting A.I. is creating the art. They are making a choice to turn on their computer and type in some words, and those words produce something.
It’s the same process as when you pick up a pencil and draw, or when you decide to click the shutter. The person making the prompt is the artist in that conversation. They’re making the choice to start that art process. It’s a tool.
McMillan Cottom: I hear that, but that is not actually why we have so much A.I. slop. It isn’t that there are billions of people going in and putting in one prompt at a time. It is that you can create an A.I. that will generate prompts from generative A.I. text. The steps of human removal from the process aren’t just human-to-prompt. If they were, then maybe we’d just be talking about the new era of Pop Art.
It is that there is a point at which the human can be removed from the loop entirely. I think that is why it is so valuable to social media platforms, which have figured out an equation where it is just about generating something new for people to respond to, whether they’re actually having a strong emotional response or not — it’ll just keep people engaged and doomscrolling, and it needs to be cheap enough to do that. So, I think one of the reasons we feel so cold is that a lot of this actually isn’t a human being sitting down and writing a prompt. I think the technology could improve and it could be harder to discern, and we would still be left emotionally cold when we engage with it.
Spiegelman: Can we also talk about the aesthetics of what it is that A.I. creates? Because that is changing rapidly as it gets better and better at making things that look exactly like the kind of art we create. We call it A.I. slop, but what we’ve thought of over the past year when we think of A.I.-generated imagery is actually something really slick and perfect, something akin to Trump’s tackiness.
I think there’s a reason he loves sharing this kind of video.
I’m curious if either of you think that A.I. itself has its own aesthetic. How you would describe it, and where does that aesthetic come from?
Keegin: Well, I think it’s changing. That aesthetic that you just described, a very slick, kind of plastic feeling to skin — I feel like recently what I’ve been seeing is A.I. trying even harder to look like old photography, with lots of grain, lots of pixelation.
You saw that with the J. Crew and Vans advertisements from earlier in 2025. So, I think it might be changing a little bit.
McMillan Cottom: Yeah, I think if we over-rely on aesthetics to tell us when A.I. has created an image or a video, then we have already fallen for the trick of A.I., which is to think that we individually, alone, can discern when it has been used and when it hasn’t.
I’m struck by how much of A.I. slop is just nihilistic in its position on society, that there’s no choice about any of it.
There’s no political statement, there’s no cultural statement, it’s not an artistic statement except that I made you respond. I captured your energy for about eight seconds — the eight seconds it took you to hit repost.
So, I think the aesthetics will get better, but that doesn’t necessarily mean it’s going to have an aesthetic position.
It does get scarier, though, to think about how much easier better aesthetics will make it for us to be fooled by A.I.
Spiegelman: It’s true that it is cycling through different trends the way art always has, that it’s moved from slick to looking more analog and that it could only have a true aesthetic if it’s guided by a human’s sense of aesthetic and position and meaning.
Thinking about the return to graininess and analog that you’re both talking about, I’m seeing a lot of people say that 2026, in reaction to A.I. imagery, is going to be the year of analog. That it’ll be a moment when people want to craft in person, meet in real life and step out from behind the screen.
Do you think that has any teeth or is everything too digital now for that to happen?
Keegin: I think we’re definitely seeing an aesthetic trend. The return to film has been an aesthetic trend for the last few years, and it’s only building. What is funny is that A.I. then looks like film. It follows you wherever you go and shape-shifts into whatever you create.
McMillan Cottom: Yeah. I like to think that we are entertaining a return to material craft and all that. But we’ve had eras where we’ve said that before, and it’s not that it isn’t true; there is some trend, there are groups starting, people going through cycles of making zines, for instance.
I’m just not sure those trends are strong enough to be an antidote to whatever it is about A.I. slop that scares us, because that’s fundamentally what we’re talking about.
There’s just some great unknown there. It scares us. It’s a little intimidating, and I’m not sure that getting back to crafting culture, as much as I love it — I’ve been working on a zine with my stepdaughter, trying to expose her to that whole world, and so I love it — but I’m not sure that is the antidote to “Why does this scare us so much?”
Keegin: We’re seeing corporations, though, really take this to heart: They are now showing us how their advertisements were made, to prove that humans were behind them.
Apple has done this — they started doing this around the middle of last year. For a lot of the ads they were creating, they then made behind-the-scenes videos to make sure that we all knew there were people involved, that the puppets on display were handmade and had real puppeteers attached to them.
Even though we all saw it small, scrolling on our phones, ultimately we weren’t sitting in a theater watching puppets. We were watching an image that honestly could have been done digitally. But they wanted to prove that they’re human. And I feel like we’re trapped in a terrible episode of “Is It Cake?” or something, and it’s like we all have to prove ourselves. We’re just stuck there.
I’m not convinced that it’s a bad thing, honestly. I’m not convinced that we will get out of this in a worse place. This might actually be a great exercise in reminding ourselves what we care about and how we are human beings that want to be around other human beings and are excited about real things.
McMillan Cottom: I will agree with Emily on that one. I think there are some dark days ahead, don’t get me wrong, where it’s going to be like, “I cannot consume another A.I. movie or another A.I. script.” By the way, I’m now obsessed with movies. I watch and I go, “I’m positive that it was written by A.I.”
I grade essays all the time for a living. I think I know when it’s a human voice and when it isn’t, and I am watching so much these days where I am so sure it was written by A.I., but all that does is make me want to then read a real book. I want to cleanse myself of that after I’ve experienced so much of it.
I tell my fellow writers, younger writers, who are all wrestling with these questions about A.I. and what it might do to publishing and the written word: “Yeah, there will be this era of fascination, but in the end, I think it only creates more desire for real writing, for a real conversation, for real engagement.”
I do think, in the long run, the human experience and our desire for it wins. It’s just that in the interim, there’s going to be a lot of really bad stuff to wade through and sort through to reconnect with human nature.
Keegin: Yeah, just look at the history of photography, what it did for painting and how painting still sells at a much higher price point than any photograph ever taken. That’s because of the human hand attached to the act of painting, not because of how much time it takes to make a photograph; we know that a professional photographer could take years to create a piece of art, and it’ll still sell for less.
We have to ask ourselves why. It’s not just about the amount of time put in, but about how we understand the role of machines in creating a piece of art, and whether or not we value machine-made objects and art as much as we value art and objects made with a human hand.
Spiegelman: That feeling you’re describing, Tressie, of needing to cleanse yourself by reaching out for connection, or that you’re talking about, Emily, of wanting to feel the human hand behind a piece, which makes us still value painting even in an era when an identical image could be created by a machine in seconds — that creates existential questions that are actually hopeful and useful for us: What is it that we want from the world, and what does it mean to be human?
But, as you said, there are dark days ahead. We’re not there yet. And I wonder, for those dark days, as we’re being inundated by more and more of these images — I personally would love tips on how to spot A.I. content.
I know you’ve said throughout this conversation that it’s getting harder and harder to spot, but as journalists and consumers of culture, how do you still try to differentiate things?
Keegin: OK, I’m going to answer your question, but I keep thinking about how whenever there’s a flood, there’s a photograph that shows up of a shark in floodwaters. That’s years old — and every time I’m like, “Oh! Oh, wait. No, that’s fake.” I forget every time. So, I don’t think there’s a way, honestly.
I think that the reason you know that there aren’t sharks swimming in the flood outside your door is because it’s not in the pages of The New York Times. I think that, at this point, the only way to know if an image is “real” is if whoever is trafficking it is a place or person that you trust and is verified.
McMillan Cottom: Yeah. Listen, you are on TikTok at 2 a.m. and you see the cute baby saying the funny thing — if it’s funny, no harm, no foul, right?
I think the question becomes: When do we allow the image to move us to act? Whether that means we share it, whether that means we get very angry or anxious about some major news development. Then the question becomes, “Wait, is this real?” and that question becomes all the more important.
So, I think the first question is: Does this make me want to do something?
“Am I enjoying it too much?” is one of the questions I ask myself. “Do I agree with it too much?” If so, that may be less about the artifact and more about how it affirms my biases or my beliefs, or makes me feel right or superior. So, one of the easiest things we can do is this: If you like it too much, interrogate it. That’s all. And if you can’t easily interrogate it, maybe don’t share it.
Maybe we just had a little blip of enjoyment online for eight seconds, and it’s OK to let that just wash over us because, ultimately, that’s all A.I. slop is designed to do. It’s just supposed to wash over us.
Spiegelman: That is perfectly said, Tressie. Thank you. Although, I really hope that we can also all go outside and touch something real.
McMillan Cottom: Yes, please go outside. I cannot encourage people to do that enough. Go outside and remember what the sky looks like, everybody.
Spiegelman: And on that note: Emily, Tressie, thank you so much for being here. I hope that you find genuine moments of feeling and connection in your day among the A.I.-generated imagery.
Keegin: Listen, if you dress yourself up as a cat dancing, just send it to me.
McMillan Cottom: Emily’s the audience for people as cats; that’s what I’m taking away from this conversation. [Laughs.]
Keegin: Thanks for having me.
McMillan Cottom: Thanks for having me.
Thoughts? Email us at [email protected].
This episode of “The Opinions” was produced by Vishakha Darbha. It was edited by Alison Bruzek and Kaari Pitkin. Mixing by Pat McCusker. Original music by Isaac Jones and Carole Sabouraud. Fact-checking by Mary Marge Locker. Audience strategy by Shannon Busta and Kristina Samulewski. The director of Opinion Audio is Annie-Rose Strasser.