When DoorDash delivery driver Livie Rose Henderson posted a video alleging that one of her customers sexually assaulted her in October, it set off a firestorm of reactions.
Henderson’s TikTok claimed that when she was dropping off a delivery in Oswego, New York, she found a customer’s front door wide open and inside, a man on the couch with his pants and underwear pulled down to his ankles. Henderson was dubbed the “DoorDash Girl,” and her video accrued tens of millions of views, including some supportive and consoling responses to what she said she had endured on the job as a young woman. Many others on the platform made commentary videos that called into question Henderson’s alleged victimhood, defended the customer, and spread misinformation, with TikTok’s algorithm seemingly amplifying these “hot takes.” Then, following Henderson’s November 10 arrest—she has been charged with unlawful surveillance and the dissemination of unlawful surveillance imagery—a new wave of reactions emerged. (Police have dismissed her sexual assault allegation.)
None of these responses came from Black content creator and journalist Mirlie Larose.
But Larose opened TikTok one day to find dozens of messages from friends and supporters alarmed by a video of her responding to the situation in favor of the customer and DoorDash’s decision to terminate Henderson. (Henderson was fired for sharing a customer’s personal information online, DoorDash spokesperson Jeff Rosenberg tells WIRED.) Staring at the video in disbelief, Larose second-guessed herself for a split second, flushed with anxiety at the thought of the comment section “tearing her apart.”
“Did I film this?” she asked. “It’s my face, it’s my hair.”
“Then, within three or four seconds, I noticed something’s off. There’s no way I said this. I didn’t [want to] talk about this topic,” Larose tells WIRED. The video had been AI-generated.
The situation highlights an increasingly common form of digital blackface, buoyed by the rise of generative AI. The term, popularized by culture critic Lauren Michele Jackson, describes various contemporary forms of “minstrel performance” on the internet: the overrepresentation of reaction GIFs, memes, TikToks, and other visual and text-based media that trade on Black imagery, slang, gestures, and culture. TikTok’s reliance on attention-grabbing short-form video content, coupled with apps like Sora 2, has made it far easier for non-Black creators and bot accounts to adopt racialized, stereotypical Black personas using deepfakes. This is also known as digital blackfishing.
In the midst of the DoorDash/Henderson controversy, users on TikTok began to notice two videos in particular: one from a bot account and another from an actual Black content creator parroting the same script. They adopted what appeared to be DARVO (Deny, Attack, and Reverse Victim and Offender) positions, minimizing the allegations Henderson made and justifying her termination: “I saw the original video posted by the DoorDash girl, and … I understand why DoorDash fired you and why you’re blocked from the app.” The videos go on to say, “As for the guy, I can see why everyone is saying he did it on purpose. But when you look at the original video, that couch is not in eye view unless you angle yourself and look over, and if you really want to break it down, he’s inside his house.” In a statement on Facebook, the Oswego City Police Department said the man was “incapacitated and unconscious on his couch due to alcohol consumption” and that the video was taken outside his house. Police also said they “determined that no sexual assault occurred.”
As the deepfakes gained virality across TikTok, Instagram, X, and Reddit, the assumption was that both videos were AI-generated because the talking points were eerily identical; some even speculated that DoorDash orchestrated an AI PR campaign against Henderson. DoorDash tells WIRED that the company is “aware of AI-generated content surrounding this case and in no way condones or supports it.”
In reality, a bot account, uimuthavohaj0g, had dubbed the audio of creator NDR Antonio V’s DARVO-style response over an AI-generated video made with Larose’s face and likeness. The bot account’s deepfake also uses an out-of-context clip from Henderson’s original video in the background to drive home the claim that Henderson fabricated her sexual assault allegations to garner a “platform.”
After TikTok removed Henderson’s original video, she posted that she attempted to upload it again, without the footage of the assailant. But that version was removed as well, and she received a second strike. In the absence of the original footage, altered screenshots from her original TikTok circulated, making it appear as though she had opened the alleged perpetrator’s door to record him, casting her as the one violating his privacy. According to TikTok, the company removed Henderson’s videos for displaying “content that shows or promotes sexual abuse and exploitation, including having, sharing, or creating intimate images (real or edited) of someone without their consent.”
Both of the charges against Henderson are class E felonies, with the potential penalty of up to four years in prison for each charge.
Larose says the sensitive nature of the allegations is why she didn’t feel compelled to comment on the situation publicly.
“Sexual assault is not something you want to play with, and you can’t freely talk about it without knowing [all] the facts.”
But the bot account had other plans, posting videos using her likeness and that of another Black creator without their consent.
Larose said she already knew about the account because it had used her face in 10 other videos before. Mostly, the videos were commentaries on pop culture topics and celebrity culture, the most viral one being about a prank gone wrong. As with those earlier cases, she says she quickly asked TikTok to remove the videos. Each time, she says, her requests were denied.
It wasn’t until a more well-known Black creator, @notKHRIS, stitched the bot account’s video to warn others about this misleading AI-generated deepfake video and its use of digital blackface that more people reported the page. As a result, the page was finally removed from the app.
Still, for days, Larose says, other accounts posted AI-generated videos with her face—not only pertaining to the DoorDash incident. “What was also annoying about the situation is the voice that they use in certain videos is unfair and harmful, because they’re imitating a certain type of [Black] stereotype.”
WIRED reached out to dtff2727, one of the bot accounts posting deepfakes of Black creators, for comment and did not receive a response. As of this writing, the account has a total of 19 AI-generated videos exploiting Larose’s likeness.
Digital minstrels like these bot accounts and AI influencers not only profit from this content but also hide behind the anonymity that digital blackface affords, rage-baiting and reinforcing stereotypes about Black communities online.
OpenAI’s Sora 2, like Google’s Veo, has added to the proliferation of AI slop on TikTok, much of it plagued with racist, sexist, ableist, and classist biases.
Before the page was removed, the TikTok account @impossible_asmr1 posted Sora-generated AI content of Black women yelling in public about food stamps. The clips depicted realistic-looking Black women using butchered African American Vernacular English, or blaccent, complaining in stores about not being able to use SNAP benefits to purchase fast food or alcohol, or even selling EBT in exchange for cash. Some clips had Sora watermarks and others did not. But once the clips went viral, they ignited a faux panic about Black people misusing the welfare state, on the heels of the Trump administration and Supreme Court halting SNAP benefits. Recently, OpenAI had to block users from making videos of Martin Luther King Jr. on its Sora app after his estate protested the spread of dehumanizing, minstrelsy AI depictions of the civil rights leader.
In a statement to WIRED, OpenAI spokesperson Niko Felix said that OpenAI’s policies prohibit “misleading others through impersonation, scams, or fraud.” The spokesperson said the company is working to “detect patterns of misuse and apply penalties or remove content when violations occur.”
Digital blackface is more than just the act of pretending to be Black for entertainment or profit, says Yeshimabeit Milner, founder and CEO of Data for Black Lives, an advocacy group fighting against discriminatory uses of data and algorithms. “It’s about harness[ing] the power of these very violent stereotypes of Black people for the purpose of pushing a specific political agenda. One would call it social or cultural engineering to create the sort of chaos, conversations, and strife that drives up viewership and engagement,” she says.
Similarly, in 2018, the Cambridge Analytica scandal revealed how the data of 87 million Facebook users was used to sow the seeds of political divisiveness between the right and left. Milner says, “Trusted messengers online were impersonating Black, Latino, and white people as well as entire [political] organizations and chapters to build a consensus that certain demographics of people think a certain way.”
Some Black content creators are pushing back. Zaria Imani tells WIRED that she is pursuing legal action for “copyright infringement” against multiple bot pages that use her likeness in AI-generated commentary videos as part of content farming.
Professor Meredith Broussard, author of More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech, says content creators should be granted the same protections and safeguards as celebrities and copyrighted characters, who’ve raised alarms about their likeness being used in AI videos without their permission.
“This is a structural issue that the platform needs to fix,” Broussard says.
In May 2025, the Take It Down Act was signed into law, criminalizing the distribution of (authentic and computer-generated) nonconsensual intimate imagery, such as AI-generated deepfakes and “revenge porn.”
While Big Tech companies can work toward minimizing some of the harms caused by AI, it’s becoming increasingly clear that bigger intervention is needed to hold these companies truly accountable, according to Milner.
“With actual education and collective action, we can do more than just get TikTok to stop. We can really push for legislation that’s going to make this completely not OK,” she says.
The post The Viral ‘DoorDash Girl’ Saga Unearthed a Nightmare for Black Creators appeared first on Wired.