When Business Insider learned in August that two freelance pieces it published under the byline “Margaux Blanchard” appeared to be written by AI, the site removed them, saying the first-person essays didn’t meet its standards.
But last month, Business Insider began publishing AI-generated stories carrying the byline “Business Insider AI News Desk,” a sign of automation’s increasing role in news production at a time when thousands of jobs are being cut. Editor-in-Chief Jamie Heller told TheWrap that the newsroom can leverage AI to produce quick stories that would be of interest to readers and for which additional reporting wouldn’t necessarily “add a ton of value.”
Such AI-generated stories, ranging from chief executive obituaries to politics briefs to the latest Powerball jackpot, are overseen by human editors and are part of a month-long pilot program at Business Insider, which ramped up its use of AI this past year. The move comes at a sensitive time. When CEO Barbara Peng announced plans in May to go “all-in on AI,” the company had just laid off a fifth of its staff. At a union rally last month, one employee said that “people are feeling very threatened by the rollout of all of this.”
Despite these fears, Heller insists that AI doesn’t hold “a candle to reporters.” Whether “getting people on the phone, going out to conferences, bearing witness at events, meeting people, building relationships, building trust — AI does zero of that,” Heller said. “But what it can do, we should try to learn and see what its capabilities are, and we’re still in the early innings.”
Business Insider is just one of many newsrooms across the country, from the Washington Post to the Los Angeles Times, that are grappling with how to deploy generative artificial intelligence in ways that increase speed and scale without undermining trust — or the role of journalists. There are already a number of cautionary tales, with Sports Illustrated, CNET and Gannett all drawing controversy over past AI experiments, so the pressure is on news outlets to incorporate AI without wading into an ethical minefield and tainting their hard-earned reputations.
The possibilities and pitfalls of AI are front of mind for media leaders as the new year gets under way, evident in a scan of NiemanLab’s 2026 predictions: “AI will rewrite the architecture of the newsroom,” “AI breaks the hamster wheel of journalism” and “AI will force us to be more ambitious, more human storytellers,” to cite just a few.
Of course, news organizations have long adopted new technologies to gather and distill information, from data-driven reporting tools to machine learning used to analyze complex datasets. The New York Times has an eight-person AI team working regularly with reporters on specific stories and handling large document dumps, like the Epstein files, while also building internal tools for journalists to use independently.
But advances in AI clearly present risks for an industry where public trust has steadily declined. The stakes are high when it comes to rolling out reader-facing AI tools, given that any perceived departure from a news outlet’s mission of providing reliable and vetted information can damage the brand, especially as Americans remain skeptical of AI use.
Roughly half of US adults polled last spring by Pew said that AI will have a very or somewhat negative impact on news over the next 20 years; only 10% said it will have a very or somewhat positive effect. Regardless of such reservations, and following some missteps at major news outlets over the past year, industry leaders expect experimentation to only accelerate in 2026.
Take the Washington Post, which faced pushback in December after rolling out AI-generated podcasts that its own journalists said produced errors and flouted standards. “These errors are a threat to the core of what we do,” one staffer wrote on Slack. The Post, however, didn’t drop the project, telling TheWrap: “This is how products get built and developed in the digital age: ideation, research, design and prototyping, development, and then Beta.”
Jonathan Soma, a data journalism professor at Columbia Journalism School, noted that in the tech world, “moving fast and breaking things is normal and accepted and wonderful, and if you’re right nine times out of 10, who cares about that other 10%?
“But in journalism, all we really have is integrity,” Soma added. “So if something goes even a little bit wrong with the Washington Post, everyone is going to lose their minds for better or worse.”
Behind the Times’ AI strategy
Zach Seward estimates he’s spoken to 93% of the New York Times newsroom since he joined just over two years ago as the paper’s first editorial director of AI initiatives.
Part of Seward’s role has been “demystifying AI,” he told TheWrap, as “AI means a million things and everyone comes to it with a lot of baggage.” He recalled laying “out principles for the use of generative AI in the Times newsroom before we did any actual experimentation with the technology.”
Now a couple years in, Seward said reporters are increasingly using AI in their work, especially for investigative projects, organizing “massive messy data sets that just were impenetrable previously.”
Seward said the AI team did a lot of work this past year with the Times’ Washington bureau in helping comb through voluminous public statements of Trump cabinet officials, many of whom came from the world of television and have spoken extensively on a variety of platforms. One Washington-based member of the AI team, machine-learning engineer and journalist Dylan Freedman, has shared a byline on recent stories related to the Epstein files and Donald Trump’s health, while contributing to many others.

Such research-intensive use cases, Seward said, don’t “raise any of the ethical questions that are rightly asked about using AI for writing.” The Times allows AI use for some editorial tasks, like brainstorming search-optimized headlines, but not for writing articles. Seward described the published article as a “red line” and “sacrosanct,” while experimentation is permitted downstream of it.
What’s changed in the past couple years, Seward said, is going “from doing one-off projects where we’re doing basically all of the technical work” to creating AI tools for journalists to use. One such tool, called Cheat Sheet, can be used to analyze large data sets of documents, photos, or transcripts and display the findings in a spreadsheet. The team also created an internal “Manosphere” report, an automated daily email for editors and reporters that summarizes podcasts geared toward a heavily male, right-leaning audience.
Rather than looking for an AI solution to everything, Seward said the team tries to focus on “the gnarliest, most challenging problems and do those with an eye toward building tools to make that kind of analysis repeatable.”
“We are the AI team, so there is a risk of being an AI hammer that sees nails everywhere,” Seward added. “But we try to check that impulse. We are also AI skeptics ourselves. We are not going around the newsroom to boost this technology for its own sake. We see some potential uses amid what we already do really well in other ways.”
Possibilities and pitfalls
Beyond kicking off the AI article pilot program last month, Business Insider also launched five newsletters on niche topics, such as the future of driving, that use AI to canvass its site and factor in audience engagement for curation. An editor can modify the drafted newsletters, which are primarily links with limited copy, before they are sent out.
Heller makes a distinction between these shorter newsletters, or briefs, and the site’s more substantive ones, like First Trade, which was launched in October and is written by executive editor Joe Ciolli, and Tech Memo, written by Alistair Barr. “It’s their voice, it’s their smarts and their authority,” Heller said of those marquee newsletters. “The more AI can help us do other stuff, the more time we have to put into [what] makes our journalism most distinctive,” she said.

Heller noted that Business Insider’s owner, German conglomerate Axel Springer, is tech-forward and has stressed the importance of AI in publishing. “So we’ve been approaching it with curiosity,” she said, “and seeing it as more of an opportunity than a threat.”
“We’ve had a very scientific approach: trying things, seeing if they worked, trying to learn from them. If they didn’t, should we adjust? Should we move on?” she said. “Neither boosterish nor cynical.”
When it comes to accepting submissions, a Business Insider spokesperson said the site has “bolstered our verification protocols” since the publication of the two freelance pieces, which “were removed because we were unable to verify the identity or veracity of the person whose byline appeared on our site.”
Given that experimentation across the industry is sure to continue in 2026, Soma suggested that newsrooms build “a really strict evaluations culture,” a process of assessing “what might go wrong” and stopping it preemptively, or, if something does go awry, having levers in place to block it.
Last month, Soma assigned his Columbia students to create a tool that provides AI analysis of opinion columns, editorials and commentary, not unlike what the Los Angeles Times first attempted last March. The paper came under fire after its “Insights” AI tool downplayed the Ku Klux Klan while automatically providing an opposing viewpoint on an article.
In his work, Soma said he aims to bridge the divide between C-suite executives who are generally more enthusiastic about utilizing AI in news production, technologists who are excited to play with new toys and journalists who may be more skeptical.
“Journalists on an individual level will often love AI because they will use it for research, they’ll use it for ideation, they’ll use it for feedback,” he said. “But on an institutional level, it is different because they have thoughts about their jobs, they have thoughts about the state of journalism, trust with audiences, all of which are 100% valid.”
Going forward, Seward said he hopes that “some of the proven AI use cases can mature and recede into the background,” while phasing out others that “are just not as fruitful or as worth the time and energy as they were hyped up to be.”
The post After a Rocky Year, Newsrooms Push Deeper Into AI appeared first on TheWrap.