This week, OpenAI started testing ads on ChatGPT. I also resigned from the company after spending two years as a researcher helping to shape how A.I. models were built and priced, and guiding early safety policies before standards were set in stone.
I once believed I could help the people building A.I. get ahead of the problems it would create. This week confirmed my slow realization that OpenAI seems to have stopped asking the questions I’d joined to help answer.
I don’t believe ads are immoral or unethical. A.I. is expensive to run, and ads can be a critical source of revenue. But I have deep reservations about OpenAI’s strategy.
For several years, ChatGPT users have generated an archive of human candor that has no precedent, in part because people believed they were talking to something that had no ulterior agenda. Users are interacting with an adaptive, conversational voice to which they have revealed their most private thoughts. People tell chatbots about their medical fears, their relationship problems, their beliefs about God and the afterlife. Advertising built on that archive creates a potential for manipulating users in ways we don’t have the tools to understand, let alone prevent.
Many people frame the problem of funding A.I. as choosing the lesser of two evils: restricting access to transformative technology to the select group wealthy enough to pay for it, or accepting advertisements even if that means exploiting users’ deepest fears and desires to sell them products. I believe that’s a false choice. Tech companies can pursue options that keep these tools broadly available while limiting any company’s incentives to surveil, profile and manipulate its users.
OpenAI says it will adhere to principles for running ads on ChatGPT: The ads will be clearly labeled, appear at the bottom of answers and will not influence responses. I believe the first iteration of ads will probably follow those principles. But I’m worried subsequent iterations won’t, because the company is building an economic engine that creates strong incentives to override its own rules. (The New York Times has sued OpenAI for copyright infringement of news content related to A.I. systems. OpenAI has denied those claims.)
In its early years, Facebook promised that users would control their data and be able to vote on policy changes. Those commitments eroded. The company eliminated public votes on policy. Privacy changes marketed as giving users more control over their data were found by the Federal Trade Commission to have done the opposite, making private information public. All of this happened gradually, under pressure from an advertising model that rewarded engagement above all else.
The erosion of OpenAI’s own principles in the pursuit of engagement may already be underway. The company’s principles prohibit optimizing user engagement solely to generate more advertising revenue, yet it has been reported that the company already optimizes for daily active users, likely by encouraging the model to be more flattering and sycophantic. That optimization can make users feel more dependent on A.I. for support in their lives. We’ve seen the consequences of that dependence, including psychiatrists documenting instances of “chatbot psychosis” and allegations that ChatGPT reinforced suicidal ideation in some users.
Still, advertising revenue can help ensure that access to the most powerful A.I. tools isn’t reserved for those who can pay. Sure, Anthropic says it will never run ads on Claude, but Claude has a small fraction of ChatGPT’s 800 million weekly users, and its revenue strategy is entirely different. Moreover, top-tier subscriptions for ChatGPT, Gemini and Claude now cost $200 to $250 a month, more than 10 times the price of a standard Netflix subscription, for a single piece of software.
So the real question is not ads or no ads. It is whether we can design structures that avoid both excluding people from these tools and manipulating them as consumers. I think we can.
One approach is explicit cross-subsidies: using profits from one service or customer base to offset losses from another. If a business pays for A.I. to do, at scale, high-value labor that was once the job of human employees — for example, a real-estate platform using A.I. to write listings or valuation reports — it should also pay a surcharge that subsidizes free or low-cost access for everyone else.
This approach takes some inspiration from what we already do with essential infrastructure. The Federal Communications Commission requires telecom carriers to contribute to a fund to keep phone and broadband affordable in rural areas and to low-income households. Many states add a public-benefits charge to electricity bills to provide low-income assistance.
A second option is to accept advertising but pair it with real governance — not a blog post of principles, but a binding structure with independent oversight over how personal data is used. There are partial precedents for this. German co-determination law requires large companies like Siemens and Volkswagen to give workers up to half the seats on supervisory boards, showing that formal stakeholder representation can be mandatory inside private firms. Meta is bound to follow content moderation rulings issued by its Oversight Board, an independent body of outside experts (though its efficacy has been criticized).
What the A.I. industry needs is a combination of these approaches — a board that includes both independent experts and representatives of the people whose data is at stake, with binding authority over what conversational data can be used for targeted advertisement, what counts as a material policy change and what users are told.
A third approach involves putting users’ data under independent control through a trust or cooperative with a legal duty to act in users’ interests. For instance, MIDATA, a Swiss cooperative, lets members store their health data on an encrypted platform and decide, case by case, whether to share it with researchers. MIDATA’s members govern its policies at a general assembly, and an ethics board they elect reviews research requests for access.
None of these options is easy. But we still have time to work them out and avoid the two outcomes I fear most: a technology that manipulates the people who use it free of charge, and one that benefits only the few who can afford to pay for it.
Zoë Hitzig is a former researcher at OpenAI. She is a junior fellow at the Harvard Society of Fellows.