DNYUZ
Inside OpenAI’s Raid on Thinking Machines Lab

January 15, 2026

If someone ever makes an HBO Max series about the AI industry, the events of this week will make quite the episode.

On Wednesday, OpenAI’s CEO of applications, Fidji Simo, announced the company had rehired Barret Zoph and Luke Metz, cofounders of Mira Murati’s AI startup, Thinking Machines Lab. Zoph and Metz had left OpenAI in late 2024.

We reported last night on two narratives forming around what led to the departures, and have since learned new information.

A source with direct knowledge says that Thinking Machines leadership believed Zoph engaged in an incident of serious misconduct while at the company last year. That incident broke Murati’s trust, the source says, and disrupted the pair’s working relationship. The source also alleged Murati fired Zoph on Wednesday—before knowing he was going to OpenAI—due to what the company claimed were issues that arose after the alleged misconduct. Around the time the company learned that Zoph was returning to OpenAI, Thinking Machines raised concerns internally about whether he had shared confidential information with competitors. (Zoph has not responded to several requests for comment from WIRED.)

Meanwhile, in a Wednesday memo to employees, Simo claimed the hires had been in the works for weeks and that Zoph told Murati he was considering leaving Thinking Machines on Monday—prior to the date he was fired. Simo also told employees that OpenAI doesn’t share Thinking Machines’ concerns about Zoph’s ethics.

Alongside Zoph and Metz, another former OpenAI researcher who was working at Thinking Machines, Sam Schoenholz, is rejoining the ChatGPT maker, per Simo’s announcement. At least two more Thinking Machines employees are expected to join OpenAI in the coming weeks, according to a source familiar with the matter. Technology reporter Alex Heath was first to report the additional hires.

A separate source familiar with the matter pushed back on the perception that the recent personnel changes were wholly related to Zoph. “This has been part of a long discussion at Thinking Machines. There were discussions and misalignment on what the company wanted to build—it was about the product, the technology, and the future.”

Thinking Machines Lab and OpenAI declined to comment.

In the aftermath of these events, we’ve been hearing from several researchers at leading AI labs who say they are exhausted by the constant drama in their industry. This specific incident is reminiscent of OpenAI’s brief ouster of Sam Altman in 2023, known inside OpenAI as “the blip.” Murati played a key role in that event as the company’s then chief technology officer, according to reporting from The Wall Street Journal.

In the years since Altman’s ouster, the drama in the AI industry has continued, with departures of cofounders at several major AI labs, including xAI’s Igor Babuschkin, Safe Superintelligence’s Daniel Gross, and Meta’s Yann LeCun (he did cofound Facebook’s longstanding AI lab, FAIR, after all).

Some might argue the drama is justified for a nascent industry whose expenditures are contributing to America’s GDP growth. Also, if you buy into the idea that one of these researchers might crack a few breakthroughs on the path to AGI, it’s probably worth tracking where they’re going.

That said, many researchers started working before ChatGPT’s breakout success and appear surprised that their industry is now the source of nearly constant scrutiny.

As long as researchers can keep raising billion-dollar seed rounds on a whim, we’re guessing the AI industry’s power shake-ups will continue apace. HBO Max writers, lock in.

Got a Tip? Are you a current or former AI researcher who wants to talk about what’s happening? We’d like to hear from you. Using a nonwork phone or computer, contact the reporter securely on Signal at mzeff.88.

How AI Labs Are Training Agents to Do Your Job

People in Silicon Valley have been musing about AI displacing jobs for decades. In the past few months, however, the efforts to actually get AI to do economically valuable work have become far more sophisticated.

AI labs are smartening up about the data they’re using to create AI agents. Last week, WIRED reported that OpenAI has been asking third-party contractors from the firm Handshake to upload examples of their real work from previous jobs to evaluate OpenAI’s agents. The companies ask employees to scrub these documents of any confidential data and personally identifiable information. While it’s possible some corporate secrets or names slip by, that’s likely not what OpenAI is after (though the company could get in serious trouble if that happens, experts say).

AI labs are more interested in getting realistic examples of work created by a McKinsey consultant, Goldman Sachs investment banker, or Harvard doctor. That’s why data suppliers such as Mercor specifically seek out professionals who have worked at these companies in their job postings.

Handshake, Mercor, Surge, and Turing are some of the major data suppliers that AI labs rely on to get this data. In the past year, data firms have started paying upwards of $100 an hour to contract top talent for AI labs.

One way they’re using this data is to create “environments,” which are essentially boring video games that teach AI agents how to use enterprise software applications. The idea is that AI agents can practice in these environments and learn to use the real-world software that professionals rely on to do their jobs.

“Over the past year, labs have increasingly recognized that they need to train and fine-tune models for a whole bunch of areas of knowledge work, including legal, health care, consulting, and banking,” says Aaron Levie, the CEO of the enterprise company Box, which offers enterprise agents powered by models from OpenAI, Anthropic, and Google. “These firms have been hiring contractors to generate datasets and rubrics, which offer ways that they can train and evaluate the model so it can get better at particular skills.”

Whether this is enough to train AI agents to execute office tasks accurately and consistently remains to be seen. AI labs have significantly improved their agents in the past year, as shown by viral products like Claude Code, which people are increasingly using for tasks outside of coding. If that’s any indication of what’s to come for other industries, it’s worth watching these enterprise agents.


This is an edition of the Model Behavior newsletter. Read previous newsletters here.

The post Inside OpenAI’s Raid on Thinking Machines Lab appeared first on Wired.

DNYUZ © 2025
