OpenAI is asking third-party contractors to upload real assignments and tasks from their current or previous workplaces so that it can use the data to evaluate the performance of its next-generation AI models, according to records from OpenAI and the training data company Handshake AI obtained by WIRED.
The project appears to be part of OpenAI’s efforts to establish a human baseline for different tasks that can then be compared with AI models. In September, the company launched a new evaluation process to measure the performance of its AI models against human professionals across a variety of industries. OpenAI says this is a key indicator of its progress toward achieving AGI, an AI system that outperforms humans at most economically valuable tasks.
“We’ve hired folks across occupations to help collect real-world tasks modeled off those you’ve done in your full-time jobs, so we can measure how well AI models perform on those tasks,” reads one confidential document from OpenAI. “Take existing pieces of long-term or complex work (hours or days+) that you’ve done in your occupation and turn each into a task.”
OpenAI is asking contractors to describe tasks they’ve done in their current job or in the past and to upload real examples of work they did, according to an OpenAI presentation about the project viewed by WIRED. Each of the examples should be “a concrete output (not a summary of the file, but the actual file), e.g., Word doc, PDF, PowerPoint, Excel, image, repo,” the presentation notes. OpenAI says people can also share fabricated work examples created to demonstrate how they would realistically respond in specific scenarios.
OpenAI and Handshake AI declined to comment.
Real-world tasks have two components, according to the OpenAI presentation. There’s the task request (what a person’s manager or colleague told them to do) and the task deliverable (the actual work they produced in response to that request). The company emphasizes multiple times in instructions that the examples contractors share should reflect “real, on-the-job work” that the person has “actually done.”
One example in the OpenAI presentation outlines a task from a “Senior Lifestyle Manager at a luxury concierge company for ultra-high-net-worth individuals.” The goal is to “Prepare a short, 2-page PDF draft of a 7-day yacht trip overview to the Bahamas for a family who will be traveling there for the first time.” It includes additional details regarding the family’s interests and what the itinerary should look like. The “experienced human deliverable” then shows what the contractor in this case would upload: a real Bahamas itinerary created for a client.
OpenAI instructs the contractors to delete corporate intellectual property and personally identifiable information from the work files they upload. Under a section labeled “Important reminders,” OpenAI tells the workers to “Remove or anonymize any: personal information, proprietary or confidential data, material nonpublic information (e.g., internal strategy, unreleased product details).”
One of the files viewed by WIRED mentions a ChatGPT tool called “Superstar Scrubbing” that provides advice on how to delete confidential information.
Evan Brown, an intellectual property lawyer with Neal & McDevitt, tells WIRED that AI labs that receive confidential information from contractors at this scale could be subject to trade secret misappropriation claims. Contractors who hand over documents from previous workplaces to an AI company, even scrubbed ones, could risk violating their former employers’ non-disclosure agreements or exposing trade secrets.
“The AI lab is putting a lot of trust in its contractors to decide what is and isn’t confidential,” says Brown. “If they do let something slip through, are the AI labs really taking the time to determine what is and isn’t a trade secret? It seems to me that the AI lab is putting itself at great risk.”
The documents reveal one strategy AI labs are using to prepare their models to excel at real-world tasks. Firms like OpenAI, Anthropic, and Google are hiring armies of contractors who can generate high-quality training data in order to develop AI agents capable of automating enterprise work.
AI labs have long relied on third-party contracting firms such as Surge, Mercor, and Scale AI to hire and manage networks of data contractors. In recent years, however, AI labs have required higher-quality data in order to improve their models, forcing them to pay more for skilled talent capable of producing it. That has created a lucrative sub-industry within the AI training world. Handshake said it was valued at $3.5 billion in 2022, while Surge reportedly valued itself at $25 billion in fundraising talks last summer.
OpenAI appears to have explored other ways of sourcing real company data. An individual who helps companies sell assets after they go out of business told WIRED that a representative of OpenAI inquired about obtaining data from these firms, provided that personally identifiable information could be removed. The source, who spoke to WIRED on condition of anonymity because they did not want to sour any business relationships, said the data would have included documents, emails, and other internal communications. The source said they chose not to pursue the idea because they were not confident that personal information could be completely scrubbed.