A key type of AI training data is running out. Googlers have a bold new idea to fix that.

September 15, 2025
Google DeepMind CEO Demis Hassabis.

Benoit Tessier/Reuters

  • Google DeepMind researchers have found a new way to make use of data deemed unsafe for AI training.
  • Labs try to avoid data that is toxic, inaccurate, or contains personally identifiable information.
  • The researchers believe it could solve a big bottleneck in AI training.

Google DeepMind researchers have an idea for how to solve the AI data drought, and it might involve your Social Security number.

The large language models powering AI require vast amounts of training data pulled from webpages, books, and other sources. When it comes to text specifically, data on the web considered fair game for training AI models is being scraped faster than new data is being created.

However, a large portion of that data goes unused because it is deemed toxic or inaccurate, or because it contains personally identifiable information.

In a newly published paper, a group of Google DeepMind researchers claim to have found a way to clean up this data and make it usable for training, which they say could be a “powerful tool” for scaling up frontier models.

They refer to the idea as Generative Data Refinement, or GDR. The method uses pretrained generative models to rewrite the unusable data, effectively purifying it so it can be safely trained on. It’s not clear if this is a technique Google is using for its Gemini models.

Minqi Jiang, one of the paper’s researchers, who has since left the company for Meta, told Business Insider that many AI labs are leaving usable training data on the table because it is intermingled with bad data. For example, if a document on the web contains something considered unusable, such as someone’s phone number or an incorrect fact, labs will often discard the entire thing.

“So you essentially lose all those tokens inside of that document, even if it was a small single line that contained some personally identifying information,” said Jiang. Tokens are the units of data that AI models process; in text, they correspond to words or fragments of words.

The authors give an example of raw data that includes someone’s Social Security number or information that may soon be out of date (“the incoming CEO is…”). In these cases, GDR would swap out or remove the numbers, drop the information that risks becoming obsolete, and retain the rest of the usable data.
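The rewriting step the researchers describe can be sketched in a few lines of Python. Everything below, including the prompt wording, the rewrite_document helper, and the toy_model stand-in, is a hypothetical illustration of the idea rather than code from the DeepMind paper; a real setup would route the prompt to an actual pretrained generative model.

from typing import Callable
import re

# Hypothetical GDR-style rewriting sketch (not from the DeepMind paper).
# `model` is any pretrained generative model exposed as a text-in, text-out callable,
# for example a thin wrapper around an LLM API or a local checkpoint.

REWRITE_PROMPT = """You are cleaning a document for use as AI training data.
Rewrite the document so that:
- personally identifiable information (names, phone numbers, Social Security numbers)
  is replaced with realistic placeholders,
- statements likely to become outdated (e.g. "the incoming CEO is ...") are dropped,
- all other content is preserved verbatim.

Document:
{document}

Cleaned document:"""


def rewrite_document(document: str, model: Callable[[str], str]) -> str:
    """Return a refined version of the document produced by the generative model."""
    return model(REWRITE_PROMPT.format(document=document))


if __name__ == "__main__":
    # Toy stand-in model so the sketch runs end to end: it only masks digits,
    # which is far cruder than a genuine generative rewrite.
    def toy_model(prompt: str) -> str:
        document = prompt.split("Document:\n", 1)[1].rsplit("\n\nCleaned document:", 1)[0]
        return re.sub(r"\d", "X", document)

    raw = "Reach Jane at 555-01-2345. Quarterly revenue grew at a steady pace."
    print(rewrite_document(raw, toy_model))

The point of rewriting rather than filtering is the one Jiang describes: instead of throwing away a whole document because one line contains a phone number, the model keeps the surrounding usable tokens and repairs only the offending span.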

The paper was written more than a year ago and was only published this month. A Google DeepMind spokesperson did not respond to a request for comment on whether the researchers’ work is being applied to the company’s AI models.

The authors’ findings could prove helpful for labs as the well of usable data runs dry. They cite a 2022 research paper that predicted AI models could soak up all human-generated text between 2026 and 2032. That prediction was based on the amount of indexed web data, using statistics from Common Crawl, a project that continuously scrapes web pages and makes them openly available for AI labs to use.

For the GDR paper, the researchers performed a proof of concept by taking over one million lines of code and having human expert labelers annotate the data line by line. They then compared the results with the GDR method.

“It completely crushes the existing industry solutions being used for this kind of stuff,” said Jiang.

The authors also said their method is better than using synthetic data (data generated by AI models for the purpose of training themselves or other models), which has been a topic of exploration among AI labs. Synthetic data can degrade the quality of model output and, in some cases, lead to “model collapse.”

The authors compared the GDR data against synthetic data created by an LLM and discovered that their approach created a better dataset for training AI models.

They also said further testing could be conducted on other complicated types of data currently considered off-limits, such as copyrighted materials and personal data that is inferred across multiple documents rather than explicitly spelled out.

The paper has not been peer reviewed, said Jiang, adding that this is common in the tech industry and that all papers are reviewed internally.

The researchers tested GDR only on text and code. Jiang said it could also be tested on other modalities, such as video and audio. However, given the rate at which new videos are created each day, video still provides a firehose of data for AI to train on.

“With video, you’re just going to have a lot more of it, just because there’s a constant stream of millions of hours of video generated each day,” said Jiang. “So I do think, going across new modalities beyond text, video, and images, we’re going to unlock a lot more data.”

Have something to share? Contact this reporter via email at [email protected] or Signal at 628-228-1836. Use a personal email address and a non-work device; here’s our guide to sharing information securely.

Read the original article on Business Insider
