Librarians, and the books they cherish, are already fighting a losing battle for our attention spans against all kinds of tech-enabled brainrot.
Now, in a further assault on their sanity, AI models are generating so much slop that students and researchers keep coming into libraries and asking for journals, books, and records that don’t exist, Scientific American reports.
In a statement from the International Committee of the Red Cross spotted by the magazine, the humanitarian organization cautioned that AI chatbots like ChatGPT, Gemini, and Copilot are prone to generating fabricated archival references.
“These systems do not conduct research, verify sources, or cross-check information,” the ICRC, which maintains a vast library and archives, said in the warning. “They generate new content based on statistical patterns, and may therefore produce invented catalogue numbers, descriptions of documents, or even references to platforms that have never existed.”
Sarah Falls, the Library of Virginia’s chief of researcher engagement, told SciAm that these AI inventions are wasting the time of librarians asked to hunt down nonexistent records. Fifteen percent of the emailed reference questions her library receives, she estimates, are now ChatGPT-generated, including requests for hallucinated primary source documents and published works.
“For our staff, it is much harder to prove that a unique record doesn’t exist,” Falls added.
Other librarians and researchers have spoken out about AI’s effects on their profession.
“This morning I spent time looking up citations for a student,” wrote one user on Bluesky who identified themselves as a scholarly communications librarian. “By the time I got to the third (with zero results), I asked where they got the list, and the student admitted they were from Google’s AI summary.”
“As a librarian who works with researchers,” another wrote, “can confirm this is true.”
AI companies have put a heavy focus on creating powerful “reasoning” models aimed at researchers, tools that promise to conduct vast amounts of research from just a few prompts. OpenAI released its agentic model for conducting “deep research” in February, claiming it works “at the level of a research analyst.” At the time, OpenAI said the model hallucinated at a lower rate than its other models, but admitted it struggled to separate “authoritative information from rumors” and to convey uncertainty when presenting its findings.
The ICRC warned about that pernicious flaw in its statement. AIs “cannot indicate that no information exists,” it stated. “Instead, they will invent details that appear plausible but have no basis in the archival record.”
Though AI’s hallucinatory habit is well known by now, and though no one in the AI industry has made particularly impressive progress in clamping down on it, the tech continues to run amok in academic research. Scientists and researchers, who you’d hope would be as empirical and skeptical as possible, are being caught left and right submitting papers filled with AI-fabricated citations. The field of AI research itself, ironically, is drowning in a flood of AI-written papers, with some academics publishing upwards of one hundred shoddily written studies a year.
And since none of this happens in a vacuum, authentic, human-written sources and papers are now being drowned out by the noise.
“Because of the amount of slop being produced, finding records that you KNOW exist but can’t necessarily easily find without searching, has made finding real records that much harder,” lamented a researcher on Bluesky.
More on AI: Grok Will Now Give Tesla Drivers Directions