- Major newspapers published a summer reading list filled with AI-invented books that were falsely attributed to real authors, sparking backlash over unchecked AI use in journalism.
- A freelance writer admitted using ChatGPT to generate the list without verification; the error-ridden content was syndicated to multiple outlets and went unnoticed until after publication.
- The fabricated list included elaborate and very woke descriptions of nonexistent books, with only a few real entries, illustrating how convincing AI-generated misinformation can be.
- Newsrooms, already under financial strain, faced criticism for failing to catch the errors, with one outlet calling it a "serious breach" of policy.
- The scandal highlights the risks of AI in journalism, threatening public trust as cost-cutting and automation replace rigorous fact-checking at some outlets.
Major newspapers such as the Chicago Sun-Times and Philadelphia Inquirer published a summer reading list filled with entirely fabricated books that were hallucinated by AI and falsely attributed to real authors. The debacle, first exposed by social media users, has reignited concerns about journalism’s reliance on unchecked artificial intelligence amid industry-wide cost-cutting pressures.
Freelance writer Marco Buscaglia admitted to using ChatGPT to generate the list without verification, sending it directly to syndicator King Features, which distributed the error-ridden content to multiple outlets. The incident underscores a dangerous trend: as newsrooms shrink, AI-generated misinformation risks eroding public trust in media.
A list built on fiction
The now-retracted reading list, part of a syndicated "Heat Index" package, included elaborate descriptions of nonexistent books by acclaimed authors. Among the AI-invented titles were Isabel Allende’s Tidewater Dreams (described as “a multigenerational saga set in a coastal town where magical realism meets environmental activism”) and Min Jin Lee’s Nightshade Market (“a riveting tale set in Seoul’s underground economy”). Only a handful of entries, like Françoise Sagan’s Bonjour Tristesse, were real.
Researchers call such fabrications “AI hallucinations”: a persistent flaw in large language models that invent plausible-sounding falsehoods. Yet neither Buscaglia nor King Features caught the errors before publication. “I just look for information,” Buscaglia told The Atlantic, defending his reliance on ChatGPT for freelance work. “I’ll source it; I’ll say where it’s from.” In this case, he failed to verify the AI’s output, later calling the incident a “huge mistake” and admitting, “It’s on me 100 percent.”
Mainstream media continues to lose trust
The Sun-Times, which recently cut 20% of its staff, initially distanced itself from the scandal, calling the content “licensed” and “not approved by the newsroom.” A spokesperson stated, “This should be a learning moment for all of journalism.”
The Inquirer’s CEO, Lisa Hughes, went further, calling the use of AI a “serious breach” of internal policies. King Features terminated its relationship with Buscaglia, emphasizing its policy against AI-generated content.
But the damage extended beyond books. Other articles in the “Heat Index” supplement quoted nonexistent experts, including a “Cornell University food anthropologist” and a “FirepitBase.com editor,” fabrications readers quickly debunked. The Sun-Times later removed the section from its e-paper and pledged policy updates, vowing it is “committed to making sure this never happens again.”
The scandal highlights a broader crisis in journalism, where shrinking budgets and reliance on freelance labor collide with the seductive efficiency of AI. While tools like ChatGPT can aid research, their unchecked use threatens the integrity of reporting. As Buscaglia lamented, “If people want all this content, they know that I can’t write 48 stories.” His rationalization reveals a troubling normalization of corner-cutting in mainstream media.
As the Sun-Times conceded, trust in media hinges on “the relationship our very real, human reporters and editors have with our audiences.” When that bond is broken by unvetted automation, the cost isn’t just embarrassment; it’s the erosion of truth itself.
Sources for this article include:
ZeroHedge.com
JustTheNews.com
Axios.com
TheGuardian.com