Unveiling the Dangers of AI-Generated Content
As the warm weather ushers in thoughts of serene summer days at the beach, many of us look forward to indulging in a few captivating reads. Recently, Tina, a digital creator and co-host of the podcast Book Talk, Etc., came across a surprising article in the Chicago Sun-Times. The piece, a “Summer Reading List for 2025”, might have been a delightful starting point for anyone seeking fresh titles, but given her expertise in the literary world, Tina quickly noticed that a number of the entries were fictional.
After snapping a picture of the article, Tina took to her Threads account to voice her suspicion that the newspaper had relied on AI for these recommendations, and the post soon gained traction on Bluesky and Reddit’s Chicago subreddit. I’m not a subscriber myself, but the Sun-Times later confirmed that the list really did run in its pages.
Among the 15 titles featured in this summer reading roundup, only five are genuine works: Bonjour Tristesse by Françoise Sagan, Beautiful Ruins by Jess Walter, Dandelion Wine by Ray Bradbury, Call Me by Your Name by André Aciman, and Atonement by Ian McEwan. Interestingly, these real titles are not new releases, and they happen to be the last five on the list. The remaining ten entries are entirely fabricated, with titles such as The Last Algorithm, supposedly an AI thriller by The Martian author Andy Weir, and Boiling Point, purportedly an engaging exploration of environmental ethics by celebrated author Rebecca Makkai. Quite a letdown indeed!
With fictional titles attributed to real authors, it’s easy for fans of those writers to assume an exciting new release is on the horizon. Even readers unfamiliar with the authors could find themselves heading to their local library or bookstore, unaware that the first ten recommendations don’t exist, and searching in vain.
What Went Wrong?
A post on Bluesky from the Sun-Times indicated that the article in question was not created or vetted by its editorial team. The post did not clarify whether the list was AI-generated, but 404 Media spoke with the author, who acknowledged using AI for this and similar pieces: “I do use AI for background at times but always check out the material first. This time, I did not, and I can’t believe I missed it because it’s so obvious. No excuses.”
Even before that statement, there were signs that the newspaper had relied on generative AI for the content: the stilted, often disjointed writing is a common hallmark of AI-generated text. As previous analyses have shown, AI can produce misleading or entirely false information, leaving readers confused. The reasons behind this ‘hallucination’ phenomenon remain unclear, whether it stems from unreliable training data or from faulty conclusions drawn from that data, but the issue is becoming increasingly prevalent even as AI continues to advance.
Employing better prompts won’t eliminate the risk of inaccuracy; generative AI can produce misleading content at any time, so diligent fact-checking is essential. The ability of software like ChatGPT to churn out a list in seconds may be appealing, but that speed guarantees neither quality nor accuracy in the recommendations.
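For anyone curious how little effort a basic sanity check takes, here is a minimal sketch (purely illustrative, and not anything the Sun-Times or its freelancer actually used) that asks the freely available Open Library search API whether a title and author pair matches any catalogued book; zero results is a strong hint that a recommendation deserves a closer look.

import requests

def title_exists(title, author):
    # Query Open Library's public search endpoint for a title/author pair.
    resp = requests.get(
        "https://openlibrary.org/search.json",
        params={"title": title, "author": author},
        timeout=10,
    )
    resp.raise_for_status()
    # numFound > 0 means at least one catalogued edition matched the query.
    return resp.json().get("numFound", 0) > 0

# One fabricated and one genuine entry from the Sun-Times list.
for title, author in [("The Last Algorithm", "Andy Weir"),
                      ("Bonjour Tristesse", "Françoise Sagan")]:
    print(title, "->", "found" if title_exists(title, author) else "no match")

A human editor would still need to read the results, of course, but a check along these lines would have flagged most of the list’s invented titles in seconds.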
I’d argue strongly against using generative AI for content of this nature in the first place. If a newspaper chooses to delegate authorship to a machine, it’s crucial to have a human editor or fact-checker verify the material. Otherwise, the simpler solution is to hire a human writer to craft the recommendations directly; plenty of talented journalists would readily seize such an opportunity. While it appears the Sun-Times did have a human involved, a more effective review process is needed to avoid misleading output in the future.
Curiously, I queried ChatGPT for information about The Last Algorithm by Andy Weir. The bot searched online and correctly noted that the book does not exist. Based on the social media chatter, it speculated that the Sun-Times had likely relied on AI, and it offered an apt observation: “This incident underscores the importance of verifying information, especially when AI-generated content is involved.”