The Growing Influence of AI in Journalism
Artificial intelligence has worked its way into many industries, and journalism is no exception. When you read an article from a well-respected outlet, you might instinctively assume a human wrote it; as AI technology evolves, however, that assumption is getting harder to make. It’s worth understanding how the news that shapes public perception is actually being produced.
AI Integration at The New York Times
A recent report from Semafor highlighted a notable shift in how The New York Times is approaching AI. Even as the paper pursues a copyright lawsuit against OpenAI and Microsoft, its editorial and product teams appear enthusiastic about bringing AI tools into their workflows.
The Times is now offering its journalists AI training, has unveiled an internal tool called “Echo,” and has approved external AI platforms such as Google’s Vertex AI and some of Amazon’s AI offerings. Even Microsoft’s Copilot and OpenAI’s API (outside of ChatGPT itself) have found a footing in its processes. While these tools aren’t meant to generate published articles outright, journalists are encouraged to use them for tasks like basic content revisions and formulating interview questions.
According to the Times’ publicly available editorial guidelines, “Generative AI can assist our journalists in uncovering the truth and helping more people understand the world. We see this technology as a powerful tool rather than a perfect answer.” However, there are still limitations in place; staff have been cautioned against allowing AI to draft or significantly alter articles due to potential copyright infringements and the risk of unintentionally revealing sources.
Given that writers are being nudged to use AI for crafting headlines and social media posts, it’s prudent to approach viral stories with a critical eye rather than take every detail at face value.
Quartz’s Experimentation with AI
Quartz, owned by G/O Media, has become one of journalism’s more enthusiastic adopters of AI. Browse the site and you’ll find contributions from the Quartz Intelligence Newsroom, which treats AI-generated content with far fewer reservations than The New York Times does.
This AI “writer” has been generating earnings reports for some time and has recently expanded its repertoire to produce general blogs covering topics such as Bitcoin fluctuations and the deletion of Meta accounts. While Quartz acknowledges that these articles are AI-generated and includes source citations, human oversight appears lacking, raising concerns about content quality.
One article on how to delete your Meta accounts, for instance, was criticized for simply echoing a TechCrunch piece without permission, and its instructions were vaguer than the original’s. That lack of rigor suggests readers may come away more puzzled than informed.
Quartz’s AI-generated posts also frequently cite sources such as Devdiscourse, which itself resembles an AI content farm. That invites an obvious question: if AI articles are just citing other AI articles, why not ask something like ChatGPT directly? G/O Media frames the effort as an experiment in reporting, but the long-term vision remains unclear.
The Associated Press’s Middle Ground
The Associated Press (AP) has taken a more measured approach, using AI tools while keeping human editors firmly in charge. AI is employed for translations, transcription, headline writing, and even some automated articles, but AP says its general news stories remain the work of human journalists.
In a statement, AP’s Vice President of News Standards and Inclusion, Amanda Barrett, described the goal as cautious experimentation balanced against safety. AP has used the Wordsmith platform to automate items like sports scores and election results since 2014. While AI-assisted content may increasingly appear on its pages, particularly in pieces without a specific byline, human oversight still dominates.
So far, AP’s direct use of AI-generated news articles has been limited to specific reporting trials with a Minnesota publication. Most of its AI contributions take the form of brief summaries or factual overviews, which is worth keeping in mind when evaluating news from the agency.
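AP hasn’t published the internals of its Wordsmith setup, but automated score and earnings write-ups are generally produced by template-driven data-to-text generation: structured data goes in, a formulaic sentence comes out. Here’s a minimal sketch of that idea, with entirely hypothetical field names and phrasing:

```python
# Hypothetical sketch of template-driven data-to-text generation, the
# general technique behind platforms like Wordsmith. The field names,
# teams, and phrasing are illustrative, not AP's actual system.

def game_recap(game: dict) -> str:
    """Turn a structured box score into a one-sentence recap."""
    margin = abs(game["home_score"] - game["away_score"])
    if game["home_score"] > game["away_score"]:
        winner, loser = game["home_team"], game["away_team"]
        score = f'{game["home_score"]}-{game["away_score"]}'
    else:
        winner, loser = game["away_team"], game["home_team"]
        score = f'{game["away_score"]}-{game["home_score"]}'
    verb = "edged" if margin <= 2 else "beat"
    return f"{winner} {verb} {loser} {score} on {game['date']}."

print(game_recap({
    "home_team": "St. Paul Saints", "away_team": "Omaha Storm Chasers",
    "home_score": 5, "away_score": 3, "date": "Saturday",
}))
# -> St. Paul Saints edged Omaha Storm Chasers 5-3 on Saturday.
```

The appeal of this approach is that the output can only be as good, or as bad, as the data and templates behind it; it doesn’t invent facts that aren’t in the box score.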
The Unique Role of The Washington Post
The Washington Post takes a different tack, using AI to augment its published work rather than produce it. Its tool, “Ask the Post AI,” lets readers pose questions and receive AI-generated answers drawn from the paper’s published articles, with links to the relevant pieces for further reading.
Accompanying disclaimers stress that responses are drawn from existing reporting and urge users to verify information against the linked articles. The depth of the answers appears to depend on how much coverage a topic has received: some questions yield terse responses, while others return detailed summaries.
Although the Post has not disclosed the technology powering Ask the Post AI, the tool is designed as a research aid rather than a replacement for human-generated news. That transparency and guidance make it a relatively safe way to engage with AI-assisted tools, while underscoring the continued importance of traditional journalism.
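Since the underlying technology hasn’t been disclosed, the best we can do is sketch the general pattern such tools tend to follow, often called retrieval-augmented generation: find the published articles most relevant to a question, then ask a language model to answer using only those excerpts and to cite them. Everything below, from the toy keyword-overlap retriever to the prompt wording, is an illustrative assumption rather than the Post’s actual implementation:

```python
# Sketch of a retrieval-augmented Q&A flow, the general pattern behind
# tools like Ask the Post AI. The naive keyword retriever and prompt
# format are assumptions for illustration; the Post's system is not public.

ARCHIVE = [
    {"title": "Example article A", "url": "https://example.com/a",
     "text": "Reporting excerpt A goes here."},
    {"title": "Example article B", "url": "https://example.com/b",
     "text": "Reporting excerpt B goes here."},
]

def retrieve(question: str, archive: list[dict], k: int = 3) -> list[dict]:
    """Rank articles by word overlap with the question (real systems
    would use vector embeddings and a proper search index)."""
    q_words = set(question.lower().split())
    return sorted(
        archive,
        key=lambda a: len(q_words & set(a["text"].lower().split())),
        reverse=True,
    )[:k]

def build_prompt(question: str, sources: list[dict]) -> str:
    """Assemble a prompt that confines the model to the retrieved reporting."""
    excerpts = "\n\n".join(f"{s['title']} ({s['url']}):\n{s['text']}" for s in sources)
    return (
        "Answer the question using only the excerpts below, and cite "
        "the articles you drew on. If they don't cover it, say so.\n\n"
        f"{excerpts}\n\nQuestion: {question}"
    )

# The prompt would be sent to a language model, and its answer displayed
# alongside links to the retrieved articles for readers to verify.
question = "What has the paper reported on this topic?"
print(build_prompt(question, retrieve(question, ARCHIVE)))
```

The key design choice in this pattern is the constraint baked into the prompt: the model is pointed at existing reporting and asked to cite it, which is what makes the “check the linked articles” guidance meaningful.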