A leading journalism group has called for Apple to remove its new generative AI feature following an incident in which the technology created a misleading headline about a high-profile murder case in the United States.
The BBC lodged a complaint with Apple after its Apple Intelligence tool, which uses artificial intelligence to summarise and group together notifications, falsely generated a headline suggesting that Luigi Mangione, accused of the murder of healthcare insurance CEO Brian Thompson, had shot himself. The claim was inaccurate; Mangione had done no such thing.
Following the error, Reporters Without Borders (RSF) voiced concerns about the risks generative AI tools pose to media outlets. The group said the incident showed that the technology is too immature and unreliable to provide trustworthy information to the public.
Vincent Berthier, head of RSF’s technology and journalism desk, stated, “AIs are probability machines, and facts can’t be decided by a roll of the dice.” He added that attributing false information to a respected media outlet like the BBC damages the outlet’s credibility and erodes the public’s trust in the information it receives.
Apple Intelligence, which was launched in the UK last week, allows users to group notifications, including news summaries, to reduce interruptions from constant alerts. The feature is available on devices running iOS 18.1 or later, including the iPhone 16, iPhone 15 Pro, and iPhone 15 Pro Max, as well as some iPads and Macs.
A BBC spokesperson confirmed the corporation had contacted Apple regarding the issue, urging it to address the problem. It is not yet known whether the company has responded. Alongside the misleading headline about Mangione, the same notification summary provided accurate details on unrelated topics, including the political situation in Syria and updates on South Korean President Yoon Suk Yeol.
This is not the first instance of Apple Intelligence misrepresenting news. In November, three New York Times articles were grouped into a single notification that included the false claim that Israeli Prime Minister Benjamin Netanyahu had been arrested. The summary misrepresented an arrest warrant issued by the International Criminal Court, creating confusion about the actual content of the articles. The New York Times has not commented on the incident.
Apple has yet to respond to the complaints, but the incidents have raised broader concerns about the reliability of AI-generated news summaries. While users can report problems with notifications, Apple has not disclosed how many reports it has received. As the debate continues, the accuracy of generative AI in journalism remains under scrutiny.