Apple’s AI feature, designed to summarise breaking news notifications, has landed in hot water after generating inaccurate alerts on its latest iPhones. Intended to streamline news updates for users, the tool has instead been accused of fabricating claims outright. The BBC, which first raised concerns in December, reported that the summaries misrepresented its journalism. Apple responded this week, acknowledging the issue and pledging to make clear that the summaries are AI-generated.
The errors have drawn wider criticism, with media figures calling the feature premature and a potential source of misinformation. Former Guardian editor Alan Rusbridger weighed in, warning that such flawed technology risks eroding public trust in news.
AI feature generates embarrassing blunders
The feature’s troubles became evident through a series of high-profile errors. Last month, a BBC headline was summarised inaccurately to suggest that Luigi Mangione, the man accused of killing UnitedHealthcare CEO Brian Thompson, had taken his own life. Another alert prematurely declared Luke Littler the winner of the PDC World Darts Championship hours before the final had even been played. Even tennis star Rafael Nadal was dragged into the mix, with an AI-generated claim that he had come out as gay.
These inaccuracies have raised serious concerns, particularly because the notifications appear to come from the news organisations’ own apps. The BBC criticised Apple, stating that accurate reporting is crucial to maintaining public trust. Other organisations, including Reporters Without Borders, have echoed these sentiments, urging Apple to disable the feature entirely.
Growing criticism from news organisations
The BBC is not the only news organisation affected by the feature’s unpredictable behaviour. In November, ProPublica highlighted false AI-generated summaries of New York Times alerts. One summary incorrectly suggested that Israeli Prime Minister Benjamin Netanyahu had been arrested, while another mishandled a notification about the anniversary of the Capitol riots. The New York Times has so far declined to comment.
The backlash has prompted calls from industry watchdogs to halt such features until they are more reliable. Critics argue that these AI tools, while innovative, remain far from mature enough to handle sensitive information like news alerts.
Apple promises improvements, but doubts linger
Apple has promised a software update to address the issue, expected to roll out in the coming weeks. The company stated that its AI notification summaries are optional and still in beta, emphasising that user feedback plays a key role in refining the feature. Apple intends to make it clearer when summaries are AI-generated and is encouraging users to report inaccuracies.
Despite these assurances, concerns persist. As Apple tries to position itself as a leader in AI, these missteps highlight the broader challenges tech companies face when integrating generative AI into real-world applications. For now, trust in such tools remains shaky, and scepticism is likely to grow if improvements are not delivered swiftly.