How Google's AI-generated mushroom pics can lead to major health crises in US, EU

FP Staff September 25, 2024, 09:55:45 IST

There is a broader concern about how AI-generated content shows up on search engines like Google. The sheer volume of AI-generated material now available online makes it difficult for search algorithms to accurately differentiate between genuine and AI-created content

AI-generated image of a cluster of mushrooms. Experts highlight that many individuals depend on visual references, and when these references are inaccurate, the risk of misidentification increases significantly. Image: AI-generated

Over the years, AI has become a major source of concern over disinformation and interference in election processes in several countries across the world. Now, recent developments mean AI-generated content risks spurring one of the biggest health crises in the US and EU since the pandemic.

Google’s recent use of AI-generated images in search results is sparking concerns among experts, particularly in the context of mushroom identification. The potential danger lies in the fact that these AI-generated images, when presented as real, could lead to life-threatening mistakes for foragers who rely on visual cues to determine whether a mushroom is safe to eat.


The issue was brought to light by a moderator from the Reddit community r/mycology, which focuses on fungi-related topics, including hunting, foraging, and cultivation. The moderator, known as MycoMutant, discovered that when searching for the fungus Coprinus comatus — commonly referred to as shaggy ink cap — the first image displayed in Google’s featured snippet was an AI-generated image.

Alarmingly, this image bore little resemblance to the actual Coprinus comatus, posing a serious risk to anyone relying on it for identification.

This situation isn’t isolated. Google’s search results have previously surfaced AI-generated images from various sources and presented them as if they were genuine.

In this case, the problematic image was taken from a stock image website, Freepik, where it was clearly labeled as AI-generated. Despite this label, the image was incorrectly tagged as Coprinus comatus, and Google’s algorithm pulled it into the search snippet without flagging it as AI-generated content.

The implications of this are severe. Mushroom foraging is an activity where accurate identification is crucial, as many edible mushrooms have toxic look-alikes. The spread of incorrect information, especially in such a visually driven field, could lead to severe health consequences.

Experts, like those from the New York Mycological Society, have expressed concern over the dangers posed by AI-generated images in this context. They highlight that many individuals depend on visual references, and when these references are inaccurate, the risk of misidentification increases significantly.

Moreover, there is a broader concern about how AI-generated content is being integrated into search engines like Google. The challenge lies in the sheer volume of AI-generated material now available online, making it difficult for search algorithms to accurately differentiate between genuine and AI-created content. This issue is compounded by the fact that AI-generated images can often look “close enough” to the real thing, which might lead to confusion and, in the worst cases, dangerous outcomes.


Google has acknowledged these concerns, stating that it has systems in place to ensure a high-quality user experience and that it is continually working to improve these safeguards. However, the incidents involving AI-generated images in search results highlight the limitations of these systems and the urgent need for better identification and labelling of AI-generated content.

The risks are not new; last year, AI-generated foraging books, which included mushroom identification, appeared on platforms like Amazon. These books were flagged by experts as potentially life-threatening due to the inaccuracy of the information they contained. Similarly, Google has previously featured AI-generated images of famous artworks and historical events, presenting them as real in its search snippets.

The rise of AI, with its ability to generate vast amounts of content quickly, has introduced new challenges in information accuracy. For communities like r/mycology, which strive to educate and protect people, the spread of incorrect information is particularly troubling. As AI-generated content becomes more prevalent, the need for robust systems to filter and correctly label such content is more pressing than ever to prevent misinformation and potential health crises.
