In a bold move, Google has told the European Union it won’t be adding fact-checking to its search results or YouTube videos, despite new rules requiring such measures. The tech giant made its position clear in a letter to the European Commission, according to a report by Axios, which obtained the letter.
According to Google, the new requirements are not a good fit for its services, and it won’t be changing its content moderation policies to comply.
Google stands firm on its content moderation approach
Google has always resisted the idea of using fact-checking as part of its content moderation strategy, and it’s sticking to that stance. In the letter, Kent Walker, Google’s global affairs president, told the Commission that adding fact-checking to its search results and YouTube videos simply “isn’t appropriate or effective” for the company. He also pointed to Google’s existing system, which he believes works just fine. For example, he noted the platform’s successful content moderation during the 2022 elections as proof that its current approach is effective.
Walker also highlighted YouTube’s new feature allowing users to add contextual notes to videos as a step in the right direction. The feature, which was rolled out last year, is designed to help combat misinformation and, according to Walker, has a lot of potential to improve the platform.
What the EU law requires
The EU’s Code of Practice on Disinformation, which was strengthened in 2022, asks tech companies to take a more active role in tackling misinformation. It calls for companies like Google to display fact-checks alongside search results and YouTube videos, and to integrate fact-checking into their ranking systems and algorithms, a move the EU believes will reduce the spread of false information online.
But Google isn’t on board. The company had already signalled to lawmakers that it wouldn’t comply with the new rules, and in the letter, Walker reaffirmed that Google won’t be changing its practices. He also said that Google plans to “pull out of all fact-checking commitments” in the Code before it becomes part of the official Digital Services Act (DSA) Code of Conduct.
A wider trend among tech giants
This isn’t just a Google issue — it’s part of a wider conversation about how much control tech platforms should have over the information we see online. Last week, Meta announced it would stop fact-checking content and reduce its overall policing of speech. Similarly, since Elon Musk took over X (formerly Twitter) in 2022, he’s significantly relaxed the platform’s content moderation policies.
As the debate around misinformation heats up, Google’s refusal to comply with the EU’s demands marks the latest chapter in the conversation about the role of tech companies in managing online content. It seems clear that these companies are not ready to take on the responsibility of fact-checking themselves, leaving the question of who should police online content very much up in the air.