Mark Zuckerberg has done an Elon Musk.
Following in his fellow billionaire’s footsteps, Meta CEO Zuckerberg announced a series of major changes to the company’s moderation policies and practices on Tuesday (January 7): Meta will end its fact-checking programme with trusted partners in the US and replace it with a community-driven system similar to X’s Community Notes.
“We are going to get rid of fact-checkers (that) have just been too politically biased and have destroyed more trust than they’ve created, especially in the US,” Meta founder and CEO Mark Zuckerberg said in a post. Instead, Meta platforms including Facebook and Instagram, “would use community notes similar to X, starting in the US,” he added.
While some were shocked at Zuckerberg’s decision, including Meta’s fact-checking partners, Elon Musk reacted to the move with a “this is cool” post on X.
What does this mean? We unpack all of it in this explainer.
‘Thumbs down’ to fact-checking
On Tuesday, the Meta CEO, Mark Zuckerberg, posted a video alongside a blog post by the company explaining the changes he was introducing for Meta platforms — Facebook, Instagram and Threads.
Zuckerberg explained, “When we launched our independent fact checking programme in 2016, we were very clear that we didn’t want to be the arbiters of truth. We made what we thought was the best and most reasonable choice at the time, which was to hand that responsibility over to independent fact checking organisations. The intention was to have these independent experts give people more information about the things they see online, particularly viral hoaxes, so they were able to judge for themselves what they saw and read.
“That’s not the way things played out, especially in the United States. Experts, like everyone else, have their own biases and perspectives. This showed up in the choices some made about what to fact check and how. Over time we ended up with too much content being fact checked that people would understand to be legitimate political speech and debate.”
He further added, “We are now changing this approach. We will end the current third party fact checking programme in the United States and instead begin moving to a Community Notes programme. We’ve seen this approach work on X – where they empower their community to decide when posts are potentially misleading and need more context, and people across a diverse range of perspectives decide what sort of context is helpful for other users to see. We think this could be a better way of achieving our original intention of providing people with information about what they’re seeing – and one that’s less prone to bias.”
Notably, this will only happen in the United States for now, with Politico reporting that the company has no plans to end fact-checking in the European Union.
The policy changes will also loosen Meta’s rules on what users can say: under the revised guidelines, users will be permitted to refer to women as “household objects or property” or to transgender or non-binary people as “it”.
A switch from 2016
Meta’s plan to abandon fact-checking in the US is a reversal of the stance it took in 2016, when it first introduced the programme on Facebook. At its height, the programme included more than 90 organisations fact-checking posts in more than 60 languages. In the United States, these have included groups such as PolitiFact and Factcheck.org.
The programme was later expanded to cover Instagram in 2019 and Threads in 2024. Fact-checkers were able to review content including “ads, articles, photos, videos, reels, audio and text-only posts.”
Why the timing of removing fact-checking matters
Many believe that Zuckerberg’s move is a response to the shifting political climate, namely the return of Donald Trump to power. Most are of the opinion that it is an attempt by Zuckerberg to curry favour with the US president-elect. He has been seen warming up to Trump in recent times: Meta (though not Zuckerberg personally) donated $1 million to Trump’s inauguration fund last month, and the company added Ultimate Fighting Championship CEO Dana White, a Trump ally, to its board of directors.
As Sol Messing, a research associate professor at New York University’s Centre for Social Media and Politics, told ABC News, “It’s very difficult to ignore this [fact-checking] announcement in terms of the timing of those moves, as well.”
Even Ava Lee from Global Witness, a campaign group which describes itself as seeking to hold big tech to account, was of a similar opinion. She told the BBC, “Zuckerberg’s announcement is a blatant attempt to cosy up to the incoming Trump administration – with harmful implications.”
Notably, Trump has been a harsh critic of Meta and Zuckerberg for years, accusing the company of bias against him and threatening to retaliate against the tech billionaire once back in office. He was even kicked off Facebook following the January 6, 2021, attack on the US Capitol by his supporters, though the company restored his account in early 2023.
But after the changes were announced, Trump said that he was impressed by the decision and that Meta had “come a long way”. When asked whether Zuckerberg was “directly responding” to threats Trump had made to him in the past, the incoming US president responded: “Probably”.
Opening floodgates to misinformation
While announcing the end of fact-checking, Mark Zuckerberg acknowledged the trade-off: “we’re going to catch less bad stuff, but we’ll also reduce the number of innocent people’s posts and accounts that we accidentally take down.”
And it seems that Zuckerberg isn’t alone in expressing some concern about cancelling fact-checking on his platforms. Critics have lampooned him for the move, saying it would open the floodgates of misinformation.
“This is a major step back for content moderation at a time when disinformation and harmful content are evolving faster than ever,” Ross Burley, co-founder of the nonprofit Centre for Information Resilience, was quoted as telling AFP.
He further added that while free expression was vital, “removing fact-checking without a credible alternative risks opening the floodgates to more harmful narratives.”
Meta already has a chequered history with misinformation. In 2022, Amnesty International found that Meta’s algorithms and lack of content moderation “substantially contributed” to fomenting violence in Myanmar against the Rohingya people. A separate study in 2021 found that Facebook could have prevented billions of views on pages that shared misinformation related to the 2020 US election, but failed to tweak its algorithms.
However, fact-checking does appear to have curbed misinformation. One study, published in the journal Nature Human Behaviour, found that the warning labels Facebook uses to flag false information reduced belief in false news by 28 per cent and cut how often the content was shared by 25 per cent.
Many critics of Zuckerberg’s move argue that it’s an abdication of responsibility by the platform. Michael Wagner, from the School of Journalism and Mass Communication at the University of Wisconsin-Madison told AFP, “You wouldn’t rely on just anyone to stop your toilet from leaking, but Meta now seeks to rely on just anyone to stop misinformation from spreading on their platforms.
“Asking people, pro bono, to police the false claims that get posted on Meta’s multi-billion dollar social media platforms is an abdication of social responsibility.”
Nora Benavidez, senior counsel and director of digital justice and civil rights at Free Press, told the Los Angeles Times that content moderation “has never been a tool to repress free speech.” She added: “Meta’s new promise to scale back fact checking isn’t surprising — Zuckerberg is one of many billionaires who are cosying up to dangerous demagogues like Trump and pushing initiatives that favour their bottom lines at the expense of everything and everyone else.”
Other fact-checkers added that this move will hurt social media users who are looking for accurate, reliable information to make decisions.
Even some Meta employees have voiced concern over the move, saying it appears the company is “sending a bigger, stronger message to people that facts no longer matter, and conflating that with a victory for free speech.”
Another employee told CNBC that “simply absolving ourselves from the duty to at least try to create a safe and respectful platform is a really sad direction to take.”
Going the X way
While removing independent fact-checkers, Meta is now going the way of X and its use of ‘community notes’. But what exactly does this mean?
According to Zuckerberg, this feature will empower the community, drawing on people across a diverse range of perspectives, to decide when posts are potentially misleading. In practice, it means that any user can add context to posts or flag them as misleading or false.
However, when it comes to efficacy in stemming false news, experts are unsure of Community Notes. As Valerie Wirtschafter, a fellow at the Brookings Institution who has studied Community Notes told the New York Times, “The community based approach is one piece of the puzzle. But it can’t be the only thing, and it certainly can’t be just rolled out as like an untailored, whole-cloth solution.”
With inputs from agencies