Facebook Inc has started scoring its users on their trustworthiness in an attempt to fight misinformation, the Washington Post reported on 21 August, citing a company executive. The company will assign each user a reputation score that predicts their trustworthiness on a scale from zero to one.
The social media giant developed the rating system over the past year, the publication reported, citing an interview with Facebook product manager Tessa Lyons, who leads the company’s efforts to identify malicious actors.
It’s “not uncommon for people to tell us something is false simply because they disagree with the premise of a story or they’re intentionally trying to target a particular publisher,” Lyons said.
While the score isn’t meant to be a definitive indicator of a person’s credibility, assigning a number to an individual is tricky, especially because the score is partly based on a user’s track record of reporting stories as false. Beyond that, Facebook has not disclosed exactly how the scoring process works.
One thing we do know: if someone regularly reports stories as false and Facebook’s fact-checkers later confirm those stories are indeed false, that person’s trust score rises; if the reports prove unfounded, it falls.
Tech companies already run their own algorithms to measure people’s likes and preferences, mostly for advertising and revenue-generating purposes. But rating individuals as trustworthy or not on a numerical scale sounds like a remarkably authoritative measure, especially coming from a company we would happily rate a zero on our own trustworthiness index.
Also, "this may be less of a trust score and more of a fact-check score," notes The Verge.
With inputs from Reuters.