Partha P Chakrabartty | Mar 19, 2019 12:38:58 IST
It has been a year since the Cambridge Analytica expose burst into our consciousness, revealing how billionaire donor Robert Mercer assembled a team of data experts to hack key elections. These included the Brexit vote and the 2016 US election, both of which delivered unexpected results by narrow margins. The expose showed, once again, the disproportionate influence that the few have on the lives and fortunes of the many, and revealed how sophisticated the tools for controlling voter behaviour had become.
Revisiting the Cambridge Analytica data breach
For those of us who have forgotten what Cambridge Analytica did, here is a quick explainer on how Christopher Wylie, a former research lead at Cambridge Analytica, blew the whistle on its unethical practices.
First, it got 32,000 US voters to fill out a 120-question psychometric test that assessed their personality, paying them between $2 and $4 for taking it. Then, it compared this psychological profile with their behaviour on Facebook — data on likes and personal information. This can be thought of as a ‘sample study’, whose results could be extrapolated to bigger populations. Churning all this data using machine learning, it arrived at a way to make 253 predictions about a person’s psychology and political views. Suddenly, you didn’t need every person to fill out the 120-question survey. All you needed to predict a voter’s beliefs, and therefore exploit weaknesses in their psychology, was Facebook data alone.
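To make the mechanics concrete, here is a minimal, hypothetical sketch of that kind of pipeline: fit a model on a small ‘sample study’ (survey scores paired with like data), then extrapolate to users who never took the survey. Everything here is synthetic and illustrative — the page names, the trait score, and the simple linear model are assumptions for demonstration, not Cambridge Analytica’s actual features or methods.

```python
import random

random.seed(0)

PAGES = ["page_a", "page_b", "page_c", "page_d", "page_e"]

# "Ground truth" relationship, used only to generate synthetic data.
TRUE_W = [0.8, -0.5, 0.3, 0.0, 0.6]

def make_user(weights):
    """Synthesize one user: a binary like-vector plus a trait score
    that is a noisy linear function of those likes."""
    likes = [random.randint(0, 1) for _ in PAGES]
    score = sum(w * l for w, l in zip(weights, likes)) + random.gauss(0, 0.1)
    return likes, score

# The "sample study": users who actually filled out the survey.
survey = [make_user(TRUE_W) for _ in range(200)]

def fit(data, lr=0.05, epochs=500):
    """Learn weights from likes to trait score via simple
    stochastic gradient descent on squared error."""
    w = [0.0] * len(PAGES)
    b = 0.0
    for _ in range(epochs):
        for likes, score in data:
            pred = b + sum(wi * li for wi, li in zip(w, likes))
            err = pred - score
            b -= lr * err
            w = [wi - lr * err * li for wi, li in zip(w, likes)]
    return w, b

w, b = fit(survey)

def predict(likes):
    """Estimate the trait score of a user from likes alone --
    no survey needed, which is the point of the extrapolation."""
    return b + sum(wi * li for wi, li in zip(w, likes))

# Extrapolate to a "friend" who never took the survey:
friend_likes = [1, 0, 1, 0, 1]
print(round(predict(friend_likes), 2))
```

The key step is the last one: once the model is trained on the small paid sample, predictions can be run on anyone whose like data is available, at no marginal cost per person.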
What turned this 32,000-strong, seemingly academic exercise into a political weapon was a separate set of steps. The survey takers had to give the Cambridge Analytica (CA) app access to their Facebook accounts in order to get paid. When they took this innocuous step, which took only a few minutes, they signed away not just their own data, but also that of their friends. In total, the number of voters CA had data on hit 87 million. CA then picked 2 million voters from this pool, who lived in 11 key states in the US, and sent them targeted ads that influenced their vote. A similar method was employed in the Brexit vote, and in elections in other countries, notably Nigeria.
Cambridge Analytica’s influence on India has not been easy to untangle. Facebook claims about 0.5 million Indians were potentially affected by the data leak, thanks to just 355 individuals who downloaded the 'thisisyourdigitallife' app and took the quiz. This is a small percentage of the 87 million total, and unlikely to have had the influence the corporation had in the US and the UK. However, it is important to point out that the relationship between Facebook data and psychology may be relevant across cultures, as CA used a personality test that holds true in other contexts. This means that those 253 prediction models could be used to predict Indian voter behaviour if someone got hold of their Facebook data.
The government of India has ordered the CBI to probe the activities of Cambridge Analytica, and its parent firm, Strategic Communications Limited (SCL). The investigation began in August 2018. We await the results.
Cambridge Analytica aftermath
What has happened since the expose? Who has been held accountable, and who has not, are both notable. Cambridge Analytica was an obvious target and had to shut shop within six weeks of the expose. Another target of lawmakers’ fury was Facebook, which has faced a series of investigations, hearings and new regulations, as well as a threat to break up the company. A full timeline of the fallout for Facebook has been compiled by Tech2.
The most recent of the revelations, sparking a fresh investigation, claims that Facebook had been aware of the data breaches back in 2016, a full two years before the expose. Facebook has shed 13 percent of its share value since the scandal broke and has recently seen an exodus of 11 senior managers, including chief product officer Chris Cox and Instagram co-founders Kevin Systrom and Mike Krieger. A small survey conducted by Toluna, a research company, also found it to be the company least trusted with personal information. If the latest allegations are true, then Facebook is in even hotter water than before.
While CA and Facebook face the heat, there are many who seem to have gotten away entirely. Whistleblower Christopher Wylie told the Guardian, "I feel like the whole story is a lesson in institutional failure. Because although Facebook paid the price in its share value, there have been virtually no consequences for people who have committed unlawful acts. When you look at how, for example, the NCA [National Crime Agency in the UK] has just sat on blatant evidence of Russian interference in Brexit. When you look at how you can go and commit the largest infraction of campaign finance law in British history and get away with it."
Thus far, even Facebook has gotten away with doing precious little. The way the company handled the crisis was shameful: delaying, denying and deflecting criticism, most notably by hiring a PR firm to place the blame on George Soros, and cynically lobbying against new regulations that would incentivise Facebook and other companies to clean up their act.
Facebook's promises have fallen flat
Facebook has made little progress even on the promises it did make in the wake of the Cambridge Analytica expose. Its forensic audit of Cambridge Analytica is still waiting for a go-ahead from the UK’s Information Commissioner’s Office. A promised investigation into apps like the one CA used has only led to some 400 suspensions, with no progress since August. A ‘clear history’ tool that would allow users to delete browsing data collected by Facebook has also not yet been released; Facebook says it will ship before the end of this year.
The latest effort to head off criticism is a post by CEO Mark Zuckerberg, released in time for the anniversary of the expose. It claims that Facebook’s new focus is privacy, but the fine print shows the plan mostly involves integrating and encrypting the messaging services of Facebook, WhatsApp and Instagram, a move that makes business sense but does little to address concerns over the very different product that is Facebook’s News Feed.
It is clear from its lukewarm efforts that Facebook still sees itself, as it always has, as a ‘platform’ with no real accountability for the content posted on it — accountability that ‘publishers’ such as media organisations are not exempt from. This stance was made possible by the infamous ‘safe harbour’ provisions under US law, which were imitated by governments all over the world, including in India. These provisions relieved Facebook of liability for misinformation posted on its platform, requiring only that it delete content and block accounts when alerted to misconduct.
It is this that allowed the company to build a media empire without investing in the costly and time-consuming fact-checking and quality control that traditional media organisations are legally bound to conduct. Repeated attempts to get Facebook and other tech companies to take on this responsibility have failed: Facebook has refused to add these costs to its unprecedented profitability of US$634,694 per employee, the highest in the world.
What are the costs of this irresponsibility? Most recently, Facebook was the chosen platform of the mass murderer in the Christchurch shootings, who broadcast his massacre on Facebook Live. While companies were diligent in scrubbing the video, which was uploaded over 1.5 million times on Facebook alone, copies remained available. This was one video, well identified, and Facebook still struggled to take it down. How can it hope to fare better when facing the many-headed hydra of online fake news, or of covert, targeted political ads?
We know the extent of Facebook’s failure, and the damage it has caused, from the tragedy of the Rohingya genocide in Myanmar. For most people in Myanmar, Facebook is the entire internet. Facebook’s lax rules on false content allowed it to become a platform for anti-Rohingya propaganda and hate speech. More than 25,000 people have been killed and over 700,000 displaced, and Facebook has admitted that it is implicated, releasing a report by Business for Social Responsibility. But simply admitting responsibility is not enough. The measure it has suggested, hiring 100 human moderators (until now it had none trained in the languages spoken in the country), is woefully inadequate.
Until now, Facebook has claimed to deliver great value to its users. Many still rely on the platform for news, and to stay connected with loved ones. But the days of blindly celebrating Silicon Valley for its services need to come to an end. We have to remember that Facebook is hardly the only platform in the world, and that the additional value it adds over what other companies could do is not that significant. We can build an online world where we get these services without having to sell our minds for exploitation and manipulation.
We need to overthrow an internet regime that relies only on marketing for its revenues. As long as that is a company’s aim, no values will keep it from turning rapacious. Interestingly, analysts have reacted negatively to Zuckerberg’s announcement of a greater focus on privacy; they see the protection of our data as bad for profits.
When we yield up our data and give corporations the license to influence our private decisions, is it any surprise that this license is abused by everyone from businesses to politicians? Can we permit blatant propaganda in the name of political marketing?
It is time to throw away our enchantment with internet companies like Facebook, and fight to win our data and privacy back. Governments the world over are beginning to take measures; we must support them in bringing irresponsible technocrats to book.