Rise in cases shows it is time for systematic scrutiny of algorithms that determine behaviour.
The US is approaching record levels of measles cases, the highest since the disease was officially declared eliminated there in 2000. As of last week, 626 cases had been confirmed in 22 states, mostly among unvaccinated children.
While authorities scramble to contain the contagion, the influence of information taken from the internet on people’s real-life choices is undeniable. Doctors have long discouraged patients from googling their symptoms and pursuing solutions they read about online. It is now time for systematic scrutiny of the algorithms that make anti-vaccination and other toxic information go viral.
As the debate about Google and Facebook’s influence in the 2016 US presidential election has intensified, tech executives have come close to admitting that algorithms do more than simply match advertisements to people. But, to avoid regulation, they offer tweaks. After an outcry over the proliferation of anti-vaccination books and films on its site, Amazon stopped recommending films titled We Don’t Vaccinate! and Vaxxed: From Cover-Up to Catastrophe. YouTube said it would pull ads from popular anti-vaccination channels, while leaving the videos up.
Yet these ad hoc measures distract from broader questions about the impact of algorithmic ranking on user behaviour. Without research and oversight, we may never understand the links between the commercial models that decide who has access to what information and, for example, public debate, voting behaviour and the future of democracy. Without oversight, the public interest is not protected. The return of measles is just one reminder of how powerful a small but vocal minority can be in changing minds.
We are largely unaware of the impact algorithmic ranking has on people’s choices because we have no way of looking under the hood. Algorithms are trade secrets. The annual profits of Google and Facebook suggest the companies have mastered their use for ad sales, yet regulators and researchers are still kept in the dark. Tech platforms cannot remain immune to regulation when their tools lead to the spread of diseases, lies or hatred.
Even without knowing exactly how the algorithms work, we can be sure that the illusion of sizeable support behind a message matters. Some have worked out how to game the system to make profits, or to win hearts and minds. A tweet with 500 likes looks more popular than a post that garners three thumbs up. People have come to trust the wisdom of the crowd, or the top results in a search, whether on the subject of heart disease or crimes committed by immigrants. On platforms such as YouTube and Google search, whether information moves up or down the rankings is determined, at least in part, by how many people click on and share it.
Knowing whether such reactions come from real people or are auto-generated is crucial. Bots can be distinguished from people through pattern recognition: an account that sends a message exactly every 30 seconds for 72 hours straight is unlikely to be a person typing and swiping.
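To illustrate the kind of pattern recognition described above, here is a minimal sketch of one such heuristic: flagging accounts whose posting intervals are suspiciously regular. The function name, thresholds and message counts are illustrative assumptions, not any platform's actual detection method.

```python
from statistics import pstdev

def looks_automated(timestamps, max_jitter_s=1.0, min_messages=50):
    """Heuristic bot check: near-constant gaps between messages over a
    long run suggest automation rather than a human typing and swiping.
    Thresholds here are illustrative assumptions, not industry standards."""
    if len(timestamps) < min_messages:
        return False  # too little data to judge
    # Gaps between consecutive messages, in seconds
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    # A human's posting gaps vary widely; a bot's barely do.
    return pstdev(intervals) <= max_jitter_s

# One message exactly every 30 seconds for 72 hours: flagged as automated
bot_times = [i * 30.0 for i in range(72 * 3600 // 30)]
print(looks_automated(bot_times))  # True
```

Real detection systems combine many such signals (posting cadence, account age, network structure); this single-feature check is only meant to show why clockwork regularity is a giveaway.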
Transparency rules should require platforms to make clear when bots are involved and the sources of advertising. Knowing who is paying to amplify and spread medical hoax messages is as important as knowing the sources of political ads. With more information, we may better understand the links between the anti-vaccination movement and politicians including Marine Le Pen in France, Beppe Grillo in Italy and Donald Trump in the US, who have all questioned the medical, as well as political, establishments.
The recent measles outbreaks remind us that the toxic impact of algorithms on people’s actions is real, and that ad hoc protection measures are not enough. Systematic oversight of the way the online information ecosystem steers people’s behaviour is needed to prevent future epidemics, whether medical or political.
The writer is a Dutch politician and Member of the European Parliament.