Governments and internet firms are wrestling with the rules for free speech online.
THE arrest of a senior executive rarely brings helpful headlines. But when Brazilian authorities briefly detained Google’s country boss on September 26th—for refusing to remove videos from its YouTube subsidiary that appeared to breach electoral laws—they helped the firm repair its image as a defender of free speech.
Two weeks earlier those credentials looked tarnished. Google blocked net users in eight countries from viewing a film trailer that had incensed Muslims. In six states, including India and Saudi Arabia, local courts banned the footage. In Egypt and Libya, where protesters attacked American embassies and killed several people, Google took the video down of its own accord.
The row sparked concern about how internet firms manage public debate and how companies based in countries that cherish free speech should respond to states that want to constrain it. (Freedom House, a campaigning think-tank, reckons that restrictions on the internet are increasing in 20 of the 47 states it surveys.)
In June Google revealed that 45 countries had asked it to block content in the last six months of 2011. Some requests were easily rejected. Officials in the Canadian passport office asked it to block a video advocating independence for Quebec, in which a citizen urinated on his passport and flushed it down the toilet.
Most firms do accept that they must follow the laws of countries in which they operate (Nazi content is banned in Germany, for example). Big internet firms can prevent users accessing content their governments consider illegal, while leaving it available to visitors from countries where no prohibition applies. Some pledge to be transparent about their actions—Twitter, like Google, releases six-monthly reports of government requests to block information. It also alerts citizens when it has censored content in their country.
Tell us what you did
Legislators in America want more firms to follow suit. In March a congressional subcommittee approved the latest revision of the Global Online Freedom Act, first drafted in 2004. This would require technology firms operating in a designated group of restrictive countries to publish annual reports showing how they deal with human-rights issues. It would waive this requirement for firms that sign up to non-governmental associations providing similar oversight, such as the Global Network Initiative. Founded in 2008 by Google, Microsoft, Yahoo! and a coalition of human-rights groups, the initiative has since stalled. Facebook joined in May, but only as an observer. Twitter is absent, too.
Managing free speech in home markets is hard too. American websites enjoy broad freedom but most users support policies that forbid hate speech or obscenity, even when these are not illegal. Well-drafted community guidelines give platforms personality (and reassure nervous parents). But overzealous moderation can have “absurd and censorious” results, says Kevin Bankston at the Centre for Democracy and Technology, a think-tank. Citing rules that prohibit sexually loaded content, Facebook last month removed a New Yorker cartoon that depicted a bare-chested Eve in the Garden of Eden. It also routinely removes its users’ photos of breast-feeding if they show the mother’s nipples, however unsalacious the picture may be.
Commercial concerns can trump consistency. In July Twitter briefly suspended the account of a journalist who had published the e-mail address of a manager at NBC while criticising it for lacklustre coverage of the London Olympics. Twitter admitted it had monitored tweets that criticised the firm (a business partner) and vowed not to do so again. Automated systems can also be too zealous. Citing a copyright violation, YouTube’s robots briefly blocked a video of Michelle Obama speaking at the Democratic Party convention on September 4th (perhaps because of background music). In August official footage of NASA’s Mars landing suffered the same fate. Jillian York at the Electronic Frontier Foundation, a free-speech group, thinks some services refuse to host any images of nudes, however innocent or artistic, because they can trigger anti-porn software.
Aware of the problem, web firms are trying to improve their systems. Facebook’s reporting tool now helps users resolve simple grievances among themselves. Tim Wu at Columbia Law School speculates that video-hosting services may one day ask committees of users to decide whether to allow sensitive footage to be shown in their countries. Europeans unvexed by nudity might then escape American advertisers’ prudish standards. But it would be hard to enforce on social networks that prize their cross-border ties.
Simpler remedies might make users happier. Rebecca MacKinnon, an expert on internet freedom, says web firms act as “legislature, police, judge, jury and executioner” in enforcing moderation policies and should offer their members more opportunity to appeal. Marietje Schaake, a Dutch politician helping to formulate European digital policy, thinks web users wanting to challenge egregious judgments need more help from the law.
Changing the law in some countries could help platforms avoid bad decisions. Some governments menace web firms with antiquated media laws that consider them publishers, not just hosts, of their users’ content. In 2010 an Italian court handed down suspended jail sentences to three Google executives after a video showing the bullying of a disabled boy appeared on YouTube—even though the firm removed it when notified. Sites in countries with fierce or costly libel laws often censor content the moment they receive a complaint, regardless of its merit. England (Scotland’s legal system is different) is changing the law to grant greater immunity to internet platforms that give complainants easy access to content originators.
Some users worry more about avoiding offence than about the risk of censorship. The majority see things the other way round. So internet firms will never please everyone. But good laws at least point them in the right direction.