Critics take on ‘nonsense’ EU plan to fight illegal online content

The European Commission published a new recommendation Thursday pushing Big Tech to take down flagged terrorist propaganda within an hour.

But what happens if Facebook, Google, Twitter and their peers fail to comply?

Nothing — no fines or penalties. As authorities in France, the U.K. and Germany warn that terrorist groups are still spreading propaganda online, experts who track the spread of illegal content say the Commission’s non-binding approach is not forceful enough to stop it.

Not only are big companies exempt from legal trouble if they leave illegal content up for too long, they point out, but dozens of smaller niche sites have also not signed up for voluntary self-policing, making them playpens for hate-spreading trolls and radicalizers.

“It’s [more] political nonsense than an actual policy document. It’s obviously spin,” Joe McNamee, the director of European Digital Rights, said of the Commission’s recommendations.

Hate speech lives on

According to the Commission’s new recommendation, which was unveiled with fanfare by four commissioners, companies should remove flagged ISIS propaganda and other terrorist content within an hour.

But the fact that such rules are not written into any new law means they have no sting.

Meanwhile, hate speech keeps spreading.

Analysis by researchers at the Counter Extremism Project shows that on Twitter, terrorist content is still staying up beyond the one-hour limit, sometimes for days.

As recently as a few weeks ago, it was easy to watch bomb-making videos that had been posted to video-sharing site Dailymotion, and were being shared on the encrypted messaging app Telegram in a chatroom called “Le Moujahid Solitaire” (The Lone Mujahid), according to posts seen by POLITICO.

Commission officials are aware of the loopholes. Even as he prepared to unveil the new guidelines, Digital Vice President Andrus Ansip said that “terrorists [were] able to adapt and they are moving to smaller platforms,” referring to a story published in POLITICO last month.

Indeed, many of these platforms, including Telegram, are not taking part in the Commission’s existing programs fighting terrorist content.

Decisions, decisions

The Commission has thought hard about legislating.

In September, it vowed to assess improvements or failures in platforms’ ability to remove illegal content. A few years ago, policy officials prepared draft rules on the topic, which were promptly shelved.

Policymakers could dust off their plans for legislation and update them for today’s circumstances.

But the Commission is in a bind, largely because of a piece of legislation that went on the books in 2000: the E-Commerce Directive.

The foundational internet law lays out a clear liability regime for internet companies, large and small. It says that, unless sites are actively monitoring the content that gets uploaded to their platforms, they cannot be held legally responsible for it.

Critics already argue that the Commission’s practice of pushing voluntary regimes that demand monitoring of a range of content types is eroding the legal certainty the E-Commerce Directive provides.

The closer the Commission moves to legislating on the removal of illegal content, the more likely it is to call this law into question.

“This recommendation undermines online rights as well as the legal foundation of the entire European internet economy,” said Maud Sacquet, a senior public policy manager at the Computer & Communications Industry Association.

National pressure growing

For now, the persistence of such content is mainly a concern for activists and law enforcement authorities.

But it’s likely to turn into a political problem as the bloc gears up for European Parliament elections in 2019.

Between now and then, the Commission faces a dilemma: stick with its current voluntary approach to avoid inflaming free-speech opposition — or present hard, tangible rules that would force platforms to police their content.

Several countries are trying to sway the EU into action, and the political pressure is expected to increase this year.

France has already said it will push for European legislation in the coming months if internet companies fail to remove illicit content, especially content promoting radicalization, within one hour. And in Germany, regulators are ironing out the kinks of their own hate speech rules, which threaten internet platforms with fines of up to €50 million if they don’t remove certain kinds of content fast enough.

The office of Commission President Jean-Claude Juncker knows that something must be done. Staffers have gone on a listening tour among cabinets and officials within the Commission, according to officials involved in the talks.

While some commissioners, like Ansip and justice chief Věra Jourová, seem to think the voluntary mechanisms are working, staffers in policymaking units and cabinets want bigger platforms to face heavier rules and regulations.

Which means that, for now, the Commission is at odds with itself on illegal content.

“Commissioner Ansip believes the EU’s liability exception is one of the cornerstones of e-commerce and the open internet. Now other commissioners go in the opposite direction,” said Dutch Liberal MEP Marietje Schaake.

MEPs and critics are growing impatient.

It’s time for the EU “to put meat on the bones of the E-Commerce Directive and be more specific,” said Jens-Henrik Jeppesen, the director of European Affairs for the Center for Democracy and Technology.

Targeted rules on dealing with specific types of content would be a good place to start, he said.

But with the EU campaign for the next Commission about to kick off, top officials have only months in which to launch a new public consultation, gather research and draft new legislation.

Otherwise, the task of dealing with the scourge of terrorist propaganda, and of finally updating the EU’s two-decade-old liability regime, will be passed on to the next Commission.