By Hortense Goulard and Joanna Plucinska
The European Commission on Tuesday will try to shame social media companies into taking more aggressive action to combat hate speech.
Twitter, Facebook and YouTube were among the major companies that signed onto the Commission’s code of conduct in May, pledging to review posts flagged by users as illegal hate speech. But early results are drawing criticism from lawmakers across Europe, alarmed by spikes in hate crimes in some areas.
Of the 600 posts that were deemed illegal and flagged by associations, the social media companies responded to 316 — about 40 percent of those within one day and a further 48 percent within two days. In total, 163 posts were deleted, while the remaining 153 were left up because the companies judged that they violated neither national legislation nor internal company guidelines, according to the Commission.
“It is crucial for the EU to lead in ensuring the rule of law has meaning in a hyper-connected world,” said Marietje Schaake, a Dutch member of the European Parliament from the Alliance of Liberals and Democrats for Europe. “I am very cautious when private companies are asked to, or volunteer to, regulate speech … Rules must include the proper judicial and democratic oversight safeguards. There is a need to get this right, and to make sure that in the absence of EU legislation, we do not see self-regulation and fragmented approaches.”
Hate speech — comments inciting hatred or violence against an individual or a group based on the color of their skin, their religion or their nationality — is illegal under the EU’s 2008 framework decision against racism and xenophobia. The e-Commerce Directive, which governs online platforms’ liability for content, requires social media companies to delete hate speech once they know users are breaking the law.
Germany has taken a leadership role in discussions in Brussels and is considering stronger national laws. Authorities are currently monitoring how many racist posts reported by users of social media sites are deleted within 24 hours. German Justice Minister Heiko Maas has pledged to take legislative measures if the results are still unsatisfactory by March.
Companies “have the legal obligation to delete posts when they learn that users are committing criminal offenses, and when they don’t, it must have consequences,” said Gerd Billen, a secretary of state at the justice ministry who heads the country’s task force against hate speech.
Julia Reda, a German MEP from the Green party, blames a lack of cooperation between social media companies and law enforcement.
“The larger issue in my view is that police reports are not followed up properly,” she said. “Almost anyone who has ever tried to bring threats and hate speech via social media to the attention of law enforcement can attest to the lack of awareness of the topic and a lack of digital training among the police. Identifying the perpetrator is often not the problem. Many people seem perfectly comfortable posting racist or sexist threats under their own name because they don’t fear law enforcement.”
The industry says it is trying hard but needs clearer direction from regulators.
“It’s not clear what they would like to have done. There are different interpretations of active or passive platform involvement in different rules,” said Siada El-Ramly, the director general of EDiMA, a trade group representing internet platforms. “As it stands, almost no platform can enjoy the limited liability regime. The law has become targeted at specific areas, like copyright. It’s quite concerning.”