William Echikson, a former senior policy manager at Google, is head of the Digital Forum at the Centre for European Policy Studies.
David Ibsen is Executive Director for the Counter Extremism Project (CEP), a not-for-profit, international policy organization formed to combat the growing threat from extremist ideologies.
The European Commission has proposed legislation targeting illegal online terrorism propaganda. While an important step in the fight against online extremism, the proposal looks set to alarm free speech advocates while failing to curb online hate.
The Commission proposal is limited to measures designed to fight terrorism and political extremism, not all types of illegal material. That’s good. Different types of illegal content require different policies. One should not equate terrorist incitement with counterfeit products.
It is also promising that the Commission aims to create a single Europe-wide model, superseding problematic national moves among its 28 member states. Germany has already enacted a controversial law, the Network Enforcement Act, known as NetzDG.
The Brussels-based think tank Centre for European Policy Studies and the Berlin office of the Counter Extremism Project have teamed up to study the NetzDG, seeing it as a key test for combating extremist online speech.
Under the NetzDG, which came into effect at the beginning of 2018, online platforms with more than two million German users face fines of up to €50 million if they do not remove "blatantly illegal" hate speech and other postings within 24 hours of notification.
Supporters see the legislation as an efficient response to the threat of online hate speech and extremism. Critics view it as a draconian regime that privatizes censorship, with social media platforms responding to the new liability risk by engaging in unnecessary takedowns.
The truth lies in between.
On New Year's Day, when the NetzDG came into effect, Twitter and Facebook took down a post by German far-right AfD politician Beatrix von Storch that accused the Cologne police of appeasing "barbaric, gang-raping Muslim hordes of men". Twitter also suspended the account of the German satirical magazine Titanic for mocking von Storch's tweet.
Since these headlines, however, no press reports of dubious false positives have emerged. No fines have been imposed, either. Although free expression groups continue to oppose the law out of censorship concerns, there seems little evidence of widespread blocking.
In July 2018, Facebook, Google (Google+ and YouTube) and Twitter issued their first six-month NetzDG report cards. Contrary to expectations, they showed that the law has generated only a trickle, not the feared flood, of takedown requests.
Nor, contrary to expectations, are the three big Internet platforms blindly pushing the delete button. Removal rates ranged from 21.2 percent for Facebook down to only 10.8 percent for Twitter.
At the same time, the goal of eradicating extremist content from the Internet remains far off. Just recently, the Counter Extremism Project released a study showing that YouTube's efforts to proactively remove ISIS terrorist content are failing. Some 91 percent of ISIS videos studied were uploaded more than once; 24 percent of terrorist videos remained online for more than two hours. YouTube is losing this game of whack-a-mole with ISIS campaigners.
Additional measures are required to fight this alarming phenomenon. Under the NetzDG, tech companies face no obligation to stop re-uploads: every time the same content reappears online, it must be flagged and checked again. This is neither efficient nor effective. The European Commission should therefore impose binding obligations on platforms to work with Europol to build a comprehensive database of hashes of known harmful content, preventing re-uploads.
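The mechanics of such a hash database can be illustrated with a minimal sketch. Everything below is hypothetical (the function names, the exact-match SHA-256 scheme); production systems, such as the industry's shared hash database, typically rely on perceptual hashing so that slightly altered files are still caught.

```python
# Illustrative sketch of hash-based re-upload filtering.
# Assumption: exact-match SHA-256 fingerprints stand in for the
# perceptual hashes a real shared database would use.
import hashlib

# Hypothetical shared database of hashes of removed terrorist content.
known_hashes: set = set()

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 hex digest of the uploaded file's bytes."""
    return hashlib.sha256(data).hexdigest()

def register_removed_content(data: bytes) -> None:
    """After a takedown, add the file's hash to the shared database."""
    known_hashes.add(fingerprint(data))

def should_block_upload(data: bytes) -> bool:
    """Block any upload whose hash matches previously removed content."""
    return fingerprint(data) in known_hashes
```

A platform would call `should_block_upload` at ingestion time, so content removed once is stopped automatically on re-upload rather than having to be flagged and reviewed again.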
The European Commission proposes three possible definitions of illegal "terrorist content". It is now up to the European Parliament and the Council to find a common classification; at present, each country adopts its own definition. Since EU member states already have an agreed list of terrorist groups, it is reasonable to expect that they will reach a consensus on a baseline of terrorist speech and content to be removed.
An effective European crackdown on terrorist content must be careful to avoid overreach.
Authoritarian governments in China, Turkey and Russia control online speech either by imposing restrictive firewalls or by issuing ultimatums which, if disobeyed, lead to the blocking of entire digital platforms. Although Europe's approach is far from that of the Chinese or Russians, it must be careful to strike the right balance between free expression and security.