Internet platforms may get caught in patchwork of global terror-content rules

25 April 2019 00:00 by Matthew Newman

As terrorists take advantage of social media platforms to spread propaganda, recruit followers and live-stream shooting sprees, some lawmakers around the world have decided that enough is enough.

Australia passed a tough new law this month that could bring huge fines for platforms and even jail terms for their executives, after one of its citizens live-streamed his mass shooting attack at two mosques in New Zealand. The EU looks set to pass legislation that’s almost as strict, while the US is weighing its options.

Dig into the details, though, and differences begin to emerge. What is the definition of “terrorist content”? How long should platforms be given to take such material down — or indeed should they prevent it from being posted in the first place? Is it their fault if terrorists find ways to trick their algorithms, for example by rotating or speeding up a copy of a blocked video?
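How those tricks work is simple enough to sketch. The short Python example below is a hypothetical illustration, not a description of any platform’s actual system: it shows that an exact, byte-level fingerprint of a file stops matching the moment a single byte changes, which is why a re-encoded, rotated or sped-up copy of a blocked video can slip past a naive blocklist.

# Minimal sketch (hypothetical, for illustration only): why exact, byte-level
# fingerprints fail against trivially modified re-uploads.
import hashlib

def fingerprint(data: bytes) -> str:
    """Return an exact-match fingerprint of a file's raw bytes."""
    return hashlib.sha256(data).hexdigest()

# Stand-in bytes for a banned clip (made-up data, not a real video).
original = b"banned-video-bytes-v1"
blocklist = {fingerprint(original)}

# A copy altered by a single byte, analogous to re-encoding, rotating
# or changing the playback speed of the clip.
modified = b"banned-video-bytes-v2"

print(fingerprint(original) in blocklist)   # True: the exact copy is caught
print(fingerprint(modified) in blocklist)   # False: the altered copy slips through

In practice, platforms use perceptual fingerprints designed to survive such edits, but every added tolerance opens new avenues for evasion, which is the cat-and-mouse dynamic the question above alludes to.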

Critics point out that too-strict laws could harm freedom of speech — which is itself a right protected by law in many western countries.

And if the line between free speech and incitement to hatred is a thin one, how should a global platform respond when it has conflicting obligations in two different jurisdictions? What if a US company blocks content to abide by Australian law, and in so doing breaches US laws on free expression?

This suggests that rules on terrorist content could prove even more difficult for Internet platforms than privacy rules such as the EU’s General Data Protection Regulation. With privacy, at least, they have the option of applying the strictest regime — the GDPR in this case — as a global standard, since there are no laws pushing the other way, at least in the western world.

To comply with terrorist-content and free-speech rules at the same time, the world’s biggest platforms may end up having to tailor their business models to meet the regulatory standards of multiple jurisdictions. And even that might not be enough, given that many laws have extraterritorial reach.

Governments and lawmakers, too, may have to start thinking about global coordination to avoid putting Internet platforms in an impossible position. The first sign of that could come in May, when New Zealand’s Prime Minister Jacinda Ardern will co-chair a summit on terrorist content in Paris.

Aussie rules

The toughest approach has come from Australia, which rushed through legislation earlier this month after the government bristled at Facebook’s perceived slowness in removing a live stream of the March 15 terrorist attack in Christchurch, New Zealand — for which an Australian man has been charged.

The attacker live-streamed his 17-minute shooting spree on Facebook and the original video remained online for more than an hour after the attack. As a result of this delay, Australian lawmakers vowed to regulate the platform as though it were a television broadcaster.

The Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act 2019 sailed through Australia’s legislative process, although lawmakers were assured that there would be a chance to review the measures after an upcoming federal election.

The law gives Australian authorities the power to pursue international technology companies found to have broadcast the banned content — even when the images were filmed overseas by foreign citizens with no links to Australia.

Penalties include fines of up to 10 percent of a company’s global turnover and even jail terms of up to three years for individual executives. A separate part of the law sets out penalties for individuals who fail to refer details of offensive material to Australian Federal Police.

The Digital Industry Group, which represents the likes of Facebook, Google and Twitter in Australia, said the law failed to take account of the reality of operating an Internet platform, where huge amounts of content are uploaded every second.

Moreover, it “does nothing to address hate speech, which was the fundamental motivation for the tragic Christchurch terrorist attacks,” the group’s director Sunita Bose said.

Other critics say the law is weighed down by loopholes likely to spark legal challenges. This, they say, is a result of the high number of people the rules could affect and the legislation’s failure to define what it means for tech companies to remove content “expeditiously”.

EU legislation

While technology companies grapple with the sudden arrival of Australia’s new regime, they’ll also have an eye on the horizon as the EU pushes for similar — if slightly less draconian — legislation.

Under the proposed rules, Facebook, Google’s YouTube and other online platforms would have to take down terrorist-related content within an hour of being notified. Platforms that failed to abide by orders would face fines of up to 4 percent of their global turnover — the same level as under the GDPR.

The draft bill was approved by the European Parliament last week, eight months after being proposed by the European Commission, the EU’s executive arm — breakneck speed by EU standards. National governments and the commission put tremendous pressure on lawmakers to approve the draft, following a series of terror attacks across the continent.

The bill will now be discussed with representatives of EU governments, probably starting in September. It’s unclear when it will finally be approved, and in what form: the final text will have to be agreed by the commission, the parliament, and the Council of the EU, which represents national governments.

A commission proposal for mandatory upload filters was removed from the draft approved by the parliament, after tech companies and free-speech activists protested that it would lead to legitimate content being taken down by platforms erring on the side of caution.

For example, the Washington-based lobby group Center for Democracy and Technology argued that filtering removes videos that may be useful as evidence in prosecuting human-rights abuses. It told EU parliament members in a letter that YouTube has deleted 100,000 videos preserved by the Syrian Archive, a civil society organization documenting human-rights abuses in Syria.

US reaction

There are some calls for similar legislation in the US, but the situation there is complicated by an existing law that curtails Internet companies’ liability for the content posted on their platforms.

Section 230 of the 1996 Communications Decency Act states: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

Facebook, Google and Twitter have all used Section 230 to defeat numerous lawsuits claiming they enable terrorism by letting organizations such as Hamas post messages on their platforms. The dating app Grindr also recently relied on the law to escape a lawsuit filed by a New York man, who claimed he’d experienced months of harassment when an ex-boyfriend impersonated him on the app.

For some lawmakers, though, that shouldn’t be an excuse for inaction. Representative Bennie Thompson, a Democrat from Mississippi and chairman of the Committee on Homeland Security, demanded answers in a March 19 letter to Facebook, Twitter and YouTube about their failure to promptly take down footage of the Christchurch attack.

“Your companies must prioritize responding to these toxic and violent ideologies with resources and attention,” Thompson wrote. “If you are unwilling to do so, Congress must consider policies to ensure that terrorist content is not distributed on your platforms — including by studying the examples being set by other countries.”

Members of the Homeland Security Committee followed up with letters on April 10 demanding to know how much the companies spend on counter-terrorism programs, how many people they’ve hired to work on the issue, and the number of experts on staff who specialize in white nationalist and foreign terrorist organizations.

Now regulators and lawmakers, both Democrats and Republicans, are suggesting that tech companies may be relying too heavily on Section 230 to escape liability for what users post on their websites and apps, hinting that the protection they’ve enjoyed could be taken away.

US House Speaker Nancy Pelosi, congresswoman for San Francisco, recently warned tech companies that the law could be “in jeopardy” if they don’t treat it “with the respect that they should.”

“And it is not out of the question that that could be removed,” Pelosi said.
