Australia's online violent content law unlikely to go viral anytime soon
16 July 2020 07:05
Anyone reading statements coming out of the meeting of the world’s top 20 economies in Japan last month would have been forgiven for thinking Australia had gained a new level of influence in the group.
A G20 statement urged online platforms to “step up the ambition and pace” of their efforts to prevent terrorist content being “streamed, uploaded or re-uploaded,” and referred to “violent extremism conducive to terrorism.” The wording was very close to that of legislation recently passed in Australia.
Canberra pushed that legislation through parliament in just one week, following a massacre at two mosques in the New Zealand city of Christchurch. The new law could see the local employees of US-based and other foreign companies jailed in Australia if their employers fail to remove extreme content from their services.
The alleged gunman in the Christchurch attack is an Australian who broadcast the shooting live on Facebook, which then struggled to remove copies of the video as they were re-posted.
Australia was quick to welcome the G20 statement, and to draw parallels between the sentiments expressed by the top economies and the new law, known locally as the “abhorrent violent material law.”
But as a lawyer told MLex, it’s very easy for both Australia and the G20 to welcome laws that are so far untested. And international expressions of support for Australia’s tough stance on violent video content shouldn’t be mistaken for a global policy triumph.
Observers suggest that the call for digital platforms to develop technology to more closely monitor streaming content — something they have repeatedly struggled to do — can hardly be described as revolutionary.
Instead, they say Australia should look more closely at national commitments following the G20, rather than merely welcoming encouraging statements. And what’s now clear is that there aren’t many such commitments to be found.
Policymakers in Canberra may deserve credit for getting the need to regulate online content onto the agenda internationally, but to claim that the abhorrent violent material law will usher in similar legislative changes elsewhere may be a step too far.
Defamation tussle
The G20 statement said little about the need to change laws, however much it reflected a groundswell of national discussion in the US and the EU suggesting that platforms should take more responsibility for violent content.
But in Australia, where demands for companies to take more responsibility are now being written into law, a policy disconnect is forming.
In the past year, the government has introduced legislation affecting technology companies with little or no industry consultation, and is warning of even tougher laws in the future.
Yet the belief among Australian policymakers that big technology companies already bear sole legal responsibility for the content they host was challenged recently by a court ruling in the state of New South Wales.
The Supreme Court of New South Wales ruled that media companies Nine Entertainment and News Corp were responsible for defamatory user comments posted on Facebook in response to their news content.
Yet the specific circumstances of that case are unlikely to offer platforms complete comfort.
The judge found that the news companies hadn’t used the comment-review tools that Facebook had made available to them. But if the same defamatory comments had appeared on a platform such as Twitter, where a media organization has no power or responsibility to delete someone’s comment, the outcome of the case could have been different, and legal responsibility would likely have rested with the technology company.
Technology taskforce
Australian policymakers’ belief that platforms bear legal responsibility for the content they publish doesn’t mean Canberra has given up on its drive to push US-based and other technology companies into accepting codes of conduct that sit alongside legal requirements.
In a report published at the same time as the G20 meeting, an Australian government and industry grouping laid out recommendations for companies following the introduction of the country’s abhorrent violent material law.
Staff from Amazon.com, Facebook, Microsoft, Twitter and YouTube, alongside their peers from telecoms operators Optus, Telstra, TPG and Vodafone, worked on the 15-page report, which set out nine areas of agreement between the government and industry, including “prevention; detection and removal; transparency; deterrence; and capacity building.”
The report included plans for a “testing event” before the end of 2020 to gauge digital platforms’ responses to extreme violent content being posted on their services. That “fire drill” will also be a test to determine which parties should be held to account if companies fall short.
Yet observers say that more had been expected from the report, which failed to clear up confusion sown by the law’s rapid drafting, including a lack of clarity over whether the rules cover infrastructure providers such as cloud service Amazon Web Services, alongside Facebook and YouTube.
This is another reminder that the ink on the new law is barely dry.
Until Australia demonstrates that its controversial law is workable, Canberra’s international campaign may prompt other governments merely to pay lip-service to its spirit rather than replicating it in their own jurisdictions.