Online platforms face new misinformation challenges from Covid-19 pandemic
02 Apr 2020 12:00 am by Xu Yuan
Online platforms including Twitter, Facebook and Google’s YouTube are in the spotlight over their role as global vectors for misinformation related to the Covid-19 pandemic.
With nearly a third of the world’s population under some form of lockdown, social-media sites and content platforms have played an important role in providing people with entertainment and connection to their friends and family.
But they are also an important source of information about the virus, in an age when people in many countries get more of their news from social media than from established publications. While this issue has long been known in policy circles, it has taken on new importance due to the potential lethality of fake news about Covid-19.
Misinformation about the virus takes many forms. Some, such as rumors about unproven cures, may be spread with good intentions by people who don’t know better; they range from the ineffectual but harmless, such as eating garlic, to dangerous ideas such as drinking bleach.
In other cases, fraudsters are trying to capitalize on people’s fear by selling sham cures or counterfeit protective equipment on online platforms. Another category is misinformation that is seemingly random, such as posts with fake information about the location of infected people.
Finally, there is misinformation from within governments themselves, which provides platforms with perhaps their greatest challenge. Twitter and Facebook this week took the extraordinary step of deleting posts by Brazilian President Jair Bolsonaro that contradicted guidelines from the World Health Organization and Brazil’s own health ministry.
That in turn raises questions about who holds the platforms accountable. The US-based companies that stood up to Bolsonaro have not yet dared to touch posts by US President Donald Trump, many of which contradict facts and advice from global and US health authorities. If his statements stray further from official guidance, they will face a tough decision.
Jurisdictions around the world vary greatly in the steps they have taken to enforce the removal of fake news. In very broad terms, Asian countries have taken the strictest measures, while the US has been the least prescriptive.
China’s extensive system of surveillance and censorship, in place long before Covid-19 took hold, has given the authorities broad powers to stifle misinformation. Chinese news and social-media sites broadcast government messages, and the police have made a number of arrests for spreading rumors about the virus.
Platforms and other websites have been subject to stricter rules since March 1, and could be banned from operating altogether if they are found to be hosting illegal content.
This strict approach, combined with a draconian physical quarantine of infected areas, has helped China to almost eradicate the virus.
Nevertheless, such a system only works when the message from the government is aligned with the facts. In the early stages of the outbreak in December, China suppressed reports from medical staff about the new virus and forced them to sign confessions about “spreading rumors,” costing precious days.
Of course, such an all-encompassing instrument of state control can’t be turned on and off at will, and is therefore not an option for democratic countries. Nevertheless, other Asian countries have taken steps to limit misinformation about Covid-19.
South Korea has managed to control the spread of the virus without resorting to a strict physical lockdown, due partly to extensive testing, but also by controlling the flow of information.
The authorities have collected and published extensive information about infected patients including their ages, movements, and districts of residence — among the most invasive measures taken by any democratic government.
Seoul has taken an equally active approach to restricting misinformation, ordering false content to be deleted when it is hosted on locally owned platforms such as Naver and Kakao, or blocked when it is hosted abroad on global platforms such as Facebook and YouTube.
Since the end of January, the Korea Communications Standards Commission has ordered 126 pieces of misinformation to be deleted and 37 to be blocked, out of 702 cases it has reviewed. Information can be blocked if it sows social confusion, defames an individual, or discriminates against people on the basis of nationality, region or race.
In one example, the KCSC ordered a YouTube video to be blocked for falsely stating there was a confirmed case of Covid-19 in a certain hospital, and that the government was covering up the information.
The Korea Communications Commission has also been working with global platforms to promote official information, such as by providing a link to government websites.
Japan, too, appears so far to have avoided a major Covid-19 outbreak without a strict physical lockdown. The authorities have provided very little information besides the number of confirmed cases and deaths, but have been actively promoting official accounts on online platforms to spread correct information.
Twitter also said last week that it would delete information that contradicted Japan’s public-health policies, as well as “fake news” and discrimination more broadly, and would expand its definition of “harm” with regard to misinformation in Japan.
European countries, Australia and New Zealand have not, in general, taken such extensive steps as those seen in Asia, though many are attempting to combat misinformation related to the virus within their existing legal frameworks.
A spokesperson for the Australian Competition & Consumer Commission said only that the watchdog was “working with other government agencies to identify scams and misinformation, and to raise problems with the providers of that content and the platforms that host it.”
New Zealand’s response appears to be limited to sham cures and counterfeit products. The New Zealand Commerce Commission is working “to identify and address any unsubstantiated claims about products related to Covid-19,” a spokesperson told MLex, adding that “other aspects” of misinformation fall outside its remit.
The EU’s response has been to work with platforms to urge them to pick up the pace of their actions to take down misinformation related to Covid-19, but so far has stopped short of proposing new regulations or active enforcement.
This aligns with the EU executive’s approach to fake news more generally, which has been to allow platform operators to “self-regulate” and trust, for now, that they will take down harmful content in a reasonable timeframe. A plan for stricter regulation will only emerge toward the end of this year, and will take longer still to be approved and come into force.
The EU diplomatic service has shown frustration that platforms aren’t doing enough to stop Russia-backed sources from painting the EU as inept in fighting the virus, blaming “a system of broken incentives which prevents Internet platforms from adequately protecting the public interest.”
National governments in Europe, meanwhile, are working with platforms to fight fake news. On Feb. 28, Facebook, Google, Microsoft, Qwant and TikTok promised French officials that they would promote links to official information on the government’s or the World Health Organization’s websites, and would work with fact-checkers at Agence France-Presse.
The UK government announced this week that it had set up a “rapid response unit” to directly rebut misinformation circulating online. The team will also be “working with platforms to remove harmful content and to ensure public-health campaigns are promoted through reliable sources,” a government statement said.
The exception in Europe is Hungary, which has introduced the threat of jail time for journalists found to be spreading “false information” about the virus. But the announcement of those measures this week came alongside indefinite rule-by-decree powers for Prime Minister Viktor Orbán, and follows his years-long campaign against independent institutions.
The US exception
The US stands apart even from other democracies, because it combines broad legal protections shielding platforms from liability with strong rights of free expression. Under Section 230 of the Communications Decency Act of 1996, online platforms can do as much or as little content moderation as they like.
Nevertheless, platform operators face pressure from Congress to remove harmful content, with the threat of tighter legislation hanging over them. A gaggle of the largest platforms — Facebook, Google, LinkedIn, Microsoft, Reddit, Twitter and YouTube — together pledged on March 16 to elevate “authoritative content” and to combat “fraud and misinformation about the virus.”
Retail behemoth Amazon.com earlier took steps to remove listings that claimed to be a treatment or cure for Covid-19.
The US has also been both a target and a source for politically motivated disinformation about the coronavirus. A Chinese Ministry of Foreign Affairs spokesman has publicly suggested that “it might be the US army who brought the epidemic to Wuhan,” as part of what appears to be a coordinated campaign to shift blame for the epidemic.
On the other side, US Republican lawmakers have repeated unsubstantiated theories that the Covid-19 virus originated in a Chinese research facility. A congressional candidate went further, tweeting that the virus was “man-made.”
Trump himself has repeatedly referred to Covid-19 as “the Chinese virus” — a label that, while the disease was first identified in China, critics say is a racist term that foments discrimination against people of Chinese ethnicity.
He also spent weeks minimizing the threat posed by the novel coronavirus, against all the scientific evidence. Even now he routinely speaks at odds with experts in his own administration, and attributes partisan motives to criticism of his response.
And that puts the big global platforms, all of which are based in the US, in a quandary. Does a company have a moral obligation to take down false or misleading information posted by a head of state?
The decision by Twitter and Facebook to take down Bolsonaro’s posts suggests that the answer is “yes” — so long as he’s not American.