Twitter, Facebook policies tested by world leaders' posts on Covid-19
01 Apr 2020 12:00 am by Mike Swift
Like many online services, Twitter has seen traffic surge during the Covid-19 pandemic, even as its physical infrastructure, advertising revenue and policy rules have come under unprecedented pressure.
Traffic is up 23 percent over the same period in 2019, with about 30 million additional users coming to the service every day on average. But advertising has been hit so hard that Twitter in recent days was forced to withdraw its revenue and operating income guidance for the quarter, and its operations teams are struggling to get the components to keep its data centers humming.
Those problems, however, may pale before the policy headaches Twitter and other social media platforms are facing. The level of fear around the virus, the significant unknowns about how it spreads, and the dangerous utterances of some world leaders about Covid-19 have put pressure as never before on the rules of conduct for the Internet’s public square.
Ultimately, the Covid-19 crisis is forcing social media platforms to refine rules about harmful content and write new ones on the fly that are likely to place firmer limits on what can be said on social media, even after the pandemic ends.
One world leader crossed those lines this week and saw his Covid-19-related posts deleted by both Twitter and Facebook. Still unknown is what Twitter will do should the world leader with perhaps its most controversial account, US President Donald Trump, cross the firm line that the San Francisco company drew around Covid-19-related content in recent days.
“WE CANNOT LET THE CURE BE WORSE THAN THE PROBLEM ITSELF,” Trump tweeted to his 75 million followers on March 22, referring to self-isolation rules, in a tweet posted five days before Twitter published an updated list of rules on harmful Covid-19 content. “AT THE END OF THE 15 DAY PERIOD, WE WILL MAKE A DECISION AS TO WHICH WAY WE WANT TO GO!”
That was a controversial sentiment when the president tweeted it, but even now, after the severity of the crisis has become more apparent, it’s unlikely that tweet would be taken down if it were posted today.
Bolsonaro steps over the line
Twitter took down two videos posted by Brazilian President Jair Bolsonaro for misinformation concerning the coronavirus, including one in which Bolsonaro toured several small shops, greeted locals, shook hands with people and asked whether they “should be allowed to work or not.”
That video, Twitter decided, was a clear violation of its policy that it will not allow content that tries to influence people to do something that could expose them to the virus.
Specifically, Twitter said that after March 27, it would remove tweets that included: “Denial of global or local health authority recommendations to decrease someone’s likelihood of exposure to COVID-19 with the intent to influence people into acting against recommended guidance, such as: ‘social distancing is not effective’, or actively encouraging people to not socially distance themselves in areas known to be impacted by COVID-19.”
The following day, Facebook took down a video Bolsonaro had posted on Facebook and Instagram in which the president said "hydroxychloroquine is successfully working everywhere" as a cure for Covid-19. The removal came Monday, about 24 hours after Twitter deleted Bolsonaro’s posts.
Bolsonaro appears in the deleted video talking to a street vendor, saying he has “heard people want to work.” He tells the vendor that people younger than 60 should return to their jobs, and he describes hydroxychloroquine as “working everywhere” against the coronavirus, even though tests of the drug are still under way.
Facebook has removed harmful misinformation since 2018, including false information about measles in Samoa, where it could have furthered an outbreak, and rumors about the polio vaccine in Pakistan, where it risked harm to health aid workers. Since January, Facebook has applied this policy to misinformation about Covid-19, a company spokesman said.
It was Bolsonaro’s false claim that hydroxychloroquine cures Covid-19 that put the president over the line in Facebook’s view, it is understood, because he was specifically encouraging people to try a drug that could harm them. Bolsonaro is so far the only world leader to have Covid-19 posts taken down on Facebook’s platforms.
While Facebook will not disclose who made the call to take down Bolsonaro’s posts, Twitter’s Covid-19 takedown decisions are made by its Trust & Safety team under Vijaya Gadde, Twitter’s legal, policy and trust and safety lead, who tweets at @vijaya. Gadde has been at Twitter for nearly nine years and has served as its general counsel and top legal officer for nearly seven.
Brazil’s president hasn't commented on having his posts taken down by Twitter and Facebook. Today, Bolsonaro posted a news video in which an employee at a food distribution center in Belo Horizonte said food supplies had been interrupted and that the local governor was responsible for the expected chaos.
That information was quickly vetted and discovered to be false, and Bolsonaro deleted the posting himself.
Chain of circumstances
Like Facebook’s, Twitter’s rules of engagement require a chain of circumstances to prompt a takedown. First, the posted content needs to contradict public health knowledge or known scientific facts about safe conduct during the pandemic, such as recommending people use a drug whose effects are not fully known.
Second, the content needs to encourage or influence people to do something that could harm them or others, such as ignoring social distancing rules. If both conditions are met, under the rules of engagement for both Facebook and Twitter, the offending content will be scrubbed from the platforms.
Trump’s tweet that the coronavirus cure could be worse than the disease might be a questionable opinion, but it falls short of specifically urging people to ignore public health recommendations.
And while Trump tweeted March 21 that hydroxychloroquine has “a real chance to be one of the biggest game changers in the history of medicine,” he did not overtly urge people to take the drug now to avoid the virus. Reports later surfaced that some people who took the drug were harmed, after Trump made similar comments at a televised news conference.
Under Section 230 of the Communications Decency Act, a foundational US Internet law that dates to 1996, interactive online platforms have broad legal immunity for content posted by users. But they also have a free hand to take down content they deem to be harmful or illegal.
One reason Congress passed Section 230 was growing concern that online platforms would ignore illegal or harmful content for fear that actively editing their platforms would put them at greater legal risk than simply ignoring problem content. In the 1990s, two then-dominant interactive online platforms, Prodigy and CompuServe, took different positions on moderating content, with Prodigy promoting its efforts to screen out abusive or illegal content.
Yet it was Prodigy, not CompuServe, that found itself in legal jeopardy after a New York judge in 1995 ruled Prodigy was liable for defamatory content posted about a controversial Wall Street financier — in large part because the platform promoted the editorial control it exercised and was therefore not just a neutral distributor of content, such as a bookstore.
“The answer to this was straightforward and painful: online services needed to avoid claiming any control over their users’ content,” author Jeff Kosseff writes in his history of Section 230, “The Twenty-Six Words That Created the Internet.”
The protective bulwark of Section 230 theoretically allows US online platforms to generally ignore harmful content. But problems such as child pornography, terrorist content and, now, misleading Covid-19 statements by world leaders are pushing platforms to set out ever-more detailed rules for what content is acceptable and what is not — and to enforce those rules.
Platform companies are acutely aware of the growing challenges to Section 230, such as the proposed EARN IT Act, which would tie platform immunity to child protection standards established by a new government commission to be chaired by an appointee of the US attorney general.
A clear set of enforceable rules to take down illegal or harmful content — rules that are even enforced against world leaders — could be one way to limit the push to erode Section 230. Whether President Trump will cross Twitter’s clear rules on Covid-19 remains to be seen, but if he does and Twitter acts, that would demonstrate that the platform is serious about weeding out harmful content.
—With assistance from Caio Rinaldi in Sao Paulo.