EU, China offer contrast over curbing intrusive tech that blossomed in Covid era

03 August 2022 06:00


In the worldwide fight against Covid-19, many governments relied heavily on technology to fight the virus and track or monitor those infected, notably through artificial intelligence and biometric systems.

As Covid's threat recedes, however, hard questions have surfaced over the extent to which intrusive technologies such as facial-recognition systems will remain in use or be proscribed. Differing approaches in Europe and China offer an instructive contrast.

The EU will view any legacy of anti-virus technology through the prism of the tight standards set by its General Data Protection Regulation, or GDPR, and ongoing efforts to create comprehensive regulation on the use of artificial intelligence. Strict rules are in preparation to ensure that invasions of people’s privacy tolerated at the height of Covid-19 are rolled back.

In China, measures employed to monitor the pandemic's spread are still very much present and lack strict regulation and enforcement. In this respect, an EU-style, post-Covid reckoning on technological overreach doesn't appear to be on the cards.

EU philosophy

When the current European Commission took office in 2019, its president, Ursula von der Leyen, pledged to propose laws on the use of AI tools, focusing on their ethical use and trustworthiness and taking a human-centric approach, in a bid to give European companies a competitive advantage over other regions.

That year, the bloc's antitrust chief Margrethe Vestager warned that Europe would lose the battle with China and the US if it focused on allowing organizations to trawl as much data as possible and spending its way to technological superiority. AI needed "a greater purpose," she said.

The EU’s draft legal framework for AI — finally proposed last year after rounds of expert-group consultations and ethical guidelines — sets red lines for companies and focuses on high-risk applications of the technology. This means that facial-recognition and other biometric tools would face stricter market-entry conditions than less invasive tools, plus monitoring and enforcement for non-compliance.

The European Parliament and EU governments started discussions earlier this year to hammer out a final version of the AI Act, but their talks are expected to continue for another two years.

The main differences concern whether to allow the use of invasive technologies, including facial-recognition tools, at all. In particular, EU lawmakers have expressed concerns about their widespread use and some are calling for a complete ban to prevent mass surveillance in public spaces.

Up to now, the 27 EU governments have taken differing national approaches. In some countries, such as Hungary, the use of biometric tools has been allowed, albeit only by the Justice Ministry for law-enforcement purposes. Others, such as Italy, have been looking at ways to tackle illegal AI use through the GDPR privacy rulebook.

European scrutiny

Meanwhile, the use of facial-recognition tools has sparked regulatory scrutiny across Europe; privacy watchdogs seem unwilling to wait for the AI Act and are jumping in to enforce in situations that lack an obvious consent or public-interest justification.

In a notable development, the Italian and Greek privacy regulators have this year sanctioned US company Clearview AI, which operates a database with more than 10 billion images of faces of people from all over the world, extracted from public sources via web scraping. They each fined Clearview AI 20 million euros ($20.2 million) for applying biometric monitoring to individuals and banned it from any further collection of their citizens' data.

These penalties come after an alliance of civil society organizations filed complaints in the UK, Austria, France, Italy and Greece for Clearview AI's mass surveillance practices. More penalties are expected to follow, and the French regulator CNIL recently said it was close to wrapping up its probe.

On the use of AI tools in the context of the Covid-19 pandemic specifically, some EU privacy enforcers have been proactive in applying the GDPR's stringent conditions. Many were wary of the widespread use of phone apps to track people's exposures or vaccination status and they have also moved to probe other health measures that compromised privacy.

In Belgium, for example, two airports that carried out checks to identify travelers with a temperature of more than 38°C and questioned them on possible symptoms linked to Covid-19 were fined by the data protection authority for relying on the wrong legal basis for processing such sensitive data.

In Luxembourg, investigations into Covid-19 monitoring tools are coming to a head. The data protection authority's head said that at the height of the pandemic in 2020, it received many complaints in particular around employee monitoring systems in workplaces.

With the AI Act far from being a done deal, more of these data protection fines are expected to follow in coming months.

China’s health-code systems

In China, widespread use of facial recognition for monitoring citizens’ health status was central to the pandemic response and debate has been heated about what boundaries should be set for its use by government departments and businesses.

Questions began with the wide uptake of a digital color-coding system aimed at identifying the health status of citizens. Part of China's zero-Covid strategy, the system has drawn new attention following its misuse by a local government.

The digital health-code app, employed since 2020, contains information about people’s identities, Covid-19 vaccination status, test results, diagnosis details and places they have recently visited. Facial recognition is part of the technology: it is required for registration and used to verify the identity of those requesting access.

Privacy concerns around the health-code system, including its use of facial recognition, have been persistent due to the sensitivity of the information gathered.

These were amplified by claims of misuse in China’s Henan Province recently. The local government was alleged to have used information gathered through the digital health code to maintain social order beyond Covid-19 controls, prompting calls for prosecutors and lawmakers to investigate.

A municipal watchdog did conduct a probe, but it issued penalties limited to “intra-party punishment” involving demotions and the loss of titles for some local Chinese Communist Party officials — a response that raised questions about whether those responsible had been properly held to account.

The episode helped focus public attention on how to ensure authorities don't misuse a tool that has amassed a huge trove of personal data.

The ‘digital sentry’ dynamic

Even before the Henan controversy, concerns had arisen about the widespread deployment of health-condition verification devices, known as "digital sentries," in cities including Shanghai, Beijing and Shenzhen, as authorities there scrambled to curb Covid-19 cases.

The devices were widely deployed in communities and public venues such as shopping malls and hospitals to speed up inspections of people’s health status to identify and track virus carriers. The presence of multiple players — including equipment makers, government departments and property-management companies — led to questions about who the data controllers and processors were and how their responsibilities were defined.

Government departments are expected to clarify the roles of different parties involved, especially those within government, and assign corresponding responsibilities.

In a decision in May boosting measures for verifying people's health status, however, Shanghai lawmakers dodged the tricky question, instead requiring broadly that “the collection and processing of health data should comply with China’s data privacy law”.

They did, however, emphasize property-management companies' obligations in verifying the identities of people trying to access public venues; some of those companies had begun to insist on facial recognition as the only means of verification. In June, a court in northern Tianjin ruled against one such company's practice, saying alternative means of verification must be available.

Legislative catch-up

These individual cases serve to throw into relief the absence of nationwide rules on facial recognition in China, despite repeated calls for lawmakers in Beijing to pass dedicated legislation to protect biometric information, including faces, voices, gaits and emotions.

China’s Personal Information Protection Law, data-privacy legislation that came into effect last November, assigned the cyberspace regulator to draft rules or standards on personal-data protection relating to facial-recognition technology and artificial intelligence. But it was not until recently that China’s top legislature revealed that a government department is drafting measures on image collection and management in public places.

Last July, the country's highest court issued an interpretation on the application of law for handling civil lawsuits involving the use of facial-recognition technology in processing personal information. It highlighted specific situations as problematic, including abusive use of facial recognition in public areas.

Last year, draft security requirements governing online verification systems using facial recognition went through a public consultation, but a final version has yet to be published. The draft imposed conditions for when the technology should be used and set limits on the scope for using data related to facial recognition.

In both Europe and China, comprehensive legislation on facial recognition and wider AI applications has yet to land, with piecemeal regulation and enforcement holding sway. A broad difference, though, can perhaps be seen in an EU focus on whether such tech should be used at all, compared with China's emphasis on preventing its misuse while accepting its spread.

It remains to be seen just how far such a divergence might see Europe try to put the genie back in the bottle and China steer clear of a dystopian surveillance society.

- Additional analysis by Wang Juan.
