Clearview AI's global regulatory woes suggest a future of siloed privacy regimes

17 March 2022 09:06 by James Panichi, Mike Swift

Outside the United States, Clearview AI’s business model is under threat. Australia and Canada have forced the company to shutter its local operations; Italy and the United Kingdom have imposed fines; and a posse of European regulators has launched probes.

The chances of Clearview ever returning to many of these jurisdictions now appear slim. And it’s not just because Europe’s tough privacy rules are unlikely to accommodate the company’s indiscriminate collection and storage of images to help track down criminals.

Australian and Canadian privacy enforcers have fired warning shots at the law-enforcement agencies that trialed the controversial software, concluding that by using Clearview’s services, they too had violated privacy laws and could face costly and damaging lawsuits.

That means that both Clearview and its clients could wind up in court if they attempt to repeat a rollout of the software — no matter how successful the technology may be in helping police identify and apprehend criminals through the use of biometric data.

In the United States, however, the controversial company is on firmer ground, despite facing lawsuits brought by the American Civil Liberties Union and by Vermont’s attorney general, as well as a class action in Illinois.

Last month, the US Patent and Trademark Office awarded Clearview a patent; then the company’s technology was rated the most accurate in the US National Institute of Standards and Technology’s Facial Recognition Vendor Test. This recognition could translate into greater access to capital and more resources with which to pay fines and fight lawsuits.

These vastly different regulatory landscapes suggest that Clearview’s future success — or failure — could become emblematic of the growing gulf between privacy standards in the US and the rest of the world. The company’s fate may also reveal whether a tech business can prosper when confined to the privacy regimes of US states.

Yet if the prospect of operating in a siloed regulatory environment is fazing Hoan Ton-That, Clearview’s Australian founder isn’t letting on. In his view, the technology, which has been embraced by law-enforcement agencies, US retail chains, casinos, the National Basketball Association and even actor-turned-venture capitalist Ashton Kutcher, has a bright future.

“Startups, particularly in the tech space, have long faced regulatory and legal challenges,” Ton-That told MLex. “Airbnb, Uber, PayPal and other iconic innovative startups had similar challenges early on. Our business model is strong, our technology is effective and uniquely bias-free and accurate.”

Ton-That’s hope appears to be that Clearview will follow the blueprint of other tech companies that are now household names: once the case for the technology becomes compelling enough, and the number of hardened criminals it helps arrest grows high enough, no regulator will want to shut it down.

This strategy goes some way toward explaining Clearview’s recent efforts to gain public acceptance by stressing the crime-busting nature of its technology and arguing that it can be a force for good.

This week, a Clearview spokeswoman confirmed that the company had provided facial-recognition technology to the Ukrainian armed forces to help identify soldiers killed in action, to vet people at checkpoints and for other defense-related uses. The offer to Kyiv was made personally by Ton-That.

Law-enforcement pivot

Despite the legal and regulatory challenges it faces in Europe, Australia and the US, Clearview’s pivot to providing services exclusively to law-enforcement and national-security agencies may yet give it enough legal cover to secure a lucrative future.

That, at least, is the scenario envisioned by Ton-That, who said the company had cancelled database access granted to retailers Macy’s and Home Depot, as well as to Ashton Kutcher and other individuals with a penchant for facial recognition.

Clearview’s legal strategy against the three lawsuits — one of which is playing out in Chicago, Illinois, under one of the strongest privacy laws in the US — centers on the claim that law-enforcement agencies are exempt from the rules.

The Illinois suits are based on alleged violations of the state’s Biometric Information Privacy Act, or BIPA — the same law whose hefty statutory damages forced Facebook to pay $650 million and TikTok $92 million to settle.

Falling foul of BIPA would expose Clearview to the risk of substantial damages, as well as business-practice changes in Illinois. Combined with the lawsuit in the northeastern state of Vermont, that could leave the image-scraping company facing damages claims worth hundreds of millions of dollars — well in excess of the 20 million-euro ($23 million) fine imposed by Italy’s privacy regulator.

But Clearview’s filing in a US District Court in Chicago Monday noted that BIPA exempts entities working as the contractor or agent of a government agency. And given that Clearview no longer provides services to retailers, casinos and sports associations, the tech company argues that the exemption is all it needs.

"All of the facial vectors that are currently used by the Clearview app were created at a time when Clearview acted solely as an agent of governmental entities," the company said in that court filing. "Clearview's licensed users/customers can use Clearview's product only for legitimate law-enforcement and investigative purposes."

What’s more, Ton-That believes that the undeniable success of his database of more than 10 billion images in helping US law enforcement catch insurrectionists and pedophiles is dampening privacy concerns, despite litigation and regulatory challenges.

It’s unclear whether the US judge hearing the case in Chicago will accept Clearview’s arguments. But that defense now appears unlikely to gain traction in the Vermont case, where the state’s attorney general is suing Clearview over its database of images scraped from social media. The Vermont judge said Clearview wasn’t covered by Section 230 of the federal Communications Decency Act, which protects interactive online platforms from liability for third-party content.

But if Clearview were to overcome the US-based legal challenges, the prospect of fines similar to those announced in Italy and the UK may force the tech player to confine its operations to the US market — something Ton-That appears ready to accept.

“We do no business in countries where fines have been proposed,” he told MLex by email, adding that the penalties “lack jurisdiction and lack due process.” “Almost every privacy law worldwide supports exemptions for government, law enforcement and national security,” he said.

‘Solving heinous crimes’

The divide between the regulatory challenges facing Clearview’s US operations and those confronting it in other jurisdictions is highlighted by the company’s setbacks in Australia and Canada, where there is no parallel to the US government-entity exemption.

In a June 2021 ruling, the Office of the Privacy Commissioner of Canada concluded that both the country’s federal police force and Clearview itself had violated privacy law when officers used the tool to conduct searches.

The ruling was followed by legally binding orders from the provinces of Alberta, British Columbia and Québec forcing Clearview to stop collecting, using and disclosing images of people in those provinces and to delete images and biometric facial arrays collected without consent.

The Privacy Commissioner also ordered Clearview to stop providing its services in the country — a ruling that, by then, had become academic, because the tech company had already withdrawn from the Canadian market.

Meanwhile, a joint probe by Australia’s privacy regulator and the UK Information Commissioner’s Office led both regulators to the same conclusion: Clearview had breached their privacy laws. In a decision echoing that of the Canadian privacy watchdog, the Office of the Australian Information Commissioner, or OAIC, concluded that the country’s federal police force had also violated privacy legislation.

The Australian Federal Police accepted the ruling, but noted that the fight against child exploitation involved offenders using “sophisticated and continuously evolving operation methods to avoid detection” and, therefore, online tools needed to be part of the force’s response.

Clearview has since appealed the OAIC’s ruling in Australia’s Administrative Appeals Tribunal, with Ton-That, a dual citizen of Australia and the US, saying that his company had acted in the best interests of these two countries and their people by “assisting law enforcement in solving heinous crimes against children, seniors and other victims of unscrupulous acts.”

“I respect the time and effort that the Australian officials spent evaluating aspects of the technology I built,” he said in a statement to MLex. “But I am disheartened by the misinterpretation of its value to society.”

Similar concerns have been raised in New Zealand, where the national police force also undertook a trial of Clearview technology — a decision that eventually prompted an apology from police over the force’s failure to consult then-Privacy Commissioner John Edwards.

Three months before Australia and the UK announced their joint investigation into Clearview, Edwards said that “the extent to which any such technology would be fit for purpose in New Zealand [was] unknown” but he would have expected to have been informed of the trial.

New Zealand Police discontinued the trial and ordered a “stock take” of police use of surveillance technology. The six-month review began in April 2021.

A report published in December last year made 10 recommendations, which the country’s police force immediately adopted. At the top of the force’s response was a pledge not to deploy live facial-recognition technology.

‘Overly invasive’

In Europe, Clearview’s failure to comply with both national and EU privacy requirements looks set to saddle the company with significant penalties.

In the UK, the joint investigation with Australia culminated in the November 2021 announcement that the Information Commissioner would request a fine of more than 17 million pounds ($22 million today) and would ban Clearview from processing UK citizens’ data, as part of a provisional enforcement action.

This followed a warning by former UK Information Commissioner Elizabeth Denham that the rapid spread of live facial recognition, which can be "overly invasive" in people's "lawful daily lives," could damage trust both in the technology and in the "policing by consent" model.

In the EU, the regulatory obstacles facing any company attempting to profit from scraping biometric data from the Internet are even starker under the provisions of the General Data Protection Regulation, or GDPR, which have placed facial-recognition tools under scrutiny.

Biometric data, including data generated by facial-recognition software, are considered a special category of personal data because they make it possible to uniquely identify a person. The GDPR prohibits the processing of such data unless a narrow exemption applies, such as explicit consent, a legal obligation or a substantial public interest.

A dedicated framework on artificial intelligence is currently being negotiated: the European Commission’s proposed AI Act would restrict the use of biometric tools in the Union.

However, data-protection authorities across the bloc aren’t waiting for the law to pass. Clearview is already facing probes in Greece, Austria and France following complaints filed in those countries in 2021 by a coalition of NGOs including Privacy International and NOYB.

The Greek privacy watchdog began looking into potential breaches of the EU’s privacy rules last May, but can’t yet say when the probe will be finalized.

New developments in France are imminent too, as the country’s data protection authority in February gave Clearview two months to respond to questions about its use of biometric data without a legal basis. The regulator also ordered the company to stop collecting and using photographs and videos of people in France and told Clearview that it must help people exercise their right to have their data erased.

Meanwhile, in 2019 the Swedish data-protection authority fined a school that tracked the attendance of a small group of students by comparing images, concluding the institution had violated GDPR provisions — a decision suggesting a tough stance on the misuse of biometric data. And last May, the data watchdog of the German state of Baden-Württemberg began investigating PimEyes, a facial recognition search engine, for its processing of biometric data.

Italy, however, is the first EU country to have probed Clearview’s practices and hit the company with a fine.

The probe that culminated in the penalty began with a handful of complaints, lodged with Italy’s privacy watchdog between February and July 2021. Although names were scrubbed from the Italian-language documents published last week, the Garante per la protezione dei dati personali, or GPDP, revealed that four individuals and two data-privacy advocacy groups had been behind the complaints.

In March 2021, Clearview responded to the GPDP’s initial inquiries, saying the Italian and European Union privacy rules didn’t apply to the complainants’ concerns and that, as a result, the GPDP had no role to play in the matter. Clearview said it was certain it had no case to answer because it had put technical measures in place to ensure that no Italian IP addresses could log on to its platform — a policy it applies throughout the European Union.

The technology company also argued that it couldn’t be seen as tracing or monitoring the Italian complainants because it simply offered a snapshot in time, as would be the case with Google Search. What’s more, Clearview held no list of Italian clients and had no business interests in the country.

“Clearview’s only goal is to offer a search engine to allow for the search of Internet images on the part of its clients” and the facial vectors contained in its database can’t be used to link an image to other personal data, Ton-That said.

The San Francisco-based founder of the company also said he was prepared to accept regulation — provided it is firmly grounded in Clearview’s role as a search engine of facial images. What’s important is that the regulation “makes sense ... as this new technology finds its place in the crime-fighting universe,” Ton-That said.
