Joint UK-Australian probe into Clearview AI can spur on global privacy enforcement

17 Jul 2020 1:23 pm by Vesela Gladicheva, Laurel Henning


The opening of a joint UK-Australian probe into controversial US facial-recognition company Clearview AI underscores data watchdogs' growing interest in cooperating to tackle global threats to people's privacy, even where the technology isn't yet on the market. The probe will be a test case for the regulators' stated aim of sharing resources and information and speeding up investigations.

The Office of the Australian Information Commissioner and the UK Information Commissioner’s Office will check Clearview’s compliance with the Australian Privacy Act 1988 and the UK Data Protection Act 2018, which implements the EU’s strict General Data Protection Regulation.

The company, based in New York, has said its image-search technology is currently unavailable in EU countries — including the UK — but that it does process data-access and data-deletion requests from people in the bloc. Its technology is also unavailable in Australia, it said.

But concern among EU regulators has risen in recent months about facial-recognition technology, and specifically the apparent illegality of Clearview’s technology and its use by private businesses as well as law enforcement. The company faces litigation in the US, where more than 600 law-enforcement agencies have been using its software.

Why now?

The UK-Australian joint move follows a pledge last October by members of the Global Privacy Assembly — which groups around 180 data regulators, led by the UK's ICO — to work toward "more effective cooperation in cross-border investigation and enforcement in appropriate cases".

Its specific starting point was in Australia, where the OAIC made preliminary inquiries in January in the wake of reports that Clearview had clients all over the world.

Clearview initially denied trialing the software outside North America, but a list leaked by news website BuzzFeed appeared to show that its clients had included, at one point at least, dozens of employees at Australian law-enforcement agencies, including the Australian Federal Police, or AFP, and state-level agencies.

In the UK, BuzzFeed also reported in February that Clearview’s facial-recognition technology had users registered at London's Metropolitan Police, Surrey Police, the National Crime Agency and Standard Chartered Bank, among others.

Earlier this year, the OAIC signed a memorandum of understanding with the ICO over sharing experience, expertise, ways of working — and, importantly, cooperation on specific projects and investigations. The agreement added a new avenue to the cooperation the UK and Australia are already accustomed to as members of the Five Eyes intelligence alliance, along with the US, Canada and New Zealand.

Clearview’s push for rapid international expansion with a controversial data-processing practice not only rang alarm bells for both regulators, but also presented itself as the perfect test case for their formalized cooperation.

Throw into the mix an ongoing investigation in Canada, and soon the ingredients for a joint probe came together. The Canadian probe is currently separate, but investigators are likely to coordinate it with the OAIC and ICO under existing formal sharing arrangements.

Clearview is being investigated by Canada's national privacy authority, as well as in the provinces of Alberta, British Columbia and Quebec. The enforcers still plan to issue their findings, despite Clearview's decision earlier this month to withdraw from Canada.

The probe, which followed reports that Clearview technology was used to collect images and make facial recognition available for law-enforcement investigations, focuses on halting the collection of personal information of Canadian citizens and deleting data already collected.

Compliance, penalties

The joint investigation underlines growing concerns around the use of biometric data and developments in facial-recognition technology. It should provide clarity for AI-focused businesses and others on the legality of data scraping and facial-recognition technology.

In the EU, organizations deploying facial recognition in public spaces must comply with the GDPR, under which sensitive biometric data need extra safeguards, and data processing must be shown to be "strictly necessary" for the intended purpose.

Other likely lines of investigation include Clearview’s collection of facial images without consent, its accountability processes, transparency, algorithmic bias and, more generally, its approach to data ethics.

If found in breach of the UK law derived from the GDPR, Clearview would be exposed to fines of up to 17 million pounds ($21.5 million) or 4 percent of its global turnover, whichever is higher.
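As a minimal sketch of how that "whichever is higher" ceiling works (the figures are from the article; the function name and sample turnover are illustrative only):

```python
def max_fine_gbp(global_turnover_gbp: float, fixed_cap_gbp: float = 17_000_000) -> float:
    """Upper bound of a GDPR-derived fine: the higher of a fixed sum
    and 4 percent of global annual turnover."""
    return max(fixed_cap_gbp, 0.04 * global_turnover_gbp)

# For a company turning over 100 million pounds, 4 percent is 4 million,
# below the fixed cap, so the ceiling stays at 17 million pounds.
print(max_fine_gbp(100_000_000))   # 17000000

# For a 1-billion-pound turnover, 4 percent (40 million) exceeds the cap.
print(max_fine_gbp(1_000_000_000)) # 40000000.0
```

The turnover-linked prong is what makes the cap bite for large multinationals rather than the fixed sum.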

In Australia, the maximum sanction would be A$2.1 million ($1.5 million). Australian penalties for privacy breaches are expected to be raised to A$10 million in an upcoming review of the country's Privacy Act.

Motivation

The reason for Australia and the UK mounting a joint investigation lies not just in their regulators’ recent commitment to collaborate, but also in their urge to avert another global data-privacy crisis.

Think of the Facebook-Cambridge Analytica scandal, which the ICO and OAIC investigated separately.

For Australian enforcers, this has become a quagmire. It has led to legal action against Facebook at the Federal Court of Australia, which in initial hearings has become bogged down in a dispute over whether Facebook Ireland or its parent company, Facebook Inc., should be the target.

Establishing a united front now might mean a firmer response to the issue further down the track. As the OAIC states in its regulatory action policy: "In dealing with an interference with privacy or potential privacy risk that operates across national boundaries, there can be a practical and resource advantage in liaising with other privacy regulators to avoid duplication, share information and coordinate the release of investigation findings."

For the ICO, meanwhile, its move to open an investigation into a nascent technology is perhaps an attempt to spur a national debate, something a senior ICO official said was sorely needed as controversial new technologies emerge.

The UK's withdrawal from the EU provides another backdrop for the ICO to team up with its Australian counterpart. UK regulators have already embarked on a search for their place in a post-Brexit world, and the ICO is certainly looking beyond collaborations with EU privacy regulators.

The Clearview investigation is a plain signal from the ICO that more international enforcement cooperation is on the horizon. The watchdog wants to be seen as relevant and influential, suggesting that multinational companies and startups wishing to expand globally should expect regulatory scrutiny and investigations not only across the EU but also in the UK, potentially in tandem with other non-EU regulators.

Australian impetus

With landmark cases now before Australia’s Federal Court against both Facebook and Google, the country’s regulators are making their voices heard at an international level when it comes to holding technology companies to account. That will no doubt help the OAIC, which has been seen as under-resourced in the past, to make its case for more funding in the upcoming Privacy Act review.

The regulator is also pushing for tougher enforcement. In a recent submission to the Australian Human Rights Commission, the OAIC made specific reference to the increased use of AI technologies, and said it had a keen focus on the use of biometric information and facial-recognition technology.

This is already regulated by the Privacy Act, with biometric information classified as sensitive information and afforded a higher level of privacy protection than other personal information. But the upcoming review could add a right to erasure of personal information and higher levels of accountability for companies as digital technology develops, the OAIC said.

UK focus

Organizations’ use of artificial intelligence and facial recognition is a regulatory priority for the ICO, partly because of the potential for harm to individuals and for enabling mass surveillance. The joint probe fits well with ongoing ICO investigations into facial recognition, which might suggest the regulator is keen to intervene before the technology has entered the market.

The ICO is investigating a case of live facial-recognition technology being used in London, and it has previously probed the Metropolitan Police. It has also intervened in a high-profile court case in Wales about the legality of live facial recognition by police. At an appeal hearing last month, it argued there was no clear legal framework for police use of the technology.

The ICO has repeatedly called on the government to introduce a statutory and binding code of practice for live facial recognition used by police forces.

Senior ICO officials have previously drawn a distinction between tech giants acting responsibly in regard to new technologies such as facial recognition, and other companies paying little heed to the implications for individuals and privacy.

The regulator's technology policy chief, Simon McDougall, said that large companies such as Microsoft and Intel are “good actors,” because they invest in thinking about the impact of technology. “There are many other organizations out there, less sophisticated, not thinking at all,” he said.

The ICO worries that Clearview may well fall into the latter category, and there's little time to waste in finding out.

With additional reporting by Mike Swift in San Francisco
