Fragmented AI oversight looms in EU as governments plan their own regulators

01 February 2022 14:34 by Cynthia Kroet

AI

No agreement on the final shape of the EU’s Artificial Intelligence Act is yet in sight, but businesses are starting to get an idea of how enforcement might look — and diverging approaches may give some cause for concern.

National data protection authorities have argued that they are best placed to police the new rules, but some governments are starting to set up designated AI regulators, which could eventually make coordinated oversight across the EU more difficult.

The European Commission’s AI Act, presented last April, aims to regulate what it calls high-risk applications such as biometric systems, which face tight market entry rules. But in terms of oversight, the draft proposal does not spell out which regulator will be tasked with its enforcement; the text only says that national governments will need to have a designated authority in place.

It has been widely expected that privacy watchdogs — who are already policing companies’ compliance with the General Data Protection Regulation — will be in charge of overseeing these rules too.

For example, Hamburg’s privacy commissioner, Thomas Fuchs, told MLex late last year that “DPAs are the obvious parties to oversee the privacy aspects of AI”.

But the new duties under the AI Act will go beyond the current capabilities of the privacy regulators. AI oversight will also mean investigating health-and-safety and fundamental-rights risks, as well as imposing sanctions on companies, including removing harmful systems from the market.

Concerns

In its opinion published in June, the European Data Protection Board, or EDPB, the umbrella body of privacy watchdogs, said data protection authorities should be designated as national supervisory authorities on AI, as it would “ensure a more-harmonized regulatory approach.”

They are “already enforcing the GDPR and the law-enforcement directive on AI systems involving personal data,” the board's opinion said, adding that “a smooth application of this new regulation” would mean DPAs getting sole enforcement powers for the AI law.

MLex reported recently that some watchdogs are indeed already using the GDPR to regulate fast-developing AI technology, including facial-recognition applications, rather than waiting for the dedicated EU law to be passed (see MLex comment here).

For their new tasks, however, DPAs would need significantly bigger budgets, as they are already struggling with heavy workloads and a backlog of GDPR complaints (see MLex comment here).

Concerns about this came from Norway, where a senior official worried that data protection regulators aren't always best placed to provide oversight. Kari Laumann, head of the Norwegian authority’s AI regulatory sandbox, warned that they don’t tend to have the skills to check systems that deal with non-personal data.

AI board

In its plans, the commission also calls for the creation of a European Artificial Intelligence Board, to be chaired by the EU executive and including representatives from each supervisory authority and the EDPB. The board, which would be able to issue opinions and guidance on how to interpret the regulation, would resemble the structure of the EDPB and might interfere with the EDPB's decision-making process.

In addition, the commission’s plan to give authorities powers to impose penalties of up to 30 million euros ($34 million) or 6 percent of global annual corporate turnover on companies that violate the AI rules might complicate the work of DPAs even more.

Some are currently struggling with a backlog of data protection cases, notably Ireland, which is home to the EU headquarters of most Big Tech companies. Since the privacy rules entered into force, the country has received repeated criticism for weak GDPR enforcement against those companies, as well as for its limited budget and staff resources.

One solution might be along the lines of a plan by the Spanish government, which said in its general spending plans published last month that it would create an Artificial Intelligence Supervision Agency with a budget of 5 million euros to work on the potential risks caused by algorithms.

Similarly, the Dutch government will set up a new algorithm watchdog as a department of the country’s privacy regulator, the Autoriteit Persoonsgegevens. The new regulator will be tasked with checking for transparency, discrimination and bias, and will have a budget of 3.6 million euros by 2026.

It remains to be seen how the AI oversight discussion will develop across the bloc, as negotiations on the AI Act started in the European Parliament only last month. But given lawmakers' concerns about the lack of significant GDPR enforcement, they might well envisage dedicated regulators to ensure the rules are comprehensively enforced from the off.
