New AI rules in Europe threaten yet more work for stretched data regulators

17 February 2021 14:16


The growth of artificial intelligence in a wide range of business applications is likely to place another burden on European data protection authorities already struggling to enforce the EU's General Data Protection Regulation.

Data protection is a major concern in the rise of AI, as complex and opaque algorithms process users’ data in ways that could breach their privacy or lead to discrimination, while remaining undetected by regulators or even the company operating the system.

The European Commission’s AI framework, due out on April 21, is therefore very likely to give national data protection authorities a significant enforcement role, alongside telecom and antitrust regulators, a role already envisaged in the Commission’s “white paper” on the subject published last year.

That policy paper also expressed concerns about the supervisory authorities’ means to enforce existing regulations on AI. With the likely introduction of new regulations, it’s hard to see how data protection authorities will cope unless they’re given a significant boost in funding and staffing.

Some regulators have begun to offer advice to companies on certain aspects of AI: For example, the UK Information Commissioner’s Office recently issued guidance to companies on the use of algorithms in hiring decisions. But guidance is only one side of the coin. The other — enforcement — is considerably more labor-intensive for regulators.

Furthermore, given that the regulators’ budgets are set by national governments rather than the commission, there is the risk that some countries’ regulators will be able to enforce the EU regulation more thoroughly than others, undermining the single market — a concern that already exists for the enforcement of the GDPR.

Under-resourced

The European Data Protection Board — the umbrella organization of national data watchdogs — said in a recent evaluation of the GDPR that most privacy watchdogs considered their resources to be insufficient from a human, financial and technical point of view. A majority of those authorities have voiced similar concerns in their respective annual reports.

The pressing need for regulation was underscored by a scandal in the Netherlands that prompted the government to resign last month. The national tax authority had wrongly accused 26,000 parents of making fraudulent childcare benefit claims between 2013 and 2019; the errors stemmed from biased and discriminatory AI systems.

To prevent similar things happening again, the Dutch data protection authority will need to play an active role. But a report last November found that it is already struggling to handle its existing duties, and that its budget must be more than tripled by 2025 to keep on top of GDPR enforcement and new technologies.

The authority itself warned earlier this month that it cannot properly police “new tasks” such as the use of algorithms, and called for more funding and better coordination with other regulators to keep on top of the growth of these new technologies.

A solution that has been proposed by some, including EU lawmakers, is the establishment of a new EU-wide AI oversight body to help national regulators adapt to upcoming rules.

This might take the burden off the data protection authorities, but it seems that national governments will also need to give their data watchdogs more resources if they are to have any hope of getting to grips with AI.
