EU plan for AI regulatory framework unveiled to chorus of concerns

21 April 2021 00:00


Makers and users of "high-risk" artificial intelligence tools, such as facial recognition, are at the center of the European Commission proposal unveiled today for a strict regulatory framework to govern the use of AI in the EU and prevent potentially harmful applications.

The rules were laid out in policy documents leaked last week and quickly stirred controversy. Within days, a group of 40 cross-party EU lawmakers called on the commission to toughen its plan by explicitly banning the use of remote biometric identification technologies in public spaces.

They also strongly protested a proposed exemption that would allow law-enforcement authorities, and private operators acting on their behalf, to use AI for surveillance purposes in order to safeguard public security.

The commission’s proposed legal framework is, at its core, a way to regulate what it calls high-risk AI systems — in particular biometric identification tools in publicly accessible spaces — and those intended to be used in operating essential public infrastructure, such as the supply of water, gas and electricity.

The EU executive has pledged a human-centric approach to regulation that adheres to EU values, signaling a different perspective from countries such as China and the US. This means that the use of high-risk AI tools within the bloc will be subject to requirements such as transparency about datasets and insight into algorithms.

Even AI systems likely to pose a limited or minimal risk — the vast majority — face transparency obligations. For example, companies using chatbots will have to make users aware they are interacting with a machine.

— Scope —

The commission ranks AI applications from low to high risk, but its list is not exhaustive. To ensure the framework can adapt to potential harms as they evolve, the list can change over time following a commission evaluation and with the help of a European Artificial Intelligence Board that is to be created.

But this already seems to be a sticking point in discussions with EU governments and the European Parliament.

For example, lawmaker Alexandra Geese, a German member of the Greens, calls for a clear red line to outlaw “all systems that involve the recognition of gender, sexual orientation or characteristics such as ethnicity, state of health or disability,” as she fears they will conflict with individuals' fundamental rights.

She is backed by civil society groups, including Access Now and Privacy International, that also wrote to the commission to ask for red lines for AI applications that threaten people's rights, including tools that impede fair access to justice and procedural rights.

Makers and users of such AI systems will watch the debate closely given the high stakes: for high-risk applications, the rules would penalize violations with fines of up to 20 million euros or 4 percent of turnover, along the same lines as the EU’s General Data Protection Regulation.

— Innovation —

With the package presented today largely focused on developers of high-risk AI tools, there has also been criticism of a lack of attention to the much larger group working on less sensitive applications, such as tools for language optimization or logistics efficiency.

These developers had anticipated some leeway to help them innovate and compete with other regions of the world, but trade organizations, including Digital Europe, have expressed disappointment at the lack of such support.

“Even in high-risk areas, we want to make sure we are not shutting down innovation with the number of regulatory hoops. Here, the legislation should set out requirements that are clear and flexible, and we should focus on industry-driven standards,” said Digital Europe’s director-general Cecilia Bonefeld-Dahl.

Telecom association ETNO said the EU “needs to step up if it is to realize its AI vision,” to ensure that its vision of trustworthy AI does not hamper support for investment in AI.

Omer Tene, vice-president at the International Association of Privacy Professionals, is worried the draft regulation establishes complex mechanisms for oversight and review, but “remains vague with respect to basic principles such as algorithmic harms.”

“The idea of competing with AI leaders such as the US and China through regulatory innovation is untested. To be sure, the EU must protect fundamental rights; but it should also ensure competitiveness of EU innovators in the global market,” he added.

Businesses might have to look to national governments for funding support, as those governments will be tasked with drawing up investment plans under a revised Coordinated Action Plan on AI, to be produced jointly by the commission and member states.

They may also need to lower their expectations, as the first version of this voluntary plan, published late in 2018, has suffered from a lack of implementation, with at least six countries failing even to draw up national plans.
