EU will struggle to keep AI rules up to date without falling prey to fads
17 March 2023 09:28 by Nicholas Wallace
EU policymakers’ recent focus on the ChatGPT program when discussing the Artificial Intelligence Act lays bare a dilemma: How to keep tech regulation up to date without legislating for every new fad?
The forthcoming law will require providers of AI systems intended for “high-risk” uses — to be listed in the law’s annexes — to obtain certification before putting them on the market. That raises the obvious question of what does and doesn’t pose a high risk, and disagreements on that have slowed talks in the European Parliament.
But another hard question is how to treat “general-purpose” AI systems that could have a multitude of uses: some anodyne, some risky, many not yet even conceived.
The recent popularity of ChatGPT presents an early example of why this question matters. Developed by US tech lab OpenAI, it is a chatbot trained on vast amounts of text harvested from the web, which it uses to generate detailed answers to complex questions — with varying degrees of accuracy. The technology has myriad potential uses; obvious starting points range from automating customer service to writing computer code.
But the information that ChatGPT hoovers up to develop its model and answer questions includes personal data and intellectual property, both used without permission. And it’s unclear how keystone EU data protection rights — such as the right to have personal data erased or inaccurate information corrected — could be exercised in the case of a system such as ChatGPT.
The focus on ChatGPT specifically, though — as opposed to the broader questions about general-purpose systems or data protection that it illustrates — suggests there is a danger that politicians might be tempted to rewrite AI laws every time a new advance generates a lot of hype.
ChatGPT has featured prominently in recent public discussions of the AI Act. It has prompted commentators such as Politico, for example, to claim it “broke the EU plan to regulate AI” because it “can serve both the benign and the malignant.” But EU member states were already debating how the AI Act should treat such systems before ChatGPT was even released.
Speaking at a European Parliament event on March 7, the prominent British-American computer scientist and AI researcher Stuart J. Russell summarized the danger, warning that it would be “a bad idea to keep adapting legislation to the latest fad, because that means it will be outdated immediately”.
'Already dealt with'
The EU’s industry commissioner, Thierry Breton, responded coolly to a question about ChatGPT in a parliamentary hearing on March 2. “We’re not going to regulate this, because it’s a specific application. Tomorrow there’ll be a different one. Then a third one, which will do something else,” he said.
Breton argued that proper enforcement of existing laws should be sufficient to deal with the concerns about ChatGPT. “What’s important to know is what is being done with the data. How is the data being obtained? Is it being stolen or is it being obtained with your consent? Are they scanning in our e-mail inboxes to see what you’ve looked at to then provide an answer to a question that you’ve asked?” he said.
“All of this is already dealt with in our regulation. So, I would reassure you here: We’re not going to come up with a new regulatory proposal. We’re just going to apply what has already been implemented,” Breton said, adding that he hoped he could “count on the European Parliament to speed up” its deliberations on the AI Act. The goal, as he put it, is “to be ahead of the game.”
Yet the commission’s legal proposal for the AI Act does not address general-purpose AI at all. It regulates systems on the basis of the “intended purpose” specified by the system provider “in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation.”
The questions Breton raised about the acquisition and use of data seem to pertain to the General Data Protection Regulation, rather than the AI Act.
Kicking the can down the road?
National governments, represented in the Council of the EU, finalized their position on the AI Act on Dec. 6 last year, just six days after the ChatGPT prototype came online.
The council decided the AI Act should allow the commission to unilaterally create legally binding rules, called an “implementing act,” for general-purpose AI systems.
But the European Parliament’s Dragoș Tudorache called that “[kicking] the can down the road” and said members would revise the draft regulation to make it “futureproof”. Negotiators for the parliament’s political groups are close to agreeing a common position, with the talks led by Tudorache, a Romanian liberal, and Brando Benifei, an Italian socialist.
They are discussing a dedicated regime for general-purpose AI systems, with separate obligations for the provider of such a system and a company that incorporates one into a product or service intended for high-risk uses.
The company deploying the system for a high-risk use would be obliged to obtain certification in much the same way as companies using systems dedicated to high-risk uses, while the provider would be obliged to agree to certain clauses in its contract with that company, such as providing particular information about the system.
Providers of general-purpose AI systems, regardless of how they end up being used, would also be subject to some basic obligations, such as documenting systems’ behavior and assessing the risk of misuse.
Meanwhile, the European Commission would monitor the list of uses considered high-risk and have some powers to amend it, but legislators are still negotiating how much of a say other EU institutions should have in such amendments.
In any case, it’s clear that both the parliament and governments found the commission’s neglect of general-purpose AI unsatisfactory, and that they will agree a provision to regulate it.
What’s not clear is whether that will keep EU law ahead of the game in the way that Breton hopes.
Both sets of legislators are “kicking the can down the road” to some degree by adding provisions for the commission to amend the law in response to future developments, and it may well be prudent of them to do so.
But politicians, including those high up in the EU's executive, experience different pressures and impulses over time and respond to them, for better or worse. New uses of AI will inevitably emerge and prompt concerns that may well require changes to the law. There will also be fads that would fade if left alone. Telling the difference, though, will require cool heads.