In high-stakes AI regulation race, UK stresses innovation over EU’s prescription
29 March 2023 14:11 by Sam Clark
The UK’s ambitions to grow its flourishing artificial intelligence industry and promote innovation are set out clearly in a long-anticipated policy paper, published today.
The approach is unsurprising. Since Brexit, government ministers have consistently set out their stall as promoters of a light-touch approach, diverging from the EU’s more prescriptive methods, particularly in the area of digital regulation.
The UK approach sets out five principles for AI, which different regulators will apply in their various domains. The idea is that this will create a “tailored” and flexible regime, setting companies free to innovate under rules set by expert regulators who know their industry, all while adhering to high-level principles that keep people safe.
“Instead of creating cumbersome rules applying to all AI technologies, our framework ensures that regulatory measures are proportionate to context and outcomes, by focusing on the use of AI rather than the technology itself,” the paper says.
By contrast, the EU approach — not yet finished but approaching a crucial legislative stage — looks set to oblige providers of high-risk AI technology to obtain a licence for their products before putting them on the market.
The principles set out by the UK, such as fairness and contestability, are similar to those contained in the EU's draft AI Act. Both draw on the OECD's principles for the technology, and experts say they will be familiar to anyone versed in AI ethics; they broadly agree the principles are sensible and unsurprising.
Where the real difference lies is in the way those principles will be enforced. This is cited as a possible major sticking point, and has also been the subject of criticism. Darren Jones, a UK lawmaker for the opposition Labour party, and head of Parliament’s business committee, tweeted today: “No new regulation, no new regulators, no new money … We’ve been waiting all this time for this?”
The government has promoted its plan of giving an AI remit to multiple regulators as meaning that each will be able to create a tailored regime for the areas they regulate.
But the examples it used in its own press release — the health and safety, equalities and human rights, and competition watchdogs — expose a flaw in this plan: each of those regulators operates across all sectors.
Add in the UK's data protection authority, the Information Commissioner’s Office, plus sector-specific enforcers such as the Financial Conduct Authority, and companies looking to use AI could find themselves dealing with five different regulators, each with different rules.
Sarah Pearce, a partner at law firm Hunton Andrews Kurth, said this was “one of my biggest concerns and criticisms of what’s been released so far. It sounds great in theory, but practically, how’s that going to work out?”
Regulators will be forced to work together to address this problem. There is already a model for this type of engagement, the Digital Regulation Cooperation Forum, which is often praised as a smart approach to a complicated problem. But the plans announced today are on a grander scale, and regulators won't have any statutory footing for AI specifically.
Enforcement will be based on existing laws, such as human rights, product safety or data protection legislation. This will require more cash, according to Michael Birtwistle, a director at the Ada Lovelace Institute, a data and AI research body. The UK will “struggle to effectively regulate different uses of AI across sectors without substantial investment in its existing regulators,” he said.
“It’s important for the UK’s global AI leadership that its regulatory ecosystem is credibly resourced and empowered, and at the moment there’s a lot more that could be done to ensure that.”
Wait and see
The UK government said today that, as well as a consultation on its approach and practical guidance, it could introduce legislation to “ensure regulators consider the principles consistently,” if parliamentary time allows.
This tentative attitude towards introducing legislation is a hallmark of UK lawmaking in many different areas, experts said, particularly under the ruling Conservative government, which has been in power since 2010. The preferred approach is to hold off on legislating, see how different attempts work in different jurisdictions, and go from there.
It’s particularly appropriate in the fast-moving area of AI regulation, as this allows rules to be updated quickly and mapped on to new technologies.
The tradeoff is between certainty and flexibility, said Tom Sharpe, a director at law firm Osborne Clarke. “The risk with the UK's light-touch approach is that it is suitably fast-moving and flexible, but creates a complicated regulatory patchwork full of holes,” he said.
“However, given how the EU's AI Act is tying itself up in knots over definitions of 'high risk' [and] what to do about generative AI, a light-touch, sector-focused approach is starting to feel like it might be better.”
In a story repeated in other legislative areas, however — notably data protection — the EU’s larger market and stricter approach may mean that its requirements become the de facto standard.
Companies seeking to get licensed in the EU would have to get their AI products up to a standard that would probably also mean they are fine to use in the UK. The practicalities of this are likely to be complicated, Pearce said: “There’s no hiding it, it’s going to be difficult.” Given the similar principles in both jurisdictions, though, it should be possible, she said.
The competing regulatory visions are a proxy for a wider battle for AI dominance. There is a common view, laid out most recently by senior UK government scientist Patrick Vallance in a major report on pro-innovation regulation, that there is a short window for the UK to get this right, or risk being left behind.
An example of stalled decision-making is in copyright, where the government has flip-flopped for several years about the extent to which AI developers should be able to use copyrighted materials via text and data “mining.”
Toby Bond, an intellectual property lawyer at Bird & Bird, described this as "one of the most fundamental questions on generative AI," the type of AI used in tools such as OpenAI's ChatGPT, which is already disrupting the hugely valuable online search market.
“To me it seems like potentially quite a significant thing if the UK can find a way to create a permissive environment for those systems that balances the rights of developers and rights holders," he said. "It would then go hand-in-hand with the broader regulatory framework.”
The UK may want to take a hands-off approach, but the regulatory gaps that creates could leave it lagging behind in the race to create an attractive environment for what is likely to be a hugely important technology.
And if the UK continues to dither, as it has done on copyright, or sees initiatives delayed by political chaos, as has happened with the Online Safety Bill and the Data Protection and Digital Information Bill, then it might not be in the race at all.