Europe's effort to enact ethical AI rules could lay groundwork for global standard

12 August 2019

Look inside any big tech company dabbling in artificial intelligence and chances are it already has an ethics guidelines document sitting somewhere on a company server.

The question looming before all manner of industries — auto manufacturers, device makers, retailers and others whose worlds artificial intelligence is poised to transform permanently — is whether today’s proliferating set of voluntary, non-binding ethics rules will harden into cold, hard law.

Certainly that’s the prospect for European businesses, which are likely to become the first in the world governed by artificial intelligence regulations: Incoming European Commission President Ursula von der Leyen has pledged to unveil proposed laws for the ethical use of artificial intelligence within her first 100 days in office.

Those keeping track should look for legislation no later than February. And not just those doing business in Paris, Frankfurt, Milan or other European mainstays: Much as Europe’s General Data Protection Regulation affected the privacy discussion across the globe, von der Leyen’s proposals could shape the AI ethics discussion worldwide.

Early signs show European policymakers have global ambitions, with the Group of Seven industrialized countries set to discuss artificial intelligence as one of five main objectives for their meeting in France in late August.

Of course, it’s too early to say whether the aim of a global approach to AI ethics is possible, much less whether it’ll come to pass.

Today’s AI ethics landscape is already divided among national governments, international organizations and individual companies spanning numerous economic sectors. Still, the faster the EU drafts its rules, the more powerful it could be in setting the scene.

Europe versus the world

Europe’s knack for setting the global pace on technology regulation is a testament to the economic heft of the EU's 28 member nations (27, excluding the United Kingdom) — and the difficulty of building bespoke technical infrastructure. Big firms with global reach tend to build out their technology to accommodate the most onerous set of regulations.

That’s mostly what has happened with the General Data Protection Regulation, said Jacob Metcalf, a researcher with Data & Society, a New York-based think tank.

“It’s highly inefficient for a company to have a completely different infrastructure for the US and EU, even if their compliance efforts are different,” he told MLex. And once the ability to comply with EU regulations is baked into a company’s operational core, some of it can spill over, even when not required.

Still, “just because you have the capacity to do the right thing, doesn’t mean you do the right thing,” Metcalf added.

The European Commission so far has refrained from setting overarching rules when it comes to artificial intelligence. Instead, last year it handed off the work of creating ethics guidelines to a 52-member expert group, only four of them ethicists. The group’s ethical principles, finalized in April, are now being tested by some 300 companies that will apply them in practice.

Although the effort has been applauded, it’s also been criticized, even from within the expert group. Thomas Metzinger, a philosophy professor at Germany's Johannes Gutenberg University of Mainz who participated in drafting the guidelines, told MLex the final product was heavily shaped by industry bias despite it still being the “best ethics guidelines in existence” and “a good first step.”

Rather than ethically constraining AI, for instance by drawing red lines around applications such as lethal autonomous weapon systems or social scoring, the guidelines, Metzinger charges, let industry gain the appearance of ethical observance without committing to anything beyond a deliberately vague set of requirements.

He urged the next commission to assume and sustain global intellectual leadership in the AI ethics debate, even while describing himself as “pessimistic” about the probability of a truly substantial global agreement.

“If we have no global standards, then there will be a race to the bottom and businesses will go to the countries with the lowest standards,” he predicted. The EU should become “the focus of a truly global debate. Even if some are pessimistic about its potential, this is a question of self-respect and the moral obligations following from classical European values. Unfortunately, the global pledges until now, at G-20 level for example, have been quite empty and non-committal,” he added.

The commission’s efforts to seek partnerships to make human-centric AI go global could make more progress at the G-7, where the Union will try convincing Canada, Japan and the US that its approach should be the international way forward. In May, top European officials said after meeting Japanese counterparts that they seek to promote international common understanding on AI principles through G-7 and G-20 meetings.

National governments inside the European Union have also taken slow steps toward regulating the new technology. In late 2018, they agreed to set up their own national strategies by the end of this year, although only about half of these plans have been drawn up to date.

International forums stake out their own standards

The European Union isn’t the only supranational organization with an eye on artificial intelligence. The Organisation for Economic Co-operation and Development, the human-rights-focused Council of Europe and the standard-setting body IEEE have also been active.

Non-binding OECD principles for “robust, safe and fair AI” were adopted by 42 countries in May. Dirk Pilat, a deputy director at the OECD, said at an event in Brussels in June that the call to “do something” came from member states.

“There is a lot of alignment among all the rules. And eventually we all need to move in the same direction,” Pilat said. He added that the OECD’s guidance is now a criterion for membership of the international economic and trade organization.

The principles are unlikely to present much of a hurdle: Countries need only agree to commonplace notions such as “AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.”

Australia looks abroad

Whatever Europe ends up doing, it already has a potential partner in Australia, whose national government says it's keeping a close eye on developments on the continent.

“For us, it’s helpful to understand its considerations. We are supportive of the EU agenda,” said Justin Brown, Australia’s ambassador to the EU, at a June event in Brussels.

An Australian government discussion paper published in April calls international coordination “crucial” to AI development. 

“Many AI technologies used in Australia won’t be made here,” the paper said. “Regulations can induce foreign developers to work to Australian standards to a point, but there are limits,” it added, calling for coordination with international standard-setting bodies.

Australia’s Department of Industry, Innovation and Science, which was responsible for the April discussion paper, said in June that securing citizens’ data while fostering innovation in artificial intelligence was at the top of its agenda. The department opened a public consultation on national ethics principles to govern AI when it published the April paper, but has been silent about the process since the May deadline for responses.

Speaking with MLex, Melissa Fai, a technology- and digital-focused partner at Gilbert & Tobin in Sydney, said that for now, it’s “not clear what the Australian government hopes to do on ethics governing AI, but the work going on locally is similar to other jurisdictions.”

Fai said there could be “high-level standards” adopted by a multinational body such as the OECD, with countries implementing their own more detailed regulations. Anything more than high-level principles at an international level would be a huge lift, she said.

“I think global standards will be difficult, as with any regulation or reform, but looking at the consultations and discussion papers circulating — these are all focused on the same types of issues and principles,” Fai said.

United States, home of the tech-regulation averse

Tech regulation of any sort is traditionally a hard sell in the United States, which sees in Silicon Valley a source of global economic domination. Talk of reining in the tech wizards comes heavily freighted with worries that slowing down the industry’s move-fast-and-break-things ethos could bring the whole edifice down.

True, the Trump administration endorsed the OECD AI principles, but those broadly worded requirements aren’t even a regulatory speed bump. Hardly anyone is going to argue that AI shouldn’t respect “the rule of law, human rights, democratic values and diversity,” even without the OECD telling them to do so.

“There is very little regulation, either at the federal level, or at a state level,” said Kay Firth-Butterfield, head of artificial intelligence and machine learning at the World Economic Forum.

That’s not to say that there couldn’t be regulation in the future. Silicon Valley’s many years of deference from Washington show signs of coming to an end, with lawmakers now suspicious of the power tech companies have accumulated.

There is talk in Congress of comprehensive privacy legislation, and some House Democrats say the bill should penalize artificial intelligence-enabled discrimination. In April, two senators introduced legislation that would require companies to fix algorithms that produce biased or discriminatory decisions.

In some cases, industry has even asked the government to step in to regulate AI, either as a prophylactic measure letting industry set the terms of its regulation, or possibly stemming from a belated realization of artificial intelligence’s terrible potential for transforming society into an Orwellian nightmare.

Whether this growing sense of tech industry vulnerability gets converted into actual regulation likely depends on the outcome of the 2020 presidential election. Most Democratic presidential candidates have signaled some sort of discomfort with unfettered artificial intelligence, whereas the incumbent Republican, President Donald Trump, is mostly obsessed with his treatment at the hands of social media companies, said Metcalf, the Data & Society researcher.

South Korea looks inward

The debate in South Korea over artificial intelligence has a mostly domestic cast, although a growing number of voices in Seoul argue for a global ethics charter created by an international organization such as the United Nations.

“We need global standards that govern AI from its early development to prevent possible threats from early on,” said Jeon Chang-bae, chairman of the board of the Korea Artificial Intelligence Ethics Association.

Last year, the Korean Ministry of Science and ICT published an ethics document meant for voluntary adoption by all developers of emerging technologies, including artificial intelligence.

An official from the Science Ministry told MLex that the ministry may draft AI-specific ethics guidelines.

“The transparency of algorithms is a key issue with AI, and it is possible that the government comes up with guidelines on AI development [to address the issue],” said Son Do-il, a partner at top South Korean law firm Yulchon and a frequent advisor to the government on matters including AI and privacy.

Meanwhile, the Ministry of Trade, Industry and Energy formed a robot ethics advisory committee earlier this year, charged with writing a charter on “intelligent robot ethics.”

MLex understands that the Korea Institute for Robot Industry Advancement, an institution under the trade ministry, is in the final stages of drafting these principles.
