OpenAI's Altman embarks on global charm offensive with AI regulators

05 June 2023 21:43 by Matthew Newman, Amy Miller, Madeline Hughes


OpenAI CEO Sam Altman is on a global charm offensive, hoping to convince regulators to rein in artificial intelligence, but not too much.

The US and the entire world need an oversight agency to regulate AI, he's argued to lawmakers at home and abroad, from Brussels to Seoul. But to some skeptical politicians and policymakers, Altman's goodwill tour is a smokescreen thrown up by another rich US tech company, one that distracts from the real harms AI is causing now, from potential price-fixing and antitrust violations to invasions of privacy.

Altman is traveling the globe warning officials about the need to regulate the latest iteration of artificial intelligence technologies like the ones his company makes and sells. He's testified before the US Senate, and has been meeting, or will meet, with regulators and policymakers in the European Union, the United Kingdom and Japan. He's also traveling to Israel, Jordan, Qatar, the UAE, India and South Korea, he announced on Twitter yesterday.

His message has been straightforward but dire: without rules and guardrails around AI, humanity could face catastrophic consequences akin to nuclear war or a global pandemic. Altman joined more than 350 AI experts and executives in signing a one-line statement, published on May 30, warning that mitigating the risk of extinction from AI should be a global priority.

But Altman has a solution he’s pitching to AI regulators. What he calls “superintelligence” needs a global oversight body, much like the International Atomic Energy Agency, which would inspect AI systems, require audits, and restrict deployment, Altman said in a May 22 blog post.

The agency’s power should be limited, Altman said. Most AI systems, those that are "below a significant capability threshold," would be exempt. The agency's main focus “should be reducing existential risk and not issues that should be left to individual countries, such as defining what an AI should be allowed to say,” he said.

He’s even willing to fund new experiments in AI regulation. OpenAI’s nonprofit is awarding ten grants of $100,000 each to fund experiments around the world aimed at setting up a democratic process for deciding what rules AI systems should follow.

“We want to learn from these experiments, and use them as the basis for a more global, and more ambitious process going forward,” OpenAI said when announcing the grants on May 25.

Critics, however, say they’ve heard it all before. Tech leaders such as Meta’s Mark Zuckerberg and Google’s Sundar Pichai have also traveled the globe lobbying governments over major policies, such as the EU’s content-moderation Digital Services Act and its Digital Markets Act, which reins in digital gatekeepers.

Tech giants will say they support regulation when pressed by politicians, critics argue, but rarely the specific policy or piece of legislation actually being proposed and debated.

“Whenever a hyper rich corporate executive makes a public statement embracing oversight, we should always ask ourselves ‘how does this fit into the corporation’s plans for profit-making?’” Emily Tucker, executive director of Georgetown University’s Center on Privacy & Technology, said in a blog post for the think tank.

A spokesperson for OpenAI declined to comment.

European tour

Altman has spent much of his global goodwill tour in Europe. He’s met with the leaders of Spain, Poland, Germany, France and the UK, all countries where regulators are grappling with concerns about generative AI tools like ChatGPT. And he’s already pushed back on proposed AI rules he didn’t like.

On June 1, Altman met with Ursula von der Leyen, the president of the European Commission, the EU’s executive body, which proposed the Artificial Intelligence Act in 2021. Under the bill, AI technologies would be regulated according to their impact and risk, with uses deemed “high risk” subject to stricter rules.

Following the meeting, von der Leyen said in a tweet that “AI can fuel huge progress and improve our lives, but we must mitigate risks and build trust. To match the speed of tech development, AI firms need to play their part.”

European Parliament members will vote in the coming weeks on a version of the bill that would regulate generative AI tools such as ChatGPT. These “foundation models” would have to comply with additional transparency requirements, including disclosing that content was generated by AI. Developers would also have to prevent models from generating illegal content and publish summaries of the copyrighted data used for training.

Altman suggested that OpenAI could exit Europe over its AI Act if the requirements are too onerous, but then backtracked on the threat after a week of “productive” conversations in Europe “about how to best regulate AI.”

“We are excited to continue to operate here and of course have no plans to leave,” he said in a tweet on May 26.

Altman’s call for regulation has attracted the attention of UK Prime Minister Rishi Sunak, who wants to position the UK between the US “wait and see” approach and the EU’s more assertive push to impose regulation.

Sunak met on May 24 with Altman and the bosses of Google DeepMind and Anthropic to discuss “safety measures, voluntary actions that labs are considering to manage the risks, and the possible avenues for international collaboration on AI safety and regulation.” The AI developers agreed to work with the UK’s Foundation Model Taskforce “to build the UK’s capabilities in developing safe artificial intelligence.”

But Altman’s visits in Europe have also raised questions. At the US-EU Trade and Technology Council meeting on May 30 and 31, where officials discussed closer ties on tech standards, AI experts expressed concern that Altman’s warnings of a future AI catastrophe mask the problems the technology is causing now.

“Here comes generative AI with good news and bad news. The bad news is it detracts us from a conversation on the current actual harms of AI systems,” said Gemma Galdón-Clavell, CEO of Eticas, a consultancy that helps clients detect algorithmic vulnerabilities, speaking alongside European Commission digital policy chief Margrethe Vestager at the TTC meeting. The good news, she said, is that tools such as ChatGPT are leading to an understanding of societal rather than individual harms.

Policymakers’ preoccupation with the science-fiction threats of AI carries an opportunity cost, Galdón-Clavell said: “How do we protect people today, this generation, from the harms that we already understand?”

Another issue facing regulators is a lack of common scientific standards and evaluation tools, said Dario Amodei, CEO of Anthropic, an AI startup founded by former OpenAI staff, speaking at the TTC. “Both sides of the Atlantic have an interest in developing this science. Even if regulatory approaches taken by different countries are different, as I'm sure they will be, operating from the same base of facts seems important,” said Amodei.

There’s also concern about whether OpenAI met a May 15 deadline set by the Italian Data Protection Authority for the company to launch an information campaign about its ChatGPT service.

"I think we should be balanced — the risks of the very powerful AI and the need to fight existential threats and more catastrophic scenarios needs to be addressed and not underestimated. But this cannot prevent from regulating the AI that is here now," Italian socialist Brando Benifei, a leading rapporteur on the AI bill expected to be voted on in the coming weeks, told MLex.

US Senate testimony

While Altman attempts to charm regulators in Europe, he’s also meeting with US lawmakers, making similar arguments about the need to set up an oversight agency for AI.

Last month, Altman testified before the Senate Judiciary Committee about AI’s potential to spread election misinformation and enable impersonation, as well as copyright and intellectual-property issues and job disruptions.

When the conversation pivoted to the enormous amount of information needed to operate AI systems like OpenAI’s, and the potential privacy issues that raises, Altman tried to reassure the senators.

“We're not trying to develop the storehouse of our users' information,” he said.

He didn’t advocate for or against proposed comprehensive privacy legislation that lawmakers in the lower chamber are currently negotiating.

But Altman did tell the committee that the US needs a new federal agency to regulate AI, so the government could set safety standards and require independent audits for AI technologies with a high impact. An agency could be more “nimble” than Congress and respond more quickly to new issues that will inevitably arise, he argued.

Some senators said they liked the idea, noting the failure to regulate social media effectively.

“Congress doesn’t always move at the pace of new technology, which is why we may need a new agency,” said Senator Richard Blumenthal, chair of the committee's Privacy, Technology and the Law subcommittee.

Echoing his talks in Europe, Altman agreed there needs to be a way to distinguish creative material made by AI from that made by humans, likening generative AI products to photoshopped images “but on steroids.”

But creating an oversight agency like that will take time, critics say, and the technology is already moving fast and creating issues. An oversight body also wouldn’t address whether the massive data collection OpenAI needs to function should be permitted in the first place, they say.

“Altman’s hope is that it is a story that will distract us from the fact that the enormous probabilistic engines that OpenAI is building are not only not inevitable, they are not possible without extreme concentrations of wealth, and not maintainable without extreme concentrations of corporate power,” Tucker said. “It is this concentration of wealth and power that the government must act to prevent, and it is this concentration of wealth and power that Sam Altman wants to co-opt the regulatory process to conserve.”

As Altman’s tour continues this week, there are already indications regulators are catching up to OpenAI’s technology, and enforcement may soon follow.

Japan, the first destination on Altman’s world tour, has largely promoted the use of generative AI. But last week it abruptly issued administrative guidance to OpenAI to limit the machine learning of sensitive personal information. The country’s privacy commission also cautioned users about how the chatbot may use the information they input, warning that they could inadvertently violate privacy laws if they don’t opt out of machine learning.

—With reporting assistance from Jenny Lee in Seoul and Toko Sekiguchi in Tokyo.
