Japanese privacy commission's caution to OpenAI indicates shift toward more scrutiny

09 June 2023 09:08 by Sachiko Sakamaki


The Japanese privacy commission is generally known to be more of a follower than a trailblazer when it comes to going after technology giants. But last week, it became the world’s second regulator to take action against OpenAI, the provider of the artificial-intelligence chatbot ChatGPT, although the guidance it issued was only a slap on the wrist.

While experts have mixed views about the caution to the US company, the regulator’s action appears to signal a shift in Japan’s approach toward greater scrutiny of generative artificial intelligence and better governance.

The Personal Information Protection Commission, or PPC, last week cautioned the Japanese public about the use of generative artificial-intelligence services, amid the explosive popularity of ChatGPT. The regulator also disclosed that it had issued administrative guidance to OpenAI, warning it not to acquire users’ sensitive information without consent for ChatGPT’s machine learning.

Privacy experts have mixed views about the PPC’s guidance, but one thing they agree on is that the regulator probably wanted to demonstrate that Japan doesn’t turn a blind eye to personal-data issues in AI services, ahead of the G7 Data Protection and Privacy Authorities Roundtable to be chaired by the PPC chairperson later this month.

The PPC didn’t find that OpenAI had violated the Act on the Protection of Personal Information, or APPI, but said it might take additional measures if it detects more problematic issues in the future.

Surprise

“I was surprised that the PPC has cautioned OpenAI and published its action, ahead of the EU, which is still discussing its AI Act bill. This shows the Japanese government’s determination to regulate generative AI systems more strictly than before,” said Daisuke Tatsuno, a partner at Baker McKenzie in Tokyo, at a seminar* this week.

In April, OpenAI chief Sam Altman chose Tokyo as the first destination of his tour to meet with global regulators, and met with Prime Minister Fumio Kishida — a rare opportunity for a startup chief.

Speaking on the sidelines of the seminar, Tatsuno told MLex that the Japanese government seems to have shifted from its earlier, more lenient stance on AI toward promoting more disciplined use, in keeping with the G7 summit’s call for the promotion of “responsible AI”.

In another move today, the government said that the risks around copyright infringement and other issues raised by generative AI would be sorted out, to clarify standardized rules that protect creators’ rights and promote AI development.

Narrow focus

In reaction to the PPC’s guidance, privacy experts wondered whether OpenAI could actually identify and delete an individual’s sensitive information, after it was used for ChatGPT’s training. They also questioned the narrow focus of the guidance.

“I don’t understand why the guidance was limited to machine learning, and didn't cover the data used for the prompt’s input and output,” said Ryoji Mori, a privacy lawyer at Eichi Law Offices.

Hiroshi Miyashita, a law professor at Chuo University, questioned why the PPC focused on sensitive information, while noting that it may have been an easy target for raising a yellow flag under the APPI over OpenAI’s data collection for machine learning.

“If you focus on the use of sensitive information for AI training to pursue stronger enforcement measures, you may run the risk of over-regulating the use of sensitive data from websites in general, from Google's search engine to the aggregation sites of Twitter and other social media, obscuring the issues specific for AI,” he said.

In contrast, Miyashita said, the Italian regulator has objected to the use of personal data to improve algorithms. Using humans as tools for machines, he said, goes against the EU General Data Protection Regulation’s goal of respecting human dignity.

“Rather than focusing on narrow issues, we need a large picture of how AI should be governed from the fundamental values,” he added.

Unresolved issue

“Something looks odd,” said privacy lawyer Yoichiro Itakura of the PPC’s guidance to OpenAI.

He said the PPC may have been pressured to take some action against OpenAI, now that ChatGPT has gained popularity in Japan and the PPC will soon host a roundtable for G7 data protection authorities.

He said that the problem of collecting sensitive information from third-party sources has long been an issue, with regard to online searches, for instance.

In the past, the PPC has avoided clarifying when and why it’s permissible under the APPI to search for a politician’s illness on Google, or to tweet about an entertainer’s sexual orientation, without the data subjects’ consent.

“The use of sensitive data was unresolved from the time of search-engine issues. The PPC's cautions don’t help much as guidance for companies’ use of AI,” said Itakura, a partner at Hikari Sogo Law Offices.

He said he receives more questions about the use of ChatGPT and similar AI services, but the main issue is the transfer of personal data overseas to OpenAI. His typical advice is to secure consent from data subjects and to avoid entering personal data into prompts.

Itakura also pointed out that the PPC made a distinction between collecting data and acquiring data, and allowed OpenAI to collect data while prohibiting it from acquiring sensitive data from ChatGPT users and non-users without consent.

“This seems lenient, in terms of the existing PPC rules,” he said.

No legislative plan

Despite the shift toward more regulated use of AI, there’s no sign of calls for a new legal framework in Japan.

Many Japanese politicians still maintain that less regulation fosters innovation, although this approach to AI seems to be losing support, even among many US tech leaders, who are concerned about “catastrophic” AI risks for humanity.

On his global tour, OpenAI's Altman has been calling for guardrails, including a global oversight body which would inspect AI systems, while a Microsoft executive supports transparency rules, safety tests and a licensing program for some powerful AI models.

With no signs of a legislative plan for AI, Mori said Japan might become an isolated country because Japanese politicians seem “a couple of decades behind” and are clinging to the outdated idea that less regulation is better for innovation.

According to Miyashita, having clear values and rules would actually help Japanese businesses pursue innovative AI services and gain competitiveness, whereas a gray area could create uncertainty and have a chilling effect on business.

*Privacy Seminar: The opportunities and challenges for corporations created by development in data protection laws and AI, hosted by Baker McKenzie and Hakuhodo DY Holdings, June 7, 2023.
