Google privacy chief wants Australia to learn from EU, US in privacy-law update
15 June 2023 09:06 by Laurel Henning
Google Chief Privacy Officer Keith Enright is in Australia setting out the US technology giant’s wish list for an overhaul of the country’s privacy law in meetings with government officials.
It’s Enright’s fourth attempt to get to Australia, but “it is serendipitous,” he says, that the trip “coalesce[d] with the privacy review being in the state that it is in.”
Enright said he has met with government officials to discuss what the company has seen in Europe and the US, lessons to be learned and “where things can be done better, or things have actually gone quite well.”
“Rigid and prescriptive systems have a hard time anticipating and adapting to radical rapid change in technology,” Enright told MLex in an interview in Sydney.
But a “legitimate interest mechanism” to process data including for targeted advertising is “one of the greatest innovations in the GDPR,” Enright told MLex.
“Having a notion like legitimate interest, which is more flexible and more nuanced, puts more responsibility on the covered entity like us to do more of the analysis ourselves and to make sure that we can document and demonstrate and be held accountable if we make the decision correctly or incorrectly. Those kinds of measures are quite helpful,” he said.
Enright’s visit comes four months after Australia’s Attorney General’s Department, which is overseeing a review of national privacy law, published more than 100 proposals, including an “unqualified right” for individuals to opt out of receiving targeted advertising and to stop their personal information being used or disclosed for direct marketing.
The proposals also included new definitions for direct marketing, targeting and trading.
Enright told MLex Google doesn’t have “grave concerns about the way the language is drafted,” but the company is “actively engaged with policymakers” on clarifying the terms.
Enright recognized a “hostility” towards targeted ads in the public debate around advertising, but said that “users prefer ads that are relevant to them.”
“I do recognize there is a challenge and I think that organizations ought to have an obligation to be clearly communicating with users, what information is being collected and used to make ads relevant to them,” he said.
“If users don't want targeted ads, they should be able to opt out of targeted ads. Absolutely,” he added, explaining that Google has “long offered” the ability for users to control what ads they see.
“Too much rigidity, we think, would actually be bad for our billions of users — in Australia and around the world,” he said.
“That said, we welcome the effort to try to improve privacy notices and standards for everyone,” he added.
Privacy enhancing technologies
One response to Australia’s potential move towards a clampdown on targeted advertising could be the promotion of Privacy Enhancing Technologies, or PETs.
Google started to roll out its “privacy sandbox” initiative in February on a small number of devices using its Android mobile operating system, in a move that raised antitrust scrutiny.
The initiative’s new programming interfaces allow relevant ads without tracking user activity across apps and websites.
“If data can be anonymized, if it can be pseudonymized, you can continue offering certain kinds of services to people in ways that allow them to achieve maximum value from the service when you’ve already managed the privacy risks suitably,” Enright told MLex.
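Pseudonymization, as Enright describes it, replaces a direct identifier with a token so that records remain usable without exposing the raw value. A minimal sketch of one common approach, keyed hashing, is below; it is illustrative only and not a description of Google’s implementation, and the key and identifiers are hypothetical.

```python
import hmac
import hashlib

def pseudonymize(value: str, key: bytes) -> str:
    """Replace an identifier with a keyed hash so records stay linkable
    without exposing the raw value. The key must be stored separately
    from the data, otherwise the mapping can be recomputed."""
    return hmac.new(key, value.encode("utf-8"), hashlib.sha256).hexdigest()

# Hypothetical key and identifier, for illustration only.
key = b"example-secret-key"
token = pseudonymize("user@example.com", key)

# The same input and key always yield the same token, so events can be
# counted or joined per user without handling the email address itself.
assert token == pseudonymize("user@example.com", key)
assert token != pseudonymize("other@example.com", key)
```

Unlike anonymization, this is reversible in principle by anyone holding the key, which is why privacy frameworks such as the GDPR treat pseudonymized data as still personal.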
One way to speed up the adoption of PETs would be to acknowledge them in legislative updates like Australia’s privacy act review, Enright said. This would incentivize participants to use these technologies and reward them for doing so, he added.
Enright’s visit to Australia comes less than two weeks after the Australian government published a report highlighting gaps in existing legislation when it comes to the rapid development of generative AI.
The report highlighted the increased cyberattack and data-breach risks created by AI built on Large Language Models, or LLMs, such as ChatGPT.
“I would be cautious about overstating the significance of privacy and data protection to the overall landscape of policy issues that are going to be impacted by AI [machine learning],” Enright said.
Still, in terms of privacy issues linked to AI, Enright said the main thing he and other privacy experts can bring to the table is expertise in “risk management, data governance, compliance, [and] oversight.”
“We want to proceed cautiously and carefully. But proceed. Because the benefits of this technology could be so profound,” he said.
All efforts to create “a responsible way forward” to oversee the use and development of generative AI are “laudable,” Enright said, adding that although the European Parliament approved its position on the EU’s AI Act this week, it’s too soon to pick a preferable way forward.
“The technology is moving so fast […] I think that regulation and regulators are going to struggle to keep pace with this technology,” Enright told MLex.
“All of those features that made it challenging for law and regulation to consistently govern and constrain behavior in the context of data protection are even more extreme in the context of [AI ML], [and in order] for this technology to continue to evolve optimally to advance the best interests of users all around the world, it is going to require very active engagement from private sector companies, governments, regulators, academics and intellectuals. We need to be sure that we're having that conversation,” he said.
Australia’s government is consulting on its AI report through July 26.
A consultation on proposals to update Australia’s privacy law closed in February, but the submissions haven’t been published by the government, which has yet to respond to the consultation.