UK's AI ethics review puts public procurement in the spotlight
28 June 2019
Developers of artificial intelligence could build greater transparency into their products, but their public-sector clients aren't putting them under pressure to do so, the head of a UK review into the technology has said.
Jonathan Evans, the chairman of the Committee on Standards in Public Life, told MLex the state was failing to use its leverage in the purchasing process to drive higher ethical standards from suppliers.
His remarks turn a spotlight on the adoption of AI-based products by state bodies and on the UK procurement process in particular.
In an interview with MLex, Evans said new technologies that pose questions of bias, accountability and transparency may compel a new look at the UK public sector's code of ethics, which dates to 1995.
There’s concern among regulators and academics around the world about the possible consequences of AI systems being used to make decisions by government agencies. Applications of the technology may be opaque in their workings, potentially biased, and installed without clear chains of responsibility for their output.
Right now, the committee, which reports to the UK prime minister, is conducting an inquiry into whether more widespread deployment of AI in government poses a challenge to the public sector’s code of ethics. Upholding values of openness, accountability and objectivity may become more difficult as decision-making shifts from humans to machine-assisted processes.
The UK is already deploying AI systems to identify patterns in prison reports, sift through visa applications and target inspections of garages offering vehicle-testing services. At the same time, it has no comprehensive overview of where and how AI technology is being used.
— Don't ask enough —
Early evidence to the committee suggests public officials don’t ask enough of contractors, said Evans, who previously served as the head of MI5, the UK’s domestic security service. He now sits in the House of Lords — the upper chamber of the UK parliament — and is an advisor to AI companies Darktrace and Luminance.
“Our thesis — which we are checking — is that we need to think about the procurement process not just as getting something which is as cheap as possible, but also that meets other needs,” Evans told MLex. “If there are ethical concerns — if we need to explain why decisions are being made — we factor that into the commercial negotiations.”
"It’s striking that, at this stage, not many of the individual companies we have talked to have said that when they have these discussions with government, standards issues come into it. And indeed some have said: ‘If people wanted more ‘explainability’, we could probably build that into the system. But nobody’s asked for it, so we haven’t,” he said.
“There may be a missed opportunity there. There’s a significant amount of buying power in the public sector. If we need to have certain characteristics in the system, there are companies out there who can help, but we haven’t asked them to do so.”
— Hearing —
During an evidence hearing conducted by the committee in May, UK regulators and policy makers said the procurement process for AI systems may require reform. That could see officials involved earlier in the design of products, and companies taking greater responsibility for their use.
Simon McDougall, a senior official at the Information Commissioner’s Office (ICO), the UK privacy watchdog, said the presentation of some AI systems as “black box” technology whose workings couldn’t be explained was often a result of lazy thinking and poor design. “There is a challenge to be made of vendors and people who are building the system,” he told the hearing.
The committee heard from Jimmy Elliott, the general counsel of US software developer SAS Institute, who said the ethical challenges posed by AI mean tech providers can’t “deploy and scoot off.”
“We will to some extent have then an ongoing responsibility to take on more risk, if we are going to be responsible vendors,” Elliott said, according to a transcript of the event published on June 17.
Ethical considerations aren’t part of the mandatory process for assessing bids, noted Sabine Gerdon, an official at the UK government’s Office for AI. Putting these requirements into the procurement criteria would mean “suppliers that think about ethics and build that in their system have a competitive advantage compared to other suppliers,” she said.
— 'Not holy writ' —
Evans' inquiry into government deployment of AI has as a backdrop a flurry of work by UK agencies on how to manage the risks of the technology. Among those with projects under way are the ICO, the Centre for Data Ethics and Innovation, and the Alan Turing Institute, a state-supported research centre.
As part of the inquiry, Evans's committee will look at the effectiveness of their combined efforts. “We will look at the various pieces and we will try to clarify in our own minds whether the various pieces of this jigsaw overall make a sensible picture,” he said.
The inquiry is looking at policies pursued in Singapore, Canada and the EU. It is taking written submissions from the public and the tech industry until July 12, and will report next year.
The state’s code of ethics, known as the Seven Principles of Public Life, might need revision if AI results in a “radically different” way of operating, Evans said. The principles were first codified in 1995 following a slew of scandals.
“They are not holy writ, but a way of articulating what expectations the public could reasonably have of how their money and their power is used by those working on their behalf. If services are going to be delivered differently, we need to be at least open to the thought that there are different ways in which to articulate that,” he said.
While the potential for AI decision-making systems to adopt and amplify the biases found in the data they're fed has troubled many policymakers, Evans said there were grounds for optimism.
“We are already starting to feel that AI, rightly applied, could have positive implications in terms of standards,” he said. “It may be easier to work out what the biases are in a machine than human beings, who are quite difficult to deconstruct.”