Critics say US housing authority wants to legalize algorithmic discrimination

19 August 2019 00:00

Legalized discrimination could return to the US as long as it’s perpetuated by machines, warn civil rights advocates alarmed at a regulatory filing by the Trump administration.

Under a proposed update to US Department of Housing and Urban Development policies, landlords, banks and insurance companies could beat lawsuits alleging bias if they show their decisions were based on the outcome of a neutral algorithm or shaped by a vendor-developed computer model.

The proposal, favored by the home insurance and mortgage industries, would make it much harder for fair-housing advocates to prevail in cases of unintentional discrimination: situations of so-called “disparate impact,” where exclusion isn’t the stated goal but is nonetheless the outcome, often revealed through statistical analysis.
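For readers curious what that statistical analysis looks like in practice, here is a minimal, purely illustrative sketch in Python. It tallies approval rates by group from made-up records and flags the result when the ratio of the lowest rate to the highest falls below 0.8, a screening benchmark borrowed from employment law (the “four-fifths rule”); neither the records nor the threshold comes from HUD’s proposed rule.

```python
# Illustrative sketch of a disparate-impact check on lending outcomes.
# The records and the 0.8 benchmark (the "four-fifths rule" from employment
# law) are hypothetical and are not drawn from HUD's proposed rule.
from collections import defaultdict

applications = [
    # (group, approved) -- fabricated records for illustration only
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in applications:
    totals[group] += 1
    approvals[group] += approved

rates = {g: approvals[g] / totals[g] for g in totals}
impact_ratio = min(rates.values()) / max(rates.values())

print(rates)                           # {'group_a': 0.75, 'group_b': 0.25}
print(f"impact ratio: {impact_ratio:.2f}")
if impact_ratio < 0.8:                 # screening threshold, not a legal test
    print("outcome would warrant a closer look for disparate impact")
```

A ratio that low doesn’t prove intent; it is the kind of statistical disparity that, under current doctrine, shifts the argument to whether the criteria behind it were justified.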

The Supreme Court upheld the doctrine of disparate impact in 2015 after the state of Texas questioned its constitutionality.

“This proposed rule is intended to increase legal clarity and promote the production and availability of housing in all areas while making sure every person is treated fairly under the law,” HUD Secretary Ben Carson said in a statement.

HUD officials say judges should be able to dismiss disparate-impact lawsuits when defendants show their machine-assisted decisions didn’t depend on a model built to exclude applicants based on ethnicity or another protected status, or on “close proxies” used to sort out protected classes.

Critics say HUD, in its proposal, overlooks a basic fact of machine learning: Algorithms don’t need close proxies to discriminate.

They also worry the housing agency wants to diffuse accountability by giving safe harbor to companies that rely on third parties to develop algorithms. Automated processes already inform the vast majority of mortgages made in the US today, said Lisa Rice, president and CEO of the National Fair Housing Alliance.

As computer models come to guide everything from job offers to medical care, their use has provoked concern that industries of all types use algorithms to digitally redline populations. Machine learning is effective because it draws on vast pools of data, allowing computers to make connections between seemingly unrelated data points. With enough data, there’s no need for a close proxy.

During a Friday press call, HUD General Counsel Paul Compton defined a close proxy for women as someone who “wears a dress most often.” A more common example is ZIP code and income, since geographic location, income and race are tightly correlated in the United States.

But “you could have an algorithm pulling together variables that seem neutral on their face” that nonetheless eliminates nearly all African-American applicants, Rice said.

For example, an algorithm could combine the state where an applicant bought a car, the model year of the car, and how many times the owner has moved. By setting the right parameters — such as returning results that include more than five moves — that algorithm can identify African-Americans with 90 percent accuracy, Rice said.

“There are things like that that come up all the time. Lenders know it,” she added.
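A minimal sketch, using entirely synthetic data, of the point Rice is describing: a simple classifier trained only on facially neutral features can still infer a protected attribute once those features are jointly correlated with it. The feature names echo her example, but the data, the model and whatever accuracy it reaches are hypothetical and do not reproduce her 90 percent figure.

```python
# Sketch of "no close proxy needed": a model trained only on facially neutral
# features can recover a hidden protected attribute when those features are
# jointly correlated with it. All data below is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
protected = rng.integers(0, 2, size=n)      # hidden attribute, never a feature

# Each neutral feature is only loosely tied to the attribute on its own...
moves = rng.poisson(lam=np.where(protected == 1, 5.5, 3.5))
car_model_year = rng.normal(loc=np.where(protected == 1, 2008, 2012), scale=4)
purchase_state = (rng.random(n) < np.where(protected == 1, 0.6, 0.3)).astype(int)

X = np.column_stack([moves, car_model_year, purchase_state])
X_train, X_test, y_train, y_test = train_test_split(X, protected, random_state=0)

# ...but combined, they let the classifier infer it far better than chance.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"accuracy inferring the protected attribute: {clf.score(X_test, y_test):.2f}")
```

None of the inputs is a protected characteristic or an obvious stand-in for one; the inference emerges only from their combination, which is why critics say a “close proxies” standard misses how these models actually work.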

Critics also fault HUD for proposing to let the housing industry turn back lawsuits if its data model comes from a “recognized third party that determines industry standards.”

Artificial intelligence is often sold as an online service. If HUD gets its way, the rule will lead to a situation where defendants in disparate-impact cases successfully point to their vendors’ assertions of nondiscrimination, while those vendors have no reason to investigate whether their products actually produce disparate impacts, said Jacob Metcalf, a researcher with Data & Society, a New York-based think tank.

The language is a “perverse incentive to actually not ask about discrimination,” he said.

Even if statistical analysis shows that a third-party algorithm produces disparate impacts, the new rule would make it harder to sue the companies that pay to use it.

Under current law, “if there is a statistical disparity, the burden shifts to the defendant to justify the criteria used to show there was nothing else they could have done that suited their purposes,” Compton told reporters. Under the proposed rule, “the burden will be on the plaintiff to show these were artificial requirements that really weren’t needed.”

Rice, of the National Fair Housing Alliance, accused HUD of “basically saying they don’t care if these companies continue to use the version of the model that has a more discriminatory outcome. Because it’s developed by a third party, it’s OK.”

Advocates will find it hard to break the barriers of secrecy and proprietary intellectual property wrapped up in algorithms to establish that the models are skewed, said Frank Pasquale, a University of Maryland law professor who studies machine-assisted decision-making.

“It’s just a perfect way of distributing liability so that nobody ends up accountable,” he said.

Today’s official publication of the proposed rule begins a 60-day period for public comments, and advocates have vowed to fight it. At stake is more than housing discrimination, they say.

“This is the canary in the coal mine,” Pasquale said. “This is the initial effort to effectively unravel decades of antidiscrimination law with sophisticated algorithms.”
