Online platforms' commitment to ad relevancy could be legal Achilles' heel

18 April 2019

Technologists may have discovered a secret vulnerability in online platforms’ otherwise impenetrable legal shield.

Unsurprisingly, it’s connected to advertising, at once the source of Big Tech’s economic might and the backlash fueling a growing pile of lawsuits, including challenges filed by governmental agencies.

The weak spot is ad distribution: the algorithms meant to get paid content before as many eyeballs as possible, as quickly as possible. New findings suggest those algorithms are racist, ageist and chauvinistic. So is a lot of advertising. But US law demands that opportunities for housing, employment or credit be cast widely and impartially.

For more than two decades, digital giants have turned back accusations that they enable a host of illegal activities by citing Section 230 of the Communications Decency Act. Research and a complaint filed by an unlikely government agency — the Department of Housing and Urban Development — could show how to slip past that defense.

The CDA is a cornerstone of today’s permissive online landscape: it immunizes tech platforms from liability related to the content users post online. “If you don’t encourage illegal content, or design your website to require users to input illegal content, you will be immune,” the US Court of Appeals for the Ninth Circuit wrote in 2008 in Fair Housing Council of San Fernando Valley v. Roommates.com.

The law has let Facebook, Google and Twitter defeat claims they support terrorism by letting organizations such as Hamas post messages. It protected Grindr against a lawsuit filed by a New York man who experienced months of daily harassment directed by an ex-boyfriend impersonating him on the hookup app. It’s the reason now-defunct social network Experience Project didn’t suffer legal consequences after connecting a Florida dealer of fentanyl-laced heroin to a man who died from overdosing on it.

But where attorneys previously sought to hold online platforms culpable for the actions of others, they’re now implicating the platforms themselves.

“Decisions about who to show what, and when, can give rise to liability,” said Peter Roemer-Friedman, a civil rights attorney with Outten & Golden. He’s not an impartial observer: Roemer-Friedman is suing T-Mobile, Amazon and others for allegedly using Facebook to discriminate by age in employment advertising.

Neither is Eric Goldman, a Santa Clara University law professor and a vociferous champion of Section 230. Usually, when asked if Section 230 applies, he says “yes.”

This time, assuming that activists’ findings are correct, the Menlo Park, California, social media giant has entered a “gray area where I’m not sure how things will proceed,” he said.

“Part of the reason I don’t know is I can’t think of another circumstance where some advertiser places a legal ad and the publisher does something illegal with it,” he added.

Researchers have long suspected that ad distribution algorithms — a different layer of audience selection from the filters that advertisers themselves select — skew ad placement by excluding genders, ethnicities or other classes protected under civil rights law.

Facebook in particular is famed for its insistence that advertising be “relevant” to users. “Facebook ads work because they’re relevant for people, and easy to create and measure for businesses,” as the company itself puts it.

Determining relevancy is a two-step process. First, advertisers select their audience, either by specifying criteria such as “moms of grade school kids” and trusting Facebook to find those people, or by uploading their own list.

The social media giant recently promised to stymie discriminatory ads by funneling landlords, employers and financial institutions into a dedicated portal where their options for audience targeting will be limited.

But Facebook’s promises account for only half the problem, critics say. The second step is distribution, over which advertisers have no control.

The CDA may well protect Facebook from being sued over content that third parties upload to its servers, but it can’t shield the company from liability for how Facebook itself distributes that content. That’s the theory behind a complaint federal attorneys filed in late March with the Department of Housing and Urban Development’s administrative law court. It charges that Facebook’s reliance on machine learning to anticipate who will click on ads ends up excluding protected groups.

Even if advertisers don’t want discriminatory groupings, the ad distribution algorithm will make their ads exclusionary anyway, they wrote. Facebook violates civil rights law not because of the actions of a third party, but because of its own algorithm, coded to relentlessly pursue relevancy.
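To see why that matters, consider a deliberately simplified sketch of a relevance-optimizing delivery step. Everything below is hypothetical: the audience data, the click-through estimates and the deliver function are invented for illustration and bear no relation to Facebook’s actual code. The point is only that ranking a gender-neutral audience by predicted clicks can reproduce a historical skew in who ultimately sees a job ad.

```python
# Hypothetical sketch of the dynamic the HUD charge describes: the advertiser's
# audience is neutral, but a delivery step that ranks users by predicted
# click-through rate (CTR) can still skew who actually sees the ad.
# Illustrative only; the numbers are invented.
import random

random.seed(42)

# Hypothetical eligible audience: the advertiser targeted "US adults,"
# with no gender filter at all.
audience = [{"user_id": i, "gender": random.choice(["man", "woman"])}
            for i in range(10_000)]

# Hypothetical learned CTR estimates for a lumberjack job ad. In a real system
# these would come from a model trained on historical engagement; here they are
# hard-coded to reflect a pattern in that history (men clicked similar ads more).
PREDICTED_CTR = {"man": 0.031, "woman": 0.006}

def deliver(ad_budget_impressions, audience):
    """Serve the ad to the users with the highest predicted relevance."""
    ranked = sorted(audience,
                    key=lambda u: PREDICTED_CTR[u["gender"]],
                    reverse=True)
    return ranked[:ad_budget_impressions]

shown = deliver(ad_budget_impressions=2_000, audience=audience)
men_share = sum(u["gender"] == "man" for u in shown) / len(shown)
print(f"Share of impressions delivered to men: {men_share:.0%}")
# Even though the targeting was gender-neutral, optimizing purely for
# predicted clicks reproduces the historical skew in who sees the ad.
```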

Where other attempts to argue around Section 230 failed to frame online platforms as more than passive hosts of third-party content, this one may succeed. Plaintiffs against Grindr argued the app doesn’t just publish profiles: it sorts and matches them and uses a geolocation feature to track users’ whereabouts.

Experience Project steered users into online groups based on keywords in their posts. Those arguments failed because increased website sophistication doesn’t automatically create liability: the sorting, location and recommendation tools were available to malefactors and law-abiders alike, so the platforms weren’t implicated in the development of unlawful content.

The algorithm isn’t a neutral tool but a material contributor to illegal behavior, activists argue. For legal buttressing, they point to the Ninth Circuit, which denied immunity to the California operators of Roommates.com for discriminatory housing ads because the discrimination originated in how the website filtered ads before showing them to users.

How exactly the algorithms function has resisted investigation even as researchers accumulated evidence from controlled experiments showing skewed distribution. A 2018 academic study of Google’s ad delivery system found discriminatory activity such as employment ads for truck driving jobs being shown to men while women saw ads for secretarial positions. But the paper only speculated on why that was the case.

A new paper initiated by Washington, DC, think tank Upturn, “Discrimination through optimization: How Facebook’s ad delivery can lead to skewed outcomes,” may shed new light. Collaborators from Northeastern University and the University of Southern California tried to advertise to an audience of American adults that was neutral with respect to protected classes, and failed. A simulated ad campaign for lumberjack jobs was shown almost entirely to men, while ads for supermarket clerk jobs skewed just as heavily toward women. The audience for housing ads varied depending on whether photos of white or black families accompanied them, with delivery skewing toward users of the same race as the family pictured. The study concludes that Facebook likely employs an image recognition system.
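The kind of skew the researchers measured can be illustrated with a back-of-the-envelope check: compare the share of impressions delivered to one group against that group’s share of the eligible audience. The figures below are invented, and the published study’s methodology is considerably more careful, controlling for budget, bids and audience composition.

```python
# Rough, hypothetical sketch of a delivery-skew check: compare the gender split
# of delivered impressions against the split of the eligible audience.
# The counts are invented and do not come from the Upturn study.
from math import sqrt

def delivery_skew(impressions_to_men, impressions_total, baseline_share_men):
    """Return the delivered share to men and a z-score against the baseline."""
    delivered_share = impressions_to_men / impressions_total
    # Standard error of a sample proportion under the null hypothesis that
    # delivery matches the eligible audience's composition.
    se = sqrt(baseline_share_men * (1 - baseline_share_men) / impressions_total)
    z = (delivered_share - baseline_share_men) / se
    return delivered_share, z

# Hypothetical campaign: the audience is 50% men, but 1,700 of 2,000
# impressions for a "lumberjack" ad went to men.
share, z = delivery_skew(impressions_to_men=1_700,
                         impressions_total=2_000,
                         baseline_share_men=0.5)
print(f"Delivered share to men: {share:.0%} (z = {z:.1f})")
# A z-score this large means the skew is far beyond what random delivery
# to a 50/50 audience would produce.
```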

“If I were Facebook, I would be at least a little bit nervous,” said Aaron Rieke, managing director of Upturn.

Facebook didn’t respond to a request for comment for this story, but it did point to an earlier statement declaring itself “disappointed” with the HUD complaint, as well as to its response to the Upturn study, in which the social media giant said it’s “been looking at our ad delivery system and have engaged industry leaders, academics, and civil rights experts on this very topic — and we’re exploring more changes.”

In none of the statements does Facebook deny that its ad distribution algorithm discriminates.
