Vizio case shows the hidden pitfalls of privacy, unfairness
By Neil Averitt. Originally published on FTC:Watch on June 16, 2017
The Federal Trade Commission is doing solid, necessary work in the privacy area, using its consumer "unfairness" authority to rein in companies that spy on people. Yet problems lurk beneath the surface. The unfairness authority remains poorly defined, and if the defense bar is alert, the agency may soon have to choose between definitions that are either too broad or too narrow. Now would therefore be an excellent time to start creating some additional options.
These issues came up in the commission's case against television maker Vizio earlier this year. Vizio is one of the world's largest makers of Internet-connected "smart" televisions. It has sold more than 11 million sets in the US. These were designed so that they could capture continuous information about the program being played — whether from consumer cable, broadband, DVD, over-the-air broadcasts, or streaming devices — and then forward this information back to the company. The viewing information about each individual set was matched up with demographic data about things such as household wealth, was made anonymous by the removal of customer names, and was sold to data managers for purposes such as audience measurement or targeted advertising.
That's troublesome enough in itself, but it could easily become worse. The anonymity might be pierced, or information about viewing habits might be sold in the future to other, less benign buyers.
The FTC had a legal handle on this conduct, however, because the customers never gave valid consent to Vizio's monitoring. The agency charged that the conduct was improper on two different grounds.
Some grounds involved deception. Consumers had been told that the monitoring software was a "Smart Interactivity" feature, blandly described in the settings menu as something that let the television make "program offers and suggestions." Given this partial description of the program's functions, the failure to describe the remaining, monitoring features can be understood as deception by omission. (A later one-time pop-up notice did specifically mention the monitoring, but it was too fleeting to constitute effective disclosure.) Therefore, even where consumers made a conscious decision to leave the software in place rather than disabling it, that decision didn't constitute valid consent to monitoring.
Other, broader grounds involved unfairness. The conduct met the three-part test for consumer unfairness. It involved a substantial harm to consumers, in a form that they could not have reasonably avoided, which wasn't outweighed by other benefits to competition or consumers.
Vizio agreed to a stipulated order against the practices and to a token financial payment of $2.2 million. The commission accepted the order on Feb. 6 on a vote of 3-0 in one of Chairwoman Edith Ramirez's last official actions.
(Maureen Ohlhausen, the agency's current acting chair, concurred separately to ask for better information about the importance to consumers of safeguarding various kinds of data. That's a vital factual question, but something different from the legal definition of the unfairness power.)
The commission thus reached a fine result in this particular case. It used the full range of its authority, didn't overreach, halted the abuse, and set precedent for the future.
The problem is the long-term character of the unfairness precedent. It may turn out to be either a good bit broader or a good bit narrower than the commission needs.
As articulated in the Vizio case, the unfairness standard is alarmingly broad and vague, a throwback to the jurisprudence of an earlier generation. Conduct may be condemned as an "unfair…act or practice" if it (1) causes substantial injury, (2) which is not offset by other benefits, and (3) could not have reasonably been avoided by consumers. Just think about what that really means. This is basically an omnibus balancing test, empowering the commission to weigh all the relevant factors and then do the right thing.
To be sure, this standard has been examined and found acceptable in a number of ordinarily skeptical places. The commission's 1980 policy statement on consumer unfairness used very similar language. Congress, worried about the FTC's perceived overreaching in the 1970s, codified that statement's three-part test as a new section 45(n) of the FTC Act. And reviewing courts have found that this formula provides acceptable levels of notice and guidance, as the Third Circuit did in Wyndham Worldwide in 2015.
But this level of comfort really rests on the fact that the agency has been sparing and prudent in its use of the unfairness power. It is easy to imagine the uproar if some future commission asserts a general power to do good and to right wrongs.
There is an alternative.
The 1980 commission actually intended a different construction of its policy statement. The key element, in its view, was the notion that consumers could not reasonably have avoided the harm. Consumers normally avoid harm by making appropriate selections in the marketplace. Thus, a harm that couldn't be avoided was one that stemmed from conduct impeding this exercise of consumer choice. Unfair conduct, in other words, was not just anything that injured consumers generally, but conduct that injured free and informed purchase decisions in particular.
Unfair practices under this definition would be fairly specific. They would include things like coercion, bribery, withholding of material information, undue influence over vulnerable consumers, and deception (which is one specialized form of unfairness).
The commission underscored this construction in footnote 47 of International Harvester, 104 F.T.C. 949, 1061 (1984), the decision in which the 1980 policy statement was memorialized as an appendix: "Some commentators have interpreted our policy statement as involving essentially a general balancing of interests, with all the imprecision of that course, rather than a definable economic rule. In fact, however, the principal focus of our unfairness policy is on the maintenance of consumer choice or consumer sovereignty, an economic concept that permits relatively specific identification of conduct harmful to that objective."
The commission could always go back to this construction. Section 45(n) is phrased as an outer-limits boundary on the commission's powers, rather than as a complete definition of unfairness. The agency remains free to articulate more stringent requirements.
That would be nice, but it wouldn't get us entirely out of the woods. The "consumer choice" construction solves the problem of overbreadth, but it may lead to a definition of unfairness that is now too narrow.
This definition requires the context of a purchase transaction. If a firm fails to disclose material information about privacy at the time a product is bought — and if this isn't close enough to an affirmative representation to count as deception — then it can be challenged as unfair. But in the new world of electronic data and monitoring, many abuses will be unrelated to purchases.
This gap can be seen in the Vizio case itself. People who bought televisions in the first few years got them without the monitoring software installed. For those customers Vizio installed the software later and remotely. As it happened, Vizio also provided some brief notice at the time — notice which was inadequate and permitted a finding of deception. But suppose that no notice at all had been given? Customers would have been even more seriously injured, but the injury would have been years after the purchase transaction and not obviously related to it.
Today's electronic privacy issues can show up in many other forms that are equally remote from purchase decisions. Consider possible misconduct by background data custodians who may never interact directly with consumers. Or consider the needs of firms involved in international data flow, who may need to show that they are comprehensively supervised under the EU-US Privacy Shield.
How can these situations be reached through principles that are neither too broad nor too narrow?
Three possible approaches come to mind.
Until a satisfactory definition of unfairness is devised, one option is to make sure that cases have a deception count as backup wherever possible. Even in the absence of affirmative representations, a great deal can be accomplished through the concepts of half-truths and of omissions that are deceptive in light of consumers' prior assumptions.
A second approach would be to use the narrow, choice-based definition of unfairness as a general proposition, but to identify privacy as a specific add-on that is treated by different standards. This is conceptually untidy, but there are practical precedents for it. Even the original 1980 statement, which focused strongly on the choice approach, identified health-and-safety risks as special cases that could be reached apart from purchase contexts. As an example, the commission cited a case where a firm had distributed free sample razor blades in such a way that they could come into the hands of children.
The third and most definitive solution would be to recognize that privacy is really a whole new area of responsibility for the commission — distinct from both antitrust and consumer protection — and persuade Congress to pass separate legislation on it.
These clarifications will also help the work of other agencies such as the Consumer Financial Protection Bureau, whose statutes are modeled on the FTC Act.
There is a fourth approach, which should be avoided. Attorneys may want to compensate for a weakness in one legal theory by importing concepts from another. If an unfairness case appears weak on particular facts, for example, there is a temptation to point out that the conduct is also deceptive. But this mindset very quickly blurs the lines between different laws. In the 1970s, the agency routinely asserted that conduct was illegal because it was unfair, deceptive, and an unfair method of competition — a portmanteau approach that made it impossible to know which elements of the offense had to be proved or rebutted.
To sum up, the commission's privacy program is on track and doing good work. It relies too much, however, on unfairness concepts that haven't been updated and formalized. Rather than waiting until this problem actually bites somebody, the agency may want to begin refining its standards now. The key is to be clear and specific so that the courts will respect the new theory.