Hillary Brill is the Glushko-Samuelson Intellectual Property Practitioner in Residence at American University, Washington College of Law, where she teaches intellectual property and technology law. She also taught Internet policy and Internet economy courses at Georgetown Law Center. Previously, she was Head of Government Relations, Senior Global Counsel, and Legislative Counsel for eBay and PayPal.
The rapid growth of the Internet of Things (IoT)—devices that connect to the Internet and each other, such as smartphones, smart thermostats, smart cars and smart fitness trackers—has brought incredible technological advances and thorny regulatory issues, particularly in information privacy. Traditional regulators of privacy, namely the Federal Trade Commission (FTC), have stretched to apply conventional tools to technological advances and the privacy challenges they bring.
An analysis of the latest FTC cases and outcomes reveals that the FTC has established new rules of the road for how information practices and principles are expected to apply to new technologies, specifically with respect to information that “tracks” consumers’ habits, such as viewing habits. Initially, the FTC applied traditional Section 5 “deception” jurisprudence in a novel way to advance traditional notions of privacy, but it recently transitioned to a new paradigm in VIZIO, announcing an unprecedented “unfair tracking” cause of action. These new rules have implications for current and future users and manufacturers of tracking devices, such as smart TVs and other IoT devices.
I. When Traditional Privacy Regulation Meets Unconventional Tracking Methods
At the beginning of the Internet age, fair information practices were (in hindsight) comparatively straightforward. The collection of personal information was obvious and in plain view. Purchasers completed online order forms with payment and address details, which were used in mostly non-surprising ways (e.g., order fulfillment and customer service). Consumers provided a home or email address to enter a sweepstakes giveaway. Individuals could opt out of marketing messages simply by getting on a do-not-call or do-not-email list.
As a result, traditional rules grounded in traditional ideas of privacy were simpler and easier to implement. Today, these rules do not necessarily apply as even our idea of privacy is no longer concrete.
As the boundaries defining privacy are challenged, uncertainty grows for the businesses and consumers that make up the IoT economy. This uncertainty in turn challenges regulators, like the FTC, that must protect users without unnecessarily stifling innovation. Our traditional modes of governing privacy may not be well suited to meeting these new challenges. While traditional data collection practices and compliance expectations certainly still exist, they no longer present the same range of enforcement or policy challenges to regulators and businesses.
Today, much of what the IoT does is enable rich learning about the world by tracking activity (personal or not) and analyzing that tracking information. This tracking may occur across time (e.g., how many steps you take in a day) and across devices (e.g., whether you already watched a particular YouTube video on your iPhone and should be recommended a related show on your iPad). So-called “tracking” provides valuable individualized recommendations (e.g., geolocation tracking may help you find the closest gas station or emergency room) or informed aggregate analysis of enormous human value (traffic trends, or aggregate health data about a flu outbreak).
In the face of these new technology advances and the massive amounts of information collected through new tracking, the FTC recognized the need to reevaluate privacy in light of the IoT, and it held its first workshop in November 2013. The workshop—and ensuing reports and enforcement cases—confirmed that the IoT era presents unique problems that require novel expansions of consumer protection doctrines, even where devices handle only data points traditionally viewed as “anonymous,” such as IP addresses. Simply put, the traditional rules no longer apply.
II. The Evolution of New Rules of the Road
The FTC Act empowers the FTC to bring enforcement actions when companies engage in “unfair or deceptive acts or practices in or affecting commerce.” When the FTC brings an enforcement action against a company, the Commission prepares a complaint concerning the alleged conduct; that complaint either serves as the basis for a settlement or is litigated in federal court. If there is a settlement or a successful prosecution by the FTC, the resulting order typically contains certain common provisions binding the defendant: injunctive relief against continued violations, compliance and reporting obligations, recordkeeping requirements, employee acknowledgment of the order, and, in some cases, equitable monetary relief. The FTC is generally limited to equitable monetary relief, except where it has been given explicit statutory authorization to seek civil penalties. Importantly, these orders often have a 20-year term, and violation of an order can lead to civil penalties of up to $40,000 per violation.
A. UNFAIR OR DECEPTIVE APPROACH TO PRIVACY
The FTC’s enforcement actions have come to operate as a de facto common law of informational privacy, and this “common law” is properly read to apply to the IoT as well. Enforcement actions by the FTC must be understood to apply universally, and the principles established through enforcement actions are expected to be followed.
1. UNFAIR ACTIONS
The FTC may bring an enforcement action if it views a company’s practice as unfair. The FTC’s 1980 Policy Statement on Unfairness explains that “unfair” acts or practices “cause or [are] likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition.”
The FTC Policy Statement on Unfairness spends considerable time covering the “substantial injury” prong. For a practice to be “unfair,” it must “cause or be likely to cause substantial injury to consumers.” The injury cannot be trivial or merely speculative. Consequently, most cases brought under the unfairness doctrine involve allegations of monetary harm. Of course, practices that impose substantial health or safety risks on consumers have also been subject to scrutiny under the unfairness standard.
It is worth noting that the FTC Policy Statement on Unfairness leaves little room for “privacy harms” (i.e., emotional harm caused by unauthorized access to or disclosure of private information) that are not connected to tangible harm. Indeed, the FTC Policy Statement on Unfairness expressly states that emotional impact and subjective harms are generally insufficient to support a claim of substantial injury.
It is for this reason that until recently, the FTC has only alleged unfairness in instances involving the unauthorized disclosure of (1) directly identifiable personal information that is (2) clearly “sensitive” (e.g., health information, financial information). For example, the FTC has brought enforcement actions when a company posted illicit photographs of individuals, along with their names and contact information, without consent; when a company collected and transmitted usernames, passwords, financial account information, and other sensitive personal information without consent; and when a company failed to adequately protect sensitive health information from unauthorized disclosure.
A practice is not unfair if it is “reasonably avoidable,” however. A consumer can reasonably avoid a substantial injury where “they have reason to anticipate the impending harm and the means to avoid it, or they may seek to mitigate the damage afterward if they are aware of potential avenues toward that end.” This is the basis for most notice and consent forms; if a practice causes or is likely to cause substantial injury, then appropriate notice should be provided and sufficient consent obtained prior to engaging in the practice. Otherwise, a data practice without notice and consent may be vulnerable to liability under the “unfairness” doctrine.
2. DECEPTIVE ACTIONS
The FTC may also bring an enforcement action if a company engages in deceptive acts or practices. In 1983, the FTC published the FTC Policy Statement on Deception. The FTC Policy Statement on Deception explained that deceptive acts or practices are practices that involve a “representation, omission or practice that is likely to mislead the consumer acting reasonably in the circumstances, to the consumer’s detriment.” In other words, a practice is deceptive within the meaning of Section 5 “(1) if it is likely to mislead [a] consumer acting reasonably under the circumstances (2) in a way that is material.” Whether a misrepresentation is likely to mislead is based on the “net impression that it is likely to make on the general populace.” The FTC’s analysis requires “‘common sense,’ and . . . a section 5 violation is not determined by fine print, technicalities, and legalese.”
In approaching the issue of materiality, the FTC Policy Statement on Deception explained that a “‘material’ misrepresentation or practice is one which is likely to affect a consumer’s choice of or conduct regarding a product. In other words, it is information that is important to consumers.” Additional guidance goes on to state that “the Commission presumes that express claims are material . . . [w]here the seller knew, or should have known, that an ordinary consumer would need omitted information to evaluate the product or service, or that the claim was false.”
The FTC Policy Statement on Deception also recognized that certain claims may be more important than others, especially those that “significantly involve health, safety, or other areas with which the reasonable consumer would be concerned.” Characteristics “central” to a product at issue are presumptively material.
Indeed, FTC enforcement actions on privacy have generally focused on allegations of deceptive privacy policies as opposed to allegations of unfairness. Consequently, this is where we begin our discussion of recent FTC cases and how they have grappled with privacy and the Internet of Things and created new rules of the road.
III. FTC Enforcement Moves from Deception to Unfairness
The FTC, in response to the regulatory challenges of new IoT technology, invoked the deceptive practice standard in its enforcement decisions in Nomi, InMobi, and Turn. The use of the deception standard was a flexible approach to a problem that did not traditionally fit either the deceptive practice or the unfairness standard. However, in the recent VIZIO enforcement, the FTC moved from a deception approach to an unfairness approach. In the process, the Commission created an unprecedented “unfair tracking” standard, which applies “consent and choice” rules to a wholly new category of “sensitive” information. These are new rules of the road that have implications for any company that “tracks” information.
A. In the Matter of Nomi Technologies, Inc.
The FTC’s 2015 settlement with Nomi signaled a new approach to the FTC’s regulation of the Internet of Things. Nomi offers “listening” technology that helps retail stores learn about customer patterns and traffic by using in-store sensors to collect and analyze the movement of consumers’ mobile devices. WiFi routers collected the MAC addresses (uniquely identifiable addresses) broadcast by customers’ mobile devices. In addition to these device identifiers, Nomi collected other information like WiFi signal strength (to determine a device’s proximity to the sensor or router) and the date and time that each MAC address was collected (to track activity over time). Neither Nomi nor its retail clients were alleged to have paired any of this tracking data with known shoppers.
Instead, Nomi collected and analyzed this data to provide aggregate analytics to the participating retail store clients. Retail stores could learn from this data the percentage of individuals passing by that actually entered the store, how long customers spend at their store on average, the rate of repeat customers, and how many customers visited multiple locations of the same retail chain. Again, Nomi was never alleged to have used the data to retarget marketing to customers’ devices, nor to have attempted to re-identify the customers (determine their names or contact information).
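The kind of aggregate analytics described above can be sketched with a toy example. The sighting log, hashed device identifiers, store names, and metrics below are all hypothetical simplifications, not drawn from the Nomi record; they merely illustrate how pseudonymous MAC-address sightings can yield repeat-visit and multi-location statistics without identifying anyone by name:

```python
from collections import defaultdict

# Hypothetical sighting log: (hashed MAC address, store, date).
# No names or contact details are involved at any point.
sightings = [
    ("mac_a1", "store_1", "2014-06-01"),
    ("mac_a1", "store_1", "2014-06-08"),
    ("mac_b2", "store_1", "2014-06-02"),
    ("mac_b2", "store_2", "2014-06-05"),
    ("mac_c3", "store_2", "2014-06-03"),
]

dates_per_device = defaultdict(set)   # device -> distinct dates seen
stores_per_device = defaultdict(set)  # device -> distinct stores visited
for mac, store, date in sightings:
    dates_per_device[mac].add(date)
    stores_per_device[mac].add(store)

total = len(dates_per_device)
# Share of devices seen on more than one day (a "repeat customer" proxy).
repeat_rate = sum(len(d) > 1 for d in dates_per_device.values()) / total
# Share of devices seen at more than one location of the chain.
multi_location_rate = sum(len(s) > 1 for s in stores_per_device.values()) / total

print(repeat_rate)          # 2 of 3 devices returned on a later day
print(multi_location_rate)  # 1 of 3 devices visited two stores
```

The point of the sketch is the privacy posture the FTC confronted: only aggregate ratios leave the system, yet the underlying log still tracks individual devices over time.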
Nomi (and its retail clients) did not post notices on the premises of participating retail stores explaining its data collection practices. Customers were not specifically made aware of Nomi’s tracking practices in the context of their visits to or other interactions with retailers; they were not alerted to the presence of tracking technology at all, and had no means even to encounter the Nomi brand. The FTC nonetheless proceeded on deception grounds, alleging that Nomi’s website privacy policy promised an in-store opt-out mechanism that was never actually made available.
B. United States v. InMobi PTE, Ltd.
The FTC also used its deception authority in InMobi to police a similar issue of end-user tracking. As in Nomi, the FTC’s case against InMobi challenged a defective privacy control under claims of deception rather than unfairness. InMobi marketed a software development kit (“SDK”) that could be integrated into mobile applications to enable the delivery of advertisements (for example, banner ads) within the mobile app environment. A mobile app developer looking to monetize a new app could incorporate this SDK into its app to deliver ads to app users.
The InMobi SDK enabled ads to be targeted based on geolocation data (latitude and longitude). Unless disabled, the InMobi SDK would access the device’s geolocation application programming interface (“Geolocation API”) and use that data to target ads delivered through the SDK. Consistent with both Android and iOS requirements, after installing an app with the InMobi SDK embedded, device users were prompted to allow or deny the app access to the Geolocation API. Once a user denied access, neither the Android nor the iOS device would make geolocation data available to the InMobi SDK.
However, the InMobi SDK also collected data about the WiFi networks to which devices connected. For users who did not disable the Geolocation API, InMobi simultaneously collected both latitude and longitude data through the Geolocation API and details about the WiFi network to which each device was connected at that moment. With these two data sets, InMobi could populate a database that mapped each WiFi network to the latitude and longitude delivered by the Geolocation API. Consequently, the locations of app users who had disabled access to the Geolocation API could nevertheless be pinpointed by merely looking up the location of the WiFi network they were using. InMobi targeted ads to users based on the location it derived through this WiFi network lookup process.
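The two-phase inference just described can be illustrated schematically. The network identifiers and coordinates below are invented, and the lookup is deliberately naive; real systems key on WiFi hardware identifiers (BSSIDs) and far larger crowd-sourced databases:

```python
# Phase 1: devices that left the Geolocation API enabled report both
# their coordinates and the WiFi network they are connected to,
# populating a network-to-location map as a side effect.
consenting_reports = [
    ("bssid_cafe", (38.9072, -77.0369)),
    ("bssid_home", (40.7128, -74.0060)),
]
wifi_to_location = {}
for bssid, latlng in consenting_reports:
    wifi_to_location[bssid] = latlng

# Phase 2: a device that DENIED Geolocation API access still exposes
# which WiFi network it is on; its location is inferred by lookup,
# sidestepping the user's opt-out.
def infer_location(bssid):
    return wifi_to_location.get(bssid)  # None if the network is unmapped

print(infer_location("bssid_cafe"))  # location inferred despite opt-out
```

The sketch shows why the control was "defective": the opt-out governed the Geolocation API, but the location signal leaked through an unrelated channel built from other users' consented data.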
As with Nomi, the concern presented by InMobi’s practices was that InMobi tracked geolocation without the device owner’s actual notice or consent (indeed, some would argue, in contravention of the express intentions of the user). The FTC alleged that InMobi’s practices were deceptive because they were allegedly false as compared to certain representations made not to app end users, but to the app developers who incorporated the InMobi SDK. The Complaint alleges that InMobi’s SDK integration guide and product marketing materials suggested that it was the Geolocation API feature alone that enabled geo-targeting.
Like in Nomi, the FTC applied its deception authority flexibly to address the alleged tracking of highly specific consumer activities on connected devices without notice and consent. But unlike in Nomi, the FTC did not look to consumer disclosures; instead, it alleged that InMobi was principally liable for deceptive representations made to its business partners (app developers), not to consumers.
C. In the Matter of Turn, Inc.
The FTC used its deception authority to address a matter of “tracking” for a third time in In the Matter of Turn, Inc. This case involved a similar issue in which a defective control was challenged under the FTC’s deception authority, not under unfairness. Turn offers a digital marketing platform (“DMP”) designed to allow advertisers to target consumers across devices. The digital advertising ecosystem Turn relied on uses various identifiers and techniques to try to connect user activity across the Internet and across devices to inform (personalize) the advertising delivered to particular users.
Many will be familiar with two types of identifiers Turn used to track digital activity across devices: cookies and device advertising identifiers. Cookies, as the Turn Complaint describes, are unique text files stored in a browser that allow a company like Turn to recognize the user accessing a website. Device advertising identifiers, like Google’s Advertising ID and Apple’s Identifier for Advertisers (“IDFA”), allow companies like Turn to recognize a device that accesses a website.
Internet users looking to control their information privacy by preventing efforts to track their activity across devices can generally do so by deleting their cookies and resetting their device advertising identifiers. But Turn also collected another type of identifier, a unique identifier header (“UIDH”), from those using the Verizon Wireless network. Web traffic from users of the Verizon Wireless network was encoded with this UIDH, and much as InMobi allegedly mapped WiFi network data to location data, Turn allegedly mapped its UIDH data to device advertising identifiers and cookies. As a result, if a user of the Verizon Wireless network attempted to stop cross-device tracking by deleting cookies and resetting device advertising identifiers, Turn could simply read the UIDH on later device activity, restore the deleted cookies in the user’s browser, and reconnect the reset device advertising identifier to the existing profile.
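A minimal sketch of that re-linking behavior might look like the following. The identifier values and profile contents are hypothetical, and the logic is a simplification of whatever Turn’s systems actually did, offered only to make the mechanism concrete:

```python
# Profiles keyed by cookie; a carrier-injected UIDH maps back to them.
profiles = {"cookie_123": {"interests": ["sports"]}}
uidh_to_cookie = {"uidh_abc": "cookie_123"}

def handle_request(cookie, uidh):
    """Return the profile for an incoming ad request, re-linking via
    the UIDH if the cookie was deleted or reset."""
    if cookie in profiles:
        return profiles[cookie]          # known cookie: normal path
    if uidh in uidh_to_cookie:
        old_cookie = uidh_to_cookie[uidh]
        profiles[cookie] = profiles[old_cookie]  # restore under new cookie
        return profiles[cookie]
    profiles[cookie] = {"interests": []}  # genuinely new user
    return profiles[cookie]

# The user deletes cookies, so the browser presents a fresh cookie,
# but the UIDH still rides along on Verizon Wireless traffic.
restored = handle_request("cookie_999", "uidh_abc")
print(restored)  # the old "sports" profile, despite the cookie reset
```

The sketch makes plain why deleting cookies was an ineffective opt-out for Verizon Wireless users: the UIDH functioned as a persistent key that survived every consumer-facing reset.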
Again, the FTC attacked Turn’s practice on deception grounds, not on grounds of unfairness. According to the Complaint, Turn voluntarily posted online privacy guidelines which stated, in pertinent part, that users could opt out of tracking by opting out of accepting cookies. The Complaint alleged that this was deceptive because doing so would not ultimately disable tracking for those using the Verizon Wireless network.
IV. New Rules of the Road from the VIZIO case: Unfair Tracking
Previous seminal FTC cases concerning highly specific tracking of users via mobile devices looked to deception grounds as a basis to effectively impose notice-and-choice principles on those new use cases. On February 6, 2017, however, the FTC in VIZIO alleged privacy violations for the first time under a newly created cause of action: “unfair tracking.”
The Complaint alleged that VIZIO offered a feature called “Smart Interactivity” that used embedded “automated content recognition” (ACR) software in VIZIO Smart TVs. ACR software can automatically detect the content appearing on a television. VIZIO allegedly collected information about what was shown on VIZIO TVs (“viewing data”) and shared it with authorized data partners, who then used the viewing data to carry out familiar services: (1) the generation of summary reports and analytics about device (television) usage and (2) ad retargeting. Per the Complaint, neither process required the association of viewing data with direct, personally identifiable information (like name or contact information). Instead, the Complaint alleges that VIZIO paired viewing data with device IP addresses and that each IP address was sometimes used to (1) enhance data with demographic information to allow for richer analysis and (2) match TVs to other devices for ad retargeting and other analytical purposes.
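Conceptually, ACR works by fingerprinting what is on screen and matching it against a reference library of known content. The sketch below uses an exact hash purely for illustration; real ACR systems use robust audio or video fingerprints that tolerate compression and cropping, and the frame bytes and content names here are invented:

```python
import hashlib

def fingerprint(frame_bytes):
    """Toy stand-in for a perceptual fingerprint of on-screen content."""
    return hashlib.sha256(frame_bytes).hexdigest()[:16]

# Reference library mapping fingerprints to known content (hypothetical).
library = {fingerprint(b"frame-of-show-x"): "Show X, Episode 1"}

def recognize(captured_frame):
    """Match a captured frame against the library; the resulting label
    is the 'viewing data' that would be reported back."""
    return library.get(fingerprint(captured_frame), "unknown")

print(recognize(b"frame-of-show-x"))  # matched: "Show X, Episode 1"
print(recognize(b"unindexed-frame"))  # no library entry: "unknown"
```

What matters for the legal analysis is the output, not the matching technique: the TV emits a content label tied to an IP address, which is precisely the pairing the Complaint challenged.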
A. NEW RULE: UNFAIR TRACKING
Count 1 of the complaint alleged a cause of action pled for the first time: “unfair tracking.” This allegation was unprecedented for several reasons. First, the count placed the full weight of the FTC’s enforcement authority behind a concept it had previously endorsed only in informal speeches and a letter to the FCC: that an IP address could be treated as personally identifiable, or, at the very least, that the fact that IP addresses are not directly personally identifiable does not mean data associated with an IP address are undeserving of privacy protection. Indeed, the Complaint specifically acknowledged that VIZIO’s contracts with licensees prohibited the re-identification of viewing data, yet this precaution was not sufficient to foreclose allegations of “unfairness.”
Second, the “unfair tracking” count created a new category of sensitive data: “viewing data.” It alleged that the viewing data was “sensitive.” The count states that consumers “would not expect” viewing data to be collected from their televisions. Commissioner Ohlhausen noted in her concurring statement that there may be policy reasons to treat viewing data as sensitive, as evidenced by the Cable Privacy Act, which protects viewing data in other contexts.
Taken together, the FTC alleged that VIZIO’s collection and sharing of viewing data without sufficient notice and consent met a new count for “unfair tracking” and it “caused or is likely to cause substantial injury,” as is required to sustain a Section 5 claim for unfairness. However, Commissioner Ohlhausen pointed out that the FTC must actually “determine whether the practice causes substantial injury” and explained that “[t]his case demonstrates the need for the FTC to examine more rigorously what constitutes ‘substantial injury’ in the context of information about consumers.” The link between viewing data and “substantial” injury is not apparent on the face of the Complaint.
B. NEW NOTICE-AND-CHOICE RULES FOR VIEWING DATA
The new notice-and-choice rules set forth in VIZIO are perhaps just as important as the new count for “unfair tracking.” Section II of the Order established a new set of notice-and-choice ground rules for the collection of viewing data:
First, notice must be provided clearly and prominently, separate and apart from any privacy policy, terms of use, or similar document;
Second, the notice must contain certain substantive elements, including a description of the types of viewing data that will be collected and used (which includes any data appended to viewing data), what will be shared with third parties, and the purposes for sharing that data; and
Third, when the notice is provided, true “opt-in” consent must be collected from the consumer before viewing collection may be enabled.
The FTC in VIZIO established unprecedented new regulations of the Internet of Things: a new count of “unfair tracking” and a new set of notice-and-choice rules. The VIZIO decision reveals that the FTC, under its new leadership, is prepared and willing to interpret the unfairness standard flexibly and to establish new standards and develop new tools to regulate the IoT. Companies will need guidance to follow these new rules of the road going forward. The FTC should continue its proactive guidance, with industry and stakeholder participation, to ensure that its new rule is effective in protecting information privacy while allowing the tremendous benefits and advances the IoT promises to bring.
Until the Federal Trade Commission is fully staffed with new Commissioners and leaders, it may be premature to assess the long-term impact of this case. Still, it is expected that all entities engaged in highly specific profiling and tracking practices, even on a basis formerly considered “anonymous,” will want to reevaluate whether and how they provide detailed notice and secure individual consent for uses that consumers may not otherwise expect within the context of their use of any particular IoT device.