The nation’s consumer protection watchdog, the Federal Trade Commission (FTC), took extraordinary law enforcement measures on January 11, 2021, after finding that an artificial intelligence company had deceived customers about its data collection and use practices.

In a first-of-its-kind settlement involving facial recognition surveillance systems, the FTC ordered Everalbum, Inc., the now-shuttered maker of the “Ever” photo album app and related website, to delete or destroy any machine learning and other models or algorithms developed in whole or in part using biometric information it unlawfully collected from users, along with the biometric data itself. In doing so, the agency expanded its range of law enforcement tools and signaled it would use them again to protect consumer rights in matters involving artificial intelligence (AI) technologies.

I won’t go into the agency’s findings of fact and the details of Everalbum’s violations, which you can read here. But worth exploring is the fundamental question the Everalbum decision raises about the propriety of agency command-and-control regulatory governance of AI systems when some of the entities the agency regulates, i.e., the large AI technology companies, operate as if “all engagement [by customers using their systems] is good engagement — the longer the better — and all with the goal of collecting as much data as possible,” as Apple’s Tim Cook recently suggested. Even precedent-setting orders with delete/destroy provisions and big monetary fines may have limited success in the face of such covetous policies.

What’s a feasible alternative? It might involve a fundamental shift in how the law treats personal data rights.

Depriving Wrongdoers of Their Technology

One obvious takeaway from the Everalbum decision is the FTC’s stated position that one should not be allowed to unjustly enrich oneself at the expense of others. As FTC Commissioner Rohit Chopra argued in remarks published with the Everalbum Consent Agreement and Order, it is critical “that the FTC meaningfully enforce existing law to deprive wrongdoers of technologies they build through unlawful collection of Americans’ facial images and likenesses.”

The notion that the government can order fully trained machine learning models and algorithms to be deleted or destroyed is likely viewed as a big deal by AI companies, even those that do not collect facial images and likenesses as Everalbum did. For companies whose intrinsic value is closely tied to their technology, deleting models and data could cause long-term, possibly irrecoverable economic harm. For others, trained models and related algorithms may be key to continuity of operations, and deleting core technology and related data could force them out of business (in Everalbum’s case, the company said it shut down operations last summer because of competition from the large tech companies that dominate facial recognition).

It’s not hard to imagine that, in the wake of the Everalbum decision, AI companies in the business of mining user data are conducting internal reviews to identify operations that are not wholly aligned with their published privacy and data use policies (as impenetrably vague and complex as those policies are), so they can realign operations and policies and reduce the risk of being accused of deceptive practices. This should produce a net benefit to the public.

Even larger tech companies may do the same, as they will want to avoid adding to mounting public distrust of AI systems and the possibility of an over-regulation backlash. But unlike their smaller competitors, larger companies can probably absorb the loss of one or two machine learning models and algorithms, a couple of data sets, and the compute costs sunk into training those models (which can run into the hundreds of thousands and even millions of dollars). After all, with the resources larger companies have on hand, more user data can be collected through other channels, and new machine learning models can be rebuilt in a matter of days or weeks to monetize the new data.

What About Monetary Fines?

Notably, the Everalbum order does not include monetary penalties. As Commissioner Chopra noted, the FTC imposed its delete/destroy order on Everalbum while forgoing monetary penalties available under its other powers. Those powers were previously used by the FTC to impose fines on technology companies that misled consumers about their data collection and use activities. Facebook, for example, was fined a record $5 billion in 2019 following the Cambridge Analytica matter. In that case, two of the five commissioners called the amount insufficient and said “it would do little to change the company’s behavior.”

Even so, to deter unfair and deceptive trade practices in the context of data privacy, and to curb unacceptable and risky behavior, monetary penalties must continue to be part of FTC settlements. Consistently applied, large monetary penalties, with or without delete/destroy orders, may provide better disincentives while also addressing other concerns that data-based systems create, such as the economic incentives that have produced asymmetries of power between tech companies and their customers when it comes to monetizing user data.

But the FTC can only go so far, given that its authority to impose fines lies within the realm of protecting consumers from unfair and deceptive commercial practices. Data miners and AI companies will, in full compliance with those legal parameters, continue to collect user data, some of it highly personal, for the purpose of extracting its inherent monetary value for profit.

Did Chopra Hint at an Alternative Governance Approach to Personal User Data?

In his remarks discussing the Everalbum decision, Chopra says that the FTC’s law enforcement framework seemingly allows it to address the unlawful collection of Americans’ facial likenesses as part of its authority to impose penalties for unfair and deceptive commercial practices. The term “likeness” in the context of faces (or faces in image data) is also used in another area of privacy law: publicity. The legal framework that defines publicity law (also called the “right of publicity”) is very much one of property rights, like the intellectual property rights that automatically arise at the creation of new artistic works and inventions. Indeed, publicity rights have been likened to a form of trademark in one’s own name and likeness. Publicity rights are regulated by state law (statutory in some states, well-developed common law in others). So if the FTC asserts authority in matters involving a person’s face as well as their likeness, perhaps that is a recognition that, like publicity rights and the right a person has to control his or her name and likeness, user data ought to inherently belong to the user at its formation as well.

The notion that electronic user data has commercial value, and thus is protectable under right of publicity laws, could provide the basis for a new legal framework to counter the collect-as-much-data-as-possible mindset that some AI companies subscribe to. Adapting the publicity rights framework to user data requires recognition of a new form of privacy right, one that covers not just one’s electronic likeness data but also any form of personal data that arises in the course of a person’s passive or active interaction with the technology world. This might include which websites a person visited and when, what links they clicked on, how long they viewed articles, where they live, what products they have purchased, whether they surf using a phone or a desktop application, where they move throughout a city during the day, what things they indicate online that they like or dislike, how attentive they are to certain topics online, and what they say in online posts, among other data.

What is required is recognition, first by legal scholars and courts and then by lawmakers, of the interest of the individual in the exclusive use of his or her own data, just as their predecessors first recognized the privacy interests of individuals in other contexts and their right “to be let alone.” W. Prosser, Privacy, 48 Cal. L. Rev. 383, 389 (1960). This new right should be recognized as existing insofar as it is represented by a person’s interactions with electronic data collection systems, and insofar as the use of the data may be of benefit to him or to others. Just as the right of publicity creates rights in one’s name and likeness under certain circumstances, this new recognition of user data rights would create rights in the nature of a data property right “for the exercise of which an exclusive license may be given to a third person, which will entitle the licensee to maintain an action to protect it.” (See Restatement (Second) of Torts, § 652C.)

I will have more to say about this later. For now, you can read about publicity rights applicable to AI systems here.

FTC Press Release:

Original complaint:


The post FTC Orders AI Company to Delete its Model Following Consumer Protection Law Violation first appeared on ARTIFICIAL INTELLIGENCE TECHNOLOGY AND THE LAW.