Imagine you’ve spent months developing and deploying a revenue-generating deep neural network model only to discover that an attacker has stolen the model’s knowledge and will soon offer a service that will steer potential users away from yours.  Flashes of late nights and weekends spent collecting and cleaning data cross your mind, accompanied by a sinking feeling when you think about the significant monetary investment made in computation power.  The joy you felt finding just the right hyperparameters that made the model unique and, you hoped, lucrative is now in the past.  After second-guessing what technical measures could have prevented the cyber theft, you’re left evaluating legal options.

Artificial intelligence model theft is expected to increase in 2020 as the race for AI dominance heats up (Mosafi et al. 2019).  Deep neural networks, which are behind many of today’s most popular AI-based smartphone apps and recent advances in healthcare, autonomous vehicles, and biometric recognition, could be particularly targeted, as they are highly valued by companies seeking operational efficiencies and revenue generation.

Model knowledge theft is different from traditional concepts of software theft or network breaches that result in exfiltration of company data from electronic storage devices.  Software theft typically involves stealing source code files or related company know-how and is often perpetrated by insiders or by outsiders using spear-phishing attacks to gain access to a company’s network.  Cyber attacks often focus on stealing user passwords, identity information, financial records, and other data following network system breaches.

Deep learning model knowledge theft involves exploiting a model’s powerful data classification abilities.  After being trained on thousands and sometimes millions of data records (often proprietary data), deep neural networks can take relevant new input data and map it to a label with a confidence value.  For example, a properly trained computer vision model could, if fed an image of a boat, output a “boat” label along with a percentage confidence (e.g., 95%).  An attacker can use input-output pairs along with confidence information from a target (or “mentor”) model to train a new deep learning algorithm that possesses the same knowledge and provides essentially the same results as the target, thus mimicking the original model’s knowledge.  Mimicking attacks can be applied to deep learning models used in autonomous vehicles, drones, facial recognition systems, text recognition apps, or any other deep learning use case (David 2019).
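For a concrete picture of the mechanics, the short Python sketch below walks through a mimicking attack under simplified assumptions: scikit-learn’s small neural network classifier and synthetic data stand in for a production deep learning model and its proprietary training set, and the query budget, feature dimensions, and function names are illustrative only.

# Minimal sketch of a mimicking (model extraction) attack.
# Assumptions: the attacker can only call the victim's prediction API;
# scikit-learn and synthetic data stand in for a real deep learning stack.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Stand-in for the victim's proprietary model, trained on private data.
X_private, y_private = make_classification(n_samples=2000, n_features=20,
                                            n_informative=10, random_state=0)
victim = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                       random_state=0).fit(X_private, y_private)

def query_api(x):
    """What the attacker sees: a label and a confidence value per query."""
    probs = victim.predict_proba(x)
    return probs.argmax(axis=1), probs.max(axis=1)

# The attacker submits its own (even unlabeled) inputs and records the outputs.
X_attack = np.random.randn(5000, 20)
labels, confidences = query_api(X_attack)

# The collected input-output pairs become the training set for a mimic model.
mimic = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                      random_state=1).fit(X_attack, labels)

# The mimic now approximates the victim's behavior without ever touching its
# weights, architecture, or original training data.
agreement = (mimic.predict(X_private) == victim.predict(X_private)).mean()
print(f"Agreement with victim model: {agreement:.1%}")

Notably, the mimic in this sketch learns from the returned labels alone; when fine-grained confidence values are also exposed, they can serve as soft targets that make the copy even more faithful.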

Unfortunately, some of today’s most-used technical approaches for preventing machine learning model theft, including withholding some of the model’s output (specifically, the confidence value) and electronic watermarking, may not stop attackers from extracting a model’s knowledge and mimicking that knowledge in the attacker’s own model.  To make matters worse, an attacker might pass off its mimicked model as new intellectual property, though all it did was piggyback off of someone else’s hard work (and investment).
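To illustrate the first of those defenses, here is a hypothetical API wrapper that withholds or coarsens the confidence value before returning a prediction; the function and parameter names are assumptions for illustration, not any standard interface.

import numpy as np

def guarded_predict(model, x, return_confidence=False, decimals=1):
    """Serve predictions while limiting what a would-be mimicker can harvest.

    By default only the predicted label is returned; if a confidence value is
    returned at all, it is rounded to a coarse precision so that fine-grained
    probability vectors cannot be collected.
    """
    probs = model.predict_proba(x)
    labels = probs.argmax(axis=1)
    if not return_confidence:
        return labels
    return labels, np.round(probs.max(axis=1), decimals)

As the extraction sketch above suggests, however, even label-only output can be enough to train a reasonable copy, which is why such measures reduce, but do not eliminate, the risk.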

Plan for model attacks

A company providing machine learning services via an application programming interface (API), for instance, or through some other input-output means, can reduce its risk of attack and the resulting potential for adverse financial impacts by taking certain proactive steps beyond technical measures.

At the very least, before a company grants access to its model, users should be required to agree to specific policies related to their activities on the service.  Those policies may be embodied in written terms of service, end user license agreements (click-through agreements), and data privacy policies that provide for penalties, including monetary damages, and that establish personal jurisdiction and choice of law in the case of future court action.  A user agreement that defines “confidential information” should be updated to reflect the attributes of deep learning models: the knowledge representations embodied in trained models, the hyperparameters, and the training datasets.

Though such agreements may not deter would-be thieves bent on replicating your model, without them certain legal recourse may be unavailable.  Thus, terms of service, end user agreements, and other applicable company plans and policies related to model use and activity on a company’s services should be made part of each active user account.

Policies and plans should then be reviewed on a regular basis, preferably at the same time and across business units, to make sure they all contain consistent provisions.  Notices should be sent to users immediately when terms of service and other applicable policies and agreements are updated.

Of course, a company would be wise to also update its cyber attack incident response plan to include mimicking attacks as part of the company’s regular stress testing, conducted by experts hired to probe for weaknesses in network systems.

To the extent possible, vetting potential users before granting access to company services could help identify potential bad actors.  And after access is granted, monitoring user activity as part of regular network and data security efforts could help pinpoint incidents of theft.  Due diligence (including the independent stress testing noted above) may be required for insurance purposes, and presumably a company offering deep learning model services has already invested in suitable cyber attack and data breach insurance coverage that includes model knowledge theft.
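As one hypothetical example of such monitoring, the snippet below flags accounts whose daily query volume is far above normal usage, a common signature of extraction attempts; the threshold and log format are illustrative assumptions.

from collections import Counter

QUERY_THRESHOLD_PER_DAY = 10_000  # illustrative; tune to your normal usage patterns

def flag_suspicious_accounts(query_log):
    """query_log: iterable of (account_id, timestamp) pairs for a single day."""
    counts = Counter(account_id for account_id, _ in query_log)
    return sorted(acct for acct, n in counts.items() if n > QUERY_THRESHOLD_PER_DAY)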

Respond to model attacks

Once an attack has been identified or a third party is suspected of mimicking your model’s knowledge in a competing service, it may be time to put the company’s cyber attack incident response plan into action.  Although the exfiltration of customer data is not directly at issue in a mimicking attack, attackers looking to steal a deep learning model’s knowledge might not stop there.  They could, for instance, also try to steal the training dataset from the company, especially if it contains sensitive and proprietary data.  Thus, broadening an investigation beyond just the mimicking attack might be in order.

An incident response plan includes keeping law enforcement apprised of an attack.  The US Department of Justice’s intellectual property task force and state attorneys general oversee enforcement of laws aimed at preventing and responding to theft of intellectual property and proprietary company data.  In California, for example, the state’s Attorney General oversees a High Technology Theft Apprehension and Prosecution (HTTAP) Program that may assist in prosecutions once perpetrators are identified.  Even experienced intellectual property and proprietary data theft prosecutors, however, can be stymied when cyber criminals operate anonymously on the web and offshore.  A successful criminal prosecution may be difficult to achieve.

Aside from criminal investigations, a company may take civil legal action against known mimickers who offer competing services, which could include asserting potential state law claims such as breach of the company’s terms and conditions by the attacker, unfair competition, unjust enrichment, and others.

A company could also assert trade secret and patent infringement claims, assuming it can demonstrate that the attacker misappropriated the company’s trade secrets or is infringing a relevant company patent and has caused damages.  Under trade secret laws, a company would need to establish that it took reasonable steps to maintain the secrecy of its deep learning model, training datasets, and model knowledge.  No court has yet decided whether a model’s knowledge is protectable under trade secret law separate from its underlying algorithm, computational architecture (layers, nodes, hyperparameters, etc.), and source code.  Two factors (among others) courts may use to assess whether something can be a trade secret are whether the secreted thing confers a competitive advantage on its owner over those in similar businesses who do not know of it, and how hard or easy it is to duplicate.  While model knowledge provides a competitive advantage, copying the knowledge from a deep learning model could be moderately easy, as experts in the field have recently shown.

Regardless of the technical and legal measures taken prior to a mimicking attack and the legal response options available after one, successful deep learning models will make enticing targets for cyber criminals and others in 2020 and beyond.  As artificial intelligence technologies continue to proliferate, companies and their lawyers need to be more vigilant in identifying the types of cyber attacks that could be perpetrated against a company and be prepared to respond effectively.