It didn’t take long for someone to turn generative adversarial networks (GANs), a machine learning technique that at first blush seemed benign and of somewhat limited utility at its unveiling, into a tool capable of causing real harm.  Now, Congress has stepped up and passed legislation to focus the federal government’s attention on the technology.  If signed by the president, the legislation will require two federal agencies to study the role GANs play in producing false media content and report their findings back to the respective House and Senate committees, a step seen as a prelude to possible notice-and-comment regulations and federal criminal statutes targeting those who intentionally deploy GANs to deceive or cause harm.  That Congress was able to pass GANs-focused legislation in a year like 2020 suggests a resolve that artificial intelligence developers and consumers who make false media would be wise not to ignore.

GANs were introduced to the artificial intelligence world in 2014, when Ian Goodfellow, then a Ph.D. student at the University of Montreal, conceived of and helped develop the technique as a way to synthesize new labeled image data, an essential ingredient in the development of artificial intelligence representation models in the field of computer vision.  Now, some six years after Goodfellow and his colleagues published their seminal paper on the topic, the original idea has been exploited in ways that its pioneers, like other technology pioneers and disruptors before them, likely did not foresee.

In the years since, GANs have been used, along with other artificial intelligence tools, to superimpose existing celebrity faces onto actors, in one case in a pornographic video (a so-called “deep fake” video), seamlessly making it appear as if the celebrity were the person depicted in the video.  In a similar way, GANs have been used in the creation of videos of politicians uttering sentences they never expressed (a notable example is a technology demonstration video in which Barack Obama appears to be speaking but whose words are actually those of actor Jordan Peele impersonating Obama’s voice).  It’s one thing to display disinformation on social media as text; it’s another to spread it as false media in which an influential person appears to speak in a video while an algorithm behind the curtain pulls the levers, raising the stakes for what is real and what is fake in our modern connected society.  And it doesn’t take a data scientist or AI engineer steeped in GANs know-how to conceive of other nefarious ways the technology could be used to the detriment of individuals, governments, and societies writ large.

Despite their original useful purpose of generating new data, GANs are, for better or worse, part of a larger problem attributed to artificial intelligence technologies: the spread of false or misleading content.  Although GANs were developed around the same time as other advances in AI, the technology was not deployed in the same harmful ways as those other developments, at least not right away.  Disinformation bots, for example, proliferated on social media before and during the 2015-2016 presidential primary and general election cycle, deployed by domestic and foreign actors intent on spreading and amplifying false and misleading content.  Also around that time, content recommender systems permeated some social media platforms, systems that some blame for fomenting divisiveness and creating echo chambers that persist to this day.  While those use cases were and continue to be troubling, GANs-based technologies, by themselves or combined with other technologies like auto-encoders and automated speech recognition (ASR), operate on a whole different level when it comes to the potential for deception and direct harm.

Not surprisingly, then, lawmakers on Capitol Hill and in state capitals have responded.  At the federal level, Senate Bill S.2904, the Identifying Outputs of Generative Adversarial Networks Act (IOGAN Act), introduced by Sen. Catherine Cortez Masto (D-NV), passed both the House and Senate in early December 2020.  The legislation was sent to the White House on December 11, 2020, where it awaits the president’s signature.  In the bill, Congress expressed the sentiment that outputs from GANs raise “grave” national security and societal concerns, while at the same time recognizing that “[g]aps currently exist on the underlying research needed to develop tools that detect videos, audio files, or photos that have manipulated or synthesized content, including those generated by generative adversarial networks.”

If signed by the president, the new law would define “generative adversarial network” to mean, with respect to artificial intelligence, the machine learning process of attempting to cause a generator artificial neural network (G) and a discriminator artificial neural network (D) to compete against each other to become more accurate in their function and outputs, through which the generator and discriminator create a feedback loop, causing the generator to produce increasingly higher-quality artificial outputs and the discriminator to increasingly improve in detecting such artificial outputs.  The law would require the National Science Foundation (NSF) and National Institute of Standards and Technology (NIST) to produce, within one year of enactment, reports to Congress about needed research and educational outreach in areas of manipulated or synthesized content from GANs.  Funding for their research could come from federal spending authorized under the recent National Defense Authorization Act for FY 2021, which includes spending targets for NSF and NIST in areas overlapping with the IOGAN Act, or from other appropriations.
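For readers unfamiliar with the mechanics the statute describes, the following is a minimal sketch, written in Python with the PyTorch library, of the generator-discriminator feedback loop.  The toy one-dimensional data, network sizes, and training settings here are illustrative assumptions for exposition only, not anything drawn from the Act:

```python
# Minimal GAN training sketch (illustrative only): a generator G learns to
# mimic samples from a 1-D Gaussian while a discriminator D learns to tell
# real samples from G's fakes -- the feedback loop the statute describes.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))   # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))   # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0    # "real" data: samples from N(3, 0.5)
    noise = torch.randn(64, 8)                # random latent input to G

    # Train D: label real samples 1 and generated samples 0
    fake = G(noise).detach()                  # detach so this step updates only D
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Train G: try to make D label G's outputs as real (1)
    loss_g = bce(D(G(noise)), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

Over many iterations, the generator’s outputs drift toward the real data distribution precisely because the discriminator keeps getting better at catching fakes.  That same dynamic, scaled up from toy numbers to images, audio, and video, is what makes GAN outputs so difficult to distinguish from authentic media.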

At the state level, California’s legislature passed AB730 last year, effective for a three-year period beginning January 1, 2020, giving political candidates the right to sue to stop others from using materially deceptive audio or visual media in political advertising, including media created with the help of GANs.  That same year, AB602 was enacted, creating a private right of action for a “depicted individual” who, as a result of digitization, appears in a video or image to be giving a performance they did not actually perform or to be performing in an altered depiction.

In Texas, the legislature passed SB751, effective September 1, 2019, which makes it a criminal offense to fabricate a deceptive video with intent to influence the outcome of an election.  Under the law, a deceptive video, or “deep fake,” is one that appears to depict a real person performing an action that did not occur in reality.

In Virginia, lawmakers in 2019 updated existing law § 18.2-386.2, making it unlawful to disseminate or sell, without permission or authorization, certain nude images of “another person,” a term that now includes a person whose image was used in creating, adapting, or modifying a videographic or still image with the intent to depict an actual person and who is recognizable as an actual person by the person’s face, likeness, or other distinguishing characteristic.  The law has been interpreted to apply to GANs-produced deep fake videos.

Finally, and most recently, in New York, Governor Cuomo signed new deceptive media legislation into law on December 1, 2020, amending the state’s civil rights laws to add Section 52-C, a “Private right of action for unlawful dissemination or publication of a sexually explicit depiction of an individual.”  The new law applies to “depicted individuals,” meaning individuals who appear, as a result of digitization, to be giving a performance they did not actually perform or to be performing in a performance that was actually performed by the depicted individual but was subsequently altered to be in violation of the law.  “Digitization” is defined as realistically depicting the nude body parts of another human being as the nude body parts of the depicted individual, computer-generated nude body parts as the nude body parts of the depicted individual, or the depicted individual engaging in sexual conduct, as defined in subdivision ten of section 130.00 of New York’s penal law, in which the depicted individual did not engage.

Assuming the IOGAN Act is signed into law, a year from now we may see new federal legislation that targets the same abuses covered by the handful of state digitization/false media content laws.  Other states could follow with their own measures, not only to address existing GANs and their ability to synthesize new data and cause potential harm, but also to anticipate what comes next when a new advance in artificial intelligence replaces GANs.
