On Christmas Day, two speeches by the Queen aired on UK television. But only one was real. The real speech was delivered by the monarch herself and aired on the BBC, as is customary. The second was a parody performed by a “deepfake” of the Queen, and broadcast by Channel 4.
Hundreds of viewers are reported to have complained about the digital impersonation of the Queen to Ofcom, the UK’s communications regulator. Channel 4 viewers are not alone in their concern about deepfakes. They have preoccupied law-makers around the world since their release for public consumption in 2017.
What are deepfakes?
Deepfakes (or deep fakes) have been aptly described as “a form of digital impersonation”.
Deepfakes are videos or sound recordings that imitate the likeness of a person’s face, voice or performance. They manipulate real (authentic) footage of a person to generate a digital imitation. In essence, they are digital look-alikes or sound-alikes.
Deepfakes are produced using machine learning technology freely available online. This easy access to the technology has contributed to their spread. Convincing digitalisation of people’s faces or voices used to be the preserve of well-resourced content producers with access to expensive computer-generated imagery and other visual effects technology. Machine learning has decentralised the making of good-quality digital imitations, enabling the proliferation of deepfakes as a result.
What are deepfakes used for?
Deepfakes can be used for a range of purposes but are better known for their harmful or malicious applications. They went viral when the technology was used to insert the faces of women into pornographic content without their consent, in a practice known as “morph porn” or “revenge porn”.
Deepfakes can also be used for creative expression or entertainment, as with the deepfake of the Queen already mentioned. The technology can find commercial application in television, films, video games or social media – to name only a few.
We anticipate that deepfake technology will become part of mainstream customer service in sectors beyond the creative industries very soon. For example, the technology may help customers visualise themselves in new clothes when shopping online or in stores. Deepfakes could also be used to assist customers with disabilities, or to serve customers in multiple languages.
Response by regulators so far
National law-makers have adopted different strategies to protect individuals against non-consensual deepfakes.
For example, states in Australia amended their criminal legislation to class deepfakes as a form of online sexual abuse, in 2017 (New South Wales) and 2019 (Western Australia). Similar changes to criminal law are currently under consideration by the Law Commission in the UK.
More recently, New York (state) went down a different path. In December 2020, the state reformed the scope of the right of publicity, using civil remedies to combat undesirable deepfakes.
Civil remedies can be an effective strategy for controlling deepfakes. This is because they can be used to stop or prevent malicious deepfakes, but they can also go further by giving individuals a legal right to the commercial benefits generated by deepfakes of them.
In this regard, civil remedies are more versatile than criminal legislation. A comprehensive response to deepfakes should provide solutions based in both civil and criminal laws.
A solution in intellectual property law
Reforming the right of publicity can only be envisaged in countries where such a right, or a close equivalent, exists. This is not the case in many jurisdictions, including England and Wales.
This does not mean that other civil remedies cannot be easily adapted to protect individuals against deepfakes. There is one intellectual property right in particular, known as “performers’ rights”, that is well suited for the task.
Performers’ rights have been noticeably absent from discussions of legal responses to deepfakes. This can be explained by the fact that they are among the lesser-known rights of intellectual property law.
What are performers’ rights and what can they offer?
Performers’ rights control the act of recording performances and the subsequent distribution of these recordings. In practice this means that a performer must consent to the fixation of their performance in a recording, and to the use of these recordings afterwards.
Performers’ rights aim to protect the economic and moral interests of performers in making and selling recordings of their work. In this regard, they are comparable to, but should not be confused with, copyright.
In the UK, performers’ rights are provided under the Copyright, Designs and Patents Act 1988. They are also protected under international law through the WIPO Performances and Phonograms Treaty 1996 and the Beijing Treaty on Audiovisual Performances 2012, both of which are in force in many countries. For this reason, performers’ rights could be used by international law-makers to protect individuals against deepfakes on a global scale, if reformed accordingly.
Who holds performers’ rights?
Performers’ rights first belong to the “performer”, that is, the individual executing a protected “performance” within the meaning of intellectual property law.
It may come as a surprise that anyone can be a performer. In practice, most people will have acted as a performer within the meaning of the law at one point or another in their lives.
This is because the legal definitions of “performer” and “protected performance” have been drafted broadly by legislators. As a result, anyone recorded speaking, singing, dancing or moving under direction (artistic or otherwise) could be classed as a performer and receive protection under performers’ rights.
This will obviously be the case for anyone performing on stage or in the studio. But performers’ rights will also apply to anyone speaking in public, in an interview, or spontaneously on tape or camera.
Under this definition, runway models, reality TV participants, teachers, politicians, the Queen, as well as anyone sharing videos on social media, are “performers” legally entitled to control the recordings made of their “performance”.
An almost perfect solution
For the reasons outlined above, performers’ rights are an attractive strategy to regulate deepfakes internationally, and in countries which do not provide an equivalent to the right of publicity.
However, performers’ rights need to be reformed to apply to digital impersonations such as those generated by deepfakes. At present, performers’ rights do not cover the imitation (digital or otherwise) of performances, such as the digitalisation of the Queen released by Channel 4 on Christmas Day. Performers’ rights cover only the recording of a performance, and the copying of such recordings.
This limitation comes from the way in which performers’ rights were drafted when they were introduced by national and international legislators. At the time, imitations and digital impersonations did not represent a moral or economic threat serious enough to warrant legal protection against them. With deepfakes, machine learning technology has changed this status quo, and calls for a reform of performers’ rights.
In the UK, as in most countries, a reform of performers’ rights would involve amending existing statutory texts to expand the scope of protection to digital impersonations. This amendment would complement existing rights, and would be relatively easy to implement.
UK intellectual property law is currently under review by the UK Intellectual Property Office to assess the adequacy of the current legal framework in the light of machine learning technologies. This would be the perfect opportunity to envisage a reform of performers’ rights in light of deepfakes.
Image CC BY-SA Alfred Nitsch.