Unmasking Deepfakes: Navigating the Copyright Quagmire

Photo by Clint Patterson on Unsplash


In the ever-evolving digital landscape, the emergence of deepfake technology has raised profound concerns, especially in the realm of copyright law. Deepfakes, sophisticated synthetic media created using artificial intelligence, manipulate or replace existing content, often blurring the line between reality and fiction.1 As these digital creations become more prevalent, questions surrounding their implications under copyright law have taken center stage.


Deepfakes employ machine learning algorithms to manipulate or replace elements in audio, video, or image files.2 They can seamlessly superimpose one person’s likeness onto another, creating content that appears authentic and convincing. While this technology offers exciting possibilities for entertainment and creative expression, it also poses significant challenges in terms of intellectual property rights and the potential for misuse.


The apprehensions surrounding deepfakes are not merely speculative; they have already produced tangible harms. Despite its potential for positive applications, deepfake technology has been exploited for malicious purposes. In 2017, for example, a Reddit user manipulated videos by superimposing the faces of actresses onto explicit content, leading to harassment and unauthorized use.3 Deepfakes have also been used to spread political propaganda, as during a candidate’s campaign in India.4 Moreover, a 2019 study found that 96% of all deepfake videos were nonconsensual pornography.5 These incidents underscore the darker side of deepfake technology and highlight the need for responsible use and effective countermeasures against misuse.


Copyright law traditionally protects the rights of creators by granting exclusive rights over their original works.6 The creation of deepfakes heavily depends on the acquisition and modification of audiovisual content, which copyright regulations protect to a significant extent.7 Consequently, copyright law serves as a pertinent framework for analyzing the distinct societal challenges arising from the production and circulation of deepfakes.8 However, the rapid advancement of technology has outpaced the evolution of copyright statutes, leaving legal frameworks struggling to keep up with the digital age. Deepfakes add a layer of complexity, as they involve the manipulation of existing copyrighted material.


The fair use doctrine is an affirmative defense to copyright infringement that allows limited use of copyrighted material without permission from, or payment to, the copyright holder.9 It serves as a crucial safeguard for freedom of expression and innovation, providing leeway for transformative uses of copyrighted works in certain contexts. The doctrine is codified in Section 107 of the United States Copyright Act, which sets out four factors for determining whether a particular use qualifies as fair use: (A) the purpose and character of the use, including whether it is transformative or commercial in nature, (B) the nature of the copyrighted work, (C) the amount and substantiality of the portion used in relation to the copyrighted work as a whole, and (D) the effect of the use upon the potential market for or value of the copyrighted work.10


Deepfakes present a unique challenge when applying the fair use doctrine due to their transformative nature and potential impact on the original works. First, regarding the transformative nature, deepfakes often involve the manipulation or synthesis of existing copyrighted material to create something new and different. If the purpose of the deepfake is to comment on, criticize, parody, or otherwise transform the original work, it may weigh in favor of a fair use defense. Second, while the fair use doctrine does not preclude commercial uses outright, courts may scrutinize deepfakes created for commercial purposes more closely. Non-commercial or educational uses of deepfakes may have a stronger argument for fair use, particularly if they serve a transformative or public-interest purpose. Third, the extent to which a deepfake utilizes copyrighted material is another factor to consider. Deepfakes that incorporate only a small portion of the original work or that significantly alter its context or meaning may have a stronger claim to fair use.


In navigating the evolving landscape of digital technology and intellectual property law, the intersection of deepfakes and the fair use doctrine brings to light complex considerations. While the transformative nature of deepfakes may lend weight to fair use arguments, their potential impact on original works and individuals’ rights raises significant challenges. As discussions around the legal implications of deepfakes unfold, recent legislative efforts, such as the No AI FRAUD Act, aim to establish federal protections for digital likenesses and voices.11


The No Artificial Intelligence Fake Replicas and Unauthorized Duplications Act (H.R. 6943) would establish a federal intellectual property right in an individual’s digital likeness and voice, one that extends beyond the lifespan of the person represented. An artist’s voice, image, or likeness is typically protected by state-specific “right of publicity” laws, which guard against unauthorized commercial exploitation but vary from state to state. The No AI FRAUD Act seeks to replace this patchwork with a consistent, nationwide baseline of protection.12


When introducing the bill, Rep. María Elvira Salazar (R-FL), its lead Republican sponsor, stated, “[t]his bill plugs a hole in the law and gives artists and U.S. citizens the power to protect their rights, their creative work, and their fundamental individuality online.”13 While the legislation is intended to address the unauthorized use and manipulation of digital content, it has sparked debate over its potential implications for free expression and constitutional rights.14 Critics argue that the bill may infringe on free expression by allowing rightsholders to control a wide range of digital content without clear exceptions for activities such as research or scholarly use. Constitutional concerns also arise from the bill’s balancing test between the public interest and the intellectual property interest, which critics contend is too vague to be applied predictably.


In conclusion, the emergence of deepfake technology has brought forth complex challenges at the intersection of intellectual property law, free expression, and technological innovation. As deepfakes continue to evolve and raise concerns about unauthorized use and manipulation of digital content, legal frameworks like the fair use doctrine and legislative efforts such as the No AI FRAUD Act aim to provide protections for creators and individuals. However, these initiatives must strike a delicate balance between safeguarding intellectual property rights and upholding principles of free expression and constitutional rights. Moving forward, it is essential to engage in constructive dialogue, leverage technological advancements, and carefully consider the implications of legislative measures to ensure a fair and equitable approach to addressing the multifaceted issues surrounding deepfake technology.


Ben Gross is a Second Year Law Student at the Benjamin N. Cardozo School of Law and a Staff Editor on the Cardozo Arts & Entertainment Law Journal. Ben is interested in Real Estate, Intellectual Property, and Corporate Law.

  1. See Jack Langa, Deepfakes, Real Consequences: Crafting Legislation to Combat Threats Posed by Deepfakes, 101 B.U. L. Rev. 761 (2021).
  2. Sara H. Jodka, Manipulating reality: the intersection of deepfakes and the law, Reuters (Feb. 1, 2024), https://www.reuters.com/legal/legalindustry/manipulating-reality-intersection-deepfakes-law-2024-02-01/.
  3. Oscar Schwartz, You thought fake news was bad? Deep fakes are where truth goes to die, The Guardian (Nov. 12, 2018), https://www.theguardian.com/technology/2018/nov/12/deep-fakes-fake-news-truth [https://perma.cc/M8VX-ANR6].
  4. Regina Mihindukulasuriya, Why the Manoj Tiwari deepfakes should have India deeply worried, The Print (Feb. 28, 2020), https://theprint.in/tech/why-the-manoj-tiwari-deepfakes-should-have-india-deeplyworried/372389/ [https://perma.cc/6J6A-7RGA].
  5. Solcyré Burga, How a New Bill Could Protect Against Deepfakes, Time (Jan. 31, 2024), https://time.com/6590711/deepfake-protection-federal-bill/ [https://perma.cc/NND5-E3T6].
  6. See generally 17 U.S.C. § 106.
  7. Katrina Geddes, Ocularcentrism and Deepfakes: Should Seeing be Believing?, 31 Fordham Intell. Prop. Media & Ent. L.J. 1042, 1046 (2021).
  8. Id.
  9. 3 Business Torts § 29.13 (2024).
  10. 17 U.S.C. § 107.
  11. Kristin Robinson, House Lawmakers Unveil No AI FRAUD Act in Push for Federal Protections for Voice, Likeness, Billboard (Jan. 10, 2024), https://www.billboard.com/business/legal/no-ai-fraud-act-congress-federal-law-explained-1235578930/ [https://perma.cc/8HP6-FUBV].
  12. Id.
  13. Congresswoman María Elvira Salazar, SALAZAR INTRODUCES THE NO AI FRAUD ACT (Jan. 10, 2024), https://salazar.house.gov/media/press-releases/salazar-introduces-no-ai-fraud-act [https://perma.cc/AZF8-RAFZ].
  14. Corynne McSherry, The No AI Fraud Act Creates Way More Problems Than it Solves, EFF (Jan. 19, 2024), https://www.eff.org/deeplinks/2024/01/no-ai-fraud-act-creates-way-more-problems-it-solves [https://perma.cc/X25V-HN4E].