Closing the Deepfake Loophole in AI Child Pornography: Tech Companies and Users Must Be Held Responsible for AI-Driven Child Exploitation

By Heidi Goldsmith | Senior Associate

Content Warning: This article discusses cases involving sexual exploitation and harm to minors and vulnerable individuals in the context of AI litigation.

In Brief: AI is supercharging the production of Child Sexual Abuse Material (CSAM)[1] while the legal system lags behind.[2] It is now disturbingly easy, even for those with no technical skill, to generate explicit deepfakes using “nudify” sites[3] and image-based AI tools.[4] Despite a near-unanimous call for a legislative response from state attorneys general, the legal landscape remains fractured and powerful defenses remain available to tech platforms. We need clear national laws criminalizing AI-generated CSAM, as well as civil remedies that empower survivors.

* * *

Artificial intelligence enables remarkable creativity, but it also presents serious risks, especially for children. One of the gravest is the misuse of AI to produce CSAM: any visual or audio depiction of sexually explicit conduct involving a minor, whether real, manipulated, or wholly synthetic.[5] Under U.S. federal law, the production, distribution, or possession of CSAM is illegal.[6] But many of the statutes addressing child sexual exploitation were written long before AI tools existed, and courts are now grappling with how to apply them to a rapidly changing technological reality.

The Threat of AI-Driven Child Exploitation

This isn’t a hypothetical concern. AI tools can now generate hyper-realistic deepfakes, clone voices, and simulate sexual scenes in astonishing detail. Trained on real photographs of children, both abuse victims and children who have never been victimized, these tools can generate new images and videos of children who do not exist but who may resemble actual children.[7] While early fears focused on teens misusing AI to harass or humiliate classmates, the far graver threat comes from adults, predators who use these tools to fabricate abusive images and videos involving children. Even where a platform’s terms of use prohibit the creation of explicit or harmful content involving minors, including the generation of synthetic CSAM, enforcement is extremely difficult.

In one harrowing example, a Wisconsin man, Steven Anderegg, was arrested by the FBI and charged with producing more than 13,000 AI-generated CSAM images using the Stable Diffusion model, many of which were then distributed online.[8] This was not an isolated case.[9] Similar investigations are ongoing around the country, and they underscore a disturbing truth: without legal intervention, generative AI will become a powerful enabler of abuse.

The Attorneys General Letter

On September 5, 2023, a bipartisan coalition of 54 Attorneys General from U.S. states and territories sent a powerful letter to Congress urging the regulation of AI-generated CSAM. They warned that generative AI enables child sexual abuse content to be created “rapidly” and often in “an unrestricted and unpoliced way,” opening “a new frontier for abuse” that makes it harder to prosecute those who exploit children.[10]

Their message was clear: voluntary content moderation isn’t enough, and existing enforcement tools are ill-suited to address AI-driven harms. The letter called on lawmakers to pass new statutes criminalizing synthetic CSAM and to equip law enforcement with tools to investigate and prosecute offenders. It also emphasized that civil enforcement must evolve in tandem, allowing survivors to pursue justice in civil courts.

It’s rare to see the AGs of virtually every U.S. jurisdiction speak with one voice. Their call to action signaled a near-universal recognition that in the AI age, we need new laws and strategies to combat CSAM.

The Statutory Landscape for AI-Generated CSAM

Momentum is finally building at both the U.S. federal and state levels to confront the rising threat of AI-generated CSAM. In July 2024, the U.S. Senate unanimously passed the DEFIANCE Act, a bipartisan bill that, if enacted, would create a powerful civil remedy for victims of digitally forged intimate images, including AI-generated CSAM and deepfake pornography.[11] The legislation targets anyone who knowingly creates, distributes, solicits, or possesses such material, which is broadly defined to include content that appears real even if synthetically generated. Survivors would be empowered to sue for actual damages, including the profits earned from their exploitation,[12] with a 10-year statute of limitations that begins only once the victim turns 18 or becomes aware of the harm. The bill now awaits action in the House.[13]

The U.S. is not the first country to propose a legislative response to the threat of AI-generated CSAM. In the United Kingdom, pending legislation would criminalize the creation, possession, and distribution of AI-generated child abuse images, even when no real child is depicted, and would outlaw so-called “paedophile manuals” designed to instruct abusers in how to exploit these tools.[14] These proposals appropriately treat AI-generated exploitation as a serious crime, with the goal of shutting it down before it spreads.

Meanwhile, state legislatures are not waiting for a federal statute. By mid-2025, at least 45 states had passed laws specifically targeting AI-generated or digitally altered CSAM.[15] These laws update statutory definitions to explicitly criminalize the possession, creation, or distribution of synthetic abuse material that depicts minors. States like Alabama, California, Connecticut, and Texas have led the charge with comprehensive statutes designed to close the so-called “AI loophole.”[16] In New York, Senate Bill S3202 seeks to expand the state’s penal code to prohibit sexually explicit depictions of children created or altered using AI or other digital tools.[17] That bill remains pending before the legislature as of September 2025.

Despite this momentum, the legal landscape remains fractured. Some jurisdictions lag behind, and until the DEFIANCE Act passes the House and is signed into law, there is no unified federal statute addressing AI-generated CSAM. Existing laws (18 U.S.C. §§ 2251–2256) criminalize visual depictions of child sexual abuse.[18] Their application to AI-produced images that appear to (but do not actually) involve real children, however, has proven controversial,[19] and courts have varied in their interpretations, creating uncertainty.[20]

Civil Actions and Defenses

As U.S. prosecutors contend with the limits of criminal enforcement, civil litigators have a critical role to play in shaping how the law responds to AI-driven abuse. Indeed, civil litigation is a vital tool in the effort to combat CSAM. Survivors and their families can seek redress under a wide array of legal theories:

Product liability and design defect claims, where developers release AI tools that are foreseeably misused to create CSAM and fail to include adequate safeguards or warnings at launch.

Negligence and failure to warn claims, where companies knew or should have known that their technology would be exploited to generate child abuse content but failed to take reasonable precautions to prevent it.

Consumer protection and unfair practices litigation under state fraud statutes, where platforms misrepresented the safety of their tools or failed to disclose known risks of abuse involving minors.

State-law privacy claims, including invasion of privacy and right of publicity, when a child’s name, likeness, or biometric data is used in synthetic CSAM or deepfake pornography.

Intentional or negligent infliction of emotional distress, where companies release tools that cause severe foreseeable harm to children or families through the creation or spread of synthetic exploitation.

Failure to implement safety-by-design, asserted as a negligence or public nuisance theory under state law, when developers launch AI tools without reasonable safeguards despite foreseeable risk of misuse involving minors.

Civil claims under 18 U.S.C. § 2255, which allows survivors of federally prohibited child exploitation to sue for actual damages, punitive damages, and attorney’s fees—even where the abuse is AI-generated, if it causes personal injury and stems from criminal conduct under federal CSAM statutes.

Trafficking Victims Protection Reauthorization Act (TVPRA) claims under 18 U.S.C. § 1595, where companies, platforms, or individuals knowingly benefit, financially or otherwise, from ventures that involve sex trafficking or child exploitation, including those that use AI to generate or disseminate CSAM.

Civil conspiracy or aiding-and-abetting liability under common-law theories, where a defendant actively facilitates or materially supports the creation or distribution of CSAM through AI systems.

But civil litigation in this space must also contend with powerful federal defenses that tech companies are likely to invoke, most notably Section 230 of the Communications Decency Act[21] and the First Amendment. These protections have traditionally shielded platforms from liability for user-generated content, but their application to AI-generated CSAM is far from settled.[22] Plaintiffs may argue that when users create CSAM with AI, the platforms generating it are no longer passive hosts but active facilitators, especially when their models are trained on illicit or unlicensed datasets. And courts have held that CSAM, even when virtual, is not protected speech if it depicts real minors or causes real-world harm.[23]

What’s needed now are statutory carveouts that impose strict liability for the creation or distribution of AI-generated CSAM, regardless of the platform’s intent or editorial involvement. Just as the First Amendment draws a bright line around child pornography, Section 230 should not extend to synthetically generated depictions that simulate real abuse. The key point is that this content is not “information provided by another information content provider” within the meaning of § 230(c)(1). Even if a user supplies a prompt, the resulting image originates from the product’s own algorithms and training data. Because the illegal material is generated by the system itself, rather than published from a user’s upload, traditional § 230 protection should not apply, and platforms should bear responsibility for its creation and distribution.

That line must hold even when no specific, identifiable child appears in the image. These depictions still cause harm: they normalize abuse, invite predation, and retraumatize survivors who see their experiences digitally reimagined.[24]

Brithem’s Track Record in Combating Tech-Facilitated Abuse

Brithem is uniquely positioned to meet the legal challenges posed by AI-generated CSAM. Founding partners Michael Bowe and Lauren Tabaksblat have long stood at the forefront of high-impact litigation holding tech companies accountable for sexually exploitative material. They currently represent the plaintiffs in a landmark case against Pornhub and its parent company, MindGeek (now Aylo), alleging that the platform knowingly profited from videos uploaded without the consent of those depicted, including minors.[25]

This pioneering litigation has exposed systemic failures in consent verification and content moderation. Mike and Lauren’s work on it exemplifies Brithem’s core mission: using litigation as a lever for change in cases where vulnerable people are harmed by unregulated technology.

Now, as artificial intelligence enables a new and insidious form of abuse, Brithem’s experience in litigating consent-based harms, privacy violations, and digital exploitation will be critical as the legal system adapts to the rise of AI-generated CSAM.

As the AI-CSAM crisis unfolds, Brithem remains committed to the fight for accountability—ensuring that when powerful technologies are weaponized, the law responds with force and humanity.

Where We Go From Here

AI’s ability to simulate reality is evolving faster than the legal system. We need legislation that keeps pace—with clear, national standards criminalizing AI-generated CSAM—and civil remedies that empower victims.

Lawyers, for their part, must be prepared to bring impact litigation—not only to secure justice for individual survivors but also to reshape the incentives of an industry that too often places innovation above safety. The September 2023 AG letter was a wake-up call. Now it’s time for the legal system—civil and criminal, state and federal—to rise to the challenge.

Contact Us

No one should suffer in silence. If you, a loved one, or someone you know has been harmed by CSAM—whether traditional or AI-generated—please know that help is available. Our team at Brithem is here to listen, support, and pursue justice on your behalf. Reach out to us confidentially to explore your options.

The information on this site is provided for general informational purposes only and does not constitute legal advice. Contacting Brithem LLP through this page does not create an attorney–client relationship. An attorney–client relationship is formed only after a written engagement agreement is signed.


[1] The term “child pornography” is currently used in federal statutes and is defined as “any visual depiction of sexually explicit conduct involving a person less than 18 years old.” Department of Justice, Child Sexual Abuse Material, at 1, https://www.justice.gov/d9/2023-06/child_sexual_abuse_material_2.pdf. According to the United States Department of Justice (the “DOJ”), although the phrase “child pornography” still appears in U.S. federal law, “‘child sexual abuse material’ is preferred, as it better reflects the abuse that is depicted in the images and videos and the resulting trauma to the child.” Id.

[2] See DOJ, CSAM, at 5; Cecilia Kang, A.I.-Generated Images of Child Sexual Abuse Are Flooding the Internet, New York Times (July 10, 2025), https://www.nytimes.com/2025/07/10/technology/ai-csam-child-sexual-abuse.html.

[3] Anderson Cooper, Schools face a new threat: “nudify” sites that use AI to create realistic, revealing images of classmates, CBS (Dec. 15, 2024), https://www.cbsnews.com/news/schools-face-new-threat-nudify-sites-use-ai-create-realistic-revealing-images-60-minutes-transcript/.

[4] See DOJ, CSAM, at 7-8; Letter of 54 State & Territorial Attorneys General to Congress on AI and CSAM (“Letter of AGs”), at 3 (Sept. 5, 2023), https://www.naag.org/wp-content/uploads/2023/09/54-State-AGs-Urge-Study-of-AI-and-Harmful-Impacts-on-Children.pdf.

[5] “Underlying every sexually explicit image or video of a child is abuse, rape, molestation, and/or exploitation.” DOJ, CSAM, at 1. “The production of CSAM creates a permanent record of the child’s victimization.” Id.

[6] 18 U.S.C. § 2256(8) (definition of “child pornography”), https://www.law.cornell.edu/uscode/text/18/2256 (last accessed Sept. 14, 2025); see also 18 U.S.C. §§ 2251–2252A (criminalizing the production, distribution, and possession of such material).

[7] See Letter of AGs, at 2-3.

[8] DOJ Press Release, Man Arrested for Producing, Distributing, and Possessing AI-Generated Images of Minors Engaged in Sexually Explicit Conduct (2024), https://www.justice.gov/archives/opa/pr/man-arrested-producing-distributing-and-possessing-ai-generated-images-minors-engaged (last accessed Sept. 14, 2025).

[9] In another disturbing case, United States v. Arlan Harrell, et al. (C.D. Cal.), federal prosecutors secured convictions against three men—Arlan Harrell, John Brinson, and Moises Martinez—who met through Tor-based child exploitation forums, including one focused on victims under five years old. See United States v. Arlan Wesley Harrell, et al., Case No. 2:17-cr-164-AB (C.D. Cal.), sentencing Feb. 18, 2022. The men coordinated in-person meetings in California to sexually abuse children and produce CSAM. Id. Martinez was sentenced to 55 years in prison; Harrell and Brinson received life sentences after pleading guilty to engaging in a child exploitation enterprise and multiple counts of production of child pornography. Id.

[10] Letter of AGs, at 2-3.

[11] DEFIANCE Act of 2024 (S. 3696, 118th Cong.), passed the Senate by unanimous consent (July 2024), https://www.congress.gov/bill/118th-congress/senate-bill/3696 (last accessed Sept. 14, 2025).

[12] DEFIANCE Act of 2024, S. 3696, 118th Cong. § 3(b)(3)(C)(ii) (as passed by Senate, July 25, 2024) (“actual damages sustained by the individual, [] shall include any profits of the defendant that are attributable to the conduct at issue in the claim that are not otherwise taken into account in computing the actual damages.”).

[13] The Senate unanimously passed the 2024 version of the bill (S. 3696), and it was sent to the House, where a companion bill (H.R. 7569) was referred to the House Judiciary Committee. A new version (H.R. 3562) was introduced in the House in May 2025. DEFIANCE Act of 2025 (H.R. 3562, 119th Cong.), https://www.congress.gov/bill/119th-congress/house-bill/3562 (last accessed Sept. 14, 2025).

[14] UK Parliament, Crime and Policing Bill 2024–26 — Bill overview and status, https://bills.parliament.uk/bills/3919 (last accessed Sept. 14, 2025).

[15] Enough Abuse, “State Laws Criminalizing AI-generated or Computer-Edited Child Sexual Abuse Material (CSAM)” (Aug. 2025), https://enoughabuse.org/get-vocal/laws-by-state/state-laws-criminalizing-ai-generated-or-computer-edited-child-sexual-abuse-materialcsam/ (last accessed Sept. 14, 2025).

[16] See id.

[17] New York S.3202 (2025) – offenses involving sexually explicit digital alterations (pending as of Sept. 2025), https://www.nysenate.gov/legislation/bills/2025/S3202.

[18] See DOJ, Citizen’s Guide to U.S. Federal Law on Child Pornography, https://www.justice.gov/criminal/criminal-ceos/citizens-guide-us-federal-law-child-pornography (last accessed Sept. 14, 2025).

[19] See, e.g., Ashcroft v. Free Speech Coalition, 535 U.S. 234 (2002). Ashcroft struck down two provisions of the Child Pornography Prevention Act of 1996 (CPPA), codified at 18 U.S.C. § 2256(8)(B) & (D), because they criminalized “virtual child pornography” (images that appear to be of minors but do not use actual minors) and speech that conveys the impression that minors are involved, even if none were. Both provisions were found overbroad under the First Amendment. The ruling left open the possibility that more narrowly tailored prohibitions (e.g., synthetic images indistinguishable from real children, or morphed images that use real children’s faces) might survive constitutional scrutiny.

[20] Compare Ashcroft, 535 U.S. 234, with United States v. Mecham, 950 F.3d 257 (5th Cir. 2020) (holding that morphed child pornography, i.e., a real child’s face combined with adult sexual conduct, does not enjoy First Amendment protection), and Shoemaker v. Taylor, 730 F.3d 778 (9th Cir. 2013) (affirming the district court’s denial of a 28 U.S.C. § 2254 habeas corpus petition challenging misdemeanor convictions for multiple counts of possessing and duplicating child pornography, and holding that “there is no clearly established Supreme Court law holding that images of real children morphed to look like child pornography constitute protected speech”).

[21] As currently written and interpreted by courts, Section 230 gives online providers immunity from civil actions and from state and local criminal enforcement for material on their platforms created by a third party. 47 U.S.C. § 230. The sole exception to this blanket immunity is for conduct related to sex trafficking and the intentional facilitation of prostitution. 47 U.S.C. § 230(e)(5).

[22] See Congressional Research Service, Section 230 Immunity and Generative Artificial Intelligence, CRS LSB11097, at 3-4 (2023).

[23] See note 20, supra.

[24] Letter of AGs, at 3.

[25] Brown Rudnick, “Launches Landmark Case Against MindGeek and Visa” (June 17, 2021), https://brownrudnick.com/client_news/brown-rudnick-launches-landmark-case-against-human-trafficking-and-child-pornography-in-the-online-porn-industry/; Reuters, “Lawsuits claim Pornhub, Visa and hedge funds profited from child abuse” (June 14, 2024), https://www.reuters.com/legal/transactional/lawsuits-claim-pornhub-visa-hedge-funds-profited-child-abuse-2024-06-14/.
