Victims of explicit deepfakes will soon have stronger legal protections in the United States, thanks to new federal legislation that targets the growing threat of non-consensual, AI-generated sexual imagery. The recently signed Take It Down Act criminalizes the distribution of explicit images—whether real or computer-generated—without the subject’s consent. This law represents a landmark step in combating the spread of deepfake pornography and strengthening online safety, particularly as artificial intelligence continues to evolve and challenge existing legal frameworks.
Signed into law by President Donald Trump during a ceremony at the White House on Monday, the Take It Down Act makes it a federal offense to share non-consensual, sexually explicit images online. The law requires tech platforms to remove flagged content within 48 hours of receiving notice from a victim, making them directly accountable for acting quickly to protect those affected. It applies not only to real or conventionally edited images but also to those created with AI tools that convincingly superimpose a person’s face onto another person’s nude or sexually explicit body.
The urgency for such legislation has grown over the past few years. High-profile figures like Taylor Swift and Congresswoman Alexandria Ocasio-Cortez, as well as countless private citizens, including teenage girls, have been victimized by deepfake technology. These fabricated images often circulate widely before they can be removed, causing immense emotional and reputational damage to their subjects.
Until now, the legal landscape offered inconsistent protections. While federal law already criminalized the creation and distribution of AI-generated explicit content involving minors, no comprehensive nationwide standard existed for adult victims. Legal recourse depended on a patchwork of state laws, many of which either did not explicitly address AI-generated content or required proof of malicious intent.
The Take It Down Act changes that by establishing clear legal grounds for adult victims to pursue justice. It not only enhances protections for those affected by “revenge porn” but also sends a strong signal that non-consensual sexual content, regardless of how it is made, will not be tolerated. Law enforcement agencies now have clearer authority to investigate and prosecute those who publish or threaten to publish such content.
Advocacy groups and civil society organizations have welcomed the legislation as a long-overdue step in modernizing digital privacy laws. “AI is new to a lot of us and so I think we’re still figuring out what is helpful to society, what is harmful to society,” said Ilana Beller, organizing manager at Public Citizen, one of the advocacy groups supporting the law. “But non-consensual intimate deepfakes are such a clear harm with no benefit.”
The passage of the Take It Down Act also marks one of the first major federal efforts to regulate the societal impacts of artificial intelligence. As generative AI tools become more accessible, powerful, and sophisticated, lawmakers are being forced to confront the darker implications of the technology—including its use in harassment, misinformation, and exploitation.
Legal experts suggest this law could become a foundational piece of broader AI regulation in the U.S. While civil liberties and digital rights organizations have expressed cautious optimism, they also emphasize the need for careful implementation to ensure tech platforms respond promptly and responsibly without stifling legitimate content.
For now, the Take It Down Act offers victims an important lifeline. It affirms that the federal government recognizes the trauma caused by explicit deepfakes and is willing to take meaningful action to curtail their spread. It’s a strong message: in the face of technology-fueled abuse, privacy and dignity still matter.