The House of Representatives has passed the Take It Down Act, the first major federal law tackling AI-induced harm. The legislation imposes stricter penalties for the distribution of non-consensual intimate imagery, sometimes called “revenge porn.” The bill now heads to President Donald Trump for his signature.
This bipartisan legislation establishes the creation and distribution of non-consensual deepfake pornography as criminal offences and legally obligates online platforms to remove such material within 48 hours of being notified.
The bill, introduced by Senator Ted Cruz, a Republican from Texas, and Senator Amy Klobuchar, a Democrat from Minnesota, gained the support of First Lady Melania Trump. The measure covers both real and artificial intelligence-generated imagery, and critics assert that its overly inclusive language could pave the way for censorship and contravene First Amendment rights.
What is the bill?
This legislation criminalises the “knowing publication” or threatened publication of private images without consent, extending to AI-created “deepfakes.” It also imposes a 48-hour removal requirement on websites and social media companies upon notification by a victim, along with a directive to delete duplicate content. Despite existing state-level bans on sexually explicit deepfakes and revenge porn, the Take It Down Act stands out as a rare instance of federal regulators intervening in the operations of internet companies.
Who are the supporters?
The Take It Down Act has received significant bipartisan support and advocacy, notably from Melania Trump, who lobbied on Capitol Hill in March, expressing her distress over the victimisation of teenagers, particularly girls, through the spread of such content. President Trump is anticipated to sign the bill into law. Cruz said the measure was inspired by Elliston Berry and her mother, who visited his office after Snapchat refused for nearly a year to remove an AI-generated “deepfake” of the then 14-year-old.
Meta, the company behind Facebook and Instagram, supports the Take It Down Act. Meta spokesman Andy Stone said last month that the sharing of intimate images without consent – whether real or AI-generated – can be deeply distressing, and that Meta has developed and actively backs various measures intended to prevent it.
The Information Technology and Innovation Foundation, a tech industry-supported think tank, said in a statement Monday that the bill’s passage “is an important step forward that will help people pursue justice when they are victims of non-consensual intimate imagery, including deepfake images generated using AI.”
“We must provide victims of online abuse with the legal protections they need when intimate images are shared without their consent, especially now that deepfakes are creating horrifying new opportunities for abuse,” Klobuchar said in a statement after the bill’s passage late Monday. “These images can ruin lives and reputations, but now that our bipartisan legislation is becoming law, victims will be able to have this material removed from social media platforms and law enforcement can hold perpetrators accountable."
What are the concerns?
Free speech advocates and digital rights groups say the bill is too broad and could lead to the censorship of legitimate images, including legal pornography and LGBTQ content, as well as speech by government critics.
“While the bill is meant to address a serious problem, good intentions alone are not enough to make good policy,” said the nonprofit Electronic Frontier Foundation, a digital rights advocacy group. “Lawmakers should be strengthening and enforcing existing legal protections for victims, rather than inventing new takedown regimes that are ripe for abuse.”
The takedown provision in the bill “applies to a much broader category of content — potentially any images involving intimate or sexual content” than the narrower definitions of non-consensual intimate imagery found elsewhere in the text, EFF said.
“The takedown provision also lacks critical safeguards against frivolous or bad-faith takedown requests. Services will rely on automated filters, which are infamously blunt tools,” EFF said. “They frequently flag legal content, from fair-use commentary to news reporting. The law’s tight time frame requires that apps and websites remove speech within 48 hours, rarely enough time to verify whether the speech is actually illegal.”
As a result, the group said online companies, especially smaller ones that lack the resources to wade through a lot of content, “will likely choose to avoid the onerous legal risk by simply depublishing the speech rather than even attempting to verify it.”
The measure, EFF said, also pressures platforms to “actively monitor speech, including speech that is presently encrypted” to address liability threats.
The Cyber Civil Rights Initiative, a nonprofit that helps victims of online crimes and abuse, said it has “serious reservations” about the bill. It called its takedown provision “unconstitutionally vague, unconstitutionally overbroad, and lacking adequate safeguards against misuse.”
For instance, the group said, platforms could be obligated to remove a journalist’s photographs of a topless protest on a public street, photos of a subway flasher distributed by law enforcement to locate the perpetrator, commercially produced sexually explicit content, or sexually explicit material that is consensual but falsely reported as being nonconsensual.
With inputs from AP