US DoJ arrests a paedophile for creating AI-generated child abuse in a first-of-its-kind case

FP Staff May 22, 2024, 16:49:33 IST

The accused, a 42-year-old software engineer from Wisconsin, used a modified version of the Stable Diffusion AI image generator to produce the images. He then reportedly used these images to attempt to lure an underage boy into inappropriate situations

The US DOJ aims to establish that AI generated sexual content is still illegal, even if no real children were involved in its creation. Image Credit: Pexels

The US Department of Justice (DOJ) made headlines last week by arresting a Wisconsin man for generating and distributing AI-generated child sexual abuse material (CSAM). This landmark case is the first federal prosecution involving CSAM created with AI, and it could set a significant judicial precedent.

The DOJ aims to establish that such exploitative content is still illegal, even if no real children were involved in its creation. “Put simply, CSAM generated by AI is still CSAM,” stated Deputy Attorney General Lisa Monaco in a press release.


The accused, 42-year-old software engineer Steven Anderegg from Holmen, WI, allegedly used a modified version of the open-source AI image generator, Stable Diffusion, to produce the images. He then reportedly used these images to attempt to lure an underage boy into sexual situations. This aspect of the case is expected to be crucial in the forthcoming trial, where Anderegg faces four counts of producing, distributing, and possessing obscene visual depictions of minors, and transferring obscene material to a minor under 16.

The DOJ’s charges detail that Anderegg created images depicting “nude or partially clothed minors lasciviously displaying or touching their genitals or engaging in sexual intercourse with men.” He supposedly used specific prompts, including negative prompts (instructions for the AI model on what to avoid producing), to generate these explicit images.

While cloud-based image generators like Midjourney and DALL-E 3 have safeguards to prevent such misuse, Anderegg allegedly used Stable Diffusion 1.5, an older variant with fewer restrictions that was reportedly produced by Runway ML. Stability AI confirmed this version’s origin to Ars Technica.

The DOJ also revealed that Anderegg communicated online with a 15-year-old boy, explaining his use of the AI model to create the images. He allegedly sent the teen direct messages on Instagram, including several AI-generated images of “minors lasciviously displaying their genitals.” Instagram reported these images to the National Center for Missing and Exploited Children (NCMEC), which then alerted law enforcement.

If convicted on all four counts, Anderegg could face between five and 70 years in prison. He is currently in federal custody, awaiting a hearing scheduled for May 22.

This case challenges the assumption that CSAM is illegal only when real children are exploited in its creation. Even though AI-generated CSAM involves no real human subjects, prosecutors argue it can normalize and encourage the production and distribution of such material, potentially fuelling further predatory behaviour. The DOJ’s stance aims to settle this question as AI technology continues to evolve and become more accessible.


“Technology may change, but our commitment to protecting children will not,” Deputy AG Monaco emphasized. “The Justice Department will aggressively pursue those who produce and distribute child sexual abuse material—no matter how that material was created. Put simply, CSAM generated by AI is still CSAM, and we will hold accountable those who exploit AI to create obscene, abusive, and increasingly photorealistic images of children.”

(With inputs from agencies)
