The US Congress has stepped up pressure on major tech companies, including Apple, Meta, Google, and Microsoft, to address the growing problem of deepfake non-consensual intimate images. Lawmakers sent letters to top executives, including Apple’s Tim Cook, highlighting concerns over the proliferation of these harmful images facilitated by dual-use apps.
These letters follow reports of apps that let users swap faces into nude or pornographic content and that were actively promoted on social media platforms. According to 404 Media, Congress is now asking these tech giants to explain how they plan to stop such content from being created and distributed on their platforms.
Apple in the spotlight
Apple’s App Store policies were directly called out in Congress’s letter to Tim Cook. Despite the company’s App Review Guidelines, some apps with the potential for misuse have slipped through the cracks. The letter questions Apple’s ability to enforce its own standards and cites the TAKE IT DOWN Act, a legislative effort aimed at combating non-consensual intimate images.
Congress asked Apple specific questions about its plans to address deepfake pornography, including:
How Apple plans to tackle the spread of deepfakes, and the timeline for implementing these measures.
Who is involved in developing these strategies.
The process for handling user reports and ensuring timely resolutions.
Criteria for removing problematic apps from the App Store.
Remedies for victims whose images have been misused.
These questions underline the growing demand for accountability from Apple, given its control over the App Store. Critics argue that as the gatekeeper, Apple bears responsibility when harmful apps manage to bypass its safeguards.
Deepfake tools slip through the cracks
Previous reports revealed apps that were used to generate non-consensual deepfake images, including some that sourced videos from adult content platforms like Pornhub for face-swapping. While Apple removed these apps from the App Store, the incidents exposed glaring weaknesses in its review process.
Apple has taken some steps to curb the misuse of its platform, such as blocking “Sign in with Apple” on deepfake websites and ensuring its AI tools cannot generate explicit content. However, these measures are seen as the bare minimum, with critics calling for stricter oversight and better safeguards against dual-use apps.
Congress pushes for stricter controls
The letter to Apple, along with similar ones to other tech companies, emphasises the need for heightened precautions, particularly for apps offering AI-based image and video manipulation. Congress has urged tech companies to ensure such tools undergo rigorous scrutiny during review to prevent misuse.
This scrutiny reflects a broader debate about the role of gatekeepers like Apple. While the company’s control over its app ecosystem has been a point of contention, instances like these bolster the argument for stricter oversight of platforms that enable harmful behaviour. With public and legislative pressure mounting, Apple and other tech giants face a critical challenge in curbing the misuse of deepfake technology.