YouTube on Tuesday announced it is expanding its likeness detection technology, designed to identify AI-generated deepfakes, to a pilot group of government officials, political candidates and journalists. The tool allows users to detect unauthorized AI-generated content and request its removal if it violates the platform’s policies.
The technology was first introduced last year to around four million creators in the YouTube Partner Program after earlier testing.
Similar to Content ID system
Similar to YouTube’s existing Content ID system, which identifies copyright-protected material in uploaded videos, the likeness detection feature scans for AI-generated faces created using deepfake tools. Such technology can be used to spread misinformation by making public figures, including politicians and government officials, appear to say or do things they never actually did.
Through the new pilot program, YouTube said it aims to balance freedom of expression with the risks posed by AI systems capable of producing highly realistic depictions of public figures.
“This expansion is focused on protecting the integrity of public discourse,” said Leslie Miller, vice president of Government Affairs and Public Policy at YouTube, during a press briefing ahead of Tuesday’s launch.
Risk of AI impersonation
She also noted that the risk of AI impersonation is especially high for people in the civic space, adding that while the platform is introducing this new safeguard, it is also being cautious in how it is applied.
Miller said that not every detected match would automatically be removed upon request. Instead, YouTube would review each case under its existing privacy policy to determine whether the content qualifies as parody or political commentary, both of which are protected forms of free expression.
The company also said it is pushing for similar protections at the federal level by supporting the NO FAKES Act in Washington, which seeks to regulate the use of AI to create unauthorized replicas of a person’s voice or visual likeness.
To use the new tool, eligible participants in the pilot program must first verify their identity by submitting a selfie and a government-issued ID. After verification, they can create a profile, review detected matches, and request the removal of content they believe violates platform policies.
AI-generated videos will carry labels
YouTube said it may later expand the system to block uploads of violating content before they go live or allow individuals to monetize such videos, similar to how its Content ID system functions. The company did not disclose which officials or politicians are part of the initial testing group but said the goal is to make the technology widely available over time.
These AI-generated videos will carry labels, though their placement may vary. In some cases, the label appears in the video description, while content covering more sensitive topics displays the label directly on the video. This follows YouTube’s existing policy for AI-generated content.
“There’s a lot of content that’s produced with AI, but that distinction’s actually not material to the content itself,” said Amjad Hanif, YouTube’s vice president of Creator Products, explaining the labels’ placement. “It could be a cartoon that is generated with AI. And so I think there’s a judgment on whether it’s a category that maybe merits from a very visible disclaimer,” he said.