Anthropic has confirmed that part of the internal source code for its popular coding assistant, Claude Code, was accidentally leaked, in what the company described as a human error rather than a security breach.
The incident, which surfaced on Tuesday, has drawn significant attention across the developer community after a post on X sharing access to the code garnered more than 21 million views within hours.
While the exposure has raised concerns about competitive risks, the company has sought to reassure users that no sensitive information was compromised.
“No sensitive customer data or credentials were involved or exposed,” an Anthropic spokesperson said in a statement reported by CNBC. “This was a release packaging issue caused by human error, not a security breach. We’re rolling out measures to prevent this from happening again.”
Leak raises competitive concerns
Although Anthropic has downplayed the security implications, the leak could still prove consequential. Source code disclosures can provide valuable insight into how a system is designed, potentially giving rival firms and independent developers a closer look at the architecture behind Claude Code, one of the company’s fastest-growing AI tools.
The assistant has gained traction for its coding capabilities, making the inadvertent exposure particularly sensitive from a competitive standpoint. Analysts note that even partial access to internal systems can reveal optimisation strategies, workflows, or proprietary approaches that companies typically guard closely.
The timing of the leak has also intensified scrutiny, as it comes amid fierce competition in the artificial intelligence sector, where companies are racing to build increasingly capable coding assistants and enterprise tools.
Second incident in a week
The source code leak marks Anthropic’s second high-profile data mishap in less than a week. According to a report by Fortune, unpublished documents detailing the company’s upcoming AI model were recently discovered in a publicly accessible data cache.
The exposed material reportedly revealed early information about a next-generation system referred to as “Claude Mythos”, internally codenamed “Capybara”. The documents suggest that the model represents a significant upgrade over Anthropic’s current offerings, surpassing its top-tier Opus models in both scale and capability.
Anthropic currently categorises its models into three tiers: Opus, Sonnet and Haiku. However, the leaked documents indicate that the new system would sit above Opus, positioning it as the company’s most advanced and resource-intensive model to date.
The draft materials reviewed by Fortune also highlight potential concerns around misuse, including the system’s possible application in cyberattacks. The company has reportedly completed training and is proceeding with limited external testing, signalling a cautious rollout strategy.
In addition to model-related information, the data cache reportedly exposed details about an invite-only CEO summit in Europe, part of Anthropic’s broader enterprise push. Nearly 3,000 unpublished assets were said to have been accessible before the issue was resolved.
Anthropic has acknowledged that incident as well, attributing it to a configuration error in its content management system.
Together, the two episodes underscore the operational risks facing fast-growing AI firms as they scale infrastructure and accelerate product development. While neither incident involved confirmed breaches of user data, they highlight the challenges of maintaining strict internal controls in a rapidly evolving industry.