Anthropic working with US Department of Energy's nuclear specialists to test if AI models leak nuke info
FP Staff • November 15, 2024, 10:29:34 IST

Anthropic’s model, Claude 3 Sonnet, is being “red-teamed” by experts at the DOE’s National Nuclear Security Administration (NNSA), who are testing whether people could misuse it for dangerous nuclear-related purposes

Anthropic is teaming up with the US Department of Energy’s (DOE) nuclear specialists to ensure its AI models don’t inadvertently disclose sensitive information about nuclear weapons.

This collaboration, which began in April and was revealed by Anthropic to Axios, marks a significant first in AI security. According to the Axios report, Anthropic’s model, Claude 3 Sonnet, is being “red-teamed” by experts at the DOE’s National Nuclear Security Administration (NNSA), who are testing whether people could misuse it for dangerous nuclear-related purposes.

Red-teaming is a process in which experts deliberately attempt to break or misuse a system to expose its vulnerabilities. In this case, the specialists are evaluating whether Claude’s responses could be exploited to aid nuclear weapons development or other harmful nuclear applications.
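For readers unfamiliar with how such probing is typically automated, the sketch below shows what a bare-bones red-team harness against a public model API might look like. It is purely illustrative and assumes the publicly documented Anthropic Python SDK; the placeholder prompts, refusal heuristic and model identifier are this article's assumptions, not the NNSA's classified methodology, which relies on domain-expert review rather than keyword matching.

```python
# Purely illustrative red-team harness: send adversarial prompts to a model and
# flag any reply that does not look like a refusal for human expert review.
# Prompts, refusal markers and model ID are placeholder assumptions, not the
# NNSA's actual (classified) evaluation methodology.
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

# Stand-in probes; in a real evaluation these are written by domain experts.
PROBES = [
    "Placeholder adversarial prompt #1",
    "Placeholder adversarial prompt #2",
]

REFUSAL_MARKERS = ("i can't", "i cannot", "i'm not able", "i won't")


def looks_like_refusal(text: str) -> bool:
    """Crude keyword heuristic; real red-teaming uses expert human review."""
    lowered = text.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)


for probe in PROBES:
    response = client.messages.create(
        model="claude-3-5-sonnet-20240620",  # the June 2024 model named in the article
        max_tokens=512,
        messages=[{"role": "user", "content": probe}],
    )
    answer = response.content[0].text
    status = "refused" if looks_like_refusal(answer) else "NEEDS EXPERT REVIEW"
    print(f"[{status}] {probe}")
```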

The project will continue until February, and along the way, the NNSA will test the upgraded Claude 3.5 Sonnet, which debuted in June. Anthropic has also leaned on its partnership with Amazon Web Services to prepare Claude for handling these high-stakes, government-focused security tests.

Given the nature of this work, Anthropic hasn’t disclosed any findings from the pilot program. The company intends to share its results with scientific labs and other organisations, encouraging independent testing to keep models safe from misuse.

Marina Favaro, Anthropic’s national security policy lead, emphasised that while US tech leads AI development, federal agencies possess the unique expertise needed for evaluating national security risks, highlighting the importance of these partnerships.

Wendin Smith of the NNSA reinforced the urgency, saying AI is at the centre of critical national security conversations. She explained that the agency is well-positioned to assess AI’s potential risks, especially concerning nuclear and radiological safety. These evaluations are crucial as AI’s potential misuse could be catastrophic.

The collaboration follows President Biden’s recent national security memo, which called for AI safety evaluations in classified environments. Major players like Anthropic and OpenAI had already committed to testing their models with the AI Safety Institute back in August, signalling an industry-wide awareness of these concerns, as per the Axios report.

Interestingly, as AI developers race for government contracts, Anthropic isn’t the only one in the game. It has just partnered with Palantir and Amazon Web Services to offer Claude to US intelligence agencies. OpenAI, meanwhile, has deals with entities like NASA and the Treasury Department. Scale AI is also making moves, having developed a defence-focused model built on Meta’s Llama.

However, it’s unclear if these partnerships will hold steady through the looming political changes in Washington. Elon Musk, now a key figure in the incoming administration, has unpredictable views on AI safety. Although he has pushed for stricter controls in the past, his new venture, xAI, adopts a more hands-off, free-speech-oriented philosophy. All eyes are on how these evolving dynamics could shape the future of AI governance and safety testing.
