Firstpost
OpenAI's o1 model, aka Strawberry, can help create bioweapons and carries a 'medium risk', AI giant admits
FP Staff • September 16, 2024, 12:38:14 IST

According to OpenAI’s system card, the new o1 models have been rated a “medium risk” for chemical, biological, radiological, and nuclear (CBRN) weapons — the highest risk level the company has ever attributed to its AI technology

OpenAI’s chief technology officer, Mira Murati, emphasised that the company is proceeding cautiously in releasing the o1 model to the public. While it will be available to ChatGPT’s paid subscribers and developers via an API, rigorous testing has been conducted. Image Credit: Composite image

OpenAI has acknowledged that its latest artificial intelligence model, known as o1 or “Strawberry,” poses an increased risk of misuse, particularly in the creation of biological weapons.

The company stated that these models, launched recently, have significantly enhanced capabilities, which inadvertently heightens the potential for dangerous applications in the wrong hands. The models boast improvements in reasoning, solving complex mathematical problems, and answering scientific research questions, marking a step forward in the development of artificial general intelligence (AGI).


According to the system card, the medium CBRN rating reflects the models' enhanced reasoning capabilities rather than any demonstrated weapons output, but it is the first time OpenAI has assigned that level to one of its releases.


This means the models could help experts develop bioweapons more effectively, raising ethical and safety concerns. The AI's advanced reasoning abilities, while a breakthrough in the field, are considered a potential threat if exploited by bad actors for malicious purposes.

Experts, such as Professor Yoshua Bengio, one of the leading voices in AI research, have highlighted the importance of urgent regulation in light of these risks. A proposed bill in California, SB 1047, aims to address such concerns by requiring AI developers to take steps to minimise the risk of their models being used to create bioweapons.

Bengio and others have stressed that as AI models evolve closer to AGI, the associated risks will only increase unless strong safety measures are implemented.

The development of these advanced AI systems is part of a broader competition among tech giants such as Google, Meta, and Anthropic, all vying to create sophisticated AI that can act as agents, assisting humans in various tasks. These AI agents are viewed as significant revenue generators for companies, which face high costs in training and operating such models.


OpenAI’s chief technology officer, Mira Murati, emphasised that the company is proceeding cautiously in releasing the o1 model to the public. While it will be available to ChatGPT’s paid subscribers and developers via an API, rigorous testing has been conducted by “red-teamers,” experts tasked with identifying potential vulnerabilities in the model.


Murati noted that the latest model has demonstrated better safety performance compared to earlier versions. Despite the risks, OpenAI has deemed the model safe to deploy under its policies, assigning it a medium risk rating within its cautious framework.
