AI Seoul Summit: Google, OpenAI, others to add a ‘kill switch’ to AI, commit to certain safety standards

Mehul Reuben Das • May 22, 2024, 09:56:00 IST

One significant outcome of this summit was an agreement among attending AI companies to implement a “kill switch” policy: they would halt the development of their most advanced AI models if those models were deemed to have surpassed certain risk thresholds.

OpenAI's Sam Altman, xAI's Elon Musk, Google's Sundar Pichai and Microsoft's Satya Nadella. Composite image.

Representatives from 16 major AI companies, including Anthropic, Microsoft, and OpenAI, along with officials from 10 countries and the EU, met at the AI Seoul Summit in South Korea to set guidelines for responsible AI development.

AI technology has advanced rapidly, sparking both excitement and concern. While it offers immense opportunities, there are fears about its potential risks, including scenarios where AI might become uncontrollable. Recognizing these concerns, the world’s leading AI companies are voluntarily collaborating with governments to address these issues.


However, without strict legal measures, these conversations can only go so far.

One significant outcome of this summit was an agreement among attending AI companies to implement a “kill switch” policy. This policy would halt the development of their most advanced AI models if they were deemed to have surpassed certain risk thresholds.


However, the effectiveness of this policy is uncertain: it lacks legal enforcement, and the risk thresholds themselves have not been clearly defined. Moreover, AI companies that did not attend the summit, including competitors of the signatories, are not bound by the pledge.

The policy paper, signed by companies like Amazon, Google, and Samsung, stated, “In the extreme, organizations commit not to develop or deploy a model or system at all if mitigations cannot be applied to keep risks below the thresholds.”

This summit followed last October’s Bletchley Park AI Safety Summit, which was criticised for producing no actionable commitments: participants endorsed lofty ideals without concrete regulatory mandates, leading to accusations that the gathering was “worthy but toothless.”


An open letter from attendees highlighted the need for enforceable regulations rather than voluntary measures, arguing that historical experience shows regulatory mandates are more effective in mitigating risks.

The fear of AI surpassing human control, often referred to as the “Terminator scenario,” has been a recurring theme in discussions about AI’s future. This concept, popularized by the 1984 film “The Terminator,” encapsulates the anxiety that AI, if left unchecked, could become a formidable adversary to humanity.


While this remains a theoretical concern, the rapid progress in AI capabilities underscores the need for proactive governance.

UK Technology Secretary Michelle Donelan echoed these sentiments, emphasizing the need to harness AI’s potential while managing its risks. The acknowledgement of AI’s dual nature—immense opportunity paired with significant risk—drives the ongoing dialogue among policymakers and tech leaders.

AI companies themselves are acutely aware of these challenges. OpenAI CEO Sam Altman has warned about the risks associated with Artificial General Intelligence (AGI), which he describes as AI that exceeds human intelligence.

Altman acknowledges that while AGI holds tremendous potential, it also carries risks of misuse, accidents, and societal disruption. OpenAI advocates for a balanced approach where development continues but with careful oversight and societal collaboration to mitigate risks.

Despite these initiatives, global regulatory frameworks for AI remain fragmented and largely non-binding. The United Nations recently approved a policy framework to safeguard against AI risks, protect human rights, and monitor personal data usage, but this framework lacks enforceable power. Similarly, the Bletchley Declaration did not commit to tangible regulatory measures.


In response to these gaps, AI companies have started forming their own organizations to advocate for AI policy. For instance, the Frontier Model Forum, founded by Anthropic, Google, Microsoft, and OpenAI, and recently joined by Amazon and Meta, aims to advance the safety of frontier AI models. However, the forum has yet to propose concrete policies.

On the other hand, individual governments have made more substantial progress. President Biden’s executive order on AI safety, for example, includes legally binding requirements for AI companies to share safety test results with the government. The European Union and China have also enacted formal policies addressing issues like copyright law and data privacy in AI development.

State-level actions are also noteworthy. Colorado recently introduced legislation to ban algorithmic discrimination and mandate that AI developers share internal data with state regulators to ensure compliance with ethical standards.

Looking ahead, the global AI regulatory landscape is expected to evolve further. France will host another summit early next year, building on the discussions from Seoul and Bletchley Park. Participants aim to develop formal definitions for risk benchmarks that necessitate regulatory intervention—a crucial step towards creating a more structured and effective governance framework for AI.


In conclusion, while the collaboration between AI companies and governments marks a positive step towards responsible AI development, voluntary measures will remain of limited effect without legal enforcement. Ongoing efforts to establish robust regulatory frameworks will be essential to ensuring that AI’s transformative potential is harnessed safely and ethically.
