Firstpost
  • Home
  • Video Shows
    Vantage Firstpost America Firstpost Africa First Sports
  • World
    US News
  • Explainers
  • News
    India Opinion Cricket Tech Entertainment Sports Health Photostories
  • Asia Cup 2025
Apple Incorporated Modi ji Justin Trudeau Trending

Sections

  • Home
  • Live TV
  • Videos
  • Shows
  • World
  • India
  • Explainers
  • Opinion
  • Sports
  • Cricket
  • Health
  • Tech/Auto
  • Entertainment
  • Web Stories
  • Business
  • Impact Shorts

Shows

  • Vantage
  • Firstpost America
  • Firstpost Africa
  • First Sports
  • Fast and Factual
  • Between The Lines
  • Flashback
  • Live TV

Events

  • Raisina Dialogue
  • Independence Day
  • Champions Trophy
  • Delhi Elections 2025
  • Budget 2025
  • US Elections 2024
  • Firstpost Defence Summit
Trending:
  • PM Modi in Manipur
  • Charlie Kirk killer
  • Sushila Karki
  • IND vs PAK
  • India-US ties
  • New human organ
  • Downton Abbey: The Grand Finale Movie Review
fp-logo
Racist AI: ChatGPT, Copilot, more likely to sentence African-American defendants to death, finds Cornell study
Whatsapp Facebook Twitter
Whatsapp Facebook Twitter
Apple Incorporated Modi ji Justin Trudeau Trending

Sections

  • Home
  • Live TV
  • Videos
  • Shows
  • World
  • India
  • Explainers
  • Opinion
  • Sports
  • Cricket
  • Health
  • Tech/Auto
  • Entertainment
  • Web Stories
  • Business
  • Impact Shorts

Shows

  • Vantage
  • Firstpost America
  • Firstpost Africa
  • First Sports
  • Fast and Factual
  • Between The Lines
  • Flashback
  • Live TV

Events

  • Raisina Dialogue
  • Independence Day
  • Champions Trophy
  • Delhi Elections 2025
  • Budget 2025
  • US Elections 2024
  • Firstpost Defence Summit
  • Home
  • Tech
  • Racist AI: ChatGPT, Copilot, more likely to sentence African-American defendants to death, finds Cornell study

Racist AI: ChatGPT, Copilot, more likely to sentence African-American defendants to death, finds Cornell study

FP Staff • March 11, 2024, 15:25:14 IST
Whatsapp Facebook Twitter

People studying LLMs believe that they have been rid of racial bias. However, recent experiments show that the initial bias is still there, and has only evolved slightly. It continues to be biased against certain races

Advertisement
Subscribe Join Us
Add as a preferred source on Google
Prefer
Firstpost
On
Google
Racist AI: ChatGPT, Copilot, more likely to sentence African-American defendants to death, finds Cornell study
All major AI models have an inherent racial bias. Image: Freepik

A recent study from Cornell University suggests that large language models (LLMs) are more likely to exhibit bias against users who speak African American English. The research indicates that the dialect of the language spoken can influence how artificial intelligence (AI) algorithms perceive individuals, affecting judgments about their character, employability, and potential criminality.

This study focused on large language models like OpenAI’s ChatGPT and GPT-4, Meta’s LLaMA2, and French Mistral 7B. These LLMs are deep learning algorithms designed to generate human-like text.

STORY CONTINUES BELOW THIS AD

Researchers conducted “matched guise probing,” presenting prompts in both African American English and Standardized American English to the LLMs. They then analyzed how the models identified various characteristics of individuals based on the language used.

More from Tech
How ChatGPT is becoming everyone’s BFF and why that’s dangerous How ChatGPT is becoming everyone’s BFF and why that’s dangerous America ready for self-driving cars, but it has a legal problem America ready for self-driving cars, but it has a legal problem

According to Valentin Hofmann, a researcher from the Allen Institute for AI, the results of the study indicate that GPT-4 technology is more inclined to issue death sentences to defendants who use English commonly associated with African Americans, without any indication of their race being disclosed.

Hofmann highlighted these concerns in a post on the social media platform X (formerly Twitter), emphasizing the urgent need for attention to the biases present in AI systems utilizing large language models (LLMs), especially in domains such as business and jurisdiction where such systems are increasingly utilized.

The study also revealed that LLMs tend to assume that speakers of African American English hold less prestigious jobs compared to those who speak Standardized English, despite not being informed about the speakers’ racial identities.

Impact Shorts

More Shorts
America ready for self-driving cars, but it has a legal problem

America ready for self-driving cars, but it has a legal problem

Alibaba, Baidu begin using own AI chips as China shifts away from US tech amid Nvidia row

Alibaba, Baidu begin using own AI chips as China shifts away from US tech amid Nvidia row

Interestingly, the research found that the larger the LLM, the better its understanding of African American English, and it would be more inclined to avoid explicitly racist language. However, the size of the LLM did not affect its underlying covert biases.

Hofmann cautioned against interpreting the decrease in overt racism in LLMs as a sign that racial bias has been resolved. Instead, he stressed that the study demonstrates a shift in the manifestation of racial bias in LLMs.

STORY CONTINUES BELOW THIS AD

The traditional method of teaching large language models (LLMs) by providing human feedback does not effectively address covert racial bias, as indicated by the study.

Rather than mitigating bias, this approach can actually lead LLMs to learn how to “superficially conceal” their underlying racial biases while still maintaining them at a deeper level.

Tags
artificial intelligence (AI) Bias in AI Large Language Models
End of Article
Latest News
Find us on YouTube
Subscribe
End of Article

Impact Shorts

America ready for self-driving cars, but it has a legal problem

America ready for self-driving cars, but it has a legal problem

US self-driving cars may soon ditch windshield wipers as the NHTSA plans to update regulations by 2026. State-level rules vary, complicating nationwide deployment. Liability and insurance models are also evolving with the technology.

More Impact Shorts

Top Stories

Russian drones over Poland: Trump’s tepid reaction a wake-up call for Nato?

Russian drones over Poland: Trump’s tepid reaction a wake-up call for Nato?

As Russia pushes east, Ukraine faces mounting pressure to defend its heartland

As Russia pushes east, Ukraine faces mounting pressure to defend its heartland

Why Mossad was not on board with Israel’s strike on Hamas in Qatar

Why Mossad was not on board with Israel’s strike on Hamas in Qatar

Turkey: Erdogan's police arrest opposition mayor Hasan Mutlu, dozens officials in corruption probe

Turkey: Erdogan's police arrest opposition mayor Hasan Mutlu, dozens officials in corruption probe

Russian drones over Poland: Trump’s tepid reaction a wake-up call for Nato?

Russian drones over Poland: Trump’s tepid reaction a wake-up call for Nato?

As Russia pushes east, Ukraine faces mounting pressure to defend its heartland

As Russia pushes east, Ukraine faces mounting pressure to defend its heartland

Why Mossad was not on board with Israel’s strike on Hamas in Qatar

Why Mossad was not on board with Israel’s strike on Hamas in Qatar

Turkey: Erdogan's police arrest opposition mayor Hasan Mutlu, dozens officials in corruption probe

Turkey: Erdogan's police arrest opposition mayor Hasan Mutlu, dozens officials in corruption probe

Top Shows

Vantage Firstpost America Firstpost Africa First Sports
Latest News About Firstpost
Most Searched Categories
  • Web Stories
  • World
  • India
  • Explainers
  • Opinion
  • Sports
  • Cricket
  • Tech/Auto
  • Entertainment
  • IPL 2025
NETWORK18 SITES
  • News18
  • Money Control
  • CNBC TV18
  • Forbes India
  • Advertise with us
  • Sitemap
Firstpost Logo

is on YouTube

Subscribe Now

Copyright @ 2024. Firstpost - All Rights Reserved

About Us Contact Us Privacy Policy Cookie Policy Terms Of Use
Home Video Shorts Live TV