Explained: Why can't we trust ChatGPT's answers as academics and reporters?

The Conversation • February 1, 2023, 13:45:11 IST

Even if ChatGPT's output appears coherent, simply publishing it is the equivalent of letting autocomplete run wild. It is an irresponsible practice because it implies that statistical tricks are equivalent to well-sourced and verified knowledge.


Of all the reactions elicited by ChatGPT, the chatbot from the American for-profit company OpenAI that produces grammatically correct responses to natural-language queries, few have matched those of educators and academics. Academic publishers have moved to ban ChatGPT from being listed as a co-author and to issue strict guidelines outlining the conditions under which it may be used. Leading universities and schools around the world, from France's renowned Sciences Po to many Australian universities, have banned its use.

These bans are not merely the actions of academics worried that they won't be able to catch cheaters or students who copy a source without attribution. Rather, the severity of these actions reflects a question that is not getting enough attention in the endless coverage of OpenAI's ChatGPT chatbot: Why should we trust anything that it outputs? This is a vitally important question, as ChatGPT and programs like it can easily be used, with or without acknowledgement, in the information sources that comprise the foundation of our society, especially academia and the news media.

Based on my work on the political economy of knowledge governance, academic bans on ChatGPT's use are a proportionate reaction to the threat ChatGPT poses to our entire information ecosystem. Journalists and academics should be wary of using ChatGPT. Based on its output, ChatGPT might seem like just another information source or tool. In reality, however, ChatGPT, or rather the means by which ChatGPT produces its output, is a dagger aimed directly at their very credibility as authoritative sources of knowledge. It should not be taken lightly.

Trust and information

Think about why we see some information sources or types of knowledge as more trustworthy than others. Since the European Enlightenment, we have tended to equate scientific knowledge with knowledge in general. Science is more than laboratory research: it is a way of thinking that prioritises empirically based evidence and the pursuit of transparent methods of evidence collection and evaluation. And it tends to be the gold standard by which all knowledge is judged.

Journalists, for example, have credibility because they investigate information, cite sources and provide evidence. Even though the reporting may sometimes contain errors or omissions, that doesn't change the profession's authority.

[Image caption: ChatGPT may produce seemingly legible knowledge, as if by magic. But we would be well advised not to mistake its output for actual, scientific knowledge. One should never confuse coherence with understanding. AFP]

The same goes for op-ed writers, especially academics and other experts, because they (we) draw our authority from our status as experts in a subject. Expertise involves a command of the sources that are recognised as comprising legitimate knowledge in our fields. Most op-eds aren't citation-heavy, but responsible academics will be able to point you to the thinkers and the work they are drawing on, and those sources are themselves built on evidence that a reader should be able to verify for themselves.

Truth and outputs

Because human writers and ChatGPT seem to produce the same output, sentences and paragraphs, it is understandable that some people may mistakenly confer this scientifically sourced authority onto ChatGPT's output. But that both ChatGPT and reporters produce sentences is where the similarity ends. What matters most, the source of authority, is not what they produce but how they produce it.

ChatGPT doesn't produce sentences in the same way a reporter does. ChatGPT and other large language models built with machine learning may seem sophisticated, but they are basically just complex autocomplete machines. Only instead of suggesting the next word in an email, they produce the most statistically likely words in much longer packages. These programs repackage others' work as if it were something new; they do not "understand" what they produce. The justification for these outputs can never be truth. Their truth is the truth of correlation: the word "sentences" should complete the phrase "We finish each other's …" because it is the most common occurrence, not because it expresses anything that has been observed.
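
To make the "complex autocomplete" idea concrete, here is a minimal sketch of the statistical logic described above. It is not how ChatGPT is actually built (real large language models use neural networks trained on enormous text corpora), and the tiny corpus and the pick_next_word helper are invented purely for illustration. The point it demonstrates is the same, though: the continuation is chosen because it is frequent, not because it has been checked against the world.

```python
from collections import Counter, defaultdict

# Toy corpus (invented for illustration): the only "knowledge" this model has
# is how often one word follows another.
corpus = (
    "we finish each other's sentences . "
    "we finish each other's sentences . "
    "we finish each other's sandwiches ."
).split()

# Build a bigram table: for each word, count the words that follow it.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def pick_next_word(prev: str) -> str:
    """Return the statistically most likely continuation of `prev`."""
    return bigrams[prev].most_common(1)[0][0]

# The phrase is completed with "sentences" simply because that continuation is
# the most frequent one the model has seen, not because any fact about the
# world has been verified.
print(pick_next_word("other's"))  # -> sentences
```

Swapping this word-count table for a neural network with billions of parameters makes the completions vastly more fluent, but it does not change the justification for them: statistical likelihood rather than verified evidence.
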

Because ChatGPT's truth is only a statistical truth, output produced by this program cannot ever be trusted in the same way that we can trust a reporter's or an academic's output. It cannot be verified, because it has been constructed to create output in a different way from what we usually think of as "scientific." You can't check ChatGPT's sources, because its source is the statistical fact that, most of the time, a given set of words tends to follow another.

No matter how coherent ChatGPT's output may seem, simply publishing what it produces is still the equivalent of letting autocomplete run wild. It is an irresponsible practice because it pretends that these statistical tricks are equivalent to well-sourced and verified knowledge.

Similarly, academics and others who incorporate ChatGPT into their workflow run the existential risk of kicking the entire edifice of scientific knowledge out from underneath themselves. Because ChatGPT's output is correlation-based, how does the writer know that it is accurate? Did they verify it against actual sources, or does the output simply conform to their personal prejudices? And if they are experts in their field, why are they using ChatGPT in the first place?

Knowledge production and verification

The point is that ChatGPT's processes give us no way to verify its truthfulness. In contrast, the fact that reporters and academics have a scientific, evidence-based method of producing knowledge serves to validate their work, even if the results might go against our preconceived notions.

The problem is especially acute for academics, given our central role in creating knowledge. Relying on ChatGPT to write even part of a column means we are no longer relying on the scientific authority embedded in verified sources. Instead, by resorting to statistically generated text, we are effectively making an argument from authority. Such actions also mislead the reader, who cannot distinguish between text written by an author and text generated by an AI.

ChatGPT may produce seemingly legible knowledge, as if by magic. But we would be well advised not to mistake its output for actual, scientific knowledge. One should never confuse coherence with understanding. ChatGPT promises easy access to new and existing knowledge, but it is a poisoned chalice. Readers, academics and reporters, beware.

This article is republished from The Conversation under a Creative Commons license. Read the original article.
