Explained: The promises, pitfalls and panic surrounding ChatGPT
The potential impact of ChatGPT on society remains complicated and unclear even as its creator on Wednesday announced a paid subscription version in the United States

OpenAI just launched a paid version of the chatbot – $20 per month for an improved and faster service. AFP
Washington: Excitement around ChatGPT — an easy-to-use AI chatbot that can deliver an essay or computer code upon request within seconds — has sent schools into panic and turned Big Tech green with envy.
The potential impact of ChatGPT on society remains complicated and unclear even as its creator on Wednesday announced a paid subscription version in the United States.
Here is a closer look at what ChatGPT is (and is not):
Is this a turning point?
It is entirely possible that November’s release of ChatGPT by California company OpenAI will be remembered as a turning point in introducing a new wave of artificial intelligence to the wider public.
What is less clear is whether ChatGPT is actually a breakthrough, with some critics calling it a brilliant PR move that helped OpenAI score billions of dollars in investments from Microsoft.
Yann LeCun, Chief AI Scientist at Meta and professor at New York University, believes “ChatGPT is not a particularly interesting scientific advance,” calling the app a “flashy demo” built by talented engineers.
LeCun, speaking to the Big Technology Podcast, said ChatGPT is devoid of “any internal model of the world” and is merely churning out “one word after another” based on inputs and patterns found on the internet.
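As a rough illustration of what LeCun means by churning out “one word after another”, the toy sketch below generates text by repeatedly sampling the next word from a hand-written frequency table. The table and the generate function are invented purely for illustration; GPT-3 performs the same autoregressive step with a large neural network trained on internet text, not a lookup table.

```python
import random

# Hypothetical word-to-next-word frequencies, standing in for a learned model.
bigrams = {
    "the": {"cat": 3, "dog": 2},
    "cat": {"sat": 4, "ran": 1},
    "dog": {"ran": 3, "sat": 2},
    "sat": {"down": 5},
    "ran": {"away": 5},
}

def generate(start, max_words=6):
    """Build a sentence one word at a time, each choice based only on the previous word."""
    words = [start]
    for _ in range(max_words):
        options = bigrams.get(words[-1])
        if not options:  # no known continuation: stop generating
            break
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat down"
```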

Some critics describe ChatGPT as a brilliant public relations move that helped OpenAI secure billions of dollars in funding from Microsoft. AP
“When working with these AI models, you have to remember that they’re slot machines, not calculators,” warned Haomiao Huang of Kleiner Perkins, the Silicon Valley venture capital firm.
“Every time you ask a question and pull the arm, you get an answer that could be marvellous… or not… The failures can be extremely unpredictable,” Huang wrote in Ars Technica, the tech news website.
Just like Google
ChatGPT is powered by an AI language model that is nearly three years old — OpenAI’s GPT-3 — and the chatbot only uses a part of its capability.
The true revolution is the humanlike chat, said Jason Davis, research professor at Syracuse University.
“It’s familiar, it’s conversational and guess what? It’s kind of like putting in a Google search request,” he said.
ChatGPT’s rockstar-like success even shocked its creators at OpenAI, which received billions in new financing from Microsoft in January.
“Given the magnitude of the economic impact we expect here, more gradual is better,” OpenAI CEO Sam Altman said in an interview with StrictlyVC, a newsletter.
“We put GPT-3 out almost three years ago… so the incremental update from that to ChatGPT, I felt like should have been predictable and I want to do more introspection on why I was sort of miscalibrated on that,” he said.
The risk, Altman added, was startling the public and policymakers. On Tuesday his company unveiled a tool for detecting AI-generated text, amid concerns from teachers that students may rely on artificial intelligence to do their homework.
What now?
From lawyers to speechwriters, from coders to journalists, everyone is waiting breathlessly to feel the disruption caused by ChatGPT. OpenAI just launched a paid version of the chatbot – $20 per month for an improved and faster service.

ChatGPT is powered by an AI language model that is nearly three years old — OpenAI’s GPT-3 — and the chatbot only uses a part of its capability. AP
For now, officially, the first significant application of OpenAI’s tech will be for Microsoft software products.
Though details are scarce, most assume that ChatGPT-like capabilities will turn up on the Bing search engine and in the Office suite.
“Think about Microsoft Word. I don’t have to write an essay or an article, I just have to tell Microsoft Word what I wanted to write with a prompt,” said Davis.
He believes influencers on TikTok and Twitter will be the earliest adopters of this so-called generative AI since going viral requires huge amounts of content and ChatGPT can take care of that in no time.
This of course raises the spectre of disinformation and spamming carried out at an industrial scale.
For now, Davis said the reach of ChatGPT is very limited by computing power, but once this is ramped up, the opportunities and potential dangers will grow exponentially.
And much like the ever-imminent arrival of self-driving cars that never quite happens, experts disagree on whether that is a question of months or years.
Ridicule
LeCun said Meta and Google have refrained from releasing AI as potent as ChatGPT out of fear of ridicule and backlash.
Quieter releases of language-based bots — like Meta’s Blenderbot or Microsoft’s Tay for example — were quickly shown capable of generating racist or inappropriate content.
Tech giants have to think hard before releasing something “that is going to spew nonsense” and disappoint, he said.