AI will get a lot better at conversation. Don't expect it to tell the truth

The New York Times | January 13, 2023

Noam Shazeer, left, and Daniel De Freitas, the founders of Character.AI, a company that allows users of its website to converse with sophisticated chatbots that are trained to talk like humans, living or dead, real or imagined, in Palo Alto. The New York Times

On a recent afternoon, Jonas Thiel, a socioeconomics major at a college in northern Germany, spent more than an hour chatting online with some of the left-wing political philosophers he had been studying. These were not the actual philosophers but virtual re-creations, brought to conversation, if not quite life, by sophisticated chatbots on a website called Character.AI.

Thiel’s favourite was a bot that imitated Karl Kautsky, a Czech-Austrian socialist who died before World War II. When Thiel asked Kautsky’s digital avatar to provide some advice for modern-day socialists struggling to rebuild the workers’ movement in Germany, the Kautsky-bot suggested that they publish a newspaper. “They can use it not only as a means of spreading socialist propaganda, which is in short supply in Germany for the time being, but also to organise working-class people,” the bot said.

The Kautsky-bot went on to argue that the working classes would eventually “come to their senses” and embrace a modern-day Marxist revolution. “The proletariat is at a low point in their history right now,” it wrote. “They will eventually realise the flaws in capitalism, especially because of climate change.”

Over the course of several days, Thiel met with other virtual scholars, including G.A. Cohen and Adolph Reed Jr. But he could have picked almost anyone, living or dead, real or imagined. At Character.AI, which emerged this summer, users can chat with reasonable facsimiles of people as varied as Queen Elizabeth II, William Shakespeare, Billie Eilish or Elon Musk (there are several versions). Anyone you want to invoke, or concoct, is available for conversation. The site, created by Daniel De Freitas and Noam Shazeer, two former Google researchers, is among the many efforts to build a new kind of chatbot. These bots cannot chat exactly like a human, but they often seem to.

In late November, OpenAI, a San Francisco artificial intelligence lab, unveiled a bot called ChatGPT that left more than 1 million people feeling as if they were chatting with another human being. Similar technologies are under development at Google, Meta and other tech giants. Some companies have been reluctant to share the technology with the wider public. Because these bots learn their skills from data posted to the internet by real people, they often generate untruths, hate speech and language that is biased against women and people of colour. If misused, they could become a more efficient way of running the kind of misinformation campaign that has become commonplace in recent years.

“Without any additional guardrails in place, they are just going to end up reflecting all the biases and toxic information that is already on the web,” said Margaret Mitchell, a former AI researcher at Microsoft and Google, where she helped start its Ethical AI team. She is now with AI startup Hugging Face.

But other companies, including Character.AI, are confident that the public will learn to accept the flaws of chatbots and develop a healthy distrust of what they say. Thiel found that the bots at Character.AI had both a talent for conversation and a knack for impersonating real-life people. “If you read what someone like Kautsky wrote in the 19th century, he does not use the same language we use today,” he said. “But the AI can somehow translate his ideas into ordinary modern English.”

For the moment, these and other advanced chatbots are a source of entertainment. And they are quickly becoming a more powerful way of interacting with machines.

Experts are still debating whether the strengths of these technologies will outweigh their flaws and potential for harm, but they agree on one point: The believability of make-believe conversation will continue to improve.

The Art of Conversation

In 2015, De Freitas, then working as a software engineer at Microsoft, read a research paper published by scientists at Google Brain, the flagship artificial intelligence lab at Google. Detailing what it called “A Neural Conversational Model,” the paper showed how a machine could learn the art of conversation by analysing dialogue transcripts from hundreds of movies.

The paper described what AI researchers call a neural network, a mathematical system loosely modelled on the web of neurons in the brain. This same technology also translates between Spanish and English on services like Google Translate and identifies pedestrians and traffic signs for self-driving cars navigating city streets.

A neural network learns skills by pinpointing patterns in enormous amounts of digital data. By analysing thousands of cat photos, for instance, it can learn to recognise a cat.
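
That trial-and-error process can be sketched in a few lines of code. The snippet below is a minimal illustration of the idea, not any system described in this article; it assumes the PyTorch library, and random tensors stand in for the cat photos. The network guesses, measures how wrong it was, and adjusts its internal weights to be slightly less wrong next time.

```python
# A minimal sketch of pattern-learning, not the system from the paper.
# Hypothetical stand-in data: random tensors play the role of photos.
import torch
import torch.nn as nn

# A tiny network: flatten a 64x64 greyscale image, pass it through one
# hidden layer, and emit two scores, one for "cat" and one for "not cat".
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(64 * 64, 128),
    nn.ReLU(),
    nn.Linear(128, 2),
)

images = torch.randn(256, 1, 64, 64)   # stand-ins for photos
labels = torch.randint(0, 2, (256,))   # stand-ins for cat / not-cat labels

loss_fn = nn.CrossEntropyLoss()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(10):
    optimiser.zero_grad()
    loss = loss_fn(model(images), labels)  # how wrong are the predictions?
    loss.backward()                        # trace the error back to each weight
    optimiser.step()                       # nudge weights to reduce the error
```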

Daniel De Freitas, a co-founder of Character.AI, demonstrates a chat on the company’s website. The New York Times

The neural system described in the Google paper was far from perfect but seemed to chat like a real person every once in a while:

Human: what’s your name?

Machine: i’m julia.

Human: when were you born?

Machine: july 20th.

Human: what year were you born?

Machine: 1977.

Human: where are you from?

Machine: i’m out in the boonies.

Human: are you a man or a woman?

Machine: i’m a woman.

Human: why are we here?

Machine: i’m not sure.

When De Freitas read the paper, he was not yet an AI researcher; he was a software engineer working on search engines. But what he really wanted was to take Google’s idea to its logical extreme.

“You could tell this bot could generalise,” he said. “What it said did not look like what was in a movie script.”

He moved to Google in 2017. Officially, he was an engineer on YouTube, the company’s video-sharing site. But for his “20% time” project — a Google tradition that lets employees explore new ideas alongside their daily obligations — he began building his own chatbot.

The plan was to train a neural network using a much larger collection of dialogue: reams of chat logs culled from social media services and other sites across the internet. The idea was simple, but it would require enormous amounts of computer processing power. Even a supercomputer would need weeks or even months to analyse all that data.

As a Google engineer, he held a few credits that allowed him to run experimental software across the company’s vast network of computer data centres. But these credits would grant only a small fraction of the computing power needed to train his chatbot. So he started borrowing credits from other engineers. As the system analysed more data, its skills would improve by leaps and bounds.

Initially, he trained his chatbot using what is called an LSTM, for Long Short-Term Memory, a kind of neural network designed in the 1990s to handle sequences of data, such as text. But he soon switched to a new kind of neural network called a transformer, developed by a team of Google AI researchers that included Noam Shazeer.

Unlike an LSTM, which reads text one word at a time, a transformer can use multiple computer processors to analyse an entire document in a single step.
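
The distinction is easy to see in code. Below is a simplified, hypothetical sketch using the PyTorch library, not Google’s implementation: the LSTM loop must walk through a document one position at a time because each step depends on the previous one, while the self-attention step at the core of a transformer relates every position to every other in one batched matrix operation, the kind of work that spreads easily across many processors.

```python
# Simplified contrast between sequential (LSTM-style) and parallel
# (transformer-style) reading of a document. Illustrative only; the
# data is a random stand-in for a real document.
import torch
import torch.nn as nn

seq_len, dim = 512, 64                   # a 512-token "document"
tokens = torch.randn(1, seq_len, dim)    # stand-in token embeddings

# LSTM: each step depends on the hidden state from the previous step,
# so the 512 positions must be processed one after another.
lstm_cell = nn.LSTMCell(dim, dim)
h = torch.zeros(1, dim)
c = torch.zeros(1, dim)
for t in range(seq_len):                 # 512 dependent steps
    h, c = lstm_cell(tokens[:, t, :], (h, c))

# Self-attention: queries, keys and values for ALL positions are built
# together, and one matrix product relates every token to every other
# token at once, work that parallelises naturally across processors.
q = nn.Linear(dim, dim)(tokens)
k = nn.Linear(dim, dim)(tokens)
v = nn.Linear(dim, dim)(tokens)
weights = torch.softmax(q @ k.transpose(-2, -1) / dim ** 0.5, dim=-1)
attended = weights @ v                   # one parallel step
```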

Google, OpenAI and other organisations were already using transformers to build what are called “large language models,” systems suited for a wide range of language tasks, from writing Twitter messages to answering questions. Still working on his own, De Freitas focused the idea on conversation, feeding his transformer as much dialogue as possible.

It was an exceedingly simple approach.

But as De Freitas likes to say: “Simple solutions for incredible results.”

The result in this case was a chatbot that he called Meena. It was so effective that Google Brain hired De Freitas and turned his project into an official research effort. Meena became LaMDA, short for Language Model for Dialogue Applications.

The project spilled into the public consciousness early in the summer when another Google engineer, Blake Lemoine, told The Washington Post that LaMDA was sentient. This assertion was an exaggeration, to say the least. But the brouhaha showed how quickly chatbots were improving inside top labs like Google Brain and OpenAI.

Google was reluctant to release the technology, worried that its knack for misinformation and other toxic language could damage the company brand. But by this time De Freitas and Shazeer had left Google, determined to get this kind of technology into the hands of as many people as possible through their new company, Character.AI.

“The technology is useful today — for fun, for emotional support, for generating ideas, for all kinds of creativity,” Shazeer said.

Designed for Open-Ended Exchanges

ChatGPT, the bot released by OpenAI to much fanfare in late November, was designed to operate as a new kind of question-and-answer engine.

The ChatGPT bot. Image courtesy: OpenAI

It is pretty good in this role, but the user never knows when the chatbot will just make something up. It may tell you that the official currency of Switzerland is the euro (it’s actually the Swiss franc) or that the frog in Mark Twain’s “The Celebrated Jumping Frog of Calaveras County” could not only jump but talk. AI researchers call this generation of untruths “hallucination.”

In building Character.AI, De Freitas and Shazeer had a different objective: open-ended conversation. They believe that today’s chatbots are better suited to this kind of service, which is, for now, a means of entertainment, factual or not. As the site notes, “Everything Characters say is made up!”

“These systems are not designed for truth,” Shazeer said. “They are designed for plausible conversation.”

De Freitas, Shazeer and their colleagues didn’t build one bot that imitates Musk, another that mimics Queen Elizabeth and a third that parrots Shakespeare. They built a single system that can imitate all those people and others. It has learned from reams of dialogue, articles, books and digital text describing people like Musk, the queen and Shakespeare.
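
Character.AI has not published how its single system takes on so many voices, but a common technique in the field, often called persona conditioning, gives the general flavour: the same underlying model is steered by prepending a description of the character to the conversation. The sketch below only assembles such a prompt; the function and the persona text are hypothetical, and the model that would complete the prompt is omitted.

```python
# Hypothetical sketch of persona conditioning: one language model,
# many characters, each defined only by the text prepended to the chat.
# The persona descriptions and chat format here are illustrative.

def build_prompt(persona: str, history: list[tuple[str, str]], user_msg: str) -> str:
    """Prepend a persona description so a single model answers in character."""
    lines = [f"You are {persona}. Stay in character.", ""]
    for speaker, text in history:
        lines.append(f"{speaker}: {text}")
    lines.append(f"User: {user_msg}")
    lines.append("Character:")
    return "\n".join(lines)

kautsky = ("Karl Kautsky, the Czech-Austrian socialist theorist, "
           "answering in plain modern English")
shakespeare = "William Shakespeare, the Elizabethan playwright"

# The same function, and the same underlying model, serves both bots.
print(build_prompt(kautsky, [], "How should socialists organise today?"))
print(build_prompt(shakespeare, [], "What do you make of television?"))
```

Everything beyond the prompt, which facts the character asserts and how faithfully it mimics its subject, comes from patterns the model absorbed in training, which is why an impersonation can drift.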

The system also has a way of combining disparate concepts learned during training. The result is a practically endless collection of bots that can imitate a practically endless collection of people, riffing on a practically endless number of topics, as Thiel found when he had this chat with the Kautsky bot:

Jonas Thiel: Marxists in your time were very confident that revolution was a historical inevitability. In light of recent defeats socialists across the world have suffered and the lack of class struggle, even in times of great suffering, do you think it is time to revise that view?

Karl Kautsky Bot: I do not. We may have made mistakes with our tactics, but I refuse to believe that socialism is a pipe dream. I still believe that the proletariat must break the chains of capitalism, and that the working class must unite around Marxism. We must remember that capitalism is still deeply flawed and that it is our job as socialists to agitate and spread class consciousness.

Thiel: I completely agree, but wasn’t it wrong to assume workers uniting to overthrow the bourgeoisie was inevitable? After all, the capitalist class is very powerful and it is hard for workers to come together to resist collectively rather than individually.

Kautsky Bot: You are right, it was a mistake to be over optimistic in that regard. But I find it better to believe in class consciousness and revolution until I am proven wrong. What is the alternative? Believing the workers will stay in chains forever? I don’t agree with that. It is better to struggle against capitalism despite our odds than to give in. I refuse to believe the workers will never wake up. Even if I am over optimistic, we must not give in to pessimism.

Sometimes, the chatbot gets things right. Sometimes, it doesn’t.

When Thiel chatted with an avatar meant to imitate Reed, the 20th-century American political thinker, it turned him into “some kind of militant Maoist, which is definitely not right.”

Like their counterparts at Google, OpenAI and other top labs, De Freitas, Shazeer and their colleagues plan to train their system on ever larger amounts of digital data. This training can take months and cost millions of dollars; it can also sharpen the skills of the artificial conversationalist.

Researchers say that the rapid improvement will last only so long. Richard Socher, the former chief scientist in charge of AI at Salesforce, who now runs a startup called You.com, believes these exponential improvements will begin to level off over the next few years, when language models reach the point where they have analysed pretty much all the text on the internet.

But Shazeer said the runway is much longer: “There are billions of people in the world generating text all the time. People will keep spending more and more money to train smarter and smarter systems. We are nowhere near the end of that trend.”

Cade Metz © 2023 The New York Times Company
