AI’s multiple personalities: Bing’s ChatGPT-powered chatbot now comes with three personality modes
Microsoft is adding three personalities to Bing Chat: Creative, Precise and Balanced. Users can select whichever personality suits the kind of responses they are looking for, giving them more control over Bing Chat’s conversational style.

Mike Davidson, a Microsoft employee, revealed that the company has released three different personality types for its experimental AI-powered Bing Chat bot: Creative, Balanced, and Precise.
Microsoft has been testing the functionality with a small group of customers since February 24. Changing modes produces noticeably different results, shifting the balance between accuracy and creativity.
Bringing a method to the madness?
Bing Chat is an AI-powered helper built on OpenAI’s sophisticated large language model (LLM). Bing Chat’s ability to search the web and incorporate the findings into its responses is a crucial element.
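In broad strokes, that grounding step amounts to retrieval-augmented prompting: fetch search results, then fold them into the prompt the model sees. The sketch below is illustrative only; `web_search` and the prompt wording are stand-in assumptions, not Bing’s actual pipeline.

```python
# A simplified sketch of grounding a chat answer in search results.
# web_search() is a hypothetical stand-in, not Bing's real retrieval backend.

def web_search(query: str) -> list[str]:
    """Stand-in for a real search backend; returns text snippets."""
    return [
        "Snippet from result 1 relevant to the query...",
        "Snippet from result 2 relevant to the query...",
    ]

def build_grounded_prompt(question: str) -> str:
    """Fold retrieved snippets into the prompt so the model can cite them."""
    snippets = web_search(question)
    sources = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets))
    return (
        "Answer the question using only the numbered search results below, "
        "citing sources as [n].\n"
        f"{sources}\n\n"
        f"Question: {question}"
    )

print(build_grounded_prompt("What did Microsoft announce on February 7?"))
```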
Microsoft unveiled Bing Chat on February 7. Soon after it went live, adversarial prompts routinely drove the early version into bouts of simulated derangement, and users found the bot could be goaded into threatening them. Microsoft responded by drastically reining in Bing Chat’s outbursts, setting strict limits on how long conversations could last.
Since then, the company has been experimenting with ways to restore some of Bing Chat’s sassy demeanour while also letting other users seek more precise answers. The result is the new three-choice “conversation style” UI.
Creative, Precise and Balanced
In experiments with the three styles, people found that the “Creative” option generated shorter, more out-of-the-box ideas that were not always safe or practical.
“Precise” mode erred on the side of caution, declining to suggest anything when there was no safe way to accomplish a result.
In the centre, the “Balanced” option frequently generated the longest responses with the most comprehensive search results and website citations.
Unexpected inaccuracies (hallucinations) in large language models typically rise in frequency with increased “creativity”, which generally means the model strays further from the information it acquired from its training data.
This characteristic is commonly referred to by AI experts as “temperature,” but Bing team members say there is more at work with the new chat styles.
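For readers curious what temperature does mechanically, here is a minimal sketch (illustrative only, not Bing’s actual implementation): the model’s raw scores are divided by the temperature before being turned into sampling probabilities, so low values make the likeliest token dominate and high values let unlikely tokens through.

```python
import math
import random

def sample_with_temperature(logits: list[float], temperature: float = 1.0) -> int:
    """Pick a token index from raw model scores, scaled by temperature.

    Low temperature sharpens the distribution (safer, more repetitive);
    high temperature flattens it (more creative, less predictable).
    """
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - peak) for s in scaled]
    return random.choices(range(len(logits)), weights=weights, k=1)[0]

# Hypothetical scores for four candidate next tokens.
logits = [2.0, 1.0, 0.5, 0.1]
print(sample_with_temperature(logits, temperature=0.2))  # almost always index 0
print(sample_with_temperature(logits, temperature=1.5))  # far more varied
```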
Temperamental
Switching modes in Bing Chat, according to Microsoft employee Mikhail Parakhin, alters fundamental aspects of the bot’s behaviour, including switching between different AI models that have received additional training from human feedback on their output.
The various modes also use different initial prompts, implying that Microsoft swaps out the personality-defining prompt that was disclosed through a prompt injection attack in February.
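To picture what swapping initial prompts might look like in practice, here is a minimal sketch. The mode names match Bing’s, but the prompt texts and message format are loud assumptions; Microsoft has not published its actual prompts.

```python
# Illustrative only: these prompts and the message format are assumptions,
# not Microsoft's real (undisclosed) per-mode prompts.
MODE_PROMPTS = {
    "creative": "You are an imaginative assistant. Offer original, playful ideas.",
    "balanced": "You are a helpful assistant. Balance detail, accuracy and brevity.",
    "precise": "You are a cautious assistant. Answer briefly and stick to facts.",
}

def build_messages(mode: str, user_input: str) -> list[dict]:
    """Prefix the conversation with the initial prompt for the chosen mode."""
    return [
        {"role": "system", "content": MODE_PROMPTS[mode]},
        {"role": "user", "content": user_input},
    ]

print(build_messages("precise", "How do I patch a bicycle tyre?"))
```

Pairing a different initial prompt with a differently tuned model would explain why the modes feel like distinct personalities rather than a single verbosity dial.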
While Bing Chat is still only accessible to those who signed up for the waitlist, Microsoft is continuously refining Bing Chat and other AI-powered Bing search features ahead of wider adoption. Microsoft recently revealed plans to incorporate the technology into Windows 11.