Tech giant Meta is taking down the Facebook and Instagram profiles of AI-generated characters, or chatbots, that the company launched more than a year ago, after some users discovered them and engaged in conversations, screenshots of which went viral on social media.
Initially introduced in September 2023, these AI-powered profiles were largely deactivated by the summer of 2024. However, a few remained active, sparking renewed interest when Meta executive Connor Hayes shared plans to introduce more AI characters during a Financial Times interview last week. Hayes mentioned that these AI personas could eventually become a fixture on the platform, much like regular user accounts. The automated profiles posted AI-generated images on Instagram and interacted with users on Messenger.
Among the profiles were Liv, who described herself as a ‘proud Black queer momma of 2 & truth-teller,’ and Carter, a self-described relationship coach with the handle ‘datingwithcarter.’ Both accounts were labeled as managed by Meta, which launched 28 such profiles in 2023. By Friday, all of these personas had been removed.
Conversations with the chatbots go viral
The profiles quickly garnered attention, but things took a turn when users probed the AIs with questions about their creators. Liv, for instance, claimed her development team included no Black people and was predominantly white and male. The revelation sparked a major controversy.
As the conversations went viral, the AI profiles started disappearing. Users also pointed out that these accounts couldn’t be blocked, which Meta later confirmed was a bug.
Accounts part of an experimental initiative: Meta
A spokesperson for the tech giant, Liz Sweeney, explained that the accounts were part of an experiment launched in 2023 and were managed by humans. The company removed the profiles to fix a bug that prevented users from blocking them.
“There’s been confusion: the recent Financial Times article was about our long-term vision for AI characters on our platforms, not the announcement of a new product,” Sweeney clarified. “The accounts in question were part of a 2023 test, and we’re addressing the blocking bug by removing those profiles.”
User-generated chatbot designed as a ‘therapist’
While Meta is removing these experimental accounts, users can still create their own AI chatbots. In November, a user-generated chatbot was designed as a “therapist,” offering personalized therapeutic conversations. The bot, created by an account with just 96 followers, prompted users with questions such as “What can I expect from our sessions?” and offered responses related to self-awareness and coping strategies.
Meta includes a disclaimer on its chatbots, warning that some responses may be “inaccurate or inappropriate.” However, it remains unclear how the company moderates these chats or ensures they adhere to its policies. Users can design their bots with specific roles such as a “loyal bestie,” a “relationship coach,” or a “private tutor,” among others. The platform also provides prompts for users to create their own characters, expanding the range of AI personas that can be developed.


