China’s cyber regulator on Saturday released draft regulations for public consultation aimed at tightening oversight of artificial intelligence services designed to simulate human personalities and interact with users on an emotional level. The proposal reflects Beijing’s broader effort to guide the rapid expansion of consumer-facing AI through enhanced safety and ethical standards.
The proposed rules would apply to AI products and services made available to the public in China that present simulated human personality traits, thinking patterns, and communication styles, and that interact emotionally with users through text, images, audio, video, or other formats. Under the draft framework, providers would be required to warn users against excessive use and to intervene when signs of addiction emerge.
Service providers would also be expected to assume safety responsibilities across the entire product lifecycle. This includes establishing systems for algorithm review, data security, and the protection of personal information.
Psychological risks and content limits
Addressing potential psychological risks, the draft requires providers to identify user states and to assess users' emotional conditions and levels of dependence on the service. Where users display extreme emotions or addictive behaviour, providers should take necessary measures to intervene.
The measures further define content and conduct red lines, stating that AI services must not generate material that endangers national security, spreads rumours, or promotes violence or obscenity.