On Thursday, China issued the world’s earliest and most comprehensive regulations concerning generative artificial intelligence (AI) models, with a focus on promoting healthy content and upholding “core socialist values.” The move comes as Beijing aims to exert control over the deployment of services similar to ChatGPT.
The provisional regulations, jointly published by seven Chinese regulators led by the Cyberspace Administration of China, are set to take effect on August 15. They apply to all generative AI content services, including text, images, audio, and video, provided to the Chinese public.
Compared to the initial draft released in April, which sought public feedback, the updated regulations take on a more supportive tone towards the new technology. Authorities now pledge “effective measures to encourage innovative development of generative AI,” and punitive terms such as fines for technology-related offenses have been removed from the revised version.
As of now, China has not permitted any domestic companies to launch ChatGPT-style services to the public. Baidu’s Ernie Bot and Alibaba Group Holding’s Tongyi Qianwen are either in trial mode or restricted to business use only. Similarly, foreign models like OpenAI’s ChatGPT and Google’s Bard remain unavailable in China, with access to them swiftly blocked. Nonetheless, the new regulations are expected to provide a clearer path for domestic developers to introduce their generative AI products to the mass market.
Chinese regulators plan to adopt an “inclusive and prudent” approach towards generative AI services and will implement a “graded” regulatory framework. The National Development and Reform Commission, the Ministry of Education, the Ministry of Science and Technology, the Ministry of Industry and Information Technology, the Ministry of Public Security, and China’s broadcasting authority are the other entities involved in drafting the rules.
According to the new regulations, generative AI service providers must align with “core socialist values” and refrain from generating content that incites subversion of state power, endangers national security, or promotes terrorism, extremism, or obscenity.
Additionally, AI models and chatbots should avoid generating false and harmful information. Chinese-developed chatbots and AI models already incorporate functions to ensure the content they generate is free of undesirable elements. For instance, Zhou Hongyi, chairman of 360 Security Technology, highlighted the self-censorship function of the firm’s chatbot, which terminates a conversation if a user inputs a “sensitive word.”