China Digital Economy - Monthly Policy Updates (November 2023)
The China-Britain Business Council will collaborate with LexisNexis as part of the working group programme to provide you with the latest updates on the digital economy, including case studies, insights, and analysis.
On the morning of October 25, the National Data Bureau (NDB) was officially unveiled, with Liu Liehong appointed as director and Shen Zhulin as deputy director of the new bureau. In March of this year, the State Council issued the “State Council’s Organizational Reform Plan”, which outlined the establishment of the NDB. As one of the highlights of the new round of organizational reform, the creation of the NDB has garnered extensive attention. The new bureau will be responsible for coordinating and advancing the development of the data infrastructure system, organizing the integration, sharing, development, and utilization of data resources, as well as planning and advancing the development of digital China, the digital economy, and the digital society. It will report directly to the National Development and Reform Commission.
On October 19, the Shanghai Data Exchange published an article on its official WeChat account, introducing two new guidance documents: the Data Trading Security and Compliance Guidelines of the Shanghai Data Exchange (the "Guidelines") and the Data Trading Compliance Checklist of the Shanghai Data Exchange (the "Checklist"). According to the article, these two documents were developed based on a comprehensive review of domestic regulatory requirements pertaining to data, along with extensive research on the compliance and security challenges and pain points faced by data trading parties, while taking into account explorations and practices in the field of data trading compliance. Both documents are intended to serve as a reference for data trading parties engaging in data trading and related services on the Exchange.
The Guidelines consist of 25 articles divided into six parts: General Provisions, Compliance Requirements for Trading Parties, Data Security Management Systems, Legality of Data Sources, Tradability of Data Products, and Supplementary Provisions. The Checklist, developed based on the Guidelines, further specifies compliance requirements for data products and provides sample compliance certification documentation for companies to reference when entering the Exchange for trading. The Guidelines aim to guide trading parties in conducting data trading securely and compliantly by enhancing their understanding and awareness of data trading security and compliance, highlighting security and compliance requirements, and offering clear guidance to companies entering the trading floor.
In addition, to facilitate understanding and implementation of compliance assessments, the Checklist quantifies compliance and security considerations. This helps companies and professional service providers efficiently identify data compliance and security risks, thereby improving the efficiency of self-inspections or assessments of data product compliance.
On October 11, 2023, the National Information Security Standardization Technical Committee (NISSTC) released the Basic Security Requirements for Generative Artificial Intelligence Service (Exposure Draft), soliciting public comments until October 25, 2023.
This marks the first draft domestic standard specifically addressing the security aspects of generative artificial intelligence (AI) services. It also serves as a supporting document for the Interim Measures for the Management of Generative Artificial Intelligence Services, jointly issued in July by seven central authorities, including the Cyberspace Administration of China.
The draft Basic Security Requirements, for the first time, establishes the basic security requirements that generative AI service providers (hereinafter referred to as “providers”) must adhere to. The document also elaborates on the relevant provisions in the Interim Measures for the Management of Generative Artificial Intelligence Services, encompassing data security, model security, security measures, and security assessments, specifically:
1. Source security. The document outlines operational requirements for source security, content security, and labeling security. Regarding source security, providers are required to establish a blacklist of sources and conduct security assessments on data from each source; if more than 5% of the data from a single source contains illegal and unhealthy information, that source should be added to the blacklist. Additionally, diversity in data sources is emphasized, and service providers are required to obtain the relevant authorization documents and maintain collection records for self-use data. The document explicitly prohibits certain content from being used as training data. Regarding data content security, the document imposes requirements on providers in terms of content filtering, intellectual property rights, and personal information protection. Furthermore, it specifies detailed requirements for data labeling staff, labeling rules, and the accuracy of labeling content.
2. Model security. The document establishes strict requirements in five areas: the use of foundation models, security of generated content, service transparency, accuracy of generated content, and reliability of generated content. Any provider wishing to use a foundation model for development should use one that has been registered with the authorities. In terms of service transparency, when providing services through interactive interfaces, the provider should publicly disclose information, including the limitations of the services and the model architecture and training framework used, on the website homepage and in the service agreement to help users understand the service mechanism and logic. If services are provided through a programmable interface, the aforementioned information should be disclosed in the corresponding documentation.
3. Security Measures. The document outlines requirements in seven areas: model applicability to user groups, scenarios, and purposes; processing of personal information; use of user input information for training; labeling of content such as images and videos; handling of public or user complaints and reports; provision of generated content to users; and model updates and upgrades.
4. Security Assessment. The document provides detailed references in four aspects: assessment methods, data security assessment, security assessment of generated content, and assessment of response rejection.
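To illustrate how the 5% source-blacklisting rule under source security above might be operationalised in a provider's self-assessment tooling, the following is a minimal sketch. It assumes a provider has already classified each sample from a source as compliant or not; the function name, inputs, and threshold handling are illustrative, not part of the draft standard.

```python
# Hypothetical self-check for the draft's source-security rule: a data
# source should be blacklisted if MORE THAN 5% of its data contains
# illegal and unhealthy information. How samples get flagged (e.g. by a
# content classifier or manual review) is outside this sketch.

ILLEGAL_CONTENT_THRESHOLD = 0.05  # the 5% limit stated in the draft


def should_blacklist(flagged_samples: int, total_samples: int,
                     threshold: float = ILLEGAL_CONTENT_THRESHOLD) -> bool:
    """Return True if the share of flagged samples exceeds the threshold."""
    if total_samples == 0:
        return False  # nothing assessed yet, so no basis for blacklisting
    return flagged_samples / total_samples > threshold


# 6% flagged -> exceeds 5%, blacklist the source
print(should_blacklist(600, 10_000))  # True
# 4% flagged -> within the limit
print(should_blacklist(400, 10_000))  # False
```

Note that the draft says "more than 5%", so a source sitting exactly at 5% would not be blacklisted under this reading; a provider may of course apply a stricter internal threshold.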
The draft Basic Security Requirements can serve as a main basis for self-assessment by providers of generative AI services or for third-party assessments. The detailed requirements outlined in the document provide companies with practical guidance and actionable recommendations for implementation. The document can also serve as a reference for the relevant regulatory authorities when assessing the security level of generative AI services. Businesses providing or utilizing generative AI services can use the document to conduct a gap analysis of their current practices, focusing on secure data collection and use. It is advisable to institutionalize processes for reviewing data sources and source security so that compliance with the document's requirements is maintained across policies, technology, and staffing.