China bans AI-generated media without watermarks

An unmarked AI-generated image of China’s flag, which will be illegal in China after January 10, 2023. (Image credit: Ars Technica)

China’s Cyberspace Administration recently issued regulations prohibiting the creation of AI-generated media without clear labels, such as watermarks, among other policies, reports The Register. The new rules come as part of China’s evolving response to the generative AI trend that swept the tech world in 2022, and they will take effect on January 10, 2023. In China, the Cyberspace Administration oversees the regulation, oversight, and censorship of the Internet.

Under the new regulations, the administration will keep a closer eye on what it calls “deep synthesis” technology. In a news post on the website of China’s Office of the Central Cyberspace Affairs Commission, the government outlined its reasons for issuing the regulation. It pointed to the recent wave of text, image, voice, and video synthesis AI, which China recognizes as important to future economic growth (translation via Google Translate):

In recent years, deep synthesis technology has developed rapidly. While serving user needs and improving user experience, it has also been used by some unscrupulous people to produce, copy, publish, and disseminate illegal and harmful information, to slander and belittle others’ reputation and honor, to counterfeit others’ identities, and to commit fraud, which disrupts the order of communication and social order, damages the legitimate rights and interests of the people, and endangers national security and social stability. The introduction of the “Regulations” is needed to prevent and resolve security risks, and also to promote the healthy development of deep synthesis services and improve the level of supervision capabilities.

Under the regulations, new deep synthesis products will be subject to a security assessment from the government, and each product must be found in compliance with the regulations before it can be released. The administration also particularly emphasizes the requirement for obvious “marks” (such as watermarks) that denote AI-generated content:

Providers of deep synthesis services shall add marks that do not affect the use of the information content generated or edited using their services. Services that provide functions such as intelligent dialogue, synthesized human voice, human face generation, and immersive realistic scenes, which generate or significantly change information content, shall be marked prominently to avoid public confusion or misidentification. No organization or individual shall use technical means to delete, tamper with, or conceal such marks.

Further, companies that provide deep synthesis tech must keep their records legally compliant, and people using the technology must register for accounts with their real names so their generation activity can be traced. Like the US, China has seen a boom in AI-powered applications. For example, one of China’s leading tech companies, Baidu, produced an image synthesis model similar to DALL-E and Stable Diffusion.

A growing number of tech experts have recently recognized that China and the United States face a coming wave of generative AI that could challenge power structures, enable fraud, or even tamper with our sense of history. So far, the two countries have responded in almost polar opposite ways: the US with non-binding guidelines versus China’s firm restrictions. In 2019, China published its first rules making it illegal to publish unmarked “fake news” deepfakes; those rules took effect in early 2020.
