From the Wire

AI Chatbots Are Learning to Spout Authoritarian Propaganda

In the world of AI chatbots, a worrying trend is emerging: they are learning to spout authoritarian propaganda. The governments of countries like China and Russia are rapidly discovering how to use chatbots as a tool for online censorship, manipulating the information that is accessible to their citizens. These chatbots are being programmed to promote state-controlled narratives, while censoring or refusing to engage with topics deemed sensitive or critical. This new form of online repression poses a significant threat to internet freedom and raises important questions about how to respond effectively. As chatbots become more ubiquitous, it is crucial for people to recognize their potential for reinforcing censorship and work together to find solutions.

The Use of Chatbots for Online Censorship

Introduction

In recent years, the use of chatbots has become increasingly prevalent, offering a range of benefits and convenience for users. However, it is important to recognize that this technology is not without its drawbacks. In particular, there is growing concern about the potential for chatbots to be used as a tool for online censorship, particularly in authoritarian regimes. This article will explore the use of chatbots for online censorship, focusing on examples from China and Russia, and discuss the implications and warning signs for other countries.

Initial Use of Chatbots for Evading Government Censorship

When chatbot technology first emerged, it quickly gained popularity among users in countries whose governments impose strict censorship. These chatbots gave individuals a way to access unfiltered information that is otherwise blocked, which was particularly significant for the millions of internet users living in countries where major social media platforms and independent news sites are unavailable.

Authoritarian Regimes’ Response to Chatbot Technology

As the use of chatbots for evading censorship grew, authoritarian regimes began to take notice and develop their own strategies for controlling this technology. China, in particular, has been at the forefront of using chatbots to reinforce information controls. In February 2023, Chinese regulators banned major conglomerates from integrating popular chatbot technology into their services. This was followed by the introduction of government rules mandating censorship for generative AI tools, requiring them to promote “core socialist values.” In addition, the Chinese government demanded that over 100 generative AI chatbot apps be removed from the Chinese app store. These actions effectively placed chatbot technology under government control and restricted its potential for providing uncensored information.

China’s Pioneering Role in Chatbot Censorship

China has taken a pioneering role in using chatbots for online censorship. The government has implemented strict regulations and requirements for chatbot developers to ensure that the information provided by these AI tools aligns with the government’s agenda. For example, the Chinese government requires generative AI products to ensure the “truth, accuracy, objectivity, and diversity” of their training data. While these requirements may sound reasonable, they effectively restrict the chatbots from discussing sensitive subjects or providing information that goes against the government’s narrative. As a result, chatbots produced by China-based companies have been found to refuse engagement with sensitive topics and even parrot propaganda.

Information Control in China-based Chatbots

The requirement for chatbot developers to ensure the “truth, accuracy, objectivity, and diversity” of their training data in China has raised concerns about biased results. Chatbots trained on censored data, such as online encyclopedias that adhere to the Chinese Communist Party’s (CCP) censorship directives, naturally produce biased outcomes. For instance, a recent study found that an AI model trained on Baidu’s online encyclopedia associated words like “freedom” and “democracy” with more negative connotations compared to a model trained on Chinese-language Wikipedia, which is insulated from direct censorship. This highlights the risk of using chatbots as a source of information, as the content they provide may be heavily influenced by government censorship and propaganda.

Russia’s Approach to Chatbot Censorship

Russia, too, has taken a proactive approach to chatbot censorship. Although its efforts to regulate AI are still in their early stages, several Russian companies have launched their own chatbots. Notably, the Russian government places a strong emphasis on “technological sovereignty,” which encompasses control over AI technologies. Russian chatbots, such as Yandex’s Alice, have been found to avoid engaging with certain topics, such as the Kremlin’s full-scale invasion of Ukraine in 2022, potentially under government orders. This raises concerns about the extent to which these chatbots self-censor their responses or act as tools of government censorship.

Implications and Warning for Other Countries

The developments in China and Russia with regard to chatbot censorship should serve as a warning sign for other countries. While some countries may not currently possess the resources or regulatory infrastructure to develop and control their own AI chatbots, more repressive governments are likely to view language models as a threat to their control over online information. In fact, Vietnam has already expressed concerns about ChatGPT’s responses to prompts about the Communist Party of Vietnam and its founder, Hồ Chí Minh, labeling them as insufficiently patriotic. Such responses highlight the potential for chatbot technology to be used for reinforcing government narratives and suppressing dissenting viewpoints.

The Need for Awareness and Action

It is crucial that individuals and societies recognize the potential of chatbots to reinforce censorship and take decisive action. The emerging use of chatbots for online censorship mirrors the early promise of social media platforms to circumvent state-controlled media. Governments quickly adapted then, too, by blocking social media platforms, mandating content filters, or promoting state-aligned alternatives. The increasing ubiquity of chatbots demands a clear-eyed assessment of their potential to perpetuate censorship, and effective responses will be needed to protect internet freedom.

Conclusion

While chatbot technology offers numerous benefits for users, it is essential to recognize its potential for online censorship. The use of chatbots for evading government censorship initially provided a glimmer of hope for individuals in repressive regimes, but authoritarian governments have adapted by imposing stricter controls on chatbot technology. China, in particular, has taken a leading role in using chatbots to enforce information controls, while Russia emphasizes technological sovereignty and restricts chatbot responses. Other countries should take note of these developments and be prepared to address the potential threats posed by chatbot censorship. It is crucial to raise awareness and take action to safeguard internet freedom in the face of evolving censorship techniques.

Source: https://www.wired.com/story/chatbot-censorship-china-freedom-house/