For China, social networks are now a tool of disinformation
In the pantheon of state-run cyber operations, Russia has always led the way when it comes to disinformation and the sowing of social discord, while China has traditionally been associated with intellectual property theft. There are signs that this is changing, however, as China has apparently stepped up its social media disinformation campaigns.
Earlier this month, Mandiant Threat Intelligence reported two significant advances in online influence campaigns linked to the People's Republic of China: one involving the use of accounts in several languages on numerous social media platforms, and the other involving attempts to physically mobilize protests on the ground, on topics ranging from Hong Kong to COVID-19 conspiracy theories and even support for Scottish independence.
The Mandiant report is careful not to attribute these activities to the Chinese state. However, the scale and characteristics of the operations it details indicate a well-funded and carefully planned approach, consistent with activities identified by other researchers, such as the Oxford Internet Institute and the Harvard Kennedy School, as being state-sponsored.
China carries out major propaganda operations, but until recently their main focus was domestic. Beijing understood early on the potential threat the internet posed to national harmony and regime stability, and quickly created a censored version of the World Wide Web that has been dubbed the Great Firewall. Social media platforms such as Twitter and Facebook are blocked in China, as are some search terms. The technical measures used to filter content are supported by a large human workforce that manages digital consumption at home.
When it comes to international disinformation campaigns, however, China has appeared rather clumsy and "surprisingly imprecise and ineffective," according to a Stanford Internet Observatory report. Describing the so-called 50 Cent Brigade, which includes social media users paid by China to write comments in line with government rhetoric, Kerry Allen of the BBC Monitoring Unit characterized the typical posts as using "very serious language, too formal, like 'I resolutely oppose' a certain event in Hong Kong."
In recent times, however, China seems to have learned from Russia’s playbook in several ways.
First, there has been an increase in the range of topics covered by its online influence campaigns and in the number of languages they use. Mandiant has detected pro-PRC activity on 30 social media platforms as well as over 40 niche internet forums and other websites, such as the Argentine social media site Taringa and the Russian sites Vkontakte and LiveJournal. The activity is reportedly conducted in more languages than previously identified, including Russian, Spanish, German, Korean and Japanese, although in some cases the messages contain grammatical errors indicating that they were written by non-native speakers.
Second, other campaigns appear to be engaging in early attempts to mobilize real-world protests and demonstrations. The Mandiant report cites thousands of messages in different languages, including Japanese, Korean and English, calling on Asian Americans to demonstrate on April 24 in New York and to "fight back" against alleged "rumors" being spread by Dr. Li-Meng Yan, Guo Wengui and Steve Bannon that "the Chinese had manufactured the coronavirus." In some cases, social media posts provided an address where they claimed Guo lived.
The physical event turned out to be a failure, but this new tactic from Chinese disinformation peddlers appears to be inspired by Russian methods during the 2016 U.S. presidential election campaign. The Mueller report describes how agents of Russia's Internet Research Agency staged political rallies in the United States. It also provides evidence that the Kremlin had been preparing its influence operations for the 2016 election several years in advance.
The content of China's most recent influence campaigns also marks a departure from its usual approach. Instead of the familiar, impassive and patriotic statements aimed at burnishing China's reputation and characterized by "spammy behavior and rudimentary execution," the new approach is more aggressive. The Oxford Internet Institute's Cyber Troops Annual Report 2020 points out that China, along with Russia and Iran, "have capitalized on disinformation about coronaviruses to amplify anti-democratic narratives designed to undermine trust in health officials and government administrators." Among the COVID-19 stories disseminated in these campaigns, one claimed the virus was created in a U.S. military lab at the Fort Detrick base in Maryland, and that it was introduced to Wuhan by American armed forces during the Military World Games in 2019.
As interesting as these developments are, the danger they represent should not be overstated. To truly "win" at cyber-enabled disinformation requires a ruthless and reckless attitude toward consequences, aimed at amplifying social divisions and sowing distrust of authority and government among one's adversaries, whatever the cost. This is where Russia excels, drawing on techniques perfected during the Cold War era. Beijing always seems to be fumbling over what's enough and what's too far, and this may help explain the somewhat clunky approaches we've seen so far.
No matter where China takes its current, more aggressive approach to information operations, these recent reports indicate that whether the perpetrators are from Russia, China, Iran, or are domestic actors in the United States and Europe, things have moved quickly since 2016. At the time, Mark Zuckerberg dismissed the idea that a nation-state could have used his social media platform to spread disinformation and interfere in national elections. Five years later, the Oxford Internet Institute reports that more than 80 countries are now involved in computational propaganda, with "tools, capacities, strategies and resources used to manipulate public opinion."
When Edward Snowden released his revelations in 2013, he said he wanted to start a conversation about the use of technology and the growing trend of mass surveillance. A perhaps unintended consequence of that conversation was that many states quickly learned from the exposed U.S. operations the full extent of what was possible, and developed their own ambitions to do the same. Likewise, Russia's 2016 disinformation campaigns, backed by agents on the ground but mostly conducted remotely, taught state and non-state actors how easy such tactics are. Now everyone is doing it, although their effectiveness has not yet been conclusively demonstrated.
The Chinese-organized protest in New York City may have been pitiful, and some of the social media posts in these online campaigns remain stilted and obvious. But what we are seeing in terms of the newly aggressive tone and the expansion to other platforms and languages marks both a departure from previous approaches and a significant investment of resources by China. It looks like a learning process, and China is a quick study.
Emily Taylor is CEO of Oxford Information Labs and an associate fellow of the Chatham House International Security Program. She is also editor-in-chief of the Journal of Cyber Policy, a research associate at the Oxford Internet Institute and an affiliate professor at the Dirpolis Institute of the Sant'Anna School of Advanced Studies in Pisa. She has written for The Guardian, Wired, Ars Technica, The New Statesman and Slate. Her weekly WPR column appears every Tuesday. Follow her on Twitter at @etaylaw.