Taiwan's Problem Is Not Over... And It May Soon Become Our Problem Too
PRC's cognitive warfare targeting Taiwanese elections - Should we care about generative artificial intelligence-driven disinformation? - Unraveling a new TTP involving Taylor Swift
Hey there. It’s time to debrief Taiwan’s election and the PRC’s tactics, including deepfake videos and the use of proxies.
But before we jump in, I want to express my gratitude for all the support you've shown me since the launch of this newsletter. It really means a lot. And I hope you'll continue reading until the end of today’s newsletter. I've got a quick analysis on a potentially new TTP (tactic, technique and procedure) involving Taylor Swift!
But first things first: let’s debrief Taiwan’s election!
This week, I have been digging into reports and articles from DoubleThink Lab, the Taipei Times, Taiwan News, Graphika, and the Atlantic Council to try to answer the following, not-so-easy question: what strategy and tactics did the PRC use to interfere in Taiwan’s 2024 election?
Decoding narratives: a key to understanding the broader strategy
In the run-up to the election and through election day, PRC-affiliated actors pushed a series of narratives aimed at undermining Lai Ching-te’s candidacy and discrediting DPP figures such as outgoing President Tsai Ing-wen. These narratives included various fabricated accusations and conspiracy theories designed to chip away at the DPP’s moral integrity. Consistent with the previous instances of Chinese interference in Taiwan’s elections in 2022, 2020, and 2018, the PRC’s primary objective likely remains to discredit the ruling party, the Democratic Progressive Party, and specifically its candidate, now president-elect, Lai Ching-te.
However, this primary objective is complemented by another goal: to sow discord within the Taiwanese population and increase its distrust of the electoral system. Various narratives have thrived on platforms, leveraging existing domestic issues, but also portraying the 2024 election as a choice between war and peace. The DPP has been depicted as a vassal of the United States, while the United States has been portrayed as skeptical about defending Taiwan in case of war. Furthermore, in the aftermath of the election, attempts have been made to propagate the now infamous narrative of electoral fraud. Allegations, such as ballot-rigging, were directed at the Central Election Commission.
But how did Chinese-affiliated actors succeed in spreading these narratives?
Tactics to convey the narratives include deepfake videos, Facebook personas and AI-generated anchors.
The primary medium for disseminating these narratives seems to be video content. On platforms like YouTube and TikTok, videos targeting the DPP have proliferated. Some of these are AI-generated, featuring an impersonation of President-elect Lai Ching-te making false statements, such as endorsing the KMT. How were these videos presented to the Taiwanese audience?
So far, three tactics have been investigated:
DoubleThink Lab: X, Facebook, and YouTube personas published content on Taiwanese politics, presenting it as supposedly personal opinion. This material was then captured in screenshots and published on Facebook pages. Content amplifiers (the fans of these Facebook pages) then spread it through Facebook communities, such as anti-DPP, pro-KMT political and business associations.
Graphika: A persona on YouTube and TikTok published video content promoting the KMT party and discrediting the DPP, later amplified in Taiwanese Facebook groups by over 800 inauthentic Facebook profiles posing as Taiwanese citizens.
Liberty Times (also covered in the English-language Taiwan News): An e-book titled “Secret History of Tsai Ing-wen” was uploaded to Zenodo, a public repository. This 300-page e-book, filled with allegations against President Tsai Ing-wen, was disseminated on WeChat through direct conversations and was even referenced on Wikipedia. It was also used to create over a hundred videos spread on YouTube, X, Facebook, and Instagram, in which a virtual, AI-generated anchor reads out the e-book’s allegations. According to Taiwan’s national security officials, these videos were produced with the video-editing software CapCut, owned by the Chinese company ByteDance.
Here’s an illustration taken from Liberty Times’ article below.
More and more proxies
The tactics described above demonstrate an increased inclination toward covert actions, utilizing proxies as intermediaries. According to DoubleThink Lab, the actors behind the Facebook pages are located in Cambodia, Malaysia, and China.
Additional reports have emphasized the use of proxies to disseminate the PRC’s narratives:
Recruitment of local media and journalists: the Taiwanese journalist Lin Hsien-yuan, working for the fringe outlet Fingermedia, published a fake online poll in December showing the KMT ahead of the DPP, contradicting every other poll. Other Taiwanese outlets and businesses are suspected of having received funds from the Chinese Communist Party. For instance, the Want-Want Group, a media company based in Taiwan, received subsidies to amplify pro-KMT content.
Vote-buying: investigations are ongoing regarding Taiwanese citizens allegedly accepting free trips from Chinese officials in exchange for election support. Online gambling sites have also been used to incentivize people to vote for specific candidates in exchange for money.
Leveraging diplomatic tools: in the aftermath of the election, the Chinese diplomatic network was mobilized to amplify the PRC’s propaganda. In France, the Chinese Embassy issued a statement denouncing recent articles about Taiwan in French media outlets. In the Pacific region, the island nation of Nauru announced it was switching diplomatic recognition from Taiwan to China, reducing Taiwan’s diplomatic allies to 12.
This increased use of proxies, online and offline, also means increased micro-targeting of audiences. Over the years, the PRC has demonstrated an ability to spread its content across many platforms, including manga subforums and bulletin boards.
Looking ahead: what is the impact of the PRC’s interference in this 2024 election year?
Regarding the Taiwanese election, it may be too early to determine the impact, as the information manipulation campaign is still ongoing. According to the Atlantic Council, we may expect more economic coercion and military pressure to be exerted on Taiwan in the coming months. As Kenton Thibaut phrases it: “we can expect Beijing to enact consequences on Taiwan for a DPP victory—and to create the evidence to support its own narratives”. The PRC’s attention is now focused on May 20, the day of the Taiwanese president’s inauguration and the speech he will deliver. Until then, the PRC will keep exerting pressure to ensure that the speech reflects some form of alignment with the CCP’s interests.
Taiwan’s problem is therefore not over, and it will soon become ours too. The tactics and methods we have witnessed in this election case study will repeat themselves this year, perhaps within weeks or months. Or could it already be happening?
The case of AI-generated videos - should we worry?
This week, one of these tactics, the use of deepfake videos, was spotted targeting UK Prime Minister Rishi Sunak ahead of the UK election. Research conducted by Marcus Beard, former lead of the UK government’s digital counter-misinformation strategy, found that more than 100 deepfake video ads impersonating Sunak were sponsored on Facebook. Some videos also impersonated BBC journalists and led to a spoofed BBC News page promoting a scam investment.
Side note: this operation is reminiscent of the “Facebook Hustles” operation debunked by CheckFirst. If you haven’t read that report yet, it is worth taking the time!
This week too, the widely publicized World Economic Forum Global Risks Report 2024 put AI-driven election disinformation in the spotlight, ranking misinformation and disinformation as the number one global risk over the next two years.
Certainly, deepfake videos, audio, and AI-generated images will be used throughout the 2024 election year. It is already the case. However, the significance we attach to them, and their ability to interfere in elections, depend on our understanding of how information operations work. Do we really believe AI-generated content will have more impact on audiences’ opinions than cheaper techniques?
Foreign actors’ motives are not necessarily to impose some kind of truth on our societies. They seek to sow discord and confusion and to create distrust towards our democratic institutions. No need for any sophisticated AI; a cheapfake will do the job!
Close your eyes for a second and imagine the following tactic: a fake account posts a picture taken from a previous election in another country, claiming electoral fraud. The post is retweeted by a network of bots and trolls linked to conspiracy groups. Soon, an influencer or a mainstream outlet notices it and publishes something about it, reaching its own, far larger audience. Then the algorithms of platforms and search engines make sure everyone has seen it. Simple, isn’t it?
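To make the arithmetic of that cascade concrete, here is a minimal back-of-the-envelope simulation. The stage names, follower counts, and rates are all illustrative assumptions I made up for this sketch, not measured data; the point is only how the relay structure, not the content, compounds reach:

```python
# Toy model of an amplification cascade: one seed post is relayed by a
# bot network, picked up by an influencer, then surfaced by recommendation
# algorithms. All numbers below are made-up assumptions chosen purely to
# illustrate the compounding effect.

stages = [
    # (stage name, accounts relaying, average audience per account)
    ("seed account", 1, 200),
    ("bot/troll network", 500, 150),
    ("influencer pickup", 1, 300_000),
    ("algorithmic recommendation", 1, 2_000_000),
]

total_impressions = 0
for name, relayers, audience in stages:
    impressions = relayers * audience
    total_impressions += impressions
    print(f"{name:>28}: ~{impressions:,} impressions")

print(f"{'total':>28}: ~{total_impressions:,} impressions")
# One cheap screenshot, no AI required: the reach comes from the relay
# structure, not from the sophistication of the content itself.
```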
Nonetheless, this does not mean we should do nothing against AI-driven disinformation, as the former NATO Secretary General argues this week in this article: We Must Go to War Against Deepfakes Now to Keep November Voting Fair
What can we do? So far, civil society organisations across the globe have developed a range of tools, from media literacy to fact-checking and digital education. Governments have been preparing for this threat, developing new synergies with civil society organisations and platforms. OpenAI has also announced a plan to deter disinformation targeting elections, which involves watermarking content created with its DALL-E image generator. Additionally, other platforms have announced a coalition to secure the 2024 elections. However, the question remains: will these measures be effective, or are they just strategic communications aimed at shaping our perceptions of these big tech players?
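For readers curious what such watermarking looks like under the hood: provenance schemes like C2PA, the standard OpenAI has pointed to for DALL-E images, embed a signed manifest directly inside the image file, in a container labeled with the ASCII string "c2pa". The sketch below is my own deliberately naive heuristic, not a real verifier: it merely scans a file’s raw bytes for that label as a hint that a manifest might be present. Genuine verification requires parsing and cryptographically validating the manifest with a proper C2PA tool.

```python
# Naive heuristic check for an embedded C2PA provenance manifest.
# C2PA manifests live in a container labeled with the ASCII string
# "c2pa", so its presence in the raw bytes is a weak hint that the file
# carries provenance metadata. This is NOT verification: anyone can
# insert these bytes, and a real check must cryptographically validate
# the signed manifest. Illustration only.

import sys

def maybe_has_c2pa_manifest(path: str) -> bool:
    """Return True if the raw file bytes contain the 'c2pa' label."""
    with open(path, "rb") as f:
        data = f.read()
    return b"c2pa" in data

if __name__ == "__main__":
    for path in sys.argv[1:]:
        hint = ("possible C2PA manifest" if maybe_has_c2pa_manifest(path)
                else "no C2PA marker found")
        print(f"{path}: {hint}")
```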
Your press corner
Here are the weekly readings to keep you connected to the conversation on global elections and information operations:
Russian Influence in Latin America: Measuring Digital Latin American Public Opinion | StopFake - Paying attention to weak signals that may indicate changes in public adherence is key to understanding global strategies.
Social media deepfakes flood South Asian countries ahead of elections - CSMonitor.com - Challenges are rising for South Asian countries, where big platforms are leaving local actors alone to monitor disinformation.
A US-Sanctioned Oligarch Ran Pro-Kremlin Ads on Facebook—Again | WIRED - Reset, a key nonprofit organization countering disinformation, repeatedly warned Meta about pro-Kremlin ads targeting Moldova’s local elections last November. Despite the warnings, no action was taken, and Meta earned over 200,000 dollars from the ad campaign.
Elections Canada Launches ElectoFacts: A Tool to Combat Misinformation - TechStory - On Canada’s latest efforts ahead of its election, and other international initiatives.
China’s meddling in Taiwan election opens year of misinformation threats - The Washington Post - A broad summary of the PRC’s interference and what it means for the U.S.
Organised chaos: how Russia weaponised the culture wars (globalgovernmentforum.com) - The first part of a report on foreign interference in elections.
Taylor Swift, a new TTP?
Ending on a lighter note, I wanted to share a thought about a perhaps-new tactic, technique, and procedure (TTP) that may currently be developing.
This week, I came across a piece of news from AFP reporting on a narrative spreading in the American right-wing sphere:
Taylor Swift is a “front for a covert political agenda”; Swift could be a “Pentagon asset”.
This narrative struck me because it reminded me of an already-known TTP: the use of local influencers by foreign actors to launder narratives to targeted audiences. Local influencers are used to develop content aligned with the narratives of the foreign actor’s campaign.
This time, however, the TTP seems to have been twisted. The local influencer, here Taylor Swift, is not directly laundering content to targeted audiences. Instead, her potential ability to do so is instrumentalized to present the Pentagon as conducting political warfare against its own citizens.
Burner Influencers?
To what purpose? As the article concludes, this TTP could be used to undermine Taylor Swift’s credibility and influence over her audience. In this context, influencers remain an asset. But while some may be covertly leveraged to amplify narratives, instrumentalizing their influence over audiences, others may be burned, like burner accounts in information operations, to suppress their influence and, in the long run, undermine their audience’s voice.
Thank you for taking the time to dive into this newsletter. Let me know what you thought of it, and share any tips to improve it!