2024 Elections & Russian Information Operations: from Localized to Hyper-Personalized Targeting
Latest TTPs of Russian Information Operations (IOs) Targeting the U.S. Election - What to Keep in Mind and Anticipate Next - How Research Can Help Us Anticipate Our Own Narrative Vulnerabilities.
Hey there.
We are inundated with news about ‘disinformation’, ‘AI’ and ‘conspiracy theories’. Sometimes, it only takes a single comment from a politician to flood our news alert feeds. Amid the outcry over such an unwelcome and untimely comment, we may miss important research publications that most mainstream media outlets consider niche.
Last December, I missed an important report by Clemson University’s researchers Darren Linvill and Patrick Warren on the evolution of Russian information operations (IOs) targeting U.S. audiences. I stumbled on it last week, thanks to NYT journalist Steven Lee Myers, who surfaced it and followed up on the evolution of this IO.
Their reports are significant resources to understand the plans of Russian threat actors for the U.S. election. I thought I would take the time to summarize the main tactics described and what we can learn from them to anticipate what could be coming next.
I also wanted to cover another type of research at the end of this newsletter, continuing last week’s discussion about narrative vulnerabilities, but this time focusing on our own narrative vulnerabilities.
What to expect:
Russian IOs Targeting the U.S. Election - Context
We all wonder how Russia will target the U.S. election this year.
The Kremlin is engaged in resource-intensive information warfare in Ukraine. Yevgeny Prigozhin, the chief of the Internet Research Agency (IRA), is no longer alive to conduct its historical influence campaigns against the U.S. And the way manipulated information now propagates has shown that foreign actors are no longer needed to push a conspiracy theory or to encourage political violence.
Looking at the first signals of Russian IOs targeting the U.S. is critical to get bits of answers regarding the following questions:
Strategy: Will Russia target the U.S. election to influence the election of a candidate? Or will the U.S. election be treated as a secondary event whose results can be leveraged in the information warfare targeting Ukraine?
Resources: Will past IRA resources be leveraged to spread manipulated information, indicating signs of a continuation of its activities? Or will new resources be created, associated with new tactics?
Efforts: Will Russia display considerable efforts to target the U.S. election, with highly personalized narratives, microtargeting and all kinds of infrastructure being exploited? Or will it resort to “perception hacking”, a tactic already used in 2022, to give the impression that the election was hacked when it actually was not?
We have already seen Russian assets leveraging the US-Mexico border situation and exploiting the ‘civil war’ narrative to amplify dissension between U.S. states and the Federal government. If you have missed it, you can find it here.
Last Thursday, in his latest article, journalist Steven Lee Myers described another case of Russian IO targeting the U.S. election, drawing upon Clemson University’s research ‘Infektion’s Evolution: Digital Technologies and Narrative Laundering’. The research and the NYT article give us insights on the latest Tactics, Techniques and Procedures (TTPs) leveraged by Russia.
What do we learn?
Russian IOs - Tactics and Techniques
To describe the TTPs used, I rely on the ABCDE framework to organise and list them below:
Actor
Four ‘inauthentic news websites’ have been identified as online vectors of Russian manipulated information and propaganda targeting U.S. audiences: D.C. Weekly, the New York News Daily, the Chicago Chronicle and a newer sister publication, the Miami Chronicle.
These websites share links with the Russian information manipulation ecosystem, including the ‘Prigozhin galaxy’, in several ways:
The news websites use fabricated journalist personas to lend the content legitimacy. The profile image of one inauthentic journalist, ‘Paul Martin’, is apparently a photo of George Eliason, a pro-Russia American journalist living in the Donbass. His photo also appears on the website of the Foundation to Battle Injustice, founded by Yevgeny Prigozhin.
One of the pieces of content shared was a video targeting the wife of President Zelensky to discredit her. In this video, a woman claims to have been an intern at Cartier and to have assisted the wife of President Zelensky with a purchase. She claims that Zelenska became angry with her service and insisted that she be fired. It was revealed that this woman was in fact a student and salon manager living in Saint Petersburg, where the IRA was established.
The domain dcweekly.org pointed to an IP address affiliated with John Mark Dougan, a former police officer who fled to Russia in 2016 and claims to be an independent pro-Russian journalist in Donbass.
The domain dcweekly.org shares repeated links to known Russian propagandists and official Russian government sources.
Aside: If you want to know more about the actor behind the websites, look at this investigation published this Monday by researcher Scot Terban aka Dr. Krypt3ia, who connected the sites to Technologies LLC Business Media, a company owned by Alexander Sergeevich Frolov and Mikhail Leonidovich Burchik, who was the Executive Director of the IRA.
Behavior
Impersonation of local news websites: the inauthentic news websites appear to mimic actual news organizations. For instance, the Miami Chronicle’s website claims to have covered Florida news since 1937. A Chicago Chronicle also existed a century ago. The four websites bear the names of United States cities: Chicago, New York, Washington and Miami. They seem to target cities that “lean more democratic” according to the results from bestneighborhood.org, a random website that popped up after I asked Google whether Miami and Chicago lean Democratic or Republican.
Laundering pro-Russian information: these websites are used to launder pro-Russian information according to Clemson University’s researchers. The report from last December identifies three steps: placement, layering, and integration. The U.S. websites are used in this last step to integrate inauthentic content placed by an external source and propagated through several layers, such as African media, before being shared with Western audiences.
Content
Plagiarizing existing news outlets to leverage existing divisive issues: initially, the websites regularly published major breaking news with the potential to polarize audiences on topics such as crime and politics. The content was of extremely high quality, plagiarized from mainstream media such as Reuters and the Daily Mail.
Duplicating and editing content from Fox News, RT and the Gateway Pundit through the use of AI: according to the researchers, the threat actors appear to have changed their strategy over time, leveraging AI to more discreetly establish credibility by stealing and rewriting external stories originating from Fox News and other outlets. The stories were fairly short and not directly traceable to other websites. The articles had a similar structure and used the same graphics as Fox News. The publishers used AI tools, such as OpenAI’s ChatGPT. They instructed the language model to anonymize, combine and rewrite stories. Bits of text mistakenly left in the articles reveal the type of instructions that were given, such as:
not mentioning any specific news networks or authors,
selecting articles that can either impact public trust in government agencies, involve a potential threat to law enforcement officers, or provide insight into the National Republican Congressional Committee’s confidence in the Republicans’ chances to win the elections,
framing the article with a specific tone, such as a cynical tone, a tone critical of the U.S. position in the war in Ukraine, or an imitation of George Orwell’s style.
Use of AI to translate articles: some articles were apparently translated by AI.
Degree
From mainstream social networks to inauthentic news websites to semi-private networks: several narratives seem to have originated on platforms such as X and Instagram before being integrated into news articles circulating on several websites and eventually landing on the inauthentic U.S. websites. The news articles then seem to have been used to spread the narratives on alternative, semi-private networks such as Telegram, Reddit, Gab and Truth Social, probably because of moderation policies on bigger platforms.
Effect
It may be too soon to assess the effects of this campaign, but two intended effects already appear to be pursued here:
Polarizing opinion around sensitive issues: the issues chosen and the way audiences seem to be microtargeted make it likely that the Russian campaign aims to polarize opinion, a strategy that has been ongoing in the U.S. since 2014.
Tarnishing the credibility of Western outlets to create distrust: impersonating Western outlets may be intended to attack their credibility and push audiences to turn to alternative outlets or social networks to consume information.
Russian IOs - From Red to Blue
You may wonder why I have listed all the potential TTPs of this Russian information operation. Why does it matter to recognize the tactics that have been used? Because it may help identify continuities and disruptions across successive Russian operations.
Here’s what I found:
The creation of inauthentic news websites is not a new thing. Back in 2020, Russian agents linked to the IRA created Peacedata and NAEBC, two news websites targeting respectively left-wing and right-wing audiences. However, last time they used real journalists and writers to write articles on real events, while this time they have fabricated a whole fictional ecosystem, which looks more like 2015 IRA tactics. In 2015, the IRA created clones of Louisiana TV stations and newspapers, as well as inauthentic personas on several platforms to fabricate a story.
The targeting of localized audiences has been ongoing since 2016. However, the localized content strategy seems to have turned into a hyper-personalized content strategy. The news websites bear the names of Democrat-leaning cities in the U.S., which could suggest that ‘swing cities’, rather than ‘swing states’, are targeted this time. It is reminiscent of the strategy described by Viginum in Portal Kombat, targeting the occupied territories of the Donbass. It is also reminiscent of the latest Russian leaks and the Kremlin’s strategy to micro-target Russian audiences.
The use of ChatGPT for content creation is, to my knowledge, the first known use case of OpenAI’s ChatGPT by foreign state-affiliated threat actors.
How could we develop countermeasures, departing from these few key takeaways?
Listing potential new targets, such as other U.S. cities described as leaning towards the Democratic Party. We can imagine that Russian agents rely on open-source websites to find information about U.S. cities.
Detecting the creation of new website domains using U.S. cities’ names in their URLs and checking them out. Some may not publish content directly, sitting there prepositioned. Others may bear the names of Republican-leaning cities, because the IRA’s past strategy has not been to push one side only but to polarize both sides even further.
Carefully examining the structure of published articles and, in light of the instructions given to ChatGPT detailed above, looking for content aiming to:
impact public trust in government agencies,
involve a potential threat to law enforcement officers,
provide insight into the National Republican Congressional Committee’s confidence in the Republicans’ chances to win the elections.
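To make the second countermeasure concrete, here is a minimal sketch of a domain-watchlist heuristic. The city and keyword lists are illustrative placeholders I chose for this example (a real pipeline would draw on a proper city dataset and a certificate-transparency or new-domain feed), and only `miamichronicle.org`-style names come from the reported operation:

```python
import re

# Hypothetical watchlist: large U.S. cities on both sides of the partisan
# divide, since past IRA campaigns polarized both sides.
WATCHED_CITIES = [
    "chicago", "miami", "newyork", "houston", "phoenix",
    "philadelphia", "atlanta", "dallas", "boston", "seattle",
]

# News-style words seen in the impersonating sites
# ("Chronicle", "Weekly", "News Daily", ...).
NEWS_WORDS = ["chronicle", "weekly", "news", "daily", "herald", "times"]

def looks_like_city_news_domain(domain: str) -> bool:
    """Flag domains combining a watched city name with a news-style word."""
    host = domain.lower().split("/")[0]
    # Drop the TLD, then strip punctuation so "chicago-chronicle" matches.
    label = re.sub(r"[^a-z]", "", host.rsplit(".", 1)[0])
    has_city = any(city in label for city in WATCHED_CITIES)
    has_news = any(word in label for word in NEWS_WORDS)
    return has_city and has_news

# Example run against names modeled on the reported sites, plus a control:
candidates = ["miamichronicle.org", "chicago-chronicle.net", "example.com"]
flagged = [d for d in candidates if looks_like_city_news_domain(d)]
# flagged -> ["miamichronicle.org", "chicago-chronicle.net"]
```

A heuristic like this would of course produce false positives (plenty of legitimate local outlets match the pattern), so it is a triage filter for manual review, not an attribution tool.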
I hope this little practical exercise will be useful in helping everyone look in the right direction.
But remember, it can all be a distraction.
The latest DDoS attacks targeting French Ministries’ websites are here to remind us that Russian threat actors do not follow one single campaign, rely on multiple proxies, and play with our perception of being hacked.
Your Press Corner
Here’s the weekly readings to keep you connected to all the conversation on global elections and information operations:
Why Russia’s Election Matters to Putin | Council on Foreign Relations (cfr.org) - what’s at stake for President Vladimir Putin in this election?
Russian independent media outlet Meduza faces ‘most intense cyber campaign’ ever (therecord.media) - “In February 2024, the Russian authorities launched a series of cyberattacks against Meduza, more intense than any we’ve ever faced,” the organization said in a statement on Monday.
OII | 2024 Russian Presidential Elections – How Digital Technologies Are Used to Wield Authoritarian Power (ox.ac.uk) - Researchers from the Oxford Internet Institute and the University of Bremen share their insights on how digital technologies can be used to wield authoritarian power in the context of the Russian election.
From Russia with Spin: How Content from Russian State Media is Laundered by Polish Blogs – Alliance For Securing Democracy (gmfus.org) - a small network of Polish news blogs—Lega Artis, News na Dzis, and Daily Blitz—may serve as pathways for Russian state media and pro-Kremlin media to reach Polish audiences.
The West Is Still Oblivious to Russia's Information War (foreignpolicy.com) - Paralyzed by free speech concerns, Western governments are loath to act.
Bono por tuitear: así se paga por impulsar propaganda y desinformación en Venezuela (cazadoresdefakenews.info) - How Venezuelan teachers are recruited to support Maduro’s ‘educational achievement’. A deep dive into the Venezuelan government’s communication strategy to diffuse its propaganda, drawing a parallel between the GEC’s description of the Russian propaganda and disinformation ecosystem and the Venezuelan ecosystem, divided into five pillars (Comunicaciones Oficiales, Mensajería financiada, Medios proxy, Tropas comunicacionales, Desinformación cibernética).
How to combat fake news in Ghana’s media ahead of the 2024 election - Adomonline.com - In Ghana’s election, the threat comes from the inside.
Finnish far-right videos highly recommended by YouTube during the Presidential race - CrossOver - the findings demonstrate the existence of a “funnelling effect” in search and recommendation results, meaning that users engaged in a diverse range of political searches were directed by YouTube’s recommendation system towards a limited set of video channels.
Brazil seeks to curb AI deepfakes as key elections loom (france24.com) - ahead of municipal elections, likely planned for October 2024, Brazil bans the use of deepfake technology in the election.
Disinformation in Elections: Democracy Must Be Protected from AI Abuse - The Japan News (yomiuri.co.jp) - A comparison between the EU’s and Japan’s responses to information manipulation, calling for the Japanese government to consider legal regulations based on the European example.
The Role and Potential of Artificial Intelligence in Extremist Fuelled Election Misinformation in Africa – GNET (gnet-research.org) - the case of Kenya and Nigeria and what it means for Africa.
Secretaries of state worry about AI impersonating them - POLITICO - Election officials are important enough to fake — and public enough to make it easy to do — but anonymous enough that voters may easily be tricked.
ATA-2024-Unclassified-Report.pdf (odni.gov) - Annual Threat Assessment of the U.S. Intelligence Community.
Disinformation campaigns likely to undermine EU elections, experts say – Euractiv - “When we analyse recent national elections, we clearly see a focus on narratives that seek to undermine the integrity of elections,” Delphine Colard, Deputy Spokesperson of the European Parliament told journalists.
Second edition (March 2024): Disinformation narratives during the 2023 elections in Europe – EDMO - This expanded and revised report analyzes over 1,000 fact-checking articles published in the context of thirteen elections in twelve different European countries.
Three Strategies To Combat Media Disinformation Amidst Political Polarization and False Narratives (prnewswire.com) - Prioritize Facts, Develop Media Literacy, Embrace Reason and Accountability.
Webinar: How to Investigate Elections – Global Investigative Journalism Network (gijn.org) - five senior journalists with experience in investigating electoral processes offer tips on cutting-edge tools, investigating candidates, developing sources, and tracking disinformation.
Video: GIJC23 – Investigating Elections – Global Investigative Journalism Network (gijn.org) - and in this webinar, four veteran reporters, who have investigated political campaigns and elections around the world, offer tips on cutting-edge tools, investigating candidates, developing sources, tracking disinformation, and more.
Tips From — And For — Women Investigative Journalists Reporting on 2024 Elections – Global Investigative Journalism Network (gijn.org) - To mark International Women’s Day during a potentially tumultuous election year, GIJN’s global team spoke with women investigative journalists about their election coverage best practices.
Photographer steps inside Vietnam’s shadowy ‘click farms’ - KTVZ - rare insight into the workshops that hire low-paid workers to cultivate likes, comments and shares for businesses and individuals globally.
How Israel Mastered Information Warfare in Gaza – Foreign Policy - Pro-Israel misinformation aimed at dismissing and discrediting Palestinian narratives is the fruit of a decade-long effort.
Confucius Institute: China's Influence Operation in Washington State Public School Exposed | National Review - Recently obtained emails shine a light on CCP–backed influence efforts in Washington State.
“Influencing the influencers:” a field experimental approach to promoting effective mental health communication on TikTok | Scientific Reports (nature.com) - how simple, cost-effective, and influencer-led interventions can influence mental health content on TikTok.
News From The Research Front: Is There a Way to Anticipate Future Narrative Vulnerabilities?
Last week, I discussed the vulnerabilities of external threat actors’ narratives, exploring ways to exploit them. This week, I would like to delve into our own narratives’ vulnerabilities, drawing upon new research.
A few days ago, I came across an interesting post on Substack titled 🧮 How to calculate the inclusiveness of online groups (substack.com) by Hahrie Han and Josh Kramer, summarizing their paper on Dynamic Polycentrism.
The title “How to calculate the inclusiveness of online groups” struck me, as any attempt to quantify a social behavior and human feeling seems to me a very courageous undertaking.
Then reading through the article, I was happy to find references to Tocqueville and Ancient Greece’s agora, recalling the fundamentals of democracies that enable us to discuss politics, live in a plural society and socialize freely without fears of violence and intimidation.
Then I encountered a twist that I had not read anywhere before.
The authors discuss the concept of pluralism, a core concept of democracies. Pluralism is a process to enable a healthy debate to produce agreements and consensus. However, for the authors of the paper, pluralism is not neutral.
Pluralism can have both a positive and negative influence. The authors remind us, for instance, how civic organizations were ‘vibrant’ and ‘thick’ in the years leading up to the Weimar Republic.
Furthermore, I was captivated by the question they then ask:
“how do we know ahead of time which civil society organizations are going to be carriers of democracy and which ones will be carriers of authoritarianism?”
The authors then consider how we can effectively measure pluralism, recognizing the impossibility of knowing in advance what the lines of division will be in the years to come. They ask:
“how do we assess how pluralistic a community is without knowing all the different ways in which people might disagree?”
They offer one way forward with the concept of polycentrism, the idea of multiple nodes of power gravitating in decentralized ways, creating a “poly-centralized community”. The various centers of power are seen as safeguards of pluralism.
“We anticipate that a community that has lots of different centers of gravity spread throughout it is going to be more flexible and able to adapt to divisive challenges that might come its way, because those centers of gravity can flex and flow with the challenges that emerge. That’s why polycentrism is important to pluralism.”
And for the authors, polycentrism, or rather ‘dynamic polycentrism’ is an “observable, attainable measure that gives us a hint about the likelihood of communities to be truly pluralistic or not.”
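The authors’ actual measure is more sophisticated, but the intuition, many centers of gravity rather than one dominant hub, can be approximated on interaction data. The sketch below is my own illustrative proxy, not the paper’s metric: it computes a Herfindahl-style concentration index over how much activity each “center” in a community attracts, where a lower score suggests a more polycentric community.

```python
# Illustrative proxy (not the paper's metric): how concentrated is activity
# across the centers of a community? Input: mapping of center -> interaction count.
def concentration_index(activity: dict[str, int]) -> float:
    """Herfindahl-Hirschman index over activity shares.
    Ranges from 1/n (perfectly polycentric) to 1.0 (one center dominates)."""
    total = sum(activity.values())
    if total == 0:
        return 0.0
    return sum((count / total) ** 2 for count in activity.values())

# A monocentric community: almost everything flows through one hub.
monocentric = {"hub": 90, "a": 5, "b": 5}
# A polycentric community: activity spread evenly across four centers.
polycentric = {"a": 25, "b": 25, "c": 25, "d": 25}

assert concentration_index(polycentric) < concentration_index(monocentric)
```

Here the polycentric community scores 0.25 (the minimum for four centers) while the monocentric one scores above 0.8; tracking such a score over time would be one crude way to watch a community drift from many centers of gravity toward a single dominant one.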
Why did I bring this up?
You may have already guessed that I enjoy trying to connect the dots between reflections and ideas from the threat intelligence sector and the human sciences. The notion of anticipating lines of division and societal vulnerabilities is very attractive to me. It looks like something external actors would also try to do, targeting our polycentric societies to polarize them further. From massive targeting to polycentric targeting.
If that is the case, what if we repurposed all the maps we created for our investigations to explore another path?
We would not be following the footprints of our threat actors this time, but rather the dynamic relations between our respective micro-communities, to detect potential vulnerabilities that could ultimately be exploited by threat actors.
We would not learn about external actors’ tactics at first, but about the way they see us. And then we could act on it.
Captivating, isn’t it?