AI, Information Operations & Elections: Opening Pandora's Box
Framing Our Problem Regarding Information Operations and Generative AI - What We Expected - What We Saw - What Could Be Coming Next + Stepping Out of the Box: the Azerbaijan Election's DIMI
Hey there.
In this newsletter, I have tried not to give in to the temptation of covering artificial intelligence. However, following the election cycle and the media cycle, only one word seems to be winning the war for attention: artificial intelligence, or simply AI.
A word that triggers excitement, fantasy and fears.
I was initially reluctant to talk too much about AI, as I believed it was not crucial to information operations, which had already largely succeeded in sowing doubt and confusion without relying on sophisticated AI-generated content. However, recent AI-related events and their coverage in the news have left me with an uneasy feeling.
Only one aspect of AI’s malign use and potential effects is currently being reported. Moreover, the way the problem is framed, with AI portrayed as a Pandora's box that has been opened, makes it challenging to take a step back and see how AI-generated content is perhaps responding to a mindset that has been growing over the last two decades.
Let’s dive into AI!
Framing the Problem
First, you may have noticed that when I discuss AI and election-related information manipulation and interference, I tend to focus on AI-generated content in all its forms: text, image, video, and audio.
There are many ways of using AI today, ranging from chatbots that assist platform users to the automation of certain tasks for increased production efficiency. Its applications extend to facial recognition tools and to refining internet searches to offer hyper-localized, tailored content. In the context of elections, AI can increase information accessibility by translating content into multiple languages or answering questions with the help of chatbots. However, as cautioned by the Brennan Center, the use of AI is not without risks.
This technology is being deployed rapidly, but it comes with hallucinations and biases. For example, when a chatbot interacts directly with a voter, it may deliver incorrect information presented as fact. There is also concern around the use of voters’ data to train or fine-tune AI tools, which could lead to privacy breaches.
In the context of election-related information manipulation, the type of AI that receives the most discussion is AI-generated content, also known as “deepfakes”. It is mostly addressed with a narrow focus: will AI-generated content undermine voters’ trust in candidates, parties and the integrity of the electoral system?
While this framing of the problem was probably accurate a few years ago, it may no longer reflect the current state of the information environment and online social behaviors, as highlighted by Quinta Jurecic, senior editor at Lawfare, in yesterday’s panel organized by Brookings on “Assessing the impact of generative AI and other online threats in a historic election year”. The successive U.S. elections of 2016, 2018, and 2020 showcased how the dissemination of misleading content has evolved: originated and spread by external actors affiliated with Russia in 2016, then originated and spread in 2020 by domestic actors who genuinely believe that ballots are being stolen or altered.
The COVID-19 pandemic crystallized this evolution of online socialization, where mis/disinformation gives way to conspiracy theories, accompanied by a multiplication of online actors empowered to declare what is and isn’t true. In an era of post-truth and conspiracy theories, AI-generated content might be received as additional evidence for pre-existing beliefs.
It therefore seems difficult to sustain a framing of the problem as a contentious relationship between AI and the online audience, in which the latter is a passive creature subjugated to AI’s influence.
AI is also desired and welcomed by the audience, as it allows the creation of an imagined world that aligns with their expectations and beliefs.
In the context of elections, AI-generated content may not necessarily reach new audiences and lead them to consume more misleading, fake or absurd content about their favorite candidate or political party. Instead, it is more likely to be used to reinforce existing beliefs within certain communities and to entrench them further in their respective echo chambers. This will likely accentuate the current society-driven mistrust and distrust of elections and democratic institutions.
Framing AI as a tool that can both manipulate and be manipulated by the audience opens the way to understanding all the tactics, techniques and procedures (TTPs) that revolve around AI.
What the Research Says About AI-Generated Content and Information Operations
Before we look at concrete uses of AI in elections since 2023, it may be worth taking a step back and rereading this piece published last year, which summarizes the results of a workshop conducted by experts from Stanford University, OpenAI and Georgetown University’s Center for Security and Emerging Technology. This workshop addressed emerging threats and potential mitigations associated with the use of generative language models in automated influence operations.
Using the ABC Framework, which I hope you now use daily to analyze information operations, this workshop predicted the following threats:
Actor: assuming that large language models become widely accessible, state actors already conducting campaigns will see their costs decrease. Private actors, including PR and marketing firms, may integrate this technology to offer new services to political actors.
Behavior: novel techniques may be developed with a focus on increasing the scalability of information operations, including mass-messaging campaigns and long-form news generation on unattributable websites. Ultra-localized tactics could emerge, including personalized chatbots interacting with targets one-on-one. Many tactics remain unforeseen.
Content: the costs of generating content will likely decrease as its scale and quality increase. Inauthentic content may, therefore, become less detectable.
These observations demonstrate that the effects of AI-generated content are not limited to the type of content generated and its role in disseminating manipulated information. They also emphasize that AI serves as a technological innovation capable of enhancing information operations, particularly in terms of scalability, and of increasing the incentives for new proxies to join campaigns.
These predictions are worrisome.
Fortunately, the workshop also addresses mitigation measures across various areas, which you can find detailed in the report.
Meanwhile, Felix M. Simon, a researcher at the Oxford Internet Institute, has shared a different analysis of the potential effects of AI on information manipulation, focusing specifically on misinformation, defined as incorrect or misleading information spread without intention to deceive. According to Simon, AI may increase the supply of misinformation, but not necessarily the demand. Improving the quality of misinformation may not necessarily increase exposure to it or its effect. Micro-targeting techniques may not be significantly affected by generative AI and may not have the intended persuasive effects.
This argument, that AI-generated mis/disinformation may not be the right thing to focus on when it comes to information operations, is also shared by researcher Carl Miller, who founded the Centre for the Analysis of Social Media at Demos in 2011 and CASM Technology in 2014. He has spent the last decade researching social media intelligence (SOCMINT), extremism, online electoral interference, radicalization, digital politics, conspiracy theories, cyber-crime, and Internet governance, and he recently featured alongside Renée DiResta on Tristan Harris’s podcast on AI and elections.
He argues that, rather than focusing on AI-generated content, which isn’t “going to change the game”, we should look at all the kinds of influence surrounding information operations. According to Miller, the issue with information operations is not false or misleading content propagating online. “It’s much more about confirming people’s beliefs about the world and guiding them in a certain direction than it is ever about telling them something which is untrue to get them to change their mind”.
As we keep brainstorming about potential malign uses of AI, Carl Miller has highlighted one threat targeting one of our human vulnerabilities: our loneliness. He imagines a scenario in which threat actors develop automated or semi-automated friendships with targeted audiences. Friendships that could be used over time to suggest ideas or highlight specific controversies, “swimming with people’s cognitive biases”.
Fascinating but very scary.
Generative AI in Elections: A Summary of the Methods and Tactics Observed Since 2023
It is perhaps time to stop brainstorming, before we end up like Carl Miller, unable to sleep at night imagining the worst possible outcomes of generative AI, and move on to a summary of the main methods and tactics observed since 2023. What have threat actors actually managed to accomplish using AI? What are the main strategies observed?
Discrediting a candidate, a party, a traditional media or the electoral process
Slovakia: During the 48-hour moratorium ahead of the polls opening, AI-generated audio recordings were published on Facebook impersonating Michal Šimečka, leader of the liberal Progressive Slovakia party, and Monika Tódová, a journalist at the daily newspaper Denník N. The voices discussed ways to rig the election, partly by buying votes from the country’s marginalized Roma minority.
Poland: Weeks before the election, an election campaign ad combining AI-generated audio with authentic video clips impersonated former Polish Prime Minister Mateusz Morawiecki and was published on Twitter. The AI-generated audio portrayed him as reading the content of leaked emails from the inbox of his former chief of staff, Michał Dworczyk, revealing tensions within his United Right coalition.
U.S. (Chicago): During the 2023 Chicago mayoral election, AI-generated audio published on Twitter impersonated the Democrat Paul Vallas, making him appear indifferent to police shootings.
→ We can see that generative AI functions as an enhancer of existing tactics: developing an electoral fraud narrative, leveraging existing vulnerabilities such as the marginalization of local minorities and fears of hacking attacks, and disseminating the content during the electoral window of opportunity.
Suppressing votes
U.S. (New Hampshire): During last month’s primary election in New Hampshire, AI-generated calls targeted Democratic voters. They impersonated President Joe Biden’s voice to discourage voters from participating in the primary.
→ Intimidating or discouraging voters from voting is not a new tactic either. The narrative used in this case, suggesting that voting may not be particularly impactful or significant, exploits an existing vulnerability in democracies, as low turnout undermines the legitimacy of an election’s result.
Empowering opponents
Pakistan Election: During the current election in Pakistan, AI-generated video ads have been created by the Pakistan Tehreek-e-Insaf (PTI) party to campaign for its leader Imran Khan, who is currently in jail. These clips impersonate Imran Khan delivering speeches written from his jail cell, urging supporters to turn out on election day.
→ This is an interesting application of generative AI: even in an already non-pluralistic election, it enables all parties to participate, provided they have the resources to do so.
Across all these cases, it is interesting to note:
the supremacy of audio content.
the recurring provider behind the generated content in most cases: the U.S. company ElevenLabs.
the role of mainstream platforms such as Facebook and Twitter as the primary dissemination channels, despite criticism of the inadequacy of their policies in limiting the propagation of such content.
The Taylor Swift Case: A Simple Joke or Forged Content to Suppress Opponents’ Voices?
In the U.S., Taylor Swift, already a subject of conspiracy theories, has become the target of another form of AI misuse: “deepfake explicit images” that spread across social media. Thanks to Graphika, we now know who was behind this content and the conditions surrounding the development of these images. The research company traced the images back to a community on the message board 4chan (not really a surprise). This community created the explicit images as part of a game, using image generation tools including DALL-E, Microsoft Designer and Bing Image Creator. Users also shared tactics for circumventing the platforms’ filters. As reported by the New York Times, this type of contest is not new and was previously seen around the 2022 U.S. elections. It exploits the appetite of users in such communities for circumventing restrictions.
While explicit images and videos are nothing new, I wanted to share with you a more interesting aspect: the knock-on effect of disseminating this type of content on platforms’ moderation policies, and its implications for the development of new tactics targeting elections.
In the U.S., moderation has always been a highly sensitive issue, requiring a balance between the demand to protect information integrity and accusations of censorship, particularly given the implications for the First Amendment. Within this debate, however, one topic has been bipartisan: online child safety. Platforms have been scrutinized more than ever. Meta has recently been accused of failing to add staff focused on the online safety of minors. Just last week, the Senate Judiciary Committee “grilled CEOs from TikTok, X and Meta about online child safety”, as NBC News put it.
In this context, platforms have been particularly vigilant over the past year about protection against harmful content, to the extent that, during the Taylor Swift deepfake episode, the platform X blocked searches for Taylor Swift for several days. According to the company’s head of business operations, Joe Benarroch, this was done “with an abundance of caution so we can make sure that we were cleaning up and removing all imagery”.
Now pause. Imagine you are a threat actor who doesn’t particularly appreciate influencers like Taylor Swift promoting Democratic values, especially to a young audience voting for the first time this year. As a threat actor, you would much prefer a Republican victory or, even better, the failure of the entire electoral system. In this war for attention, where no single narrative manages to dominate, investing in tactics to suppress other voices might be more effective than trying to be the loudest in the crowd. And moderation policies are right there to help you.
Instances from the past, for example the 2017 election in Armenia, have shown that domestic voices, such as local independent journalists and media, can be suppressed using platforms’ own policies. On the night before the election, some local independent Armenian journalists and media outlets saw their Twitter accounts temporarily suspended because the accounts had been mass-reported to Twitter by adversarial threat actors. They were only unblocked after civil society activists reached out to Twitter.
This weaponization of platforms’ policies is significant and is not confined to the Armenian case. Deepfake pornography may well become part of this phenomenon: it is “a digital forgery” that can be used in information operations targeting our democracies. Dr. Mary Anne Franks, a professor at the George Washington University Law School and an expert on the intersection of civil rights, free speech, and technology, emphasized the importance of the concept of “digital forgery” in the latest episode of the Your Undivided Attention podcast. She explains that, unlike the term deepfake pornography, “digital forgery” highlights what such content truly does: take over your identity.
I think that’s it for me on the topic of AI for today. But before leaving you perhaps completely depressed, I thought I would share this interesting resource by the MIT Media Lab: Detect DeepFakes: How to counteract misinformation created by AI. You can participate in an experiment to test your ability to detect digital forgeries, pick up tips for detecting them, and find out more about research on the topic.
It might prove more helpful in debunking and countering AI-generated content in the coming days than all this intellectual talk on AI.
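If you would like to go beyond the eye test, here is a minimal, purely illustrative sketch of how one might screen an image programmatically with an off-the-shelf classifier through the Hugging Face transformers pipeline. To be clear, this is not the MIT Media Lab's tool, and the model name below is a hypothetical placeholder, not a recommendation:

```python
# Minimal sketch: screening an image with an off-the-shelf image classifier.
# Setup assumed: pip install transformers torch pillow
from transformers import pipeline

# NOTE: "example-org/deepfake-detector" is a hypothetical checkpoint name;
# substitute a deepfake-detection model you have vetted yourself.
detector = pipeline(
    "image-classification",
    model="example-org/deepfake-detector",
)

# The pipeline accepts a local file path or URL and returns label/score pairs.
for result in detector("suspect_image.jpg"):
    print(f"{result['label']}: {result['score']:.2%}")
```

Treat any such score as a weak signal rather than a verdict: detectors tend to lag behind generators, which is precisely why human-judgment experiments like MIT's remain valuable.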
Your press corner
Here are the weekly readings to keep you connected to all the conversations on global elections and information operations:
New Hampshire opens criminal probe into AI calls impersonating Biden - The Washington Post - Life Corp. and Lingo Telecom, the two American companies that are being investigated, did not respond immediately to requests for comment.
Deepfakes, dollars and ‘deep state’ fears: Inside the minds of election officials heading into 2024 | CyberScoop - State and local officials say they need more funding and resources to deal with a deluge of threats ranging from AI to threats of personal violence.
EU turns to Big Tech to help deepfake-proof election – POLITICO - Platforms including Facebook, X and TikTok will be required to identify AI-generated content.
In Fighting the “Disinformation” Problem, We Risk Losing the Battle for Our Minds to Big Tech - Australian Institute of International Affairs - a brilliant essay by Dr. Emma Briant about the real problem, “our attention economy”. Here’s an extract: “Solutions predicated on a “disinformation” problem will always be inadequate as they do not recognize the higher magnitude threat that the attention economy has created: an infrastructure to enable, incentivise, and profit from campaigns that systematically weaponise anxiety”
Meta urged to rethink manipulated media policy by oversight board | AP News - Meta’s oversight board follows up on President Biden-related GenAI content.
Meta is running ads selling QAnon merchandise, despite banning QAnon content years ago | Media Matters for America - years after the company banned advertising in support of the conspiracy theory, these ads keep running on the platform.
Media Watch: Taiwan faces post-election ballot fraud conspiracies — Radio Free Asia (rfa.org) - claims of electoral fraud have become mainstream during and after an election.
Disinformation on YouTube: Research and content moderation policies - EU DisinfoLab - an excellent resource by EU DisinfoLab on YouTube.
Disinformation on TikTok: Research and content moderation policies - EU DisinfoLab - and because they do not limit themselves to one platform, here’s another excellent factsheet this time for TikTok.
TikTok launches ‘Pakistan Election Center’ - Pakistan - Business Recorder (brecorder.com) - TikTok is also trying to play its part in the Pakistani election, collaborating with Agence France-Presse (AFP).
Media Matters for Democracy launches Facter for newsrooms (factcheckhub.com) - another Pakistani civil society-led fact-checking initiative.
OII | Introducing the 2024 OII Elections Initiative (ox.ac.uk) - The Oxford Internet Institute launched the OII Elections 2024 initiative, which aims to cover the digital aspects of some key elections in 2024.
EU leaders 'alert' after Latvian MEP accused of spying for Russia - Metsola | Euronews - The EU Parliament has opened a formal probe into Latvian lawmaker Tatjana Ždanoka, accused in an investigation by Russian newspaper The Insider of working as an agent for the Russian Federal Security Service (FSB) - the successor to the Soviet-era KGB - from 2004 to 2017.
Russian spies impersonating Western researchers in ongoing hacking campaign (therecord.media) - Hackers working for Russia’s intelligence services are impersonating researchers and academics in an ongoing campaign to gain access to their colleagues’ email accounts.
The making of a Russian disinformation campaign: What it takes (Opinion) | CNN - using the past to investigate the present.
How intelligence and transparency can combat electoral interference (globalgovernmentforum.com) - 4th episode of the saga by the Global Government Forum.
CISA unveils election resource page for officials and workers - Nextgov/FCW - the #Protect2024 site was launched this Wednesday.
Iran accelerates cyber ops against Israel from chaotic start - Microsoft On the Issues - a teaser for the U.S. election?
Azerbaijan: When FIMI Makes Way for DIMI
Azerbaijan held its presidential election this Wednesday, following what has been described as Azerbaijan’s Boringest Election Campaign Ever.
However, in an election where the outcome is already known, in a country where the opposition has been silenced, it is interesting to examine the type of Domestic Information Manipulation and Interference (DIMI) being produced by Azerbaijani state media and its propaganda mill. Here are a few headlines; you don’t need to read the content to understand what they are about:
Diaspora representatives: "We always wish to see Ilham Aliyev as President" (azernews.az)
Today.Az - Countries colonized by France are rising up. Corsicans for freedom
We can observe two parallel strategies here, both targeting international audiences. The first aims to gather support for President Ilham Aliyev from the Azerbaijani diaspora abroad. The second aims to legitimize Baku’s September 2023 military victory reclaiming the disputed Nagorno-Karabakh region from Armenia, while discrediting countries traditionally aligned with Armenia, including France.
Thank you for taking the time to dive into this newsletter. Let me know what you thought of it and share any tips to improve it!