1. AI is already being used to influence elections
Targeted advertising and algorithmically determined timelines are nothing new on social media. “We’ve also seen the use of bots to influence public conversations,” says Knuutila, “and the quality of AI-generated media is fueling the spread of misinformation.”
Days before a tight vote in Slovakia in 2023, deepfake audio emerged that may have contributed to the election outcome. Unexpected events or news that can sway an election have come to be known as October surprises, because they land just before the November timing of American elections. “This is the kind of last-minute offensive that will probably only become more common,” says Knuutila. “We can’t say how much impact this had in the end, but it did demonstrate the insufficiency of platforms’ policies and tools for detection.”
Deepfakes are not only for deception: the Argentinian election in 2023 was rife with AI-generated memes, creating a free-for-all atmosphere where politicians and citizens alike were empowered to create and agitate. And generative AI is being used to create ‘political avatars’ or stand-ins for imprisoned leaders (see: Imran Khan’s AI clone in Pakistan) or to reach voters in more languages (see: the multilingual robo-calls of New York City mayor Eric Adams).
2. Social media algorithms amplify political messages that elicit anger
Social media platforms have shifted from a ‘friends’ model, in which relationships had to be reciprocal and confirmed, to a ‘followers’ model, in which broadcasting to an audience became paramount, and on to the current ‘for you’ model: an addictive, algorithmically assembled stream drawn largely from outside the accounts a user actually follows. Facebook introduced emoji reactions in 2016 and, from 2017, gave them more weight in its ranking than the conventional like button.
According to Knuutila’s research, the rise of the nationalist Finns Party in Finland’s 2019 parliamentary election was linked to a high proportion of ‘anger’ reactions on Facebook, where the party’s posts were also shared more than those of any other Finnish party. “An algorithm that favors emotional reactions unintentionally promotes angry posts,” says Knuutila, “and it also amplifies an angry tone in political campaigning.” Alternative algorithms could optimize for engagement from diverse groups of users rather than for the sheer volume of emotional reactions. Such a system would promote content that bridges user groups, as the sketch below illustrates.
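How such a bridging-oriented alternative differs from reaction-weighted ranking can be sketched in a few lines of code. The sketch is purely illustrative: the weights, the post fields, and the mapping of users to groups are assumptions made for the example, not the ranking formula of any real platform. The first function scores posts by weighted reactions; the second rewards posts that attract engagement from several distinct user groups.

```python
# Illustrative sketch only: weights, field names and the user-group mapping
# are assumptions for this example, not any platform's actual formula.
from collections import defaultdict

def engagement_score(post, reaction_weight=5.0, like_weight=1.0):
    """Reaction-weighted ranking: emotional reactions count more than plain
    likes, so posts that provoke anger tend to rise."""
    return reaction_weight * sum(post["reactions"].values()) + like_weight * post["likes"]

def bridging_score(post, group_of_user, n_groups):
    """Bridging-style alternative: reward posts that draw engagement from
    many different user groups, not just a large volume of reactions."""
    engagement_by_group = defaultdict(int)
    for user in post["engaged_users"]:
        engagement_by_group[group_of_user[user]] += 1
    total = sum(engagement_by_group.values())
    if total == 0:
        return 0.0
    # Diversity factor: the share of known groups that engaged at all, so a
    # post reaching three groups outranks one with equal engagement from one.
    return total * len(engagement_by_group) / n_groups

posts = [
    {"likes": 10, "reactions": {"angry": 40}, "engaged_users": ["a1", "a2", "a3"]},
    {"likes": 30, "reactions": {"love": 5}, "engaged_users": ["a1", "b1", "c1"]},
]
group_of_user = {"a1": "A", "a2": "A", "a3": "A", "b1": "B", "c1": "C"}
ranked = sorted(posts, key=lambda p: bridging_score(p, group_of_user, n_groups=3), reverse=True)
```

In this toy example, the post with many angry reactions from a single group wins under the reaction-weighted rule, while the post engaged with by three different groups comes out on top under the bridging rule.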
3. The legacy of AI-generated images and text is generalized doubt
As generative AI tools continue to improve, producing misinformation is becoming cheaper, and the quality of fake video and fabricated audio keeps rising. Experts surveyed by the World Economic Forum even rank AI-generated misinformation among the most severe risks facing humanity.
Research on misinformation, however, tends to paint a more complicated picture. “Media doesn’t need to be realistic to be persuasive when you can simply cut and edit it to alter the meaning, creating a so-called ‘cheapfake’,” says Knuutila. One well-known example is a video of former US House Speaker Nancy Pelosi that was slowed down to create the illusion that she was drunk. Videos of Finnish politicians have likewise been manipulated by reordering segments of speech with basic editing.
One effect of generative AI seems certain: people will start to distrust even authentic media, and disputes over what is AI-generated and what is real will proliferate. For some politicians, this will become a useful defense. When embarrassing audio recordings of the Argentinian politician Carlos Melconian emerged in 2023, for example, he stayed out of trouble by claiming they were deepfakes.