2024 is a significant year in the emerging relations between politics and digital technology. With a high number of democracies voting in national polls (including India, the USA, the UK, France, Hungary, Bangladesh, South Africa and Algeria), 2024 has been described as ‘the year of elections’.
At the same time, political organisations now have (in many cases for the first time) easy access to generative AI to support their election campaigns. Much has been written about the dangers associated with generative AI’s ability to rapidly produce synthetic digital content for websites and social media platforms at minimal cost. Concerns have also been raised about the difficulties of verifying the sources from which AI platforms such as ChatGPT, Grok and DALL-E produce their content and the relative ease with which they can be used to manufacture deepfakes. Here we reflect upon a significant, if overlooked, development within the political use of generative AI – namely the production of obviously fake, or what we term ‘shallowfake’, visual content by right-wing populist movements. Unlike the deepfake, the shallowfake does not aspire to take a form that could be easily mistaken for reality. Instead, it produces idealised reflections of the present and future to provoke heightened positive and negative emotional responses. Shallowfakes are particularly difficult to regulate because they do not claim to be real. Ultimately, we claim shallowfakes reflect a novel political communication strategy with significant ethical implications in their own right.
Generative populism in the ‘year of elections’
In the election campaigns of 2024, populist movements have shown themselves to be prominent early adopters of generative AI-imaging tools. An explorative media analysis shows that diverse populist parties in Germany, the Netherlands, France, Argentina, New Zealand, Italy and the USA are already leveraging the potential of AI imaging. The use of AI-generated images in the US election has been subject to much debate. In recent weeks, Donald Trump has posted obviously fake imagery suggesting that Taylor Swift is endorsing him and depicting the Democratic National Convention in Chicago as a Soviet military rally. Two prominent adopters of generative visual populism in Europe are the French Rassemblement National and the German Alternative für Deutschland parties. What stands out about these two parties is the continuous stream of AI-produced visual content they are feeding electorates. Generative AI seems to form a core part of their social media strategy. While others in the political mainstream have been more cautious, populists are unapologetically tapping into a supercombo of benefits associated with AI imaging: first, it offers low-cost, almost effortless (‘one prompt away’) ways of producing desired imagery, and second, it sets the imagination free (‘get what you want’).

Image circulated by Republican Party Nominee Donald J. Trump via X.

Images circulated by AfD leader Norbert Kleinwächter via X (left) and on the website leuropesanseux.fr created by Rassemblement National (right)
The European non-profit organisation AI Forensics recently published a media analysis of AI imagery used in the 2024 French legislative elections. Identifying 51 visuals in total, mostly from right-wing populist groups, it reveals how AI imaging is ‘used to dramatize party-specific narratives and sensationalistic topics, particularly amplifying anti-EU and anti-immigrant messages’. As the two examples above illustrate, the right-wing pioneers of generative populism typically employ AI imaging to fabricate their objects of antagonism. It is their ‘enemies’ who get synthesised. We thus find alarmist, demonising images of crammed migrant ships and male refugee groups expressing primal anger.
It is, however, not only dystopias and antagonists that are the objects of generative populism. The Dutch Party for Freedom (PVV) illustrates how generative AI is also used to produce idealised (if still highly exclusionary) content. Via X, PVV leader Geert Wilders shared synthetic imagery that consists of a symbolic mix of utopianism and nationalism. A sunny future is sketched where traditional iconography (the Dutch flag, tulips, windmills) is celebrated and people (read: the white nuclear family) are (once again) happy. The Italian Deputy Prime Minister, Matteo Salvini, a member of the Lega party, has also been spreading utopian-style AI-generated content, visualising self-selected Italian symbols (Dante, penne and the white nuclear family). Interestingly, he juxtaposes that utopian imagery with images of what he is against, almost playfully feeding the antagonist narrative. It is quite something that these parties are now able to produce these narratives in sharply colourised visual form. Moving effortlessly between ‘othering’ and self-glorification, these parties are using AI imaging in ways that threaten to deepen divisive populist narratives.


Images circulated by PVV leader Geert Wilders via X (above) and by Lega politician Matteo Salvini via X (below)
The not-so-innocent power of the ‘hyper-real fake’
In a recent report, the Digital Forensic Research Lab claimed that in an analysis of the European elections, it could only find evidence of one AI-generated image that had been labelled as such. This raises the important question of how best to interpret and respond to shallowfakes and the generative populism that creates them. A Code of Conduct was developed for the 2024 European Parliament elections, which directly addressed the use of AI. This voluntary code of conduct prohibits the publication of discriminatory political material and falsified or fabricated data. It also states that AI should not be used to generate deceptive content that alters the appearance of candidates, and that any AI images should be clearly labelled. Despite its best intentions, it is apparent that codes of conduct such as this are ill-equipped to deal with shallowfakes. Shallowfakes are not a form of fabricated data and as such they cannot be easily regulated as a form of mis- or disinformation. While codes of digital conduct tend to focus on the use of generative AI to deliberately deceive, the cartoonish, parodying quality of shallowfakes makes the argument against deception difficult to sustain. Shallowfakes are undoubtedly expressions of political propaganda, but in their strategic ambiguity they appear to embody a new form of political communication. They are, perhaps, best conceived of as what the French sociologist Jean Baudrillard termed ‘hyper-real fakes’. As hyper-real fakes, these political images do not so much pervert reality as produce new, highly bespoke realities which delimit the contours of right-wing populist fantasies. As images with no authors or originals, they are difficult to regulate on the basis of their truthfulness. Yet the power of shallowfakes should not be underestimated. Labelling them as AI-generated will make little difference as their very nature is to be playfully fake.
Perhaps our best line of defence against such images is to carefully deconstruct the simulated realities these images represent (thus exposing their exclusionary, intolerant and derogatory logics) and to make sure that the political groups that produce them are held to account for their dehumanising and divisive impacts.
Joram Feitsma is Assistant Professor in Public Governance at Utrecht University. Mark Whitehead is Professor in Human Geography at Aberystwyth University.
Image credit: Igor Dernovoy via Unsplash