I’m in Berlin this week for the annual conference of the excellent Weizenbaum-Institut, which opens with a keynote by the great Claes de Vreese, asking whether citizens are ready for an AI democracy (it won’t surprise anyone that the short answer is no). Democracy and politics are transforming rapidly at the present moment: democracy is under threat from populist and far-right movements and various other actors, and there are widespread concerns about democratic backsliding around the world. In a reversal of the trends of the 1990s and 2000s, the number of true democracies in the world is shrinking, while the number of autocracies is growing again. Similarly, protections for the freedom of expression are weakening in many countries, and safeguards for the quality of electoral processes are declining.
What might the impact of artificial intelligence be on all of this, then? We need to be careful in how we assess this. There has been plenty of discussion of AI’s threat to democracy, but some of it has been substantially overhyped. We saw the rise of deepfakes in election campaigning, for instance, yet the actual impact of such content on electoral outcomes appears to have been minimal. As the IPIE has reported, most elections in 2024 featured some AI-related incidents, and while the sources of such AI content largely remained unknown, the harms from these incidents were limited.
These incidents also show the evolution of AI-generated content from text to audiovisual formats – AI content now comes as audio, images, and video, in various combinations of modalities – and these formats pose different challenges for democracy. Such developments show that AI has now become part of the general electioneering toolkit: it is used to generate campaign content, attract donations, reinforce conflicts and emotional divisions, target specific groups, spread misinformation, doubt, and fear, and persuade voters towards particular political positions.
AI companies had made various promises about the guidelines and guardrails they had put in place to prevent the misuse of AI during elections, but they rarely followed through on these promises. The same is true for social media platforms, which variously implemented and then abandoned their commitments to policing and labelling political content. Similarly, several political parties and candidates promised to use AI in a responsible fashion, yet not all of them did so or stuck to those promises.
Traditional media, in turn, covered these developments, and occasionally uncovered problematic activities, but also had only limited capacity to critically report on and analyse them; instead, they often reported rather uncritically on the commitments made by AI industries, platforms, politicians, and other stakeholders, while also making their own institutional and commercial arrangements with AI providers. Citizens, more generally, also engaged with AI and related content, and often did so on the basis of a limited understanding of these technologies.
All of this took place in the complicated, unstable, and uncertain context of 2024, yet things developed further in 2025, especially in the United States: the unprecedented intersection of government and industry, as symbolised by Elon Musk’s involvement in the US government, has further changed the playing field.
We need, then, greater AI literacy, more sophisticated attitudes towards AI, and – combining both of these aspects – better AI competence amongst citizens as they confront these challenges. This should involve all groups of users and potential users, from unskilled skeptics to expert advocates, as a recent Dutch study of AI knowledge and engagement patterns has defined these groups. Such AI competence also needs to form part of a much broader digital skill set.
But such individual-level approaches cannot do all of the heavy lifting, especially given the fast evolution of the technology. AI-generated images can no longer be detected easily from the small glitches they contain, for instance, so teaching such detection mechanisms is no longer effective; attitudes towards the use of AI in political debates are also strongly affected by the issues such content is being used to support – opposition to the use of AI depends on the context of the debate, in other words.
We need more collective approaches towards AI, therefore. These will need to include regulatory elements, as already initiated in the European Union (but made more difficult by the US tech industry’s alignment with the Trump administration in order to avoid such regulation); responsible company self-regulation (which remains limited and problematic); educational efforts (directed especially towards older generations); alternative infrastructure developments (such as the building of sovereign AI capabilities); and effective journalism on these developments (avoiding overly techno-optimistic coverage and asking more critical questions). All of this also requires considerable political will, of course.
This combination of individual- and societal-level approaches is urgently necessary, but it can only work if we also ensure that our democracies remain robust, or become robust again.