Making Sense of the AI Revolution

The second keynote speaker at COMNEWS 2023 this morning is Claes de Vreese, whose focus is on AI; he notes that Artificial Intelligence has been a theme of discussion for many years, but has really been turbocharged recently by the emergence of new technologies. Yet such surges of attention are normal developments in an emerging field, and we should not conclude from them that we are in the midst of a major AI revolution. There is also a great deal of self-serving rhetoric about AI from the AI companies themselves, of course.

AI itself remains underdefined, too. The definitions being used in the European Union are very broad, for instance, but also remind us that AI is more than just natural language processing and machine learning; many elements intersect in the emerging AI ecosystem, and we might be better served by thinking about ‘hybrid intelligence’ (which also involves humans) than about pure artificial intelligence at this stage.

AI is now affecting politics, public administration, the news media, and everyday citizens; it also raises questions about what journalists and researchers can do to fulfil their roles as watchdogs. Using automated processes to amplify political messages (for instance in social media) is nothing new, but AI enhances political advertisers’ ability to create new content and target it at specific audiences. Journalists and others will find it increasingly difficult to track and identify such activities.

Public administration is also affected: AI provides major new tools for managing citizens, services, and governmental functions, and algorithmic tools are also being used to monitor society. But such automated decision-making processes are far less transparent, and can no longer be fully explained.

News media themselves are increasingly incorporating AI into their own operations, both for content creation and in their distribution of content to audiences. This changes the entire process of journalism, from story research through content creation to story distribution. News organisations hope that this will make journalists’ work more efficient, deliver more relevant content to users, and improve business efficiency – but the question is whether the resources saved this way will also be reinvested into journalism itself.

A key battleground now is in the detection of disinformation: such content can be easily created, but detection lags behind, and many fact-checkers and other organisations are now seeking better approaches to doing so. How might the recipients of such disinformation be empowered to better detect when they are being lied to?

Citizens themselves are still coming to terms with AI, too. How do ordinary people see these developments, and what level of optimism or pessimism about technological advancements should they have? How indeed is public opinion being affected by AI interventions – and to what extent does public opinion itself in turn also feed back into AI systems themselves? What kind of public opinion are we gauging, for instance, when we observe social media activities, if they are overrun by AI content?

At this stage, most ordinary people still have no idea about how and where AI might be applied; there is a general sense of the key fields that may be affected, but not necessarily of how, and confronting people with scenarios about the possible uses of AI across diverse fields may help in developing a better understanding of such potentials (and threats) – yet initial results show that people still have broadly equal trust in the abilities of humans and AI systems to make informed decisions about critical matters. People do at least want to know whether they are engaging with humans or with AI systems, though.

More broadly, however, AI is also being seen as a threat to job security and data privacy; and current scholarship is also raising important questions about ethics and regulation, of course. There is a need to bring together regulators, researchers, and citizens in co-design processes in order to increase collaboration between stakeholders from different disciplines and to develop more targeted and practical questions about the path forward – for instance in journalism and other centrally affected areas.

There are now predictions that within a few years a very substantial percentage of all content will be artificially generated; if this comes to pass, how should such content be identified? We have already seen an AI-generated ad responding to Joe Biden’s announcement that he would run again for US President, launched just minutes after the announcement itself; this was labelled as AI-generated, but such labels are hardly visible to or recognised by audiences at the best of times.

What do we do with this as researchers, then? How do we address questions of literacy and detection? How do we ensure the presence of research and insights from non-WEIRD countries in all this?