Silicon Sampling: Using LLMs to Simulate Social Media Conversations

The next speaker at the Indicators of Social Cohesion symposium is Ethan Busby, zooming in from Utah. His focus is on the use of Large Language Models in research; his current work centres especially on the analysis of conversations in social media spaces, and on the potential for automated tools to interact with such conversations.

Large Language Models have the potential to scale up such research and interventions, both in real-world and in simulated contexts. He introduces the concept of ‘silicon sampling’, which asks AI systems to assume a particular political persona in order to then simulate engagement in or moderation of online conversations. Such simulations can then also be used to improve the affordances of digital platforms.
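In practice, such silicon sampling often works by composing a system prompt that instructs the model to respond in character. The following is a minimal, hypothetical sketch of that idea; the persona fields and wording are illustrative assumptions, not Busby's actual prompts, and the message format follows the common chat-completion convention of role/content pairs.

```python
# Hypothetical sketch: building a 'silicon sampling' persona prompt.
# All persona details below are invented for illustration.

def build_persona_prompt(age, affiliation, region, issue_positions):
    """Compose a system prompt asking an LLM to role-play a simulated participant."""
    positions = "; ".join(f"{k}: {v}" for k, v in issue_positions.items())
    return (
        f"You are a {age}-year-old {affiliation} voter from {region}. "
        f"Your views are: {positions}. "
        "Reply to social media posts in this persona, in one or two sentences."
    )

# Messages in the role/content structure used by typical chat-completion APIs.
messages = [
    {"role": "system", "content": build_persona_prompt(
        45, "independent", "the Midwest",
        {"climate policy": "sceptical", "healthcare reform": "supportive"})},
    {"role": "user", "content": "What do you think about the new emissions bill?"},
]
```

A researcher would then pass this message list to whichever chat model is being used, and compare the simulated responses against observed human behaviour.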

The next step here is to develop frameworks for getting LLMs to talk to each other, and thus to simulate entire conversations in a realistic fashion. Such simulation then enables the testing of new interventions in discussion spaces, in order to facilitate more prosocial conversations.
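The conversation-simulation step described above can be sketched as a simple turn-taking loop between two persona-prompted models. This is an illustrative outline only: `generate` below is a stand-in for a real LLM call (it would normally send the persona as a system prompt plus the conversation history to a chat API), not part of any framework named in the talk.

```python
# Hypothetical sketch of a two-agent conversation loop for simulating
# online discussions between persona-prompted LLMs.

def generate(persona, history):
    # Placeholder for a real LLM call: a full implementation would pass
    # `persona` as the system prompt and `history` as prior turns.
    return f"[{persona} responds to: {history[-1]}]"

def simulate_conversation(persona_a, persona_b, opening, turns=4):
    """Alternate turns between two simulated participants."""
    history = [opening]
    speakers = [persona_a, persona_b]
    for t in range(turns):
        history.append(generate(speakers[t % 2], history))
    return history

log = simulate_conversation("Persona A", "Persona B",
                            "Should platforms moderate political talk more strictly?")
```

Interventions (say, a moderator agent injecting a prosocial reframing every few turns) could then be tested by modifying this loop and comparing the resulting conversation trajectories.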

Appropriate prompting of LLMs is critical to the performance of such silicon sampling. Unless models are trained to take on the personas of simulated participants, the results diverge substantially from observable human behaviours.

Once silicon sampling works, however, it becomes possible to test approaches to ‘persuasion for good’ that address problematic online practices and beliefs. Such interventions can help to increase researcher capacity, but prompting and task structures matter, and it is important to understand the strengths and limitations of the Large Language Model being used. And of course, if the results of such studies are to be integrated into real-life social media platforms, then the ethical implications of doing so must also be considered carefully.