Interactional Moderation by Instagram’s Bot Police

The final speaker in this AoIR 2023 session is Nathalie Schäfer, whose focus is on bots on Instagram. Bots are pervasive there, and some users have banded together to detect fake accounts and highlight automated interactions that are seen as problematic. They do so with ‘bot police’ accounts that ask to be tagged whenever users encounter bots, and also provide advice on how to detect bots and report them to Instagram. Other anti-bot accounts are set up solely to interact with social bots, and work as honeypot accounts whose postings are designed to attract automated interactions.

These accounts present themselves as ‘good online citizens’, who practise individual information care (actively choosing information sources that are not algorithmically curated or targeted), discourse care (following discussion norms of civility and respect), and considered contributions (communicating only authentic content and avoiding inflationary contributions). Such norms are shared understandings of how citizens ought to participate in public discourse on social media.

This is a kind of interactional moderation that complements conventional content moderation; it is driven by the community rather than by the platform operators themselves. But this also means that the bot police’s potential for action is limited: they still depend on the platform operators to impose the greater sanctions against bots that only they can implement. In this sense, they are advocates for a good digital society on Instagram.