Snurblog — Axel Bruns

How Meta’s Third-Party Fact-Checkers Are Learning to Think Like the Machine

Snurb — Thursday 31 October 2024 22:41
Journalism | Industrial Journalism | 'Big Data' | Artificial Intelligence | AoIR 2024 |

The final presenters in this session at the AoIR 2024 conference are Yarden Skop and Anna Schjøtt Hansen; their interest is in the third-party fact-checking network employed by Meta. This network operates on the basis of a Meta-provided online dashboard that highlights potentially problematic content; in practice, the dashboard’s operation directs fact-checking away from political content spread by major political figures and towards other forms of content.

Many fact-checking organisations around the world now substantially rely on income from Meta through their engagement in its fact-checking programme; this is part of a global post-publication debunking turn, but it also creates a dependency on Meta funding, of course. Meta claims that debunking reduces the reach of problematic posts, but provides no externally verifiable data to support this claim.

This process can be understood as an assemblage between human and non-human elements; it brings together Meta and third-party staff, the dashboard and its algorithms, and a number of other components. The present project explored this through interviews with fact-checkers and participation in the International Fact-Checking Network’s annual meetings.

Especially initially, fact-checkers essentially had to train the Meta dashboard to better identify posts that were both problematic and fact-checkable – the fact-checkers’ assumption here is that the system, which originally produced plenty of false positives, would learn from their actions. This was also seen as an unacknowledged labour contribution to the system, however, and some fact-checkers refused to participate in this way.

Fact-checkers also developed their own nuanced understanding of the veracity labels available to them, and specific labelling practices emerged over time – while those available labels also affected the fact-checkers’ own thinking about truths and falsehoods. To end-users, of course, these reasonings would remain opaque – they would only see the final fact-checking labels.

In this sense, fact-checkers are becoming ‘machine learners’, in Adrian Mackenzie’s understanding: their processes of critical thought are being shaped by the logics and data structures of machine learning. Meta’s fact-checking programme is cementing the politics of demarcation between fact and non-fact.

Except where otherwise noted, this work is licensed under a Creative Commons BY-NC-SA 4.0 Licence.