Snurblog — Axel Bruns
Why Do Users Perceive Search Engines as Biased?

Snurb — Thursday 5 June 2025 01:13
Search Engines | Weizenbaum-Institut 2025 |

The final session on day one of the Weizenbaum Conference starts with a second paper by Victoria Vziatysheva, whose focus here is on how users perceive search engine bias. Search engines are amongst the most critical elements of Internet infrastructure, but are sometimes criticised for supposed biases in their results; this may affect how users engage with them.

Users tend to trust algorithmic systems more when they are perceived to be fair, and such trust is also affected by users’ knowledge about these systems: users with limited knowledge tend to be more trusting. Search engine users tend to have folk theories about algorithms, based on qualitative experiences, and may personify these systems, treat them as feedback control systems, or develop popularity-based (search results based on popularity) or manifestation-based (search results manifesting users’ own personas) notions about algorithms.

The present study conducted a two-wave survey in 2024 with some 1,400 German-speaking Swiss citizens; it asked them open-ended questions about their perceptions of why search results might be biased, and correlated this with participants’ digital skills and literacies, political leanings, and demographic attributes.

Responses were coded into categories that attributed possible bias to algorithm design, content features, actors influencing search results, normative concerns, political and commercial influence and exploitation, and differences in user input and behaviour.

Content-related explanations dominated (41%); these were followed by algorithms (25%), actors (22%), influence and exploitation (22%), and normative concerns (15%); user input appeared only very rarely (2%). Amongst the actors, content creators, businesses, developers, political groups, and users were the most prominent, in that order.
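Notably, these category shares sum to well over 100%, which suggests (my assumption here, not stated in the post) that each open-ended response could receive more than one code. A minimal sketch of how multi-label coding produces such percentages, with invented example data:

```python
from collections import Counter

# Hypothetical coded responses: each participant's open-ended answer
# may carry several category codes, so percentages can sum past 100%.
coded_responses = [
    {"content", "algorithms"},   # one response, two codes
    {"content"},
    {"actors", "influence"},
    {"content", "normative"},
]

# Count how many responses mention each category at least once.
counts = Counter(code for codes in coded_responses for code in codes)
total = len(coded_responses)
percentages = {cat: 100 * n / total for cat, n in counts.items()}

print(percentages)  # "content" appears in 3 of 4 responses -> 75.0
```

Because the denominator is the number of responses rather than the number of codes, the shares are not mutually exclusive, which is consistent with the figures reported above.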

Particular types of possible bias that participants noted, in order, were ads and paid results, financial influence, problematic information and sources, lack of objectivity, ranking, misinformation and propaganda, personalisation, etc.

Younger and left-leaning respondents pointed more often at content and algorithm factors; right-leaning and distrustful users highlighted influence and exploitation; those with stronger information skills pointed to content as a cause of bias.

Overall, then, users blamed content rather than algorithms for perceived bias; the commercial nature of search engine operators was also frequently highlighted. Algorithms themselves were named more often by users with higher digital literacy.

Except where otherwise noted, this work is licensed under a Creative Commons BY-NC-SA 4.0 Licence.