Internet Technologies

How Divergent Skills Affect the Online Participation Divide

At the conclusion of my travels in Canada and Europe, I’ve made my way to Lugano for ECREA 2018. We start with the first of two keynotes, by Eszter Hargittai, whose focus is on the digital divide in online participation. The fundamental question here is who benefits the most from Internet participation, and who does not: do participation divides facilitate social mobility, or reproduce social divides?

The key point here is that digital divides cannot be solved by mere connectivity: getting online does not equate to using the Internet effectively and efficiently. Rather, such uses continue to be moderated by socioeconomic status, technical and social contexts, personal Internet skills, and the types of uses being made. Internet skills here include especially an awareness of what is possible, and the ability to create and share content, amongst a long list of others – and it is important to focus on such skills because users’ skill levels can be addressed through targeted interventions more quickly than other, more intractable factors.

Understanding the Datafied Society by Decentring Data

The second day at the iCS Symposium at IT University Copenhagen starts with a keynote by Lina Dencik. She explores the difficulties in researching the datafied society, building on several of the projects currently underway at the Data Justice Lab at Cardiff University. This work must involve researchers, but also civil society actors, practitioners, journalists, and others.

The datafied society represents an immensely fast-moving space; there are constant updates on development projects, company initiatives, government actions, data scandals, etc. As researchers, we must introduce a sense of slowness into this environment from time to time, in order to take a more considered and careful look at what is going on. Yet the speed at which new data-driven technologies are being implemented across society, often without having been fully trialled and tested, makes this very difficult and gives a great deal of unchecked power to the companies providing these technologies.

Three Distinct User Positions towards Algorithms

The final speakers in this AoIR 2018 session are Willian Fernandez Araújo and João Carlos Magalhães, who are interested in how ordinary people comprehend algorithms; to explore this, they captured Portuguese-language tweets that used relevant terms.

Three Narratives about Algorithms

The third speaker in this AoIR 2018 session is Martina Mahnke, who is approaching algorithms from a human rather than technical perspective. Indeed, the term algorithm is often used to avoid explaining exactly how automated systems function, and what logics they embed; the study of algorithms from the user’s or programmer’s view has a considerably shorter history to date.

Swiss Internet Users’ Awareness of Algorithmic Systems

The next speaker at AoIR 2018 is Noemi Festic, whose focus is on algorithmic content selection processes by automated systems. This includes search applications, recommendation systems, and a broad range of other automated tools; these govern user behaviour by limiting and shaping activities but thereby also provide a space for new forms of engagement.

Consequences of Our Lack of Understanding of the DMCA

The final speaker in this AoIR 2018 session is Aram Sinnreich, whose interest is in the continuing consequences of the U.S. Digital Millennium Copyright Act (DMCA) – and in particular its anti-circumvention elements that criminalise the bypassing of copyright protection mechanisms such as encryption, even in contexts where ‘fair use’ exceptions apply.

Mark Zuckerberg’s Free Basics Initiative

The next speaker in this AoIR 2018 session is Andrea Alarcon, whose focus is on Mark Zuckerberg’s Internet.org project. Its aim was to provide free basic Internet service around the world, especially for people who were within the Web’s reach but remained unconnected to it; access to Facebook itself was deeply baked into this initiative, and this generated significant accusations of building a walled garden.

Digital Rights and the Internet Freedom Agenda

The next AoIR 2018 speaker is Nathalie Maréchal, who focusses on digital rights technology: any kind of hardware or software that improves users’ privacy, access to information, and freedom of expression. This threatens government and corporate control of information flows in an age of surveillance capitalism, and is therefore also controversial; it challenges the networked authoritarianism that is beginning to take hold in many countries around the world.

Models for Digital Rights Campaigning

The next session at AoIR 2018 starts with Efrat Daskal, who begins with a brief review of the development of the digital rights discourse since the original Universal Declaration of Human Rights. Human rights in the digital age have developed especially since 2000, and the Internet Rights and Principles Charter of 2014 has made a particularly important contribution. This enshrined the rights to access to information and technology, privacy and safety, and freedom of speech.

Towards Indigenous Understandings of Artificial Intelligence

Well, we’re finally here: AoIR 2018 in Montréal has begun. We start with the keynote by Jason Lewis, who addresses the continuing rise of white supremacy in recent years. He begins by referencing the novel Riding the Trail of Tears, which discusses a retracing of the removal of the Cherokee from their traditional lands through virtual technology, and the possibility of Indigeneity in a digital earth.

But such a perspective clashes with white supremacy, which is well established in societal power structures even without further action to entrench it more deeply. Jason compares this with the multi-layer hardware and software stack that digital interfaces operate on; we are subject to the regimes that the stack places upon us and have no meaningful way to escape them. In much the same way, white biases are a feature, not a bug, of contemporary society at every level; in software, biases beget biases because new data and new systems are built on old data and old systems, and perpetuate their built-in assumptions, and the same is true in societal protocols. This is a millennia-long process of epistemological inertia.
