Introducing a Theory of Acute Events

Seattle.
The next session at AoIR 2011 is our own fabulous panel on crisis communication. We begin with an overview paper by my CCI colleagues Jean Burgess and Kate Crawford, who introduce the idea of acute events. Kate begins by outlining the idea of media ecologies involving a wide range of different media platforms, and the specific performance of those platforms during acute events (such as crises, but also a range of similar events).

Jean follows on by defining acute events as significant real-world events which are associated with intense bursts of media activity – from political elections to royal weddings, from celebrity deaths to natural disasters. We can identify acute events on the basis of their timeline: they show a sharp peak of high volume and intensity (whether locally or globally); they are highly mediated, involving multiple actors and interests; on Twitter, they are coordinated around specific #hashtags; and they produce controversies and other adjunctive conversations associated more broadly with the topic.

We are able to observe a shift to an event-driven paradigm here: a focus on ‘peakiness’ or ‘burstiness’ in Internet research, which is now also supported by improving research methods for tracking substantial datasets on specific events. We have been able to observe such peaky events, for example, for the 2010 Australian federal election (where a peak occurred on election day) or the royal wedding (where various peaks occurred around specific moments in the event itself).
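To make the idea of ‘peakiness’ a little more concrete: one very simple way to flag bursts in a hashtag dataset is to bin tweet timestamps into hourly counts and mark any hour whose volume sits well above the typical level for the series. The sketch below is purely illustrative – it is not the panel’s method, and the threshold and sample timestamps are invented – but it captures the basic intuition of spotting a sharp peak in tweet volume.

```python
# A minimal sketch (hypothetical, not the authors' pipeline): flag 'bursty'
# hours in a stream of tweet timestamps by comparing each hour's volume
# against the median hourly volume of the whole series.
from collections import Counter
from datetime import datetime
from statistics import median

def hourly_counts(timestamps):
    """Bin ISO-format timestamps into counts per hour."""
    return Counter(
        datetime.fromisoformat(ts).strftime("%Y-%m-%d %H:00") for ts in timestamps
    )

def find_bursts(timestamps, threshold=3.0):
    """Return hours whose tweet volume exceeds `threshold` times the median hourly volume."""
    counts = hourly_counts(timestamps)
    typical = median(counts.values())
    return {hour: n for hour, n in counts.items() if n > threshold * typical}

# Illustrative use with made-up timestamps (e.g. tweets carrying an election-night hashtag):
sample = [
    "2010-08-21 18:05:00", "2010-08-21 18:12:00", "2010-08-21 19:01:00",
    "2010-08-21 20:02:00", "2010-08-21 20:03:00", "2010-08-21 20:04:00",
    "2010-08-21 20:07:00", "2010-08-21 20:09:00", "2010-08-21 21:15:00",
]
print(find_bursts(sample))  # the 20:00 hour stands out as a peak in this toy data
```

In practice, of course, researchers would work with far larger archives and more robust burst-detection measures, but even a crude threshold like this makes an election-night spike stand out.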

Crises are a specific kind of acute event, then – they have particular structural and dynamic properties on Twitter (and elsewhere in the media): they are unforeseen and unplanned, involve high stakes and multiple interests, have (sometimes significant and long-lasting) adverse impacts, and have disruptive and/or productive effects on media systems. We have examined this for the Queensland floods and Christchurch earthquakes, but such patterns are also observable in political and brand crises, for example.

Kate continues by pointing to the question of new epistemologies which may emerge from this kind of work. We are now dealing with millions of tweets around some crisis events, taking a 30,000-foot view of them; how do the new tools and capacities for taking such a big-picture view change how we perceive and analyse these events, then?

One issue is that of time and accessibility: Twitter and similar spaces are all about the ‘right now’; going back through datasets is difficult, and there is usually no comprehensive access to all tweets, only to subsets of content selected according to various criteria. We need to point this out, and develop mixed methods to address these issues. This involves quantitative work, but also close qualitative investigation, including ethnographic work. This is useful, for example, in examining the way in which the Queensland Police rapidly adapted to using Twitter during the floods.

There are also serious questions about research ethics, especially as we’re dealing with large amounts of data which we archive at times of crisis – people may be using such tools in extremis, and our archiving of this content gives it much greater longevity than may have been intended; we must ask granular questions on a project-by-project basis.

There is also a great temptation with sites like Twitter to assume we have perfect access, but digital divides still remain – between researchers inside and outside these social media organisations, for example – which raises sampling questions: what is our dataset, and what does it represent? Additionally, there are differences in skills: how do we equip people to deal with massive datasets, especially if we’re still only developing those skills ourselves (and there may be gender differences in who has these skills, too)?