Distributed Situation Awareness

What it means. And why Distributed SA is likely here to stay.

Situation Awareness: The birth of a paradigm

Pretty much everyone in aviation is familiar with the concept of situation awareness, so it might be surprising to learn that it is actually a relatively new term, scarcely used at all before the 1980s, which only really entered the lexicon with a dramatic explosion in usage from the mid-1990s. Perhaps it was an idea whose time had come, but its rapid growth, both in academic circles and in real-world operations, coincided with the publication in 1995 of a pair of seminal papers by Mica Endsley, the eminent human factors scientist who would later serve as Chief Scientist of the US Air Force.

Figure 1. Graph showing the growth of the term SA in the English lexicon. From Stanton et al. 2017, p.450.

In the scientific article Toward a Theory of Situation Awareness in Dynamic Systems, Endsley laid out the key framework of SA that most aviation professionals are still taught during their human factors training. It comprises a loop of three levels of information processing: perception (level 1), comprehension (level 2), and projection (level 3), more colloquially summarised as ‘What? So what? Now what?’

Figure 2. Endsley’s model of three-stage situation awareness.

Endsley’s 1995 papers quickly became among the most cited works in human factors science, and by 2010 one study found over 17,500 articles discussing SA online. Her model became the dominant theory and was especially embraced in terms of the practical application of human factors in industry, by the aviation community, and beyond.

The interest it generated, and its wholehearted acceptance by operators who clearly recognised and identified with the concepts it describes, have been matched over the years by an unusual degree of academic contention and debate among other theorists. As research interest in SA grew, the concept expanded from the individual level to the team level, with studies moving on to consider the context of a whole crew. And from there, interest developed in how SA might apply in the context of larger and more complex systems, leading to evolving definitions that took thinking about SA down new avenues well beyond its human beginnings.

The concept of distributed cognition

The path to one of those was laid by the idea of distributed cognition, which began a paradigm shift in how we think about how we think. Endsley’s description of SA was rooted in the discipline of psychology and the processes of human cognition. That is to say, the model exists to describe a process that is going on inside our heads. However, some theorists began to argue that, far from happening in a vacuum, human cognition (the way we think) is a product of the way we interact with many other elements in our environment, some human and some technological. For example, how we process information is heavily influenced by our use of ‘artefacts’ all around us, which affect our ability to understand, remember, recall, and make decisions. These artefacts might be something as simple as a paper and pencil on a kneeboard (which allow us to record our thoughts, words, and figures, or carry out simple maths), or as mind-boggling as a flight management computer (which combines the product of its own complex calculations from multiple inputs with the pilot’s goal-driven programming of intended flight paths or performance objectives).

The beginnings of this concept can be traced back to another influential article from 1995, called How a Cockpit Remembers Its Speeds. In it, Edwin Hutchins introduced the idea that the task of configuring an aircraft on approach depends not only on the knowledge and memory in the head of the pilot, but also on the ‘memory’ of the technical systems on board. The way a pilot recalls key information and thinks about flying an approach is dependent upon these systems, which in turn relieve demands on the pilot’s information processing. Distributed cognition makes the case that we should recognise that humans and technologies conduct co-ordinated tasks together to achieve goals or to solve complex problems.

What is distributed SA?

Inspired by Hutchins, it was only a short hop for the concept of distributed cognition to be applied to situation awareness. In a departure from Endsley’s human-centred approach, distributed SA suggests that to truly understand SA we cannot limit its scope to what goes on inside the human brain, but must take into account how SA is created and maintained by many different elements distributed across a ‘socio-technical’ system.

What does that actually mean? The idea is that SA is held by both human and non-human agents. Myriad technological artefacts within a system also hold some form of SA. Now if, like me, you initially struggle with the idea that an artefact (such as a radio, or altimeter) can have ‘awareness’, then bear with me. In this sense, what we are describing is their ability to hold task-relevant information which contributes to SA as a whole. If the idea of machine awareness still seems far-fetched right now, we can at least acknowledge that we are witnessing the beginning of an era in which technology is learning to sense its environment and becoming more animate. There is no doubt that this, coupled with the growing role of automation in everything we do, is forcing us to change how we think about our interaction with technology in concepts such as SA.

Distributed SA theory explains transactions of information between different agents as the basis of how SA exists within a complex system. Technologies transact through sounds, signs, symbols, and other methods of sharing information about their state. In this way, cockpit alerts, speed and altitude information, and communications from air traffic control represent transactions of information in the system that contribute to overall SA. One agent can compensate for a degradation in SA in another agent, to the extent that SA could be described as the glue that holds loosely coupled systems together. For example, two pilots may be heads-in when a TCAS alert draws their attention to conflicting traffic. This transaction of information from the TCAS system has contributed to maintaining SA within the system as a whole when it might otherwise have been lost. Without these kinds of transactions, the performance of the system would collapse.

Why does distributed SA matter?

In case this is all getting a little too conceptual, let’s bring ourselves back to the real world and consider the much-discussed accident of Air France 447 in May 2009, when an Airbus A330 stalled and fell into the Atlantic, killing everyone on board. This accident has already left its mark on how we teach CRM and train for non-technical skills, and is generally attributed to a loss of SA on the part of the pilots.

Some of the human factors scientists at the centre of the distributed SA concept wrote a paper arguing that attributing the accident to a loss of SA on the part of the aircrew is inappropriate, misunderstands situation awareness, and, more importantly, fails to harness its full potential for safety science (Salmon, Walker, & Stanton, 2016). They made the point that a fixation on labelling a single individual’s loss of SA as the cause of incidents and accidents is not only theoretically questionable, but morally and ethically questionable too.

The paper went on to argue that instead of scrutinising the aircrew’s lack of situation awareness and their failure to control the aircraft, we should be asking different questions based on how systems, not individuals, lose SA. After all, it is now widely accepted that accidents are a systems phenomenon caused by multiple and interacting factors across systems as a whole.

If we limit our understanding of SA to a product of individual cognition during accident investigation, we inevitably focus on fixing problems with the human operators and adopting countermeasures that only deal with human failures: for example, by demanding retraining, or introducing extra syllabus items (the requirement to teach Surprise and Startle in CRM training emerged from the recommendations of this accident). Human factors knowledge and expertise have moved on since the mid-1990s, as has our approach to analysing and learning from accidents, which has progressed from a human-centric to a systems-centric understanding. If we ignore this, we neglect the opportunity to progress the design of safe socio-technical systems based on lessons learned and ongoing advances in human factors science.

References.

  • Hutchins, E. (1995). How a cockpit remembers its speeds. Cognitive Science, 19(3), 265–288.
  • Salmon, P. M., Walker, G. H., & Stanton, N. A. (2015). Broken components versus broken systems: why it is systems not people that lose situation awareness. Cognition, Technology and Work, 17(2), 179–183. https://doi.org/10.1007/s10111-015-0324-4
  • Salmon, P. M., Walker, G. H., & Stanton, N. A. (2016). Pilot error versus sociotechnical systems failure: a distributed situation awareness analysis of Air France 447. Theoretical Issues in Ergonomics Science, 17(1), 64–79. https://doi.org/10.1080/1463922X.2015.1106618
  • Stanton, N. A., Salmon, P. M., Walker, G. H., Salas, E., & Hancock, P. A. (2017). State-of-science: situation awareness in individuals, teams and systems. Ergonomics, 60(4), 449–466. https://doi.org/10.1080/00140139.2017.1278796
  • Wickens, C. D. (2008). Situation awareness: Review of Mica Endsley’s 1995 articles on situation awareness theory and measurement. Human Factors, 50(3), 397–403. https://doi.org/10.1518/001872008X288420
