A very normal accident

It was early evening on 24 September 2022 when an offshore AW139 helicopter inbound to Houma-Terrebonne Airport in Louisiana, USA, declared a mayday. A lot had already happened in the cockpit by the time the co-pilot hit the press-to-transmit switch.

The first sign of trouble was a smell of burning plastic permeating the aircraft, but there was no smoke, nor were there any abnormal indications, so the crew's thoughts turned to the air conditioning unit, which they decided to turn off.

A few minutes later a loud “whoof” sound caught the pilots’ attention and within seconds they were engulfed in orange smoke. Cockpit visibility was zero, and from somewhere around them the crew heard the rotor low audio warning (the highest priority warning there is, advising the pilots of critically low rotor speed). At the same time they encountered a rapid overspeed of both engines and significant, uncommanded movement of both the collective and cyclic flight controls. Unable to clear the smoke by opening his small ventilation window, the co-pilot attempted to open the cockpit door but was prevented by the high airspeed. He finally managed to clear the smoke by removing the entire cockpit window.

The crew now fought to regain control of the helicopter, holding the flight controls hard forward and right to counter the uncommanded movement. Although they had managed to lower the collective (power) lever fully, both engines were still delivering power beyond their design limitations and the aircraft was climbing rapidly through 3,500 feet as a result. Unable to establish a descent with normal control movements, the crew opted to control the climb by selecting one engine to idle. This caused a sudden and unexpected drop in rotor speed, forcing them to return the engine to flight condition. Leaning on the collective with full body weight to keep it down, and still climbing at high airspeed, they resorted to pushing the cyclic control hard forward to force a descent, causing the helicopter to race past VNE.

Compound emergencies such as this one, with multiple and confusing failure modes in the cockpit, are as challenging as it gets. The crew were forced back onto the old maxim of aviate-navigate-communicate, starting with working out just how to keep the aircraft flying. However, in what might seem like less than enthusiastic recognition of the heroic efforts of the pilots given the extreme complexity of the emergency unfolding, I’m going to suggest that the exceptional circumstances in which they found themselves were actually those of a very normal accident.

The effect of interactive complexity on safety

Now might be a good time to introduce the work of Charles Perrow and his book Normal Accidents: Living with High-Risk Technologies (1984). Perrow was a sociologist and not a safety scientist, but he was amongst the first to describe the characteristics of systemic accident sequences that were later popularised in the 1990s by the much better-known James Reason, whose Swiss Cheese Model is familiar to most in professional aviation and beyond. Perrow argued that any system that works with complex and high-risk technology is characterised by what he calls ‘interactive complexity’. A modern aircraft like the AW139 is an example of this: a technological system with many thousands of interacting components and engineered parts. So according to Perrow, the complex interaction of failure modes the crew faced in this incident is to be expected as entirely ‘normal’.

When we experience two or more failures amongst multiple components, they are likely to interact in some unexpected way, and they can easily interact in such a way as to break the system. No operator can be reasonably expected to be capable of figuring out many of these interactions in real time and responding accordingly, and this inability to understand how multiple failures could occur and interact holds true across pretty much any industry. Perrow also argues that it is simply not possible for aircraft or system designers to consider every permutation of how, when X fails, Y might also be impacted. The outcome of this is unanticipated – and unanticipatable – risk. After a component failure, accident or incident the designer might respond with a new control measure, and this control might solve one problem but introduce three new ones.

No operator can be reasonably expected to be capable of figuring out many of these interactions in real time and responding accordingly.

Therefore, he concludes, technology is both a risk control and a hazard itself. The act of adding technology is at best risk neutral. Continually adding more technology in the belief that we are adding more layers of defence in a system is flawed, because we are in fact adding more combinations of possible failure modes. In other words, there is a direct trade-off between increasing safety by adding more controls, and decreasing safety by adding complexity. For example, it is a simple and inevitable fact that pilots’ understanding of their own aircraft is decreasing. The aviation industry is an example of what should be accepted as a more general truth: year on year we are creating ever more complex systems and organisations. What can we do about this safety paradox? There is a case to make that simplicity should be a key objective in achieving safety within any system. And if you aren’t convinced by this, then consider that while any fool can make a system larger and more complex, it takes a genius to make something simpler.

It is a simple and inevitable fact that pilots’ understanding of their own aircraft is decreasing.

Misunderstanding risk in complex systems

Let’s go a little deeper into this idea of interactive complexity, as it clearly plays a starring role in the nature of the accident that we left our pilots grappling with above. Redundancy of critical systems is a key safety principle in aircraft design: when one layer fails we always have other layers of safety to keep us airborne. The problem with this philosophy is that it assumes each layer is independent of the others, and they’re not. Hence redundancy didn’t do this crew much of a favour.

This principle is well explained using the example of probabilistic risk assessment. In aviation, as in many high-risk industries, organisations use Fault Tree Analysis to assist their risk assessment processes. The tool examines the probability of each individual event and then calculates – through a complicated logic structure – the confluence of different events, taking the probabilities of each and combining them together. Each individual combination of events is statistically quite improbable, but the tool works through the possible combinations and outputs a likelihood. The flaw in this kind of analysis, however, is that it assumes we truly know the probability of each individual event and – more importantly still – that we can treat the events as independent things and combine them. It never takes into account the factors that might make these apparently improbable combinations likely to happen all at once.
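To make those mechanics concrete, here is a minimal sketch in Python of how a fault tree combines individual event probabilities through AND and OR gates. The component names and numbers are entirely hypothetical, and real FTA tools handle vastly larger logic structures, but the arithmetic at the heart of it is this:

```python
# Minimal sketch of Fault Tree Analysis arithmetic.
# All event names and probabilities are hypothetical placeholders.

def and_gate(*probs):
    """Top event occurs only if ALL inputs fail: probabilities
    multiply, assuming the failures are independent."""
    p = 1.0
    for prob in probs:
        p *= prob
    return p

def or_gate(*probs):
    """Top event occurs if ANY input fails: the complement of
    everything surviving, again assuming independence."""
    p_none = 1.0
    for prob in probs:
        p_none *= 1.0 - prob
    return 1.0 - p_none

# Hypothetical per-flight-hour failure probabilities.
p_sensor, p_backup, p_display = 1e-5, 1e-5, 1e-6

# Loss of attitude data: both sensors AND'd, then OR'd with the display.
p_top = or_gate(and_gate(p_sensor, p_backup), p_display)
print(f"Calculated top-event probability: {p_top:.2e}")  # ~1.00e-06
```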

For example, in an air traffic control tower the chance of diode number 337 failing at the same time as wiring cable number 454 might be 1:100,000 multiplied by 1:100,000, so the chance of them failing together is considered to be one in ten billion. But if the electrical plant room is in the basement and the entire ground floor of the building is under water, then they are both guaranteed to fail at the same time. If the maintenance regime at an air operator is failing, then two apparently unrelated parts on an aircraft could well both be under-maintained and therefore likely to fail. Risk assessments don’t take into account these reasons why apparently independent events might actually be quite likely to happen at the same time.
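A few more lines show how dramatically a common cause overturns that independence arithmetic. Again the numbers are hypothetical placeholders, but the shape of the problem is exactly the one in the flooded plant room above:

```python
# The independence trap. All probabilities are hypothetical.

p_diode = 1e-5   # diode 337 failing on a given day (1:100,000)
p_cable = 1e-5   # wiring cable 454 failing on a given day (1:100,000)

# What the risk assessment computes, assuming independence:
p_assumed = p_diode * p_cable          # 1e-10: one in ten billion

# What happens when a common cause is in play. Say the basement
# plant room floods once in 10,000 days; flooded, both fail together.
p_flood = 1e-4
p_actual = p_flood * 1.0 + (1 - p_flood) * p_diode * p_cable

print(f"Assumed joint probability: {p_assumed:.1e}")   # 1.0e-10
print(f"With the flood included:   {p_actual:.1e}")    # ~1.0e-04
```

The common-cause term dominates by six orders of magnitude, which is the whole point: the tidy multiplication only holds for as long as nothing upstream couples the failures together.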

There is no better illustration of this than a paper by John Downer (2013) called Disowning Fukushima, which reflects on the credibility of nuclear risk assessments and argues that these sorts of calculations are fundamentally unworkable. Can we objectively and accurately calculate the probability of suffering a catastrophic nuclear meltdown? Downer describes how one of the manufacturers of nuclear plant equipment for the Fukushima reactor had calculated the risk of a core damage incident at one per reactor for every 1.6 million years, and therefore decided that the probability of a core meltdown was so small that it was not even worth putting a number on it. The risk assessment experts at Fukushima judged that the reactors were at risk of an incident once in a thousand years.

If this figure sounds ridiculously low to you, it’s because it is. The estimates did not include any consideration of where that reactor happened to be; the assumptions of the number crunchers were based on the reactor alone. Japanese history has not been well recorded for the last one thousand years, but even if we think back over just the last one hundred years and focus on those things that have nothing to do with a nuclear reactor, we should pause for thought. How many events have happened in Japan in the last hundred years that were capable of flattening a city, never mind a nuclear power plant? There have been multiple city-destroying earthquakes, floods, and typhoons, not to mention a war that did a pretty good job of flattening multiple Japanese cities in that time; all of which shows how completely the calculated figures fail to take into account all of the possible causes of the accident happening.

Epilogue

Returning to Louisiana and the stricken AW139 helicopter, we have an apparently unrelated loss of engine control, flight control failure, and smoke in the cockpit, all interacting in unusual and apparently inexplicable ways. What we do know from the NTSB’s initial accident report is that a wiring loom chafing against a flight control run above the pilots’ heads was responsible for a localised fire which deformed the flight controls and provoked a series of interactive failures. Just as Perrow described, no design mitigation or risk calculation could reasonably have anticipated how this chain of events would play out, and no pilot could be expected to understand exactly what was happening in real time. The pilots could only focus on what was working and fly the aircraft as best they could. They did an outstanding job of doing just that, eventually managing a power-off landing of the helicopter from which everyone on board walked away.

Within a system-of-systems with literally billions of potential billion-to-one type failure modes we will see accidents from time to time.

The AW139 helicopter is a complex system with millions of potential interactions between its engineered parts. The aircraft also operates within the context of another complex system – the construct of civil aviation itself. It stands to reason that within a system-of-systems with literally billions of potential billion-to-one type failure modes, it is inevitable that we see accidents from time to time. Crucially, as this accident shows us, when things go wrong in ways you don’t expect, we need the operators there. Human operators are a vital source of capacity and resilience in a complex system. After all, you cannot pre-programme automatic responses to unanticipated threats and conditions. The human ability to react to unforeseen and unforeseeable circumstances remains unique and as yet unmatched.

Helicopter human factors in focus

Helicopter Human Factors is the title of a chapter by Sandra G. Hart in a book called Human Factors in Aviation written way back in 1988. It begins with an interesting statement:

For no other vehicle is the need for human factors research more critical, or more difficult.

Sandra G. Hart

That’s a bold assertion that I had never heard anyone make before, and consequently I had never given much thought to whether or not it might be the case. So let’s unpack the proposition a little by looking at the arguments that the author offers to back it up:

She notes the following characteristics of helicopter aviation that make the human factors challenges uniquely demanding:

  1. Helicopters operate in a wide variety of operational environments, extending from the civil air traffic control system to remote and hazardous areas overland, and far out to sea.
  2. Helicopters operate in a broad range of flight conditions, from low-visibility marginal VFR, night VFR, and NVD-enhanced VFR, all the way to full IFR.
  3. Helicopters have a unique breadth of flight profiles with missions ranging from scheduled passenger services to search and rescue, medevac, construction, agriculture, law enforcement, fire-fighting, and military missions.
  4. Operational procedures, airspace, and air traffic control are designed for fixed-wing rather than rotary-wing aircraft, often without consideration of the implications for helicopter capabilities, flight profiles, and limitations.
  5. Helicopters can move in any direction, remain stationary while airborne, climb and descend vertically, and take off and land almost anywhere, making their range of manoeuvres and control requirements much more significant than those of most fixed-wing aircraft.
  6. Helicopters often operate at very low altitudes, placing increased terrain avoidance, flight path control, and visual navigation demands on the pilots.
  7. Helicopters are inherently unstable without automatic flight control systems, an instability which imposes significant perceptual and motor demands.
  8. Helicopters bring increased physiological demands in flight. Cockpit noise, vibration, heat, seating position, and the postural demands of holding references are a few examples of those which helicopter pilots have to contend with.
  9. Although recent improvements in sensors, displays, controls, and avionics have offset some of the above factors, they have been accompanied by additional requirements to perform increasingly demanding tasks in more dangerous environments, creating further human factors challenges for designers and pilots.

I might add to her list a tenth significant characteristic of human factors in multi-crew helicopters which she doesn’t touch upon:

10. The unique teamwork dynamic of pilot and technical crew – and sometimes even ground-crew – co-operation, which includes a more flexible operational leadership and responsibility for aircraft control.

All of the above widen the demands for cognitive flexibility, decision-making, workload management, communication and team behaviours already accepted and recognised in other areas of aviation. It is a not inconsiderable list, and it makes a convincing case for the criticality of human factors in rotary-wing aviation.

Hart concludes her 1988 review of the characteristics of helicopter operations by noting that ‘helicopter human factors has received only limited attention by government [agencies], users, and manufacturers.’ In many respects this is still the case 35 years later.

  • There is not a very large body of literature that directly considers the human factors specific to the challenges of helicopter operations.
  • Training and checking regulations still have a relentlessly fixed-wing focus and provenance.
  • CRM training does not require or even reference the need for consideration of helicopter-specific differences.

Despite plenty of hand-wringing in recent years about helicopter accident rates, in particular in operations such as HEMS which typify and distill many of the human factors challenges noted above, the industry and regulatory response has focused in large part on technical progress in areas such as stabilisation and automation of flight path management. Is it time to shine a more rotary-wing-shaped light onto rotary-wing-shaped problems?

The safety dividend of aviation’s professional culture?

How much does an aviator’s own cultural identification with safety have a role in contributing to safety outcomes?

Characteristics of Professional Cultures

Certain professions have strong and distinctive professional cultures. Aviation is one of these.

Culture distinguishes one group from another and provides a lens through which a group’s members see and understand the world. There are certain professions that have especially strong and distinctive professional cultures. Aviation and medicine are two examples. In both cases, their cultures are characterised by the fact that their members have expertise in a specialist field to which access is both selective and usually highly competitive. The path to acquiring such expertise is often long and usually requires rigorous training during which the drop-out rate is high.

Professional Commitment

Aviators from all parts of the world feel like they have a common bond

A professional culture is manifested in its members by a sense of community and a common identity. Once initiated into the professional culture, aviators from all parts of the world feel a common bond. Members of strong professional cultures typically place great value on what they do. Research shows that aviators are characterised by this high regard for their job, reporting overwhelmingly that they enjoy their work a great deal: in one survey, more than 75 per cent of pilots ‘agreed strongly’ with the statement ‘I like my job’.

Safety as a foundation of our professional culture

Does a belief in a deep-rooted safety culture underpin how aviators identify as professionals?

Professional culture is shaped by history, the attributes of the professional tasks involved, and by the risks and responsibilities associated with these. In aviation, the evolution of a pioneering safety culture over decades of learning from accident and error has become one of these defining attributes. Similarly, management of risk and the responsibility for safety in the air are two of the keystones of an aviator’s professional identity.

Has a belief and pride in this safety culture grown to underpin how aviators identify as a group of professionals? And if so, how much does this powerful cultural identification with safety itself have a role in contributing to safety outcomes, fuelling a positive feedback loop?

[Adapted from: Helmreich, R.L., & Merritt, A.C. (1998). Culture at Work in Aviation and Medicine.]

Does complacency really cause errors?

Complacency. Meriting at least a dishonourable mention in almost all books on human factors, this supposedly pernicious attitude is universally villainised and cited as a factor in countless aeronautical mishaps and accident reports. But what actually is complacency? Can it be shown to exist? And if so, how do you know it is at play until Captain Hindsight points his all-knowing finger?

Attributing complacency as a cause of human error is as easy as it is lazy. Why? 

Just like error itself, complacency is a symptom and not a cause. 

Chickenpox is an illness commonly contracted in childhood, resulting in a characteristic rash of small itchy blisters. The blisters are symptomatic of the varicella virus, but they are not what makes you ill. Complacency, just like error, is a manifestation of other factors further upstream in a system, the health of which – like the human body’s – is infinitely complex and influenced in a multitude of ways. Why then do we characterise complacency as a malaise in and of itself?

Sidney Dekker (2003) argues that the answer lies in the fact that we universally endow complacency with causal power without actually bothering to define what it is. He provides a raft of examples where complacency is blamed for pilot error – most popularly in automation management – but points out that no one has attempted to explain what complacency is or describe how it is manifested. Complacency is constantly claimed to cause error. It is often cited as the source of attention and vigilance decrements, but always without any effort to scientifically deconstruct how it comes about and what component parts make it up.

Defining Complacency

Getting a grip on an insightful definition of complacency is as slippery as scooping an octopus from a bucket of baby oil. A number of terms have been offered up to help define it, including boredom, overconfidence, contentment, unwarranted faith, uninformed self-satisfaction, over-reliance, a low index of suspicion, and a lack of awareness. Let’s look at some of these:

Complacency is about threat awareness. Supposedly, complacency makes us unaware of potential dangers or threats. And as you can’t protect against dangers you can’t see, it makes us vulnerable to change, to the unforeseen, and even to progress.

It is sometimes equated to laziness, but this is unsatisfactory because it suggests there is an element of choice in being complacent, and while you can make choices that result in complacency, very few people decide to be complacent.

Overconfidence and complacency go hand in hand. Chuck Yeager famously pointed to complacency as the biggest challenge facing experienced pilots. And that’s because overconfidence is usually born of success, even if that success is only defined by many years of uneventful aviating. Unlike laziness, overconfidence is not a choice, but a state of mind.

‘Folk Models’

If these words haven’t helped to clarify much, there’s a reason. The real trouble is that the definitions offered simply substitute one word for another and offer no useful explanation of what goes into being complacent. Fundamental to scientific enquiry is the ability to break a concept down into more elemental parts, a process which allows greater insight into the behaviour it hopes to explain.

The problem of the lack of a satisfactory definition is not just a semantic one, but one of explanatory power. Because no description of complacency offers deeper insight into its meaning, it becomes a concept which is immune to critique. Aircraft accident? Let’s put it down to a complacent pilot! No one has stopped to explain the mechanism responsible for complacency – what about it causes errors to be made, reduces awareness, or diverts attention – so it must be accepted as it is. And because it has never been scientifically defined, it cannot be falsified. Sidney Dekker coined the term ‘Folk Models’ to describe popular concepts like this which have been attributed causal power without being subjected to scientific rigour.

Complacency as a warning

Complacency is often used as a warning although we are rarely told how to avoid it. 

In CRM training we pick over accident case studies to learn from what went wrong and why. Causal and contributory factors are discussed in detail. With the benefit of hindsight, and in the safe, stress-free, and – hopefully – cognitively stimulating environment of the classroom, the lessons are clear and often painfully obvious. No one is short of an opinion. And in every case study I have ever run, I guarantee there’s at least someone in the room thinking, “No way. Not me. Never.” In hindsight they are convinced that the errors made are so evident and so avoidable that they wonder how the protagonists could have been blind to them. So we attribute complacency.

Whole books have been written on the subject without really addressing the conceptual vacuum in which it floats. One was published just last year (Len Herstein, Be Vigilant, 2021) offering an explanation of both the causes of complacency and the supposed antidote: vigilance.

The traditional response to confronting complacency has always been to exhort people to be less complacent, to be more vigilant, or to strive for a greater level of flight discipline (Tony Kern, Flight Discipline, 1998). The trouble is, simply demanding a higher level of discipline and professionalism to avoid complacency is akin to addressing error conditions by telling pilots to think a bit more before they act. It reflects an outdated view of safety and error management. A more profound study of the behaviour and preconditions that lead to complacency-induced errors has been neglected. Like error itself, complacency must have its roots intertwined in the organisational, task, and operating environments. It is a product of equipment, goals, pressures, limited resources, the team, culture, and all the other influencing factors of a complex system. If we really want to address complacency, that is where we must look to find answers.

Towards E-VFR flight: The dawn of mixed reality in the rotary wing cockpit? 

How progress in head-mounted display technology could revolutionise critical helicopter missions.


Image from Viertler, F. X. (2017). Visual Augmentation for Rotorcraft Pilots in Degraded Visual Environment

Envision a world in which emergency aircraft and their crews can launch in response to medical and other critical missions in almost any flight conditions imaginable. E-VFR (Electronic VFR) speaks of this future: electronically augmented visual flight giving a sufficiently enhanced view of the external world to allow crews to use visual flying techniques around the clock and in any weather conditions. Recent leaps forward in extended reality technology mean the feasibility of this concept is not as far off as it may seem. Described by scientists at NASA as a “better than visual” flight regime, it has the potential to offer game-changing benefits to both operational capability and safety.

As mixed reality develops apace in gaming and other commercial fields, human factors scientists and others are working away behind the scenes to see how both the tech and its lessons can be applied in aviation. A proliferation of recent studies is pushing forward the commercial applications for mixed reality devices in the cockpit in a variety of guises, but perhaps the holy grail of these is the realisation of what is coming to be called ‘equivalent visual operations’, whereby a mix of head-mounted display technology and conformal symbology, supported by an aircraft’s own sensors, will come together to eradicate the boundary between instrument and visual flight techniques. Having spent the last two months studying progress in this fascinating field, I take a look at what the implications of all this could mean for the future of manned helicopter operations.

The opportunities offered by E-VFR are particularly relevant to rotary-wing aviation. Although modern derivatives have only recently spread to civil aircraft, HMD technology has its roots in the military helicopters of the 1950s and 60s. An expanded field of regard – the fundamental difference between Head-up Display (HUD) and HMD capability – is more critical to helicopter flying than to fixed-wing flying. Low-level and hover flight in the obstacle environment demands a greater range of visual scanning to the sides of the aircraft, and aerial tasks such as hoisting, load-lifting, and deck landing often depend upon lateral scan. Civil helicopter critical missions such as HEMS, police, and SAR will continue to be manned operations for the foreseeable future, and are increasingly expected to be capable of an all-weather, H24 service.

Image from Viertler, F. X. (2017). Visual Augmentation for Rotorcraft Pilots in Degraded Visual Environment

A heightened exposure to degraded visual environments (DVE) and a susceptibility to inadvertent visual-to-instrument flight events have historically contributed to high accident rates in the rotary-wing sector. In the last decade EASA has made a push to radically improve the safety statistics, part of which has been an initiative – through groups such as EHEST – to search for technological solutions. One of these is the potential benefit offered by augmenting outside cues with conformal terrain, obstacle, and other overlays on HMDs to aid the pilot in critical phases of flight, and unsurprisingly, DVE has become the focus of much current research. Across the pond, a recent investigation by the Federal Aviation Administration identified a number of accidents and incidents where the use of an advanced vision system may have resulted in a better outcome, suggesting that the same considerations are being examined beyond EASA. Has the time come for helmet-mounted display technology to break through onto the civilian market?

In 2016 the Microsoft HoloLens appeared, joining competitor devices such as Google Glass and Oculus Rift. The potential of these off-the-shelf holographic visors with cutting-edge optics and integrated sensors was not lost on researchers in a wide range of fields from medicine to engineering. In aviation, the contribution they could make to HMD research was jumped upon, and experiments were soon carried out examining how HoloLens-like technology could be integrated into the cockpit environment.

Research trends

Human factors researchers have been focusing these experiments on two areas. The first is the creation of effective conformal symbology for novel display types. Conformal symbology is the presentation of artificial scenery content or flight guidance symbology overlaid on natural terrain or man-made structures in a way which conforms to real-world shapes and forms. The German Aerospace Center (DLR) has led the way with much of the helicopter-specific research, investigating elements of HMD design focused on rotary-wing operations and DVE. This has included studies comparing and contrasting experimental helicopter landing symbology, hover drift cueing, and surface modelling. For example, one study evaluated a series of synthetic sea surface representations to determine which provided the best visual cues to pilots, finding artificial models more effective than natural representations. Another tested a dashed line as a hover cue for lateral drift, and height towers for vertical hover references. In one particularly interesting study, which demonstrates the flexibility and potential offered by mixed reality and sensor-integrated HMDs, they investigated the impact of a 3D exocentric synthetic perspective, which displays a disembodied external view of the aircraft to the pilot. Testing during landing and hover tasks alongside an offshore platform showed that innovative perspectives of this kind can improve spatial awareness and flight performance, outperforming conventional views and receiving positive feedback from pilots.

Exocentric helicopter viewpoint as displayed on HMD 

Image from Virtual reality headsets as external vision displays for helicopter operations: the potential of an exocentric viewpoint, by J.M. Ernst, L.Ebrecht & S. Schmerwitz, (2019).

The second focus of research has been on building up a body of evidence for the hypothesised benefits of HMDs. This has been largely based on the premise that conformal HMDs offer the pilot enhanced situational awareness and reduced cockpit workload, particularly in DVE. The difficulty for these studies is that situational awareness and workload are two concepts that are notoriously difficult to pin down, and demonstrating that novel display types have a significant effect on them is a challenge. So far, most research has been able to show little more than equivalent performance to fixed-wing head-up displays or traditional head-down displays, based on measurements of situational awareness and workload.

Instrument to visual flight – shifting the dividing line

However, there is an alternative method for demonstrating the performance advantage offered by HMDs, and this is to focus on the critically important transition between instrument and visual flight techniques. Traditional flight rules and flying techniques establish a hard dividing line between instrument flight (flying by sole reference to instruments) and visual flight (executed with visual reference to terrain). Many accidents and incidents occur at this critical point of transition from instrument to visual flight, for example at the bottom of an instrument approach. In helicopter operations, the most challenging of these situations is a night offshore approach to a platform or vessel, where pilots have to rebuild their mental picture from one created by interpreting the instruments to a matching one based on visual references, all whilst in a dark environment with limited or confusing visual cues. The human performance challenges of such a situation were identified as a key factor in the 2006 accident of Dauphin G-BLUN in Morecambe Bay on approach to an offshore platform.

Point in Space instrument approaches introduce a new area of operational risk for this scenario, where a transition from instruments to the final visual flight segment at unprepared sites (with no prior visibility information) could contribute to a loss of control event. The inverse of these situations is the transition from visual to instrument flight, a problem which helicopter operators are familiar with from incidents of inadvertent IMC and brown-out/white-out approaches. In all cases, we are talking about a degraded visual environment in one of its different guises.

On a traditional instrument approach, the change from instrument to visual scan is a point in time defined by the decision altitude. We can hypothesise that what HMDs contribute is a stretching of this point into a managed period of transition, in such a way as to reduce exposure to poor conditions with limited visual cues. On the one hand, conformal display symbology enhances and prolongs the ability to maintain safe visual flight by improving the perception of visual references, while on the other, full-regard head-mounted instrument displays allow a continued instrument scan to merge into the outside visual flight environment. We could therefore describe the objective of conformal HMD technology as allowing a level of human-system integration that redefines the distinction between instrument and visual flying techniques. Ultimately, the achievement of Electronic VFR capability would eliminate the transition from instrument to visual flight altogether.

Towards E-VFR flight: how HMD and conformal imagery can redefine the dividing line between instrument and visual flight techniques.

Marrying displays with data

Future research will focus on greater sensor integration to feed the display of a wider range of conformal information. The potential scope of this for critical missions is wide. For offshore search and rescue, search patterns, sea current flows, and wind effects could be conformally marked; for firefighting operations, water sources, deconfliction levels, and virtual entry and exit gates to firefighting areas. Integrating a variety of sensor data such as NVD, FLIR, RADAR, LIDAR, TCAS, and locator beacon and homing signals offers ground-breaking improvements in all-weather and night-time capabilities. Some of this has already been achieved in the civil market with commercial designs in the form of Airbus’s HELLAS-A LIDAR sensor for wire detection and Leonardo’s obstacle proximity LIDAR, but these are still presented on conventional head-down displays. The cutting edge of military technology has taken a step further in beginning to integrate conformal symbology from on-board sensors into helmet-mounted displays, with Elbit Systems’ BriteNite™ system a good example of market-leading technology in the field of helmet-mounted systems fed by a sensor array.

The path to Electronic VFR

Undoubtedly, the maturity of current optical and display technology coupled with modern computer processing power has contributed to a period of exciting technological progress in the field of HMDs in recent years. The pressing issue of helicopter loss of control incidents in DVE is giving impetus to the commercial development of helmet-mounted displays for the civilian market for the first time. However, there is first a need to prove the HMD as an interface for all this new technology. Whilst there are still technological hoops to jump through to progress display integration and conformal symbology sufficiently to reach full E-VFR flight, and the Equivalent Visual Operations envisaged by NASA, an even greater challenge lies in gathering evidence for the safety and operational benefits that can take conformal HMDs to the certification stage. That work is underway, but the answer does not lie in the twin metrics of situational awareness and workload alone. It also requires a greater understanding of divided attention, the dynamics of pilot scan, and mental model-building in the critical transition period between instrument and visual flight. And it puts human factors – as ever – right at the heart of advancements in 21st century helicopter operations.

The need for speed? How slowness has a value all of its own.

Human exploits in aviation have always been closely linked to our fascination with speed. We admire speed in its many guises and it remains a marker of achievement in almost any field you care to think of: a typist tapping at a keyboard; a child completing a jigsaw puzzle; a pianist playing an allegro; even reading a book in five days when it took your friend ten. Our association of speed with ability, intelligence and competence is deeply ingrained from an early age. Take yourself back to sitting in the school gym in exam season, looking up from halfway through your maths paper to see that one classmate already setting down their pen and pushing back their chair with an hour still to run on the clock. Tell me you didn’t feel a combination of panic, envy, and self-doubt as you ran your eyes across all the questions you still hadn’t got to.

As a result of these associations we often equate a quick response with a good response. For almost any kind of work output, time is an important metric and, in most cases, the shorter the time something takes us, the more effective we are considered to be. We equate fast recall and response with cognitive ability, expertise, and experience. In aviation, just as in many other walks of life, we often assume the faster the better. We associate speed with competence.

I remember the first time I watched in awe as the hands of an experienced SAR pilot whirled around a myriad of buttons and switches in a bid to get a rescue helicopter turning, burning, and launched into the air in less than a couple of minutes. I wondered if I would ever master the highly choreographed cockpit ballet that mixes patterns and shapes into internalised motor programmes. It was beguiling, and it spoke of competence and confidence. 

If you still doubt this idea, let’s look at the opposite of speed: slowness. “The quality or state of lacking intelligence or quickness of mind” is one of its definitions (Merriam-Webster Thesaurus), and it reveals the depth of prejudice that we harbour for doing things slowly. Listed synonyms include brainlessness, denseness, dim-wittedness, dopiness, dullness, foolishness, stupidity, and weak-mindedness. They go on. It reflects a general consensus that if someone is slow they are probably not very bright.

Of course, sometimes, doing something slowly and deliberately has a virtue of its own. Slowness can be hard. It often takes significant mental self-control. Japanese train drivers famously use the operating practice of Shisa Kanko (pointing and calling), physically pointing at indications and verbalising checks to increase attention and awareness, a practice which has been found to reduce errors on the Japanese railways by 85 per cent. It slows them down and forces them to be deliberate.

Slowing down is not just about reducing error rates, however. In Thinking, Fast and Slow, Daniel Kahneman, the Nobel Prize-winning psychologist, introduced us to the concept of fast and slow cognition in terms of two separate decision-making systems. As the book and its core ideas have gained in popularity, the concept is now commonly introduced to crews in CRM training as a way of understanding how we take decisions under different circumstances.

In the popular science book Blink, Malcolm Gladwell picks up on the theme of instinctive or intuitive rapid decision-making. It is subtitled “The Power of Thinking Without Thinking” and starts by recounting anecdotes that demonstrate the power of intuitive decision-making. He explains how we make rapid snap decisions all the time, often based on tiny amounts of information – what he calls “thin-slicing” – and argues that decisions made quickly can be as good as (and often even better than) decisions made after a thorough and deliberate thinking process. In CRM training these concepts have often been introduced as Recognition-Primed Decision Making (Gary Klein), in which instinctive decisions, driven by the recognition of tiny cues and clues inaccessible to conscious mental processes, are grounded in deep expertise or years of experience.

Perhaps the most important point that Gladwell makes in Blink concerns understanding when and why we take decisions in this way. There is a time for thinking fast, and there is a time for thinking slow. In CRM we often talk about these themes in the context of decision-making and controlling error, but they are also particularly relevant to how we handle startle effect or extreme stress responses.

Gladwell illustrates his book with examples from law enforcement and the impact of extreme arousal in shooting scenarios. He cites research from studies on marksmen which has demonstrated an optimum state of arousal – the range in which stress improves performance – when the heart rate is between 115 and 145 beats per minute. Above 145 beats per minute the negative effects of stress kick in, vision becomes restricted, and complex motor skills start to break down. By 175 beats per minute a complete breakdown in cognitive processes has been demonstrated. These failures in our capacity for rapid cognition caused by extreme arousal (what Gladwell calls mind blindness) are subtle, complex and surprisingly common.

However, Blink concludes with the assertion that these episodes are neither inevitable nor incurable. Stress inoculation training, in combination with real-world experience, can fundamentally change the way we react to an acute stress encounter. So how can we learn to control our thinking processes?

Recent research from Delft University studied the effectiveness of different stress response control techniques for direct application in the cockpit (‘Managing startle and surprise in the cockpit’). It is no coincidence that all of them start by addressing the need to slow down and take control of our cognitive processes. But the most significant finding of the Dutch researchers was that even after teaching pilots a procedure specifically aimed at helping their startle response, the pilots were often unable to apply it in the face of a stressful situation. Instead, they had a tendency to fall back on intuitive responses even where these were inappropriate. They were unable to slow down their fast thinking. Training ourselves to slow down is not easy under any circumstances, but under conditions of stress in particular it takes deliberate practice.

Let’s return to the unassailable prestige of speed over slowness that we started with. What if we could dissociate slowness from incompetence? What if instructors were made to teach the opposite? What if we came to associate a slow response with higher skill levels and greater professionalism?

What if we came to associate a slow response with higher skill levels and greater professionalism?

In emergency scenarios the subconscious culture of speed creates a false need for haste in 99 per cent of our responses. Even the most serious emergencies thrown at us by a simulator instructor, such as those requiring an immediate landing, are unlikely to suffer a worse outcome for a few more seconds of well-spent slowness. For all the rest, slowness should be positively promoted and rewarded as a demonstration of competency. The history of aviation is littered with examples of accidents where less-than-considered responses led to disaster. Yet still we associate a prompt diagnosis and response to a problem in the cockpit with technical understanding, fluency, and competence.

As we’d do well to remember when thousands of feet up in the air, evolution works on the principle of survival of the fittest, not the fastest.

“When things happen too fast, nobody can be certain about anything, about anything at all, not even about himself.”

(Milan Kundera, Slowness, 1996).

Developing resilience to startle and surprise in helicopter operations

Here’s something that’s no surprise: the requirement to train helicopter crews in the psychological and physiological effects of startle and surprise was born of catastrophic incidents and accidents in the airline industry. Obviously, startle and surprise can happen to any of us, not just airline crews, but do the numerous differences between airline transport flight profiles and helicopter operations mean that we should be looking critically at how to approach the subject from a rotary-wing perspective? Is it as significant a hazard in the low-level, high-workload, high-obstacle environment in which helicopter crews spend much of their time?

Startle and surprise was added to the CRM syllabus a little over five years ago. Since then, and despite becoming a hot topic for human factors researchers (there have been some notable studies looking at practical ways to apply the science on the flight deck), there is an ongoing lack of credible practical techniques for situations of startle, particularly when they are applied to manually flying a helicopter at 500 feet instead of managing abnormal conditions at 35,000 feet in a flight-director-coupled cruise.

Aircrew tend to be an intelligent, critical-thinking, and questioning kind of audience, and trying to pull practical lessons from what is known about the hugely variable reactions of humans under stress is not easy for CRM trainers. Providing credible training in the classroom when the main lesson is that a startling event can be distracting, and that surprising events can be managed by trying to reduce the element of surprise, is bound to be met by raised eyebrows! Startle and surprise is a complex subject to pin down because it doesn’t exist in a bubble, but is intrinsically tied into other areas of both technical and non-technical skills. All this has meant that the training has been interpreted (fairly) by some crews as quasi-scientific and constantly changing according to the latest research and the interpretation of different operators.

What then should startle and surprise training mean in an applied sense and how should we be approaching it? Apart from simply describing the psychological and physiological effects of surprise and startle, according to the EASA CRM syllabus, training should cover:

  1. The development and maintenance of the capacity to manage crew resources.
  2. The acquisition and maintenance of adequate automatic behavioural responses.
  3. Recognising the loss of, and re-building, situation awareness and control.

Which sort of begs the question, ‘How are these requirements distinguishable from the normal objectives of CRM training?’ 

Startle and surprise is a reaction which has its root causes elsewhere.

Startle and surprise are human reactions that result from the interplay and outcome of other competencies. There is a line of thought that they cannot be trained for at all: startle is a physiological reaction that by definition cannot be ‘trained for’, and while you can train other competencies to reduce the effects of surprise, you cannot train for surprise itself. This is where training for startle and surprise merges into the large and complex topic of Competency-Based Training (CBT). There’s no doubt that focusing training on key competencies is the direction in which CRM and non-technical skills training is moving. How does this work in practice?

Suppressing surprise with core competencies

Piloting competencies have been identified as:

  • Application of procedures and knowledge (APK)
  • Leadership and Teamwork (LTW)
  • Situational Awareness (SAW)
  • Communication (COM)
  • Problem solving and Decision Making (PSDM)
  • Workload Management (WLM)
  • Flight path management – automatic (FPA)
  • Flight path management – manual (FPM).

Consider the images below, which depict how we might deploy these competencies. The first image shows us in normal cruise flight. We are applying our procedures and knowledge of the aircraft and the airspace around us (APK). We manage the flight path, perhaps with some higher modes coupled (FPA), and we believe we have a good awareness of the situation around us (SAW). This is our comfort zone of normal performance. We are also using all the usual non-technical competencies, but in a relaxed, low-effort, low-workload kind of way, without much need to put them to further use.

Suddenly, we are unlucky enough to suffer a black swan event: an incident that is not expected, beyond the bounds of our previous experience, and so different to anything that we have seen before that we do not understand what is going on. We are startled by this and suffer the effects of fundamental surprise. 

Operational resilience is made up of resistance, recovery, and adaptation.

There is an inevitable dip in our performance as a result, and our ability to react drops. We struggle to understand the new situation we find ourselves in (SAW). We can’t apply any established SOP, because there isn’t one, and our knowledge of other malfunctions doesn’t fit the situation either (APK). We might decide to take manual control of the aircraft (FPM).

What tools do we have that we can apply? Our non-technical competencies. They are what will allow us to recover the situation by leaning on teamwork, communication, shedding workload, and analysing the situation by applying generic knowledge that we share. Once deployed, these competencies can help us adapt to the new situation and recover above the line of normal performance. 

Operational Resilience following this kind of surprise event is made up of three elements: 

  1. resistance
  2. recovery
  3. adaptation.

Managing an event like this requires pilots to use their technical and non-technical competencies to best effect, to maximise resilience and minimise the possibility and effect of surprise. In the technical sphere this depends on having a sound technical knowledge of the aircraft to fall back on, and having to hand well-developed ‘action plans’ or procedural schemas that have been thoroughly mentally rehearsed. In the non-technical sphere it is dependent upon factors such as effective aircraft monitoring, good situation awareness, threat and error management strategies, and, fundamentally, a questioning attitude that expects and suspects things are about to go wrong. The best preparation for startle and surprise is anticipation.

Resilience in Breadth? 

How do you train to improve performance in this kind of situation? The answer isn’t a narrow focus on ‘startle and surprise’, because the root causes of the problem are not the human reaction to the startle or surprise event. Building resilience hinges on our ability to anticipate events in flight, by putting a greater emphasis on developing a breadth of competencies that give pilots the tools to deal with both anticipated and unanticipated events. This is supposedly what competency-based training is all about, and it depends on an ongoing evolution from the mentality that piloting skills are all about flying aircraft, to recognising that they are in fact more about being effective managers of teams, relationships, threats, hazard environments, and complex automatic systems. Startle and surprise is a reaction which has its root causes elsewhere. Training for it is about identifying those root causes and then focusing our training accordingly.

Taking the training into the air

For airline transport crews, the (largely classroom-based) startle and surprise training is paired with Upset Prevention and Recovery Training (UPRT), mandated by regulation since 2019, which provides a practical, applied element of flying that dovetails with the classroom theory. It has led to a burgeoning market of training providers offering comprehensive courses in intervention, resilience development, and recovery strategies from all kinds of aircraft upset. So is there a place for UPRT in the rotary-wing environment?

Perhaps. Spatial disorientation is certainly a big thing for helicopter pilots, but has startle and surprise leading to aircraft upset been responsible for the same loss of life in helicopters that has led to its prioritisation in the airlines? A case might easily be made for giving priority of attention and investment to other more helicopter-centric safety issues such as inadvertent IMC, CFIT, and the obstacle environment.

One answer could be a scenario-based ‘helicopter hazards’ simulator course that specifically trains the non-technical skills elements identified in key accident causes such as spatial disorientation, surprise and startle, IIMC, and obstacle environment hazards. Such a course would allow instructors to dedicate real time and thought to developing resilience to the operational hazards faced by rotary-wing crews on a daily basis. However, the fact is that a rotary-wing version of UPRT is not currently mandated for helicopter pilots and so remains unavailable, and until companies are required by regulation to pay for this kind of extra training, they will not shoulder the cost on a safety imperative alone.

The automation explosion: examining the human factor fallout

Also published in AirMed&Rescue, November 2021 edition.

Automation reduces workload, frees attentional resources to focus on other tasks, and is capable of flying the aircraft more accurately than any of us. It is simultaneously a terrible master that exposes many human limitations and appeals to many human weaknesses.

As we have bid to reduce crew workload across many different tasks and increase situational awareness with tools including GPS navigation on moving maps, synthetic terrain displays, and ground proximity warning systems, we have also opened a Pandora’s box of human factors to bring us back down to the ground with a bump. Sometimes literally. These include powerful forces such as technology-driven complacency and the ever-growing internal distractions of the modern cockpit. In changing the pilot’s workload to monitoring and evaluating multiple systems, are we utilising automation in a way that best accommodates our human limitations?

Calculation methods of old

One of my first SAR missions as a junior co-pilot in the UK Royal Navy had the crew heading out at night to a position over two hundred miles west of Land’s End, the most westerly point of Britain and right at the limits of our range. I distinctly remember my nervous laugh as the flight navigator asked me to check his calculations of range, endurance, and point of no return, worked out using his whizz-wheel flight computer – otherwise known back then as the Dalton Confuser, a version of which will be more familiar as the CRP5.

The Mk5 Sea King had changed little since its entry into service in the mid-20th century, and the Navy had resisted most of the bolt-on technology that might have made life easier for pilots. Even in 2009, the aircraft had no radio navaids, no GPS, no FLIR, no TCAS, no TAWS, and no satphone. It had only a basic AFCS with radio altimeter and barometric height hold. Raw radar information was interpreted using acetate overlays by the navigator and could not be displayed to the pilots visually. No instrument approaches were available other than a talkdown. We navigated with paper maps and charts, Second World War-era flight computers, and aeronautical information books.

This will sound familiar to rotary-wing pilots of a certain age, but this was only 12 years ago. My cohort of military SAR pilots is the last to experience this type of largely unaided flying and mission management. In the decade that followed, a generational change of aircraft, a huge shift in risk management culture, and a technological explosion in aids to pilot workload and situational awareness combined to roll out a step-change in automatic flight control and other game-changing technologies in helicopter cockpits.

It is as if I have experienced aviation time travel within such a brief period. Certainly, all these modern systems and aids have made life easier. But can we state definitively that they have made our operations safer?

Automate to safe flight

One approach to answering this question is a statistical analysis of accidents and incidents over the period in question. I delved into the data from EASA’s annual safety reports and, to keep things as simple, relevant, and easily comparable as possible, chose to focus on Commercial Air Transport (CAT) operations. CAT is a strong category to focus on because it includes many special missions such as HEMS and SAR as well as offshore operations. These tend to be larger, more modern aircraft that are most likely to be equipped with the technological upgrades that increase capability and reduce crew workload.

Figure 1 is a compilation of data from EASA Safety Reviews covering 2009-2019. This period not only saw rapid technological change in aviation, particularly in the rotary-wing sector, but also coincides with my own technological transition, from the Sea King SAR mission in 2009 to my return to SAR in 2019 flying the contemporary AW139. The graph displays the total number of accidents and serious incidents for helicopters of EASA Member States engaged in CAT during that period.

Figure 1

We might pose the reasonable hypothesis that a decade of rapid technological progress – radical reductions in crew workload, gains in situational awareness and mission management capacity – would bring a corresponding increase in safety. But this hypothesis is not borne out by the accident rate, one of the best metrics we have for evaluating safety outcomes where they most matter. The trend line (drawn in light blue) doesn’t show the decrease we would expect. In fact, it isn’t even flat.
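For transparency on the method: the trend line in Figure 1 is nothing more exotic than a least-squares fit of yearly event counts against year. A minimal sketch of that calculation in Python – with placeholder counts, not the real figures compiled from the EASA Safety Reviews – might look like this:

    import numpy as np

    # Placeholder yearly accident/serious-incident counts, 2009-2019.
    # Illustrative only -- the real values are those in Figure 1.
    years = np.arange(2009, 2020)
    counts = np.array([14, 12, 15, 11, 13, 12, 14, 10, 13, 16, 12])

    # Least-squares linear trend: a clearly negative slope would
    # support the hypothesis that technology has driven accidents down.
    slope, intercept = np.polyfit(years, counts, 1)
    print(f"Trend: {slope:+.2f} events per year")

A slope at or above zero, as the real data shows, is the point of the argument that follows.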

Despite CAT operations being at the vanguard of both the generational change of helicopters and the deployment of cutting-edge technology touted as bringing game-changing benefits to flight safety, we are still getting ourselves into trouble in the air at the same rate as in the years when we were flying mostly manually. This is particularly the case in HEMS operations across Europe, which stand out as having consistently (2009-2019) recorded an average mishap rate (4.2 per annum) of nearly twice that of the next most-reported group in CAT, and which rose notably in 2018 and 2019 with 12 and seven accidents/serious incidents respectively.

The reasons behind this are complex and multifaceted, but at least part of it lies in our relationship with all this automation.


The expanding human factor

No one can question the huge advance in capability that all this technology brings – not least the reduction of pilot workload. Thanks to GPS, we don’t have to worry too much about where we are anymore, or where we’re going: we can see it represented in any number of deliberately ergonomic formats in front of us. Thanks to TCAS, we know if there is traffic around and where to look for it; it even tells us how to avoid it. PBN provides us with configurable instrument approaches to points in space. With FLIR and NVG, we can see and search in the dark and the wet, increasing capability immeasurably. And this list is far from exhaustive.

But while workload has undeniably reduced in many areas, it has also changed shape. A bit like squeezing a stress ball, we have compressed some areas only to find it ballooning out in others. The need to manage all this new technology and automation has placed additional tasks on the crew that didn’t exist before. For years now we have steadily added avionics and mission systems to the aircraft, and much of the pilot’s new workload consists of monitoring and evaluating feedback from multiple systems.

The great irony is that the task of monitoring the output of these systems is not one that matches particularly well with the capabilities of the human brain. We hit up against two key human limitations. The first is our brain’s ability to focus, select, and sustain attention. When we monitor multiple sources of information we are applying selective attention, with greater attention given to the source or sources deemed most important. This applies to most flying tasks. Distraction is the negative side of selective attention.

The second is our capacity to process large quantities of information and handle extreme complexity. The ergonomics and tools of the modern cockpit mean much more information is available and presented, and this is not just a problem of deciding what to take in. As the systems we work with become cleverer and more complex, we reach the boundaries of what the average person can understand about their design and architecture, preventing us from holding accurate mental models to fall back on for fault diagnosis or when things start to get confusing. A 2013 FAA study – Operational Use of Flight Path Management Systems – found that errors related to the operation and monitoring of automated systems contributed to over 30 per cent of the incidents it reviewed, concluding that they stemmed from an inaccurate understanding of how these systems worked.

Aviation is a system of systems, and one becoming ever more complex. Aviation professionals have become exceptional at finding ways to improve the reliability, serviceability, and capability of all the systems around us.

Harness strength, restrain weaknesses.

The burgeoning of automation is a great example of this. But the human still sits in the middle of the system, and the human’s failings have remained, by and large, the same. The current evolution of technology in aviation is resulting in a scenario where the human in the system is ever more frequently the point of failure.

But this perspective neglects to acknowledge the key role that the human can play in a system to step in and prevent things that might otherwise go wrong. We are starting to acknowledge that human performance also contributes significantly to the overall safety performance of the aviation system. The International Civil Aviation Organization’s latest Manual on Human Performance recognizes that ‘within a complex system, it is the human contribution that often provides the important safety barriers and sources of recovery’.

The tension between these two different approaches to understanding and reconciling the human contribution can often lead to emotionally charged discussion, particularly with respect to highly automated environments like the modern cockpit. But the reality is that they are two sides of the same human coin. Have we been slow to realize the significance of investing in proportion to the impact of the human contribution, whichever side of that coin it represents?

Because the human risk profile is growing, you would expect this to be reflected in safety and risk management strategies. We have learned to manage the fallibility of engines, airframes, and components through cycles, periodic maintenance, preventative maintenance, fault rectification, refurbishment, and all sorts of other techniques, achieving extremely high levels of reliability. We have not yet achieved the same with the human element. Operators, manufacturers, and regulators expend huge energy and effort creating rules, structures, and systems to manage design and operational risk, but often relegate the human dimension of that risk management to a less significant position than it truly deserves. At the end of the day, when the investigators are picking through the smoking pieces of an accident site, it is not the aircraft that is the asset, but the people.

In automation, we have a technology that supposedly improves on or replaces human capability, so it seems a paradox that the human factor has grown with it. Automation does not overcome human failings; it just shifts them around. There is also an irony in acknowledging that, while reducing the human role in skills-based tasks, automation has nevertheless taught us the continuing importance of the human contribution. It is not a paradox to state that the human is both the strongest barrier and the weakest link in the safety chain. It is likely we always will be. Future safety outcomes will depend ever more on how much time we dedicate to understanding this, and on how we choose to balance these opposing forces.

Distributed Situation Awareness

What it means. And why Distributed SA is likely here to stay.

Situation Awareness: The birth of a paradigm

Pretty much everyone in aviation is familiar with the concept of situation awareness, so it might be surprising to learn that it is actually a relatively new term, scarcely used at all prior to the 1980s, which only really entered the lexicon with a dramatic explosion in popular usage from the mid-1990s. Perhaps it was an idea that had come of age, but its rapid growth, both in academic circles and operationally in real-world scenarios, coincided with the publication in 1995 of a pair of seminal papers by Mica Endsley, later the eminent Chief Scientist of the US Air Force.

Figure 1. Graph showing the growth of the term SA in the English lexicon. From Stanton et al. 2017, p.450.

In the scientific article Toward a Theory of Situation Awareness in Dynamic Systems, Endsley laid out the key framework of SA that most aviation professionals are still taught during their human factors training. It comprises a loop of three levels of information processing – perception (level 1), comprehension (level 2), and projection (level 3) – more colloquially summarised as ‘What? So what? Now what?’

Endsley’s Model of 3-stage situational awareness

Endsley’s 1995 papers quickly became among the most cited works in human factors science, and by 2010 one study found over 17,500 articles discussing SA online. Her model became the dominant theory and was especially embraced in terms of the practical application of human factors in industry, by the aviation community, and beyond.

The interest it generated, and its wholehearted acceptance by operators who clearly recognised and identified with the concepts it describes, has been matched over the years by an unusual degree of academic contention and debate among other theorists. As research interest in SA grew, the concept expanded from the individual level to the team level, with studies moving on to consider the context of a whole crew. And from there, interest developed in how SA might apply in the context of larger and more complex systems, leading to evolving definitions that took thinking about SA down new avenues well beyond its human beginnings.

The concept of distributed cognition

The path to one of those was laid by the idea of distributed cognition, which began a paradigm shift in how we think about how we think. Endsley’s description of SA was rooted in the discipline of psychology and the processes of human cognition. That is to say, the model exists to describe a process that is going on inside our heads. However, some theorists began to argue that, far from happening in a vacuum, human cognition – the way we think – is a product of the way we interact with many other elements in our environment, some human and some technological. For example, how we process information is heavily influenced by our use of the ‘artefacts’ all around us, which affect our ability to understand, remember, recall, and take decisions. These artefacts might be something as simple as a paper and pencil on a kneeboard (which allow us to record our thoughts, words, and figures, or carry out simple maths), or as mind-boggling as a flight management computer (which combines the product of its own complex calculations from multiple inputs with the pilot’s goal-driven programming of intended flight paths or performance objectives).

The beginnings of this concept can be traced back to another influential article from 1995 called How a cockpit remembers its speeds. In it, Edwin Hutchins introduced the idea that the task of configuring an aircraft on approach depends not only upon the knowledge and memory in the head of the pilot, but also upon the ‘memory’ of the technical systems on board. The way a pilot recalls key information and thinks about flying an approach is dependent upon these, and this in turn relieves demands on the pilot’s information processing. Distributed cognition makes the case that humans and technologies conduct co-ordinated tasks together to achieve goals and to solve complex problems.

What is distributed SA?

Inspired by Hutchins, it was only a short hop for the concept of distributed cognition to be applied to situation awareness. In a departure from Endsley’s human-centred approach, distributed SA suggests that to truly understand SA we cannot limit its scope to what goes on inside the human brain, but must take into account how SA is created and maintained by many different elements distributed across a ‘socio-technical’ system.

What does that actually mean? The idea is that SA is held by both human and non-human agents; myriad technological artefacts within a system also hold some form of SA. Now if, like me, you initially struggle with the idea that an artefact (such as a radio, or an altimeter) can have ‘awareness’, then bear with me. In this sense, what we are describing is their ability to hold task-relevant information which contributes to SA as a whole. If the idea of machine awareness still seems far-fetched right now, we can at least acknowledge that we are witnessing the beginning of an era in which technology is learning to sense its environment and becoming more animate. There is no doubt that this, coupled with the growing role of automation in everything we do, is forcing us to change how we think about our interactions with technology in concepts such as SA.

Distributed SA theory explains transactions of information between different agents as the basis of how SA exists within a complex system. Technologies transact through sounds, signs, symbols, and other methods of sharing information about their state. In this way, cockpit alerts, speed and altitude information, and communications from air traffic control all represent transactions of information in the system that contribute to overall SA. One agent can compensate for a degradation in SA in another agent, to the extent that SA could be described as the glue that holds loosely coupled systems together. For example, two pilots may be heads-in when a TCAS alert draws their attention to conflicting traffic: this transaction of information from the TCAS system has maintained SA within the system as a whole when it might otherwise have been lost. Without these kinds of transactions the performance of the system would collapse.

Why does distributed SA matter?

In case this is all getting a little too conceptual, let’s bring ourselves back to the real world to consider the much-discussed accident of Air France 447 from June 2009, when an Airbus A330 stalled and fell into the Atlantic, killing everyone on board. This accident has already left its mark on how we teach CRM and train for non-technical skills, and it is generally attributed to a loss of SA on the part of the pilots.

Some of the human factors scientists at the centre of the distributed SA concept wrote a paper arguing that to attribute the accident to a loss of SA on the part of the aircrew is inappropriate, misunderstands situation awareness, and, more importantly, fails to harness its full potential for safety science (Salmon, Walker & Stanton, 2016 – see References). They made the point that a fixation on labelling a single individual’s loss of SA as the cause of incidents and accidents is not only theoretically questionable, but morally and ethically questionable too.

The paper went on to argue that instead of scrutinising the aircrew’s lack of situation awareness and their failure to control the aircraft, we should be asking different questions based on how systems, not individuals, lose SA. After all, it is now widely accepted that accidents are a systems phenomenon caused by multiple and interacting factors across systems as a whole.

If we limit our understanding of SA to a product of individual cognition, then during accident investigation we inevitably focus on fixing problems with the human operators and adopting countermeasures that only deal with human failures – for example, by demanding retraining or introducing extra syllabus items (the requirement to teach surprise and startle in CRM training emerged from the recommendations of this accident). Human factors knowledge and expertise have moved on since the mid-1990s, as has our approach to analysing and learning from accidents, which has progressed from a human-centric to a systems-centric understanding. If we ignore this, we neglect the opportunity to progress the design of safe socio-technical systems based on lessons learned and ongoing advances in human factors science.

References.

  • Hutchins, E. (1995). How a cockpit remembers its speeds. Cognitive Science, 19(3), 265–288.
  • Salmon, P. M., Walker, G. H., & Stanton, N. A. (2015). Broken components versus broken systems: why it is systems not people that lose situation awareness. Cognition, Technology and Work, 17(2), 179–183. https://doi.org/10.1007/S10111-015-0324-4 
  • Salmon, P. M., Walker, G. H., & Stanton, N. A. (2016). Pilot error versus sociotechnical systems failure: a distributed situation awareness analysis of Air France 447. Theoretical Issues in Ergonomics Science, 17(1), 64–79. https://doi.org/10.1080/1463922X.2015.1106618 
  • Stanton, N. A., Salmon, P. M., Walker, G. H., Salas, E., & Hancock, P. A. (2017). State-of-science: situation awareness in individuals, teams and systems. Ergonomics, 60(4), 449–466. https://doi.org/10.1080/00140139.2017.1278796 
  • Wickens, C. D. (2008). Situation awareness: Review of Mica Endsley’s 1995 articles on situation awareness theory and measurement. Human Factors, 50(3), 397–403. https://doi.org/10.1518/001872008X288420

Processing information in flight: Understanding the limits of cognitive capacity in the cockpit.

Hands up if you have ever experienced a mental meltdown, ‘cognitive freeze’, or intense tunnel vision in flight or in training? Most of us will recognise these phenomena happening to us at some point or other. They are intimately related to levels of workload, stress, or perhaps the surprise and startle effect. In CRM training they are often explained in terms of the well-known Yerkes and Dodson arousal curve – the inverted ‘U’ of arousal vs performance.

The Yerkes Dodson Curve

The famous experiment behind Yerkes and Dodson’s research – actually the product of administering increasingly powerful electric shocks to mice – is now well over a hundred years old, dating from a time when aviation was still in its infancy. Since then, like those highly charged mice, quite a few aviators of all ages and stages have experienced the unpleasant sensation of sliding down the backside of the curve.

What is “Capacity”?

Passing the point of ‘optimal arousal’ is more colloquially known as reaching the limits of your ‘capacity’. But what is capacity? We often use the term more generally to describe the amount of ‘thinking power’ that we have available to apply to problem-solving and decision-making.

“Capacity” is used to refer to the amount of ‘thinking power’ we have available to apply to problem-solving and decision-making.

Instead of understanding the arousal curve as a line that represents the ‘before and after’ of a cognitive collapse, we should conceive of it simply as a description of the level of cognitive activity we are capable of. Nobel Prize-winning psychologist Daniel Kahneman has described arousal in these terms, calling it ‘a reservoir of mental energy’. In practical terms, arousal and capacity are correlated. It follows, therefore, that the Yerkes-Dodson curve could also be described as a picture of your capacity level.

Capacity and Attention

Another concept in human information processing that overlaps with this is attention. Attention is simply the way we allocate our cognitive resources. Our attention level is itself directly proportional to our level of arousal, so for the sake of understanding our capacity we can treat these two as one and the same.

Don Harris (2011) argues that attention is “in effect the amount of cognitive capacity or thinking power” that a person has available, so how we understand the functioning of our attention is important to understanding the limits of our capacity.

Most theorists agree that the way we process information can be broken down into stages. We allocate our attentional resources to the different tasks of perception, memory retrieval, response selection (decision-making), and response execution. This is represented by the ‘spider’s legs’ radiating from the bubble of attention in the human information processing model below.

Human Information Processing Model

Buckets of Attention

Where the academics don’t agree, however, is on how we expend our attention across these different stages of processing. The simplest theory argues that all elements of our attention can be imagined as a single resource. Imagine that we have one bucket full of liquid cognitive resource which is depleted by whatever demands we put on it until it runs out. Our ability to carry out multiple tasks at the same time depends upon how much of the liquid in the bucket we allocate to each one. If we focus entirely on a single complex operation, we cannot attend to any other information hitting our senses. If we devote a lot of attention to one task, we reduce the attention we can dedicate to another task at the same time.

Capacity is a story of finite resource versus potentially unlimited demand.

Others suggest that our attention is divided into multiple resources, or a number of smaller buckets which we are able to allocate to different tasks independently. For example, we have a different bucket for perception and processing to the one which we allocate to response selection and execution. We also have a separate bucket depending on whether the kinds of inputs we are paying attention to are audio or visual, spatial or verbal. The ‘good news’ about this conceptualisation is that we can apply different cognitive resources to different tasks, but the ‘bad news’ is that our processing power is now limited by both our ability to make sense of all the data hitting our senses at the one end, and our ability to consciously process it and act on it at the other end.

This model is known as Multiple Resource Theory, and researchers have demonstrated that while two concurrent verbal tasks or two concurrent spatial tasks do result in reduced performance, the same is not true when one task is spatial and the other verbal. We can carry out tasks in separate modalities (verbal/spatial, audio/visual) without a significant reduction in performance in either. Later research found the same to be true within the visual sense: we are able to attend to one focal task and devote attention to a second visual task in peripheral vision at the same time (although we cannot carry out two tasks at the same time that both require visual focus). In other words, our visual bucket of attention can be divided into two further sections of ‘independent’ resources.
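As an illustration only – a toy model, not a validated workload tool – the core prediction of Multiple Resource Theory can be sketched as a simple check for overlapping resource ‘buckets’ between concurrent tasks. The task names and bucket labels here are my own assumptions for the example:

    from itertools import combinations

    # Each task draws on a resource bucket defined by modality and code
    # (visual/auditory x focal/verbal). Names are illustrative only.
    TASKS = {
        "instrument scan": {"visual-focal"},
        "reading approach plate": {"visual-focal"},
        "monitoring radio": {"auditory-verbal"},
        "crew conversation": {"auditory-verbal"},
    }

    # Multiple Resource Theory predicts interference only where two
    # concurrent tasks draw on the same bucket.
    for a, b in combinations(TASKS, 2):
        shared = TASKS[a] & TASKS[b]
        verdict = "interference: " + ", ".join(shared) if shared else "compatible"
        print(f"{a} + {b} -> {verdict}")

Run against these four tasks, the sketch flags the two same-bucket pairings and passes the cross-modality ones – exactly the practical guidance that follows.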

What does this tell us about managing our capacity in the real world? Well, it follows that our performance will be better if we draw upon our cognitive resources proportionately across the different buckets. By the intelligent use of all the different smaller buckets we can maintain our capacity by avoiding draining any single one. According to this understanding of how we process information, to make the most of our processing capacity we should be mindful to avoid taking on more than one auditory task at a time (offload radio communications if we are holding a conversation with the crew), and not to engage in two visually demanding tasks at once such as combining an instrument scan with the reading of an approach plate. Of course, this is where the topic of human information processing necessarily crosses over into the realms of workload management, communication, and all the other elements of CRM.

Capacity and Memory

How we interpret the information hitting our senses forms the basis of our perceptions, and from these we create a model of the world around us. We do so by constantly comparing this input data to our prior knowledge and experience of the world, and a big chunk of our attention is expended on deciding how to interpret and act upon it. Drawing on the processes, knowledge, and experiences held in our long-term memory and comparing them to the here and now is the cognitive process that allows us to make decisions, choose a response, and then carry out that response.

WORKING MEMORY

Understanding our memory processes can also contribute to understanding the limits of our capacity. We are taught about working memory in CRM training. You’ll probably recall that working memory is very short-term and very low capacity. There’s a ‘rule’ that the average person can retain only 7±2 ‘chunks’ of information (Miller, 1956), and that information can be held in working memory with a ‘half-life’ of about seven seconds (the delay after which our ability to recall it is reduced by half).
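Taken at face value – and it is a rule of thumb, not a precise law – the seven-second half-life implies a simple exponential decay of unrehearsed recall, which we can work through:

    # Toy model of working-memory decay (assumed): unrehearsed recall
    # halves every 7 seconds.
    def recall_fraction(elapsed_s: float, half_life_s: float = 7.0) -> float:
        return 0.5 ** (elapsed_s / half_life_s)

    for t in (0, 7, 14, 21, 28):
        print(f"after {t:2d} s: {recall_fraction(t):.0%} recall")

Within half a minute, an unrehearsed clearance is largely gone.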

There are a number of ways in which we can mitigate this rapid decay of items in our working memory. Research has found that acoustic items in particular can be maintained by articulating them repeatedly (sub-vocally). Chunking works better when the chunks are strongly identified with an item in long-term memory: the four digits 1-0-1-3, for example, can become the single chunk ‘1013’, identified in the minds of pilots everywhere with the standard altimeter setting. Whenever a memory item is meaningful in terms of a framework of prior knowledge, it is memorable; in other words, your memory is strongly associative. Because of this, letters are more easily remembered than numbers, and numbers more so than a mix of letters and numbers. Working memory is also easily confused by similarity between chunks – adjacent number strings such as 5433 and 5334 are likely to be particularly difficult to recall.

LONG TERM MEMORY

Long-term memory (LTM) can directly affect our capacity both in terms of the retrieval of knowledge and the running of more automatic motor programmes. Skilled processes such as manual flying are driven by motor programmes that have been laid down in long-term memory. Once established, these programmes require relatively few cognitive resources. Creating them depends upon our procedural memory, and the use of tactile and spatial methods of rehearsal provides a structure that helps to bind the neural networks we create for these skills – hence the advantages of simulators and touch drills in aircraft.

The other type of LTM is declarative memory; unlike procedural memory (which is hidden from view), these are memories that are consciously available to us. We store them in two different ways according to their characteristics. Episodic memory refers to the ability to recall past events or experiences, which are stored as images. Semantic memory is memory for facts, rules, concepts, and problem-solving, and we store these as networks – structures for organising knowledge – by creating frameworks based on links with other meaningful knowledge.

If a suitable framework for organising new information is already established in our LTM, it is easier for us to categorise and understand it, and to build links and relationships with the information we already hold there. Information is also transferred to LTM more effectively when there is an existing structure to support it and link to it. That is why, when learning new topics, it always helps to start with the known and move towards the unknown – to build upon structures that are already in place.

It is already well understood that the building of neural networks is strengthened through repetition and rehearsal. Repeated retrieval of items from your long-term memory reinforces their pathways and structures. This explains why flying ‘armchair’ sectors from your living room is so effective: in doing so, we reduce the processing power later required to recall and retrieve the right elements from our memory stores.

Maximising Cognitive Capacity

This may all seem rather theoretical, but it offers some useful practical lessons on maximising our capacity in flight. The strength of an item in LTM depends upon the frequency and recency of its use. Training, rehearsal, and revision not only strengthen both of those elements, helping with recall, but also fortify the structures on which you can hang new learning and, crucially, make inferences and analogies to other knowledge. Your mental frameworks have a crucial role to play in the creation and retention of knowledge.

The cognitive demand of procedural memory reduces progressively with practice, allowing processes and programmes to become highly automated and, in some cases, unconscious. Once laid down, procedural memory requires little cognitive resource, so the more you can establish in your procedural memory, the more capacity you free up for other tasks.

Image-based episodic memory is especially quick to establish in LTM and particularly enduring. Not only does a picture paint a thousand words, it also imprints data on the brain more effectively. We often learn best from physical events and experiences that happen to us, and the early, conscious reinforcement of those memory structures through debriefing and review helps cement the process.

The bottom line is that capacity – like the human information processing system itself – is a story of finite resource versus potentially unlimited demand. When we consider our capacity in the cockpit we should be conscious of the fact that we can manage both ends of this equation. The demand side is a question of workload management, but cognitive supply can also be managed. It is a function of many human factors, some of which have been described above, others of which stray into other more physiological topics such as arousal, fatigue, stress, and startle, amongst others. The aim in a perfect world? To sit ourselves on top of Yerkes and Dodson’s inverted ‘U’ every time we climb into an aircraft.

REFERENCES:

  • Harris, D. (2011) Human Performance on the Flight Deck. Farnham: Ashgate, p.24
  • Kahneman, D. (1973) Attention and Effort. Englewood Cliffs, NJ: Prentice-Hall
  • Miller, G.A. (1956) The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information. Psychological Review, 63, 81-97.
  • Wickens, C.D. (2002) Multiple Resources and Performance Prediction. Theoretical Issues in Ergonomics Science, 3(2), 159-77.
  • Yerkes, R.M and Dodson, J.D. (1908) The Relation of Strength of Stimulus to Rapidity of Habit Formation. Journal of Comparative Neurology and Psychology, 18, 459-82.

ARE YOU A SPECIALIST AVIATOR? WHY DEVELOPING RANGE IS PART OF YOUR JOB.

Photo: Lloyd Horgan

Most of us will recognise amongst our colleagues that figure who has an unmatched knowledge of their aircraft and operational procedures but isn’t a natural team player, doesn’t share their thought processes much, and perhaps doesn’t integrate with the rest of their colleagues as comfortably as others. We admire technical knowledge in aviation, but not only is it relatively easy to gain if you dedicate time to it (and have an unwavering focus on learning it to the detriment of other things), it is also only half the picture.

The same cannot really be said of the soft skills and the non-technical side of being a successful aviator. These are the other half of the picture, and they are not the product of experience in the aircraft or classroom alone. Leadership, teamwork, effective communication, empathy, instructional technique, assertiveness, cultural awareness, and so many others are developed in the wider world and imported from there into the aeronautical environment.

We are not born with great non-technical skills. Nor are they the product of experience in the aircraft or classroom alone.

And it turns out that the broader our experience of the wider world, the more successful we are likely to become at navigating complex organisational environments and tackling systems of systems such as aviation. This is because our greatest human strength is the exact opposite of narrow specialisation: it lies in intellectual breadth, strategic thinking, critical analysis, and flexible learning. And it seems that the wider your domain knowledge, the better.

This is the premise of David Epstein’s best-selling book Range, which argues that in narrow, skills-based and rules-based worlds, humans may not have much to contribute for much longer. (Is the flying of manned aircraft a perfect example of this?) Epstein makes the case that the most successful experts in many fields have a breadth of interests, citing as evidence that the world’s most successful – Nobel Prize-winning – scientists are over 20 times more likely to partake in a performing art than other scientists. So, like those Nobel Prize-winning scientists, are the very best aircrew also aficionados of amateur dramatics, self-taught pianists, ballet dancers, sculptors, and painters?

Kind vs Wicked Environments

Like many others, aviation is a system which values narrow specialisation. Pilots are not simply pilots; they are type-rated pilots, and specific experience in role and on type is valued over generic aviation experience. In many cases, breaking into the profession itself requires a single-minded dedication to aviation to the detriment of other activities, and once in the door it can take many years of patience and persistence with the same operator and the same operation to be considered ready for command.

Despite all this commitment to specialist experience, how we are able to apply it depends upon the nature of the activity in question. As the pilotless cockpit looms on the horizon, the same applies to our potential contribution as the human in the system. In Range, Epstein argues that there are two broadly defined environments in which we operate: the ‘kind’ and the ‘wicked’. They are opposites, and they are defined by the way in which we learn from them.

What is a ‘Kind’ learning environment?

A kind learning environment is one in which we can apply rules, we already have the answers to set questions, and we can develop procedures and learn fixed techniques which do not constantly evolve and change over time. Good examples are the games of chess and golf, or learning written musical notation in order to play an instrument.

‘Kind’ learning environments are ones where patterns repeat over and over, and feedback is immediate and accurate. The mechanics of learning to fly an aircraft meet this definition well. We all started with effects of controls: the student moves the controls, observes what happens, attempts to correct any error, tries again, and repeats until these motor skills are largely internalised. This is the definition of deliberate practice, and we employ it to good effect in many areas of aviation, from take-offs and landings to emergency handling.

In these cases the learning environment is kind because the learner improves through the simple act of engaging in the activity and striving for accuracy around set parameters. A kind learning environment rewards and responds to repetitive practice: hours of study and the deliberate training of skills, over and over, hone motor programmes and engrain pattern recognition in our brains.

Gary Klein describes a model of recognition-primed decision-making which reflects this kind of learning and experience. He argues that hours of experience lead to expertise through the automatic and instinctive recognition of repeated patterns. However, this is not the whole picture, as Daniel Kahneman has so effectively demonstrated in his seminal book Thinking, Fast and Slow, which debunked the idea that experience automatically translates to superior skill or decision-making. In very many real-world scenarios where complexity is introduced, this is simply not the case.

Epstein argues that both these theories have validity within specific domains, but the evidence shows that in those domains which involve human behaviour, and where patterns do not clearly repeat, experience through repetition does not form the foundation of learning.

What is a ‘wicked’ learning environment?

A wicked learning environment is the flip side of the kind learning environment. As Epstein explains, “the rules of the game are often unclear or incomplete, there may not be repetitive patterns and they may not be obvious, and feedback is often delayed, inaccurate, or both.”

The problem is that the conditions that allow learning from repetition and rapid feedback, such as the striking of a golf ball, do not reflect many of the real-world skills we aspire to learn or the real-world problems we need to solve. Curing cancer, for example, is a real-world problem: huge and complex. It doesn’t depend only on applying answers to problems and checking for feedback; it depends upon figuring out the right questions to ask in the first place, across a massive range of puzzle pieces.

The Aviation Realm – Wicked or Kind?

The cogs of world aviation in motion are another excellent example of a complex system at work. They reflect an interconnected, rapidly changing world in which no single element has a view or an understanding of the whole. By this definition aviation is clearly a wicked environment. Yet we have also seen how a ‘kind’ learning environment – that of hands-and-feet manual flying – can sit within a wickedly complex system of systems.

The manual flying of aircraft is a fine example of a world in which narrow, skills-based, and rules-based activities are being superseded by machine learning and automation. In ‘kind’ learning environments we will no longer be able to compete against artificial intelligence, which will outlearn and outperform us in repetitive, feedback-based tasks. But when it comes to handling and interpreting strategic complexity, studies show that the bigger the picture, the more unique the potential human contribution. Our greatest strength is the exact opposite of narrow specialisation. It is our ability to integrate broadly.

Our greatest strength is the exact opposite of narrow specialisation. It is our ability to integrate broadly.

In aviation, standard operating procedures, checklists, and standardised practices are as ubiquitous as they are desirable. But they are tools of a kind learning environment, and with time almost anything that is predictable and standardised will be better achieved by automation than by a human operator. It is when events stray beyond the standard, routine, anticipated, or procedural that artificial intelligence might come a cropper. Unfortunately, as is so often seen in accident investigation, this can also be true of the human! It is often the inability of crews to apply their brains outside the constraints of procedure and automation that catches them out. And it’s fair enough; we have all been trained to think this way and have become bound by the rigid framework of regulation and SOP. Like automation, SOPs are both a crutch and a trap.

Like automation, SOPs are both a crutch and a trap.

The Black Swan and thinking outside experience

In flight, the epitome of the wicked learning environment in action is a black swan event: one characterised by its extreme rarity, its severe impact, and the impossibility of predicting it in advance. What the pilots have to deal with lies outside the realm of their previous experience, and here – in theory at least – is where the human potential of the aircrew should come into its own.

When discussing thinking outside experience, David Epstein introduces the character of the 17th-century father of astrophysics, Johannes Kepler (whom all commercial pilots will recall at least vaguely from the General Navigation syllabus of the ATPL, where they were introduced to Kepler’s laws of planetary motion). Kepler was a man of extraordinary intellectual wanderings. In fact, he wandered so far beyond the boundaries of previous thought that, as he mused on the mechanics of the universe, there was no evidence to draw upon to support his suppositions. This forced him into the use of analogies.

Analogical thinking is (as far as we are aware) a uniquely human capability: linking the known to the unknown in order to reason through problems we have never seen before, or problems that appear in unfamiliar contexts. It also gives us the ability to get our heads around a problem or concept that we cannot see at all. A basic example, which harks back to my experience of studying for the ATPL, was listening to my instructor explain the ‘black magic’ of electricity as similar in concept to water flowing from a tank, where the water quantity represents electrical charge; the water pressure, voltage; and the water flow, electrical current.

It should come as no surprise to learn that the wider our experience of the world around us, and the broader our knowledge of and interest in a range of topics and activities, the greater our facility for making analogies and drawing links between disciplines and intellectual domains. Kepler was a master of this, and the evidence shows that to make an intellectual breakthrough worthy of the Nobel Prize for science, you need to be able to do the same. Aircrew who have successfully managed complex emergency situations often cite the importance of experience beyond their cockpit knowledge.

Relying on experience from a single domain is not only limiting, it can be disastrous.

David Epstein

Captain Sullenberger’s “Miracle on the Hudson” is probably the most famous of these events in recent times, and is seen as a major endorsement of the critical role of non-technical skills in a good outcome. Prior to the US Airways accident, Sullenberger had been a member of an aircraft accident investigation board in the Air Force, had worked as a NASA aviation safety research consultant evaluating cockpit systems, and had co-authored a technical paper on crew decision-making errors in aviation. He had also collaborated with NASA on a blueprint for safer pilot training, procedures, and standardisation; with the NTSB on airline procedures and training for emergency evacuations; and had led the development and delivery of his airline’s first CRM course. Sully himself made the point that, “I’ve been making small, regular deposits in this bank of experience, education, and training. And on January 15 (2009) the balance was sufficient so that I could make a very large withdrawal.”

Range as Non-Technical Skills and Non-Technical Skills as Range

David Epstein’s defence and advocacy of multidisciplinary range in education, training, interest and intellect is absolutely applicable to aircrew for success in aviation. It is not only a question of unlocking better decision-making and problem solving capacity, it is about linking wider lessons from our experiences beyond flying that will inform our style in areas like leadership, communication, decision-making, error management, stress and fatigue management, and team skills. 

Crew Resource Management (or the study of non-technical skills) is itself a boundless multidisciplinary world, involving a huge breadth of topics, and it bridges the theoretical and the practical. From information processing (how we think, perceive, and construct mental models) to the psychology of personality and behaviour, it spans pure science, social science, quasi-science, the art of communication, social skills, and much more. With such scope it almost goes without saying that non-technical skills in aircrew are not grown in annual one-day training. They are a composite of all the inputs into our personal experience, from the professional world and far beyond. For the benefit of the aviation addicts, aficionados, and workaholics out there: just remember to lift your head out of the cockpit from time to time, so to speak. Be like Kepler: stay curious!

Helicopter Hoisting and the Human in the system:

Applying the 3Hs to decision-making during helicopter hoist operations.

On 29 April 2020 at Biscarrosse, near Bordeaux in France, two crew members of a French Air Force H225 fell to their deaths when a hoist cable parted during a winch training exercise. (A summary report in English is available from Aerossurance.) The tragic outcome, coupled with the recently published accident report – which, as you would expect, dedicates significant attention to the technical aspects of the hoist and hoist cable – forced me to consider how well I understand, and what I still have to learn about, the characteristics and dynamics of hoists and hoist cables. The answer, as it turns out, is more than I care to admit.

This potentially extremely complex subject could easily snowball into something that goes well beyond the capacity of my basic schoolboy physics, so my aim here is to keep things brief, simple, and as limited – and relevant – as possible to what a crew need to understand to be able to take sound and informed decisions on risk during hoist operations.

The Three Hs

Any hoist system on a helicopter can be considered to be made up of three elements. 

  1. The Helicopter
  2. The Hoist
  3. The Human

The hoist, which is itself a machine, is mounted on a dynamic platform: the helicopter. The forces that affect the cable are principally a combination of these two elements – the movements of the aircraft and of the hoist itself. However, they are also affected in no small measure by the input of the humans in the system: variously, the hoist operator, pilot, winchman, survivor, and maintenance technician. Our human interactions with the other elements of the system can affect the integrity of the system as a whole, and it is on these interactions that I want to focus our attention.

There are two important facts to highlight at this point about the system described above.

  1. Despite the complexity of the three Hs, the hoist system has a single point of failure: the hoist cable.
  2. The first two elements of the system (the performance of the helicopter and the hoist) are often taken for granted by the humans.

For those for whom dangling another human being on the end of a wire rope beneath an aircraft in flight has become routine and normalised, it can be difficult to remain attentive to risks that would seem self-evident to others looking in from the outside. Trust in man and machine is a hugely important component of helicopter rescue. Trust in our colleagues and equipment is a product of experience and professional judgement – both theirs and ours – and it requires an unceasingly questioning mindset.

Blind trust results when we stop asking questions about the integrity of the equipment and the competence of its operators, and it has no place in high-risk environments. Simply knowing, for example, that the maximum hoist load well exceeds the weight of the person currently hanging beneath the aircraft, and reassuring ourselves that it has certainly been tested to a safety margin well above that limit, is not sufficient knowledge to apprise ourselves of the ‘what ifs’ and the risks we are currently accepting in its operation.

We also need to understand how and why cables can fail, which means having a basic grasp of the technical and load bearing characteristics of the hoist mechanism and of the cable itself. We need to know when to be concerned about their condition, and how to assess the impact of any incident involving the cable. We need to know how to look for the warning signs, both when pre-flighting the aircraft and during hoist operations.

Anatomy of a 19×7 helicopter rescue hoist cable

How cables fail

To simplify as far as possible: cables fail when they become overloaded beyond their ultimate limit. This is called tensile overload, and it comes in two forms, static and dynamic.

Static Overload

Static overload is a slowly applied load which gradually stretches the wires in the cable. When the amount of stretch exceeds the cable’s capacity to stretch, it starts to deform, damaging the cable in a non-recoverable way. The capacity of a cable to stretch is a function of its length, which means that the more cable you have out, the greater its elasticity and ability to absorb strain energy. Conversely, the shorter the length of cable out, the more easily it is damaged in this way. Permanent damage to a standard helicopter cable will occur under a static load of around 900 kg or more.

Although the listed ultimate static load of a standard 3/16-inch diameter helicopter cable seems reassuringly high (1,500 kg), this figure drops dramatically – by over 30 per cent – in a fatigued cable. Furthermore, the most likely cause of static overload on a winch cable, a snagging of the winch hook, can quickly and easily exceed this figure, never mind the lower limit at which the cable becomes permanently deformed.

Dynamic Overload

Dynamic overload happens when a load is applied rapidly, multiplying the force on the cable far beyond the static weight involved, and it is by far the greatest threat for a sudden and catastrophic cable failure. An example of this would be a fall onto the cable. The instant a moving body is stopped, its kinetic energy is completely transformed into internal strain energy in the cable. That kinetic energy is a function of the mass of the falling object and the distance through which it falls, and because it increases with the square of the impact velocity, the resulting force can exceed the ultimate load of the cable after a very short fall. This is called ‘shock-loading’, and in a catastrophic case the dynamically induced force exceeds the cable’s ability to absorb the energy, causing the cable to fail and separate.

A cable broken under an extreme dynamic load

Shock load testing shows that even a surprisingly small load falling a short distance can generate sufficient force to part a cable. For example, an 80 kg winchman falling only 81 cm (less than 3 ft) onto a winch cable generates a dynamic force of 972 kg – more than enough to break a standard helicopter winch cable, particularly if he falls from the helicopter cabin with very little cable paid out to absorb the load (Zephyr Int., Information on Helicopter Hoist Wire Rope, Failure Modes, and Rejection Criteria, pp. 37-8).
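We can reproduce numbers of this order with the standard impact-factor formula for a mass arrested by an ideal elastic cable: F = W(1 + √(1 + 2hk/W)), where W is the static weight, h the fall distance, and k the effective spring rate of the paid-out cable. The sketch below assumes an illustrative spring rate of 60 kN/m for a short, stiff length of cable; real stiffness varies with cable length, construction, and condition:

    import math

    def peak_cable_force(mass_kg: float, drop_m: float,
                         spring_rate_n_per_m: float, g: float = 9.81) -> float:
        # Peak force (N) on an ideal linear-elastic cable arresting a
        # rigid mass: F = W * (1 + sqrt(1 + 2*h*k/W)), W = static weight.
        w = mass_kg * g  # static weight in newtons
        return w * (1 + math.sqrt(1 + 2 * drop_m * spring_rate_n_per_m / w))

    # 80 kg winchman falling 0.81 m with little cable paid out.
    # The 60 kN/m spring rate is an assumed, illustrative value.
    f = peak_cable_force(80, 0.81, 60_000)
    print(f"peak load: {f:.0f} N (~{f / 9.81:.0f} kgf)")

With that assumed stiffness the sketch returns roughly 9,500 N (around 970 kgf) – in line with the cited figure, and well above the breaking load of a fatigued cable (roughly 1,000 kg, taking 30 per cent off the 1,500 kg listed earlier). It also shows why more cable paid out, meaning a lower spring rate, softens the blow.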

An excessive load in either static or dynamic form, including the sudden release of a load from the cable, can cause internal damage to the wire core with no visible trace on the external strands that might give away its condition to the operator or maintenance technician.

Why cables fail

Why cables fail is an altogether more complex question to answer. Either their tensile limitations are exceeded, or they become degraded and weakened, causing them to reach tensile overload before they should. This degradation could come about for any number of reasons to which we should be attentive, including fatigue from normal usage, poor maintenance practices, poor operating practices, manufacturing imperfections, corrosion, abrasion, and even damage from lightning strike or electrical discharge.

Poor operating practices that can twist a cable apart or seriously weaken it include stresses such as the rotation of a load upsetting the torsional balance of the wire rope, thus creating a birdcage, or the sudden release of a load (such as can result from a rescuer hitting the ground quickly, or using a quick-release mechanism while in the air), which results in dynamic unloading and can cause a rebounding in the internal core of the wire rope.

Abrasion, corrosion, and other forms of damage can be minimised through meticulous operating practices. Abrasion is typically caused by rubbing against the airframe, skids, hook, etc., and reduces the local breaking strength of the cable. Rotation and excessive flexing at the ball end are often caused when crews connect to or disconnect from the hoist hook in the helicopter cabin. High heat stresses in the cable can occur due to friction, electrical discharge, and lightning strike, and can result in the weakening or cutting of individual or multiple strands.

The Human Factor

The human in the system interacts with all of these factors. One example of how this interaction between the human factor in operation, maintenance, and design can affect operational safety lies in inconsistencies in how we count hoist cycles to manage cable fatigue life. Hoist manufacturer Goodrich says that a hoist cycle in flight is an extension and retraction of cable “regardless of the length of cable unwound and the load used”.

Counterintuitively, repeated cycling of the hoist under no load causes more fatigue to the cable than normal use with a load attached. (This is due to the effect of the hoist tension rollers, which tension the outer strands but not the cable core, leading to improper balancing and loose strands, or bird-caging, particularly towards the ball end.)

Using a cable to its maximum fatigue life will cause it to deteriorate from the inside out. As a result, rescue hoist cables are operated on a much less severe duty cycle than other wire rope applications. However, many crews and maintenance technicians don’t log the paying in and out of cable in the cabin, or during some maintenance tasks or preflight checks, which will seriously erode that safety margin. Miscounting of hoist cycles is cited as a contributory factor in the Biscarrosse accident.
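A minimal sketch of what disciplined cycle logging could look like – the field names and structure here are hypothetical, not any manufacturer’s format – applying the Goodrich definition that every extension and retraction counts, loaded or not:

    from dataclasses import dataclass, field

    @dataclass
    class HoistCycleLog:
        cycles: int = 0
        entries: list = field(default_factory=list)

        def record(self, context: str, load_kg: float = 0.0) -> None:
            # One cycle per extension + retraction, regardless of cable
            # length paid out or load attached -- including the no-load
            # runs that fatigue the cable core the most.
            self.cycles += 1
            self.entries.append((self.cycles, context, load_kg))

    log = HoistCycleLog()
    log.record("preflight cable run", load_kg=0)        # still counts
    log.record("training winch, live crew", load_kg=95)
    print(f"cycles against fatigue life: {log.cycles}")

The point is not the code but the discipline it encodes: if the no-load runs never make it into the log, the fatigue-life count is optimistic from the start.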

Another rescue hoist cable failure occurred in Australia in February 2020, leading to a Safety Advisory Notice being issued. Again, the human factor played an important role: the probable cause was determined to be the improper stowage of the hoist hook assembly following hoisting operations, which led to accelerated wear of the hoist cable close to the ball end where it enters the hook. Fortunately, the failure occurred during a cable conditioning flight with a dummy weight on the hook.

The design of, knowledge of, and adherence to appropriate procedures played a role in both these accidents. In the case of the Biscarrosse accident last year, mixed messaging over the correct way to handle a hoist malfunction and the negative transfer of procedures applicable to other helicopter and hoist types both contributed to a decision to continue using a partially jammed hoist. Effective and unambiguous procedures have a hugely important role in guiding decision-making, and should help to prioritise safety in the decision-making process.

A framework for decision-making: Human, Helicopter, Hoist

Placed in the framework of the 3 Hs, it goes without saying that when winching our priorities should always be first the Human, then the Helicopter, and then the Hoist. When in doubt about the integrity or functioning of the hoist or cable, the immediate action should always be to get the human cargo on the cable to a safe height and position before any further diagnosis of a malfunction takes place.

As with any other system, our technical knowledge will inform decision-making and the understanding of relative risk vs operational imperative, but we could probably all start from the same basic principles of: 

  1. Human, Helicopter, Hoist.
  2. When in doubt; stop and make safe.
  3. Following an incident with the hoist or cable, never take cable structural integrity for granted.
  4. Inspect the cable as if you are the one hanging on it!

On Lookout and helicopters

The importance of an effective lookout. We’ve heard it from day one in aviation, a constant through our flying training days and beyond. The dangers of mid-air collision, obstacles, and controlled flight into terrain (CFIT) will always be there. 

These are not static threats, however; they are always evolving. Take the proliferation of drones as one example. Our responses to these threats have evolved too: traffic alerting and TCAS; Terrain Awareness and Warning Systems; Enhanced Ground Proximity Warning Systems; even synthetic terrain overlays. All are designed to enhance situation awareness and counter these threats. And while they are all amazing tools, their efficacy has an unintended consequence: the human factor.

“Technology driven complacency coupled with the ever-growing internal distractions of the modern cockpit are powerful forces.”

Technology driven complacency coupled with the ever-growing internal distractions of the modern cockpit are powerful forces. In recent months I have fallen victim to them not just once, but on three different occasions, coming uncomfortably close to (in order of appearance) a drone, multiple birds, and a microlight aircraft. On each occasion I took it as a salutary warning, and a lesson to go back to basics in dedicating more time to visual scanning outside the windscreen. But each time it happened again. Why? In a more in-depth debrief with myself on why I had been repeatedly caught out, I came up with the following 10 reasons, and one solid conclusion: lookout should be a bigger deal for helicopter crews than for almost any other group in aviation.

So here are 10 reasons why helicopter crews (in particular) need to be much more concerned about lookout.

THE EXTERNAL ENVIRONMENT

1. Helicopters are much more likely to be flying visually than most other airspace users.

Visual flying means just that. Eyes outside. The Mk1 eyeball is our principal sensor in building and maintaining a picture of the world around us.

2. Helicopters operate more frequently, and spend more time manoeuvring, at low level in the obstacle environment. Lookout isn’t just about mid-air collision.

3. Helicopters frequently operate below or beyond radar cover. Those friendly controllers can’t watch your back if they can’t see you.

4. Helicopters tend to share their airspace with birds, particularly in coastal areas.

5. The threat from drones is especially relevant to helicopters, has proliferated, and will continue to do so.

THE INTERNAL ENVIRONMENT

6. Latest generation rotary wing aircraft are designed for more ‘hands-off’ flying, leading to a delegation of flight-path management to automated systems and a slowing of response times.

7. A proliferation of aids to traffic and terrain identification and separation, and even synthetic terrain overlays, contributes to a complacent sense of safety: that the machine will give us a reliable picture of the world around us and warn us of any threats.

8. The ergonomics and tools of the modern cockpit mean much more information is presented and more cockpit systems management is required, resulting in less time heads up managing the aircraft flight path visually.

9. The quantity of information in the cockpit that can be interrogated, and our desire to interact with it, make for a powerful source of distraction from the outside environment that is difficult to counteract.

10. The breadth of resources represented by the Electronic Flight Bag and its interactive functions provides another new potential source of distraction from the external visual scan.


The first four on this list are nothing new. However, the rest demonstrate how the pace of technological change in the rotary wing cockpit, as well as in aviation more generally, has accelerated the change in the risk profile around lookout.

Proper scanning requires the constant sharing of attention with other flying tasks, so it is easily degraded by conditions such as distraction, fatigue, boredom, illness, anxiety or preoccupation. We, as humans, are not good at resisting these effects. Often, the root cause of many non-critical distractions is poor workload management bringing our heads in at inappropriate times. After considering my three recent lessons in lookout, whenever I am drawn heads-in I am learning to discipline myself with the questions:

“Do I really need to be doing this now?” 

“Can it wait?” 

“Is this the right time?” 

When you have tasks building up in your short-term memory, when you want to respond to questions or requests for information from other crew members, or when you want to get ahead of the aircraft, one of the hardest things to do is to sit on your hands, raise your head, and cast your eyes out of the window.

Developing Competency in Problem-Solving and Decision Making: The importance of Process vs Outcome.

Ignoring the ever more insistent advice coming from his beleaguered co-pilot, the autocratic captain once again initiated a climb into the base of a dark, thick layer of cloud. It was a bad decision, but it had been taken early, assuredly, and without much consideration. Within less than a minute it was clear to both of them that something wasn’t right; the nose came up fast, the torque was high, but no rate of climb was indicating and the altimeter was stubbornly still. The HSI was moving though, and soon a gradual turn through north developed into a steadily increasing yaw through 180-360-540º. The helicopter broke cloud shortly afterwards 30 degrees nose down in a terrifying, unrecoverable descent, its crew utterly disorientated. There was a sudden jolt, a loud crash, and the inevitable red screen of death spread across the vista in front of them.

Since the start of this year I have been involved with introducing the concepts of Competency Based Training and assessment to the instructor cadre. Using a flight simulator to role-play different crew behaviours, and to facilitate instructor understanding and assessment of those behaviours, is an illuminating experience, not least because an identical scenario can – and will – give you an entirely different outcome each and every time. Human behaviour is the ultimate variable.

One of these core competencies is that of Problem Solving and Decision-Making (PSDM). It is a particularly difficult competency to debrief and assess correctly because most people find that their view on the merits of a decision tends to cloud their ability to assess the quality of the decision-making process itself.

Process vs Outcome

Separating the quality of a decision from the quality of the processes which lead to a decision being made sounds like it should be straightforward, but it isn’t. This is especially true if we judge a decision to be a bad one, or a wrong one, when our negative perception of the choice can easily overwhelm what could have been a perfectly acceptable, collaborative, and well-communicated thought process.

Separating the quality of a decision from the quality of the processes which lead to a decision being made sounds like it should be straightforward, but it isn’t.

The distinction between the quality of the decision-making process and the decision itself is an important one to make in the context of training for competency, because although we won’t always make the right, or the best, decisions in any given situation, the ability to develop and improve our decision-making processes is what competency-based training is all about. The idea is that in the long run the quality of the decisions themselves will also improve, as our behaviours build more inputs into problem solving, and more checks and balances against bad judgement or bad choices.

Two examples of behavioural markers used to assess pilot decision-making are:

  • Identifies and considers appropriate options
  • Perseveres in working through problems whilst prioritising safety

It is easy to argue that good decision-making behaviours have not been demonstrated successfully if they lead to a poor decision with a bad outcome. For example, we might state confidently that a crew that end up in a big smoking hole in the ground obviously didn’t “Identify and consider the appropriate options”! We might also conclude that a crew can’t have been “prioritising safety” if the outcome of an event is an unsafe act. However, this would be intellectually lazy.

The effect of Outcome Bias

Outcome bias happens when we believe we can prove a decision was the correct one because it had a good outcome. In fact, we evaluate most of the decisions we take every day on the basis of outcome, which makes it a very powerful dynamic in how we review or judge our own decisions and those of others. But it’s not to be trusted. If, for example, we take a chance in flying on ten miles to the nearest point of land rather than ditch an aircraft in the sea, even though the checklist and all the indications tell us to land immediately, can we congratulate ourselves on a good choice well made just because we manage to put down safely on the cliff top?

But outcome bias can swing both ways, and we must therefore be able to apply the flip side of the argument. The fact that you have a really bad outcome does not mean you made a really bad decision. You could have had a bad outcome as a result of a perfectly good decision (which others will be quick to judge disproportionately negatively on the basis of that outcome). Just as it is important to look beyond the outcome when assessing the validity of a decision, we must be able to look beyond the decision itself to be able to assess the decision-making process.

Positive Behaviours in PSDM

So what are the behaviours that we are looking to develop in order to demonstrate competency in the processes of Problem-Solving and Decision-Making?

In most taxonomies of behavioural traits the decision-making behaviours that we want to improve fall roughly into three areas:

  1. Information gathering, verification, and prioritisation of options.
  2. Capacity for flexibility, adaptation, and resilience.
  3. Revisiting and assessing outcomes.

Sharing thought processes

If you fly as a crew, the most obvious step towards good decision-making is encouraging active participation in a group decision-making process. That means verbalising your own thought processes, talking through options out loud, and putting forward the reasoning behind your decisions. At the same time it requires you to elicit the same process from others and be open to their input. This is not as easy a skill as it sounds, particularly for those who believe they already know the solution to a problem, or are certain about their approach to a decision. Creating the habit, and ingraining these decision-making behaviours, so as to reap the benefits when we are faced with genuinely difficult problems and decisions, is what training the competency aims to achieve.

In one training scenario I ran recently, the decision-making process went like this:

[In the throes of a precautionary landing…]

  • Captain to Co-pilot: “I’m putting it on the beach.”
  • Co-pilot to Captain: “Ok…?” 
  • Captain to Co-pilot: “The decision is taken. I’m putting it on the beach.”

Landing on the beach wasn’t a bad decision. In fact, in the circumstances, it was a good one. However, there were a number of other valid options available to the crew. When debriefed on their decision-making process, both Captain and Co-pilot insisted that they had fully considered the other options and decided against them. I’m almost certain that they did, but when I put it to them that the only decision-making that had been shared out loud between them was the Captain’s concluding comment, “The decision is taken!”, they protested heartily. Sometimes, to make a change, we have to be shown what’s wrong with how we currently do it. Verbalising the thought processes that run through your head takes discipline and practice, but it is the key to unlocking a collaborative decision-making process.

Verbalising the thought processes that run through your head takes discipline and practice, but it is the key to unlocking a collaborative decision-making process.

Countering complexity with prioritisation

We may not always have the time available to identify the root of a problem and fully consider all the options open to us. If a problem is too bewilderingly complex for that, it is about prioritising simple, safe steps that allow you to see the wood for the trees. Easy to say, not so easy to do. But time and again in incidents with multiple failures and confusing indications, those who have coped successfully tell us that they did it by focusing on what was working for them instead of what was not. That is the reason the ultimate aviation adage of Aviate, Navigate, Communicate is so enduring: if nothing else is going for you, fly the aircraft!

Changing Tack

Flexibility in your decision-making is not just about considering alternative courses of action; it is about being able to adapt your response to changing circumstances. This requires the discipline to revisit a decision after it has been taken and ask yourself whether or not it is achieving your objective. This process of review might also, crucially, depend upon you having the humility to accept that it wasn’t the right course of action, or perhaps no longer is. One of the reasons this is such a challenging part of the decision-making process is that we all tend to see or seek out the things that confirm the veracity of our decision, and can easily be blind to evidence that proves the contrary. Having other people in the loop on your decision-making process can be an important countermeasure to this confirmation bias.

Developing Competency in PSDM

Back to the real question, however, which was how we train these skills to develop competency.

Although we won’t always make the right, or the best, decisions in any given situation, the ability to develop and improve our decision-making processes is what competency-based training is all about.

When we are debriefing non-technical skills we need to teach people to focus on processes, not on outcomes. Instead of training learned responses to specific events, failures or scenarios, this allows us to ingrain the habits and team behaviours which will support a good outcome in any and all situations. And that is the rationale behind training for competency. As far as the particular competency of Problem-Solving and Decision-Making is concerned, we need to focus on developing a conscious process in crews which is open-minded, out-loud, collaborative, and iterative. Getting there requires us to examine and evaluate our own performance in these skills as often as possible.

Competency based training. By trying to solve one training problem, are we creating another?

At the beginning of this month I tuned in to the Royal Aeronautical Society’s webinar titled Flight Crew Competence: Assessing What and How? The webinar aimed to address the concepts of Evidence Based Training and Competency Based Training (EBT/CBT) and to consider the impact they have had on the experience of instructors, examiners and trainees. The panel brought together viewpoints from flight operations, academia, learning and development, and psychology, sharing insights about the history and development of the programme, and how competencies are identified, observed and trained. As you would expect, I found some areas more thought-provoking than others, but by the end of it I was left with one nagging question. It was this:

By trying to solve one training problem by establishing CBT techniques in the industry, are we creating another with the complexity and extent of the knowledge demanded of the instructor cadre to train these methods effectively?

I was lucky enough to be able to put this question to the panel, and the response was an interesting one. In short, they agreed that this could indeed present a challenge to the practical success of converting CBT from theory into practice on the aviation front line.

The problem that CBT purports to solve.

How do you define a competent pilot? 

The point is that whatever definition you come up with, it will inevitably change and evolve over time. What we deem to be a good pilot now looks quite different to what it did 50 years ago.

Evidence Based Training arose from an industry-wide consensus that operating practices, technology, and our understanding of teaching and learning techniques had all moved on, and that the time was right for a strategic review of the way we approach recurrent and type-rating training. The aviation industry has changed and progressed, and how it trains its crews needs to change and progress with it.

The problem with traditional methods of training and checking is, firstly, that they are quantitative. For example, they require you to meet a quantified minimum of hours of practice as a standard, which says nothing about whether or not an individual is actually competent. Some will need more hours and others fewer to become competent in any given skillset.

Secondly, they are prescriptive. That is to say, the training required for any individual is determined by the testing criteria (which are set top-down by the regulator) rather than by an individual’s progress towards being capable of doing the job they are training for.

Thirdly, the testing criteria themselves are outdated. They are mostly based on evidence of accidents relevant to airline operations alone, and principally derived from accidents to early generations of jet aircraft. The traditional philosophy is based on the belief that simply repeating pilot exposure to ‘worst case’ events in training is enough to ensure competency. Over time, as new incidents and accidents occurred they were added to the requirements, resulting in ever more crowded training programmes. This created a ‘tick box’ approach to training, one which often overlooked the more complex contributing factors behind the technical failures that may have triggered an accident.

As an example of the dislocation between traditional mandated training and the real world, consider the unstable approach paradox. When a final approach is unstable, pilots are expected to go around. Evidence shows that they usually don’t. But when they do, the missed approach is almost always badly flown. In contrast, when they land from an unstable approach, 98% of the time they do so with no issue. The requirement to go around from an unstable approach therefore arguably works against safety.

But let’s look at the root cause of the problem. Go-arounds in training are usually performed with one engine inoperative, from a defined minima, and with no visual references. Go-arounds in real life are almost always flown in a lightweight aircraft with all engines operating, and usually visually. We are not training how we are flying.

The proposed solution: Evidence Based Training and Behavioural Competencies.

Evidence Based Training has been developed on the basis that pilots should be trained to become competent in the specific demands and challenges of their particular aviation operation. This can be done by using evidence drawn from relevant accidents, incidents, training reports, and academic studies to design training programmes that meet those specific requirements.

Having identified the kind of training required – the things that you want your pilots to be competent in to carry out their role safely – the next challenge is how to develop and evaluate crew performance according to a set of competencies. This is where things become much more complex. We are no longer able to separate training into a series of individual skills or manoeuvres that can be simply (and largely unthinkingly) ticked off by an instructor. Instead we need to frame competency within the context of an integrated set of both technical and non-technical skills.

Under a competency based system trainees have to be able to demonstrate – and instructors have to be able to assess effectively – all of the following:

  1. Application of Procedures
  2. Communication
  3. Aircraft Flight Path Management (automation and manual)
  4. Leadership and Teamwork
  5. Problem Solving and Decision Making
  6. Situation Awareness
  7. Workload Management

It is relatively easy for flight instructors to concentrate on the technical side of training. It’s definable, it’s measurable, and it is more easily assessed and critiqued. The same cannot be said for the non-technical skills listed above. They all interplay with each other and are not easily picked apart. The exercise of judgement in assessing the non-technical aspects is much more subjective compared with assessing the success of a rejected take off, or actions in response to an engine fire. And there are lots of non-technical categories to assess. They require the assessor to be attentive to many different aspects of performance at the same time.

The best and most objective system that has been put forward to observe and assess these kinds of complex behaviours in action is a behavioural marker system with an accompanying grading matrix. A behavioural marker system is basically a taxonomy (a categorised list) of pilot behaviours. If a behaviour has been observed in flight, it can be judged as evidence of competency in that category. For example, if the behaviour “seeks and accepts assistance, when appropriate” is observed, it is accepted as one piece of evidence of competency in the category of Workload Management.

Once those behaviours have been collected, or ‘observed’, in each category, an instructor can grade competency against a matrix that requires them to assess how many have been observed, how often, with what level of success, and with what resulting impact on safety. If it sounds technical, that’s because it is. Not only are there multiple competencies to consider; arriving at a grade is itself multifaceted.
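
To make the mechanics concrete, here is a minimal sketch of how such a marker taxonomy and grading matrix might hang together. Some of the marker wording echoes the examples quoted in this article; the remaining markers, the 1–5 scale, and the grading arithmetic are invented for illustration, and are not any regulator’s actual matrix:

    # Toy behavioural marker system. Some marker wording echoes examples in
    # this article; the rest, and the grading thresholds, are hypothetical.
    TAXONOMY = {
        "Workload Management": [
            "seeks and accepts assistance, when appropriate",
            "plans, prioritises and schedules tasks effectively",
        ],
        "Problem Solving and Decision Making": [
            "identifies and considers appropriate options",
            "perseveres in working through problems whilst prioritising safety",
        ],
    }

    def grade(competency: str, observed: list[str]) -> int:
        """Grade 1-5 from the fraction of listed markers actually observed.
        A real matrix would also weigh frequency, success, and impact on safety."""
        markers = TAXONOMY[competency]
        seen = sum(1 for marker in markers if marker in observed)
        return 1 + round(4 * seen / len(markers))

    print(grade("Workload Management",
                ["seeks and accepts assistance, when appropriate"]))  # -> 3

Even this toy version shows why the judgement is multifaceted: the grade is only as good as the instructor’s ability to notice the behaviours in the first place.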

The problem of instructor training – how to train the trainers.

The methodology is not perfect, but then it doesn’t claim to be. Nor am I suggesting that it doesn’t work. It does. The problem is that for it to be able to work effectively and achieve that paradigm shift in training practices that Competency Based Training purports to do, it depends in turn upon instructors who are themselves competent in how to apply these techniques.

Instructional Competency

Instructional competency in Competency Based Training techniques depends upon:

Knowledge

  1. A full understanding of the rationale behind the concept and its key principles. 
  2. An in-depth knowledge not only of the competencies themselves, but of the behavioural marker taxonomy behind their evaluation and assessment.
  3. An understanding of the assessment criteria and grading methodology.

Skills

  1. This knowledge underpins the development of root cause analysis, the key instructional skill in making a success of competency based training. Understanding how to identify the root cause of an error or an adverse cockpit event takes practice and a capacity for analysis.
  2. A skilled instructor will be able to understand the KSA (Knowledge, Skills, Attitudes) demanded of trainees, and how applying them – particularly attitudes – improves crew competency.
  3. The ability to apply the grading matrix to come to as objective a judgement as possible on an assessment outcome. The fact that a pass/fail can be based on any of the competency areas including non-technical skills makes this decision-making skill even more challenging.

Attitudes

  1. Empathy is a key trait to allow an instructor to apply the above skills successfully.
  2. Buy-in: For some, adapting long engrained practices to these changes will mean a paradigm shift in instructional technique. Instructors have to want to take this step, and have an open mind to its benefits. 

The EBT/CBT initiative to address an outdated and sometimes anachronistic system of pilot training has led to a paradigm shift in thinking about how flying training should be approached, and to major changes in training philosophy and methodology. The weight of all this has fallen on training departments and instructors, requiring them to get to grips with new knowledge and the practical skills to put these changes into practice. But despite the breadth and depth of these new demands on training teams, and the simple fact that the success of any competency based programme will depend upon the competency of the instructor cadre itself, very little extra attention has been paid to what we demand of our instructors, how we choose who is suitable, and what we should be doing to teach them the skills to train others effectively.

The training on EBT/CBT that instructors are given is sometimes not as comprehensive as it might be. It is sometimes passed on by people who themselves don’t have a firm grasp of the concept. It can leave more questions than answers, and often involves minimal experience of actually applying the techniques in practice before instructors are sent off to spearhead the change. As ever, even the Authority admits in CAP 737 that “In terms of operational flight safety, instructors hold one of the most influential positions in the industry…[but, although]…the regulation is clear on the expectation and abilities required of the instructor, it offers little in the way of guidance.”

The problem of instructor selection.

It is not just about ensuring that we train the trainers adequately either, because the instructors that go on to lead CBT programmes need to have more than just the knowledge and skills. They need the right attitudes as well. Success will depend above all on the attitude to embrace change, and the attitude to learn a new and challenging set of skills. Not everyone will see it as progress, and not everyone will buy in to the changes.

Accepting a shift in the balance of training towards non-technical skills and briefing and debriefing using facilitation isn’t for everyone. Some instructors simply aren’t comfortable with facilitation as a valid and valuable instructional technique. Facilitation depends more on the character of the instructor than other forms of instruction. They might prefer a more hierarchical and autocratic teaching style, and will continue to defend its efficacy, asserting that facilitation doesn’t get the best from their trainees and undermines their own status and authority as an instructor.

Even more won’t have the motivation to put in the effort required to learn and apply the new knowledge, preferring to remain in their comfort zone and fall back on the easier path of a familiar style that ‘has worked for all these years’. How to tell the willing from the reluctant, and the motivated from those stuck in the past, is one of the challenges such significant change brings. And how we then separate the two groups, so that CBT can achieve its goals led by an instructor cadre willing to invest in it, is a more troublesome question than it might first seem.

The problem of systemic and cultural factors

A further issue, a product of a long-held culture within aviation, is that instructors are not necessarily chosen by the system for their instructional potential. Ideally the system would select on the basis of character – for empathy and likability as well as other more obvious flying instructor traits – and for expertise and dedication in the area of teaching and learning as a vocation. Instead, the system often selects its instructors on the basis of technical flying skills. The RAF, for example, has always demanded an above-average ability in the air of those who go on to instruct. This may seem like a sound policy, but selecting pilots to become instructors on the basis of their technical skills alone means they are being chosen for factors unrelated to their knowledge, interest, and skills in teaching, learning, and development. It also ignores the fact that under the new paradigm, technical skills sit alongside a raft of non-technical skills that also need to be taken into account. Worse than being selected for their technical skills, many instructors reach their position by virtue of seniority, status, or reward. Many of these are not best suited to teaching this kind of training. Furthermore, the coupling of TRI/E status to seniority often means that those individuals are burdened with other responsibilities as well, forcing them to divide their time and attention between training and other management tasks.

By trying to solve one training problem, are we creating another?

In EBT/CBT we have layers of complexity which themselves need to be explained well enough that the trainers can pass them on to trainees with confidence and competence. A lot of thought has been put into how to solve the first problem, but in solving it, my sense is that we have created a second, which has so far been somewhat overlooked.

Progressing training practices towards the concept of EBT/CBT is an ambitious project of technical and cultural change. Doing anything well takes time, effort, and resource. Furthermore, such significant change was always going to throw up problems of change management, and opposition born of engrained cultural habits. These are not intractable problems, but they do touch embedded beliefs and practices, making it easier to come up with solutions in theory than to put them into practice. The ideal solution is a dedicated training team with a vocation for teaching and learning: people who are specialists in the specific domain of flying instruction, but who are also able to set their instructional skills within the domain-independent context of the theory and practice of education and learning. Aspiring instructors should be prepared to specialise and dedicate their careers to the field of training, leaving others to management and operational leadership.

Small Talk, Big distraction: Taking a look at the sterile cockpit concept through the lens of helicopter operations

The concept of the ‘Sterile Cockpit’ as a defence against distraction is a well known one, even well below the cruising levels of the world’s airline operations. The chances are most helicopter pilots will be familiar with it as a company Standard Operating Procedure. Not so many will know that it is in fact a formal requirement under both US Federal and EASA regulations – in the latter case under PART-ORO – and that non-compliance is therefore a violation of those regulations. The rule was first enacted by the FAA in 1981, which makes it 40 years old this year. Since then, evidence from the airline world and beyond shows that the regulation is still frequently ignored by crews. Non-compliance with the sterile cockpit rule is a very common violation.

WHAT IS IT AND WHERE DID IT COME FROM? THE ORIGIN OF THE STERILE COCKPIT RULE 

The term ‘Sterile Cockpit’ is used to describe any period of time when crew members must not be disturbed except for matters critical to the safe operation of the aircraft and/or the safety of the occupants. In addition, the rule states that during these periods the flight crew members should focus only on their essential operational activities.

Unpacking this a bit, there are really two halves to this rule, with one objective. The first half is that you shouldn’t be disturbed by anyone. The second half is that you shouldn’t do any disturbing.

Before we go on to discuss the concept in more detail, it helps to have a little background on where it came from in the first place. The rule was created in response to the crash of Eastern Air Lines flight 212 at Charlotte Douglas International, USA, in 1974. The accident investigation determined that one of the principal causes of this accident was that the pilots were distracted by an attempt to visually identify a nearby amusement park whilst setting up for final approach and flying at a low altitude. In this case (as in many others) they were both the disturbers and the disturbed, the distractors and the distracted.

WHAT THE RULE ACTUALLY SAYS

The sterile flight deck procedures were published in Regulation (EU) 2015/140 as an amending regulation to (EU) No 965/2012 on air operations. The associated guidance material is AMC1 ORO.GEN.110(f).

It states that:

Sterile flight crew procedures should ensure that:

  1. flight crew activities are restricted to essential operational activities; and
  2. cabin crew and technical crew communications to flight crew… are restricted to safety or security matters.

The sterile flight crew procedures should be applied:

(1) during critical phases of flight;

(2) during taxiing;

(3) below 10 000 feet above the aerodrome of departure after take-off and the aerodrome of destination before landing, except for cruise flight; and

(4) during any other phases of flight as determined by the pilot-in-command.

As ever, a first scan of the wording hints at the rule’s provenance in airline flight profiles, but look more carefully and its applicability is universal and straightforward. In my opinion it is useful to boil it down even further, to points (1) and (4). (Since the aircraft is moving under its own power, taxiing is by many definitions a phase of flight, and approach and departure fall within the definition of critical phases of flight.) In fact, particularly in terms of its relevance to most kinds of rotary wing operations, we could make it simpler still by agreeing that sterile crew procedures should be applied ‘on demand’.
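
Reduced to bare logic, that boiled-down reading looks something like the sketch below. It is only a toy rendering of the idea – the phase names are mine, and, as the rest of this article argues, deciding what counts as a ‘critical phase’ in a helicopter is a judgement call, not a lookup:

    # A deliberately simple rendering of the boiled-down rule: sterile
    # procedures apply in critical phases of flight, plus "on demand".
    # The phase names below are invented for illustration.
    CRITICAL_PHASES = {"take-off", "approach", "landing", "taxi", "winching", "low-level"}

    def sterile_required(phase: str, demanded: bool) -> bool:
        """True when sterile crew procedures should be in force."""
        return phase in CRITICAL_PHASES or demanded

    print(sterile_required("cruise", demanded=False))  # False
    print(sterile_required("cruise", demanded=True))   # True: applied 'on demand'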

APPLYING THE STERILE COCKPIT TO HELICOPTER OPERATIONS

What distinguishes the sterile cockpit concept in many helicopter flying tasks from its use in routine passenger flight sectors is that it is less of a rule-based procedure. In an airline flight profile you can, by and large, apply the rule: no chitter-chatter below 10,000 feet.

As a rule-based procedure, the sterile cockpit is about good communication techniques and communication discipline. Most of us are aware of occasions when flight safety was compromised because crew members or passengers broke the silence, and our concentration. Although the rule is simple, well known, and easy to apply, it is also easily ignored, or overcome by our very human desire to share a thought, a sight, or to get something off our chest right then and there before the moment has passed. Like many violations, not complying with it reinforces the sense that it is of little importance, because most of the time it will betray no visible consequences. Most of the time.

In helicopter operations, application of the concept is less rule-based and more of a skill-based procedure. Why? Because we are more likely to be interacting with the parts of the rule that require judgement, decision-making, and experience to decide how and when to apply it. Knowing when you have entered or are about to enter a critical phase of flight is not always straightforward, can’t always be based on pre-defined flight profiles, and instead needs you to be able to interpret workload on your own behalf and on behalf of other members of the crew.

JUDGING WHEN TO APPLY STERILE COCKPIT 

When many complex or concurrent tasks are performed in a short time interval, distracting events can cause errors and significant reductions in the quality of work performed. The performance of a non-safety related duty or activity when crew workload is heavy could be the critical event which precludes a crew member from performing an essential task such as extending the landing gear. Just such a case occurred in Japan in 2016, when a SAR-configured AW139 landed on a beach directly next to the scene of a rescue, and high whole-crew workload had distracted the crew of four from checking the gear was down before landing.

The need to assess and interpret workload is particularly acute in the often high-tempo and multi-faceted task environment of SAR and HEMS missions. Defining critical phases of flight for an IFR transit doesn’t take as much analysis and consideration as it can within the constantly changing risk profiles that make up the landscape of some helicopter rescues. Neither does the transit have the same multiplicity of unforeseeable distractions, all of which could be construed as ‘essential operational activities’. Take, for example, the monitoring of multiple communications networks transmitting simultaneously. R/T tends to demand a high proportion of attention to the detriment of other tasks, as well as being one of the most cognitively draining activities in the aircraft. Recognising this kind of thing contributes to your understanding of critical workload thresholds.

The sterile cockpit is about more than just communication, it is primarily about workload.

THE STERILE COCKPIT IN PRACTICE

Highly effective crews tend to use highly task-oriented communication, building in techniques such as ‘round robin’ interjections, or an information acknowledgement sequence. These kinds of advanced communication skills are developed by training and experience, and you recognise them when you see them in practice. These skills are also likely to be accompanied by an enhanced ability to intuitively understand and make judgements about the changing cognitive capacity of fellow team members.

Simply being quiet and not interrupting others is not the correct response to a call for sterile cockpit conditions. This is where the interplay with other non-technical skills, such as crew monitoring, becomes evident. When sterile cockpit is instigated it should be a ‘Red Flag’, warning you to tune into the workload of other members of the crew and up your rate of questioning and monitoring. It should trigger you to be on the lookout for sources of distraction, and to start double checking the critical actions of other team members. It should also cause you to ask yourself whether you can contribute to sharing or distributing that workload.

STERILE AIRCRAFT – EXPANDING THE DEFINITION FROM COCKPIT TO CREW

In helicopter operations this is a whole-crew function, and semantically limiting the concept to a sterile ‘cockpit’, in the sense that it is the domain of the pilots alone, is therefore not helpful. That it is written this way highlights how it was conceived for a passenger jet with a closed-door flight deck and a physical as well as metaphorical distance between the functions of the pilots and their crew. Contrast that with the different physical workspace of a helicopter, and with the important distinction of the more integrated team dynamic typical of helicopter crews.

During CRM training I listen to the perspectives of HEMS, SAR and firefighting helicopter teams. A question was raised recently as to whether any member of the crew could call for a sterile cockpit/aircraft. My answer was, absolutely you can. Doing so will not stop anybody on board from carrying out essential operational activities.

EASA’s regulation on sterile cockpit determines that the rule can be instigated ‘in any phase of flight’ on the judgement of the pilot in command. Whilst deferring to the ever-present authority vested in the figure of the Aircraft Commander, I don’t believe that the ability to call for a sterile cockpit is a role that should be restricted to the Pilot in Command, as the regulation would have you believe. It is not a demand for silence. Instead, it is a clear and known message to everybody else that flags up a high perceived level of cognitive demand on you or other members of the crew. It is expected to be met with an immediate response based on a known procedure and understood rationale. 

Medicalised and SAR helicopter scenarios give a perfect example of how the function of a sterile aircraft can be just as important when requested from back to front as from front to back. A crew working on a medical emergency in the helicopter cabin can be under higher workloads and pressures at different times to the pilots in the front, but the human factors at play remain the same.

In his book Peak Performance Under Pressure: Lessons from a Helicopter Rescue Doctor, Stephen Hearns describes how the relevance of a concept which he first learned from his contact with SAR and HEMS helicopters can be transferred into other high pressure environments. He has been applying it to emergency medical response scenarios, which is where it comes full circle back to SAR and HEMS aircraft. The sterile cockpit concept has a place in any high pressure, high workload environment because it is about the limits of human cognitive capacity and distraction, so why not allow it to be called for by any member of the crew that recognises when they – or anyone else – are being stretched or distracted? Investing the authority to do so in any member of the crew has the added bonus of teaching all of us to be responsible for the business of anticipating risk and empathising with the cognitive load of others.

PUT IT TO USE

How does the Sterile Cockpit rule appear in your Operations Manual? Unfortunately, some helicopter operators have latched on to the fixed wing application of sterile cockpit to flight phases below 10,000 feet, and transposed it, choosing to demand the same conditions in a helicopter below 1,000 feet above ground level, and prohibiting a list of cockpit and cabin activities below that height. This exposes the danger of an over-prescriptive SOP which does not correspond to the realities of how we operate the aircraft. The result is non-compliance with a rule that effectively demands a constantly sterile environment for aircraft that rarely climb above that height. The rule quickly becomes worthless.

When it comes to applying the concept, more than knowing what the rule says, we need to understand what it is trying to achieve. Now we can apply judgement. This is especially true in helicopter operations where the sterile cockpit requires a more subjective judgement of workload, rather than simply applying a rule about communication. Be sensitive to changing conditions of workload. Understand that anyone can demand a sterile aircraft in response to a perceived risk or distraction. And don’t be afraid to use it.

The checklist in the rotary wing cockpit: Understanding what, why, and how.

Do helicopter crews have as good an understanding of the proper use of checklists and checklist philosophy as their airline pilot brethren? Like everyone else, I have worked with checklists since I first set foot in the world of aviation. They are omnipresent. But in my own experience – as far as I can recollect – I was never taught anything specific about the ‘correct’ way to interact with them.

Many of those who populate the rotary world are not brought up in a multi-crew environment from the beginning of their aviation careers, as most airline crews are. On the contrary, the likelihood is that they served apprenticeships in the single pilot cockpit, where interaction with a checklist is necessarily very different. Even many military pilots come from a single pilot background for much of their careers. Those that don’t will still recognise that there is a less rigid application of multi-crew SOPs in military flying than they might have encountered subsequently in the civilian world. All of this leads to a different upbringing in our relationship with the checklist, and perhaps a less complete appreciation of how to use it and why it is so important.

The role of the checklist in how we interact with and operate aircraft is integral to the liveware-hardware interface between human and machine, and as such deserves more attention than we often give it.

The role of the checklist in how we interact with and operate aircraft is integral to the liveware-hardware interface between human and machine, and as such deserves more attention than we often give it. A checklist’s exact use, benefits, weaknesses, contradictions, and even failings are often a source of opinion and comment. Nevertheless, understanding its design philosophy – and when it is being used ‘correctly’ or ‘incorrectly’ with respect to this – is central to our being able to make informed judgements about how we choose to use it in a practical context. If the operation requires us to adapt the use of the checklist or use it in a non-standard manner, then we should at least understand that we are doing so, and why.

Understanding the value of your checklist

It may be a relatively humble piece of on-board equipment, but a proper understanding of what your checklist brings to your performance, and what it offers you, should help you give it the respect it deserves!

Here are ten things your checklist does for you:

Recall and memory functions

1. Helps your recall in the process of configuring the aircraft.

2. Provides sequences for motor movements and eye fixations around the cockpit panels. 

3. Provides a standard foundation for verifying aircraft configuration that will protect you against any degradation in your psychological or physical condition.

4. Provides a sequential framework to meet internal and external cockpit operational requirements.

Crew co-ordination and Communication

5. Facilitates mutual supervision (cross checking) among crew members.

6. Enhances team situation awareness in configuring the aircraft by keeping all crew members “in the loop.”

7. Distributes crew workload by dictating the duties of each member thereby helping to achieve optimum crew coordination.

Supervision, Standards & Standardisation

8. Allows for standardisation of crews across operations and fleets, promoting an improved baseline standard.

9. Serves as a tool for supervision, quality control management and regulatory oversight of the crew in the process of configuring the aircraft for flight.

10. Helps to overcome problems associated with a steep or shallow crew authority gradient when called upon as a well-established SOP.

There is also an eleventh function that depends upon you having a sound understanding of the above. It turns out that research has shown that when crews have a true understanding of the importance of the checklist, the checklist itself promotes a positive attitude to the procedures it contains, meaning greater compliance and adherence to SOP.

When crews have a true understanding of the importance of the checklist, then the checklist itself promotes a positive attitude.

Proper checklist philosophy and usage

There are two different methods that can be used to conduct a checklist correctly. Some operators will combine the two depending upon the nature of the task, but which philosophy is in use should be clear, and how each is to be used should be set down as a standard procedure.

Challenge-Verification-Response.

Under this philosophy, the crew use their memory and other techniques to configure the aircraft.

Then, once the initial configuration is complete, the checklist is used to verify that several critical items have been correctly accomplished. 

The process of conducting this checklist method is as follows: 

  • Pilot Flying calls for the checklist.
  • Pilot Monitoring calls the checklist item from the list.
  • Both PF & PM together verify that the item is set properly.
  • PF then calls the verified status of the item, and so on. 

Note that under “challenge-verification-response,” the checklist is a backup for the initial configuration of the aircraft, providing redundancy as a ‘second check’.

The Do-List. 

This method can be better termed “call-do-response.” In this method, the checklist is used to direct the pilot in configuring the aircraft by following the list through step-by-step. 

The process of conducting this method is as follows: 

  • Pilot Flying calls for the checklist.
  • Pilot Monitoring calls for an item. 
  • Pilot Flying positions or sets the item to the correct position, and then announces the new status of the item.
  • Once the item is accomplished, the next item is read and so on.

The configuration redundancy employed in the challenge-verification-response method is therefore lost, as the aircraft is configured only once, as part of the check itself. It is worth emphasising that in both methods the task of verifying the status of each item is the responsibility of both pilots (also conceived that way for reasons of redundancy).
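
The contrast between the two flows can be sketched in a few lines of code. This is a toy model only – the items, responses, and calls are invented, and ‘verify’ stands in for both pilots visually confirming an item:

    # Toy contrast of the two checklist philosophies described above.
    # Items and responses are invented examples.
    CHECKLIST = [("Landing gear", "Down, three greens"), ("Harnesses", "Secure")]

    def challenge_verification_response(aircraft: dict) -> None:
        """Aircraft already configured from memory; the list is a second check."""
        for item, required in CHECKLIST:
            print(f"PM: {item}?")                          # challenge
            if aircraft[item] == required:                 # both pilots verify
                print(f"PF: {aircraft[item]}.")            # response = actual status
            else:
                print(f"PF: {item} not set correctly - resetting.")
        print("PM: Checklist complete.")                   # completion call

    def do_list(aircraft: dict) -> None:
        """The list itself drives configuration, step by step - no second check."""
        for item, required in CHECKLIST:
            print(f"PM: {item}.")                          # call
            aircraft[item] = required                      # do
            print(f"PF: {item} {required}.")               # response
        print("PM: Checklist complete.")

    challenge_verification_response(
        {"Landing gear": "Down, three greens", "Harnesses": "Secure"})

The structural point is visible even in the toy: in the first method the configuration happens twice (memory, then check), in the second only once.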

Common Errors in Checklist Usage

Verification and redundancy

Probably the most common deviation from this checklist philosophy is the failure to fully involve both pilots in the verification process. In both challenge-verification-response and do-list formats, the verification of the status of each item on the list should be confirmed visually by more than one crew member to ensure redundancy.

Another common deviation is the Pilot Monitoring carrying out both the challenge and the response parts of the list. This tends to happen in situations of high workload for the Pilot Flying, or when workload – particularly communication – on the part of the whole crew gets in the way of a suitable time window for running the list before the point at which the aircraft needs to be configured.

There might be times when this situation cannot be avoided, but it is important to understand the effect of running the checklist without the interaction of the rest of the crew. The mutual redundancy that is built into the procedure is lost. Furthermore, the process becomes vulnerable to all the elements that the checklist is designed to prevent:

  1. With no one monitoring the process, the rigour and quality of execution decline.
  2. The person running the list is more susceptible to skipping items or skimming down the list, depending on time pressure, after the quick initial configuration of the aircraft.
  3. As it is no longer a formal crew event, it becomes more vulnerable to distractions such as ATC communications, the outside scan, an engine start in progress, etc.
  4. Running the list against a previous configuration check means that it is once again based on memory, and not on a step-by-step challenge-and-response.
  5. The rest of the crew lose situation awareness of the configuration of the aircraft.
  6. The rest of the crew are unable to verify that the list has been run and completed properly.

‘Chunking’ the list

Chunking is a ‘short-cut’ to proper usage of the checklist that can develop over time and become normalised. This is when several challenge items are called together in one “chunk” by the person running the list, and the other crew member replies in turn with a series of chunked responses.

This short-cut undermines the concept behind the step-by-step challenge-and-response procedure and once again introduces a reliance on the pilot’s short-term and long-term memory as to the order and completion of the checklist, which, in fact, is exactly what the checklist is supposed to prevent.

Not calling completion

The completion call is a redundant action: in most cases crew members already know that the checklist is completed. However, the call is the only reliable feedback available to confirm it. Some operators write the completion call as the last item in each task-checklist, making the call itself the final checklist item. Others choose not to list the call in the checklist, but still require the pilots to make it.

Distraction is a common cause of poor aircraft configuration and checklist discipline. In a high workload or time-pressured flight, as can be the case in many helicopter tasks, this becomes a high risk area for error. Calling completion of the list gives the person running it an opportunity to confirm that all items have actually been completed. It also flags to the rest of the crew that the aircraft has been configured for that phase of flight, and that the discrete task of running the checklist is now over, allowing attention to move on to other things.

Ambiguous responses

Many checklists have variable responses on the list to allow for different aircraft configurations. For example, the words “set,” “check,” “completed,” etc. indicate that an item is accomplished. However, these words should ideally not be used to respond to the challenge. Instead, the response should always portray the actual status, position, or the value of the item (switches, levers, lights, fuel quantities, etc.).

The checklist in the rotary wing cockpit.

Like so many other aspects of rotary wing aviation, the design and philosophy of checklist use is inherited from airline operations. This is all very well, but it is useful to acknowledge that some important differences exist in the way many helicopter operations are crewed and function, compared with line flying in the commercial air transport sector.

Rotary wing flying can present challenges to the proper use of the checklist. Acknowledging these pressures is a good start in mitigating their potential misuse.

For example:

Crew configuration

  • Single pilot ops are more prevalent
  • Single pilot ops can include non-pilot crew members
  • These can include non-pilot front seat crew members 
  • Multiple aeronautically trained crew allow for the option of non-pilot checklist reading/checking/monitoring

If a helicopter crew is made up of more than just two pilots, or of one pilot supported by other crew members, a wider range of options is available as to how we can, could, or should interact with checklists. In my past military flying it was not uncommon to make use of non-front-seat crew to assist with the reading of emergency and even normal checklists. Back in the days when airliners flew with Flight Engineers, things also worked this way. In a high workload situation it was seen as a logical use of all the available (human) resources in the aircraft. Or just good CRM. Obviously this depends on a well trained and drilled crew member being able to understand and read the checklist effectively.

Flight profiles

  • In rotary wing flying typical flight sectors are shorter, less uniform, and often interrupted by other, non-routine or unexpected tasks.
  • Traffic patterns, landing and approach briefs and recces can be more time constrained.
  • A typical flight can often contain many high workload periods compressed together or unevenly distributed in an unpredictable pattern. 
  • The urgency of the mission can sometimes prevent proactive workload management distributing tasks more evenly, or allowing extra time.

One hard-working helicopter I have flown had logged 20,000 flying hours but amassed over 60,000 landings. That amounts to more than three landings for every flight hour, against the more likely figure for an airline jet of three flight hours for every landing. Those statistics suggest some helicopter crews might have up to nine times less time to interact with the same set of checklists than an airline crew has during a flight.
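
Made explicit, and taking those round figures at face value, the arithmetic behind the ‘nine times’ is simply:

    \[
    \text{helicopter: } \frac{60\,000\ \text{landings}}{20\,000\ \text{h}} = 3\ \text{landings per hour};
    \qquad
    \text{airline jet: } \frac{1\ \text{landing}}{3\ \text{h}} \approx 0.33\ \text{landings per hour};
    \qquad
    \frac{3}{1/3} = 9.
    \]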

I know the above example is overly simplistic; however, it is illustrative of a problem inherent to helicopter operations. I have listened to complaints from one operation I have worked with that the (original, un-tailored) normal checklists that accompanied the entry into service of their new helicopter were so extensive that they physically didn’t have time between each phase of flight to run the required lists and configure the aircraft, often arriving on scene for training or an incident before having worked through all the preceding lists. This kind of workload management problem is not uncommon in the sort of short hops often encountered in HEMS or SAR type operations, and can create the conditions for non-compliance with SOPs, under the category of “violation for organisational gain,” or not following the checklist because it doesn’t work “in the real world.”

Tailoring checklists to rotary wing tasks

Do these differences require us to interact with checklists differently to an airline crew? Fundamentally, the answer is no. However, we do have to account for the human desire to find the fastest, quickest, or most efficient way to work. In a questionnaire studying the reasons why we don’t always follow standard operating procedures, over 40% of people agreed that practicality – “they are unworkable in practice”; “they make it more difficult to do the work”; “they are too time consuming”; “they are unnecessarily restrictive” – was a key factor. A similar number identified optimisation as the main reason: “people usually find a better way of doing the job” or “it does not describe the best way to carry out the work.” A checklist design and a checklist philosophy that does not meet the challenges of the way we operate our aircraft will soon become a source of violation of SOP, and of normalisation of deviance by crews.

Should we then be considering the nuances of checklist design in the rotary wing world in more depth? Probably.

It is the operator’s responsibility to create and adapt type-specific checklists to the nature of their operations. However, this is not always a straightforward job. The versatility of the helicopter and its role flexibility can create the problem of how to integrate a variety of roles or tasks into a single checklist or set of lists – or of how to maintain a concise selection of checklists. A rotary wing operation that includes a variety of roles, from load-lifting and winching to fire-fighting and HEMS, onshore and offshore, might have to balance the risk of proliferating ever more tailored lists against the dangers of paring down to an overly generic one. Whether sufficient time and thought is put into the design and content of these depends upon the motivation, responsiveness, and culture of individual companies.

Whatever the challenges that using a checklist might present to us, we should still think of it as a trusted friend. The volume and complexity of what we know, and of how we are expected to perform, has exceeded our ability as individuals to work correctly, safely, and reliably unaided. A checklist is a cognitive net which catches our built-in mental flaws and shares the responsibility for errors. Furthermore, its use has been shown to establish a higher baseline standard of performance in all of us, so it will also make you a better aviator!

Aeronautical Decision-Making and Loss Aversion

What Nobel Prize Winner Daniel Kahneman can teach us about why taking the hardest decisions of them all is so hard.

In his book Thinking, Fast and Slow, Nobel Prize-winning economist and thinker Daniel Kahneman introduces us to many fascinating insights into the human decision-making process. Loss Aversion is one of these. He begins by explaining how a simple experiment provides evidence of how strongly this cognitive bias affects the decisions we take:

Loss Aversion Theory

[Extracted from Thinking, Fast and Slow by Daniel Kahneman, pp. 283–4.]

Many of the options we face in life are “mixed”: there is a risk of loss and an opportunity for gain, and we must decide whether to accept the gamble or reject it. Investors who evaluate a start-up, lawyers who wonder whether to file a lawsuit, wartime generals who consider an offensive, and politicians who must decide whether to run for office, all face the possibilities of victory or defeat. 

As an example of this decision-making in practice, examine your reaction to the next question:

You are offered a gamble on the toss of a coin. 

If the coin shows tails, you lose $100.

If the coin shows heads, you win $150.

Is this gamble attractive? Would you accept it?

To make this choice you must balance the psychological benefit of getting $150 against the psychological cost of losing $100. How do you feel about it? Although the expected value of the gamble is obviously positive, because you stand to gain more than you can lose, you probably dislike it – most people do. 

For most people the fear of losing $100 is more intense than the hope of gaining $150. We concluded from many such observations that “losses loom larger than gains” and that people are loss averse.

You can measure your aversion to losses by asking yourself a question: What is the smallest gain that I need to balance an equal chance to lose $100? For many people, the answer is about $200, twice as much as the loss. The “loss aversion ratio” has been estimated in several experiments and is usually in the range of 1.5 to 2.5.

Loss Aversion in Operational Decision-Making

This is an astonishing ratio when you think about it, particularly when we translate the implications of a simple financial transaction or bet into the realm of operational decision-making.
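
To see how the numbers play out, here is a minimal sketch in Python of the simplest linear form of a loss-averse value function. The coefficient of 2 is an assumption drawn from the 1.5 to 2.5 range quoted above, not a figure from Kahneman’s text:

    LAMBDA = 2.0  # assumed loss-aversion coefficient (quoted range: 1.5-2.5)

    def felt_value(outcome):
        """Losses are felt roughly LAMBDA times as strongly as equal gains."""
        return outcome if outcome >= 0 else LAMBDA * outcome

    # Kahneman's coin toss: 50% chance of +$150, 50% chance of -$100.
    monetary_ev = 0.5 * 150 + 0.5 * (-100)                             # +25.0
    psychological_ev = 0.5 * felt_value(150) + 0.5 * felt_value(-100)  # -25.0
    print(monetary_ev, psychological_ev)

A gamble with a positive monetary expectation comes out negative once losses are double-weighted, which is exactly why most people turn it down.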

Let’s take the example of a rescue scenario in which a helicopter crew find themselves faced with a situation in which they must winch four survivors from a stormy sea. They estimate they only have the performance and endurance to winch up two of the four survivors if they are to carry out the task safely, and within the limitations of the aircraft and established fuel minima. Despite this, they calculate that it might be possible that the extra two survivors could be brought on board at the expense of ‘stretching’ their established power, weight, and endurance limits, and they mull over this possibility.

They are faced with a decision. Let’s superimpose the scenario of this decision-making process onto the example of Kahneman’s experiment above. There will of course be many other factors to this decision, but we won’t introduce them here so as not to complicate the equation.

The stakes in Kahneman’s example above are no longer represented in financial terms ($100). Instead, the risk associated with the $100 stake is represented by the risk of damage to aircraft, equipment, crew, property, or third parties every time we fly. At worst, this stake could even represent the cost of life. You might think this stake to be a high one – and you would be correct – but (unlike Kahneman’s example which offers 50/50 odds) in risk terms it is offset by the fact that we generally consider the likelihood of a poor outcome to be low.

The ‘gain’ for this stake, or our return on this investment is represented by our professional status, fulfilment, pride in our role, sense of self-worth, and (especially in the case of risk-to-life missions) a feeling of making a contribution to society, or saving the life or limb of another human being. Taken together these are no small gains for the human psyche.

On the other hand, failure to carry out the rescue successfully would represent the ‘loss’ that is described by this scenario within the context of loss aversion theory.

If the gains are high, Kahneman’s evidence of our loss aversion tells us that at a ratio of around 2:1 we are going to be even more reluctant to surrender them.

It is approximately twice as hard for us to take a decision to abort a mission, turn around, or land, as it is for us to keep going with the task or the plan for which we had cognitively primed ourselves.

The Cognitive Bias Responsible for Press-on-itis

This theory explains a lot about the power of that well-known ailment suffered by the unsuspecting aviator: “Press-on-itis”. It is approximately twice as hard for us to take a decision to abort a mission, turn around, or land, as it is for us to keep going with the task or the plan for which we had cognitively primed ourselves. This will continue to be the case even if we have recognised a changing risk profile during the course of a flight.

The effect of this phenomenon is compounded by a further finding of Kahneman’s work on Loss Aversion. He also provides evidence that humans are guided by the immediate emotional impact of gains and losses, rather than by the long-term prospects of whatever the outcome may bring. This further increases the odds of us failing to take that difficult decision. The emotional impact of a perceived ‘failure’ to achieve a task (the loss) is immediate and weighs heavily on the mind, whereas the possibility of a bad outcome still seems remote and improbable.

From this we are able to make a couple of simple observations:

Firstly, any decision that involves a perceived failure to achieve a task, a loss of face, or a hit to professional credibility or self-respect will very likely be affected by this powerful bias. Secondly, it is probably fair to say that if a decision of this nature hangs in the balance, then the chances are that the correct decision is the option that makes you feel least comfortable.

Crew Resource Management Training from Classroom to Cockpit. Are we missing a link?

When preparing for a trip to the simulator most of us start by reaching for the emergency and abnormal checklist to refresh ourselves on the inevitable bevy of aircraft malfunctions that we know will be coming our way in due course. Who hasn’t come across that sim instructor who feels it would be a dereliction of duty if they failed to cram every malfunction off this list into a two-hour session? The conscientious victim will of course have sat down and brushed up on their technical knowledge of the aircraft, they will have re-committed to memory the key numbers and figures, and run the critical flight profiles through their heads in an effort to cognitively prime their coming performance in the box. They almost certainly won’t have shown the same commitment to preparing their non-technical skills, however.

The imbalance of technical and non-technical skills in practical flight training

We have plenty of statistical evidence from accident and incident reports that Human Factors failings are ever present, and lessons are there to be drawn in almost every case. In contrast, the kind of serious technical malfunctions that we focus on during most simulator training are not only ever more rare, but they are usually only a part of the picture in any accident or serious incident. And as we well know from our modules on Threat and Error Management, these accidents and incidents are only the tip of the iceberg.

So why is so much of our valuable simulator training time given over to the tip of as many metaphorical icebergs as our instructor can place in our path during our voyage through the simulator? The bits of the iceberg above the waterline represent known knowns. These are the emergencies listed in our checklists. We prepare for them because we know they are there, and in the case of the simulator, we know they are coming too. They have immediate and subsequent actions, they have warnings, and cautions, and notes. They are therefore the easiest bits of the icebergs to avoid.

But Threat and Error Management teaches us that we should be focusing on the great hidden mass below the waterline. That is where the real danger lies. These are the Human Factors. Unlike the critical technical failure that might rear its ugly head for the unfortunate amongst us once or twice during a whole career, we all make HF related mistakes every time we fly. There is always some aspect of our non-technical skills that could be improved or worked on. There will often be lessons to be learned. And yes, sometimes, our HF mistakes can even end in tragedy. The chances of this are far higher than for the single engine failure that we practice multiple times each time we visit the simulator, and many times higher than for almost any other aircraft malfunction that we dedicate time to.

When it comes to Non-Technical Skills and CRM there is a sense that they have never been fully integrated into many training and checking regimes, and can remain out on a limb in some training departments and ATOs. On instructor competencies and assessment, EASA clearly states that training should be both theoretical and practical, and that the practical elements should include the development of specific instructor skills, particularly in the areas of teaching and assessing threat and error management and CRM. (EASA AMC.FCL.920 (a)) However, it is not difficult to provide evidence that there is still an imbalance between the technical and non-technical sides of training, and we have some way to go yet to redress it.

The role of the Line and Simulator Instructor in CRM 

CRM training was first introduced as a ground training course, and the regulatory (and therefore operational) focus of CRM on classroom theory that has emerged and developed as a result has meant that teaching the practical application of CRM in the air has perhaps not received as much attention as it warrants. This integration has long been a problem. Even the UK Civil Aviation Authority acknowledge, while discussing the introduction of CRM, that training received “mixed reviews from pilots, one problem being a lack of direct application and integration of CRM to the flight deck (pilots themselves were left to work out how to integrate it into their job roles).” (CAP 737, p.11)

A lack of direct application and the integration of CRM to the flight deck has long been perceived to be a problem.

The practical application of CRM classroom theory is really the realm of the flight instructor cadre. The Flight Crew Human Factors Handbook makes this critical point, stressing that, “Good use of CRM skills relies on effective training and assessment by practitioners (flying instructors, examiners, etc.)” 

The practical application of CRM classroom theory is really the realm of the flight instructor cadre.

Instructors are endowed with CRM Trainer privileges as part of their initial qualification and are expected to put these to use from the outset. The UK Civil Aviation Authority states that, “the role of the instructor is to develop the crew in their ability to both fly and operate the aircraft safely…from a human factors perspective the crew will need instruction in developing dealing strategies for threats and errors during both normal operations and emergency handling.” (CAP 737, p.170) The knowledge on which this privilege is based is supposedly imparted during their training as flight instructors, and then revalidated alongside their TRI/TRE/SFI qualification. In reality, the breadth and depth with which the teaching of non-technical flying skills is touched upon, during a process that inevitably focuses on technical knowledge and procedures, is questionable.

Training in the teaching and assessing of CRM skills for TRI/TREs, particularly continuation training, is an area of weakness. A lack of detail in regulation allows operators to interpret the requirement to update and revise this knowledge as loosely as they wish. This results in an assumption, in many cases, that CRM understanding is practiced and maintained by default in the simple act of running a simulator session or carrying out line training. In CAP 737, hidden within the pages of its 25 chapters, its authors make passing reference to an important truth: they note that, “In terms of operational flight safety, instructors hold one of the most influential positions in the industry.” (p.169) This is surely so, and it highlights the point that if non-technical skills are at least as critical to operational flight safety as their technical brethren, then raising the bar in this area is a clear opportunity to improve safety.

In terms of operational flight safety, instructors hold one of the most influential positions in the industry.

It is easier for flight instructors to concentrate on the technical side of training. It’s definable, it’s measurable, and it is more easily assessed and critiqued. On the other hand, the creation of effective CRM practical scenarios to train in the simulator takes a lot of thought, effort, and preparation. Unlike the training of technical malfunctions, they can’t be used again and again without losing their impact and value. They must be continually updated, refreshed, and adapted to remain valid. Creating and directing effective CRM scenarios is often more complex than working through a series of handling exercises or malfunctions, and extends into being able to understand a philosophy and transmit a set of values and behaviours. It requires effective training of the instructors in turn. In this area, CAP 737 notes, “the instructor has the opportunity to reduce accident and incident numbers caused by human factors, if trained in the use of the appropriate tools.” However, it is often the case that the level of training that would provide those tools is either not required or not sufficient. The CAP goes on to admit that, “the regulation is clear on the expectation and abilities required of the instructor, but it offers little in the way of guidance.”

The Regulation

When it comes to CRM, there is some detail in regulation on what should be taught, but not much on how it should be taught. Training of non-technical skills and human factors throughout a company is the responsibility of the operator alone. There is no accreditation of instructors by the Authority in this area, little guidance as to what constitutes appropriate training and checking, and often little capacity for oversight. The most in-depth requirements in this field are laid down for CRM Ground Trainers, for whom the Acceptable Means of Compliance are explicit in what they must achieve to maintain their knowledge, competency, and currency in the discipline. But when it comes to the guardians and teachers of the practical application of CRM, one of the most influential groups in the industry in contributing to safe flight practices, the only requirements demanded by EASA are that “All Instructors and Examiners shall be suitably qualified to integrate elements of CRM,” and “All instructors are suitably trained/checked in CRM and receive on-going development.” (CAP 1607, CRM Standards Guide) With no further guidance on what that should mean, anything more specific is left to the judgement and conscientiousness of the operator.

The instructor cadre shouldn’t be blamed for the emphasis that the system places on the technical over the non-technical side of flying. Type-rating courses, ground school, technical knowledge examinations, standards checks, compliance checks, standardisation, the requirements of recurrent training: all are set up to prescribe exactly what and how the technical stuff is taught, recorded, checked, and audited. The fact is that when it comes to the non-technical side of teaching flying, there is almost nothing specified as to the content and standards that might be expected of instructors, and little included in any training manuals that they could turn to for guidance should they be so inclined. The body of information in CAP 737 is perhaps a notable exception to this. In its introduction on Non-Technical Skills appears the admission that, “It is clear that to be most effective, such skills must be integrated into the job role, and this integration is something that CRM has traditionally struggled with.” (p.11)

Despite all this, the level of interest in operator CRM taken by aviation authorities is growing, and it is likely that the crucial role played by flight instructors in integrating the practical elements into crew training will come into greater focus as that interest grows. Raising the standard of teaching non-technical skills as part of flight instruction will have a trickle-down effect across flight operations. Improve the level of knowledge of, and commitment to, the philosophy and principles of multi-crew co-operation and CRM amongst the trainers, and above all give them the tools to share practical techniques and advice, and the benefits will be much more widespread. Students don’t just remember what you teach; they remember what you are.

Don’t neglect your CRM: The value of telling stories

Last week was a CRM week. I was immersed in a Crew Resource Management course for aspiring facilitators with three full days dedicated to talking, listening, and learning about flying, human factors, and facilitation. Learning from the experiences of others is a lot of what human factors training is about. You don’t do that without a forum to talk and listen. But as I found out last week, it doesn’t have to be a classroom.

As is so often the case, it was beyond the course that the most enlightening, goose-bump inducing, and sometimes downright terrifying stories were shared. Over lunch, or after hours with a beer in hand, gathered around the table for dinner, we picked up on themes from the classroom with personal tales. We don’t often have time to share these moments anymore. When they do come up, the opportunities are invaluable, sometimes priceless.

I have just tucked away two or three anecdotes from colleagues which will go with me for the rest of my career. For as long as I fly. These stories held the room. You could hear a pin drop. Every single listener was perched on the edge of their seat, straining not to miss a single detail. Stories of miraculous escapes in the air from almost certain disaster – usually at the hands of somebody else – and almost always ending in an unreasonably large serving of outstanding good luck. “I’ll never forget that day”, said one. “It is now my birthday,” he said, quoting the date from memory. “I was born again that day. I was given a second chance to live”. Strong stuff.

At the heart of all of these stories were not technical but human failings. In each case, the tellers were fully aware of the extreme danger into which they were being led, but did not have the tools at the time to do anything to prevent it. 

The telling of stories is such a fundamental part of learning about our failings. We all have stories. Many of us have lots. Most of us are able to identify with the stories of others. They all serve as a reminder that – as so many accident investigations demonstrate – it is unlikely to be your hands-and-feet technical ability that will keep you out of trouble, or save the day when things start to go badly wrong in the air. What will keep you safe, and in some cases even keep you alive, is your understanding of the human element. Don’t neglect your CRM. Your CRM skills and knowledge of human factors deserve as much or more attention than the rest of your flying training put together. It’s not just another currency item. Statistically, it is what is most likely to keep you out of trouble. It might, one day, just be the thing that keeps you alive.

Flying SAR in the sunshine: What’s not to like?!

From the Atlantic to the Mediterranean: The Weather. Learning to adjust. And just learning.

On moving to Valencia last year to try my hand at flying a Search and Rescue helicopter in Spain, the predominantly anti-cyclonic picture of Spain’s Mediterranean-facing east coast presented an entirely new meteorological situation to me. “SAR in the sunshine, what’s not to like,” was one of the tongue-in-cheek comments that sent me on my way from the UK. And it’s true; the days where CAVOK doesn’t make up most of the TAF seem few and far between.

But, for a pilot who associated all the ‘gotchas’ of meteorology with the ‘bad’ weather conditions of the UK – changeable weather, low cloud, heavy precipitation, strong winds, and high seas – I have had to accept a change of mindset. I have come to learn that ready sunshine and slack pressure gradients can still present their own challenges to an unsuspecting aviator who might be more accustomed to – and even comfortable with – Instrument Meteorological Conditions.

One of the near constants of my flying career – the wind – has become less dependable and more fickle. The Atlantic-facing shoreline of the UK is rarely short of a pressure gradient, and if you ask most people what the prevailing wind direction is in Britain they will surely tell you ‘south-west’. If you discount topographic effects as it crosses the country from west to east, this is certainly the case. The dominance of our prevailing south-westerly is evident from the fact that even as aviators we don’t give our different winds names to distinguish them from each other.

On Spain’s eastern coast, along what is known as the Costa Blanca and the Costa Brava, the prevailing winds are more changeable. Each has a name, and even the general public are familiar with these and the kind of weather and temperature changes that each brings. They are known as the Mistral and the Tramontana from the north, the Levante from the east, and the Poniente from the west. Strangely enough, however, for a part of the world that is seemingly so wind-conscious, one of the things that struck me almost as soon as I started to fly here was how little of it there seems to be.

Winds of the Mediterranean

Of course there is wind, but not only does it change sporadically with the pressure patterns, it also changes direction and comes and goes throughout the day. With strong solar heating on a steep, mountainous, sun-facing coastline from the Costa Blanca northwards, and with no influence from Africa, the sea breeze is often by far the most significant factor. On parts of this coast, almost regardless of the morning wind, the sea breeze often sets in around early to mid-afternoon from a south-easterly direction. Sometimes it lasts for a few hours and reaches 20-25 knots, extending to some 15-20 miles offshore. At other times it dies out fairly quickly and will scarcely get to 10 knots. In terms of local time it seems to come in fairly late in the day, between 1600 and 1700 hours, but this is because it is in reality only 1400 or 1500 in solar time, so still only two hours or so after solar noon.

Valencia sits in a relatively sheltered position, well to the south of the strongest effects of the Mistral and Tramontana that flow down from the south of France, and well north of the funnelling effects of the Straits of Gibraltar which give the wind its strength on the Costa del Sol and in places like Tarifa, the famous kite-surfing mecca. By shortly after dark it is not unusual for the wind to disappear almost altogether, and for operational reasons, this is when SAR training usually takes place.

One of the more challenging elements of adjusting to these conditions for a pilot brought up on the Atlantic coast is the problem of wind finding, which requires greater flexibility and a more questioning mindset in the set-up of your approach, particularly in the later stages, where the influence of the wind in low wind conditions starts to become more evident. It is not unusual at wind speeds below 5 knots for the wind direction to change significantly within a short space of time and over relatively short distances.

The skill lies in finding an into-wind position by looking for all available cues at the bottom of an approach over the sea, including the position of the downwash, judging whether the aircraft has come to the hover in a slightly left or right wing-low position, and the resulting need to introduce left or right pedal to correct until it settles comfortably. Once established in the hover, slack wind conditions present the twin evils of higher hover torque and a few tonnes of rotor wash hitting the sea directly below the aircraft, which could easily drown a person in the water, and which complicates the task of hoisting to small and even medium-sized vessels.

Hoisting in safe single engine conditions, even for training, is unusual until the later stages of a flight, even in the abundantly powered AW139, and expectations as to what constitutes a flyaway condition in the event of engine failure are tweaked to figures up to 10% higher than those being used by colleagues enjoying the colder and windier conditions on the North Sea. Acceptable risk in this area has to be adjusted to take account of the art of the possible.

Rotor downwash effect on a flat sea.

Rescue techniques have also been developed to take into account the fierce downwash that can hamper hoisting operations in still wind conditions. Hover heights are stepped up, and a hi-line is almost always used to guide persons and equipment on the cable to a deck or a cliff, allowing the helicopter to remain in a position stood off from the overhead to alleviate the worst effects of the punishing downdraft. Even so, the position of the ‘donut’ of air pushing down and out from the helicopter has the effect of turning and drifting smaller vessels in unpredictable ways, which can complicate the job of the pilot in keeping station and maintaining effective visual references. In situations where it would be too difficult or risky to con the aircraft directly overhead the target, it is the job of the rescue swimmer to do the hard work of fighting their way through the wind and waves to reach the victim or the target vessel under their own power. They have my utmost respect for what they do to earn their keep.

Not everybody thinks of mountains when they think of Spain, but it is in fact one of the most mountainous countries in Europe. Neither would most foreigners identify the Mediterranean coast as a part of Spain which is particularly mountainous. I thought I knew better than that, well acquainted as I was with the trip up and down the coast between the cities of Alicante and Valencia. But it wasn’t until I started flying here that I realised the highest obstacle between the two cities reaches a surprising 5304 ft above mean sea level, requiring a safety altitude of 7500+ feet. That is a figure which seems very high up for a maritime helicopter pilot and is not far off the ceiling of the aircraft in some conditions. Scafell Pike, England’s highest peak, comes up over 2000 feet short. In summer, when temperatures often climb into the high thirties, the density altitude climbs too and can become a factor if operating at high all-up mass.

Terrain rises steeply from the sea on Spain’s eastern coast
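
To put a rough number on the density altitude point above, here is a minimal sketch using the common rule-of-thumb approximation of about 120 feet of density altitude per degree Celsius above ISA (the temperatures are illustrative, not measured values):

    def isa_temp_c(pressure_alt_ft):
        """ISA temperature: +15 C at sea level, falling ~2 C per 1000 ft."""
        return 15.0 - 2.0 * (pressure_alt_ft / 1000.0)

    def density_altitude_ft(pressure_alt_ft, oat_c):
        """Rule of thumb: DA is roughly PA + 120 ft per degree C above ISA."""
        return pressure_alt_ft + 120.0 * (oat_c - isa_temp_c(pressure_alt_ft))

    # Illustrative hot summer day at the 7500 ft safety altitude (ISA there: 0 C).
    print(density_altitude_ft(7500, 15))  # 9300.0 - the aircraft performs as if
                                          # it were at 9300 ft, not 7500 ft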

Even close to the coast there are significant local wind effects; many of them orographic, as well as the anabatic and katabatic flows associated with mountainous terrain. Flying along the coast on a strong wind day with prevailing westerlies, a well-extended line of fierce-looking ‘cat’s paws’ can be seen marking the water downwind of the steeply rising coastline. These can easily reach a mile offshore.

Downdrafts and the severe turbulence present in standing waves are features which occur here on a scale that you would be unlikely to encounter in all but the most severe areas of Britain’s more gently undulating topography. In 2019 a six and a half tonne AW139 SAR helicopter was turned on its side in flight here in Spain, and its windows knocked out, by what is likely to be judged a severe turbulence event – an illustration of just how powerful some of these forces can be. The aircraft and crew survived an emergency landing and the accident is still under investigation.

On the occasions that the prevailing CAVOK in the TAF makes way for a cloud group, it is no longer the low level stratiform cloud that I was used to in West Cornwall. Instead, towering cumuli and cumulonimbus tend to be the things to look out for. With strong heating, vigorous convection is always possible, and the warm water of the Mediterranean will ensure that there is always plenty of latent heat. Thunderstorms are the result. These can be massive and dangerous, and if they pop up in the wrong place at the wrong time could even result in a no-go decision for a SAR mission. My previous belief that the weather radar – in anything other than ground-mapping mode below 500 feet – was for airline pilots has given way to the realisation that I have a new tool in the cockpit that is well worth paying attention to.

I have been a helicopter pilot long enough now to know that when it comes to flying there is always more to learn. I’m also far enough through life to understand the infinite value of new experiences. Moving to fly in Spain has illustrated this as never before, and has already taught me many new things in the cockpit and beyond. Truth be told, when I first looked ahead at the challenges it might bring I think I was unable to project beyond the struggles I might have with language, communication, and culture. I knew it would take me well beyond my comfort zone, and that it did. Sometimes it still does. And looking ahead it certainly will still. But knowing that after such a short time I can look back in the knowledge that I have grown as an aviator, and thinking about everything that I wouldn’t know if I hadn’t taken the leap, already makes it all worthwhile. SAR in the sunshine: what’s not to like?!

*The content and the views expressed in this article are all my own and do not represent those of my employer or any related parties.

Is Human Factors in aviation at a crossroads?

Now seems like a good time to look beyond the dark prism of the current COVID-induced crisis in aviation to consider a future beyond the mire.

The Chartered Institute of Ergonomics and Human Factors (CIEHF) recently published a White Paper called “The Human Dimension in Tomorrow’s Aviation System”. It’s made up of a series of accessible short articles written by Human Factors experts from across aviation and related fields, and provides some fascinating insights into how the aviation sector might develop and change through the 2020s and beyond. It’s also available to listen to as a webinar.

Most of these were penned before the seismic impact of COVID-19 knocked the industry sideways. And despite acknowledging current difficulties as, “perhaps the biggest challenge in the history of aviation,” the long term view of this paper envisages a gradual return to pre-pandemic levels of demand and progress. It is entirely possible that the fallout from whatever social, economic, and commercial changes the pandemic might provoke will actually accelerate many of the structural, technological, and systemic developments that these articles predict.

The White Paper explores a really broad range of themes, but two of its key arguments caught my attention:

The first is the idea that aviation is currently on an ‘uncharted and unprecedented journey’ set to change dramatically during the next two to three decades. This change is already happening all around us. What marks this change out from what has gone before is that up until recently change within aviation has been driven from within the industry itself, with technological and other innovations coming from aircraft manufacturers and operators. As a consequence, the industry set its own rhythm and enjoyed a measured pace of change. This is no longer the case, and many of the catalysts for change in the industry are now coming from outside the world of traditional aviation giants, driven by new business entrants with independently produced innovations. This is rapidly accelerating the pace of change and leaving the regulators struggling to keep up.

The second idea is that the role of Human Factors in aviation is also at a crossroads. It too must accelerate its development and capability to support the coming changes in aviation. The authors argue that the traditional approach to Human Factors has been a piecemeal focus on responding to single-issue questions in safety or ergonomics, often in response to manufacturer demand or safety reviews. Acknowledging that aviation is now developing into a ‘system of systems’ requires a deeper partnership, where the human and the technology are considered hand in hand, as interdependent. HF input will have to broaden from the micro to the macro, to itself become a more encompassing ‘systems’ approach. And it will need to be integrated strategically to allow it to meet the unseen and unplanned challenges that arise from all the technological, social, and regulatory change that we are now experiencing.

Aviation is on an uncharted and unprecedented journey.

Chartered Institute of Ergonomics and Human Factors

The CIEHF concludes that “in both civil and military aviation there is a need for a more concerted effort to harness human factors … so that it can support the raft of innovations and their interactions – intended and otherwise – that will become aviation’s ‘new normal’ in this decade.”

If you’re interested in Human Factors in aviation, there’s something for everyone in this paper. It’s food for thought and I recommend at least dipping into it.

Can you learn to deal with the unexpected & unpredictable?

Cognitive Readiness in Search and Rescue operations: What is it? Do you have it? How do you get it?

There’s a problem with training to learn to deal with the unexpected: we simply don’t know in advance what the objectives of any training or instruction should be.

If you haven’t come across it already, Cognitive Readiness as a concept in Search and Rescue is a term that you might be about to become more familiar with. It was born of military studies into how different individuals are mentally equipped to handle fast-moving and ever-changing scenarios, and it is just starting to be applied to the world of Search and Rescue.

It describes a form of mental readiness, or preparation in a set of non-technical skills that demonstrate a capacity or predisposition for high performance in complex and unpredictable environments.

Nothing is as predictable in military operations as unpredictability. So the question of how we should prepare people, teams, and organisations for the unexpected (which is by definition something we cannot anticipate) seems a reasonable one for military academics and theorists to be asking. The leap in the application of this thinking from the military domain to the world of Search and Rescue in all its guises is logical too, given the related dynamic in SAR of real-world challenges, reactive problem-solving, unanticipated events, and unforeseen consequences.

The leap in the application of this thinking from the military domain to the world of SAR is logical

This leap has been driven by a study carried out by a team from the Applied Psychology and Human Factors Group of the University of Aberdeen. Their work on behalf of the helicopter industry is behind the development of a new Behavioural Marker System that helps to identify (and therefore develop) aircrews’ Non-Technical Skills. (For more information see HELINOTS.)

Part of this study is aimed specifically at identifying the skill-sets considered to be important to Search and Rescue. In a survey which made a direct comparison between the responses of offshore transport pilots and search and rescue pilots, subjects were asked to rate which Non-Technical Skills they perceived as the most important to their respective roles. Despite the fact that Cognitive Readiness was new to them as a concept, the group of SAR pilots all immediately recognised its importance in their role, collectively rating it as the most important skill of all from a list that also included Communication, Situational Awareness, Teamwork, Leadership, Task Management, and Decision-Making.

The study went on to identify Cognitive Readiness as a key difference between the missions of search and rescue, and offshore transport flying, concluding that it is a vital skill within search and rescue operations.

What is Cognitive Readiness?

It has been defined as,

“The mental preparation (including skills, knowledge, abilities, motivations, and personal dispositions) an individual needs to establish and sustain competent performance in complex and unpredictable environments.”

Fletcher, J.D. & Wind, A.P. (2014) Chapter 2: The Evolving Definition of Cognitive Readiness for Military Operations

More specifically, it could be said to be made up of:

  1. The ability to remove ambiguity and recognise patterns in uncertain, confusing, and chaotic situations.
  2. The ability to identify and prioritise problems and opportunities presented by these situations.
  3. The ability to devise effective responses to the problems or opportunities presented.
  4. The ability to go on to implement those responses.

In their 2020 study, Hamlet, Irving and McGregor (link) split the idea of Cognitive Readiness down into three categories:

Preparedness: the mental and physical preparation to enable pilots to respond to new tasks swiftly and effectively.

Problem-Solving: the assessment of a task from multiple perspectives in order to cope with taxing rescue conditions.

Adaptability: quick responses to a change in task focus, rescue conditions, and terrain.

Whether these skills are trainable is admittedly controversial. Certainly, they could include elements that are not trainable. However, the corollary of asserting that they cannot be trained or taught is that the most operators could hope for is that their recruiting and selection processes have been successful in choosing people with these innate qualities.

Do I have it? How do I get it?!

Whether or not you have these attributes, and what you can do to work on improving them, therefore seem like important questions for anyone who wants to be successful working in this domain. Academic studies on the topic have thrown up some disagreement as to exactly what components make up the idea of Cognitive Readiness. Suffice it to say, it is multi-faceted, and built upon many of the non-technical skills that will already be familiar to anybody working in aviation. These include, amongst others:

Situation Awareness

Problem Solving

Metacognition – the awareness and understanding of one’s own thought processes.

Decision-making

Adaptability

Creativity

Pattern Recognition – the basis for integrating the sensory information hitting the eyes, ears etc. in working memory with the contents and patterns stored in long-term memory.

Teamwork

Communication

Interpersonal Skills

Resilience

Critical thinking

Independent studies show – and aeronautical experience itself suggests – that there are ways of training many of these individually or collectively with respect to specific tasks or activities. However, the additional challenge of isolating Cognitive Readiness as a ‘trainable’ skillset is that, by its very nature, it has to be context free. If it were not, then we would no longer be operating in the land of the unanticipated and unexpected. Making training context free is similar in effect to the idea of studying Latin to develop generic learning skills that will help us understand German, solve mathematical problems, or perform other more specific activities.

A healthy scepticism of fresh industry buzzwords might lead some to suspect a bit of reinventing the wheel here. What separates Cognitive Readiness from previous error management/Non-Technical Skills acronyms such as TEM (Threat and Error Management) or CRM (Crew Resource Management)? Whilst there is still work to be done to allow Cognitive Readiness to be integrated into the sphere of error management and resource management as a whole, it is important to draw attention to the fact that the vast majority of literature and research work done worldwide in the sphere of human factors and CRM for aviation has been derived from the imperatives of the fixed wing and airline industries. Here, for the first time, are studies that are considering the specific challenges and peculiarities of Non-Technical Skills for rotary wing and Search and Rescue professionals as well.

The fact that we readily recognise what Cognitive Readiness describes – the challenges of unpredictability and adaptability demanded by our roles in SAR – suggests that there is value in having a word for it. After all, can you properly understand a concept without a word to express it?

Can a fatal accident provide proof that CRM training does save lives?


On July the 4th last year an AW139 departing from Big Grand Cay in the Bahamas at night hit the water shortly after take-off, killing all on board. The US National Transportation Safety Board (NTSB) has just released the transcript from the cockpit voice recorder carried on board. Perhaps the most shocking part of what, as ever, makes a sobering read is the observation by the monitoring pilot exactly ten seconds before the impact that what was happening to them mirrored the fate of G-LBAL, another fatal accident, also in the AW139, and in very similar circumstances just five years before.

These were the co-pilot’s final words: “There was a fatal accident in the UK and this is exactly what happened there.”

For those of you who haven’t read about or talked about the G-LBAL accident during a CRM session, it was another flight that lasted for less than a minute after take-off. On a foggy spring evening in 2014, the AW139 helicopter departed from a private site in the UK at night and with little cultural lighting. Although the commander had briefed a vertical departure, the helicopter pitched progressively nose-down until impacting the ground about thirty seconds after take-off, killing all four occupants. Spatial disorientation in a degraded visual environment, and confusing visual cues, formed the thrust of the accident report. Specifically, it identified Somatogravic Illusion as a possible culprit for the progressively abnormal nose-down attitude of the helicopter ‘feeling’ normal to the occupants. It seems likely that the investigation into the accident in the Bahamas last year will reach sadly similar conclusions.


The co-pilot’s final words again: “There was a fatal accident in the UK and this is exactly what happened there.” Perhaps he had read the accident report himself, or perhaps he had covered the accident as a CRM case study, as some of you may have.

Let’s assume for a moment that it was the latter. The fact that he recognised the relevance and significance of that knowledge at such a critical point raises an interesting question: Did the training serve its purpose?

The easy answer would be to say no. After all, that knowledge alone did not give him the tools to prevent the accident from happening. But accident sequences and causal factors are always complex and multi-dimensional, so perhaps it would be more useful to ask a different question: could the co-pilot’s recognition of the situation, and how it matched his knowledge of a previous accident, have saved the flight that night?

If things had turned out differently, and the co-pilot had gone on to intervene successfully and recover the aircraft, leading us to read about a serious incident instead of an accident, his comment on the CVR would perhaps have been the first recorded proof that CRM training does indeed save lives.

Tragically, that’s not how things turned out. So: does his comment suggest the value of accident report analysis and CRM training, or does it in fact underline its failings?

Are we successfully linking risks, environmental threats, and dangerous flight conditions in people’s minds through the training of case studies and the accidents and incidents of others, but nevertheless failing to provide the practical tools and techniques to allow them to react effectively? 

What was this pilot missing in his training that would have enabled him to react in an effective and timely manner? Or is it perhaps that the tools were there, but that the ability to apply them quickly under conditions of acute stress and confusion requires a more frequent and ingrained level of training?

Many pilots have been trained in intervention strategies and techniques. In some scenarios, where there is no immediate threat of an undesired aircraft state, these allow time for a building assertiveness, along the lines of Ask/Suggest/Direct/Take-over. For example, “Are you happy with the rate of descent?”… “I think we need to raise the nose” … “Pitch up” … “I have control”. Another such example is the acronym PACE, which stands for Probe-Alert-Challenge-Emergency, where an unsatisfactory or absent response from the flying pilot on the third occasion requires an immediate intervention and should be considered an emergency. 

Standardised callouts in response to flight path deviations should give pilots set phrases to fall back on which require little pause for thought as to how to express themselves: “Check pitch/bank angle/airspeed.” When and how they should be used is often set out in SOPs as well, which is intended to ease any reluctance to intervene for reasons of gradient or perceived offence to the ego of the other pilot.
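
For illustration only, the escalating intervention ladder described above can be sketched as a simple loop in Python. The names and structure here are mine, not drawn from any SOP or regulation:

    # Hypothetical sketch of an Ask/Suggest/Direct escalation: each
    # unsatisfactory (or absent) response moves one rung up the ladder;
    # exhausting the ladder is treated as requiring an immediate take-over.
    LADDER = [
        ("Ask",     "Are you happy with the rate of descent?"),
        ("Suggest", "I think we need to raise the nose."),
        ("Direct",  "Pitch up."),
    ]

    def intervene(response_is_satisfactory):
        """Escalate through the ladder; take over if no rung resolves it."""
        for level, callout in LADDER:
            print(f"{level}: {callout}")
            if response_is_satisfactory(callout):
                return "resolved"
        print("Take-over: I have control.")
        return "take-over"

    # Example: a pilot who never responds forces escalation to a take-over.
    intervene(lambda callout: False)

The PACE acronym maps onto the same shape: Probe, Alert, Challenge, and then Emergency if the third rung still produces no satisfactory response.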

Despite all this, accident and incident statistics from EASA’s Annual Safety Review show that the delightfully underwhelmingly-named “Aircraft Upset” is consistently amongst both the highest risk and highest frequency occurrences in both the on- and off-shore helicopter sectors. And so the question remains: despite a familiarity with the concepts, how many pilots have the level of training, or have worked with the phrases sufficiently frequently, to have the standardised call-outs and mnemonic lists on the tips of their tongues when the darkness is swallowing their aircraft into a black hole of disorientation?


Appended below is an abridged excerpt from the transcript pertaining to the 50-second flight from take-off to impact:

Take-off:

PF:  I’m going.


8 seconds into the take-off climb


PM: Watch your altitude

TAWS: “Sink”, “Don’t sink”

TAWS: “Warning terrain, warning terrain”

A/C: “150 feet”

TAWS: “Warning terrain, warning terrain”

PF: How high are you…

TAWS: “Warning terrain, warning terrain”

PF: 300 feet…

TAWS: “Warning terrain, warning terrain”

PM: We’re not.

PF: That’s what it says over here…

TAWS: “Warning terrain, warning terrain”

PF: Yeah we were diving…sorry

TAWS: “Warning terrain, warning terrain”

PM: There was a fatal accident in the UK and this is exactly what happened there.


The TAWS warning continued to sound until, exactly ten seconds after making this observation, the aircraft impacted the water with fatal results for all onboard.



Full Crew Flight Monitoring: mitigating the unique hazards in HEMS operations.


EASA’s Annual Safety Recommendations Review 2019 has identified HEMS as one of its key safety topics noting that,

“EASA has received several Safety Recommendations over the last years related to this topic.”

before going on to comment that,

“There are several unique hazards faced by Helicopter Emergency Medical Services (HEMS) operations. The time pressure, planning challenges and environmental factors associated with air ambulance operations makes them inherently high risk operations.”

The Safety Review now identifies Q3 2021 as the likely date for the drafting of new rules on HEMS operations in the form of an Opinion to be submitted to the EU Commission.

Three years on from a fatal HEMS accident in Italy that led to some of the recommendations for change in this area, it seems that one of the areas that EASA intends to focus on is the greater provision of multi-pilot operations where flight conditions are complex enough to require a higher level of monitoring of the flying pilot.

Two years after I first wrote about crew monitoring for single-pilot helicopter operations (Monitoring for Technical Crew), and the important safety role it should play, it is interesting to note that there appears to be no recognition of this in the current proposals. These instead seem to perpetuate the status quo of regarding monitoring as an activity that is exclusively limited to the cockpit, rather than a whole-crew responsibility.

EASA states that ongoing work currently includes proposals to:

“Draw up Guidance Material applicable to daytime flights…which provide indications about the opportunity of using two pilots in specific geographical areas where the orography and the possible sudden changes in visibility can make the conduct of the flight problematic, requiring, even as a preventive measure, the monitoring of controls and instruments.”

Their objective is to,

“Maintain a high aviation safety level by reviewing the requirements related to HEMS flights by day or night, regarding equipment, training, minima, and operating/hospital site illumination.”

To date, there is still nothing that requires training on monitoring for helicopter technical crew or non-pilot crew members that goes beyond the basic annual recurrent CRM syllabus, which does not in itself focus on the important safety implications of this topic. The fact that none of the published material on monitoring from any source considers the role of flight crew members beyond the cockpit also underlines the lack of attention given to the function that non-pilot crew could and should have in monitoring all aspects of the flight, as well as in monitoring the pilots themselves.

CRM is defined in CAP 737 as “The effective use of all available resources, equipment, procedures, supporting services and, above all, people to perform a task as efficiently and safely as possible”. All HEMS crews are made up of more than just a pilot or two. If EASA are serious about instigating a step change in helicopter safety, then it is time to acknowledge the whole crew concept and adjust the focus accordingly.

Decision Making in a complex environment: The role of experience, intuition, and the contribution of team behaviours.


What is a complex environment?

Put simply, a complex environment is a system or situation that has too many elements and relationships to understand in simple analytical or logical ways. It is a landscape with multiple and diverse connections, and with dynamic and interdependent relationships, events, and processes. While there may be trends and patterns, they are entangled in such a way as to make them indiscernible, are complicated by time delays, and contain any number of feedback loops and knock-on effects.

In turn, a complex problem is one that is difficult to define and may change significantly in response to any chosen solution. It may not have a single ‘right answer’. It will have many interrelated causes, few or no precedents, can have multiple external influences, and is often prone to surprise.

The decisions we are faced with in the aviation environment can often match this definition neatly. Ours is a landscape well cultivated with surprises, emergent changes, and an ever-changing meteorology, both literally and figuratively. In this landscape, the problem or situation requiring a decision can often be unique, dynamic, unprecedented, difficult to define or bound, and have no clear set of solutions.

Decision-making in conditions of complexity

Sometimes we will be faced with a decision where we cannot apply a typical rule-based or practiced response. Some aircraft malfunctions or emergencies can be dealt with by defaulting to a simple, clear, and immediate rule-based task. Others can be met with knowledge-based actions that demand a measure of cognitive effort from the pilot. But when an ambiguous situation presents the pilot with symptoms or circumstances which they are unable to match to anything in their prior experience or knowledge database, it is an altogether different prospect.

When we consider human limitations in the face of extreme complexity, our capacity to respond effectively can seem futile, and our decisions less an exercise of judgement than a roll of the dice. These are the situations of ‘unknown unknowns’ which prior risk assessments and threat and error management will not have taken into account. Here too lies the greatest risk of startle effect brought on by fundamental surprise, where an entirely new situation takes the pilot outside of a framework which they understand, and requires a thorough re-evaluation of the situation starting from basic assumptions. In these circumstances, often – even with the benefit of our best judgement – the outcome of a situation will depend heavily on what some would term luck.


Despite this, and as is the case in any informed decision-making process, taking a decision under conditions of complexity does involve deploying a toolset that we all have to varying degrees. We can call it simply ‘Experience’, and it is made up of many elements. Your toolset may include education, relationship networks, knowledge of past successes and choices, multiple frames of reference, cognitive insights, mental, physical and emotional wellness, and an awareness of related external and internal environmental pressures. We could go on.

When dealing with complex systems the ability to put your past experience and cognitive capabilities to best use is probably the most important consideration of all. This means applying both your conscious and unconscious mind (with its memory and associative processing power) to help understand the situation in which you find yourself and place it within some context that you can understand. In whatever small way, by doing this you are pulling yourself back from the unknown to the known.

The good news is that we all know much more than we think we know. We spend our lives soaking up data, information, and knowledge, and through our experiences and internal contemplation we develop understanding, insights, and feelings about things of which we are often unaware. Even when we are unable to actively retrieve a piece of stored knowledge or experience from our long-term memory, it can still influence our unconscious decision-making process. This phenomenon is often interpreted as intuition – like the firefighters who ‘just have a feeling’ that a building is about to explode. In fact their senses are unconsciously matching the present stimuli with past experience of the same thing happening before.

The ability to use intuition and judgment to solve problems or react to situations without being able to explain how they know is a common characteristic of experts. As Malcolm Gladwell described in his book Outliers, it is said that it takes at least 10,000 hours or ten years of deliberate practice to become an expert in any given activity. However, many people may do something for 10,000 hours – for example, driving a car over the course of a lifetime – and still never get anywhere near expert level. Most people plateau and some even get worse. Those that do achieve what could be called expert status do so by actively learning through deliberate, investigative, and knowledge-seeking experience, developing intuition and building judgement through play and intensive interaction with the system and its environment. That is what is meant by deliberate practice.


A study of chess players concluded that “effortful practice” was the difference between people who played chess for many years while maintaining an average skill and those who become master players in shorter periods of time. The master players, or experts, examined the patterns over and over again, studying them, looking at nuances, trying small changes to perturb the outcome (sense and respond), generally “playing with” and studying the patterns. The report also noted that, “… the expert relies not so much on an intrinsically stronger power of analysis as on a store of structured knowledge.” (Ross, 2006, p. 67) In other words, they use long-term working memory, pattern recognition and chunking rather than logic as a means of understanding and analysing.

This indicates that exerting mental effort while exploring complex situations embeds knowledge in the unconscious. By sorting, modifying, and generally playing with information, manipulating patterns and understanding their relationships to other patterns, a decision-maker can proactively develop intuition, insight and judgment relative to the domain of interest.

When it comes to error management in aviation, one of the principal countermeasures we use is the standardisation of practice and procedure. We employ key tools to assist us in this, such as the checklist in both its normal and emergency forms; in some cases they are there to guide our decision-making process. But while standardisation in all its guises is possibly our most powerful weapon in the most common and routine areas of error management, it also constructs barriers to the development of experience, insight, and the deeper level of understanding described above. Alongside other risk-mitigating measures such as weather minima, go/no-go items, and SOPs which prescribe strict boundaries to flying parameters and manoeuvres, and prohibit others, we create a strangling effect on experience. Put most simply, we know that we learn most from making mistakes. If we are effective in preventing mistakes, then we are necessarily restricting our learning. We will no longer be able to stretch the boundaries of our experience by experimenting and learning from error.

In the context of routine flying it is not hard to see how the cost-benefit balance topples, rightly, towards constraining risk. But the dilemma is more difficult in the context of complex situations, where we fall back more heavily on experience and cognitive problem solving. The well-known modern safety dilemma of automation dependency is an example of this. Safety is greatly enhanced by modern auto-flight systems, which for the most part do the job of flying better than any pilot could. But when such a system goes wrong, pilots who have only ever flown with its assistance have not always built up the experience, knowledge, and thought processes to operate safely without it. Furthermore, the growing complexity and ubiquity of these systems is such that no pilot will ever have a complete understanding of their intricacies, even over the course of a whole career. The complexity has outgrown our capacity.

Impact of the team dynamic on decisions in a complex environment

Recognition of the above makes the contribution of a team more important than ever. The use of teams to develop multiple perspectives, engage in dialogue, and drive critical thinking can improve the overall understanding of a complex situation, thereby improving decision making.

As individuals, some things are not always clear to us because we’re just too close to them. As we take in the external world and events around us, we think that we observe, create a model in our minds, and thereby have an accurate representation of the external world. Unfortunately, this is not the case. How we view a situation, what we look for, and how we interpret what we see depend heavily on our past experience, expectations, concerns and goals.

It stands to reason that two brains and two perspectives are better than one: they increase the availability and understanding of relevant facts, data, contextual information and past behaviours. But it is not just the sum of the knowledge available that helps; the dynamic in group decision-making can also make a large difference. When we find ourselves in confusing situations, facing ambiguities or paradoxes where we don’t know or understand what’s happening, it is intelligent to recognise the limited mental capacity of a single individual.

Confusion, paradoxes and ambiguities are not made by the external reality of the situation itself; they are created by our own limitations in thinking, language and perspective. This is why teams can improve the understanding of complex situations. Multiple viewpoints, and the sharing of ideas through dialogue, can surface and clarify confusion and uncertainty to an extent far greater than any one individual mind can.

Team-working also encourages mental flexibility. Mental flexibility means the capacity to maintain an open mind, unprejudiced by past outcomes, organisational routines or standardised thinking. It is the ability to assess an occurrence in its environment objectively and have the wherewithal to take whatever rational action makes sense – on the basis of logic, or of intuition and judgment – to achieve decision-making goals. This flexibility means that decision-makers must be willing to move beyond conservative solutions that have proven themselves in the past and consider new approaches where outcomes are uncertain at best and perhaps completely unknown at worst. It also means that people must be capable and willing to work with each other, to work with others they have not worked with before, to work in new ways, and to take unfamiliar actions.

Team Resource Management 

Training in Crew Resource Management concepts in aviation, and Team Resource Management in a wider context, provides obvious benefits in handling complex environments. The development of resilience, strategies for responding to fundamental surprise, the building of mental flexibility, and decision-making tools all contribute to this. So too does understanding the limitations of our situational awareness and our mental models, our own character and personality, and how we process and perceive the information hitting our senses. As ever in CRM though, nothing is more important to decision-making in complex environments than a properly functioning and productive team dynamic, with effective communication at its core, which allows ideas and actions to be shared, questioned, and critiqued. Setting the conditions that encourage these processes should be our first priority in training and operating.

Adapted from: Bennet, Alex & Bennet, David (2008). ‘The Decision-Making Process for Complex Situations in a Complex Environment’, in Handbook on Decision Support Systems.

Crew Resource Management for Search and Rescue Operations.

It’s a personal opinion, but I think one of the most enduring fallacies of Search and Rescue (and one that SAR practitioners worldwide are unlikely to be working hard to shake off!) is that it is somehow an elite branch of the helicopter world requiring a higher level of skill or ability than other kinds of flying. I think that is nonsense. Like all specialist operations it requires a high and consistent level of training in a few particular skills and disciplines, but that doesn’t mean it is the preserve of the particularly skilled. What sets apart those who do it really well from those who just do it are not ‘flying’ skills at all, but ‘soft’ skills, non-technical skills, CRM: call it what you will. A successful mentality in SAR requires flexibility; the ability to absorb and react to changing circumstances; to take decisions, and then revisit and be willing to change those decisions; to accept perspectives different to your own; to take advice and even criticism; and to have a certain unflappability.

But of course these are attributes that will serve you well in any environment. In its objectives and its relevance, CRM, and how it is trained, is no different with respect to SAR than it is to any other operation. Failures in CRM (failures to use, manage, or prioritise the resources available to achieve a task, within or beyond the cockpit) do not result in any more critical an outcome in SAR than in any other safety-critical activity.

There is no such thing as a CRM topic, or CRM training, that is unique to SAR operations.

I have run CRM training with some very experienced crews, both SAR and non-SAR. When I ask them to list as many as they can of the 14 core topics that make up the CRM syllabus, most can’t get beyond Communication, Situational Awareness, and Decision-making. Why? Because even the most experienced tend to think of CRM first in terms of events within the aircraft, even though they know there’s more to it than that.

SAR aircraft usually operate with a multi-pilot crew as well as a wider multi-crew concept, where the nature of the task means that priority and leadership move around the aircraft depending on the stage of the mission. Perhaps that requires a more developed level of teamwork and a less rigid model of leadership than normal, but what is going on inside the aircraft is only part of the picture.

This unconscious prejudice towards what used to be called ‘cockpit’ or ‘crew’ resource management ignores what is actually most likely to impact your resource management: external factors. And what does separate SAR operations from other flying operations such as CAT/Offshore, and (arguably) even more specialist areas such as HEMS and HHO, is the potential multiplicity of inputs, influences, and external factors that can affect the conduct of the mission, as well as the decision-making processes both before and during the flight.

In SAR, the bubble of potential resources, supporting services, and people that could impact your mission tends to be much larger, and therefore exponentially more complex. These influences could include the Rescue Coordination Centre; other rescue agencies such as coastguard, police, fire service, and ambulance; the presence of the public, sometimes in large numbers; other SAR or HEMS aircraft at the scene; or multiple vessels involved in a search or rescue at sea.

Likewise, the choice of equipment, procedures, and the variety of options open to you as to how you might go about the task are likely to be more numerous, specialised, and complex than in many other operations. Do you winch or land? Do you get a vessel to make way or heave to? Does the winch rescue call for a strop, rescue litter, or stretcher? Do you use a hi-line, a single lift, or a double lift? Which way do you point the aircraft to get the best trade-off between the pilot’s and the winch operator’s sight references and priorities? How is the downwash affecting the rescue effort below the aircraft? In SAR there are many ways to skin a cat, and while some are likely to be better than others, few are likely to be the ‘right’ or ‘wrong’ way.

You might also have to contend with other people’s conflicting priorities, such as the fishing captain who is more interested in his catch than in disembarking an injured crew member; interagency rivalries where more than one rescue service thinks it is best placed to achieve the task; or the cruise liner on a tight schedule that won’t alter course. Then there are the ever-changing circumstances, such as one callout I had to a man overboard at night, which while en route to the scene became a ‘we’ve found him on board, but injured’, and then on arriving overhead turned out to be a crazed, knife-wielding sailor whom the rest of the crew had locked in the onboard freezer to cool off.

In comparison, operations such as offshore Commercial Air Transport are more tightly bound and controlled by rules and procedure. This takes much of the mental effort of routine decision-making out of the hands of the crew. A flight is carried out within a set of reasonably defined parameters; much of what is to be expected can be briefed beforehand, and procedures that delineate the flight can be followed. Even if things start to depart from the script and an aircraft malfunction is experienced, the checklist guides the decision-making process: if ‘A’ happens, then apply procedure ‘B’. Nothing is that clear-cut in SAR, where most decisions have to be weighed against what is at stake both in the aircraft and on the ground, and even malfunction actions in a checklist can vary depending on the type of mission.

So how should this be addressed in training? Should CRM training for SAR have a different focus or different priorities from other operations? My answer is no, and here is why.

The standard EASA CRM syllabus followed by most operators, with its 14 sub-topics, is a catch-all that potentially covers every aspect of every possible operation. The way the regulations are written allows a lot of leeway for tailoring training to your particular operation, and includes very few specifics about how you should go about covering the different areas and in what depth. Many things in aviation require and value standardisation, but when it comes to CRM I believe this flexibility is a good thing.

When I deliver training I put a lot of emphasis on tailoring it to the problems, challenges, and specific demands of the operation, so the training naturally focuses on what you do. For example, I get crews to raise what they consider to be their five key threats to safety, and we compare and contrast answers and use them as a starting point for training, asking how we could address them as an operator, how we could improve, and what changes we could introduce. It is always interesting to see how the answers differ between the pilots in the front seats and the rear crew members. Each group has a very different perspective.

Not all operators run CRM training this way. Many have a stock syllabus that you have to sit through, repeated time and again over the years. Worse still (in my opinion) is delivering CRM training via online courses: the worst of all worlds. You lose the ability to sit together face to face and have those open and frank discussions about things that have happened to you and others, and why. Very little true learning takes place, and you lose the ability to drill down into the specific questions or problems raised by your crews.

CRM training sessions should, above all, be a forum for critically appraising yourselves, your activities and procedures; the way you operate. They are a chance to get together at least once a year and ask ‘what did we get wrong and why?’, ‘what could we do better?’, ‘how can we develop?’, and ‘where are we getting left behind?’ They are a time and place to raise safety concerns and propose changes. In doing this you will find discussions delve naturally into areas such as communication, culture, workload management, teamwork, situational awareness, and all the rest of the things that EASA require us to talk about.

Flattening the gradient

What former US Navy Captain and leadership guru David Marquet can teach us about managing power gradient.

The premise of David Marquet’s book Leadership is Language is that the deliberate, self-aware choice of language within teams can transform the way we communicate and collaborate with others, supercharge team performance, and help to manage errors. Power gradient is one of the areas he dwells on, demonstrating how your choice of language can build up or break down gradient.

Gradient is not just an aviation thing. Nor is it only about a Captain/co-pilot dichotomy: it exists between all the crew, just as it exists in any and every relationship. Neither is gradient a fixed element; it is ever-changing, varying with circumstances and situation.

Whatever the circumstances, however, one thing never changes. The rule of power gradients is that the steeper the gradient, the more difficult it is for information to flow upwards – think of speaking truth to authority.

What are the trappings of power gradient? They surround us and are omnipresent: physical separation – one office for the boss, another for the team – executive dining rooms, crew lounges, reserved parking spots, different uniforms or attire (overall and hard-hat colours, for example), share of voice in meetings and groups, and indeed the way we have learnt, and are culturally programmed, to talk to each other. In many cases the senior person in a relationship may not even be aware of these cues, will not be thinking about gradient, and is unlikely to be bothered by it. The junior person will sense the power gradient far more keenly.

If your position or your experience gives you more authority or power, then you need to work actively to flatten the power gradient with those below you. It is incumbent on the authority figure to flatten the gradient, because it is extremely difficult for the junior person to flatten an otherwise steep gradient towards their senior. Evidence from many studies demonstrates that teams with flatter gradients have better back-and-forth communication, better error correction, and more learning. So how do we make members of the team feel safe enough to speak up?

Steps to actively flatten a power gradient:

  1. Be able to admit you don’t know.
  2. Be vulnerable: the leader should draw attention to his own mistakes and doubts.
  3. Trust first: trust means ‘I believe you mean well’. Whether or not you do well depends on many factors beyond just wanting to do well.
  4. Enhance accessibility, both physically, in terms of getting rid of barriers of status and social distance, and in terms of your communication.
  5. Deliberately work to reduce the junior person’s fear of being judged, assessed, and evaluated by others, especially in a social context. This is called creating psychological safety.
  6. Dedicate deliberate thought to the use of language. Focus on how you communicate. To do this well requires continuous self-awareness, and reprogramming to avoid the default imperative mode of communication that we habitually resort to. 
      • For example, language that steepens the power gradient and prevents participation:
          • “I have more experience.”
          • “I have done this before.”
          • “I was in the meeting.”
          • “The boss told me that he/she wants…”
          • “Well, you’ve never done that before.”
      • Language that flattens the power gradient and enhances participation sounds like this:
          • “Your fresh eyes will be valuable on this.”
          • “Just because we’ve been doing this for a long time, doesn’t mean we can’t improve it.”
          • “When it comes to improving things, different perspectives are helpful.”
          • “I’ve done this so many times it’s hard for me to see it objectively.”

Despite long-embedded CRM practices that have positively affected communication and safety, Captains at the controls continue to crash airplanes four times more frequently than co-pilots do, because co-pilots are less willing to correct their Captain’s mistakes than the other way round, and the Captain is less willing to listen to a correction from the co-pilot than vice versa. This speaks to the powerful pull that hierarchy exerts on human nature.

The Chimp, the Virus, and the Helicopter:

Coping with confinement and COVID-19-induced stress.

What is the link between a chimp and a helicopter? Apart, that is, from the fact that every instructor you ever had told you that any monkey can be taught to fly one!

In his best-selling book The Chimp Paradox, Professor Steve Peters describes a seven-step blueprint for dealing with stress. Step four he calls ‘The Helicopter’: getting perspective.

Perspective is possibly the most important factor in coming to terms with any stressful situation. It is certainly one we should be putting to work right now to confront the uncertainty, doubt, and very real stressors caused by COVID and confinement.

He puts it like this:

Imagine you have climbed into a helicopter that has taken off and is now hovering above the situation.

You can now look down and get some perspective on what is happening. Imagine your whole life as a timeline from start to finish and see where you are at this particular point in time. Ask yourself, “How important is this situation to the rest of my life?” “What are the really important things in my life, and is this one of them, or has it changed them?” Remind yourself that everything in life will pass. You will soon look back on this moment as a distant memory. Very little in life is important in the long run.

Crew Resource Management: Is it time to rethink our approach?

Let’s not beat around the bush: Crew Resource Management has an image problem. For many, CRM training means little more than a day in the classroom, which generally inspires, at best, a resigned ambivalence.

CRM has an image problem…

Perhaps there has been a failure to define CRM for what it really is; a failure to separate it from being just another compulsory annual training item to tick off alongside the more minor competencies, and to place it in the context of aviating as a whole.

The problem with defining it as a set of syllabus items to meet the regulatory requirements is that the end objective of CRM training becomes compliance. But compliance is not the end objective of CRM training. The purpose of the training is to make us better, safer, more complete aviators.

The end objective of CRM is not compliance.

Is this the fault of the regulators, for creating a culture of compliance which demands adherence to a set syllabus and the repeated teaching of the same items over and over?

Or is it the fault of the operators, for misunderstanding what CRM is, for failing to grasp the flexibility that exists in the regulation to encourage a tailored and adaptable approach to training, and for the mental laziness of not treating training as a way to engage with improving their operation?

Are we losing sight of the ends by focusing too much on the means? That is to say, the means of compliance. How do we achieve a change of mentality to embrace the fact that CRM is no more and no less than a catch-all term for all the behaviours, knowledge, and skill sets that make and define us as pilots or aviators?

CRM has an image problem. Is it beyond resurrection? Or could we shift the paradigm to make it genuinely integral to, and inseparable from, every other part of recurrent flying training? It is my belief that the annual CRM training session should be (within the context of CRM theory) a forum for a broad operational debrief of the problems, challenges, positives and negatives of what you do and how you operate. It should ask questions such as: ‘what could we do better and how?’, ‘what do we do well and why?’, ‘how could we extend our expertise?’, ‘where are the gaps in our knowledge?’, and ‘how do we better draw lessons from what we do?’ It should be unique to each operator, and tailored to meet the questions raised by each part of their operation. Because if that is not our approach, then we will never manage to take the training into the aircraft, which should be the ultimate end point of dedicating time, effort, and expense to it.

“Engine failure! Cut cut!” Power loss during winching operations: the pre-eminent risk in your assessment?

“Clear door, ready to winch.” “Power assessment/hover scenario: Ditching/Committed/Flyaway/Safe Single Engine.” For most of us who fly multi-engine helicopter types, single-engine performance, and the choice of flight profiles deriving from it, was introduced as a predominant consideration from the beginning of our flying training, and has remained there ever since.

Our pre-flight calculations, our SOPs and even our flight checklists ensure that a possible power-loss event remains at the forefront of our minds, and in most situations the guesswork and judgement are taken out of the equation anyway by strict parameters within which we are allowed to train and fly.

This creates a kind of availability bias. 

Availability bias is the tendency to focus on what you know most about, or have heard about most often, as the dominating risk, rather than weighing the risks evenly. In this case, a power-loss scenario. The problem is, the evidence clearly shows that in helicopter winching operations power loss is not the dominating risk. In fact, it is far from it.

Incident and accident data from helicopter winching operations worldwide tell us that it continues to be as risky an activity as it ever was. It is made up of a broader and longer list of causal factors than will ever run through your head as you assess the scene of the proposed winch operation and establish the hover. Some of these include:

  • Blade strike hovering near obstacle
  • Entanglement with gear (causing either attachment or unintended persons on winch)
  • Physiological degradation caused by chest strop
  • Casualty in strop losing consciousness
  • Roll out from hook
  • Shock load breaking wire
  • Attachment to rock-face/vessel
  • Loss of visual references
  • Accidentally hooked to weak harness point
  • Winching starts with person unattached or partially attached
  • Untethered hoist operator
  • Uncontrolled spin
  • Hi-line attached to person or object on the ground
  • Hi-line entanglement with person on winch
  • Hi-line weak-link insufficient for large high-powered rotorcraft
  • Tree blown down by hovering aircraft during winching
  • Aircraft enveloped by descending cloud-base during winch operation
  • Downwash causing fatal fall
  • Accidental cable cut

That’s too many eventualities to cover explicitly in a briefing, either pre-flight or in-flight. But which ones should we be prioritising, and when? And are our SOPs and our decision-making processes flexible enough to allow us to tailor our profiles to the situations in which we find ourselves?

I do not cite definitive statistics, but a quick scan of accidents and incidents in the past five years throws up five involving fatal or serious injuries during helicopter hoisting. The most recent of these happened just this month in Japan, where a 77-year-old lady fell 120 feet to her death after being incorrectly attached to winching equipment. It followed hot on the heels of another well-publicised incident in June of this year, in which a Phoenix Police crew grappled with an uncontrolled stretcher spin.

Accidents involving power loss during winching do happen, but they are rare. One took place in Iceland on 16 July 2007, when an AS365 Dauphin of the Icelandic Coast Guard ditched during winching following an engine failure. There were no fatalities and no injuries.

Safety management and reporting systems exist for us to readjust our approaches to how we operate based on the data that they produce. The weight of evidence suggests we should be shifting our focus to include other risk factors, inviting ourselves to think in new ways about how we are going to get caught out, and asking ourselves where the greatest risks lie in each case. What processes, procedures, or models can we put in place to help us to do this? What about our approach should we be re-evaluating? 

As a community, should the technical crew be taking a lead on this, or does it necessarily require a whole-crew approach? When I led a training-session case study on a winching incident last year, I deliberately split the groups into front-seat and rear crews to see how differently they would approach assessing the risks. Sure enough, their considerations were dominated by their respective areas of expertise: they had been trained to think that way! Furthermore, the view of what constitutes the top risks from the differing perspectives and experiences of cockpit and cabin does not always coincide. How do we balance these off? How should we draw out and combine these different thought processes as effectively as possible?

The Structured Debrief: The Big 7

During my time in the military a debrief often began with the question, “Any flight safety points?” A closed question. A question that invites a no.

It was meant to show the primacy of flight safety in what we were doing. What it actually did was immediately put people on the spot if they had an issue to raise. It would sometimes point out the elephant in the room. Usually if there is a significant flight safety point then everybody knows it. 

Either it is a wasted question because whatever the issue is it is about to dominate the debrief anyway, or it is the elephant in the room about which no one wants to speak out. Surely it is better to tease it out by starting a discussion that can lead people into the subject in a less confrontational manner?

What is The Big 7 Structured Debrief?

The Big 7 are based on core CRM principles which will be familiar to most, but using them as a structure upon which to construct a debrief might be inviting a change of mindset for many of us. 

The Big 7 Structure

  1. Communication
  2. Workload Management
  3. Decision-making
  4. Situational awareness
  5. Monitoring
  6. SOPs
  7. Automation

Use the subject titles above to carry out a debrief based on these fundamentals.

This has a number of advantages over a more traditional approach:

Recall

Using the Big 7 to structure your debrief triggers memory better than a chronological approach. It provides a cognitive framework and forces you to think critically about each aspect of your flight.

It avoids the mental bias of engaging with the most memorable events instead of the most significant with respect to your CRM behaviours.

Breadth

These titles encompass a broader approach, covering everything from pre-flight – even before you arrive at work – through the brief, to other wider influences and external factors.

Openness

It allows for open questions on specific areas of flight. 

For example: 

“How could we have used the automatics to better effect?”

“What could we have done better to manage our workload?”

Raising each subject one by one encourages crew members to speak up about specific concerns or events that they might not otherwise have drawn attention to: “On the subject of SOPs…”, “As you’re asking about communication, actually…”

Structure

It avoids a chronological approach, which tends to cover the early part of a flight in detail, remember the end, but skim over much of the middle. The problem with debriefing chronologically is often that as it starts to drag – conscious of time and levels of interest – some of the most important lessons from the later stages get skipped altogether.

Focus/Brevity

Unlike a walk-through, talk-through of your flight, using the Big 7 to provide focus can help you to keep a debrief short and to the point. If there’s nothing to say on a topic, then a simple ‘no points’ will keep you on track to a concise debrief. If there is something worth discussing, then you can get straight to the point without first testing colleagues’ patience by talking in depth about less relevant or significant events.

Management

Any debrief needs to be managed to prevent it getting bogged down by the talkers, the micro-analysers, and the inevitable conflicting points of view when they arise. By being broken down into discrete subject areas, this structure allows the conversation to be moved on diplomatically when momentum is being lost.

Debrief to learn

Debriefing as a training tool for aviation professionals is the most effective way to learn, to talk about decision-making, to acquire or revise technical skills, and to improve teamwork.

A debrief is a conversation between members of the crew and/or personnel involved in the operation to review an event, real or simulated, during which the participants analyse their actions and reflect on the role of their thought processes, psychomotor skills, and emotional states, in order to sustain or improve their future performance.

Experience may be the basis of adult learning, but learning cannot happen without rigorous reflection that allows us to examine the values, assumptions, and base knowledge that guide our professional actions. Simply accumulating experience does not make you an expert.

Despite its importance, debriefing is a dilemma for many people, who often cannot find a way to express critical judgements of an observed performance openly without hurting feelings or provoking a defensive attitude in their colleagues. As a result, we often avoid it. We avoid verbalising our thoughts and feelings so as not to confront our colleagues, not to put them on the spot, and not to provoke negative emotions in them, and because we want to preserve a good professional relationship with them.

Characteristics of a constructive debrief

The constructive debrief is founded on sharing opinions and personal points of view openly, while at the same time taking on board the best of what the other participants have to offer. It rests on demanding the highest standards of your colleagues and assuming that their responses deserve great respect.

Our challenge is to turn errors into a source of improved safety. To make that happen we cannot hide them, be shy about expressing our own point of view, or merely pose open or leading questions in the hope that the participants will reach the conclusions that we are reluctant to voice directly.

If we cannot analyse and discuss the errors made during training or on a mission, we squander the greatest source of professional development that we have. To improve as aviators and to promote safety, we need to find a way to discuss our errors openly. That is easy to say, and not so easy to do.

Debrief2Learn

What does the language barrier mean to the multi-national cockpit?

The challenges of piloting an aircraft can often load us up enough without overlaying the cognitive effort involved in transmitting and receiving information in a language other than our own. But plenty of pilots out there do just that every day. In aviation, those of us lucky enough to have English as our mother tongue are unlikely ever to fully appreciate the additional task that this implies for the rest.

By recently signing up to fly helicopters with the Spanish Coastguard, I have just given myself the opportunity to find out first hand.

Like any other skill set, communicating is one that can be learned. But, as anyone who has tried it at almost any level can attest, striving for success in communicating in a language other than your own can be in turns the most satisfying and most demoralising of activities. It takes effort and tenacity.

In the airlines, a multilingual crew is probably more the norm than the exception; in the rotary-wing world, despite the mix of nationalities in many large helicopter operations, that is not the case to anything like the same extent.

You might think that when we talk about the impact of language on CRM then we’d be focusing in on the subject of communication. But here’s the thing: I am a strong believer that CRM is communication. And communication is CRM. The art of communication is the thread that runs through each and every other element that makes up our non-technical skills.

As a CRM instructor I had the privilege of spending some time in the classroom with French pilots who were joining a British Search and Rescue operation. We talked about some of the difficulties that language could present, but it wasn’t until I flew with them that the sheer amount of extra mental energy they must expend carrying out that role in a foreign language was really brought home to me.

So, now that it’s my turn, it struck me as an interesting exercise to consider some of the different ways that operating across a language divide could impact the CRM of the whole crew. The fact that I have never paused to give it such in-depth thought before is itself worth highlighting: I have worked with speakers of English as a foreign language throughout my career, and I know I would not be alone in admitting a failure to pay sufficient attention to my own language usage when working in my mother tongue.

As a non-native speaker, being the odd one out amongst a crew of native speakers raises a number of questions in my mind about how my language usage and interpretation will impact the crew dynamic and the operation, in any number of technical and non-technical ways.

How much will I lean on pro-words, and standardised call-outs? Will I rely on them too much, at least in the early days?

How will the limitations of my language skills restrict my propensity to verbalise my thought processes, or lead me to avoid moments of communication that I would not have shied away from, with less mental effort, in my own tongue?

What effect will it have on my cockpit workload? How can I offset the impact that it will have on my capacity to deal with other eventualities in flight?

To what extent will it impact my situational awareness? Will I miss anything going on around me due to the knock-on impact it will have on my capacity in general, or due to the limitations of my language skills themselves? What impact will my deficiencies, or the differences in how I communicate in my second language, have on the situational awareness of the rest of the crew? Will I be able to explain myself as quickly and effectively, draw pictures with words, and contribute to building crew awareness?

The mental effort required to work in a second language is fatiguing. What effect will that have on my performance? To what extent will my colleagues be conscious of its impact on me?

What if I suffer a surprise and startle event? Will I be lost for words? Will the particular challenge of emergency or abnormal situations prevent me from finding the language and being able to express myself fluently? Under acute stress will reversion to my native tongue happen automatically, and what impact could that have on the rest of the crew?

Will insecurities (albeit subconscious ones) about how language might hold me back, or change the way that others look at me, prevent me from projecting a greater leadership role? And would the corollary of that be that I am asking more of the team around me?

For those of you who will never put yourselves outside this particular comfort zone, although you are less likely to put these questions to yourself, they are no less relevant should you fly alongside a crew member who is working with you in their second language. Many of us are guilty of not paying enough attention to this. How often do you actively consider the way you speak: your choice of words; your avoidance of idiom, colloquialism and slang; your moderation of local or national cultural or geographical references? Do you consider your own pronunciation, accent, speed or clarity of speech? I expect that in most cases the answer is no. Or not often. Why? Because it is not something we habitually do. It takes mental effort and deliberate thought.

This is a question for the whole crew: How does the fact that I don’t speak your language just like you do change the way we have to work together?

What delivering two years of CRM training has taught me.

Something over two years ago I decided to try my hand at applying to be a CRM Trainer and Human Factors Facilitator. CRM had never really been my thing. My experience of Human Factors/CRM training up to that point was that ‘facilitators’ tended to be either evangelical, to the extent that their fervour in preaching ‘the message’ would rub me up the wrong way, or minimum-effort types: insipid presenters of stock hand-me-down case studies with nothing to add and nothing to offer. As for the first, I certainly never thought of myself as ‘that guy’ – I don’t like any kind of preacher. I was equally certain that I would do everything in my power not to be the second.

I don’t know what inspired me to take on the challenge of becoming all but single-handedly responsible for leading training in something for which I considered myself uniquely unqualified, over a startlingly broad range of subjects about the human condition, to a group of professionals who were – to a man – more experienced and (for the most part) considerably more knowledgeable in most areas of aviation than me. Thinking about it, I don’t even like the title ‘Facilitator’. It sounds underwhelming. Under-qualified. Not really even good enough to be an instructor. No knowledge to impart. Nothing to teach. So, if anything, this was about challenging myself to do something that I was not only ambivalent about, but that was also out of my comfort zone, in that up to that point I had spent most of my life avoiding putting myself front and centre in any classroom or group environment.

Two years later I have delivered some 24 training days in as many months. That’s about 190 hours of CRM training in two years, somewhat over the EASA-mandated minimums. So I can safely say that I have talked CRM enough recently to understand that doing a day of it once a month is some people’s idea of purgatory. I get that. Maybe the fact that I get it is the only reason I might consider myself qualified to talk CRM to those people.

There are still plenty of aircrew out there who have what we shall politely term a ‘healthy scepticism’ for the benefits of CRM. That’s fine as far as I am concerned. I believe it stems from the fact that aircrew are, generally speaking, a very intelligent and highly motivated audience who are expected to constantly question and critically appraise the validity of what they do. If they haven’t bought into the value of the training then it is because the training has failed to convince them of its validity. Getting told the same old platitudes about the use of body language in communication or the need to think twice before taking a decision is at best wasting the time of an intelligent audience. Playing little team games to illustrate the obvious is at worst patronising.

So what has two years of doing CRM taught me? 

Don’t patronise people. Don’t waste their time talking about the same fundamentals they have covered year in, year out if you don’t have anything interesting to add to the subject. This isn’t hard. I figured that, being aircrew myself, I don’t fall outwith that group, so it is reasonable to assume that anything I find really interesting or engaging is likely to interest others of my ilk too. Anything that I roll my eyes at would probably cause some eye-rolling elsewhere too. For example, don’t play team games unless they have a genuine and clear message!

Make it relevant. This seems an obvious one, but to me it is the absolute key to providing interest. What is covered has to resonate with the listener. They have to be able to link it with their own experience, and the more strongly and immediately they make that link, the more it will resonate. To that end, the closer you can tailor your material to the operation they fly, the aircraft they fly, the problems they encounter, and the organisational challenges they face, the more they will buy in, and the more the debate around your subject will come alive.

All the feedback I have received points to case studies being the single most effective way to engage people with the subject. There is no more provocative way to get people to evaluate their own failings, errors, and near-misses than by drawing the link between them and other people’s alternative outcomes. I call it the ‘Sliding Doors effect’. Or the ‘there but for the grace of God go I’ effect. As with relevance, the closer the similarities between a case study and how the audience operate their aircraft, the more powerful the effect.

The EASA regulations on CRM allow for all of the above. The way they are written allows a huge degree of flexibility in tailoring the training – deliberately so, I would like to think. They do not demand slavish adherence to syllabus items, and they do not specify the depth to which you have to cover each topic. For me this gives us a chance to address the naysayers. We need to re-frame the way people see the obligation of annual CRM training. If we, the trainers, can get people to see it not as a tiresome day in the classroom to sit through and a currency box to tick, but as an opportunity to talk shop about professional matters with colleagues – a forum in which to discuss almost anything of interest about our professional skill-sets, to raise questions, admit to mistakes, and create an environment for learning – who wouldn’t want that opportunity?

The final thing that becoming a CRM Trainer has taught me is that I massively underestimated how much it could teach me. For one, I’m certain that I have learnt more from putting together and delivering the training than anyone has learnt from me, either individually or collectively. I’m equally certain that the lessons I’ve picked up along the way have made me a better pilot. 

It has also fundamentally changed the way I think about CRM as a discipline. I now see CRM as a catch-all term for all the myriad skill-sets that define being a pilot, except perhaps for a very restricted definition of manual, hands-on, hands-and-feet flying skills, and maybe a bit of aircraft technical knowledge. And when you look at it like that, what makes a good pilot does not, proportionately, equate to whether or not you have ‘a good set of hands’. In actual fact, it is all about the other 90%. That is to say, your CRM.

Sting in the tail: keeping the back end at the front of your mind.

Following the accident at Leicester City Football Club at the end of last month, all of which was caught on camera and replayed very publicly, tail rotor failures are back in focus and, for those of us who fly the machines, very much at the forefront of our minds.

It was following a similar spate of high profile tail rotor incidents and accidents in the 1990s that the UK CAA and the Ministry of Defence co-funded a research project by QinetiQ to study in depth the incidence of tail rotor malfunction across fleets, and consider initiatives to both reduce their frequency, and mitigate their consequences. 

The study, Helicopter Tail Rotor Failures (CAA Paper 2003/1), was published exactly fifteen years ago, in November 2003. It remains the most in-depth study of tail rotor malfunctions conducted to date, and for anybody interested in a deeper understanding of the subject it should be mandatory reading.

How much has changed since then? Has the failure rate reduced significantly with the development of more sophisticated HUMS technology as the report anticipated? How much have training and checking regimes and improved flight simulation evolved to better educate and drill pilots on reacting to and understanding the nuances of different malfunctions in tail rotor control? 

The motivation for the 2003 study was the overwhelming evidence that tail rotor failures (TRFs) were occurring at a much greater rate than airworthiness design standards required. This was true for both tail rotor drive and control systems, on both civil and military types.

Data Sources

My first observation as to the effectiveness of the study concerns the difficulty experienced in collecting reliable and comprehensive statistical data. The project examined helicopter TRF statistics covering the period 1971-1994, based on data from the three UK MOD accident databases; civil data were taken from the CAA’s MOR System for the period 1976-1993. Recognising the limited scope of this data set, the study was expanded by obtaining data from global sources. As data from Eastern Bloc nations were not available, the remaining sources were gathered from the USA, Canada and New Zealand, which together accounted for over 80% of the remaining known aircraft.

In the absence of worldwide collated accident and incident databases, aircraft manufacturers were approached for material to be included in the analysis, but they would not release their own figures into the public domain. Presumably, this position has not changed in the intervening 15 years.

What has changed in the period 2003-2018 is the worldwide web, global interconnectivity, the sharing of information, and our ability to gather data on a whole host of subjects. Certainly, better computerised databases have become more accessible in recent years; take, for example, the wiki-style aviation safety database run by the Aviation Safety Network, which lists over 20,000 aviation accidents and incidents across aircraft types. Notwithstanding the possibilities that the internet has opened up in this field since the 1990s, the ability to search a full database using specific categories such as aircraft type, date fields, and accident causal factors still seems to be beyond current capabilities.
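
To make the point concrete, the kind of query that a standardised, collated occurrence database would make trivial might look something like the sketch below. It is purely illustrative: the file name and column names are hypothetical assumptions, not drawn from any real database.

```python
# A purely illustrative sketch of querying a hypothetical collated occurrence
# database by type, date range and causal factor. The file and column names
# are assumptions, not a real data source.
import pandas as pd

occurrences = pd.read_csv("occurrences.csv", parse_dates=["date"])  # hypothetical

trf_lynx = occurrences[
    (occurrences["aircraft_type"] == "Westland Lynx")
    & (occurrences["date"].between("1990-01-01", "1999-12-31"))
    & (occurrences["causal_factor"].str.contains("tail rotor", case=False))
]
print(f"{len(trf_lynx)} matching occurrences")
```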

This is a disappointment given that the very first recommendation of the report was that, “the relevant authorities co-operate to standardise accident and incident classifications, and the details recorded in occurrence reports worldwide.”

The ability to collate occurrence data so that it can be statistically analysed for useful conclusions in the interests of safety and learning is the purpose of any Safety Management System. Although the technology to expand this concept to a global level already exists, and despite the huge untapped potential in safety progress this could represent, the human, organisational, and commercial challenges to making it a reality are still a bridge too far. This is a subject I have touched upon in a previous article titled Aviation safety culture and the paradox of success:  Can safety innovation keep pace with technological progress?

The 2003 Study

The UK military airworthiness standard for a TRF (defined as ‘extremely remote’) was set at 1 per million flying hours. Looking at the statistics for the military types alone, TRF incidents were occurring at an average of over 8 per million flying hours, with the Westland Lynx leading the pack at 33.2 per million flying hours – over 33 times the level considered acceptable.

The expanded database compiled for the 2003 study comprises data from 344 TRF occurrences across civil and military fleets. The civil airworthiness standards define ‘extremely remote’ at an even stricter occurrence rate of between 1 in 10 million and 1 in 1,000 million flight hours. The study revealed that actual accident rates across the fleets were in the range of 9.2-15.8 per million flight hours. The overwhelming evidence was that TRFs were occurring at rates far higher than the airworthiness standards require.
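
For readers who want to sanity-check figures like these, the arithmetic is simple: an occurrence rate is just a count normalised to flying hours. A minimal sketch follows, using hypothetical fleet figures chosen only so that the result reproduces the Lynx rate quoted above:

```python
# A minimal sketch: normalising occurrence counts to a rate per million flight
# hours and comparing it against an airworthiness target. The fleet figures
# below are hypothetical, chosen only to reproduce the quoted Lynx rate.

def rate_per_million(occurrences: int, fleet_hours: float) -> float:
    """Occurrence rate per one million flight hours."""
    return occurrences / fleet_hours * 1_000_000

lynx_rate = rate_per_million(occurrences=83, fleet_hours=2_500_000)  # = 33.2
military_target = 1.0  # 'extremely remote': 1 per million flying hours

print(f"Lynx TRF rate: {lynx_rate:.1f} per million flight hours")
print(f"Exceeds the target by a factor of {lynx_rate / military_target:.0f}")
```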

Is this still the case? In the absence of a new study of TRF occurrences in the period since 2003 it is impossible to say definitively, but anecdotal evidence suggests that there has not been a significant decline in these occurrences with the introduction of new aircraft and technologies over the past 15 years. Within the UK military fleet alone, a few significant TRF incidents from the last seven years spring to mind.

The development of Health and Usage Monitoring Systems (HUMS)

The 2003 study focused on HUMS in some depth as a key technology for monitoring TR health. At the time of the occurrences studied (1970s-1990s) HUMS was yet to be born or was still in its infancy.

One of the findings of the study was that failure of the drive system accounts for approximately one third of all TRF occurrences and fatalities. It went on to conclude that, by conservative estimate, 49% of TRFs caused by failure of the drive system, and 18% of TRFs overall, could have been prevented by HUMS, and that those figures could be increased by a further 15% and 5% respectively with further development of HUMS technology. A more up-to-date study of TRF statistics would allow us to check the accuracy of that prediction.
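
Those percentages hang together, as a rough cross-check shows. Under the simplifying assumption that the drive system accounts for exactly a third of TRFs (the study’s exact share was a little higher), the sketch below lands within a couple of percentage points of the quoted overall figures:

```python
# Rough cross-check of the preventability figures quoted above, under the
# simplifying assumption that the drive system causes one third of all TRFs.
drive_share = 1 / 3        # approximate share of TRFs from the drive system
hums_preventable = 0.49    # share of drive-system TRFs preventable by HUMS

overall = drive_share * hums_preventable
print(f"~{overall:.0%} of all TRFs preventable by HUMS")  # ~16% vs 18% quoted

# Adding the projected improvement in HUMS technology (+15 percentage points):
improved_overall = drive_share * (hums_preventable + 0.15)
print(f"~{improved_overall:.0%} with further HUMS development")  # ~21% vs ~23%
```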

There is good evidence that HUMS analysis has had an important role to play in preventing potentially catastrophic TRF accidents. This was highlighted by an incident in the North Sea in December 2016, when an S92 suffered a TR pitch-shaft bearing failure over a platform helideck. The warning signs were present in the HUMS data, but systemic and human factors intervened to prevent the anomaly being identified, and the aircraft was released back into service, leading to the incident. Aeroassurance has published a good short case study of this incident.

This occurrence has been a catalyst for further momentum in the development of HUMS, making the case for advancing the concept into real-time HUMS. Real-time HUMS echoes one of the 2003 report’s recommendations on the future potential of HUMS, which says that “further work should be conducted to define an approach for the presentation of in flight information.”

It goes on: “In the longer term consideration should be given to providing an intelligent cockpit warning system that prioritises warnings, presents immediate actions, guides the pilot through a sequence of steps, and makes supporting information available”. We are not there yet.

Training, emergency procedures and advice to aircrew

The study found that, statistically, the largest causes of TRF are the TR either striking or being struck by an object, which together account for approximately half of all TRF occurrences and fatalities. These, of course, are not airworthiness incidents at all. As might be expected, the study also confirmed that a disproportionately large number of occurrences (51%) are associated with high-torque phases of flight.

In terms of training, two quick lessons can be drawn immediately from this data: situational awareness of the tail, and the flight condition in which you choose to spend time, are two areas where better training and awareness could help to keep us safe. To these we could add FOD and loose-article awareness, as it appears that a recent fatal accident in New Zealand on 18th October of this year may have been caused by an item of clothing being sucked out of the cockpit and going through the tail rotor.

Soberingly, the report concluded that for a TR drive failure in forward flight with a pilot intervention time of 2 seconds (considered to be a realistic estimate for a well-trained pilot), the outcome is a transient sideslip that is likely to be beyond the structural limits of the aircraft, and one requiring a control response from the pilot that will take the rotor speed beyond its transient limits, making a successful outcome very unlikely. Similarly, the hover trials showed that there is little that can be done to avoid the spin entry caused by a drive failure. Recovery from a high-power TR control failure was also very difficult, with the chances of recovery without significant damage concluded to be low.

In many cases the trials identified that the difference between a successful and an unsuccessful outcome turned on the speed with which the pilot could recognise a TRF, and therefore on their subsequent reaction time: if fast enough, it might prevent the aerodynamic response to the loss of anti-torque from taking the aircraft beyond its structural limits. This conclusion makes the quality of training and technical advice on the different types of failure, and on individual aircraft responses to them, amongst the most critical factors in recovering from a TR-related incident.

Acknowledging this, one of the key recommendations of the report was that manufacturers should be required to analyse the effect of TRFs in their aircraft types, and to provide more in-depth advice on handling and emergency procedures. The importance of this also stemmed from the evidence that arose about the complexities of, and differences in, aircraft responses across types. A one-size-fits-all philosophy that ‘a helicopter is a helicopter is a helicopter’ does not apply when it comes to dealing with TR malfunctions. The trials showed how significantly the response to ‘the same’ malfunction can differ according to aircraft type, flight regime and – crucially – the immediate response of the pilot to the initial symptoms of the failure. Thus, knowing how to recognise those symptoms, and how to respond correctly for your aircraft, could determine a life or death outcome.

Also highlighted was the variation in the standard of advice currently given in Aircrew or Flight Manuals. In many cases (and my current type, the AS365N2, is a case in point) there is no handling advice to the crew whatsoever on the characteristics of the aircraft following a drive failure, and TR control failures are not referred to at all. The key message from the research is that manufacturers should be mandated to provide validated and more in-depth advice on TR malfunction handling for all types. This includes the following:

  • It should make clear whether the use of a given power and speed combination is appropriate during the recovery from a TR drive failure (TRDF).
  • There should be information on techniques required to control the descent.
  • The loss of tail pylon/TR components should be identified as a source of possible aircraft pitch control problems.
  • Unusual vibrations emanating from the TR area should be identified as being indicative of a possible TR problem and should lead to selection of minimum power setting when in forward flight, or a landing and shutdown for technical investigation if in the hover.
  • Unusual pedal positions should be identified as a possible impending TRF condition, leading to the same actions as above.
  • The effects of a TR control circuit disconnect on the TR pitch condition should be identified.
  • Where appropriate to type, the benefit of varying the main rotor speed in the hover following a TR control failure (TRCF) should be advised.
  • There should be a requirement for the manufacturer to identify the possible failure modes of the TR control circuit, and the impact of TRFs on the anti-torque moment supplied by the TR so that appropriate advice can be generated.


For some of the larger and more modern types, there is no doubt that the quality and quantity of information available to aircrew has improved significantly in recent years, as has the accessibility of more in-depth sources. For example, the philosophy of the Flight Crew Operating Manual – a supplement to the Flight Manual which communicates the manufacturer’s guidelines to operators for enhancing operational safety during routine and abnormal situations – is evidence of this kind of advance. However, the standard of advice still varies greatly depending on both manufacturer and type, and this level of guidance to flight crews is not always the norm.

Simulator Training

The fidelity of simulator training is discussed in some depth in the report. This is another area in which technology has advanced in leaps and bounds since the turn of the century, as has its now far more widespread use by operators. However, there is still the problem of how to model effectively a flight response – and gather data – for something that is usually so far outside the flight envelope that it cannot be safely recreated in flight.


The recommendation that TRF diagnosis and recovery training in a simulator should be part of normal company training policy is accompanied by a warning: the provision of inappropriate training, due to poor modelling in the simulator or poor type-specific advice from instructors, could exacerbate the problems encountered during emergencies.

Summary

On reading the report it seems reasonable to conclude that, fifteen years later, there have been some significant advances, especially in the quality of training and advice on TR malfunctions available to aircrew, and in the extent to which HUMS now provides early warning of technical, maintenance, or design failures. However, it is also fair to conclude that there is still much more that could be done, and, as accidents such as the one in Leicester seem set to show us, that tail rotor components can still fail and slip through the cracks of airworthiness and design standards at a higher frequency than the industry should be comfortable with.

Remembering that approximately half of all TRFs are caused by the TR striking or being struck by another object, how we choose to operate our aircraft with respect to the tail should be at the forefront of our minds. The flight profiles we choose are more often determined by OEI (one engine inoperative) considerations and performance margins than by the probability of suffering a tail rotor malfunction. Despite much ill-informed debate about the choice of departure profile from the stadium in Leicester, the prioritisation of OEI considerations over other serious malfunctions does raise some interesting questions about how the industry assesses the risks of flight as a whole.

It may be fifteen years old this month, but a thorough read of CAA Paper 2003/1 will have something to teach every pilot out there, and should at least give you cause to think about more than just your single-engine considerations the next time you pick an approach, establish a hover, or manoeuvre towards a confined area. If nothing else, let it inspire you to go away and find out more about the tail rotor characteristics and failure modes of your own machine.

For those who don’t have the stamina to wade through the full 255-page CAA Paper, I recommend jumping straight to Appendix B, Tail Rotor Failures – Advice and Considerations, by Steve O’Collard, for an excellent summary of the trials that were carried out, and for generic advice and considerations on tail rotor failures to raise awareness within the professional piloting community.

A machine for jumping to conclusions:

Human Decision-making:

Extracts from Daniel Kahneman’s Thinking Fast and Slow.

Daniel Kahneman, winner of the Nobel Prize in Economics for his work on judgement and decision-making, first became famous for his 1974 article Judgment under Uncertainty: Heuristics and Biases, written with Amos Tversky. The article was the product of research funded by the US Department of Defense and the Office of Naval Research. He expanded this work into a 2011 book, Thinking, Fast and Slow, describing how the human mind makes judgements and choices.


Two Systems

Kahneman talks of two systems, referring to the brain’s two side-by-side modes of thinking when we make judgements or decisions.

System 1 – operates automatically and quickly with little or no sense of voluntary control.

System 2 – allocates attention to the effortful mental activities that demand it, including complex computations. System 2 is associated with conscious choice and concentration.

When we think of how we make decisions we generally identify with System 2: the idea that we are reasoning beings who make choices determined by rational thought.

However, Kahneman’s book uses a series of social experiments to show time and time again that it is in fact System 1 which dominates the way in which we take most decisions.

Characteristics of System 1:

  • Operates automatically and quickly, with little or no effort, and no sense of voluntary control
  • Executes skilled responses and generates skilled intuitions after adequate training.
  • Generates impressions, feelings, inclinations; when endorsed by System 2 these become attitudes, beliefs, and intentions.
  • Infers and invents causes and intentions
  • Neglects ambiguity and suppresses doubt
  • Is biased to believe and confirm 
  • Focuses on existing evidence and ignores absent evidence
  • Generates a limited set of basic assessments

Some examples of System 1 at work:

  • Driving a car on an empty road.
  • Answering simple sums, e.g. 2 plus 2.
  • Completing the phrase “bread and…”.
  • Detecting that one object is closer than another.
  • Orientating to the source of a sudden sound.

Some of these are involuntary – you can’t avoid doing them. Others can be voluntarily controlled but normally run on autopilot.

Characteristics of System 2

System 2 operations have one thing in common: they require high levels of attention and concentration dedicated to them and are disrupted when that attention is drawn away.

  • Focus on the voice of a particular person in a crowded and noisy room.
  • Search memory to identify a surprising sound.
  • Count the occurrences of the letter ‘a’ in a page of text.
  • Check the validity of a complex logical argument.
  • Continuously monitor your own behaviour – the control that keeps you polite when angry, and alert when tired.

The two systems in tandem

  • Both systems are active whenever we are awake. System 1 runs automatically; System 2 is normally in a comfortable low-effort mode.
  • System 1 continuously generates suggestions for System 2: impressions, intuitions, intentions and feelings. If endorsed by System 2 then impressions and intuitions turn into beliefs, and impulses turn into voluntary actions.

Problem 1: Overconfidence in System 1

  • People have too much faith in their intuition. Because cognitive effort is just that – effort – and therefore mildly unpleasant, we all try to avoid it as much as possible.

Problem 2: The Laziness of System 2

  • System 2 is inherently lazy. For example, quickly perform this little sum:

A bat and ball cost £1.10. The bat costs £1 more than the ball. How much does the ball cost?

Most people come to a quick answer of 10 pence. This is the intuitive answer: appealing, and wrong. Do the maths properly and you will see that if the ball costs 10 pence then the total cost will be £1.20. The correct answer is 5 pence.
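Spelling out the algebra that the lazy System 2 is being asked to do: let $b$ be the price of the ball in pounds, so the bat costs $b + 1.00$. Then

$$b + (b + 1.00) = 1.10 \quad\Rightarrow\quad 2b = 0.10 \quad\Rightarrow\quad b = 0.05$$

The ball costs 5 pence and the bat £1.05: together £1.10, with the bat exactly £1 more than the ball.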

Many thousands of university students have answered this question and more than 80% gave the answer as 10 pence. This shows a willingness to accept the easy ‘intuitive’ solution rather than engaging an effortful System 2 calculation.

  • System 1 provides us with automatic impressions and intuitions. It is gullible and biased to believe. The role of doubting and ‘unbelieving’ belongs to System 2, which has to challenge these impressions by recalling knowledge and making comparisons, calculations, and mental judgements.
  • It has been demonstrated that when System 2 is otherwise engaged by mentally demanding tasks, it loses the ability to check and balance the judgements of System 1. There is ample evidence that people are more likely to believe obvious falsehoods, and to make basic errors, when they are tired or when System 2 is engaged in other activities.

Your lazy brain:

When faced with complicated judgements we all tend to simplify the question by substituting it with another, easier one. For example:

  • Target question: How happy are you with your life these days?
    Substituted question: What is my mood right now?
  • Target question: How popular will the government be one year from now?
    Substituted question: How popular is the government right now?

We have a powerful tendency to come to an answer by substituting a difficult question with an easier one.

Here’s an example of how this works:

Steve is a very shy and withdrawn character, invariably helpful, but with little interest in people or in the world of reality. A meek and tidy soul, he has a need for order and structure, and a passion for detail.

What do you think Steve’s profession is? Order the following professions from Most to least likely:

Farmer, salesman, airline pilot, librarian, physician.

Research has shown that the vast majority of people make this judgement based on similarity or representativeness, with no consideration of the prior probability of outcomes. If you were to make this judgement based on statistical probability alone then you would first have considered how many farmers, salesmen, airline pilots, librarians, and physicians there are per head of population. The fact that there are many more farmers in the population than librarians should enter into any reasonable estimate of the probability that Steve is a farmer rather than a librarian.
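To see how heavily the base rates ought to weigh on the answer, here is a minimal sketch of the calculation in code. The numbers are purely illustrative assumptions, not figures from the research:

```python
# A minimal Bayesian sketch of the 'Steve' judgement. All numbers are
# illustrative assumptions for demonstration, not figures from the research.

prior_odds = 20.0              # assume farmers outnumber librarians 20:1 in the population
p_desc_given_farmer = 0.1      # assumed chance a farmer fits Steve's description
p_desc_given_librarian = 0.5   # assumed chance a librarian fits it

# Bayes' rule in odds form: posterior odds = prior odds x likelihood ratio
posterior_odds = prior_odds * (p_desc_given_farmer / p_desc_given_librarian)
print(f"Odds that Steve is a farmer rather than a librarian: {posterior_odds:.0f}:1")
```

Even with the description fitting a librarian five times better than a farmer, the assumed 20:1 base rate still leaves Steve four times more likely to be a farmer.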

Here’s another example:

A certain town is served by two hospitals. In the larger hospital about 45 babies are born each day, and in the smaller hospital about 15 babies are born each day. As you know, about 50% of all babies are boys. However, the exact percentage varies from day to day. Sometimes it will be higher than 50%, sometimes lower.

For a period of a year, each hospital recorded the days on which more than 60% of the babies born were boys. Which hospital do you think recorded more such days?

The larger hospital

The smaller hospital

About the same (that is, within 5% of each other).

Over 50% of people selected the last answer. In contrast, sampling theory tells us that the expected number of days on which more than 60% of the babies are boys is much greater in the small hospital than in the large one, because a large sample is less likely to stray from 50%. This fundamental notion of statistics, although easy to understand when stated, is evidently not part of people’s repertoire of intuitions.
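For anyone who wants to check the statistics rather than take them on trust, a few lines of code make the point. This is a sketch assuming each birth is independently a boy with probability 0.5:

```python
# A quick check of the hospital problem using the binomial distribution.
# Assumes each birth is independently a boy with probability 0.5; requires scipy.
from scipy.stats import binom

for births_per_day in (15, 45):
    threshold = int(births_per_day * 0.6)             # 'more than 60%' means strictly above this count
    p_day = binom.sf(threshold, births_per_day, 0.5)  # P(boys > threshold) on a given day
    print(f"{births_per_day} births/day: P(>60% boys) = {p_day:.3f}")
```

The smaller hospital comes out at roughly twice the daily probability of the larger one, so over a year it should record many more such days.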


Do not be fooled by your carefully reasoned judgement: You are a machine for jumping to conclusions!

Aviation safety culture and the paradox of success: Can safety innovation keep pace with technological progress?

The aviation industry is hailed as a pioneer of safety practices, of open reporting, of just culture, and in learning from its mistakes. And given its remarkable safety record, this reputation is perhaps justified. Nevertheless, it would be both complacent and counter to those values themselves to believe that the goal of safety has already been achieved. Better is a journey, never a destination.

The paradox of safety is that safety can be dangerous, while danger can make us safe. The fact that a company might have gone twenty years without an accident or a serious incident might demonstrate a good safety record, but it does not demonstrate that it is safe. On the contrary, it is likely to be at its most dangerous. Why?

Because past performance does not determine future performance. Any system – the economy, the environment, even the human body itself – adapts to its surroundings. If the surroundings seem safer, these systems tolerate more risk. As time passes since a previous incident or accident, those surroundings will inevitably feel safer.

And as with any other system, the success of the safety culture in aviation demonstrates this paradox perfectly. Aviation has always been seen as an inherently dangerous activity, and it is the risks associated with the catastrophic consequences of air accidents that have driven it to become such an unusually safe one. Of course, the reverse is also true, and that is where the aviation professional should beware. Because as soon as we pat ourselves on the back for setting the benchmark in safety, for leading the field across safety-critical industries, and buy into our own hubris, we start to rack up the levels of risk once again. We become more dangerous.

None of this is to say that we have a false sense of security, because it isn’t false: aviation really has become safer – all else being equal. However, all else is often not equal. As our environment becomes more complex, so do our interactions, and with them the potential for unintended consequences and catastrophe. The question is, how do we keep innovating and adapting our safety systems to keep pace with the constantly changing environment in which they are set, and the constantly changing risk profiles that even the most successful safety cultures must address?

Addressing the problem of extreme complexity

Defeat under conditions of complexity occurs far more often despite great expertise, know-how, and effort than from a lack of them.

In aviation, just as in many other technologically and organisationally complex fields, know-how and sophistication have increased so much in recent years that the struggle is now to deliver on them. Another of the paradoxes of continuing technological development in aviation is that although its aim is to reduce workload, diminish the opportunities for human error, and make a pilot’s life easier, the huge advances have at the same time turned a pilot’s job into the art of managing extreme complexity. This raises the ultimate question of whether a level of complexity will be reached that can no longer, in fact, be humanly mastered.

Degrees of complexity in aviation design concepts have grown so far that avoiding mistakes is becoming impossible even for the most super-specialised and experienced. What do you do when even the super-specialists fail? What do you do when expertise is not enough?

Globalise sharing of knowledge and expertise

The answer must be to increase the complexity of our response in line with the complexity of the problem. One pilot, one crew, one company alone will not have sufficient experience, expertise, or understanding – even in a lifetime of flying, work, or study – to match the demands of such complex systems.

Just as the first pioneers of safety management in aviation determined that improvement only comes from the analysis of failure, and that this is best achieved by sharing occurrences and learning from the mistakes of others, we must look for ways to progress this concept of safety. 

At its core is communication, and the sharing of knowledge and ideas.

The same technological progress that has driven aircraft automation and complexity, has in the same short timescale given us the tools to do this. The internet age; the in-your-hand electronic encyclopaedia that is your smart phone or tablet. Imagine a kind of professional Google, a specialised distilling of aircraft knowledge, by type, worldwide. Imagine, a repository for every bit of know-how imaginable about your aircraft type, recorded and locked in, even from the design stage, or from experts long-retired. Imagine having access to a system with the potential to answer almost any question you have either by reference to published or passed on knowledge, or in real time by being able to put it to all of the most expert practitioners in the world at once. Any engineering conundrum, any obscure malfunction, shared for future reference, comparison, and training.

In terms of the practicalities alone, it is now perfectly possible – if not simple – to construct a worldwide system to facilitate the sharing of all knowledge and expertise; all safety incidents, occurrences, and investigations; all the lessons learnt by individual flight crews through training, experience, errors, and chance; and all of the same from the maintenance teams. This would add up to many hundreds of thousands of hours of flight and maintenance experience, available at the touch of a button or the interrogation of a search engine, to any interested party.

It would fast-forward knowledge levels across aviation professionals by many years’ worth of experience, all distilled and organised for universal consumption. It would go to the core of addressing the problem of expertise versus complexity. And it would have the added consequence of a step change in safety.

It would, however, require a paradigm shift in how we are prepared to share information. Have faith; it wouldn’t be the first time that aviation has shown itself capable of pioneering significant cultural change to blaze a trail in safety concepts.

I, for one, would like to believe it is possible.

*Based on insights from:
  • Atul Gawande, The Checklist Manifesto (London: Profile Books, 2011)
  • Greg Ip, Foolproof: Why Safety Can Be Dangerous and How Danger Makes Us Safe (London: Headline, 2015)
  • Matthew Syed, Black Box Thinking (London: John Murray, 2015)

“PF & CM:” Pilot Flying & Crewman Monitoring

Training Monitoring for Helicopter Technical Crew (Part 2)

The response to my last post underlined how little technical or non-technical material out there is written with the niche skill-set of the helicopter technical crew community in mind. There is certainly an interest in, and an appetite for, such material among those who play a part in this community.

Taken aback by the interest generated by my last article on monitoring for Technical Crew, I return to the theme to attempt to answer some of the questions that it posed.

As overall accident rates have dropped steadily, and automatic systems have grown in ubiquity and reliability in modern aircraft, the causes of accidents in aviation have evolved too. Inevitably, it is not the automatics per se but our interaction with them as aircrew that has become a significant point of failure.

The recognition of this in the airline world followed high-profile disasters such as Air France Flight 447 in 2009, which remained stalled for 38,000 feet as it fell into the Atlantic Ocean. The serious concerns that this and other accidents raised about crew monitoring prompted a switch from the terms Pilot Flying & Pilot Non-Flying (PF & PNF) to Pilot Flying & Pilot Monitoring (PF & PM). Pedantic and semantic though this change may seem, it was meant to bring into focus the absolutely integral and indivisible part of piloting that monitoring is.

With a nod and a wink to this, then, I introduce to you the terms Pilot Flying & Crewman Monitoring (PF & CM):

 The ability to monitor the pilot (or pilots) effectively is no less an integral and indivisible part of the role of the helicopter technical crewman.

Monitoring:

The observation and interpretation of the flight path data, configuration status, automation modes and on board systems appropriate to the phase of flight. It involves a cognitive comparison against the expected values, modes, and procedures. It also includes observation of the other crew member and timely intervention in the event of deviation.

(Definition given in CAA Paper 2013-02 Monitoring Matters)

As I observed previously, there is nothing in the various definitions of monitoring that a helicopter Technical Crew member does not or could not do in flight. Where I noted that the only exception might be the ability to intervene on the flight controls, I stand corrected: even in terms of intervention, there are verbal intervention techniques that could make enough of a difference to avert a loss-of-control event in a single-pilot cockpit. One structured intervention policy that could certainly be put to good effect from anywhere in the aircraft is the acronym PACE (Probe, Alert, Challenge, Emergency). For example:

PROBE: “The rate of descent seems quite high, are you happy with that?”

ALERT: “Rate of descent increasing”

CHALLENGE: “CHECK YOUR RATE OF DESCENT!”

EMERGENCY: “PULL UP! PULL UP!”

The Monitoring Role for Helicopter Technical Crew

If you break down the definition of monitoring given above, there are three parts to it: observation, comparison and interpretation, and the human element.

Observation

The first part is the observation of the flight path data, configuration status, automation modes and on-board systems appropriate to the phase of flight. From a rear-crew perspective, monitoring in this way can be challenging. In a purely practical sense, the ability to observe the flight path data is often compromised by the TC’s physical position in the cabin. However, this does not preclude a wealth of data about the flight being available to them, depending upon aircraft type, role, and fit. It also raises some important questions about where TCs should position themselves within the cabin to facilitate the monitoring role during different or critical phases of flight.

For some crews this could even include a decision about whether to sit in the front or the rear of the aircraft. This question was raised during a recent accident investigation in Italy, when an AW139 HEMS mission ended in CFIT in poor weather conditions. The TC chose to remain in the cabin, and despite his attempts to verbally prompt the pilot in the final minute of the flight, he was unable to assert himself sufficiently, or intervene successfully enough, to avert the crash into a snowy mountainside in white-out conditions. The Italian ANSV report recommends that, in the case of single-pilot helicopters operated for HEMS missions, attention is drawn to the advantages of the HEMS TC occupying the co-pilot’s seat, with the tasks required of the TC’s role in the passenger cabin carried out by another suitably trained crew member instead. For a short case study of the accident in English by Aerossurance, follow the link AW139 HEMS accident EC-KJT.

Comparison & Interpretation

Comparison and interpretation form the second part of the definition, which involves making a cognitive comparison against the expected values, modes, and procedures. Like interpreting the flight path data, this is a function of technical knowledge and experience. As TC are not qualified pilots, with flight training requirements determined by Part-FCL regulation and type rating, their ability to compare and interpret flight data will obviously be reduced in comparison with their colleagues in the front seats. Nevertheless, the knowledge and understanding of systems, procedures, and checklists that can be built up across a career’s worth of experience in the rear of aircraft is astounding, and can cover the full range of aviation expertise, from ATC procedures, airspace, instrument approaches and IFR, through to extremely high levels of aircraft technical knowledge. In this context, the ability to become an expert at monitoring is limited only by the building of experience, coupled with the desire of the TC to learn and take on knowledge relevant to the role.

Human

The third element in the definition is the observation of the other crew members and timely intervention in the event of deviations. In this area the TC can be just as effective as any other member of the crew, and is arguably better placed during a high-workload event in the cockpit to monitor the behaviour of the pilot or pilots. Take a breakdown in situational awareness, for example. Being removed from the immediate cockpit environment – which could be the source of the pilots’ overload, both physically and in terms of the stimuli themselves – allows the TC the benefit of the ‘big picture’. This is sometimes called ‘the capacity seat’.

Sometimes the TC has a source of situational awareness that the pilots do not. This was the case in the CFIT accident of Rescue 116 in March 2017, when an Irish SAR S92 crashed into a small island offshore. It was the TC who brought the crew’s attention to the 282 ft obstacle of Blackrock in the final seconds of the flight. He had spotted the obstruction because he was scanning the flight path using the FLIR. Unfortunately, his warning did not come early enough to avoid the collision. For more details on this accident, see the Rescue 116 Preliminary Report.


Training and developing Monitoring for the non-pilot members of the crew

A lack of regulation setting out and standardising training requirements for helicopter TC, coupled with a failure to formally acknowledge the importance of the contribution they make to the crew in terms of monitoring, means that, for the time being, the initiative falls to the Operator to establish a proactive approach to promoting the concept of monitoring for rear crew.

How could the conscientious Operator/Technical Crew Trainer do this?

  • Ensure that TC see monitoring as a fundamental part of their training and development and take responsibility for developing their skill set.
  • Raise the profile amongst the pilot cadre of the role that TC monitoring has to play, and how it can be supported and aided from the front seat.
  • Engage with the company CRM team to pick out elements from literature relevant to monitoring by rear-crew, and facilitate a forum on how to develop the discipline and discuss the challenges it presents. 
  • Facilitate a joint discussion with pilots and rear crew on what measures could be taken to make it easier for the crew in the back of the aircraft to monitor the flight path and pilot activity.
  • Build monitoring skills into Line Training and Checking of TC. Explicit reference should be made to the discipline and behaviours required to improve monitoring during Line Training and Checking, and attention drawn to behaviours which demonstrate good or poor monitoring practices by rear crew.
  • Establish the extent of the training gap in monitoring theory and practice between pilots and TC owing to the requirement for pilot training under Part FCL. (For example, the Human Performance Syllabus amongst other areas.)
  • Establish the extent of the gap between the monitoring skills taught to, and practised by, pilots during simulator training and LOFT scenarios, and those of TC who do not benefit from such training. Consider ways to bridge the gap left by the absence of simulator training in particular.

“Becoming an expert at monitoring is limited only by the building of experience, coupled with the desire of the TC to learn and take on knowledge relevant to the role.”

Training Monitoring for Helicopter Technical Crew


I have been asked to deliver training on Monitoring to Technical Crew as part of a bespoke course to qualify them to assist single-pilot operations from the front seat. After considering how to approach the session and content I have been left asking more questions than I started with.

Being a fan of simplicity, I tried to start at the beginning by finding an answer to the first and most obvious question: 

What is monitoring?

The observation and interpretation of the flight path data, configuration status, automation modes and on board systems appropriate to the phase of flight. It involves a cognitive comparison against the expected values, modes, and procedures. It also includes observation of the other crew member and timely intervention in the event of deviation.

(Definition given in CAA Paper 2013-02 Monitoring Matters)

There is nothing in that definition that a helicopter Technical Crew member does not or could not do in flight. Except for the last part (i.e. intervention). But this definition, like most of the material on monitoring, is only written with pilots in mind.

In my mind, Technical Crew are flight crew, and as such they have much more in common with pilots than with airline cabin crew, despite the fact that in a regulatory sense they are only beginning to be properly distinguished. (It is worth noting that the EASA definition of Flight Crew does not include crew members outside the cockpit.) In any case, most people would agree that, as an integral part of any crew, and in accordance with the core principles of CRM, there is an important monitoring role for Technical Crew.

So how should we train monitoring for the non-pilot members of the crew?

There is very little guidance on this. The most in-depth material on monitoring from the Authority is the 2013 CAA Paper Monitoring Matters. However, even its title – “Guidance on the Development of Pilot Monitoring” – gives away the fact that the focus of the research and material does not extend beyond the cockpit.

The requirement to teach monitoring in the ground-training environment is limited to its presence as an item on the EASA CRM training syllabus, which of course applies to all aircrew alike. Nevertheless, CAP 737, The Flight Crew Human Factors Handbook (the key reference publication for CRM), only discusses the topic with reference to pilots, and does not widen the perspective to consider monitoring as a whole-crew concept. As we noted earlier, the EASA definition of Flight Crew is limited to pilots only, and so by that measure the Flight Crew HF Handbook is itself only written with pilots in mind, despite the applicability of CRM training to many other types of crew member in the modern aviation environment.

For the sake of a discussion on monitoring by technical crew, it is worth highlighting the two spheres in which a TC plays a monitoring role in most helicopter operations. The first, and most common, is in their capacity as a rear-crew member of the flight crew; the second is as a stand-in for the co-pilot while operating in the front seat (as is common, for example, in single-pilot HEMS operations).

While it is useful to distinguish between front-seat TC operators and TC monitoring of multi-pilot operations from the back of the aircraft, is there actually any fundamental difference in the monitoring skill-sets that we are asking them to use?

What are the challenges to effective monitoring by technical crew?

How might monitoring flight parameters, automatics, and situation awareness from the cabin differ from the same discipline in the cockpit? If anything, do these differences present extra challenges for monitoring from the rear of the aircraft, and if so, how should we be addressing them?

If we can’t identify any differences between the skills and techniques required by technical crew to monitor effectively, and those used in the front seats, then do we need to distinguish training for TC in monitoring from training for pilots at all, or is it fundamentally the same discipline?

Perhaps ‘monitoring’ is a term that has only been developed to refer to activity within the cockpit, despite the definition provided above? Can we still call it monitoring if it is outside the cockpit? Or then does it become just good CRM? Some might say that as a skill set, it just refers to a comprehensive use of good CRM behaviours by rear-crew to keep tabs on the phase of flight, the workload management of the pilots, their SA, the automatics, the flight parameters etc.

What level of monitoring should be expected from Technical Crew?

Is the ability of a TC to monitor effectively more dependent upon the CRM of the pilot and the way information and thought processes are shared and communicated within the aircraft, or is it more a function of the experience and ability of the TC themselves?

Is your ability to monitor effectively proportionate to your knowledge of the aircraft and its systems, your knowledge of procedures and checklists, and your airmanship and experience? Take as a particular example having an understanding of the modes of automatics in modern aircraft.

Does a formalised role in the front seat of the aircraft require any additional training in the particular discipline of monitoring? If so, how should this be standardised? 

What role should technical crewmen be playing in monitoring the flight? To what extent do Technical Crew Trainers evaluate the monitoring skills of TCs in the training and checking environment? Is explicit reference made to monitoring as part of TC CRM assessment?

CRM is defined in CAP 737 as “The effective use of all available resources, equipment, procedures, supporting services and, above all, people to perform a task as efficiently and safely as possible”.

Many of the Technical Crew that I have worked with are considerably more experienced aviators than the pilots that they fly with. Are we failing to make best use of one of our key resources by not paying sufficient attention to the role they have to play in keeping us safe airborne? Is there a need to raise the profile of monitoring from a rear-crew perspective, and if so, how should it be done?

EBT & ATQP…What now for CRM?



Let’s start by unravelling that jumble of acronyms:

EBT – Evidence Based Training

ATQP – Alternative Training and Qualification Programme

CRM – Crew (originally Cockpit, sometimes Complete) Resource Management

What is Evidence Based Training?

EBT is a shift in philosophy away from traditional, prescriptive training and checking methods in which, for example, you repeat the set requirements and training exercises of an LPC/OPC over and over again every six months. It emerged from a recognition that training and checking regulation was the product of long-outdated evidence from accidents involving a fleet of first-generation jet aircraft that had little in common with the highly automated world modern aircrew operate in today.

The size and scope of data sources on how we operate aircraft have increased exponentially over the past two decades. HUMS, flight data analysis, LOSA programmes, and the products of modern occurrence reporting systems and SMS have all contributed to a wealth of data about flight operations and safety. Tapping into this data to provide information (evidence) about how we could train more effectively and efficiently is what the EBT project is all about. For more about how the EBT movement came about, see The EBT Foundation.

In-depth studies of the data indicated the need for pilots to be exposed to the unexpected in a learning environment, and to be challenged with and immersed in complex situations, rather than repetitively tested in the execution of manoeuvres.

Therefore, EBT does not aim to simply replace an outdated set of critical events with a new set, but to use the evidence from research as a vehicle for developing and assessing crew performance across a range of desirable competencies. For more detail see ICAO Manual on EBT.

It also moves the focus of the instructor cadre onto analysis of root causes in order to correct inappropriate actions, rather than simply asking a flight crew member to repeat a manoeuvre with no real understanding of why it was not flown successfully in the first instance.


What is the Alternative Training and Qualification Programme? 

Using the same rationale as EBT, ATQP allows operators to develop an alternative framework for the conduct of initial and recurrent training. This gives operators the opportunity to create a more effective and more operation-specific training and checking programme for their crews. ATQP has been summed up by the CAA as “Train the way you operate, and operate the way you train”, allowing training programmes to be targeted at areas pertinent to the operator’s type and theatre of operation. For more information see the CAA’s ATQP Industry Guidance.

What is the impact of all this on how we teach and assess CRM?

As we have seen, EBT has been behind a paradigm shift in the philosophy of how we evaluate crew performance. It advocates assessment of performance according to a set of competencies, and these competencies do not necessarily distinguish between the “non-technical” (e.g. CRM) and the “technical”.

Under EBT, any area of competence assessed as not meeting the required level of performance shall also be associated with an observable behaviour that could lead to an unacceptable reduction in safety margin. This could be technical or non-technical, or both. Under this new way of thinking, it is not really possible to separate the technical from the non-technical skills, as they are both intrinsically tied up with each other.

If there is no longer such a thing as ‘non-technical’ skills, where does that leave a CRM syllabus that specifically defines, and is mandated to cover such ‘non-technical’ topics as leadership and teamwork, communication, situational awareness, decision-making,  personality and behaviour?


Is it possible to teach and discuss CRM without separating the technical from the non-technical?

For years, training and checking forms have separated, and required separate assessment of, the technical and non-technical skills in flight. However, once considered in the light of a competency-based philosophy such as EBT, it is difficult to contest the notion that the two are so completely intertwined that separating them cannot produce a useful measure of either.

Where does that leave the CRM ground syllabus?

The key skill of a successful CRM ground trainer must be the ability to present and build on theoretical knowledge of the core topics in such a way as to draw a useful, relevant, and credible link between the material being presented and its application in the cockpit or operational line flying environment.

As the EASA CRM syllabus demonstrates, with recent additions to the core topics in the form of Resilience and of Surprise and Startle, and an increasing focus on automation and monitoring, the drive is to raise the profile of, and the levels of understanding behind, those areas which must be readily applied when under stress in the cockpit. EBT creates line and simulator training which in turn facilitates learning in these areas, thus closing the circle between the theoretical and practical elements of CRM.

Advocates of the EBT philosophy would argue that it bolsters the opportunity to put core non-technical aspects of piloting under the spotlight, through more effective training and exposure to rapidly developing, dynamic situations. Having to deal with unpredictable scenarios in the training and checking environment allows a better assessment and examination of, for example, pilot decision-making processes. Rapidly changing demands on the flight plan are more likely to provoke challenges to, and possible breakdowns of, situational awareness, and higher demands on the crew are likely to shine a spotlight on the effectiveness of teamwork and communication. Another important aspect of EBT is the notion of resilience. In aviation terms, resilience is the capability of an individual or crew to recover and “bounce back” from a challenging situation or serious threat.

Just as traditional training and checking philosophies have become outdated, an approach that churns out standard CRM mantras and learning objectives under a rigid set of syllabus topics will not keep pace with new approaches to training and learning such as EBT. CRM training philosophies need to grow with them, both on the line and in the classroom.

CRM has suffered in the past – and still does in some quarters – from a stigma and disdain owing to its being seen as too abstruse, too theoretical, and too disconnected from the commercial and other realities of the flight line. It can only overcome these prejudices by innovating, adapting, and remaining relevant to what aircrew understand that they do.

In CRM, the bottom line should always be, “Can you take the training into the aircraft?”


Human perception & the mental model


Memory and meaning

I cdnuol’t blveiee taht I cluod aulaclty uesdnatnrd waht I was rdanieg. The phaonmneal pweor of the hmuan mnid. Aocdcrnig to rscheearch at Cmadrigbe Uinervtisy, it deosn’t mttaer in waht odrer the ltteers in a wrod are, the olny iprmoatnt thnig is taht the frist and lsat ltteer be in the rghit pclae! Amzanig, huh?

1N  7H3  B3G1NN1NG 17  WA5  H4RD  BU7  N0W , oN  7H15  LIN3,  YoUR  M1ND  1S R34D1NG  17  4U7oM471C4LLY  W17HoUT  3V3N  7H1NK1NG  4BoU7  17.  B3  PRoUD!  oNLY C3R741N P3oPL3  C4N R3AD 7H1S!

The two paragraphs above demonstrate very effectively how the brain uses memory and prior experience to build understanding. An English-speaking child in the early stages of learning to read would probably struggle with this task. Yet a reasonably fluent speaker of English as a foreign language would decipher it with a little more time and effort, but nevertheless do so quite happily. This is because our perceptual system interpolates and reconstructs. The brain ‘fills in’ words by comparison with prior experience from the long-term memory store, so success depends more upon vocabulary and a familiar lexicon – even for the foreign speaker – than anything else. If we know or understand the general meaning of the target text, we will even read over some passages that do not exist at all; we fill the gaps through our knowledge.

Those of us in aviation who are familiar with the occasional challenge of piecing together a broken transmission on the radio have experienced the same process in play, this time in the aural sense. Radio messages that are incomplete or difficult to hear are often understood perfectly by the experienced pilot, where a passenger given the opportunity to listen in would be stumped. As with the text above, we look to match what we actually hear to a template of what is familiar or has been heard before. We shift meaning towards what we expect. This is where there is no substitute for experience.

Human Perception 

The principal task of human perception is to organise and interpret the sensory stimuli entering the body through the ears, eyes, nose, and tactile receptors, allowing us to perceive, orientate, and then act quickly and efficiently. This is certainly how we want the brain to act for us in the unpredictable and fast-moving world of aviation.

How does visual perception work?

Visual processing is composed of three different stages (Marr, 1982): early, intermediate, and late vision. In the early stage, basic processes such as the segregation of figure from background, border detection, and the detection of basic features (e.g. colour, orientation, motion components) occur. The intermediate stage combines this information into a temporary representation of an object. In the late stage, the temporary object representation is matched with previous object shapes stored in long-term visual memory to achieve visual object identification and recognition.

The content of memory then directly influences how a stimulus is perceived, so observers tend to perceive the world in accordance with their expectations. For example, research has demonstrated that a yellow–orange hue is more likely to be categorised as orange on a carrot than on a banana (Mitterer & de Ruiter, 2008), and that a face is perceived to be lighter if it contains prototypically White features rather than Black ones (Levin & Banaji, 2006).

We will focus here on visual perception, not only because it is the key sense feeding the brain with stimuli in the cockpit environment (about 80 per cent of our total information intake is through the eyes), but because sensory perception is often the most striking ‘proof’ of something factual, and therefore the most difficult to force yourself to overcome when what you are seeing doesn’t seem to make sense. Any pilot who has experienced a real-life case of ‘the leans’ will attest to this. When we perceive something, we interpret it and take it as “objective”, or “real”. The assumed link between perception and physical reality is particularly strong for the visual sense. Seeing is believing, right?

The Kanizsa Triangle is a famous example of how the brain draws upon its experience of familiar known images, to build its perception of an ‘object’.

Kanizsa triangle

A white triangle is perceived where none is drawn. This is called a subjective or illusory contour. Not only do we perceive two triangles, we even interpret the whole configuration as one with clear depth, with the solid white “triangle” in the foreground of another, outlined “triangle” which stands upside down.

Taking this concept from academic study into the cockpit is a small step, as the links are easily made with well-known visual illusions that can present significant hazards to the unsuspecting pilot. In the low-flying world of the helicopter, where the margins between VMC and IMC are small and transitions between the two frequent, these hazards are both heightened and more likely to occur.

The False Visual Reference

The false visual reference is a function of the human system of perception described above.

False visual reference illusions may cause the pilot to orientate the aircraft in relation to a false horizon; for example, when flying over a banked cloud layer, flying at night over featureless terrain with ground lights that are indistinguishable from a dark sky full of stars, or flying at night over featureless terrain with a clearly defined pattern of ground lights beneath a dark, starless sky.

Other well-known visual illusions caused by the brain making an erroneous comparison with a known sight-picture include: linear perspective illusions (up-sloping or down-sloping runways, and unusually wide, narrow, long, or short runways); the black-hole approach; and auto-kinesis. All of these are caused by a failure of the brain to interpret the visual stimuli in the correct manner.

The effect of a possible visual illusion was cited as a major factor in the accident of the AS365 Dauphin that crashed into the sea in Morecambe Bay on a dark night in 2006. Prior to the impact, the cockpit voice recorder captured an extremely experienced offshore helicopter crew struggling with their perception of the platform to which they were attempting to make an approach.

AS365 Dauphin G-BLUN

For the full accident report read here: G-BLUN Accident Report

Limitations of Human Perception

Apart from these inbuilt biases fed by experience, perception is limited even further by the capabilities of the human information processing system. This is best illustrated by our acoustic sense. 

A young adult human can only register and process a very narrow band of frequencies, from about 16 Hz to 20 kHz, and this band gets narrower and narrower with increasing age. Infrasonic and ultrasonic bands are simply not perceivable by us, despite being essential for other species such as elephants and bats respectively. The perception of the environment – and, consequently, the representation of the world – is therefore significantly different for these species from what it is for us.

Let’s look at a well-known example of the limitations of the human processing mechanism, and how it could affect us in the cockpit.

Blind spot

(Note: If you are struggling with this, try to fixate at a distance of approximately 40 cm and move your head slightly from right to left as you move the page towards you.)

The object disappears when it moves into the area of the retina where visual information can’t be processed due to the lack of photoreceptors.

This has obvious implications for pilot lookout and the avoidance of mid-air collisions. Mid-air collision has become an increasing area of concern for aviation authorities worldwide in the past few years.
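To put a rough scale on the problem, here is a back-of-the-envelope sketch. It assumes a blind spot roughly 6 degrees across, and that one eye’s view is obstructed (by a windscreen pillar, say) so the other eye cannot fill in the gap – the figures are illustrative, not taken from the leaflet referenced below:

```python
# A back-of-the-envelope illustration of why the blind spot matters for lookout.
# Assumes a ~6 degree blind spot and that only one eye has a clear view.
import math

BLIND_SPOT_DEG = 6.0  # assumed angular width of the blind spot

for range_m in (500, 1000, 2000):
    # Width of the region subtended by the blind spot at this range
    hidden_width_m = 2 * range_m * math.tan(math.radians(BLIND_SPOT_DEG / 2))
    print(f"At {range_m} m range, the blind spot hides a region ~{hidden_width_m:.0f} m wide")
```

At 1,000 m the hidden region works out at over 100 m wide – more than enough to conceal a light aircraft sitting on a constant bearing if the eye stays fixated.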

A study of over two hundred reports of mid-air collisions in the US and Canada showed that they can occur in all phases of flight and at all altitudes. However, nearly all mid-air collisions occur in daylight and in excellent visual meteorological conditions, mostly at the lower altitudes where most VFR flying is carried out. Because of the concentration of aircraft close to aerodromes, most collisions occurred near aerodromes, when one or both aircraft were descending or climbing, and often within the circuit pattern.

All of this was the case in the mid-air collision in November 2017 near Wycombe Air Park between a Cessna 152 and a Guimbal Cabri G2 helicopter.

Despite the fact that in recent years much of the focus on combatting the hazard of mid-air collision has been on technological advances such as Mode S, TAS, and TCAS, the need to continue to educate pilots on effective lookout remains.

In 2013, the UK CAA published a SafetySense information leaflet (Read more: Safety Sense Leaflet – Collision Avoidance) which looks in more depth at the limitations of human vision with respect to effective lookout.


Texting & Flying: Pilot distraction & the myth of multi-tasking.

On August 26, 2011, at about 6:41 pm CDT, a Eurocopter AS350 B2 helicopter operated by Air Methods on an EMS mission crashed a mile from an airport in Mosby, Missouri, following a loss of engine power caused by fuel exhaustion. The pilot, flight nurse, flight paramedic, and patient were killed, and the helicopter was substantially damaged.


An examination of cell phone records showed that the pilot had made and received multiple personal calls and text messages throughout the afternoon while the helicopter was being inspected and prepared for flight, during the flight to the first hospital, while he was on the helipad at the hospital making mission-critical decisions about continuing or delaying the flight due to the fuel situation, and during the accident flight.

While there was no evidence that the pilot was using his cell phone when the flameout occurred, the NTSB said that the texting and calls, including those that occurred before and between flights, were a source of distraction that likely contributed to errors and poor decision-making.

Read NTSB Report here.

“This investigation highlighted what is a growing concern across transportation: distraction and the myth of multi-tasking.”

– National Transportation Safety Board Chairman, Deborah Hersman


Distraction: “A thing that prevents someone from concentrating on something else.”

Research shows that when multitasking, people perform tasks more slowly and make more mistakes.

One theory of divided attention, conceived by Kahneman, holds that there is a single pool of attentional resources that is divided among multiple tasks.
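To make the single-pool idea concrete, here is a minimal sketch of the model in code. The capacity and demand figures are purely illustrative assumptions, not empirical values:

```python
# An illustrative sketch of the 'single pool' model of divided attention.
# All numbers are assumptions for demonstration, not empirical values.

TOTAL_ATTENTION = 1.0
DEMAND = 0.8  # assume a demanding task needs ~80% of the pool for full performance

def performance(attention_share: float) -> float:
    """Performance rises with allocated attention, capped at 100%."""
    return min(1.0, attention_share / DEMAND)

for tasks in (["fly the aircraft"], ["fly the aircraft", "compose a text message"]):
    share = TOTAL_ATTENTION / len(tasks)  # the single pool is divided equally between tasks
    print(", ".join(f"{t}: {performance(share):.0%}" for t in tasks))
```

Flown alone, the flying task gets all the attention it demands; add a second task and both drop to around 62%, with neither receiving what it actually needs.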

You can’t give more than one thing at a time your FULL attention. If you’re engaged in a safety critical action, don’t get distracted.