
As we charge headlong into the era of artificial intelligence, the expanding use of – and reliance on – automation in both aviation and everyday life feels almost inevitable. In flying we work every day with all manner of complex automated systems, and are constantly striving to keep up with the pace of change. Developing and maintaining knowledge of these systems is a fundamental skillset for the modern pilot. Alongside technical learning about onboard systems and their practical use, CRM training aims to build familiarity with the ironies and challenges of using automation and how we can manage them through non-technical skills. But little time is spent in either type of training focusing explicitly on the critical role trust plays in our interactions with automation. While automation-related incidents are often framed as failures of systems knowledge, this article argues that they could be more accurately understood as failures of trust calibration. In the following paragraphs we’ll explore what we mean by trust in this context, and how it shapes our reliance on automated systems and their use, misuse, or indeed disuse.
The nature of human–automation trust.
Trust has been thought about and studied for centuries in many different contexts, from religion and philosophy to sociology, economics, and political science. More recently it has reached the fields of psychology and human factors too. First and foremost, trust is understood to be a fundamentally human concept, so you’d be forgiven for asking what it might have to do with high-tech inanimate objects such as software, control units, and actuators. Yet although trust in technology differs from trust between people, the fundamentals of how trust develops and what it means remain the same: we gain, maintain, and lose trust in automated systems in much the same way we do in our relationships with other people.
Research has shown that in both cases the formation of trust involves both thinking and feeling, but it is the feeling, not the thinking, that drives trusting behaviour. This is known in academic speak as ‘affective processing’: the idea that our emotions are a powerful determinant of how trust in both humans and technological agents develops. Furthermore, and of particular significance to a pilot’s trust relationship with systems and automation, research has shown that when cognitive resources are limited we are even more likely to fall back on these emotionally driven thought processes, which usually occur rapidly and outside our conscious awareness.
Just as when we place our trust in another person, trust in technology is an attitude or belief that a technological agent will help us to achieve a specific goal. A ‘technological agent’ in this context could mean an individual aircraft system such as TCAS, GPWS, or NVIS, or an integrated system of systems such as an automatic flight control system. Our relationship with any given technological agent is characterised by uncertainty and vulnerability. Uncertainty is present in the sense that we cannot be sure the automation will carry out its intended function in the way we expect it to; vulnerability, because there is something at stake if it fails or deviates from that expectation, be that a process failure or, in the case of an aircraft, ultimately safety itself.
As with interpersonal trust, studies have shown that we apply socially learned rules, such as politeness, in our dealings with machines. Other human cultural dynamics such as age, gender, and social norms also play important roles in how we think about, accept, or reject automation. In fact, neurological research suggests that the same neural pathways fire when we evaluate our trust in technological systems as when we evaluate the trustworthiness of another person. (This may be because our trust in such systems partly represents trust in their designers, just one step further removed.)
Whereas interpersonal trust is based mostly on the competence, integrity, or benevolence of the person in whom trust is placed, research suggests that human–automation trust is anchored around three core dimensions. These are the three ‘Ps’: the performance, process, and purpose of the system in question. Performance-based trust is about how well we believe a system executes a task. Process-based trust depends upon our level of understanding of how a system works. Purpose-based trust depends upon the system delivering on our expectation of the designer’s intended use for it.
Why does trust in automation vary?
If trust is neither fixed nor purely rational, why can it vary so markedly between pilots, systems, and situations? Academics have developed a three-layered model to explain the trust relationship between humans and automation. It tells us that although trust depends upon a multitude of variables, these can be grouped into three categories:
- Learned trust.
- Situational trust.
- Dispositional trust.
1. Learned trust.
Learned trust is driven primarily by experience. Past interactions with an automated system strongly influence how we assess its trustworthiness, and we routinely extrapolate from our experience with familiar systems when encountering new ones. Our understanding of a system’s purpose and process shapes both our expectations and how we choose to use it.
Learned trust is closely linked to perceived system performance and tends to strengthen over time when reliability is demonstrated. As a system matures, trust becomes grounded in its predictability and dependability rather than initial assumptions. However, evidence suggests that people often exhibit a positivity bias toward novel automation, placing early faith in new systems that can quickly erode following errors or unexpected behaviour.
System design also plays a key role in shaping learned trust. Factors such as usability, transparency, and communication style influence how performance is perceived, with trust improving when interaction with automation is experienced as clear, patient, and non-interruptive. While learned trust is often the most visible aspect of human–automation interaction, it does not operate in isolation.
2. Situational trust.
Situational trust reflects the influence of context on our relationship with automation and varies dynamically with circumstances. These influences can be broadly divided into internal factors related to the human operator and external factors associated with the environment, task, and system.
Internal factors include transient states such as stress, fatigue, motivation, and workload, all of which can alter trust even within a single flight. Experience level also plays a role: novice operators often place greater trust in automation than experts with deeper system knowledge. When self-confidence is low and trust is high, reliance on automation tends to increase.
External factors include the operating environment, perceived risk, and the nature of the task being deferred to the automation. Higher-risk contexts generally demand greater trust, yet paradoxically may reduce it if uncertainty increases. Task complexity and cognitive workload further shape reliance, with automation used more heavily under high workload conditions, provided its performance aligns with expectations.
Situational trust is also influenced by team and organisational culture. Norms, instructor attitudes, SOPs, and shared responsibility for monitoring automation all shape how systems are perceived and used. At the team level, group dynamics such as polarisation, groupthink, and social loafing can further influence trust and reliance, reinforcing that trust in automation is not purely an individual phenomenon.
3. Dispositional trust.
Dispositional trust refers to an individual’s general tendency to trust or mistrust automation, independent of a specific system or situation. This relatively stable trait influences how readily a pilot delegates tasks to automation, how tolerant they are of system uncertainty, and how quickly trust is withdrawn following an error. Dispositional trust is shaped by a combination of personality, cultural background, age, and prior exposure to technology, meaning that two pilots presented with the same system and the same evidence of its performance may nonetheless respond very differently. Importantly, because dispositional trust operates largely outside conscious awareness, it can bias automation use in subtle ways, predisposing some individuals toward over-reliance and others toward chronic scepticism. Recognising dispositional trust as a factor in human-automation interaction therefore highlights the value of self-awareness and reflective practice within CRM training, allowing pilots to better understand not only the system in front of them, but also their own default orientation toward trusting it.
Why trust, not just knowledge, matters.
Incidents and accidents involving automation are often attributed to familiar themes such as automation dependency, confusion, or surprise, and are typically framed as failures of technical understanding. While system knowledge and experience are undoubtedly fundamental, they are not sufficient on their own to explain how pilots choose to rely on, intervene in, or disengage from automation. It is trust – shaped by experience, context, and individual disposition – that ultimately governs human interaction with automated systems. Recognising trust as a central determinant of automation use reframes many operational challenges as problems of trust calibration rather than knowledge alone. By developing greater sensitivity to situational trust dynamics, and an awareness of our own predispositions toward automation, pilots can better understand the subtle factors influencing their decision-making. One thing is for sure: the ability to understand our relationship with automation, and therefore make more thoughtful decisions about how best to use it, is ever more central to the role of the modern pilot and will continue to be so long after the last human leaves the cockpit.
Note:
This article is loosely based on the comprehensive literature review on trust in automation by Hoff, K. A., & Bashir, M. (2015). Trust in automation: Integrating empirical evidence on factors that influence trust. Human Factors, 57(3), 407–434.
