H2F BITESIZE #46

I bring you a weekly bite-sized chunk of the science behind helicopter human factors and CRM in practice, simplifying the complex and distilling a helicopter-related study into a summary of less than 500 words.

TITLE:

Understanding pilots’ perceptions of AI-mediated mental health support in aviation: a socio-technical framework.

WHAT?

A study examining how commercial pilots perceive AI-supported mental health tools such as chatbots, self-assessment systems, and digital wellbeing assistants. It focuses on whether pilots would trust and use these tools given the realities of aviation medical regulation, licensing, and career risk.

WHERE?

University of Washington, surveying airline pilots from Asia, Europe, and North America.

WHEN?

Published 2026.

WHY?

Pilot mental health is now recognised as both a wellbeing and safety issue. Previous studies show elevated levels of stress, fatigue, anxiety, and under-reporting, often linked to fears over certification or employability. AI tools are increasingly proposed as a discreet means of support, but little research to date has asked pilots how acceptable such tools would be in practice.

HOW?

Researchers conducted semi-structured interviews with 13 airline transport pilots. Discussions explored mental health experiences, barriers to seeking support, and reactions to hypothetical AI tools in low- and high-stress situations.

The interview data were analysed thematically, producing a socio-technical framework showing how pilots’ decisions are shaped by regulatory systems, employer structures, operational pressures, and personal identity.   

FINDINGS:

Pilots did not reject AI itself, and many were already using tools such as ChatGPT for routine tasks.

However, acceptance of these tools depended on trust and control:

  • AI was viewed positively as a private tool for reflection, self-checking, fatigue awareness, and organising thoughts.
  • Pilots were cautious or resistant to AI embedded in company, medical, or regulatory systems.
  • Many feared stored data could later affect careers or licences.
  • Informal coping methods and peer support were often preferred to documented clinical pathways.

The study concludes that reluctance is driven not only by stigma, but by a lack of control over institutional visibility and perceived professional risk.

SO WHAT?

This paper moves the discussion on pilot mental health toward the practical question of what kind of support pilots will actually use. It sits at the intersection of two growing research areas:

  • Pilot mental health, which has increasingly shown that stress and psychological strain are common but underreported.
  • AI in aviation, where most work focuses on operational uses such as decision support, automation, or training rather than wellbeing.

The study suggests AI could fill an important gap by providing low-threshold, confidential early support before issues escalate into performance, fatigue, or fitness-to-fly concerns. That could make it highly relevant to safety management and the future of preventive human factors.

However, the researchers also warn that AI wellbeing tools could be perceived as surveillance, screening, or another pathway into formal records, leading pilots to avoid them entirely. Thus design and governance matter as much as technical capability.

For airlines and regulators, the implication is that acceptable tools are likely to resemble trusted peer support, coaching, or self-management systems, and not compliance mechanisms. For CRM and human factors this reinforces the idea that psychological safety, trust, and reporting culture remain central to pilot wellbeing.

REFERENCE: 

Chawla, D. K., Zheng, Y., Larson, S., Lat, H., Key, S., & Perkins, K. (2026). Understanding pilots’ perceptions of AI-mediated mental health support in aviation: A socio-technical framework. In Extended Abstracts of the 2026 CHI Conference on Human Factors in Computing Systems (CHI EA ’26) (pp. 1–5). ACM. https://doi.org/10.1145/3772363.3798996 
