What one well-conceived study tells us about how experience interacts with risk-taking, and why it doesn't always align with competence.
The enduring hold of experience
"Experience is the teacher of all things" is a maxim attributed to Julius Caesar, so it goes back a few years further than the dawn of aviation. We love the idea that experience begets wisdom because there's a powerful, easy-to-see truth in it. And within the edifices and structures built up around aviation, it's a deeply held belief that is hard to deny. So simple and beguiling is the logic linking experience to competence that hours in a logbook have long been used as a prerequisite for recruitment, promotion, and regulatory compliance. The "many-thousand-hour pilot" is usually assumed to be safer, wiser, and more capable than the newcomer, presumably because experience is seen as the key teacher of sound judgement in the cockpit.
In this article I take a deep dive into a fascinating study by Drinkwater and Molesworth (2010) from the University of New South Wales in Australia. They took this assumption and put it to the test, posing a simple question: do traditionally used indicators of pilot competence, such as age, total flight hours, and recent flying experience, actually predict sound risk management behaviour? In other words, what really predicts a pilot's risk-taking? Their findings warrant a closer look, and they should make every one of us, from line pilots to instructors to flight operations managers, pause and think carefully about how we determine competence.
In a cleverly designed experiment meant to mirror a realistic but challenging aeronautical decision, fifty-six pilots were introduced to a simulated flight scenario. Participants were flying traffic circuits when they were asked to assist in the search for a missing skydiver. They had only 18 minutes of fuel remaining plus a mandated 5-minute reserve. The pilots were told it wasn't an emergency, just a routine task that could be accepted or declined at their discretion. Would they help out?
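To get a feel for how tight that margin actually is, here is a minimal back-of-envelope sketch in Python. The 18-minute endurance and 5-minute reserve come from the scenario; the return transit time is a hypothetical figure added purely for illustration, not a parameter from the study.

```python
# Back-of-envelope fuel margin for the scenario described above.
# The 18-minute endurance and 5-minute reserve are from the study;
# the return transit time is a hypothetical figure for illustration.

fuel_beyond_reserve_min = 18   # fuel remaining above the mandated reserve
reserve_min = 5                # mandated reserve, not to be touched
return_transit_min = 4         # assumed time to rejoin the circuit and land (hypothetical)

search_budget_min = fuel_beyond_reserve_min - return_transit_min
print(f"Search time available before eating into reserve: {search_budget_min} min")
# -> Search time available before eating into reserve: 14 min
```

Even under these generous assumptions the window is small, which is precisely what makes the "routine" request a genuine risk decision.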
At the point of this first decision, the experiment split the group in two: on the one hand, the "No-Go" pilots, who turned down the task, recognising the risk; on the other, the "Go" pilots, who chose to continue the mission despite the low-fuel condition.
The researchers recorded each pilot's age, total flight hours, and hours flown in the past 90 days (recency). Before the scenario, all participants also completed well-validated psychological measures, including risk perception scales and the Aviation Safety Attitude Scale (ASAS), a tool developed to measure pilots' attitudes toward flight safety and risk-taking, and used to explore how personal attitudes influence decision-making and behaviour in the cockpit.
Of the fifty-six pilots, thirty-six decided to "go". That's roughly two-thirds who accepted the additional risk of the non-urgent task. The interesting part of the research is what separated this group from the cautious one-third who said no. It wasn't experience, age, or recency of flight time. Even self-reported attitudes toward safety were not predictive.
The only meaningful predictor that differentiated the pilots' behaviour was how they individually perceived risk. In the risk perception assessment, each pilot was asked to rate what the researchers called "Immediate High Risk" situations, the kind of scenario where time pressure and danger converge. The "No-Go" pilots consistently rated these situations as significantly more hazardous than those who accepted the mission.
In other words, it was perception, not experience, that separated the safety-firsts from the risk-takers. Traditional measures of competence (total flight hours, recent flying, and pilot age) showed no relationship with whether pilots took the safer decision. This finding may not seem groundbreaking, but it is clearly at odds with how the industry often measures flying capability.
After the initial Go/No-Go decision, only the pilots who chose to continue the mission proceeded with the simulated flight task. These participants conducted a short search for the missing parachutist under low-fuel conditions, during which their behavioural responses were objectively recorded. Two key performance measures were captured: total flight time (how long they continued searching before deciding to return) and minimum altitude reached (how close they flew to the ground during the search). These measures served as indicators of risk-taking behaviour: longer flight times and lower altitudes reflected a greater propensity to accept potential hazards. The researchers then correlated these behaviours with each pilot's attitude scores, risk perception ratings, age, total flight experience, and recent flying activity, allowing them to examine which psychological or demographic factors best predicted in-flight risk behaviour beyond the initial decision to continue.
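Purely to illustrate the shape of that analysis (this is not the authors' code, and the data and column names below are invented), a sketch in Python using pandas and SciPy might look like the following. The logic mirrors the two-step design described above: a between-groups comparison for the Go/No-Go choice, then correlations for the in-flight measures among those who flew.

```python
# Illustrative sketch only: invented data, not the study's dataset or code.
import pandas as pd
from scipy import stats

# Hypothetical records, one row per pilot.
df = pd.DataFrame({
    "went":            [1, 1, 0, 1, 0, 1, 0, 1],    # 1 = "Go", 0 = "No-Go"
    "total_hours":     [250, 1200, 300, 3400, 150, 800, 2200, 90],
    "age":             [24, 41, 29, 55, 22, 37, 48, 20],
    "risk_perception": [3.1, 2.8, 4.6, 2.5, 4.9, 3.0, 4.4, 2.7],  # "Immediate High Risk" rating
    "search_minutes":  [9, 11, None, 8, None, 12, None, 10],      # only "Go" pilots flew
})

# Step 1: did any candidate predictor separate the Go and No-Go groups?
for col in ["total_hours", "age", "risk_perception"]:
    go, no_go = df.loc[df.went == 1, col], df.loc[df.went == 0, col]
    t, p = stats.ttest_ind(go, no_go)
    print(f"{col}: t = {t:.2f}, p = {p:.3f}")

# Step 2: among "Go" pilots, did predictors track in-flight risk behaviour?
flew = df.dropna(subset=["search_minutes"])
r, p = stats.pearsonr(flew["risk_perception"], flew["search_minutes"])
print(f"risk_perception vs search time: r = {r:.2f}, p = {p:.3f}")
```

The study itself reports the appropriate statistics in full; the sketch is only meant to make the structure of the question concrete.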
Judgement is a flowing river
Based on the decision-making of those who did fly, the study found a curious relationship between self-confidence and risk behaviour. In previous research, overconfidence has been linked to poor decision-making: downplaying hazards, pressing on into marginal weather, or descending below safe altitudes, with high-time pilots believing their experience or skill will compensate. In this case, however, it was the more confident pilots who conducted a shorter search, limiting their exposure to fuel exhaustion.
Paradoxically, the authors noted that it was the older pilots who chose to descend significantly lower during the simulated search. The more seasoned aviators, it seems, perceived the lower altitudes as less of a risk than the others did. One explanation could be that experienced pilots were better able to judge 'big picture' decisions such as task feasibility: they were quicker to recognise when the search should be abandoned due to eroding margins, leading to an earlier strategic decision to turn back before fuel became critical. Yet that same experience gave them the confidence to fly the search lower and still feel comfortable; in other words, they had a reduced sense of perceived immediate danger. This reflects a broader, well-known human-factors principle: with experience often comes risk normalisation. Familiarity breeds comfort, and what once might have seemed dangerous begins to feel routine. The experienced pilots didn't necessarily have a higher risk appetite; they simply perceived a lower level of risk, operating within what felt like an acceptable envelope based on previous safe outcomes.
So it is possible that experience leads to a trade-off: management of strategic risk (in this case, fuel and time exposure) is enhanced, while sensitivity to tactical risk (altitude and margin for error) is degraded. And maybe this gives a snapshot of how judgement is not monolithic. It's not a solid, permanent, immovable product of experience that you either have or don't have. Judgement is fluid and changeable. Like a river, it can't be defined or described per se; you can only test it at a given point in time and say that, when and where you did so, the water was cold and fast-flowing, deep and turbulent. At another time and place on the river it will not have the same characteristics, nor will it ever be the same twice. Like dipping a toe in a river, the experience of decision-making is situational.

Seeing risk for what it is — and isn’t
What do the findings of this experiment mean in practice? First, they reinforce one of the key messages behind the development of competency-based training: traditionally used markers such as age, hours, currency, and recency fail to predict who will make safer decisions. Competence isn't about how long a pilot has flown, and judgement does not automatically mature with age or experience. Genuine competency in operational decision-making demands continuous calibration of an individual's own risk environment and conscious management of it.
A good pilot doesn’t just manage the aircraft. They manage their own perception of danger.
Human factors research consistently reminds us that risk perception is not objective. It's a personal construct, shaped by factors such as mood, fatigue, culture, and even personality. The authors noted that "risky decision-making is dependent on the activity or task… as opposed to any global factor", by which they mean that a pilot who is conservative about weather might still take unjustified risks with fuel, while another might be strict on procedures but casual about altitude. Risk awareness and risk perception don't necessarily transfer across contexts, and they certainly don't transfer across individuals. This is why it is good practice to garner as many perspectives as possible across your team when dynamically assessing flight risks. Through open dialogue, standardised procedures, and shared mental models, effective CRM helps calibrate collective risk perception.
Experience without reflection is just repetition
The evidence from "Pilot see, pilot do" once again shows us that experience alone does not guarantee safety. Without deliberate reflection and calibration, experience can blind us to risk rather than sharpen our awareness of it. But if experience alone isn't the key, what should training focus on?
Well, we shouldn't throw the baby out with the bathwater! In aviation, as in life, experience remains a fine instructor, perhaps the finest. Experience supplies the raw material, but reflection and learning are what transform it into judgement. The authors suggest that 'recognition and perception of immediate high risks' are the most reliable behavioural markers of good decision-making. Training must therefore go beyond drilling procedures to teach pilots how to think about risk. Following a checklist invites compliance and guarantees standardised responses, but developing situational thinking is what allows us to ask questions such as "How do I recognise when this situation is deteriorating?", "When should I deviate from my plan?", and "How has the risk landscape changed?" Minimising risk to as low as reasonably practicable isn't about avoiding flying altogether; it is about having the wisdom and the humility to stop when the margins disappear.
Reference:
Drinkwater, J. L., & Molesworth, B. R. C. (2010). Pilot see, pilot do: Examining the predictors of pilots’ risk management behaviour. Safety Science, 48(10), 1445–1451. https://doi.org/10.1016/j.ssci.2010.07.001

