Little time is spent discussing the critical role that trust plays in our interactions with automation. While automation-related incidents are often framed as failures of systems knowledge, this article argues that they are more accurately understood as failures of trust calibration. We explore what trust means in this context and how it shapes our reliance on automated systems, determining their use, misuse, or, indeed, disuse.
