Knowing (Not) to Know: Explainable Artificial Intelligence and Human Metacognition
Abstract
Many organizations seek to combine human expertise with explainable artificial intelligence (XAI), but they often overlook a core requirement for effective collaboration: humans must understand their own abilities. This self-understanding is referred to as metacognition, which captures how well individuals monitor and regulate their own decision making. In two experiments with real estate and lending professionals, we find that XAI improves human metacognition by reducing overconfidence. As a result, experts can better delegate decisions to an artificial intelligence (AI) and interact with it more effectively, improving collaborative performance. These effects occur primarily when XAI highlights differences between human reasoning and AI logic. Our findings demonstrate that metacognition is a key mechanism through which XAI affects decision outcomes and offer guidance for organizations deploying (X)AI under growing transparency and accountability mandates, such as those in the European Union Artificial Intelligence Act.
