Adaptive Trust Calibration in Human-AI Decision-Making: A Framework for Dynamic Confidence Alignment

Authors

  • Jasmine Washington, PhD Candidate, Human-Centered Computing Division, Georgia Institute of Technology, Atlanta, USA
  • Marco Bianchi, PhD Candidate, Department of Computer Science, ETH Zürich, Zürich, Switzerland
  • Yara Al-Mansoori, PhD Candidate, Idiap Research Institute, École Polytechnique Fédérale de Lausanne (EPFL), Martigny, Switzerland

Keywords

Human-AI Collaboration, Trust Calibration, Adaptive Interfaces, Decision Support Systems, Explainable AI

Abstract

Human-AI collaboration requires careful alignment between system behavior and user trust, yet current systems rarely adapt to shifts in user confidence. This paper presents an Adaptive Trust Calibration (ATC) framework that adjusts AI transparency and explanations in real time to support decision-making in domains such as healthcare and finance. The approach uses a feedback mechanism in which the AI system monitors implicit trust signals (e.g., reliance patterns, interaction history) and adapts its explanatory outputs accordingly. We formulate trust calibration as an optimization problem that balances task performance against cognitive load. Experimental studies with domain experts show that ATC yields measurable improvements in task accuracy (an 18.7% increase) and reduced cognitive strain (a 22.3% decrease in NASA-TLX workload scores) compared with static explanation systems. The framework combines computational trust modeling with practical system design, suggesting pathways toward more responsive collaborative AI systems. These findings contribute to ongoing research on trust dynamics in human-AI teams while identifying practical considerations for system implementation.
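
To make the feedback mechanism in the abstract concrete, the minimal sketch below shows one plausible reading of the ATC loop: observe implicit trust signals, estimate the gap between user reliance and system accuracy, and choose an explanation depth that trades calibration benefit against cognitive load. All names here (TrustSignals, the explanation levels, the weight alpha) and the benefit/load numbers are illustrative assumptions for this sketch, not the authors' implementation.

```python
# Hypothetical sketch of an adaptive trust calibration loop.
# Signal names, explanation levels, and all weights are assumptions,
# not the paper's actual formulation.
from dataclasses import dataclass


@dataclass
class TrustSignals:
    """Implicit trust signals observed during interaction."""
    reliance_rate: float      # fraction of AI suggestions the user accepted
    expected_accuracy: float  # system's own confidence in its suggestions


def calibration_gap(s: TrustSignals) -> float:
    """Positive when the user over-relies relative to system accuracy,
    negative when the user under-relies on a reliable system."""
    return s.reliance_rate - s.expected_accuracy


def choose_explanation_level(s: TrustSignals, alpha: float = 0.3) -> str:
    """Pick an explanation depth that trades off calibration benefit
    against the cognitive load of richer explanations (weighted by alpha)."""
    gap = abs(calibration_gap(s))
    # Richer explanations help close a large calibration gap but impose
    # more cognitive effort; (benefit, load) pairs are illustrative.
    levels = {
        "none":     (0.0, 0.0),
        "summary":  (0.5, 0.3),
        "detailed": (1.0, 0.8),
    }
    utility = {level: gap * benefit - alpha * load
               for level, (benefit, load) in levels.items()}
    return max(utility, key=utility.get)


if __name__ == "__main__":
    # Over-reliance case: user accepts 95% of suggestions, but the
    # system expects only 55% of them to be correct.
    signals = TrustSignals(reliance_rate=0.95, expected_accuracy=0.55)
    print(choose_explanation_level(signals))  # -> "detailed"
```

Under these assumed weights, a large over-reliance gap selects the richest explanation, while a well-calibrated user receives no extra explanation, mirroring the performance-versus-cognitive-load trade-off the abstract describes.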

Published

2025-04-28

Section

Articles