Adaptive Trust Calibration in Human-AI Decision-Making: A Framework for Dynamic Confidence Alignment
Keywords:
Human-AI Collaboration, Trust Calibration, Adaptive Interfaces, Decision Support Systems, Explainable AI

Abstract
Human-AI collaboration requires careful alignment between system behavior and user trust, but current systems often lack dynamic adaptation to shifting user confidence levels. This paper presents an Adaptive Trust Calibration (ATC) framework that adjusts AI transparency and explanations in real time to support decision-making in domains like healthcare and finance. The proposed approach uses a feedback mechanism where AI systems monitor implicit trust signals (e.g., reliance patterns, interaction history) and adapt their explanatory outputs accordingly. We formulate trust calibration as an optimization problem that considers both task performance and cognitive load. Experimental studies with domain experts show that ATC leads to measurable improvements in task accuracy (18.7% increase) and reduced cognitive strain (22.3% decrease on NASA-TLX scales) compared to static explanation systems. The framework combines computational trust modeling with practical system design, suggesting pathways for developing more responsive collaborative AI systems. These findings contribute to ongoing research on trust dynamics in human-AI teams while identifying practical considerations for system implementation.
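The feedback mechanism described above can be sketched in miniature: trust is estimated from implicit reliance signals (how often the user accepts AI advice), and the explanation detail level is chosen by maximizing a utility that trades expected accuracy benefit against cognitive load. All class names, weights, and the linear benefit/load models below are illustrative assumptions, not the paper's actual formulation.

```python
from dataclasses import dataclass, field

@dataclass
class TrustCalibrator:
    """Hypothetical sketch of an ATC-style feedback loop.

    Trust is proxied by the recent rate at which the user relies on
    AI advice; the explanation detail level (0..2) is picked to
    maximize alpha * accuracy_benefit - beta * cognitive_load.
    """
    alpha: float = 1.0   # weight on expected accuracy benefit (assumed)
    beta: float = 0.5    # weight on cognitive-load penalty (assumed)
    history: list = field(default_factory=list)  # 1 = accepted AI advice

    def record_reliance(self, accepted: bool) -> None:
        # Implicit trust signal: did the user follow the AI's suggestion?
        self.history.append(1 if accepted else 0)

    def estimated_trust(self) -> float:
        # Acceptance rate over the last 10 interactions as a crude proxy.
        recent = self.history[-10:]
        return sum(recent) / len(recent) if recent else 0.5

    def explanation_level(self) -> int:
        """Choose detail level 0 (terse) .. 2 (full rationale)."""
        trust = self.estimated_trust()

        def utility(level: int) -> float:
            # Assumption: detail helps more when trust is low, but
            # always adds attentional cost.
            benefit = (1 - trust) * level / 2
            load = level / 2
            return self.alpha * benefit - self.beta * load

        return max(range(3), key=utility)
```

Under these toy assumptions, a user who consistently rejects AI advice receives the most detailed explanations, while a highly reliant user receives terse output, which is one plausible reading of adapting "explanatory outputs" to implicit trust signals.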
License
Copyright (c) 2025 Future - Artificial Intelligence and Social Systems: Innovations and Impacts (AISSII)

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

