An Explainable Artificial Intelligence Framework to Enhance Transparency and Trust in High-Stakes Decision Systems
Keywords:
Explainable AI, Trust, Transparency, High-Stakes Decision Systems, Model Interpretability, Structural Equation Modeling

Abstract
High-stakes decision systems, such as those in healthcare, finance, criminal justice, and autonomous systems, increasingly rely on artificial intelligence (AI) to provide predictive and prescriptive insights. Despite AI’s capability to optimize decision-making, the “black-box” nature of many AI models reduces transparency and undermines stakeholder trust. Explainable AI (XAI) has emerged as a solution to enhance interpretability, accountability, and confidence in AI-driven decisions. This study develops and empirically validates a conceptual framework linking XAI methods, model interpretability, user understanding, and trust in high-stakes decision systems. The framework integrates feature attribution, counterfactual explanations, model transparency, and human-in-the-loop mechanisms to evaluate their impact on user trust and decision acceptance. A quantitative research design employing Partial Least Squares Structural Equation Modeling (PLS-SEM) was used to test relationships among constructs. Data were collected from 397 professionals and decision-makers across healthcare, finance, and autonomous systems who regularly interact with AI-driven tools. Measurement model evaluation confirmed reliability and convergent validity (composite reliability > 0.91; average variance extracted > 0.62). Structural model analysis indicated that feature attribution (β = 0.53, p < 0.001), counterfactual explanations (β = 0.48, p < 0.001), and model transparency (β = 0.44, p < 0.001) positively influence model interpretability. Model interpretability mediates the relationship between XAI methods and user trust (β = 0.47, p < 0.001). The framework explains 63% of the variance in model interpretability (R² = 0.63) and 61% of the variance in user trust (R² = 0.61). The findings highlight the importance of integrating XAI techniques to enhance transparency and trust in AI-enabled decision-making processes.
The study provides a validated framework to guide practitioners, policymakers, and system designers in deploying trustworthy, transparent AI solutions in high-stakes environments.

