Supporting high-uncertainty decisions through AI and logic-style explanations

FM Cau, H Hauptmann, LD Spano… - Proceedings of the 28th …, 2023 - dl.acm.org
Proceedings of the 28th International Conference on Intelligent User Interfaces, 2023 - dl.acm.org
A common criterion for Explainable AI (XAI) is to support users in establishing appropriate trust in the AI: rejecting advice when it is incorrect, and accepting advice when it is correct. Previous findings suggest that explanations can cause over-reliance on AI (overly accepting advice). Explanations that evoke appropriate trust are even more challenging for decision-making tasks that are difficult for both humans and AI. For this reason, we study decision-making by non-experts in the high-uncertainty domain of stock trading. We compare the effectiveness of three explanation styles (influenced by inductive, abductive, and deductive reasoning) and the role of AI confidence in terms of a) the users' reliance on the XAI interface elements (charts with indicators, AI prediction, explanation), b) the correctness of the decision (task performance), and c) the agreement with the AI's prediction. In contrast to previous work, we look at interactions between different aspects of decision-making, including AI correctness, and the combined effects of AI confidence and explanation styles. Our results show that specific explanation styles (abductive and deductive) improve users' task performance under high AI confidence compared to inductive explanations. In other words, these explanation styles elicited correct decisions (both positive and negative) when the system was certain. In that condition, the agreement between the users' decisions and the AI prediction confirms this finding, showing a significant increase in agreement when the AI is correct. This suggests that both explanation styles are suitable for evoking appropriate trust in a confident AI.
Our findings further indicate a need to consider AI confidence as a criterion for including or excluding explanations from AI interfaces. In addition, this paper highlights the importance of carefully selecting an explanation style according to the characteristics of the task and data.