DOI: 10.1145/3301275.3302308

I can do better than your AI: expertise and explanations

Published: 17 March 2019
    Abstract

    Intelligent assistants, such as navigation, recommender, and expert systems, are most helpful in situations where users lack domain knowledge. Despite this, recent research in cognitive psychology has revealed that lower-skilled individuals may maintain a sense of illusory superiority, which might suggest that users with the highest need for advice may be the least likely to defer judgment. Explanation interfaces - a method for persuading users to take a system's advice - are thought by many to be the solution for instilling trust, but do their effects hold for self-assured users? To address this knowledge gap, we conducted a quantitative study (N=529) wherein participants played a binary decision-making game with help from an intelligent assistant. Participants were profiled in terms of both actual (measured) expertise and reported familiarity with the task concept. The presence of explanations, level of automation, and number of errors made by the intelligent assistant were manipulated while observing changes in user acceptance of advice. An analysis of cognitive metrics led to three findings for research in intelligent assistants: 1) higher reported familiarity with the task simultaneously predicted more reported trust but less adherence, 2) explanations only swayed people who reported very low task familiarity, and 3) showing explanations to people who reported more task familiarity led to automation bias.
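
    The abstract's second and third findings describe an interaction: the effect of showing an explanation on advice adherence depends on self-reported task familiarity. As a minimal, hedged sketch (not the authors' analysis or code), the snippet below shows how such an interaction could be examined with a logistic regression on synthetic data; the variable names, effect sizes, and data-generating process are all hypothetical.

        # Minimal sketch, assuming synthetic data; illustrates the kind of
        # familiarity x explanation interaction on advice adherence described
        # in the abstract. Not the authors' code or analysis pipeline.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        n = 529  # same sample size as the study; the data here are simulated

        familiarity = rng.integers(1, 8, size=n)  # self-reported familiarity, 1-7 (hypothetical scale)
        explanation = rng.integers(0, 2, size=n)  # 0 = no explanation shown, 1 = explanation shown

        # Hypothetical effect pattern loosely mirroring the reported findings:
        # explanations help mainly at low familiarity, and higher familiarity
        # lowers adherence overall.
        logit = 0.5 - 0.3 * familiarity + 1.0 * explanation - 0.12 * familiarity * explanation
        adherence = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

        df = pd.DataFrame({"familiarity": familiarity,
                           "explanation": explanation,
                           "adherence": adherence})

        # Logistic regression with an interaction term; a negative
        # familiarity:explanation coefficient indicates that explanations
        # matter less as reported familiarity rises.
        model = smf.logit("adherence ~ familiarity * explanation", data=df).fit(disp=False)
        print(model.summary())

    On data with this structure, it is the sign and size of the interaction term, not the main effects alone, that would distinguish "explanations persuade everyone" from the paper's finding that they swayed only low-familiarity users.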

    Supplementary Material

    MP4 File (p240-schaffer.mp4)



    Published In

    IUI '19: Proceedings of the 24th International Conference on Intelligent User Interfaces
    March 2019
    713 pages
    ISBN:9781450362726
    DOI:10.1145/3301275
    © 2019 Association for Computing Machinery. ACM acknowledges that this contribution was authored or co-authored by an employee, contractor or affiliate of the United States government. As such, the United States Government retains a nonexclusive, royalty-free right to publish or reproduce this article, or to allow others to do so, for Government purposes only.


    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 17 March 2019


    Author Tags

    1. cognitive modeling
    2. decision support systems
    3. human-computer interaction
    4. information systems
    5. intelligent assistants
    6. user interfaces

    Qualifiers

    • Research-article

    Conference

    IUI '19

    Acceptance Rates

    IUI '19 Paper Acceptance Rate: 71 of 282 submissions, 25%
    Overall Acceptance Rate: 746 of 2,811 submissions, 27%

    Article Metrics

    • Downloads (last 12 months): 380
    • Downloads (last 6 weeks): 27
    Reflects downloads up to 14 Aug 2024


    Cited By

    • (2024) Does More Advice Help? The Effects of Second Opinions in AI-Assisted Decision Making. Proceedings of the ACM on Human-Computer Interaction 8(CSCW1), 1-31. DOI: 10.1145/3653708. Online publication date: 26-Apr-2024
    • (2024) Between Trust and Identity: Form, Function, and Presentation. Proceedings of the 6th ACM Conference on Conversational User Interfaces, 1-4. DOI: 10.1145/3640794.3669999. Online publication date: 8-Jul-2024
    • (2024) Take It, Leave It, or Fix It: Measuring Productivity and Trust in Human-AI Collaboration. Proceedings of the 29th International Conference on Intelligent User Interfaces, 370-384. DOI: 10.1145/3640543.3645198. Online publication date: 18-Mar-2024
    • (2024) Designing for Appropriate Reliance: The Roles of AI Uncertainty Presentation, Initial User Decision, and User Demographics in AI-Assisted Decision-Making. Proceedings of the ACM on Human-Computer Interaction 8(CSCW1), 1-32. DOI: 10.1145/3637318. Online publication date: 26-Apr-2024
    • (2024) Explaining the Unexplainable: The Impact of Misleading Explanations on Trust in Unreliable Predictions for Hardly Assessable Tasks. Proceedings of the 32nd ACM Conference on User Modeling, Adaptation and Personalization, 36-46. DOI: 10.1145/3627043.3659573. Online publication date: 22-Jun-2024
    • (2024) When in Doubt! Understanding the Role of Task Characteristics on Peer Decision-Making with AI Assistance. Proceedings of the 32nd ACM Conference on User Modeling, Adaptation and Personalization, 89-101. DOI: 10.1145/3627043.3659567. Online publication date: 22-Jun-2024
    • (2024) Trust by Interface: How Different User Interfaces Shape Human Trust in Health Information from Large Language Models. Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems, 1-7. DOI: 10.1145/3613905.3650837. Online publication date: 11-May-2024
    • (2024) Trust in AI-assisted Decision Making: Perspectives from Those Behind the System and Those for Whom the Decision is Made. Proceedings of the CHI Conference on Human Factors in Computing Systems, 1-14. DOI: 10.1145/3613904.3642018. Online publication date: 11-May-2024
    • (2024) Dealing with Uncertainty: Understanding the Impact of Prognostic Versus Diagnostic Tasks on Trust and Reliance in Human-AI Decision Making. Proceedings of the CHI Conference on Human Factors in Computing Systems, 1-17. DOI: 10.1145/3613904.3641905. Online publication date: 11-May-2024
    • (2024) Guided By AI: Navigating Trust, Bias, and Data Exploration in AI-Guided Visual Analytics. Computer Graphics Forum 43(3). DOI: 10.1111/cgf.15108. Online publication date: 10-Jun-2024
    • Show More Cited By
