
Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI

Published: 01 March 2021

Abstract

Trust is a central component of the interaction between people and AI, in that 'incorrect' levels of trust may cause misuse, abuse, or disuse of the technology. But what, precisely, is the nature of trust in AI? What are the prerequisites and goals of the cognitive mechanism of trust, and how can we promote them, or assess whether they are being satisfied in a given interaction? This work aims to answer these questions. We discuss a model of trust inspired by, but not identical to, interpersonal trust (i.e., trust between people) as defined by sociologists. This model rests on two key properties: the vulnerability of the user, and the ability to anticipate the impact of the AI model's decisions. We incorporate a formalization of 'contractual trust', such that trust between a user and an AI model is trust that some implicit or explicit contract will hold, and a formalization of 'trustworthiness' (which departs from the notion of trustworthiness in sociology), and with it concepts of 'warranted' and 'unwarranted' trust. We present the possible causes of warranted trust as intrinsic reasoning and extrinsic behavior, and discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted. Finally, we elucidate the connection between trust and XAI using our formalization.
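To make the abstract's vocabulary concrete, the sketch below is one possible illustrative reading of these definitions in Python: a 'contract' the user expects to hold, the two prerequisites (vulnerability and the ability to anticipate impact), and trust that counts as warranted only when it is matched by actual trustworthiness with respect to the same contract. All names (Contract, TrustRelation, the boolean fields) are hypothetical and are not the authors' formalization.

```python
from dataclasses import dataclass


@dataclass
class Contract:
    """An implicit or explicit expectation the user holds about the model,
    e.g. 'the classifier is accurate on in-domain inputs'. (Illustrative.)"""
    description: str


@dataclass
class TrustRelation:
    """One user-model interaction, described with the abstract's vocabulary (assumed reading)."""
    contract: Contract
    user_is_vulnerable: bool       # prerequisite: the user stands to lose something if the contract fails
    anticipates_impact: bool       # prerequisite: the user can anticipate the impact of the model's decisions
    user_believes_contract: bool   # the user trusts that the contract will hold
    model_upholds_contract: bool   # trustworthiness w.r.t. this contract (not directly observable by the user)

    def trust_exists(self) -> bool:
        # In this reading, trust requires both prerequisites plus the user's belief.
        return self.user_is_vulnerable and self.anticipates_impact and self.user_believes_contract

    def trust_is_warranted(self) -> bool:
        # Warranted trust: trust matched by actual trustworthiness with respect
        # to the same contract; trust without that match is unwarranted.
        return self.trust_exists() and self.model_upholds_contract


# Example: a user relies on a triage model to flag urgent cases, but the model
# does not actually uphold that contract -- trust exists but is unwarranted.
relation = TrustRelation(
    contract=Contract("the model flags all urgent cases"),
    user_is_vulnerable=True,
    anticipates_impact=True,
    user_believes_contract=True,
    model_upholds_contract=False,
)
assert relation.trust_exists() and not relation.trust_is_warranted()
```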


Published In

FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency
March 2021
899 pages
ISBN: 9781450383097
DOI: 10.1145/3442188


Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. artificial intelligence
  2. contractual trust
  3. distrust
  4. formalization
  5. sociology
  6. trust
  7. trustworthy
  8. warranted trust




