Research Article · DOI: 10.1145/3442188.3445890

The Sanction of Authority: Promoting Public Trust in AI

Published: 01 March 2021

Abstract

Trusted AI literature to date has focused on the trust needs of users who knowingly interact with discrete AIs. Conspicuously absent from the literature is a rigorous treatment of public trust in AI. We argue that public distrust of AI originates from the underdevelopment of a regulatory ecosystem that would guarantee the trustworthiness of the AIs that pervade society. Drawing from structuration theory and literature on institutional trust, we offer a model of public trust in AI that differs starkly from models driving Trusted AI efforts. This model provides a theoretical scaffolding for Trusted AI research which underscores the need to develop nothing less than a comprehensive and visibly functioning regulatory ecosystem. We elaborate the pivotal role of externally auditable AI documentation within this model and the work to be done to ensure it is effective, and outline a number of actions that would promote public trust in AI. We discuss how existing efforts to develop AI documentation within organizations---both to inform potential adopters of AI components and support the deliberations of risk and ethics review boards---are necessary but insufficient assurance of the trustworthiness of AI. We argue that being accountable to the public in ways that earn their trust, through elaborating rules for AI and developing resources for enforcing these rules, is what will ultimately make AI trustworthy enough to be woven into the fabric of our society.






Published In

FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency
March 2021
899 pages
ISBN: 9781450383097
DOI:10.1145/3442188


Publisher

Association for Computing Machinery, New York, NY, United States



Author Tags

  1. Trust
  2. artificial intelligence
  3. face-work
  4. institutional trust
  5. structuration theory
  6. trustworthiness

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Funding Sources

  • ESRC

Conference

FAccT '21


Article Metrics

  • Downloads (last 12 months): 309
  • Downloads (last 6 weeks): 56

Reflects downloads up to 01 Nov 2024.

Cited By

  • (2024) In Seal We Trust? Investigating the Effect of Certifications on Perceived Trustworthiness of AI Systems. Human-Machine Communication 8, 141-162. DOI: 10.30658/hmc.8.7
  • (2024) Trustworthiness of Policymakers, Technology Developers, and Media Organizations Involved in Introducing AI for Autonomous Vehicles: A Public Perspective. Science Communication 46, 5, 584-618. DOI: 10.1177/10755470241248169
  • (2024) "It Felt Like Having a Second Mind": Investigating Human-AI Co-creativity in Prewriting with Large Language Models. Proceedings of the ACM on Human-Computer Interaction 8, CSCW1, 1-26. DOI: 10.1145/3637361
  • (2024) Embodied Machine Learning. Proceedings of the Eighteenth International Conference on Tangible, Embedded, and Embodied Interaction, 1-12. DOI: 10.1145/3623509.3633370
  • (2024) Towards Designing a Question-Answering Chatbot for Online News: Understanding Questions and Perspectives. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1-17. DOI: 10.1145/3613904.3642007
  • (2024) (Un)making AI Magic: A Design Taxonomy. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1-21. DOI: 10.1145/3613904.3641954
  • (2024) Designing an ML Auditing Criteria Catalog as Starting Point for the Development of a Framework. IEEE Access 12, 39953-39967. DOI: 10.1109/ACCESS.2024.3375763
  • (2024) Ethics-based AI auditing. Information and Management 61, 5. DOI: 10.1016/j.im.2024.103969
  • (2024) Reducing organizational inequalities associated with algorithmic controls. Discover Artificial Intelligence 4, 1. DOI: 10.1007/s44163-024-00137-0
  • (2024) Making It Possible for the Auditing of AI: A Systematic Review of AI Audits and AI Auditability. Information Systems Frontiers. DOI: 10.1007/s10796-024-10508-8
