• Xuan Y, Small E, Sokol K, Hettiachchi D and Sanderson M. (2025). Comprehension is a double-edged sword: Over-interpreting unspecified information in intelligible machine learning explanations. International Journal of Human-Computer Studies. 10.1016/j.ijhcs.2024.103376. 193. (103376). Online publication date: 1-Jan-2025.

    https://linkinghub.elsevier.com/retrieve/pii/S1071581924001599

  • Wenzel J, Köhl M, Sterz S, Zhang H, Schmidt A, Fetzer C and Hermanns H. (2025). Traceability and Accountability by Construction. Leveraging Applications of Formal Methods, Verification and Validation. Software Engineering Methodologies. 10.1007/978-3-031-75387-9_16. (258-280).

    https://link.springer.com/10.1007/978-3-031-75387-9_16

  • Baum K, Biewer S, Hermanns H, Hetmank S, Langer M, Lauber-Rönsberg A and Sterz S. (2025). Taming the AI Monster: Monitoring of Individual Fairness for Effective Human Oversight. Model Checking Software. 10.1007/978-3-031-66149-5_1. (3-25).

    https://link.springer.com/10.1007/978-3-031-66149-5_1

  • Mehrotra S, Degachi C, Vereschak O, Jonker C and Tielman M. (2024). A Systematic Review on Fostering Appropriate Trust in Human-AI Interaction: Trends, Opportunities and Challenges. ACM Journal on Responsible Computing. 1:4. (1-45). Online publication date: 31-Dec-2024.

    https://doi.org/10.1145/3696449

  • Handler A, Larsen K and Hackathorn R. (2024). Large language models present new questions for decision support. International Journal of Information Management. 10.1016/j.ijinfomgt.2024.102811. 79. (102811). Online publication date: 1-Dec-2024.

    https://linkinghub.elsevier.com/retrieve/pii/S0268401224000598

  • Dierickx L, Opdahl A, Khan S, Lindén C and Guerrero Rojas D. (2024). A data-centric approach for ethical and trustworthy AI in journalism. Ethics and Information Technology. 26:4. Online publication date: 1-Dec-2024.

    https://doi.org/10.1007/s10676-024-09801-6

  • Wilkowska W, Otten S, Maidhof C and Ziefle M. (2023). Trust Conditions and Privacy Perceptions in the Acceptance of Ambient Technologies for Health-Related Purposes. International Journal of Human–Computer Interaction. 10.1080/10447318.2023.2272075. 40:22. (7784-7799). Online publication date: 16-Nov-2024.

    https://www.tandfonline.com/doi/full/10.1080/10447318.2023.2272075

  • Duan W, Zhou S, Scalia M, Yin X, Weng N, Zhang R, Freeman G, McNeese N, Gorman J and Tolston M. (2024). Understanding the Evolvement of Trust Over Time within Human-AI Teams. Proceedings of the ACM on Human-Computer Interaction. 8:CSCW2. (1-31). Online publication date: 7-Nov-2024.

    https://doi.org/10.1145/3687060

  • Recki L, Lawo D, Krauß V, Pins D and Boden A. (2024). "You Can either Blame Technology or Blame a Person..." --- A Conceptual Model of Users' AI-Risk Perception as a Tool for HCI. Proceedings of the ACM on Human-Computer Interaction. 8:CSCW2. (1-25). Online publication date: 7-Nov-2024.

    https://doi.org/10.1145/3686996

  • Lassiter T and Fleischmann K. (2024). "Something Fast and Cheap" or "A Core Element of Building Trust"? - AI Auditing Professionals' Perspectives on Trust in AI. Proceedings of the ACM on Human-Computer Interaction. 8:CSCW2. (1-22). Online publication date: 7-Nov-2024.

    https://doi.org/10.1145/3686963

  • Luo S, Ivison H, Han S and Poon J. (2024). Local Interpretations for Explainable Natural Language Processing: A Survey. ACM Computing Surveys. 56:9. (1-36). Online publication date: 31-Oct-2024.

    https://doi.org/10.1145/3649450

  • Chae J and Tewksbury D. (2024). Perceiving AI intervention does not compromise the persuasive effect of fact-checking. New Media & Society. 10.1177/14614448241286881.

    https://journals.sagepub.com/doi/10.1177/14614448241286881

  • Shaalan A, Tourky M and Ibrahim K. (2024). AI Caramba!. Leveraging AI for Effective Digital Relationship Marketing. 10.4018/979-8-3693-5340-0.ch011. (309-352).

    https://services.igi-global.com/resolvedoi/resolve.aspx?doi=10.4018/979-8-3693-5340-0.ch011

  • Peng W, Lee H and Lim S. (2024). Leveraging Chatbots to Combat Health Misinformation for Older Adults: Participatory Design Study. JMIR Formative Research. 10.2196/60712. 8. (e60712).

    https://formative.jmir.org/2024/1/e60712

  • Lee C and Cha K. (2024). Toward the Dynamic Relationship Between AI Transparency and Trust in AI: A Case Study on ChatGPT. International Journal of Human–Computer Interaction. 10.1080/10447318.2024.2405266. (1-18).

    https://www.tandfonline.com/doi/full/10.1080/10447318.2024.2405266

  • Benk M, Kerstan S, von Wangenheim F and Ferrario A. (2024). Twenty-four years of empirical research on trust in AI: a bibliometric review of trends, overlooked issues, and future directions. AI & SOCIETY. 10.1007/s00146-024-02059-y.

    https://link.springer.com/10.1007/s00146-024-02059-y

  • Glomsrud J, Kemna S, Vasanthan C, Zhao L, McGeorge D, Pedersen T, Torben T, Rokseth B and Nguyen D. (2024). Modular assurance of an Autonomous Ferry using Contract-Based Design and Simulation-based Verification Principles. Journal of Physics: Conference Series. 10.1088/1742-6596/2867/1/012043. 2867:1. (012043). Online publication date: 1-Oct-2024.

    https://iopscience.iop.org/article/10.1088/1742-6596/2867/1/012043

  • Caro-Martínez M, Recio-García J, Díaz-Agudo B, Darias J, Wiratunga N, Martin K, Wijekoon A, Nkisi-Orji I, Corsar D, Pradeep P, Bridge D and Liret A. (2024). iSee: A case-based reasoning platform for the design of explanation experiences. Knowledge-Based Systems. 10.1016/j.knosys.2024.112305. 302. (112305). Online publication date: 1-Oct-2024.

    https://linkinghub.elsevier.com/retrieve/pii/S0950705124009390

  • Coghlan S and Quinn T. (2024). Ethics of using artificial intelligence (AI) in veterinary medicine. AI & Society. 39:5. (2337-2348). Online publication date: 1-Oct-2024.

    https://doi.org/10.1007/s00146-023-01686-1

  • Huang K and Ball C. (2024). The Influence of AI Literacy on User's Trust in AI in Practical Scenarios: A Digital Divide Pilot Study. Proceedings of the Association for Information Science and Technology. 10.1002/pra2.1146. 61:1. (937-939). Online publication date: 1-Oct-2024.

    https://asistdl.onlinelibrary.wiley.com/doi/10.1002/pra2.1146

  • Chen C, Liao M and Sundar S. When to Explain? Exploring the Effects of Explanation Timing on User Perceptions and Trust in AI systems. Proceedings of the Second International Symposium on Trustworthy Autonomous Systems. (1-17).

    https://doi.org/10.1145/3686038.3686066

  • Chen C, Lee S, Jang E and Sundar S. Is Your Prompt Detailed Enough? Exploring the Effects of Prompt Coaching on Users' Perceptions, Engagement, and Trust in Text-to-Image Generative AI Tools. Proceedings of the Second International Symposium on Trustworthy Autonomous Systems. (1-12).

    https://doi.org/10.1145/3686038.3686060

  • Palazzolo P, Stahl B and Webb H. Measurable Trust: The Key to Unlocking User Confidence in Black-Box AI. Proceedings of the Second International Symposium on Trustworthy Autonomous Systems. (1-7).

    https://doi.org/10.1145/3686038.3686058

  • Withana S and Plale B. (2024). Patra ModelCards: AI/ML Accountability in the Edge-Cloud Continuum. 2024 IEEE 20th International Conference on e-Science (e-Science). 10.1109/e-Science62913.2024.10678710. 979-8-3503-6561-0. (1-10).

    https://ieeexplore.ieee.org/document/10678710/

  • Farbiz F, Aggarwal S, Maszczyk T, Habibullah M and Hamadicharef B. (2024). Reliability-improved machine learning model using knowledge-embedded learning approach for smart manufacturing. Journal of Intelligent Manufacturing. 10.1007/s10845-024-02482-4.

    https://link.springer.com/10.1007/s10845-024-02482-4

  • Nowroozi E, Taheri R, Hajizadeh M and Bauschert T. (2024). Verifying the Robustness of Machine Learning based Intrusion Detection Against Adversarial Perturbation. 2024 IEEE International Conference on Cyber Security and Resilience (CSR). 10.1109/CSR61664.2024.10679401. 979-8-3503-7536-7. (9-15).

    https://ieeexplore.ieee.org/document/10679401/

  • Hauptman A, Schelble B, Duan W, Flathmann C and McNeese N. (2024). Understanding the influence of AI autonomy on AI explainability levels in human-AI teams using a mixed methods approach. Cognition, Technology & Work. 10.1007/s10111-024-00765-7. 26:3. (435-455). Online publication date: 1-Sep-2024.

    https://link.springer.com/10.1007/s10111-024-00765-7

  • Matzen L, Gastelum Z, Howell B, Divis K and Stites M. (2024). Effects of machine learning errors on human decision-making: manipulations of model accuracy, error types, and error importance. Cognitive Research: Principles and Implications. 10.1186/s41235-024-00586-2. 9:1.

    https://cognitiveresearchjournal.springeropen.com/articles/10.1186/s41235-024-00586-2

  • Cai J, Fu X, Gu Z and Wu R. (2023). Validating Social Service Robot Interaction Trust (SSRIT) Scale in Measuring Consumers’ Trust Toward Interaction With Artificially Intelligent (AI) Social Robots With a Chinese Sample of Adults. International Journal of Human–Computer Interaction. 10.1080/10447318.2023.2212224. 40:16. (4319-4334). Online publication date: 17-Aug-2024.

    https://www.tandfonline.com/doi/full/10.1080/10447318.2023.2212224

  • Gröner F and Chiou E. (2024). Investigating the Impact of User Interface Designs on Expectations About Large Language Models’ Capabilities. Proceedings of the Human Factors and Ergonomics Society Annual Meeting. 10.1177/10711813241260399.

    https://journals.sagepub.com/doi/10.1177/10711813241260399

  • Kostick-Quenet K, Lang B, Smith J, Hurley M and Blumenthal-Barby J. (2023). Trust criteria for artificial intelligence in health: normative and epistemic considerations. Journal of Medical Ethics. 10.1136/jme-2023-109338. 50:8. (544-551). Online publication date: 1-Aug-2024.

    https://jme.bmj.com/lookup/doi/10.1136/jme-2023-109338

  • Lu Q, Zhu L, Xu X, Whittle J, Zowghi D and Jacquet A. (2024). Responsible AI Pattern Catalogue: A Collection of Best Practices for AI Governance and Engineering. ACM Computing Surveys. 56:7. (1-35). Online publication date: 31-Jul-2024.

    https://doi.org/10.1145/3626234

  • Singh G, Moncrieff G, Venter Z, Cawse-Nicholson K, Slingsby J and Robinson T. (2024). Uncertainty quantification for probabilistic machine learning in earth observation using conformal prediction. Scientific Reports. 10.1038/s41598-024-65954-w. 14:1.

    https://www.nature.com/articles/s41598-024-65954-w

  • Ferrario A, Facchini A and Termine A. (2024). Experts or Authorities? The Strange Case of the Presumed Epistemic Superiority of Artificial Intelligence Systems. Minds and Machines. 10.1007/s11023-024-09681-1. 34:3.

    https://link.springer.com/10.1007/s11023-024-09681-1

  • Li Y and Goel S. (2024). Making It Possible for the Auditing of AI: A Systematic Review of AI Audits and AI Auditability. Information Systems Frontiers. 10.1007/s10796-024-10508-8.

    https://link.springer.com/10.1007/s10796-024-10508-8

  • McGovern A, Demuth J, Bostrom A, Wirz C, Tissot P, Cains M and Musgrave K. (2024). The value of convergence research for developing trustworthy AI for weather, climate, and ocean hazards. npj Natural Hazards. 10.1038/s44304-024-00014-x. 1:1.

    https://www.nature.com/articles/s44304-024-00014-x

  • Cheng R, Wang R, Zimmermann T and Ford D. (2024). “It would work for me too”: How Online Communities Shape Software Developers’ Trust in AI-Powered Code Generation Tools. ACM Transactions on Interactive Intelligent Systems. 14:2. (1-39). Online publication date: 30-Jun-2024.

    https://doi.org/10.1145/3651990

  • Deacon T and Plumbley M. Working with AI Sound: Exploring the Future of Workplace AI Sound Technologies. Proceedings of the 3rd Annual Meeting of the Symposium on Human-Computer Interaction for Work. (1-21).

    https://doi.org/10.1145/3663384.3663391

  • Salimzadeh S and Gadiraju U. When in Doubt! Understanding the Role of Task Characteristics on Peer Decision-Making with AI Assistance. Proceedings of the 32nd ACM Conference on User Modeling, Adaptation and Personalization. (89-101).

    https://doi.org/10.1145/3627043.3659567

  • Mahony S and Chen Q. (2024). Concerns about the role of artificial intelligence in journalism, and media manipulation. Journalism. 10.1177/14648849241263293.

    https://journals.sagepub.com/doi/10.1177/14648849241263293

  • Ahn Y, Kim H and Kim S. (2024). WWW: A Unified Framework for Explaining What, Where and Why of Neural Networks by Interpretation of Neuron Concepts. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 10.1109/CVPR52733.2024.01043. 979-8-3503-5300-6. (10968-10977).

    https://ieeexplore.ieee.org/document/10656501/

  • Sullivan E. SIDEs: Separating Idealization from Deceptive 'Explanations' in xAI. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency. (1714-1724).

    https://doi.org/10.1145/3630106.3658999

  • Wang R, Cheng R, Ford D and Zimmermann T. Investigating and Designing for Trust in AI-powered Code Generation Tools. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency. (1475-1493).

    https://doi.org/10.1145/3630106.3658984

  • Manzini A, Keeling G, Marchal N, McKee K, Rieser V and Gabriel I. Should Users Trust Advanced AI Assistants? Justified Trust As a Function of Competence and Alignment. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency. (1174-1186).

    https://doi.org/10.1145/3630106.3658964

  • Pareek S, Velloso E and Goncalves J. Trust Development and Repair in AI-Assisted Decision-Making during Complementary Expertise. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency. (546-561).

    https://doi.org/10.1145/3630106.3658924

  • Wang Z, Wang J, Tian C, Ali A and Yin X. (2024). Adopting AI teammates in knowledge-intensive crowdsourcing contests: the roles of transparency and explainability. Kybernetes. 10.1108/K-02-2024-0478.

    https://www.emerald.com/insight/content/doi/10.1108/K-02-2024-0478/full/html

  • Hu M, Zhang G, Chong L, Cagan J and Goucher-Lambert K. (2024). How Being Outvoted by AI Teammates Impacts Human-AI Collaboration. International Journal of Human–Computer Interaction. 10.1080/10447318.2024.2345980. (1-18).

    https://www.tandfonline.com/doi/full/10.1080/10447318.2024.2345980

  • Bostrom A, Demuth J, Wirz C, Cains M, Schumacher A, Madlambayan D, Bansal A, Bearth A, Chase R, Crosman K, Ebert‐Uphoff I, Gagne D, Guikema S, Hoffman R, Johnson B, Kumler‐Bonfanti C, Lee J, Lowe A, McGovern A, Przybylo V, Radford J, Roth E, Sutter C, Tissot P, Roebber P, Stewart J, White M and Williams J. (2023). Trust and trustworthy artificial intelligence: A research agenda for AI in the environmental sciences. Risk Analysis. 10.1111/risa.14245. 44:6. (1498-1513). Online publication date: 1-Jun-2024.

    https://onlinelibrary.wiley.com/doi/10.1111/risa.14245

  • Hou T, Tseng Y and Yuan C. (2024). Is this AI sexist? The effects of a biased AI’s anthropomorphic appearance and explainability on users’ bias perceptions and trust. International Journal of Information Management. 10.1016/j.ijinfomgt.2024.102775. 76. (102775). Online publication date: 1-Jun-2024.

    https://linkinghub.elsevier.com/retrieve/pii/S0268401224000239

  • Califano G, Zhang T and Spence C. (2024). Would you trust an AI chef? Examining what people think when AI becomes creative with food. International Journal of Gastronomy and Food Science. 10.1016/j.ijgfs.2024.100973. (100973). Online publication date: 1-Jun-2024.

    https://linkinghub.elsevier.com/retrieve/pii/S1878450X24001069

  • Trevisan F, Troullinou P, Kyriazanos D, Fisher E, Fratantoni P, Sir C and Bertelli V. (2024). Deconstructing controversies to design a trustworthy AI future. Ethics and Information Technology. 10.1007/s10676-024-09771-9. 26:2. Online publication date: 1-Jun-2024.

    https://link.springer.com/10.1007/s10676-024-09771-9

  • Chiaburu T, Haußer F and Bießmann F. (2024). Uncertainty in XAI: Human Perception and Modeling Approaches. Machine Learning and Knowledge Extraction. 10.3390/make6020055. 6:2. (1170-1192).

    https://www.mdpi.com/2504-4990/6/2/55

  • Assis de Souza A, Stubbs A, Hesselink D, Baan C and Boer K. (2024). Cherry on Top or Real Need? A Review of Explainable Machine Learning in Kidney Transplantation. Transplantation. 10.1097/TP.0000000000005063.

    https://journals.lww.com/10.1097/TP.0000000000005063

  • Gregori L, Missier P, Stidolph M, Torlone R and Wood A. (2024). Design and Development of a Provenance Capture Platform for Data Science. 2024 IEEE 40th International Conference on Data Engineering Workshops (ICDEW). 10.1109/ICDEW61823.2024.00042. 979-8-3503-8403-1. (285-290).

    https://ieeexplore.ieee.org/document/10555066/

  • Degachi C, Mehrotra S, Yurrita M, Niforatos E and Tielman M. Practising Appropriate Trust in Human-Centred AI Design. Extended Abstracts of the CHI Conference on Human Factors in Computing Systems. (1-8).

    https://doi.org/10.1145/3613905.3650825

  • Oksanen J. Bridging the Integrity Gap: Towards AI-assisted Design Research. Extended Abstracts of the CHI Conference on Human Factors in Computing Systems. (1-5).

    https://doi.org/10.1145/3613905.3647962

  • Kim S. Establishing Appropriate Trust in AI through Transparency and Explainability. Extended Abstracts of the CHI Conference on Human Factors in Computing Systems. (1-6).

    https://doi.org/10.1145/3613905.3638184

  • Schoeffer J, De-Arteaga M and Kühl N. Explanations, Fairness, and Appropriate Reliance in Human-AI Decision-Making. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems. (1-18).

    https://doi.org/10.1145/3613904.3642621

  • Yoo D, Woo H, Nguyen V, Birnbaum M, Kruzan K, Kim J, Abowd G and De Choudhury M. Patient Perspectives on AI-Driven Predictions of Schizophrenia Relapses: Understanding Concerns and Opportunities for Self-Care and Treatment. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems. (1-20).

    https://doi.org/10.1145/3613904.3642369

  • Metzger L, Miller L, Baumann M and Kraus J. Empowering Calibrated (Dis-)Trust in Conversational Agents: A User Study on the Persuasive Power of Limitation Disclaimers vs. Authoritative Style. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems. (1-19).

    https://doi.org/10.1145/3613904.3642122

  • Vereschak O, Alizadeh F, Bailly G and Caramiaux B. Trust in AI-assisted Decision Making: Perspectives from Those Behind the System and Those for Whom the Decision is Made. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems. (1-14).

    https://doi.org/10.1145/3613904.3642018

  • Salimzadeh S, He G and Gadiraju U. Dealing with Uncertainty: Understanding the Impact of Prognostic Versus Diagnostic Tasks on Trust and Reliance in Human-AI Decision Making. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems. (1-17).

    https://doi.org/10.1145/3613904.3641905

  • Ala-Luopa S, Olsson T, Väänänen K, Hartikainen M and Makkonen J. (2024). Trusting Intelligent Automation in Expert Work: Accounting Practitioners’ Experiences and Perceptions. Computer Supported Cooperative Work (CSCW). 10.1007/s10606-024-09499-6.

    https://link.springer.com/10.1007/s10606-024-09499-6

  • Wall E, Matzen L, El-Assady M, Masters P, Hosseinpour H, Endert A, Borgo R, Chau P, Perer A, Schupp H, Strobelt H and Padilla L. (2024). Trust Junk and Evil Knobs: Calibrating Trust in AI Visualization. 2024 IEEE 17th Pacific Visualization Conference (PacificVis). 10.1109/PacificVis60374.2024.00012. 979-8-3503-9380-4. (22-31).

    https://ieeexplore.ieee.org/document/10541601/

  • Allen G, Gan L and Zheng L. (2024). Interpretable Machine Learning for Discovery: Statistical Challenges and Opportunities. Annual Review of Statistics and Its Application. 10.1146/annurev-statistics-040120-030919. 11:1. (97-121). Online publication date: 22-Apr-2024.

    https://www.annualreviews.org/content/journals/10.1146/annurev-statistics-040120-030919

  • Manovi L, Capelli L, Marchioni A, Martinini F, Setti G, Mangia M and Rovatti R. (2024). SVD-based Peephole and Clustering to Enhance Trustworthiness in DNN Classifiers. 2024 IEEE 6th International Conference on AI Circuits and Systems (AICAS). 10.1109/AICAS59952.2024.10595919. 979-8-3503-8363-8. (129-133).

    https://ieeexplore.ieee.org/document/10595919/

  • Zhang B. (2023). Public Opinion toward Artificial Intelligence. The Oxford Handbook of AI Governance. 10.1093/oxfordhb/9780197579329.013.36. (553-571).

    https://academic.oup.com/edited-volume/41989/chapter/411053295

  • Boughton L, Miller C, Acar Y, Wermke D and Kästner C. Decomposing and Measuring Trust in Open-Source Software Supply Chains. Proceedings of the 2024 ACM/IEEE 44th International Conference on Software Engineering: New Ideas and Emerging Results. (57-61).

    https://doi.org/10.1145/3639476.3639775

  • Balasooriya B, Sedera D and Sorwar G. (2024). The Behavioural Impact of Artificial Intelligence. Enhancing and Predicting Digital Consumer Behavior with AI. 10.4018/979-8-3693-4453-8.ch016. (311-329).

    https://services.igi-global.com/resolvedoi/resolve.aspx?doi=10.4018/979-8-3693-4453-8.ch016

  • Biewer S, Baum K, Sterz S, Hermanns H, Hetmank S, Langer M, Lauber-Rönsberg A and Lehr F. (2024). Software doping analysis for human oversight. Formal Methods in System Design. 10.1007/s10703-024-00445-2.

    https://link.springer.com/10.1007/s10703-024-00445-2

  • Kahr P, Rooks G, Snijders C and Willemsen M. The Trust Recovery Journey. The Effect of Timing of Errors on the Willingness to Follow AI Advice. Proceedings of the 29th International Conference on Intelligent User Interfaces. (609-622).

    https://doi.org/10.1145/3640543.3645167

  • Crooks C, Talwalkar S, Sharma T, Arora K and Venkatesh K. (2024). Designing Human-centered Artificial Intelligence to Assist with Domestic Abuse Recovery: Mitigating Technology Enabled Coercive Control. SoutheastCon 2024. 10.1109/SoutheastCon52093.2024.10500080. 979-8-3503-1710-7. (934-941).

    https://ieeexplore.ieee.org/document/10500080/

  • Zahedi Z. Modeling, Engendering and Leveraging Trust in Human-Robot Interaction: A Mental Model Based Framework. Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction. (166-168).

    https://doi.org/10.1145/3610978.3638370

  • Pink S, Quilty E, Grundy J and Hoda R. (2024). Trust, artificial intelligence and software practitioners: an interdisciplinary agenda. AI & SOCIETY. 10.1007/s00146-024-01882-7.

    https://link.springer.com/10.1007/s00146-024-01882-7

  • Bach T, Khan A, Hallock H, Beltrão G and Sousa S. (2022). A Systematic Literature Review of User Trust in AI-Enabled Systems: An HCI Perspective. International Journal of Human–Computer Interaction. 10.1080/10447318.2022.2138826. 40:5. (1251-1266). Online publication date: 3-Mar-2024.

    https://www.tandfonline.com/doi/full/10.1080/10447318.2022.2138826

  • Zafari S, de Pagter J, Papagni G, Rosenstein A, Filzmoser M and Koeszegi S. (2024). Trust Development and Explainability: A Longitudinal Study with a Personalized Assistive System. Multimodal Technologies and Interaction. 10.3390/mti8030020. 8:3. (20).

    https://www.mdpi.com/2414-4088/8/3/20

  • Schoenherr J and Thomson R. When AI Fails, Who Do We Blame? Attributing Responsibility in Human–AI Interactions. IEEE Transactions on Technology and Society. 10.1109/TTS.2024.3370095. 5:1. (61-70).

    https://ieeexplore.ieee.org/document/10457538/

  • Fleiß J, Bäck E and Thalmann S. (2024). Mitigating Algorithm Aversion in Recruiting: A Study on Explainable AI for Conversational Agents. ACM SIGMIS Database: the DATABASE for Advances in Information Systems. 55:1. (56-87). Online publication date: 5-Feb-2024.

    https://doi.org/10.1145/3645057.3645062

  • Duan X, Pei B, Ambrose G, Hershkovitz A, Cheng Y and Wang C. (2023). Towards transparent and trustworthy prediction of student learning achievement by including instructors as co-designers: a case study. Education and Information Technologies. 10.1007/s10639-023-11954-8. 29:3. (3075-3096). Online publication date: 1-Feb-2024.

    https://link.springer.com/10.1007/s10639-023-11954-8

  • Subías-Beltrán P, Pitarch C, Migliorelli C, Marte L, Galofré M and Orte S. (2024). The Role of Transparency in AI-Driven Technologies: Targeting Healthcare. AI - Ethical and Legal Challenges [Working Title]. 10.5772/intechopen.1007444.

    https://www.intechopen.com/online-first/1192309

  • Wang X, Zhang Y and Zheng J. (2024). Beyond the Value for AI Adopters: Analyzing the Impacts of Autonomous Vehicle Testing on Traffic Conditions in California. SSRN Electronic Journal. 10.2139/ssrn.4775378.

    https://www.ssrn.com/abstract=4775378

  • Fidon L, Aertsen M, Kofler F, Bink A, David A, Deprest T, Emam D, Guffens F, Jakab A, Kasprian G, Kienast P, Melbourne A, Menze B, Mufti N, Pogledic I, Prayer D, Stuempflen M, Elslander E, Ourselin S, Deprest J and Vercauteren T. A Dempster-Shafer Approach to Trustworthy AI With Application to Fetal Brain MRI Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence. 10.1109/TPAMI.2023.3346330. (1-12).

    https://ieeexplore.ieee.org/document/10388220/

  • Zhou J, Lin Y, Chen Q, Zhang Q, Huang X and He L. CausalABSC: Causal Inference for Aspect Debiasing in Aspect-Based Sentiment Classification. IEEE/ACM Transactions on Audio, Speech, and Language Processing. 10.1109/TASLP.2023.3340606. 32. (830-840).

    https://ieeexplore.ieee.org/document/10347447/

  • Bach T, Kristiansen J, Babic A and Jacovi A. Unpacking Human-AI Interaction in Safety-Critical Industries: A Systematic Literature Review. IEEE Access. 10.1109/ACCESS.2024.3437190. 12. (106385-106414).

    https://ieeexplore.ieee.org/document/10620168/

  • Torpmann-Hagen B, Riegler M, Halvorsen P and Johansen D. A Robust Framework for Distributional Shift Detection Under Sample-Bias. IEEE Access. 10.1109/ACCESS.2024.3393296. 12. (59598-59611).

    https://ieeexplore.ieee.org/document/10507823/

  • Kim M, Kim S, Kim J, Song T and Kim Y. (2024). Do stakeholder needs differ? - Designing stakeholder-tailored Explainable Artificial Intelligence (XAI) interfaces. International Journal of Human-Computer Studies. 181:C. Online publication date: 1-Jan-2024.

    https://doi.org/10.1016/j.ijhcs.2023.103160

  • Dierickx L, van Dalen A, Opdahl A and Lindén C. (2024). Striking the Balance in Using LLMs for Fact-Checking: A Narrative Literature Review. Disinformation in Open Online Media. 10.1007/978-3-031-71210-4_1. (1-15).

    https://link.springer.com/10.1007/978-3-031-71210-4_1

  • Klimava Y, Beltrão G and Paramonova I. (2024). Investigating Trust Perceptions Toward AI in Industrial Designers. Digital Interaction and Machine Intelligence. 10.1007/978-3-031-66594-3_20. (190-199).

    https://link.springer.com/10.1007/978-3-031-66594-3_20

  • Karumbaiah S, Ganesh A, Bharadwaj A and Anderson L. (2024). Evaluating Behaviors of General Purpose Language Models in a Pedagogical Context. Artificial Intelligence in Education. 10.1007/978-3-031-64299-9_4. (47-61).

    https://link.springer.com/10.1007/978-3-031-64299-9_4

  • Theodorou G, Karagiorgou S, Fulignoli A and Magri R. (2024). On Explaining and Reasoning About Optical Fiber Link Problems. Explainable Artificial Intelligence. 10.1007/978-3-031-63797-1_14. (268-289).

    https://link.springer.com/10.1007/978-3-031-63797-1_14

  • Jahn T, Hühn P and Förster M. (2024). Wasn’t Expecting that – Using Abnormality as a Key to Design a Novel User-Centric Explainable AI Method. Design Science Research for a Resilient Future. 10.1007/978-3-031-61175-9_5. (66-80).

    https://link.springer.com/10.1007/978-3-031-61175-9_5

  • Rubegni E, Ayoub O, Rizzo S, Barbero M, Bernegger G, Faraci F, Mangili F, Soldini E, Trimboli P and Facchini A. (2024). Designing for Complementarity: A Conceptual Framework to Go Beyond the Current Paradigm of Using XAI in Healthcare. Artificial Intelligence in HCI. 10.1007/978-3-031-60606-9_16. (277-296).

    https://link.springer.com/10.1007/978-3-031-60606-9_16

  • Henriques A, Parola H, Gonçalves R and Rodrigues M. (2024). Integrating Explainable AI: Breakthroughs in Medical Diagnosis and Surgery. Good Practices and New Perspectives in Information Systems and Technologies. 10.1007/978-3-031-60218-4_23. (254-272).

    https://link.springer.com/10.1007/978-3-031-60218-4_23

  • Wünn T, Sent D, Peute L and Leijnen S. (2024). Trust in Artificial Intelligence: Exploring the Influence of Model Presentation and Model Interaction on Trust in a Medical Setting. Artificial Intelligence. ECAI 2023 International Workshops. 10.1007/978-3-031-50485-3_6. (76-86).

    https://link.springer.com/10.1007/978-3-031-50485-3_6

  • Behery M, Brauner P, Zhou H, Uysal M, Samsonov V, Bellgardt M, Brillowski F, Brockhoff T, Ghahfarokhi A, Gleim L, Gorißen L, Grochowski M, Henn T, Iacomini E, Kaster T, Koren I, Liebenberg M, Reinsch L, Tirpitz L, Trinh M, Posada-Moreno A, Liehner L, Schemmer T, Vervier L, Völker M, Walderich P, Zhang S, Brecher C, Schmitt R, Decker S, Gries T, Häfner C, Herty M, Jarke M, Kowalewski S, Kuhlen T, Schleifenbaum J, Trimpe S, Aalst W, Ziefle M and Lakemeyer G. (2024). Actionable Artificial Intelligence for the Future of Production. Internet of Production. 10.1007/978-3-031-44497-5_4. (91-136).

    https://link.springer.com/10.1007/978-3-031-44497-5_4

  • Aliferis C and Simon G. (2024). Artificial Intelligence (AI) and Machine Learning (ML) for Healthcare and Health Sciences: The Need for Best Practices Enabling Trust in AI and ML. Artificial Intelligence and Machine Learning in Health Care and Medical Sciences. 10.1007/978-3-031-39355-6_1. (1-31).

    https://link.springer.com/10.1007/978-3-031-39355-6_1

  • Schrills T and Franke T. (2023). How Do Users Experience Traceability of AI Systems? Examining Subjective Information Processing Awareness in Automated Insulin Delivery (AID) Systems. ACM Transactions on Interactive Intelligent Systems. 13:4. (1-34). Online publication date: 31-Dec-2024.

    https://doi.org/10.1145/3588594

  • Li W, Yigitcanlar T, Nili A and Browne W. (2023). Tech Giants’ Responsible Innovation and Technology Strategy: An International Policy Review. Smart Cities. 10.3390/smartcities6060153. 6:6. (3454-3492).

    https://www.mdpi.com/2624-6511/6/6/153

  • Venger A and Dozortsev V. (2023). Trust in Artificial Intelligence: Modeling the Decision Making of Human Operators in Highly Dangerous Situations. Mathematics. 10.3390/math11244956. 11:24. (4956).

    https://www.mdpi.com/2227-7390/11/24/4956

  • Fel T, Boutin V, Moayeri M, Cadène R, Bethune L, Andéol L, Chalvidal M and Serre T. A holistic approach to unifying automatic concept extraction and concept importance estimation. Proceedings of the 37th International Conference on Neural Information Processing Systems. (54805-54818).

    https://dl.acm.org/doi/10.5555/3666122.3668513

  • Fel T, Boissin T, Boutin V, Picard A, Novello P, Colin J, Linsley D, Rousseau T, Cadène R, Goetschalckx L, Gardes L and Serre T. Unlocking feature visualization for deeper networks with magnitude constrained optimization. Proceedings of the 37th International Conference on Neural Information Processing Systems. (37813-37826).

    https://dl.acm.org/doi/10.5555/3666122.3667768

  • Soroko D, Savino G, Gray N and Schöning J. Social Transparency in Network Monitoring and Security Systems. Proceedings of the 22nd International Conference on Mobile and Ubiquitous Multimedia. (37-53).

    https://doi.org/10.1145/3626705.3627773

  • Stodt J, Reich C and Clarke N. (2023). A Novel Metric for XAI Evaluation Incorporating Pixel Analysis and Distance Measurement. 2023 IEEE 35th International Conference on Tools with Artificial Intelligence (ICTAI). 10.1109/ICTAI59109.2023.00009. 979-8-3503-4273-4. (1-9).

    https://ieeexplore.ieee.org/document/10356243/

  • Duenser A and Douglas D. (2023). Whom to Trust, How and Why: Untangling Artificial Intelligence Ethics Principles, Trustworthiness, and Trust. IEEE Intelligent Systems. 38:6. (19-26). Online publication date: 1-Nov-2023.

    https://doi.org/10.1109/MIS.2023.3322586

  • McNeese N, Flathmann C, O'Neill T and Salas E. (2023). Stepping out of the shadow of human-human teaming. Computers in Human Behavior. 148:C. Online publication date: 1-Nov-2023.

    https://doi.org/10.1016/j.chb.2023.107874

  • Alvarado R. (2022). What kind of trust does AI deserve, if any?. AI and Ethics. 10.1007/s43681-022-00224-x. 3:4. (1169-1183). Online publication date: 1-Nov-2023.

    https://link.springer.com/10.1007/s43681-022-00224-x

  • de Brito Duarte R, Correia F, Arriaga P, Paiva A and Mbunge E. (2023). AI Trust: Can Explainable AI Enhance Warranted Trust?. Human Behavior and Emerging Technologies. 10.1155/2023/4637678. 2023. (1-12). Online publication date: 31-Oct-2023.

    https://www.hindawi.com/journals/hbet/2023/4637678/

  • Wang T, Chen J, Li D, Liu X, Wang H and Zhou K. (2023). Fast GPU-based Two-way Continuous Collision Handling. ACM Transactions on Graphics. 42:5. (1-15). Online publication date: 31-Oct-2023.

    https://doi.org/10.1145/3604551

  • Chun J and Elkins K. (2023). eXplainable AI with GPT4 for story analysis and generation: A novel framework for diachronic sentiment analysis. International Journal of Digital Humanities. 10.1007/s42803-023-00069-8. 5:2-3. (507-532).

    https://link.springer.com/10.1007/s42803-023-00069-8

  • Coghlan S, Gyngell C and Vears D. (2023). Ethics of artificial intelligence in prenatal and pediatric genomic medicine. Journal of Community Genetics. 10.1007/s12687-023-00678-4. 15:1. (13-24).

    https://link.springer.com/10.1007/s12687-023-00678-4

  • El-Sappagh S, Alonso-Moral J, Abuhmed T, Ali F and Bugarín-Diz A. (2023). Trustworthy artificial intelligence in Alzheimer’s disease: state of the art, opportunities, and challenges. Artificial Intelligence Review. 10.1007/s10462-023-10415-5. 56:10. (11149-11296). Online publication date: 1-Oct-2023.

    https://link.springer.com/10.1007/s10462-023-10415-5

  • Chen X, Li S, Liu S, Fowler R and Wang X. (2023). MeetScript: Designing Transcript-based Interactions to Support Active Participation in Group Video Meetings. Proceedings of the ACM on Human-Computer Interaction. 7:CSCW2. (1-32). Online publication date: 28-Sep-2023.

    https://doi.org/10.1145/3610196

  • Pendse S, Kumar N and De Choudhury M. (2023). Marginalization and the Construction of Mental Illness Narratives Online: Foregrounding Institutions in Technology-Mediated Care. Proceedings of the ACM on Human-Computer Interaction. 7:CSCW2. (1-30). Online publication date: 28-Sep-2023.

    https://doi.org/10.1145/3610195

  • Sheehan A and Le Dantec C. (2023). Making Meaning from the Digitalization of Blue-Collar Work. Proceedings of the ACM on Human-Computer Interaction. 7:CSCW2. (1-21). Online publication date: 28-Sep-2023.

    https://doi.org/10.1145/3610194

  • Chen Q and Zhang A. (2023). Judgment Sieve: Reducing Uncertainty in Group Judgments through Interventions Targeting Ambiguity versus Disagreement. Proceedings of the ACM on Human-Computer Interaction. 7:CSCW2. (1-26). Online publication date: 28-Sep-2023.

    https://doi.org/10.1145/3610074

  • Ma M, Kim C, Hall K and Kim J. (2023). It Takes Two to Avoid Pregnancy: Addressing Conflicting Perceptions of Birth Control Pill Responsibility in Romantic Relationships. Proceedings of the ACM on Human-Computer Interaction. 7:CSCW2. (1-27). Online publication date: 28-Sep-2023.

    https://doi.org/10.1145/3610073

  • Zhang R, Duan W, Flathmann C, McNeese N, Freeman G and Williams A. (2023). Investigating AI Teammate Communication Strategies and Their Impact in Human-AI Teams for Effective Teamwork. Proceedings of the ACM on Human-Computer Interaction. 7:CSCW2. (1-31). Online publication date: 28-Sep-2023.

    https://doi.org/10.1145/3610072

  • Roeder L, Hoyte P, van der Meer J, Fell L, Johnston P, Kerr G and Bruza P. (2023). A Quantum Model of Trust Calibration in Human–AI Interactions. Entropy. 10.3390/e25091362. 25:9. (1362).

    https://www.mdpi.com/1099-4300/25/9/1362

  • Kaate I, Salminen J, Jung S, Almerekhi H and Jansen B. How Do Users Perceive Deepfake Personas? Investigating the Deepfake User Perception and Its Implications for Human-Computer Interaction. Proceedings of the 15th Biannual Conference of the Italian SIGCHI Chapter. (1-12).

    https://doi.org/10.1145/3605390.3605397

  • Wiedermann C, Mahlknecht A, Piccoliori G and Engl A. (2023). Redesigning Primary Care: The Emergence of Artificial-Intelligence-Driven Symptom Diagnostic Tools. Journal of Personalized Medicine. 10.3390/jpm13091379. 13:9. (1379).

    https://www.mdpi.com/2075-4426/13/9/1379

  • Speith T and Langer M. (2023). A New Perspective on Evaluation Methods for Explainable Artificial Intelligence (XAI). 2023 IEEE 31st International Requirements Engineering Conference Workshops (REW). 10.1109/REW57809.2023.00061. 979-8-3503-2691-8. (325-331).

    https://ieeexplore.ieee.org/document/10260827/

  • Caddell J and Nilchiani R. The Dynamics of Trust: Path Dependence in Interpersonal Trust. IEEE Engineering Management Review. 10.1109/EMR.2023.3285098. 51:3. (148-165).

    https://ieeexplore.ieee.org/document/10148640/

  • Yang M, Zhang X, Wang J and Zhou X. (2023). Causal representation for few-shot text classification. Applied Intelligence. 10.1007/s10489-023-04667-5. 53:18. (21422-21432). Online publication date: 1-Sep-2023.

    https://link.springer.com/10.1007/s10489-023-04667-5

  • Onari M, Grau I, Nobile M and Zhang Y. (2023). Trustworthy Artificial Intelligence in Medical Applications: A Mini Survey. 2023 IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology (CIBCB). 10.1109/CIBCB56990.2023.10264883. 979-8-3503-1017-7. (1-8).

    https://ieeexplore.ieee.org/document/10264883/

  • Natarajan N. Human-AI collaboration in recruitment and selection. Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence. (7089-7090).

    https://doi.org/10.24963/ijcai.2023/819

  • Amoozadeh M, Daniels D, Chen S, Nam D, Kumar A, Hilton M, Alipour M and Ragavan S. Towards Characterizing Trust in Generative Artificial Intelligence among Students. Proceedings of the 2023 ACM Conference on International Computing Education Research - Volume 2. (3-4).

    https://doi.org/10.1145/3568812.3603469

  • Crockett K, Colyer E, Gerber L and Latham A. Building Trustworthy AI Solutions: A Case for Practical Solutions for Small Businesses. IEEE Transactions on Artificial Intelligence. 10.1109/TAI.2021.3137091. 4:4. (778-791).

    https://ieeexplore.ieee.org/document/9658213/

  • Löfström H. (2023). On the Definition of Appropriate Trust and the Tools that Come with it. 2023 Congress in Computer Science, Computer Engineering, & Applied Computing (CSCE). 10.1109/CSCE60160.2023.00256. 979-8-3503-2759-5. (1555-1562).

    https://ieeexplore.ieee.org/document/10487244/

  • Talwalkar S and Crooks C. (2023). Designing User-Centered Artificial Intelligence to Assist in Recovery from Domestic Abuse. 2023 Congress in Computer Science, Computer Engineering, & Applied Computing (CSCE). 10.1109/CSCE60160.2023.00066. 979-8-3503-2759-5. (375-377).

    https://ieeexplore.ieee.org/document/10487313/

  • Attig C, Schrills T, Gödker M, Wollstadt P, Wiebel-Herboth C, Calero Valdez A and Franke T. Enhancing Trust in Smart Charging Agents—The Role of Traceability for Human-Agent-Cooperation. HCI International 2023 – Late Breaking Papers. (313-324).

    https://doi.org/10.1007/978-3-031-48057-7_19

  • Scharowski N, Perrig S, Svab M, Opwis K and Brühlmann F. (2023). Exploring the effects of human-centered AI explanations on trust and reliance. Frontiers in Computer Science. 10.3389/fcomp.2023.1151150. 5.

    https://www.frontiersin.org/articles/10.3389/fcomp.2023.1151150/full

  • Pasricha S. AI Ethics in Smart Healthcare. IEEE Consumer Electronics Magazine. 10.1109/MCE.2022.3220001. 12:4. (12-20).

    https://ieeexplore.ieee.org/document/9940606/

  • Chen R, Wang J, Williamson D, Chen T, Lipkova J, Lu M, Sahai S and Mahmood F. (2023). Algorithmic fairness in artificial intelligence for medicine and healthcare. Nature Biomedical Engineering. 10.1038/s41551-023-01056-8. 7:6. (719-742).

    https://www.nature.com/articles/s41551-023-01056-8

  • Santhalingam P, Pathak P, Rangwala H and Kosecka J. (2023). Synthetic Smartwatch IMU Data Generation from In-the-wild ASL Videos. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies. 7:2. (1-34). Online publication date: 12-Jun-2023.

    https://doi.org/10.1145/3596261

  • Haliburton L, Kheirinejad S, Schmidt A and Mayer S. (2023). Exploring Smart Standing Desks to Foster a Healthier Workplace. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies. 7:2. (1-22). Online publication date: 12-Jun-2023.

    https://doi.org/10.1145/3596260

  • Park J and Lee U. (2023). Understanding Disengagement in Just-in-Time Mobile Health Interventions. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies. 7:2. (1-27). Online publication date: 12-Jun-2023.

    https://doi.org/10.1145/3596240

  • Wang J, Zhao Z, Ou M, Cui J and Wu B. (2023). Automatic Update for Wi-Fi Fingerprinting Indoor Localization via Multi-Target Domain Adaptation. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies. 7:2. (1-27). Online publication date: 12-Jun-2023.

    https://doi.org/10.1145/3596239

  • Cheng H, Lou W, Yang Y, Chen Y and Zhang X. (2023). TwinkleTwinkle. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies. 7:2. (1-30). Online publication date: 12-Jun-2023.

    https://doi.org/10.1145/3596238

  • Lai V, Chen C, Smith-Renner A, Liao Q and Tan C. Towards a Science of Human-AI Decision Making: An Overview of Design Space in Empirical Human-Subject Studies. Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency. (1369-1385).

    https://doi.org/10.1145/3593013.3594087

  • Miller T. Explainable AI is Dead, Long Live Explainable AI!. Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency. (333-342).

    https://doi.org/10.1145/3593013.3594001

  • Scharowski N, Benk M, Kühne S, Wettstein L and Brühlmann F. Certification Labels for Trustworthy AI: Insights From an Empirical Mixed-Method Study. Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency. (248-260).

    https://doi.org/10.1145/3593013.3593994

  • Knowles B, Fledderjohann J, Richards J and Varshney K. Trustworthy AI and the Logics of Intersectional Resistance. Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency. (172-182).

    https://doi.org/10.1145/3593013.3593986

  • Kim S, Watkins E, Russakovsky O, Fong R and Monroy-Hernández A. Humans, AI, and Context: Understanding End-Users’ Trust in a Real-World Computer Vision Application. Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency. (77-88).

    https://doi.org/10.1145/3593013.3593978

  • Lertvittayakumjorn P and Toni F. Argumentative explanations for pattern-based text classifiers. Argument & Computation. 10.3233/AAC-220004. 14:2. (163-234).

    https://www.medra.org/servlet/aliasResolver?alias=iospress&doi=10.3233/AAC-220004

  • Fel T, Picard A, Bethune L, Boissin T, Vigouroux D, Colin J, Cadène R and Serre T. (2023). CRAFT: Concept Recursive Activation FacTorization for Explainability. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 10.1109/CVPR52729.2023.00266. 979-8-3503-0129-8. (2711-2721).

    https://ieeexplore.ieee.org/document/10205223/

  • Kaminwar S, Goschenhofer J, Thomas J, Thon I and Bischl B. (2023). Structured Verification of Machine Learning Models in Industrial Settings. Big Data. 10.1089/big.2021.0112. 11:3. (181-198). Online publication date: 1-Jun-2023.

    https://www.liebertpub.com/doi/10.1089/big.2021.0112

  • Vered M, Livni T, Howe P, Miller T and Sonenberg L. (2023). The Effects of Explanations on Automation Bias. Artificial Intelligence. 10.1016/j.artint.2023.103952. (103952). Online publication date: 1-Jun-2023.

    https://linkinghub.elsevier.com/retrieve/pii/S000437022300098X

  • Coghlan S and Parker C. (2023). Harm to Nonhuman Animals from AI: a Systematic Account and Framework. Philosophy & Technology. 10.1007/s13347-023-00627-6. 36:2. Online publication date: 1-Jun-2023.

    https://link.springer.com/10.1007/s13347-023-00627-6

  • Xu J, Zhang X, Li H, Yoo C and Pan Y. (2023). Is Everyone an Artist? A Study on User Experience of AI-Based Painting System. Applied Sciences. 10.3390/app13116496. 13:11. (6496).

    https://www.mdpi.com/2076-3417/13/11/6496

  • Unver M. (2023). Governing fiduciary relationships or building up a governance model for trust in AI? Review of healthcare as a socio-technical system. International Review of Law, Computers & Technology. 10.1080/13600869.2023.2192569. 37:2. (198-226). Online publication date: 4-May-2023.

    https://www.tandfonline.com/doi/full/10.1080/13600869.2023.2192569

  • Pelly M, Fatehi F, Liew D and Verdejo-Garcia A. (2023). Artificial intelligence for secondary prevention of myocardial infarction: A qualitative study of patient and health professional perspectives. International Journal of Medical Informatics. 10.1016/j.ijmedinf.2023.105041. 173. (105041). Online publication date: 1-May-2023.

    https://linkinghub.elsevier.com/retrieve/pii/S138650562300059X

  • Vianello A, Laine S and Tuomi E. (2022). Improving Trustworthiness of AI Solutions: A Qualitative Approach to Support Ethically-Grounded AI Design. International Journal of Human–Computer Interaction. 10.1080/10447318.2022.2095478. 39:7. (1405-1422). Online publication date: 21-Apr-2023.

    https://www.tandfonline.com/doi/full/10.1080/10447318.2022.2095478

  • Bansal G, Buçinca Z, Holstein K, Hullman J, Smith-Renner A, Stumpf S and Wu S. Workshop on Trust and Reliance in AI-Human Teams (TRAIT). Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems. (1-6).

    https://doi.org/10.1145/3544549.3573831

  • Suresh H, Shanmugam D, Chen T, Bryan A, D'Amour A, Guttag J and Satyanarayan A. Kaleidoscope: Semantically-grounded, context-specific ML model evaluation. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. (1-13).

    https://doi.org/10.1145/3544548.3581482

  • Bi N, Huang Y, Han C and Hsu J. (2023). You Know What I Meme: Enhancing People's Understanding and Awareness of Hateful Memes Using Crowdsourced Explanations. Proceedings of the ACM on Human-Computer Interaction. 7:CSCW1. (1-27). Online publication date: 14-Apr-2023.

    https://doi.org/10.1145/3579593

  • Albini E, Rago A, Baroni P and Toni F. (2023). Achieving descriptive accuracy in explanations via argumentation: The case of probabilistic classifiers. Frontiers in Artificial Intelligence. 10.3389/frai.2023.1099407. 6.

    https://www.frontiersin.org/articles/10.3389/frai.2023.1099407/full

  • Papagni G, de Pagter J, Zafari S, Filzmoser M and Koeszegi S. (2022). Artificial agents’ explainability to support trust: considerations on timing and context. AI & Society. 38:2. (947-960). Online publication date: 1-Apr-2023.

    https://doi.org/10.1007/s00146-022-01462-7

  • Kiyasseh D, Laca J, Haque T, Miles B, Wagner C, Donoho D, Anandkumar A and Hung A. (2023). A multi-institutional study using artificial intelligence to provide reliable and fair feedback to surgeons. Communications Medicine. 10.1038/s43856-023-00263-3. 3:1.

    https://www.nature.com/articles/s43856-023-00263-3

  • Schemmer M, Kuehl N, Benz C, Bartos A and Satzger G. Appropriate Reliance on AI Advice: Conceptualization and the Effect of Explanations. Proceedings of the 28th International Conference on Intelligent User Interfaces. (410-422).

    https://doi.org/10.1145/3581641.3584066

  • Sloane M, Solano-Kamaiko I, Yuan J, Dasgupta A and Stoyanovich J. (2023). Introducing contextual transparency for automated decision systems. Nature Machine Intelligence. 10.1038/s42256-023-00623-7. 5:3. (187-195).

    https://www.nature.com/articles/s42256-023-00623-7

  • Lukashova-Sanz O, Dechant M and Wahl S. (2023). The Influence of Disclosing the AI Potential Error to the User on the Efficiency of User–AI Collaboration. Applied Sciences. 10.3390/app13063572. 13:6. (3572).

    https://www.mdpi.com/2076-3417/13/6/3572

  • Scherer B and Lehner S. (2022). Trust me, I am a Robo-advisor. Journal of Asset Management. 10.1057/s41260-022-00284-y. 24:2. (85-96). Online publication date: 1-Mar-2023.

    https://link.springer.com/10.1057/s41260-022-00284-y

  • Narayanan D and Tan Z. (2023). Attitudinal Tensions in the Joint Pursuit of Explainable and Trusted AI. Minds and Machines. 10.1007/s11023-023-09628-y. 33:1. (55-82). Online publication date: 1-Mar-2023.

    https://link.springer.com/10.1007/s11023-023-09628-y

  • Broderick T, Gelman A, Meager R, Smith A and Zheng T. (2023). Toward a taxonomy of trust for probabilistic machine learning. Science Advances. 10.1126/sciadv.abn3999. 9:7. Online publication date: 17-Feb-2023.

    https://www.science.org/doi/10.1126/sciadv.abn3999

  • Razov P and Garaganov A. (2023). Digitalization of mass media as a factor of influence on trust in artificial intelligence. Digital Sociology. 10.26425/2658-347X-2022-5-4-90-97. 5:4. (90-97).

    https://digitalsociology.guu.ru/jour/article/view/209

  • Tag B, van Berkel N, Verma S, Zhao B, Berkovsky S, Kaafar D, Kostakos V and Ohrimenko O. DDoD: Dual Denial of Decision Attacks on Human-AI Teams. IEEE Pervasive Computing. 10.1109/MPRV.2022.3218773. 22:1. (77-84).

    https://ieeexplore.ieee.org/document/10054350/

  • Pup F and Atzori M. Applications of Self-Supervised Learning to Biomedical Signals: A Survey. IEEE Access. 10.1109/ACCESS.2023.3344531. 11. (144180-144203).

    https://ieeexplore.ieee.org/document/10365170/

  • Park S, Kim H, Park J and Lee Y. Designing and Evaluating User Experience of an AI-Based Defense System. IEEE Access. 10.1109/ACCESS.2023.3329257. 11. (122045-122056).

    https://ieeexplore.ieee.org/document/10304133/

  • Gjorgjevikj A, Mishev K, Antovski L and Trajanov D. Requirements Engineering in Machine Learning Projects. IEEE Access. 10.1109/ACCESS.2023.3294840. 11. (72186-72208).

    https://ieeexplore.ieee.org/document/10179899/

  • Amarasinghe K, Rodolfa K, Lamba H and Ghani R. (2023). Explainable machine learning for public policy: Use cases, gaps, and research directions. Data & Policy. 10.1017/dap.2023.2. 5.

    https://www.cambridge.org/core/product/identifier/S2632324923000020/type/journal_article

  • Peters T and Visser R. (2023). The Importance of Distrust in AI. Explainable Artificial Intelligence. 10.1007/978-3-031-44070-0_15. (301-317).

    https://link.springer.com/10.1007/978-3-031-44070-0_15

  • Wan C, Belo R, Zejnilović L and Lavado S. (2023). The Duet of Representations and How Explanations Exacerbate It. Explainable Artificial Intelligence. 10.1007/978-3-031-44067-0_10. (181-197).

    https://link.springer.com/10.1007/978-3-031-44067-0_10

  • Eckhardt S, Knaeble M, Bucher A, Staehelin D, Dolata M, Agotai D and Schwabe G. (2023). “Garbage In, Garbage Out”: Mitigating Human Biases in Data Entry by Means of Artificial Intelligence. Human-Computer Interaction – INTERACT 2023. 10.1007/978-3-031-42286-7_2. (27-48).

    https://link.springer.com/10.1007/978-3-031-42286-7_2

  • Alhaji B, Prilla M and Rausch A. (2023). Robot Collaboration and Model Reliance Based on Its Trust in Human-Robot Interaction. Human-Computer Interaction – INTERACT 2023. 10.1007/978-3-031-42283-6_2. (17-39).

    https://link.springer.com/10.1007/978-3-031-42283-6_2

  • Gates L, Leake D and Wilkerson K. (2023). Cases Are King: A User Study of Case Presentation to Explain CBR Decisions. Case-Based Reasoning Research and Development. 10.1007/978-3-031-40177-0_10. (153-168).

    https://link.springer.com/10.1007/978-3-031-40177-0_10

  • Jalali A, Haslhofer B, Kriglstein S and Rauber A. (2023). Predictability and Comprehensibility in Post-Hoc XAI Methods: A User-Centered Analysis. Intelligent Computing. 10.1007/978-3-031-37717-4_46. (712-733).

    https://link.springer.com/10.1007/978-3-031-37717-4_46

  • Schrills T, Gruner M, Peuscher H and Franke T. (2023). Safe Environments to Understand Medical AI - Designing a Diabetes Simulation Interface for Users of Automated Insulin Delivery. Digital Human Modeling and Applications in Health, Safety, Ergonomics and Risk Management. 10.1007/978-3-031-35748-0_23. (306-328).

    https://link.springer.com/10.1007/978-3-031-35748-0_23

  • Kühne M, Wiebel-Herboth C, Wollstadt P, Calero Valdez A and Franke T. (2023). Who’s in Charge of Charging? Investigating Human-Machine-Cooperation in Smart Charging of Electric Vehicles. HCI in Mobility, Transport, and Automotive Systems. 10.1007/978-3-031-35678-0_8. (131-143).

    https://link.springer.com/10.1007/978-3-031-35678-0_8

  • Paramonova I, Sousa S and Lamas D. (2023). Exploring Factors Affecting User Perception of Trustworthiness in Advanced Technology: Preliminary Results. Learning and Collaboration Technologies. 10.1007/978-3-031-34411-4_25. (366-383).

    https://link.springer.com/10.1007/978-3-031-34411-4_25

  • Reinhard P, Li M, Dickhaut E, Reh C, Peters C and Leimeister J. (2023). A Conceptual Model for Labeling in Reinforcement Learning Systems: A Value Co-creation Perspective. Design Science Research for a New Society: Society 5.0. 10.1007/978-3-031-32808-4_8. (123-137).

    https://link.springer.com/10.1007/978-3-031-32808-4_8

  • Behery M, Brauner P, Zhou H, Uysal M, Samsonov V, Bellgardt M, Brillowski F, Brockhoff T, Ghahfarokhi A, Gleim L, Gorissen L, Grochowski M, Henn T, Iacomini E, Kaster T, Koren I, Liebenberg M, Reinsch L, Tirpitz L, Trinh M, Posada-Moreno A, Liehner L, Schemmer T, Vervier L, Völker M, Walderich P, Zhang S, Brecher C, Schmitt R, Decker S, Gries T, Häfner C, Herty M, Jarke M, Kowalewski S, Kuhlen T, Schleifenbaum J, Trimpe S, Aalst W, Ziefle M and Lakemeyer G. (2023). Actionable Artificial Intelligence for the Future of Production. Internet of Production. 10.1007/978-3-030-98062-7_4-2. (1-46).

    https://link.springer.com/10.1007/978-3-030-98062-7_4-2

  • Behery M, Brauner P, Zhou H, Uysal M, Samsonov V, Bellgardt M, Brillowski F, Brockhoff T, Ghahfarokhi A, Gleim L, Gorissen L, Grochowski M, Henn T, Iacomini E, Kaster T, Koren I, Liebenberg M, Reinsch L, Tirpitz L, Trinh M, Posada-Moreno A, Liehner L, Schemmer T, Vervier L, Völker M, Walderich P, Zhang S, Brecher C, Schmitt R, Decker S, Gries T, Häfner C, Herty M, Jarke M, Kowalewski S, Kuhlen T, Schleifenbaum J, Trimpe S, Aalst W, Ziefle M and Lakemeyer G. (2023). Actionable Artificial Intelligence for the Future of Production. Internet of Production. 10.1007/978-3-030-98062-7_4-1. (1-46).

    https://link.springer.com/10.1007/978-3-030-98062-7_4-1

  • Owens E, Sheehan B, Mullins M, Cunneen M, Ressel J and Castignani G. (2022). Explainable Artificial Intelligence (XAI) in Insurance. Risks. 10.3390/risks10120230. 10:12. (230).

    https://www.mdpi.com/2227-9091/10/12/230

  • Harper S and Weber E. Fiduciary Responsibility: Facilitating Public Trust in Automated Decision Making. Journal of Social Computing. 10.23919/JSC.2022.0017. 3:4. (345-362).

    https://ieeexplore.ieee.org/document/10054635/

  • Garcia K, Mishler S, Xiao Y, Wang C, Hu B, Still J and Chen J. (2022). Drivers’ Understanding of Artificial Intelligence in Automated Driving Systems: A Study of a Malicious Stop Sign. Journal of Cognitive Engineering and Decision Making. 10.1177/15553434221117001. 16:4. (237-251). Online publication date: 1-Dec-2022.

    http://journals.sagepub.com/doi/10.1177/15553434221117001

  • Appelganc K, Rieger T, Roesler E and Manzey D. (2022). How Much Reliability Is Enough? A Context-Specific View on Human Interaction With (Artificial) Agents From Different Perspectives. Journal of Cognitive Engineering and Decision Making. 10.1177/15553434221104615. 16:4. (207-221). Online publication date: 1-Dec-2022.

    http://journals.sagepub.com/doi/10.1177/15553434221104615

  • Bagave P, Westberg M, Dobbe R, Janssen M and Ding A. (2022). Accountable AI for Healthcare IoT Systems. 2022 IEEE 4th International Conference on Trust, Privacy and Security in Intelligent Systems, and Applications (TPS-ISA). 10.1109/TPS-ISA56441.2022.00013. 978-1-6654-7408-5. (20-28).

    https://ieeexplore.ieee.org/document/10063369/

  • Lukyanenko R, Maass W and Storey V. (2022). Trust in artificial intelligence: From a Foundational Trust Framework to emerging research opportunities. Electronic Markets. 10.1007/s12525-022-00605-4. 32:4. (1993-2020). Online publication date: 1-Dec-2022.

    https://link.springer.com/10.1007/s12525-022-00605-4

  • Riedl R. (2022). Is trust in artificial intelligence systems related to user personality? Review of empirical evidence and future research directions. Electronic Markets. 10.1007/s12525-022-00594-4. 32:4. (2021-2051). Online publication date: 1-Dec-2022.

    https://link.springer.com/10.1007/s12525-022-00594-4

  • Chazette L, Brunotte W and Speith T. (2022). Explainable software systems: from requirements analysis to system evaluation. Requirements Engineering. 27:4. (457-487). Online publication date: 1-Dec-2022.

    https://doi.org/10.1007/s00766-022-00393-5

  • Ying Z, Hase P and Bansal M. VISFIS. Proceedings of the 36th International Conference on Neural Information Processing Systems. (17057-17072).

    https://dl.acm.org/doi/10.5555/3600270.3601511

  • Feder A, Horowitz G, Wald Y, Reichart R and Rosenfeld N. In the eye of the beholder. Proceedings of the 36th International Conference on Neural Information Processing Systems. (14419-14433).

    https://dl.acm.org/doi/10.5555/3600270.3601318

  • Roy N, Kim J and Rabinowitz N. Explainability via causal self-talk. Proceedings of the 36th International Conference on Neural Information Processing Systems. (7655-7670).

    https://dl.acm.org/doi/10.5555/3600270.3600826

  • Lin C, Fan H, Chang Y, Ou L, Liu J, Wang Y and Jung T. (2022). Modelling the Trust Value for Human Agents Based on Real-Time Human States in Human-Autonomous Teaming Systems. Technologies. 10.3390/technologies10060115. 10:6. (115).

    https://www.mdpi.com/2227-7080/10/6/115

  • Marot A, Donnot B, Chaouache K, Kelly A, Huang Q, Hossain R and Cremer J. (2022). Learning to run a power network with trust. Electric Power Systems Research. 10.1016/j.epsr.2022.108487. 212. (108487). Online publication date: 1-Nov-2022.

    https://linkinghub.elsevier.com/retrieve/pii/S0378779622006137

  • Mohajeri B and Cheng J. “Inconsistent Performance”: Understanding Concerns of Real-World Users on Smart Mobile Health Applications Through Analyzing App Reviews. Adjunct Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology. (1-4).

    https://doi.org/10.1145/3526114.3558698

  • Starke G and Ienca M. (2022). Misplaced Trust and Distrust: How Not to Engage with Medical Artificial Intelligence. Cambridge Quarterly of Healthcare Ethics. 10.1017/S0963180122000445. (1-10).

    https://www.cambridge.org/core/product/identifier/S0963180122000445/type/journal_article

  • Feder A, Keith K, Manzoor E, Pryzant R, Sridhar D, Wood-Doughty Z, Eisenstein J, Grimmer J, Reichart R, Roberts M, Stewart B, Veitch V and Yang D. (2022). Causal Inference in Natural Language Processing: Estimation, Prediction, Interpretation and Beyond. Transactions of the Association for Computational Linguistics. 10.1162/tacl_a_00511. 10. (1138-1158). Online publication date: 18-Oct-2022.

    https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00511/113490/Causal-Inference-in-Natural-Language-Processing

  • Albini E, Rago A, Baroni P and Toni F. Descriptive Accuracy in Explanations: The Case of Probabilistic Classifiers. Scalable Uncertainty Management. (279-294).

    https://doi.org/10.1007/978-3-031-18843-5_19

  • Brumen B, Göllner S and Tropmann-Frick M. Aspects and Views on Responsible Artificial Intelligence. Machine Learning, Optimization, and Data Science. (384-398).

    https://doi.org/10.1007/978-3-031-25599-1_29

  • Starke G and Poppe C. (2022). Karl Jaspers and artificial neural nets: on the relation of explaining and understanding artificial intelligence in medicine. Ethics and Information Technology. 24:3. Online publication date: 1-Sep-2022.

    https://doi.org/10.1007/s10676-022-09650-1

  • Speith T. (2022). How to Evaluate Explainability? - A Case for Three Criteria. 2022 IEEE 30th International Requirements Engineering Conference Workshops (REW). 10.1109/REW56159.2022.00024. 978-1-6654-6000-2. (92-97).

    https://ieeexplore.ieee.org/document/9920152/

  • Tilloo P, Parron J, Obidat O, Zhu M and Wang W. (2022). A POMDP-based Robot-Human Trust Model for Human-Robot Collaboration. 2022 12th International Conference on CYBER Technology in Automation, Control, and Intelligent Systems (CYBER). 10.1109/CYBER55403.2022.9907660. 978-1-6654-7267-8. (1009-1014).

    https://ieeexplore.ieee.org/document/9907660/

  • Ooge J and Verbert K. (2022). Visually Explaining Uncertain Price Predictions in Agrifood: A User-Centred Case-Study. Agriculture. 10.3390/agriculture12071024. 12:7. (1024).

    https://www.mdpi.com/2077-0472/12/7/1024

  • Constantin A, Atkinson M, Bernabeu M, Buckmaster F, Dhillon B, McTrusty A, Strang N and Williams R. (2022). Optometrists’ Perspectives regarding Artificial Intelligence Assistance and contributing Retinal Images to a Repository: a Pilot Study (Preprint). JMIR Human Factors. 10.2196/40887.

    http://preprints.jmir.org/preprint/40887/accepted

  • Gunny A, Rankin D, Harris P, Katsavounidis E, Marx E, Saleem M, Coughlin M and Benoit W. (2022). A Software Ecosystem for Deploying Deep Learning in Gravitational Wave Physics. HPDC '22: The 31st International Symposium on High-Performance Parallel and Distributed Computing. 10.1145/3526058.3535454. 9781450393096. (9-17). Online publication date: 1-Jul-2022.

    https://dl.acm.org/doi/10.1145/3526058.3535454

  • Ferrario A. (2021). Design publicity of black box algorithms: a support to the epistemic and ethical justifications of medical AI systems. Journal of Medical Ethics. 10.1136/medethics-2021-107482. 48:7. (492-494). Online publication date: 1-Jul-2022.

    https://jme.bmj.com/lookup/doi/10.1136/medethics-2021-107482

  • Robertson S and Díaz M. Understanding and Being Understood: User Strategies for Identifying and Recovering From Mistranslations in Machine Translation-Mediated Chat. Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency. (2223-2238).

    https://doi.org/10.1145/3531146.3534638

  • Schoeffer J, Kuehl N and Machowski Y. “There Is Not Enough Information”: On the Effects of Explanations on Perceptions of Informational Fairness and Trustworthiness in Automated Decision-Making. Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency. (1616-1628).

    https://doi.org/10.1145/3531146.3533218

  • Ferrario A and Loi M. How Explainability Contributes to Trust in AI. Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency. (1457-1466).

    https://doi.org/10.1145/3531146.3533202

  • Thornton L, Knowles B and Blair G. The Alchemy of Trust: The Creative Act of Designing Trustworthy Socio-Technical Systems. Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency. (1387-1398).

    https://doi.org/10.1145/3531146.3533196

  • Liao Q and Sundar S. Designing for Responsible Trust in AI Systems: A Communication Perspective. Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency. (1257-1268).

    https://doi.org/10.1145/3531146.3533182

  • Crisan A, Drouhard M, Vig J and Rajani N. Interactive Model Cards: A Human-Centered Approach to Model Documentation. Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency. (427-439).

    https://doi.org/10.1145/3531146.3533108

  • Fietta V, Zecchinato F, Stasi B, Polato M and Monaro M. Dissociation Between Users’ Explicit and Implicit Attitudes Toward Artificial Intelligence: An Experimental Study. IEEE Transactions on Human-Machine Systems. 10.1109/THMS.2021.3125280. 52:3. (481-489).

    https://ieeexplore.ieee.org/document/9627595/

  • Dauda O, Awotunde J, Muyideen AbdulRaheem and Salihu S. (2022). Basic Issues and Challenges on Explainable Artificial Intelligence (XAI) in Healthcare Systems. Principles and Methods of Explainable Artificial Intelligence in Healthcare. 10.4018/978-1-6684-3791-9.ch011. (248-271).

    https://services.igi-global.com/resolvedoi/resolve.aspx?doi=10.4018/978-1-6684-3791-9.ch011

  • Lu Q, Zhu L, Xu X, Whittle J and Xing Z. (2022). Towards a roadmap on software engineering for responsible AI. CAIN '22: 1st Conference on AI Engineering - Software Engineering for AI. 10.1145/3522664.3528607. 9781450392754. (101-112). Online publication date: 16-May-2022.

    https://dl.acm.org/doi/10.1145/3522664.3528607

  • Tolmeijer S, Christen M, Kandul S, Kneer M and Bernstein A. Capable but Amoral? Comparing AI and Human Expert Collaboration in Ethical Decision Making. Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. (1-17).

    https://doi.org/10.1145/3491102.3517732

  • Lyons H, Wijenayake S, Miller T and Velloso E. What’s the Appeal? Perceptions of Review Processes for Algorithmic Decisions. Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. (1-15).

    https://doi.org/10.1145/3491102.3517606

  • Kapania S, Siy O, Clapper G, SP A and Sambasivan N. “Because AI is 100% right and safe”: User Attitudes and Sources of AI Authority in India. Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. (1-18).

    https://doi.org/10.1145/3491102.3517533

  • Langer M, Hunsicker T, Feldkamp T, König C and Grgić-Hlača N. “Look! It’s a Computer Program! It’s an Algorithm! It’s AI!”: Does Terminology Affect Human Perceptions and Evaluations of Algorithmic Decision-Making Systems?. Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. (1-28).

    https://doi.org/10.1145/3491102.3517527

  • Chalhoub G and Sarkar A. “It’s Freedom to Put Things Where My Mind Wants”: Understanding and Improving the User Experience of Structuring Data in Spreadsheets. Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. (1-24).

    https://doi.org/10.1145/3491102.3501833

  • Bansal G, Smith-Renner A, Buçinca Z, Wu T, Holstein K, Hullman J and Stumpf S. Workshop on Trust and Reliance in AI-Human Teams (TRAIT). Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems. (1-6).

    https://doi.org/10.1145/3491101.3503704

  • Karpagam G, Varma A, M S and V S. (2022). Understanding, Visualizing and Explaining XAI Through Case Studies. 2022 8th International Conference on Advanced Computing and Communication Systems (ICACCS). 10.1109/ICACCS54159.2022.9785199. 978-1-6654-0816-5. (647-654).

    https://ieeexplore.ieee.org/document/9785199/

  • Suresh H, Lewis K, Guttag J and Satyanarayan A. Intuitively Assessing ML Model Reliability through Example-Based Explanations and Editing Model Inputs. Proceedings of the 27th International Conference on Intelligent User Interfaces. (767-781).

    https://doi.org/10.1145/3490099.3511160

  • Nicora G, Rios M, Abu-Hanna A and Bellazzi R. (2022). Evaluating pointwise reliability of machine learning prediction. Journal of Biomedical Informatics. 127:C. Online publication date: 1-Mar-2022.

    https://doi.org/10.1016/j.jbi.2022.103996

  • Williams J, Fiore S and Jentsch F. (2022). Supporting Artificial Social Intelligence With Theory of Mind. Frontiers in Artificial Intelligence. 10.3389/frai.2022.750763. 5.

    https://www.frontiersin.org/articles/10.3389/frai.2022.750763/full

  • Buijsman S and Veluwenkamp H. (2022). Spotting When Algorithms Are Wrong. Minds and Machines. 10.1007/s11023-022-09591-0. 33:4. (541-562).

    https://link.springer.com/10.1007/s11023-022-09591-0

  • Pavlovic M, Geukens S and Wery K. (2021). Hey Google, Have We Met Before? 14th International Conference of the European Academy of Design, Safe Harbours for Design Research. 10.5151/ead2021-139. (426-434).

    http://www.proceedings.blucher.com.br/article-details/36960

  • Jermutus E, Kneale D, Thomas J and Michie S. (2022). Influences on User Trust in Healthcare Artificial Intelligence: A Systematic Review. Wellcome Open Research. 10.12688/wellcomeopenres.17550.1. 7. (65).

    https://wellcomeopenresearch.org/articles/7-65/v1

  • Waldman A and Martin K. (2022). Governing algorithmic decisions: The role of decision importance and governance on perceived legitimacy of algorithmic decisions. Big Data & Society. 10.1177/20539517221100449. 9:1. Online publication date: 1-Jan-2022.

    https://journals.sagepub.com/doi/10.1177/20539517221100449

  • Jagatheesaperumal S, Pham Q, Ruby R, Yang Z, Xu C and Zhang Z. Explainable AI Over the Internet of Things (IoT): Overview, State-of-the-Art and Future Directions. IEEE Open Journal of the Communications Society. 10.1109/OJCOMS.2022.3215676. 3. (2106-2136).

    https://ieeexplore.ieee.org/document/9930971/

  • Ferrario A and Loi M. The Robustness of Counterfactual Explanations Over Time. IEEE Access. 10.1109/ACCESS.2022.3196917. 10. (82736-82750).

    https://ieeexplore.ieee.org/document/9851645/

  • Miller G. (2022). Artificial Intelligence Project Success Factors—Beyond the Ethical Principles. Information Technology for Management: Business and Social Issues. 10.1007/978-3-030-98997-2_4. (65-96).

    https://link.springer.com/10.1007/978-3-030-98997-2_4

  • Lertvittayakumjorn P and Toni F. (2021). Explanation-Based Human Debugging of NLP Models: A Survey. Transactions of the Association for Computational Linguistics. 10.1162/tacl_a_00440. 9. (1508-1528). Online publication date: 30-Dec-2022.

    https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00440/108932/Explanation-Based-Human-Debugging-of-NLP-Models-A

  • Coghlan S, Miller T and Paterson J. (2021). Good Proctor or “Big Brother”? Ethics of Online Exam Supervision Technologies. Philosophy & Technology. 10.1007/s13347-021-00476-1. 34:4. (1581-1606). Online publication date: 1-Dec-2021.

    https://link.springer.com/10.1007/s13347-021-00476-1

  • Yasser A and Abu-Elkhier M. Towards fluid software architectures. Proceedings of the 36th IEEE/ACM International Conference on Automated Software Engineering. (1368-1372).

    https://doi.org/10.1109/ASE51524.2021.9678647

  • Hershcovich D and Donatelli L. (2021). It’s the Meaning That Counts: The State of the Art in NLP and Semantics. KI - Künstliche Intelligenz. 10.1007/s13218-021-00726-6. 35:3-4. (255-270). Online publication date: 1-Nov-2021.

    https://link.springer.com/10.1007/s13218-021-00726-6

  • Suresh H and Guttag J. A Framework for Understanding Sources of Harm throughout the Machine Learning Life Cycle. Proceedings of the 1st ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization. (1-9).

    https://doi.org/10.1145/3465416.3483305

  • Schlicker N and Langer M. Towards Warranted Trust: A Model on the Relation Between Actual and Perceived System Trustworthiness. Proceedings of Mensch und Computer 2021. (325-329).

    https://doi.org/10.1145/3473856.3474018

  • Kastner L, Langer M, Lazar V, Schomacker A, Speith T and Sterz S. (2021). On the Relation of Trust and Explainability: Why to Engineer for Trustworthiness. 2021 IEEE 29th International Requirements Engineering Conference Workshops (REW). 10.1109/REW53955.2021.00031. 978-1-6654-1898-0. (169-175).

    https://ieeexplore.ieee.org/document/9582305/

  • Langer M, Baum K, Hartmann K, Hessel S, Speith T and Wahl J. (2021). Explainability Auditing for Intelligent Systems: A Rationale for Multi-Disciplinary Perspectives. 2021 IEEE 29th International Requirements Engineering Conference Workshops (REW). 10.1109/REW53955.2021.00030. 978-1-6654-1898-0. (164-168).

    https://ieeexplore.ieee.org/document/9582361/

  • Spatola N and Macdorman K. (2021). Why Real Citizens Would Turn to Artificial Leaders. Digital Government: Research and Practice. 2:3. (1-24). Online publication date: 31-Jul-2021.

    https://doi.org/10.1145/3447954

  • Pesonen J. ‘Are You OK?’ Students’ Trust in a Chatbot Providing Support Opportunities. Learning and Collaboration Technologies: Games and Virtual Environments for Learning. (199-215).

    https://doi.org/10.1007/978-3-030-77943-6_13

  • Garcia M, Goranson T and Cardier B. (2021). An Executive for Autonomous Systems, Inspired by Fear Memory Extinction. Systems Engineering and Artificial Intelligence. 10.1007/978-3-030-77283-3_13. (259-282).

    https://link.springer.com/10.1007/978-3-030-77283-3_13

  • Lu T, Lu X, Huang Y and Wang H. Promise or Peril? When Human Efficacy Meets AI Capability Augmentation. SSRN Electronic Journal. 10.2139/ssrn.4298793.

    https://www.ssrn.com/abstract=4298793

  • Lu T and Zhang Y. 1+1>2? Information, Humans, and Machines. SSRN Electronic Journal. 10.2139/ssrn.4045718.

    https://www.ssrn.com/abstract=4045718

  • Ferrario A and Loi M. How Explainability Contributes to Trust in AI. SSRN Electronic Journal. 10.2139/ssrn.4020557.

    https://www.ssrn.com/abstract=4020557

  • Martin K and Waldman A. Perceptions of the Legitimacy of Algorithmic Decision-Making. SSRN Electronic Journal. 10.2139/ssrn.3964900.

    https://www.ssrn.com/abstract=3964900

  • Zhang R, Flathmann C, Musick G, Schelble B, McNeese N, Knijnenburg B and Duan W. I Know This Looks Bad, But I Can Explain: Understanding When AI Should Explain Actions In Human-AI Teams. ACM Transactions on Interactive Intelligent Systems. 0:0.

    https://doi.org/10.1145/3635474