DOI: 10.1145/3581641.3584080
Research article | Open access

Supporting High-Uncertainty Decisions through AI and Logic-Style Explanations

Published: 27 March 2023

Abstract

A common criterion for Explainable AI (XAI) is to support users in establishing appropriate trust in the AI – rejecting advice when it is incorrect, and accepting advice when it is correct. Previous findings suggest that explanations can cause over-reliance on the AI (overly accepting its advice). Evoking appropriate trust through explanations is even more challenging for decision-making tasks that are difficult for both humans and AI. For this reason, we study decision-making by non-experts in the high-uncertainty domain of stock trading. We compare the effectiveness of three explanation styles (influenced by inductive, abductive, and deductive reasoning) and the role of AI confidence in terms of a) the users’ reliance on the XAI interface elements (charts with indicators, AI prediction, explanation), b) the correctness of the decision (task performance), and c) the agreement with the AI’s prediction. In contrast to previous work, we examine interactions between different aspects of decision-making, including AI correctness, and the combined effects of AI confidence and explanation styles. Our results show that specific explanation styles (abductive and deductive) improve the user’s task performance when AI confidence is high, compared to inductive explanations. In other words, these explanation styles elicited correct decisions (both positive and negative) when the system was certain. In this condition, the agreement between the user’s decision and the AI’s prediction confirms this finding, showing a significant increase in agreement when the AI is correct. This suggests that both explanation styles are suitable for evoking appropriate trust in a confident AI.
    Our findings further indicate a need to consider AI confidence as a criterion for including or excluding explanations from AI interfaces. In addition, this paper highlights the importance of carefully selecting an explanation style according to the characteristics of the task and data.
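The study pairs a random-forest trend predictor with a displayed confidence score (see the author tags: Random forest, Stock market prediction, AI confidence). As a minimal, hypothetical sketch of how such a confidence signal can be derived – not the authors' actual pipeline – the example below trains a scikit-learn random forest on synthetic stand-ins for technical indicators and uses the trees' class-probability vote as the confidence shown to the user:

```python
# Hypothetical sketch (assumptions: scikit-learn, synthetic features standing
# in for technical indicators such as rate of change or stochastics).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "indicator" features; a real pipeline would compute these
# from price history.
X = rng.normal(size=(1000, 4))
# Toy label: next-day trend up (1) or down (0), weakly tied to feature 0.
y = (X[:, 0] + rng.normal(scale=2.0, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# predict_proba returns the fraction of trees voting for each class; the
# maximum per row can serve as the "AI confidence" surfaced in the interface.
proba = model.predict_proba(X_test)
prediction = proba.argmax(axis=1)
confidence = proba.max(axis=1)

for p, c in list(zip(prediction, confidence))[:3]:
    print(f"AI prediction: {'UP' if p == 1 else 'DOWN'} (confidence {c:.0%})")
```

A threshold on this confidence value is one plausible way to implement the paper's suggestion of including or excluding explanations depending on how certain the AI is.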



Published In

IUI '23: Proceedings of the 28th International Conference on Intelligent User Interfaces
March 2023, 972 pages
ISBN: 9798400701061
DOI: 10.1145/3581641

Publisher

Association for Computing Machinery, New York, NY, United States


    Author Tags

    1. AI confidence
    2. Abductive
    3. Deductive
    4. Inductive
    5. Logical reasoning
    6. Random forest
    7. Stock market prediction
    8. XAI

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Funding Sources

    • Sardinia Regional Government and Fondazione di Sardegna

Conference

IUI '23

Acceptance Rates

Overall Acceptance Rate: 746 of 2,811 submissions, 27%


