  • Nizette F, Hammedi W, van Riel A and Steils N. (2024). Why should I trust you? Influence of explanation design on consumer behavior in AI-based services. Journal of Service Management. 10.1108/JOSM-05-2024-0223. 36:1. (50-74). Online publication date: 10-Feb-2025.

    https://www.emerald.com/insight/content/doi/10.1108/JOSM-05-2024-0223/full/html

  • Mathew D, Ebem D, Ikegwu A, Ukeoma P and Dibiaezue N. (2025). Recent Emerging Techniques in Explainable Artificial Intelligence to Enhance the Interpretable and Understanding of AI Models for Human. Neural Processing Letters. 10.1007/s11063-025-11732-2. 57:1.

    https://link.springer.com/10.1007/s11063-025-11732-2

  • Schmude T, Koesten L, Möller T and Tschiatschek S. (2025). Information that matters. International Journal of Human-Computer Studies. 193:C. Online publication date: 1-Jan-2025.

    https://doi.org/10.1016/j.ijhcs.2024.103380

  • Xuan Y, Small E, Sokol K, Hettiachchi D and Sanderson M. (2025). Comprehension is a double-edged sword: Over-interpreting unspecified information in intelligible machine learning explanations. International Journal of Human-Computer Studies. 10.1016/j.ijhcs.2024.103376. 193. (103376). Online publication date: 1-Jan-2025.

    https://linkinghub.elsevier.com/retrieve/pii/S1071581924001599

  • Mill E, Garn W, Ryman-Tubb N and Turner C. (2024). The SAGE Framework for Explaining Context in Explainable Artificial Intelligence. Applied Artificial Intelligence. 10.1080/08839514.2024.2318670. 38:1. Online publication date: 31-Dec-2025.

    https://www.tandfonline.com/doi/full/10.1080/08839514.2024.2318670

  • Walter Y. (2024). Digital Psychology: Introducing a Conceptual Impact Model and the Future of Work. Trends in Psychology. 10.1007/s43076-024-00408-w.

    https://link.springer.com/10.1007/s43076-024-00408-w

  • Piscitelli M, Razzano G, Buscemi G and Capozzoli A. (2024). An interpretable data analytics-based energy benchmarking process for supporting retrofit decisions in large residential building stocks. Energy and Buildings. 10.1016/j.enbuild.2024.115115. (115115). Online publication date: 1-Nov-2024.

    https://linkinghub.elsevier.com/retrieve/pii/S0378778824012313

  • Casolin E and Salim F. Towards Understanding Human-AI Reliance Patterns Through Explanation Styles. Companion of the 2024 on ACM International Joint Conference on Pervasive and Ubiquitous Computing. (861-865).

    https://doi.org/10.1145/3675094.3678996

  • Litvinova Y, Mikalef P and Luo X. (2024). Framework for human–XAI symbiosis: extended self from the dual-process theory perspective. Journal of Business Analytics. 10.1080/2573234X.2024.2396366. 7:4. (224-255). Online publication date: 1-Oct-2024.

    https://www.tandfonline.com/doi/full/10.1080/2573234X.2024.2396366

  • Cálem J, Moreira C and Jorge J. (2024). Intelligent systems in healthcare. Computers in Biology and Medicine. 180:C. Online publication date: 1-Sep-2024.

    https://doi.org/10.1016/j.compbiomed.2024.108908

  • Tikhomirov L, Semmler C, McCradden M, Searston R, Ghassemi M and Oakden-Rayner L. (2024). Medical artificial intelligence for clinicians: the lost cognitive perspective. The Lancet Digital Health. 10.1016/S2589-7500(24)00095-5. 6:8. (e589-e594). Online publication date: 1-Aug-2024.

    https://linkinghub.elsevier.com/retrieve/pii/S2589750024000955

  • Pisoni G and Moloney M. (2024). Responsible AI-Based Business Process Management and Improvement. Digital Society. 10.1007/s44206-024-00105-2. 3:2. Online publication date: 1-Aug-2024.

    https://link.springer.com/10.1007/s44206-024-00105-2

  • Kong X, Liu S and Zhu L. (2024). Toward Human-centered XAI in Practice: A survey. Machine Intelligence Research. 10.1007/s11633-022-1407-3. 21:4. (740-770). Online publication date: 1-Aug-2024.

    https://link.springer.com/10.1007/s11633-022-1407-3

  • Calero Valdez A, Heine M, Franke T, Jochems N, Jetter H and Schrills T. (2024). The European commitment to human-centered technology: the integral role of HCI in the EU AI Act’s success. i-com. 10.1515/icom-2024-0014. 23:2. (249-261). Online publication date: 31-Jul-2024.

    https://www.degruyter.com/document/doi/10.1515/icom-2024-0014/html

  • Khan A, Hughes J, Valentine D, Ruis L, Sachan K, Radhakrishnan A, Grefenstette E, Bowman S, Rocktäschel T and Perez E. Debating with more persuasive LLMs leads to more truthful answers. Proceedings of the 41st International Conference on Machine Learning. (23662-23733).

    https://dl.acm.org/doi/10.5555/3692070.3693020

  • Fok R and Weld D. (2024). In search of verifiability: Explanations rarely enable complementary performance in AI‐advised decision making. AI Magazine. 10.1002/aaai.12182.

    https://onlinelibrary.wiley.com/doi/10.1002/aaai.12182

  • Bhattacharya A and Verbert K. "How Good Is Your Explanation?": Towards a Standardised Evaluation Approach for Diverse XAI Methods on Multiple Dimensions of Explainability. Adjunct Proceedings of the 32nd ACM Conference on User Modeling, Adaptation and Personalization. (513-515).

    https://doi.org/10.1145/3631700.3664911

  • Salimzadeh S and Gadiraju U. “DecisionTime”: A Configurable Framework for Reproducible Human-AI Decision-Making Studies. Adjunct Proceedings of the 32nd ACM Conference on User Modeling, Adaptation and Personalization. (66-69).

    https://doi.org/10.1145/3631700.3664885

  • Salimzadeh S and Gadiraju U. When in Doubt! Understanding the Role of Task Characteristics on Peer Decision-Making with AI Assistance. Proceedings of the 32nd ACM Conference on User Modeling, Adaptation and Personalization. (89-101).

    https://doi.org/10.1145/3627043.3659567

  • Sullivan E. SIDEs: Separating Idealization from Deceptive 'Explanations' in xAI. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency. (1714-1724).

    https://doi.org/10.1145/3630106.3658999

  • Qu J, Arguello J and Wang Y. Why is "Problems" Predictive of Positive Sentiment? A Case Study of Explaining Unintuitive Features in Sentiment Classification. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency. (161-172).

    https://doi.org/10.1145/3630106.3658547

  • Fresz B, Lörcher L and Huber M. Classification Metrics for Image Explanations: Towards Building Reliable XAI-Evaluations. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency. (1-19).

    https://doi.org/10.1145/3630106.3658537

  • Buijsman S. (2024). Transparency for AI systems: a value-based approach. Ethics and Information Technology. 26:2. Online publication date: 1-Jun-2024.

    https://doi.org/10.1007/s10676-024-09770-w

  • Ehsan U, Passi S, Liao Q, Chan L, Lee I, Muller M and Riedl M. The Who in XAI: How AI Background Shapes Perceptions of AI Explanations. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems. (1-32).

    https://doi.org/10.1145/3613904.3642474

  • Salimzadeh S, He G and Gadiraju U. Dealing with Uncertainty: Understanding the Impact of Prognostic Versus Diagnostic Tasks on Trust and Reliance in Human-AI Decision Making. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems. (1-17).

    https://doi.org/10.1145/3613904.3641905

  • Walter Y. (2023). The rapid competitive economy of machine learning development: a discussion on the social risks and benefits. AI and Ethics. 10.1007/s43681-023-00276-7. 4:2. (635-648). Online publication date: 1-May-2024.

    https://link.springer.com/10.1007/s43681-023-00276-7

  • Okolo C, Agarwal D, Dell N and Vashistha A. (2024). "If it is easy to understand then it will have value": Examining Perceptions of Explainable AI with Community Health Workers in Rural India. Proceedings of the ACM on Human-Computer Interaction. 8:CSCW1. (1-28). Online publication date: 17-Apr-2024.

    https://doi.org/10.1145/3637348

  • Rong Y, Leemann T, Nguyen T, Fiedler L, Qian P, Unhelkar V, Seidel T, Kasneci G and Kasneci E. (2023). Towards Human-Centered Explainable AI: A Survey of User Studies for Model Explanations. IEEE Transactions on Pattern Analysis and Machine Intelligence. 46:4. (2104-2122). Online publication date: 1-Apr-2024.

    https://doi.org/10.1109/TPAMI.2023.3331846

  • Messeri L and Crockett M. (2024). Artificial intelligence and illusions of understanding in scientific research. Nature. 10.1038/s41586-024-07146-0. 627:8002. (49-58). Online publication date: 7-Mar-2024.

    https://www.nature.com/articles/s41586-024-07146-0

  • Subramanian H, Canfield C and Shank D. (2024). Designing explainable AI to improve human-AI team performance. Artificial Intelligence in Medicine. 149:C. Online publication date: 1-Mar-2024.

    https://doi.org/10.1016/j.artmed.2024.102780

  • Al-Ansari N, Al-Thani D, Al-Mansoori R and Duradoni M. (2024). User‐Centered Evaluation of Explainable Artificial Intelligence (XAI): A Systematic Literature Review. Human Behavior and Emerging Technologies. 10.1155/2024/4628855. 2024:1. Online publication date: 1-Jan-2024.

    https://onlinelibrary.wiley.com/doi/10.1155/2024/4628855

  • Pawar U, O'Shea D, O'Reilly R, Costello M and Beder C. Optimal Neighborhood Contexts in Explainable AI: An Explanandum-Based Evaluation. IEEE Open Journal of the Computer Society. 10.1109/OJCS.2024.3389781. 5. (181-194).

    https://ieeexplore.ieee.org/document/10504877/

  • Bedi P, Thukral A and Dhiman S. (2024). Explainable AI in Disease Diagnosis. Explainable AI in Health Informatics. 10.1007/978-981-97-3705-5_5. (87-111).

    https://link.springer.com/10.1007/978-981-97-3705-5_5

  • Shajalal M, Boden A, Stevens G, Du D and Kern D. (2024). Explaining AI Decisions: Towards Achieving Human-Centered Explainability in Smart Home Environments. Explainable Artificial Intelligence. 10.1007/978-3-031-63803-9_23. (418-440).

    https://link.springer.com/10.1007/978-3-031-63803-9_23

  • Sonenberg L. (2023). Logics and collaboration. Logic Journal of the IGPL. 10.1093/jigpal/jzad006. 31:6. (1024-1046). Online publication date: 27-Nov-2023.

    https://academic.oup.com/jigpal/article/31/6/1024/7148508

  • He G, Buijsman S and Gadiraju U. (2023). How Stated Accuracy of an AI System and Analogies to Explain Accuracy Affect Human Reliance on the System. Proceedings of the ACM on Human-Computer Interaction. 7:CSCW2. (1-29). Online publication date: 28-Sep-2023.

    https://doi.org/10.1145/3610067

  • Jain P, Farzan R and Lee A. (2023). Co-Designing with Users the Explanations for a Proactive Auto-Response Messaging Agent. Proceedings of the ACM on Human-Computer Interaction. 7:MHCI. (1-23). Online publication date: 11-Sep-2023.

    https://doi.org/10.1145/3604248

  • Yang W, Wei Y, Wei H, Chen Y, Huang G, Li X, Li R, Yao N, Wang X, Gu X, Amin M and Kang B. (2023). Survey on Explainable AI: From Approaches, Limitations and Applications Aspects. Human-Centric Intelligent Systems. 10.1007/s44230-023-00038-y. 3:3. (161-188).

    https://link.springer.com/10.1007/s44230-023-00038-y

  • Schrills T, Gruner M, Peuscher H and Franke T. Safe Environments to Understand Medical AI - Designing a Diabetes Simulation Interface for Users of Automated Insulin Delivery. Digital Human Modeling and Applications in Health, Safety, Ergonomics and Risk Management. (306-328).

    https://doi.org/10.1007/978-3-031-35748-0_23

  • de Santana V, Fucs A, Segura V, de Moraes D and Cerqueira R. (2023). Predicting the need for XAI from high-granularity interaction data. International Journal of Human-Computer Studies. 175:C. Online publication date: 1-Jul-2023.

    https://doi.org/10.1016/j.ijhcs.2023.103029

  • Salimzadeh S, He G and Gadiraju U. A Missing Piece in the Puzzle: Considering the Role of Task Complexity in Human-AI Decision Making. Proceedings of the 31st ACM Conference on User Modeling, Adaptation and Personalization. (215-227).

    https://doi.org/10.1145/3565472.3592959

  • Lai V, Chen C, Smith-Renner A, Liao Q and Tan C. Towards a Science of Human-AI Decision Making: An Overview of Design Space in Empirical Human-Subject Studies. Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency. (1369-1385).

    https://doi.org/10.1145/3593013.3594087

  • Nannini L, Balayn A and Smith A. Explainability in AI Policies: A Critical Review of Communications, Reports, Regulations, and Standards in the EU, US, and UK. Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency. (1198-1212).

    https://doi.org/10.1145/3593013.3594074

  • Miller T. Explainable AI is Dead, Long Live Explainable AI!. Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency. (333-342).

    https://doi.org/10.1145/3593013.3594001

  • Balayn A, Rikalo N, Yang J and Bozzon A. Faulty or Ready? Handling Failures in Deep-Learning Computer Vision Models until Deployment: A Study of Practices, Challenges, and Needs. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. (1-20).

    https://doi.org/10.1145/3544548.3581555

  • Verma H, Mlynar J, Schaer R, Reichenbach J, Jreige M, Prior J, Evéquoz F and Depeursinge A. Rethinking the Role of AI with Physicians in Oncology: Revealing Perspectives from Clinical and Research Workflows. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. (1-19).

    https://doi.org/10.1145/3544548.3581506

  • Bertrand A, Viard T, Belloum R, Eagan J and Maxwell W. On Selective, Mutable and Dialogic XAI: a Review of What Users Say about Different Types of Interactive Explanations. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. (1-21).

    https://doi.org/10.1145/3544548.3581314

  • Sivaraman V, Bukowski L, Levin J, Kahn J and Perer A. Ignore, Trust, or Negotiate: Understanding Clinician Acceptance of AI-Based Treatment Recommendations in Health Care. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. (1-18).

    https://doi.org/10.1145/3544548.3581075

  • He G, Kuiper L and Gadiraju U. Knowing About Knowing: An Illusion of Human Competence Can Hinder Appropriate Reliance on AI Systems. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. (1-18).

    https://doi.org/10.1145/3544548.3581025

  • Brachman M, Pan Q, Do H, Dugan C, Chaudhary A, Johnson J, Rai P, Chakraborti T, Gschwind T, Laredo J, Miksovic C, Scotton P, Talamadupula K and Thomas G. Follow the Successful Herd: Towards Explanations for Improved Use and Mental Models of Natural Language Systems. Proceedings of the 28th International Conference on Intelligent User Interfaces. (220-239).

    https://doi.org/10.1145/3581641.3584088

  • Bove C, Lesot M, Tijus C and Detyniecki M. Investigating the Intelligibility of Plural Counterfactual Examples for Non-Expert Users: an Explanation User Interface Proposition and User Study. Proceedings of the 28th International Conference on Intelligent User Interfaces. (188-203).

    https://doi.org/10.1145/3581641.3584082

  • Cau F, Hauptmann H, Spano L and Tintarev N. Supporting High-Uncertainty Decisions through AI and Logic-Style Explanations. Proceedings of the 28th International Conference on Intelligent User Interfaces. (251-263).

    https://doi.org/10.1145/3581641.3584080

  • Zhang Z, Storath C, Liu Y and Butz A. Resilience Through Appropriation: Pilots’ View on Complex Decision Support. Proceedings of the 28th International Conference on Intelligent User Interfaces. (397-409).

    https://doi.org/10.1145/3581641.3584056

  • Ferrell B, Raskin S and Zimmerman E. (2023). Calibrating a Transformer-Based Model’s Confidence on Community-Engaged Research Studies: Decision Support Evaluation Study. JMIR Formative Research. 10.2196/41516. 7. (e41516).

    https://formative.jmir.org/2023/1/e41516

  • Belaid M, Bornemann R, Rabus M, Krestel R and Hüllermeier E. (2023). Compare-xAI: Toward Unifying Functional Testing Methods for Post-hoc XAI Algorithms into a Multi-dimensional Benchmark. Explainable Artificial Intelligence. 10.1007/978-3-031-44067-0_5. (88-109).

    https://link.springer.com/10.1007/978-3-031-44067-0_5

  • Wang X and Yin M. (2022). Effects of Explanations in AI-Assisted Decision Making: Principles and Comparisons. ACM Transactions on Interactive Intelligent Systems. 12:4. (1-36). Online publication date: 31-Dec-2022.

    https://doi.org/10.1145/3519266

  • Villareale J, Harteveld C and Zhu J. (2022). "I Want To See How Smart This AI Really Is": Player Mental Model Development of an Adversarial AI Player. Proceedings of the ACM on Human-Computer Interaction. 6:CHI PLAY. (1-26). Online publication date: 25-Oct-2022.

    https://doi.org/10.1145/3549482

  • Nakao Y, Stumpf S, Ahmed S, Naseer A and Strappelli L. (2022). Toward Involving End-users in Interactive Human-in-the-loop AI Fairness. ACM Transactions on Interactive Intelligent Systems. 12:3. (1-30). Online publication date: 30-Sep-2022.

    https://doi.org/10.1145/3514258

  • Weber T and Hußmann H. Tooling for Developing Data-Driven Applications: Overview and Outlook. Proceedings of Mensch und Computer 2022. (66-77).

    https://doi.org/10.1145/3543758.3543779

  • Buijsman S. (2022). Defining Explanation and Explanatory Depth in XAI. Minds and Machines. 32:3. (563-584). Online publication date: 1-Sep-2022.

    https://doi.org/10.1007/s11023-022-09607-9

  • Martin T. (2022). On the Need for Collaborative Intelligence in Cybersecurity. Electronics. 10.3390/electronics11132067. 11:13. (2067).

    https://www.mdpi.com/2079-9292/11/13/2067

  • Balayn A, Rikalo N, Lofi C, Yang J and Bozzon A. How can Explainability Methods be Used to Support Bug Identification in Computer Vision Models?. Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. (1-16).

    https://doi.org/10.1145/3491102.3517474

  • Khanna R, Dodge J, Anderson A, Dikkala R, Irvine J, Shureih Z, Lam K, Matthews C, Lin Z, Kahng M, Fern A and Burnett M. (2022). Finding AI’s Faults with AAR/AI: An Empirical Study. ACM Transactions on Interactive Intelligent Systems. 12:1. (1-33). Online publication date: 31-Mar-2022.

    https://doi.org/10.1145/3487065

  • Mukund Deshpande N, Gite S, Pradhan B and Ebraheem Assiri M. (2022). Explainable Artificial Intelligence–A New Step towards the Trust in Medical Diagnosis with AI Frameworks: A Review. Computer Modeling in Engineering & Sciences. 10.32604/cmes.2022.021225. 133:3. (843-872).

    https://www.techscience.com/CMES/v133n3/49216

  • Chromik M. Making SHAP Rap: Bridging Local and Global Insights Through Interaction and Narratives. Human-Computer Interaction – INTERACT 2021. (641-651).

    https://doi.org/10.1007/978-3-030-85616-8_37

  • Molnar C, König G, Herbinger J, Freiesleben T, Dandl S, Scholbeck C, Casalicchio G, Grosse-Wentrup M and Bischl B. General Pitfalls of Model-Agnostic Interpretation Methods for Machine Learning Models. xxAI - Beyond Explainable AI. (39-68).

    https://doi.org/10.1007/978-3-031-04083-2_4