
Designing a Direct Feedback Loop between Humans and Convolutional Neural Networks through Local Explanations

Published: 04 October 2023

Abstract

Local explanations provide heatmaps on images to show how Convolutional Neural Networks (CNNs) derive their output. Due to their visual straightforwardness, they have become one of the most popular explainable AI (XAI) methods for diagnosing CNNs. Through our formative study (S1), however, we captured ML engineers' ambivalent perspective on local explanations: a valuable and indispensable practice in building CNNs, yet an exhausting process, owing to the heuristic nature of detecting vulnerabilities. Moreover, steering a CNN based on the vulnerabilities learned from diagnosis seemed highly challenging. To mitigate this gap, we designed DeepFuse, the first interactive system that realizes a direct feedback loop between a user and a CNN for diagnosing and revising the CNN's vulnerabilities using local explanations. DeepFuse helps CNN engineers systematically search for "unreasonable" local explanations and annotate new boundaries for those identified as unreasonable in a labor-efficient manner. It then steers the model based on the given annotations so that the model does not repeat similar mistakes. We conducted a two-day study (S2) with 12 experienced CNN engineers. Using DeepFuse, participants built a more accurate and "reasonable" model than the current state of the art. Participants also found that the way DeepFuse guides case-based reasoning could practically improve their current practice. We provide implications for design that explain how future HCI-driven design can move our practice forward to make XAI-driven insights more actionable.
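The "heatmaps" the abstract refers to are typically class-activation maps of the kind Grad-CAM produces: a weighted, ReLU-ed sum of the final convolutional layer's feature maps. Below is a minimal NumPy sketch of that computation, together with an illustrative alignment penalty in the spirit of the "steering from annotations" idea. The feature maps, channel weights, and the squared-error penalty here are hypothetical assumptions for illustration, not DeepFuse's actual model or objective.

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """Weighted sum of conv feature maps followed by ReLU and
    normalization: the core of CAM/Grad-CAM-style local explanations.

    feature_maps:  (K, H, W) activations of the final conv layer
    class_weights: (K,) per-channel importance for the target class
                   (in Grad-CAM, spatially pooled gradients)
    """
    cam = np.tensordot(class_weights, feature_maps, axes=1)  # -> (H, W)
    cam = np.maximum(cam, 0.0)                               # ReLU
    if cam.max() > 0:
        cam = cam / cam.max()                                # scale to [0, 1]
    return cam

def attention_alignment_penalty(cam, human_mask):
    """Illustrative loss term nudging the model's heatmap toward a
    human-annotated region (the 'steering' idea in the abstract);
    the actual DeepFuse objective may differ."""
    return float(np.mean((cam - human_mask) ** 2))

# Toy example with made-up activations, weights, and annotation.
rng = np.random.default_rng(0)
fmaps = rng.random((4, 7, 7))                 # 4 channels, 7x7 grid
weights = np.array([0.5, -0.2, 0.8, 0.1])
heatmap = class_activation_map(fmaps, weights)
mask = np.zeros_like(heatmap)                 # annotator marked nothing here
penalty = attention_alignment_penalty(heatmap, mask)
```

In a real pipeline the heatmap would be upsampled to the input resolution and overlaid on the image; the penalty would be added to the classification loss during fine-tuning.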




Published In

Proceedings of the ACM on Human-Computer Interaction, Volume 7, Issue CSCW2
October 2023
4055 pages
EISSN: 2573-0142
DOI: 10.1145/3626953

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      Published: 04 October 2023
      Published in PACMHCI Volume 7, Issue CSCW2


      Author Tags

      1. alignable AI
      2. convolutional neural network (CNN)
      3. deepfuse
      4. explainable AI (XAI)
      5. interactive attention alignment (IAA)
      6. steerable AI

      Qualifiers

      • Research-article

