DOI: 10.1145/3613904.3642754

Bridging the Gulf of Envisioning: Cognitive Challenges in Prompt Based Interactions with LLMs

Published: 11 May 2024

Abstract

Large language models (LLMs) exhibit dynamic capabilities and appear to comprehend complex and ambiguous natural language prompts. However, calibrating LLM interactions is challenging for interface designers and end-users alike. A central issue is our limited grasp of how human cognitive processes begin with a goal and form intentions for executing actions, a blind spot even in established interaction models such as Norman’s gulfs of execution and evaluation. To address this gap, we theorize how end-users ‘envision’ translating their goals into clear intentions and craft prompts to obtain the desired LLM response. We define a process of Envisioning by highlighting three misalignments rooted in not knowing: (1) what the task should be, (2) how to instruct the LLM to do the task, and (3) what to expect from the LLM’s output in meeting the goal. Finally, we make recommendations to narrow the gulf of envisioning in human-LLM interactions.
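
The three misalignments are easiest to see in a concrete prompting scenario. The short sketch below is not from the paper; it is a hypothetical illustration, assuming the OpenAI Python client, of where each misalignment can surface when an end-user turns a vague goal into a prompt. The model name and the report placeholder are assumptions for illustration only.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Misalignment 1: the task itself is still underspecified; the user has not
# decided what a useful summary means for their goal (audience, length, focus).
goal = "Summarize this report for my manager."

# Misalignment 2: the instruction does not tell the LLM how to do the task
# (no format, tone, or constraints), so the model is left to guess.
prompt = goal + "\n\nReport:\n<report text here>"  # placeholder text, not real data

response = client.chat.completions.create(
    model="gpt-4o",  # hypothetical model choice
    messages=[{"role": "user", "content": prompt}],
)

# Misalignment 3: with no stated expectations, the user lacks clear criteria
# for judging whether this output actually meets the original goal.
print(response.choices[0].message.content)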

Supplemental Material

MP4 File - Video Presentation
Transcript for: Video Presentation

Information

Published In

CHI '24: Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems
May 2024
18961 pages
ISBN:9798400703300
DOI:10.1145/3613904
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 11 May 2024

Author Tags

  1. cognitive psychology
  2. large language models
  3. prompt-based interactions

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Funding Sources

  • NSF

Conference

CHI '24

Acceptance Rates

Overall Acceptance Rate 6,199 of 26,314 submissions, 24%

Article Metrics

  • Downloads (Last 12 months): 2,013
  • Downloads (Last 6 weeks): 231
Reflects downloads up to 25 Jan 2025

Cited By

  • (2025) An Emerging Design Space of How Tools Support Collaborations in AI Design and Development. Proceedings of the ACM on Human-Computer Interaction 9(1), 1–28. https://doi.org/10.1145/3701181. Online publication date: 10-Jan-2025
  • (2025) Integrating Computational Thinking via AI-Based Design-Based Learning Activities. In Integrating Computational Thinking Through Design-Based Learning, 45–61. https://doi.org/10.1007/978-981-96-0853-9_4. Online publication date: 3-Jan-2025
  • (2024) Against Generative UI. Proceedings of the Halfway to the Future Symposium, 1–4. https://doi.org/10.1145/3686169.3686184. Online publication date: 21-Oct-2024
  • (2024) WaitGPT: Monitoring and Steering Conversational LLM Agent in Data Analysis with On-the-Fly Code Visualization. Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology, 1–14. https://doi.org/10.1145/3654777.3676374. Online publication date: 13-Oct-2024
  • (2024) DesignPrompt: Using Multimodal Interaction for Design Exploration with Generative AI. Proceedings of the 2024 ACM Designing Interactive Systems Conference, 804–818. https://doi.org/10.1145/3643834.3661588. Online publication date: 1-Jul-2024
  • (2024) Interactions with Generative Information Retrieval Systems. In Information Access in the Era of Generative AI, 47–71. https://doi.org/10.1007/978-3-031-73147-1_3. Online publication date: 12-Sep-2024
