DOI: 10.1145/3613904.3642160
Research Article

Generative AI in the Wild: Prospects, Challenges, and Strategies

Published: 11 May 2024
    Abstract

    Propelled by their remarkable capabilities to generate novel and engaging content, Generative Artificial Intelligence (GenAI) technologies are disrupting traditional workflows in many industries. While prior research has examined GenAI from a techno-centric perspective, there is still a lack of understanding about how users perceive and utilize GenAI in real-world scenarios. To bridge this gap, we conducted semi-structured interviews with 18 GenAI users (N = 18) in creative industries, investigating the human-GenAI co-creation process within a holistic LUA (Learning, Using and Assessing) framework. Our study uncovered an intriguingly complex landscape: Prospects – GenAI greatly fosters the co-creation between human expertise and GenAI capabilities, profoundly transforming creative workflows; Challenges – Meanwhile, users face substantial uncertainties and complexities arising from resource availability, tool usability, and regulatory compliance; Strategies – In response, users actively devise various strategies to overcome many of these challenges. Our study reveals key implications for the design of future GenAI tools.

    Supplemental Material

    MP4 File: Video Presentation




      Published In

      CHI '24: Proceedings of the CHI Conference on Human Factors in Computing Systems
      May 2024
      18961 pages
      ISBN: 9798400703300
      DOI: 10.1145/3613904
      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.


      Publisher

      Association for Computing Machinery

      New York, NY, United States



      Author Tags

      1. Generative AI
      2. Human-AI Collaboration
      3. Transparency
      4. User Agency

      Qualifiers

      • Research-article
      • Research
      • Refereed limited


      Conference

      CHI '24

      Acceptance Rates

      Overall Acceptance Rate 6,199 of 26,314 submissions, 24%



      Article Metrics

      • Total Citations: 0
      • Total Downloads: 1,673
      • Downloads (Last 12 months): 1,673
      • Downloads (Last 6 weeks): 266
      Reflects downloads up to 09 Aug 2024
