Research Article | Open Access
DOI: 10.1145/3630106.3658984

Investigating and Designing for Trust in AI-powered Code Generation Tools

Published: 05 June 2024

Abstract

Trust is a crucial factor in the adoption and responsible use of generative AI tools for complex tasks such as software engineering. However, we have a limited understanding of how software developers evaluate the trustworthiness of AI-powered code generation tools in real-world settings. To address this gap, we conducted Study 1, an interview study with 17 developers who use AI-powered code generation tools in professional or personal settings. We found that developers’ trust is rooted in the AI tool’s perceived ability, integrity, and benevolence, and is situational, varying with the context of use. Existing AI code generation tools lack the affordances developers need to efficiently and effectively evaluate their trustworthiness. To explore designs that could augment the existing interfaces of AI-powered code generation tools, we developed three sets of design concepts (suggestion quality indicators, usage stats, and control mechanisms) derived from the Study 1 findings. In Study 2, a design probe study with 12 developers, we investigated the potential of these design concepts to help developers make effective trust judgments. We discuss the implications of our findings for the design of AI-powered code generation tools and for future research on trust in AI.


Published In

FAccT '24: Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency
June 2024
2580 pages
ISBN: 9798400704505
DOI: 10.1145/3630106
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 05 June 2024

Author Tags

  1. generative AI
  2. human-AI interaction
  3. software engineering tooling
  4. trust in AI

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

FAccT '24

Article Metrics

  • Downloads (Last 12 months): 1,436
  • Downloads (Last 6 weeks): 258
Reflects downloads up to 06 Feb 2025

Cited By

  • (2025) Evaluation and Prediction of Human Software Developers’ Perception of Large Language Models Suggestions Using GitHub Data. Supercomputing, 347–361. https://doi.org/10.1007/978-3-031-78459-0_25. Online publication date: 31-Jan-2025.
  • (2024) It Helps with Crap Lecturers and Their Low Effort: Investigating Computer Science Students’ Perceptions of Using ChatGPT for Learning. Education Sciences 14(10), 1106. https://doi.org/10.3390/educsci14101106. Online publication date: 11-Oct-2024.
  • (2024) A Transformer-Based Approach for Smart Invocation of Automatic Code Completion. Proceedings of the 1st ACM International Conference on AI-Powered Software, 28–37. https://doi.org/10.1145/3664646.3664760. Online publication date: 10-Jul-2024.
  • (2024) "I look at it as the king of knowledge": How Blind People Use and Understand Generative AI Tools. Proceedings of the 26th International ACM SIGACCESS Conference on Computers and Accessibility, 1–14. https://doi.org/10.1145/3663548.3675631. Online publication date: 27-Oct-2024.
  • (2024) It’s Organic: Software Testing of Emerging Domains (Keynote). Companion Proceedings of the 32nd ACM International Conference on the Foundations of Software Engineering, 2–3. https://doi.org/10.1145/3663529.3674720. Online publication date: 10-Jul-2024.
  • (2024) Empirical Evidence on Conversational Control of GUI in Semantic Automation. Proceedings of the 29th International Conference on Intelligent User Interfaces, 869–885. https://doi.org/10.1145/3640543.3645172. Online publication date: 5-Apr-2024.
  • (2024) BISCUIT: Scaffolding LLM-Generated Code with Ephemeral UIs in Computational Notebooks. 2024 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC), 13–23. https://doi.org/10.1109/VL/HCC60511.2024.00012. Online publication date: 2-Sep-2024.
  • (2024) Understanding and Designing for Trust in AI-Powered Developer Tooling. IEEE Software 41(6), 23–28. https://doi.org/10.1109/MS.2024.3439108. Online publication date: 4-Oct-2024.
  • (2024) AI Pair Programming Acceptance: A Value-Based Approach with AHP Analysis. 2024 10th International Conference on Control, Decision and Information Technologies (CoDIT), 556–561. https://doi.org/10.1109/CoDIT62066.2024.10708135. Online publication date: 1-Jul-2024.
  • (2024) Ensemble Balanced Nested Dichotomy Fuzzy Models for Software Requirement Risk Prediction. IEEE Access 12, 146225–146243. https://doi.org/10.1109/ACCESS.2024.3473942. Online publication date: 2024.
