ASE '22 Conference Proceedings · Short Paper
DOI: 10.1145/3551349.3559525

Answering Software Deployment Questions via Neural Machine Reading at Scale

Published: 05 January 2023

Abstract

As software systems continue to grow in complexity and scale, deploying and delivering them becomes increasingly difficult. In this work, we develop DeployQA, a novel QA bot that automatically answers software deployment questions over user manuals and Stack Overflow posts. DeployQA is built upon RoBERTa. To bridge the gap between natural language and the domain of software deployment, we propose three adaptations in terms of vocabulary, pre-training, and fine-tuning, respectively. We evaluate our approach on our constructed DeQuAD dataset. The results show that DeployQA remarkably outperforms baseline methods by leveraging the three domain adaptation strategies.
Repository: https://github.com/Smallqqqq/DeployQA
Video: https://www.youtube.com/watch?v=TVf9w8gD3Ho
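
The abstract describes a retriever-reader pipeline: candidate passages are drawn from user manuals and Stack Overflow posts, and a RoBERTa-based reader extracts the answer span, with domain adaptation applied to vocabulary, pre-training, and fine-tuning. The sketch below illustrates only this general retriever-reader pattern, not DeployQA itself; the BM25 retriever (rank_bm25), the off-the-shelf reader checkpoint (deepset/roberta-base-squad2), and the toy corpus are all illustrative assumptions.

# Minimal retriever-reader sketch in Python (assumptions: rank_bm25 and
# Hugging Face transformers are installed; the model name and corpus are
# illustrative stand-ins, not DeployQA's domain-adapted components).
from rank_bm25 import BM25Okapi
from transformers import pipeline

# Toy corpus standing in for user manuals and Stack Overflow posts.
corpus = [
    "To change the port MySQL listens on, edit the 'port' option in my.cnf and restart mysqld.",
    "Docker limits a container's memory with the --memory flag passed to 'docker run'.",
    "Set JAVA_HOME to the JDK installation directory before starting Tomcat.",
]

# Retriever: score every passage against the question with BM25 and keep the best one.
tokenized = [doc.lower().split() for doc in corpus]
bm25 = BM25Okapi(tokenized)
question = "How do I change the port MySQL listens on?"
scores = bm25.get_scores(question.lower().split())
best_passage = corpus[int(scores.argmax())]

# Reader: a RoBERTa model fine-tuned for extractive QA predicts the answer span
# within the retrieved passage.
reader = pipeline("question-answering", model="deepset/roberta-base-squad2")
result = reader(question=question, context=best_passage)
print(result["answer"], result["score"])

DeployQA replaces both stages with domain-adapted components and runs them over a much larger document collection; the snippet is only meant to make the overall retriever-reader architecture concrete.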


Cited By

  • Code Question Answering via Task-Adaptive Sequence-to-Sequence Pre-training. 2022 29th Asia-Pacific Software Engineering Conference (APSEC), 229–238. DOI: 10.1109/APSEC57359.2022.00035. Online publication date: Dec 2022.

Published In

ASE '22: Proceedings of the 37th IEEE/ACM International Conference on Automated Software Engineering
October 2022, 2006 pages
ISBN: 9781450394758
DOI: 10.1145/3551349

Publisher

Association for Computing Machinery, New York, NY, United States

Author Tags

1. machine reading comprehension
2. question answering
3. software deployment

Qualifiers

• Short paper
• Research
• Refereed limited

Conference

ASE '22

Acceptance Rates

Overall acceptance rate: 82 of 337 submissions (24%)
