
PTM-APIRec: Leveraging Pre-trained Models of Source Code in API Recommendation

Published: 15 March 2024

Abstract

Recommending APIs is a practical and essential feature of IDEs, and improving the accuracy of API recommendation is an effective way to improve coding efficiency. With the success of deep learning in software engineering, the state-of-the-art (SOTA) performance in API recommendation is also achieved by deep-learning-based approaches. However, existing SOTA approaches either consider only the API sequences in code snippets or rely on complex operations for extracting hand-crafted features; both risk under-encoding the input code snippet and thus yielding sub-optimal recommendation performance. To this end, this article proposes PTM-APIRec, which leverages the code-understanding ability of existing general-purpose pre-trained models of source code to fully encode the input code snippet and thereby improve the accuracy of API recommendation. To ensure that the semantics of the input are fully understood and that every recommended API actually exists, we use separate vocabularies for the input code snippet and for the APIs to be predicted. Experimental results on the JDK and Android datasets show that PTM-APIRec surpasses existing approaches. Moreover, an effective way to further improve PTM-APIRec is to enhance the pre-trained model with more pre-training data, which is easier to obtain than API recommendation datasets.
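The separate-vocabulary design described above can be illustrated with a minimal sketch (this is not the authors' code; the vocabularies, tokenizer, and scores below are hypothetical stand-ins): the input snippet is tokenized with the pre-trained model's own subword vocabulary, while predictions are drawn from a closed vocabulary of fully qualified API names, so a recommended API is guaranteed to exist.

```python
# Hypothetical subword vocabulary on the input side, as a pre-trained
# code model would use (real models use BPE vocabularies of ~50K entries).
INPUT_VOCAB = {"<unk>": 0, "new": 1, "File": 2, "Reader": 3, "(": 4, ")": 5}

# Closed output vocabulary: fully qualified APIs only (prediction side).
API_VOCAB = [
    "java.io.FileReader.<init>",
    "java.io.BufferedReader.readLine",
    "java.io.BufferedReader.<init>",
]

def tokenize(snippet):
    """Map raw code tokens to input-vocabulary ids (subword-style lookup)."""
    return [INPUT_VOCAB.get(tok, INPUT_VOCAB["<unk>"]) for tok in snippet.split()]

def recommend(snippet, scores):
    """Rank APIs by classifier scores over the closed API vocabulary.

    `scores` stands in for the logits a real model's prediction head would
    produce from the encoded snippet.
    """
    ids = tokenize(snippet)  # a real PTM would embed and encode these ids
    assert all(i < len(INPUT_VOCAB) for i in ids)
    ranked = sorted(zip(API_VOCAB, scores), key=lambda pair: -pair[1])
    return [api for api, _ in ranked]

top = recommend("new File Reader ( )", [0.9, 0.2, 0.4])
print(top[0])  # → java.io.FileReader.<init>
```

Because ranking happens only over `API_VOCAB`, the model can never emit a hallucinated or partially spelled API name, which is the property the paper's dual-vocabulary design targets.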


Cited By

  • (2024) Learning to Detect and Localize Multilingual Bugs. Proceedings of the ACM on Software Engineering 1(FSE), 2190–2213. DOI: 10.1145/3660804. Online publication date: 12 July 2024.
  • (2024) Code Recommendation for Schema Evolution of Mimic Storage Systems. International Journal of Software Engineering and Knowledge Engineering, 1–22. DOI: 10.1142/S0218194024500499. Online publication date: 28 October 2024.


Published In

ACM Transactions on Software Engineering and Methodology, Volume 33, Issue 3
March 2024, 943 pages
EISSN: 1557-7392
DOI: 10.1145/3613618
Editor: Mauro Pezzé

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 15 March 2024
Online AM: 14 November 2023
Accepted: 26 October 2023
Revised: 05 September 2023
Received: 23 May 2023
Published in TOSEM Volume 33, Issue 3


Author Tags

  1. API recommendation
  2. Code Pre-training
  3. Code Completion

Qualifiers

  • Research-article

Funding Sources

  • National Natural Science Foundation of China
  • Natural Science Foundation of Jiangsu Province, China
  • NSF award

Article Metrics

  • Downloads (last 12 months): 591
  • Downloads (last 6 weeks): 71
Reflects downloads up to 10 Nov 2024

