A Survey on Large Language Models for Code Generation

J Jiang, F Wang, J Shen, S Kim, S Kim - arXiv preprint arXiv:2406.00515, 2024 - arxiv.org
Large Language Models (LLMs), known in this context as Code LLMs, have achieved remarkable advances across diverse code-related tasks, particularly in code generation, which produces source code from natural-language descriptions. This burgeoning field has captured significant interest from both academic researchers and industry professionals because of its practical value in software development, e.g., GitHub Copilot. Despite active exploration of LLMs for a variety of code tasks, whether from the perspective of natural language processing (NLP), software engineering (SE), or both, there is a noticeable absence of a comprehensive and up-to-date literature review dedicated to LLMs for code generation. In this survey, we aim to bridge this gap by providing a systematic literature review that serves as a valuable reference for researchers investigating cutting-edge progress in LLMs for code generation. We introduce a taxonomy to categorize and discuss recent developments, covering aspects such as data curation, latest advances, performance evaluation, and real-world applications. In addition, we present a historical overview of the evolution of LLMs for code generation and offer an empirical comparison on the widely recognized HumanEval and MBPP benchmarks to highlight the progressive enhancement of LLM capabilities in code generation. We identify critical challenges and promising opportunities regarding the gap between academic research and practical development. Furthermore, we have established a dedicated resource website (https://codellm.github.io) to continuously document and disseminate the most recent advances in the field.
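For context, results on HumanEval and MBPP are conventionally reported with the pass@k metric: the probability that at least one of k sampled completions for a problem passes its unit tests. Below is a minimal sketch of the standard unbiased estimator of pass@k computed from n samples per problem, of which c pass; the function name pass_at_k and the example numbers are illustrative and not taken from the survey.

    import numpy as np

    def pass_at_k(n: int, c: int, k: int) -> float:
        """Unbiased pass@k estimator: probability that at least one of k
        completions drawn without replacement from n generated samples is
        correct, given that c of the n samples pass all unit tests."""
        if n - c < k:
            return 1.0
        # 1 - C(n-c, k) / C(n, k), computed as a numerically stable product
        return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

    # Illustrative numbers: 200 samples per problem, 37 of which pass the tests
    print(round(pass_at_k(n=200, c=37, k=1), 3))   # 0.185, i.e. c/n
    print(round(pass_at_k(n=200, c=37, k=10), 3))  # higher, since any of 10 draws may pass

Benchmark scores averaged over all problems with this estimator are the figures typically quoted when comparing Code LLMs on HumanEval and MBPP.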