The (ab)use of open source code to train large language models

A Al-Kaswan, M Izadi - 2023 IEEE/ACM 2nd International …, 2023 - ieeexplore.ieee.org
In recent years, Large Language Models (LLMs) have gained significant popularity due to their ability to generate human-like text and their potential applications in various fields, such as Software Engineering. LLMs for Code are commonly trained on large unsanitized corpora of source code scraped from the Internet. The content of these datasets is memorized and emitted by the models, often verbatim. In this work, we discuss the security, privacy, and licensing implications of this memorization. We argue that the use of copyleft code to train LLMs poses a legal and ethical dilemma. Finally, we provide four actionable recommendations to address this issue.
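
The claim that training data is emitted verbatim can be probed directly. The following is a minimal sketch (not taken from the paper) of such a probe: feed a code LLM the opening lines of the standard GPL-3.0 license header and check whether its greedy continuation reproduces the original text word for word. The model name "example-org/open-code-llm" is a hypothetical placeholder for any openly available causal code model on the Hugging Face Hub.

# Minimal sketch of a verbatim-memorization probe (illustrative only).
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "example-org/open-code-llm"  # hypothetical placeholder model name

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# Prefix: opening lines of the GPL-3.0 header, ubiquitous in scraped source code.
prefix = (
    "# This program is free software: you can redistribute it and/or modify\n"
    "# it under the terms of the GNU General Public License as published by\n"
)
# The continuation we would expect if the header were memorized verbatim.
expected = "# the Free Software Foundation"

inputs = tokenizer(prefix, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=False)  # greedy decoding
new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
completion = tokenizer.decode(new_tokens, skip_special_tokens=True)

print(repr(completion))
print("verbatim continuation:", completion.startswith(expected))

Whether the check succeeds depends on the model and decoding settings; the point of the sketch is only that widely duplicated, licensed text is exactly the kind of content such a probe tends to recover.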