Breaking the Silence: the Threats of Using LLMs in Software Engineering

J. Sallou, T. Durieux, A. Panichella - Proceedings of the 2024 ACM/IEEE 44th International Conference on Software …, 2024 - dl.acm.org
Large Language Models (LLMs) have gained considerable traction within the Software Engineering (SE) community, impacting various SE tasks from code completion to test generation, and from program repair to code summarization. Despite their promise, researchers must remain careful, as numerous intricate factors can influence the outcomes of experiments involving LLMs. This paper initiates an open discussion on potential threats to the validity of LLM-based research, including issues such as closed-source models, possible data leakage between LLM training data and research evaluation, and the reproducibility of LLM-based findings. In response, this paper proposes a set of guidelines tailored for SE researchers and Language Model (LM) providers to mitigate these concerns. The implications of the guidelines are illustrated using existing good practices followed by LLM providers and a practical example for SE researchers in the context of test case generation.
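
The reproducibility concern raised in the abstract lends itself to a concrete illustration. Below is a minimal Python sketch, not taken from the paper, of how an SE researcher running a test-generation experiment might record the exact model identifier, decoding parameters, seed, and prompt alongside each output so a run can later be re-executed and audited; the function query_llm, the model name, and the log file name are hypothetical placeholders for whatever client and identifiers an actual study would use.

# Illustrative sketch (assumptions, not the paper's method): persist the
# experimental configuration an LLM-based SE study would need to report
# for reproducibility and leakage auditing.
import hashlib
import json
import time

def query_llm(prompt: str, config: dict) -> str:
    """Hypothetical stand-in; replace with the actual model client."""
    return "def test_add():\n    assert add(2, 3) == 5\n"

config = {
    "model": "example-model-2024-01",  # exact, versioned model identifier
    "temperature": 0.0,                # deterministic decoding where supported
    "seed": 42,                        # fixed seed, if the provider honors one
    "max_tokens": 256,
}

prompt = "Write a pytest unit test for the function add(a, b)."
output = query_llm(prompt, config)

# Log everything needed to reproduce the run; the prompt hash lets a
# later data-leakage check match prompts against known training corpora.
record = {
    "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    "config": config,
    "prompt": prompt,
    "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
    "output": output,
}
with open("experiment_log.jsonl", "a") as f:
    f.write(json.dumps(record) + "\n")

Keeping such a log per query is one low-cost way to address the closed-source and reproducibility threats the paper discusses, since model behavior can change silently across provider updates.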