Using large language models to generate JUnit tests: An empirical study

ML Siddiq, JC Da Silva Santos, RH Tanvir… - Proceedings of the 28th International Conference on Evaluation and …, 2024 - dl.acm.org
A code generation model generates code by taking a prompt from a code comment, existing code, or a combination of both. Although code generation models (e.g., GitHub Copilot) are increasingly being adopted in practice, it is unclear whether they can successfully be used for unit test generation without fine-tuning for a strongly typed language like Java. To fill this gap, we investigated how well three models (Codex, GPT-3.5-Turbo, and StarCoder) can generate unit tests. We used two benchmarks (HumanEval and EvoSuite SF110) to investigate the effect of context generation on the unit test generation process. We evaluated the models based on compilation rates, test correctness, test coverage, and test smells. We found that the Codex model achieved above 80% coverage for the HumanEval dataset, but no model had more than 2% coverage for the EvoSuite SF110 benchmark. The generated tests also suffered from test smells, such as Duplicated Asserts and Empty Tests.
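As a rough illustration of the two test smells named in the abstract (not code from the study itself), the sketch below shows a hypothetical JUnit 5 test class containing an Empty Test and a Duplicated Assert; the Calculator class and its methods are assumptions added only so the snippet compiles.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Illustrative only: hypothetical class under test and tests, not taken from the paper.
class CalculatorTest {

    // "Empty Test" smell: the method body has no statements,
    // so the test always passes and verifies nothing.
    @Test
    void testSubtract() {
    }

    // "Duplicated Assert" smell: the same assertion (same inputs,
    // same expected value) is repeated, adding no extra verification.
    @Test
    void testAdd() {
        Calculator calc = new Calculator();
        assertEquals(5, calc.add(2, 3));
        assertEquals(5, calc.add(2, 3)); // duplicate of the assertion above
    }
}

// Minimal class under test so the example is self-contained.
class Calculator {
    int add(int a, int b) {
        return a + b;
    }
}
```

Smell detectors of this kind typically flag such tests statically, since an empty body or a repeated assertion can be recognized without running the suite.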