SecurityEval dataset: mining vulnerability examples to evaluate machine learning-based code generation techniques

ML Siddiq, JCS Santos - Proceedings of the 1st International Workshop …, 2022 - dl.acm.org
Automated source code generation is currently a popular machine-learning-based task. It can be helpful for software developers to write functionally correct code from a given context. However, just like human developers, a code generation model can produce vulnerable code, which the developers can mistakenly use. For this reason, evaluating the security of a code generation model is a must. In this paper, we describe SecurityEval, an evaluation dataset to fulfill this purpose. It contains 130 samples for 75 vulnerability types, which are mapped to the Common Weakness Enumeration (CWE). We also demonstrate using our dataset to evaluate one open-source (i.e., InCoder) and one closed-source code generation model (i.e., GitHub Copilot).
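To make the dataset's described shape concrete, here is a minimal sketch of working with records like SecurityEval's: each sample pairs a code prompt (to be completed by a model) with a CWE identifier. The field names and example entries below are illustrative assumptions, not the paper's actual schema.

```python
from collections import Counter

# Hypothetical records mirroring the dataset as described in the abstract:
# a prompt for a code generation model to complete, tagged with the CWE
# category it probes. Field names and contents are assumptions.
samples = [
    {"id": "CWE-089_example_1", "cwe": "CWE-089",
     "prompt": "def get_user(db, name):\n    # build an SQL query ..."},
    {"id": "CWE-798_example_1", "cwe": "CWE-798",
     "prompt": "def connect():\n    # connect with stored credentials ..."},
    {"id": "CWE-089_example_2", "cwe": "CWE-089",
     "prompt": "def delete_user(db, name):\n    # build an SQL query ..."},
]

def count_by_cwe(records):
    """Count how many prompts the dataset holds per CWE category."""
    return Counter(r["cwe"] for r in records)

print(count_by_cwe(samples))
# → Counter({'CWE-089': 2, 'CWE-798': 1})
```

An evaluation harness would feed each `prompt` to a model under test and check the completion for the weakness named by `cwe`.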