Lost at c: A user study on the security implications of large language model code assistants

G Sandoval, H Pearce, T Nys, R Karri, S Garg… - 32nd USENIX Security …, 2023 - usenix.org
Abstract
Large Language Models (LLMs) such as OpenAI Codex are increasingly being used as AI-based coding assistants. Understanding the impact of these tools on developers' code is paramount, especially as recent work showed that LLMs may suggest cybersecurity vulnerabilities. We conduct a security-driven user study (N = 58) to assess code written by student programmers when assisted by LLMs. Given the potential severity of low-level bugs as well as their relative frequency in real-world projects, we tasked participants with implementing a singly-linked 'shopping list' structure in C. Our results indicate that the security impact in this setting (low-level C with pointer and array manipulations) is small: AI-assisted users produce critical security bugs at a rate no greater than 10% more than the control, indicating the use of LLMs does not introduce new security risks.