A survey on deep learning for software engineering
In 2006, Geoffrey Hinton proposed the concept of training “Deep Neural Networks (DNNs)”
and an improved model training method to break the bottleneck of neural network …
Code search: A survey of techniques for finding code
L Di Grazia, M Pradel - ACM Computing Surveys, 2023 - dl.acm.org
The immense amounts of source code provide ample challenges and opportunities during
software development. To handle the size of code bases, developers commonly search for …
An empirical evaluation of GitHub copilot's code suggestions
N Nguyen, S Nadi - Proceedings of the 19th International Conference on …, 2022 - dl.acm.org
GitHub and OpenAI recently launched Copilot, an "AI pair programmer" that utilizes the
power of Natural Language Processing, Static Analysis, Code Synthesis, and Artificial …
CodeSearchNet challenge: Evaluating the state of semantic code search
H Husain, HH Wu, T Gazit, M Allamanis… - arXiv preprint arXiv …, 2019 - arxiv.org
Semantic code search is the task of retrieving relevant code given a natural language query.
While related to other information retrieval tasks, it requires bridging the gap between the …
No more fine-tuning? an experimental evaluation of prompt tuning in code intelligence
Pre-trained models have been shown effective in many code intelligence tasks. These
models are pre-trained on a large-scale unlabeled corpus and then fine-tuned in downstream …
Learning and evaluating contextual embedding of source code
A Kanade, P Maniatis… - … on machine learning, 2020 - proceedings.mlr.press
Recent research has achieved impressive results on understanding and improving source
code by building up on machine-learning techniques developed for natural languages. A …
Unsupervised translation of programming languages
B Roziere, MA Lachaux… - Advances in neural …, 2020 - proceedings.neurips.cc
A transcompiler, also known as source-to-source translator, is a system that converts source
code from a high-level programming language (such as C++ or Python) to another …
ReACC: A retrieval-augmented code completion framework
Code completion, which aims to predict the following code token(s) according to the code
context, can improve the productivity of software development. Recent work has proved that …
Assessing generalizability of CodeBERT
Pre-trained models like BERT have achieved strong improvements on many natural
language processing (NLP) tasks, showing their great generalizability. The success of pre …
DOBF: A deobfuscation pre-training objective for programming languages
MA Lachaux, B Roziere… - Advances in Neural …, 2021 - proceedings.neurips.cc
Recent advances in self-supervised learning have dramatically improved the state of the art
on a wide variety of tasks. However, research in language model pre-training has mostly …