DOI: 10.1145/3510454.3522684

Is GitHub Copilot a substitute for human pair-programming? An empirical study

Published: 19 October 2022
    Abstract

    This empirical study investigates the effectiveness of pair programming with GitHub Copilot in comparison to human pair programming. Through an experiment with 21 participants, we focus on code productivity and code quality. In the experimental design, each participant was given a project to code under three conditions presented in randomized order: pair programming with Copilot, human pair programming as the driver, and human pair programming as the navigator. The code produced in the three trials was analyzed to determine how many lines of code, on average, were added under each condition and how many of those lines were removed in the subsequent stage. The former measures the productivity of each condition, while the latter measures the quality of the produced code. The results suggest that although Copilot increases productivity as measured by lines of code added, the quality of the resulting code is lower, as more of it is deleted in the subsequent trial.
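
    The abstract does not specify how the line counts were obtained, so the snippet below is only a minimal sketch of the two metrics it describes: lines added during a trial as a productivity proxy, and lines of that trial's code removed in the subsequent stage as a quality proxy. The snapshot file names and the use of Python's difflib are illustrative assumptions, not the authors' tooling.

        # Illustrative sketch (assumed tooling, not the authors'): count lines added
        # in one trial and lines of that code removed in the subsequent stage.
        import difflib

        def diff_counts(before: str, after: str) -> tuple[int, int]:
            """Return (lines_added, lines_removed) going from `before` to `after`."""
            diff = list(difflib.unified_diff(before.splitlines(), after.splitlines(), lineterm=""))
            added = sum(1 for line in diff if line.startswith("+") and not line.startswith("+++"))
            removed = sum(1 for line in diff if line.startswith("-") and not line.startswith("---"))
            return added, removed

        # Hypothetical snapshot files: code at the end of a trial, and the same
        # project after the subsequent stage revised it.
        with open("trial_copilot.py") as f:
            trial_code = f.read()
        with open("trial_copilot_next_stage.py") as f:
            revised_code = f.read()

        lines_added, _ = diff_counts("", trial_code)                    # productivity proxy
        _, lines_removed_later = diff_counts(trial_code, revised_code)  # quality proxy
        print(lines_added, lines_removed_later)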


    Published In

    ICSE '22: Proceedings of the ACM/IEEE 44th International Conference on Software Engineering: Companion Proceedings
    May 2022
    394 pages
    ISBN:9781450392235
    DOI:10.1145/3510454
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

    In-Cooperation

    • IEEE CS

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 19 October 2022


    Author Tags

    1. AI
    2. GitHub
    3. copilot
    4. software development

    Qualifiers

    • Short-paper

    Conference

    ICSE '22

    Acceptance Rates

    Overall Acceptance Rate 276 of 1,856 submissions, 15%

    Cited By

    • (2024) Computer Science Education in ChatGPT Era: Experiences from an Experiment in a Programming Course for Novice Programmers. Mathematics 12(5), 629. https://doi.org/10.3390/math12050629. Online publication date: 21-Feb-2024.
    • (2024) An Analysis of the Costs and Benefits of Autocomplete in IDEs. Proceedings of the ACM on Software Engineering 1(FSE), 1284-1306. https://doi.org/10.1145/3660765. Online publication date: 12-Jul-2024.
    • (2024) Navigating the Complexity of Generative AI Adoption in Software Engineering. ACM Transactions on Software Engineering and Methodology 33(5), 1-50. https://doi.org/10.1145/3652154. Online publication date: 4-Jun-2024.
    • (2024) An Assessment of ML-based Sentiment Analysis for Intelligent Web Filtering. Proceedings of the 17th International Conference on PErvasive Technologies Related to Assistive Environments, 80-87. https://doi.org/10.1145/3652037.3652039. Online publication date: 26-Jun-2024.
    • (2024) In-IDE Human-AI Experience in the Era of Large Language Models: A Literature Review. Proceedings of the 1st ACM/IEEE Workshop on Integrated Development Environments, 95-100. https://doi.org/10.1145/3643796.3648463. Online publication date: 20-Apr-2024.
    • (2024) How much SPACE do metrics have in GenAI assisted software development? Proceedings of the 17th Innovations in Software Engineering Conference, 1-5. https://doi.org/10.1145/3641399.3641419. Online publication date: 22-Feb-2024.
    • (2024) An Industry Case Study on Adoption of AI-based Programming Assistants. Proceedings of the 46th International Conference on Software Engineering: Software Engineering in Practice, 92-102. https://doi.org/10.1145/3639477.3643648. Online publication date: 14-Apr-2024.
    • (2024) Bridging the Gulf of Envisioning: Cognitive Challenges in Prompt Based Interactions with LLMs. Proceedings of the CHI Conference on Human Factors in Computing Systems, 1-19. https://doi.org/10.1145/3613904.3642754. Online publication date: 11-May-2024.
    • (2024) Autonomous Crowdsensing: Operating and Organizing Crowdsensing for Sensing Automation. IEEE Transactions on Intelligent Vehicles 9(3), 4254-4258. https://doi.org/10.1109/TIV.2024.3355508. Online publication date: Mar-2024.
    • (2024) CodeLMSec Benchmark: Systematically Evaluating and Finding Security Vulnerabilities in Black-Box Code Language Models. 2024 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML), 684-709. https://doi.org/10.1109/SaTML59370.2024.00040. Online publication date: 9-Apr-2024.
