DOI: 10.1145/3551349.3560438 · ASE Conference Proceedings · research-article

How Readable is Model-generated Code? Examining Readability and Visual Inspection of GitHub Copilot

Published: 05 January 2023

    Abstract

    Background: Recent advancements in large language models have motivated the practical use of such models in code generation and program synthesis. However, little is known about the effects of such tools on code readability and visual attention in practice. Objective: In this paper, we focus on GitHub Copilot to address the issues of readability and visual inspection of model-generated code. Readability and low complexity are vital aspects of good source code, and visual inspection of generated code is important in light of automation bias. Method: Through a human experiment (n=21), we compare model-generated code to code written completely by human programmers. We use a combination of static code analysis and human annotators to assess code readability, and we use eye tracking to assess the visual inspection of code. Results: Our results suggest that model-generated code is comparable in complexity and readability to code written by human pair programmers. At the same time, eye-tracking data suggests, to a statistically significant level, that programmers direct less visual attention to model-generated code. Conclusion: Our findings highlight that reading code is more important than ever, and programmers should beware of complacency and automation bias with model-generated code.




    Published In

    ASE '22: Proceedings of the 37th IEEE/ACM International Conference on Automated Software Engineering
    October 2022
    2006 pages
    ISBN:9781450394758
    DOI:10.1145/3551349

    Publisher

    Association for Computing Machinery

    New York, NY, United States



    Author Tags

    1. Copilot
    2. Empirical Study
    3. Eye Tracking
    4. GitHub
    5. Readability

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Conference

    ASE '22

    Acceptance Rates

    Overall Acceptance Rate 82 of 337 submissions, 24%

    Article Metrics

    • Downloads (Last 12 months)386
    • Downloads (Last 6 weeks)13
    Reflects downloads up to 14 Aug 2024

    Cited By

    • (2024) “It would work for me too”: How Online Communities Shape Software Developers’ Trust in AI-Powered Code Generation Tools. ACM Transactions on Interactive Intelligent Systems 14, 2, 1–39. DOI: 10.1145/3651990. Online publication date: 15-May-2024.
    • (2024) Performance, Workload, Emotion, and Self-Efficacy of Novice Programmers Using AI Code Generation. Proceedings of the 2024 Innovation and Technology in Computer Science Education V. 1, 290–296. DOI: 10.1145/3649217.3653615. Online publication date: 3-Jul-2024.
    • (2024) How much SPACE do metrics have in GenAI assisted software development? Proceedings of the 17th Innovations in Software Engineering Conference, 1–5. DOI: 10.1145/3641399.3641419. Online publication date: 22-Feb-2024.
    • (2024) "I'm Not Sure, But...": Examining the Impact of Large Language Models' Uncertainty Expression on User Reliance and Trust. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 822–835. DOI: 10.1145/3630106.3658941. Online publication date: 3-Jun-2024.
    • (2024) Just another copy and paste? Comparing the security vulnerabilities of ChatGPT generated code and StackOverflow answers. 2024 IEEE Security and Privacy Workshops (SPW), 87–94. DOI: 10.1109/SPW63631.2024.00014. Online publication date: 23-May-2024.
    • (2024) Methodology for Code Synthesis Evaluation of LLMs Presented by a Case Study of ChatGPT and Copilot. IEEE Access 12, 72303–72316. DOI: 10.1109/ACCESS.2024.3403858. Online publication date: 2024.
    • (2024) On Eye Tracking in Software Engineering. SN Computer Science 5, 6. DOI: 10.1007/s42979-024-03045-3. Online publication date: 26-Jul-2024.
    • (2024) Investigating the readability of test code. Empirical Software Engineering 29, 2. DOI: 10.1007/s10664-023-10390-z. Online publication date: 26-Feb-2024.
    • (2024) The Recent Trends of Research on GitHub Copilot: A Systematic Review. Computing and Informatics, 355–366. DOI: 10.1007/978-981-99-9589-9_27. Online publication date: 26-Jan-2024.
    • (2024) Impact of AI Tools on Software Development Code Quality. Digital Transformation in Education and Artificial Intelligence Application, 241–256. DOI: 10.1007/978-3-031-62058-4_15. Online publication date: 3-Jul-2024.
