DOI: 10.1145/3287324.3287366
Research Article · Public Access

Assessing Incremental Testing Practices and Their Impact on Project Outcomes

Published: 22 February 2019

Abstract

Software testing is an important aspect of the development process, one that has proven to be a challenge to formally introduce into the typical undergraduate CS curriculum. Unfortunately, existing assessment of testing in student software projects tends to focus on evaluation of metrics like code coverage over the finished software product, thus eliminating the possibility of giving students early feedback as they work on the project. Furthermore, assessing and teaching the process of writing and executing software tests is also important, as shown by the multiple variants proposed and disseminated by the software engineering community, e.g., test-driven development (TDD) or incremental test-last (ITL). We present a family of novel metrics for assessment of testing practices for increments of software development work, thus allowing early feedback before the software project is finished. Our metrics measure the balance and sequence of effort spent writing software tests in a work increment. We performed an empirical study using our metrics to evaluate the test-writing practices of 157 advanced undergraduate students, and their relationships with project outcomes over multiple projects for a whole semester. We found that projects where more testing effort was spent per work session tended to be more semantically correct and have higher code coverage. The percentage of method-specific testing effort spent before production code did not contribute to semantic correctness, and had a negative relationship with code coverage. These novel metrics will enable educators to give students early, incremental feedback about their testing practices as they work on their software projects.
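To make the balance metric concrete, here is a minimal sketch (not the authors' implementation) of how per-session test-effort balance might be computed. It assumes development events have already been reduced to hypothetical (timestamp, lines changed, is-test) records, that edit size is a reasonable proxy for effort, and that an hour of inactivity separates work sessions; all of these choices are illustrative assumptions, not definitions from the paper.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class EditEvent:
    """One recorded edit; 'is_test' marks edits to test code.
    This record shape is a hypothetical stand-in for real IDE/repo data."""
    timestamp: float     # seconds since some epoch
    lines_changed: int   # size of the edit, used as an effort proxy
    is_test: bool

def split_into_sessions(events: List[EditEvent],
                        idle_gap: float = 3600.0) -> List[List[EditEvent]]:
    """Group events into work sessions, starting a new session whenever
    the gap between consecutive events exceeds idle_gap (here, one hour)."""
    sessions: List[List[EditEvent]] = []
    current: List[EditEvent] = []
    last = None
    for ev in sorted(events, key=lambda e: e.timestamp):
        if last is not None and ev.timestamp - last > idle_gap:
            sessions.append(current)
            current = []
        current.append(ev)
        last = ev.timestamp
    if current:
        sessions.append(current)
    return sessions

def test_effort_balance(session: List[EditEvent]) -> float:
    """Fraction of a session's edit effort spent on test code:
    1.0 = the whole session was testing, 0.0 = no testing at all."""
    total = sum(ev.lines_changed for ev in session)
    tested = sum(ev.lines_changed for ev in session if ev.is_test)
    return tested / total if total else 0.0

# Example: a single session that is roughly one-third testing effort.
events = [
    EditEvent(0.0,   40, is_test=False),   # production code
    EditEvent(600.0, 20, is_test=True),    # tests written ten minutes later
]
for s in split_into_sessions(events):
    print(f"test-effort balance: {test_effort_balance(s):.2f}")  # 0.33
```

A sequencing metric such as the percentage of method-specific test effort spent before production code would additionally require mapping each test edit to the production method it exercises, which this sketch does not attempt.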






Published In

SIGCSE '19: Proceedings of the 50th ACM Technical Symposium on Computer Science Education
February 2019
1364 pages
ISBN: 9781450358903
DOI: 10.1145/3287324
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.


Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 22 February 2019


Badges

  • Best Paper

Author Tags

  1. incremental development
  2. process measurement
  3. software repository mining

Qualifiers

  • Research-article

Conference

SIGCSE '19

Acceptance Rates

SIGCSE '19 Paper Acceptance Rate: 169 of 526 submissions, 32%
Overall Acceptance Rate: 1,595 of 4,542 submissions, 35%

Article Metrics

  • Downloads (Last 12 months)176
  • Downloads (Last 6 weeks)17
Reflects downloads up to 01 Nov 2024


Cited By

  • (2024) Probeable Problems for Beginner-level Programming-with-AI Contests. Proceedings of the 2024 ACM Conference on International Computing Education Research - Volume 1, 166-176. DOI: 10.1145/3632620.3671108
  • (2024) Unraveling the code: an in-depth empirical study on the impact of development practices in auxiliary functions implementation. Software Quality Journal, 32(3), 1137-1174. DOI: 10.1007/s11219-024-09682-4
  • (2023) A Model of How Students Engineer Test Cases With Feedback. ACM Transactions on Computing Education, 24(1), 1-31. DOI: 10.1145/3628604
  • (2023) The Impact of a Remote Live-Coding Pedagogy on Student Programming Processes, Grades, and Lecture Questions Asked. Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1, 533-539. DOI: 10.1145/3587102.3588846
  • (2023) HybridCISave: A Combined Build and Test Selection Approach in Continuous Integration. ACM Transactions on Software Engineering and Methodology, 32(4), 1-39. DOI: 10.1145/3576038
  • (2023) Understanding and Measuring Incremental Development in CS1. Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 1, 722-728. DOI: 10.1145/3545945.3569880
  • (2023) Do the Test Smells Assertion Roulette and Eager Test Impact Students' Troubleshooting and Debugging Capabilities? Proceedings of the 45th International Conference on Software Engineering: Software Engineering Education and Training, 29-39. DOI: 10.1109/ICSE-SEET58685.2023.00009
  • (2023) Industry perceptions of the competencies needed by novice software tester. Education and Information Technologies, 29(5), 6107-6138. DOI: 10.1007/s10639-023-12055-2
  • (2022) Characterizing high-quality test methods. Proceedings of the 19th International Conference on Mining Software Repositories, 265-269. DOI: 10.1145/3524842.3529092
  • (2022) Developers' need for the rationale of code commits. Journal of Systems and Software, 189. DOI: 10.1016/j.jss.2022.111320
