On the diffusion of test smells in automatically generated test code: An empirical study

Fabio Palomba, D. Di Nucci, Annibale Panichella, Rocco Oliveto, Andrea De Lucia

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

31 Citations (Scopus)

Abstract

Code smells are symptoms of poor design and implementation choices that may hinder code comprehension and possibly increase the change- and fault-proneness of source code. Several techniques have been proposed in the literature for detecting code smells. These techniques are generally evaluated by comparing their accuracy on a set of detected candidate code smells against a manually produced oracle. Unfortunately, such comprehensive sets of annotated code smells are not available in the literature, with only a few exceptions. In this paper, we contribute (i) a dataset of 243 instances of five types of code smells identified from 20 open source software projects, (ii) a systematic procedure for validating code smell datasets, (iii) LANDFILL, a Web-based platform for sharing code smell datasets, and (iv) a set of APIs for programmatically accessing LANDFILL's contents. Anyone can contribute to LANDFILL by (i) improving existing datasets (e.g., adding missing instances of code smells, flagging possibly incorrectly classified instances) and (ii) sharing and posting new datasets. LANDFILL is available at www.sesa.unisa.it/landfill/, while a video demonstrating its features in action is available at http://www.sesa.unisa.it/tools/landfill.jsp.
Original language: English
Title of host publication: Proceedings - 9th International Workshop on Search-Based Software Testing, SBST 2016
DOIs
Publication status: Published - 2016

