Evaluating the adaptive selection of classifiers for cross-project bug prediction

D. Di Nucci, Fabio Palomba, Andrea De Lucia

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

4 Citations (Scopus)

Abstract

Bug prediction models are used to locate source code elements that are more likely to be defective. One of the key factors influencing their performance is the choice of the machine learning method (a.k.a., classifier) used to discriminate buggy from non-buggy classes. Given the high complementarity of stand-alone classifiers, a recent trend is the definition of ensemble techniques, which try to effectively combine the predictions of different stand-alone machine learners. In recent work we proposed ASCI, a technique that dynamically selects the right classifier to use based on the characteristics of the class on which the prediction has to be made. We tested it in a within-project scenario, showing its higher accuracy with respect to the Validation and Voting strategy. In this paper, we continue this line of research by (i) evaluating ASCI in global and local cross-project settings and (ii) comparing its performance with that achieved by a stand-alone baseline and an ensemble baseline, namely Naive Bayes and Validation and Voting, respectively. A key finding of our study is that ASCI performs better than the other techniques in the context of cross-project bug prediction. Moreover, although local learning does not improve the performance of the corresponding models in most cases, it does improve the robustness of the models relying on ASCI.
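The adaptive selection idea behind ASCI can be illustrated with a minimal sketch (this is an assumption-laden illustration using scikit-learn, not the authors' implementation): each training instance is labeled with a base classifier that predicts it correctly, and a meta-classifier then learns to route new instances to a suitable base classifier from their features.

```python
# Sketch of adaptive classifier selection in the spirit of ASCI
# (illustrative only; the concrete base learners and meta-learner
# are assumptions, not the configuration used in the paper).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for class-level bug data (features -> buggy/non-buggy).
X, y = make_classification(n_samples=400, n_features=10, random_state=0)

# Step 1: train the stand-alone base classifiers.
bases = [GaussianNB(),
         LogisticRegression(max_iter=1000),
         RandomForestClassifier(random_state=0)]
for clf in bases:
    clf.fit(X, y)

# Step 2: label each training instance with the index of a base
# classifier that predicts it correctly (fall back to 0 if none does).
preds = np.array([clf.predict(X) for clf in bases])  # (n_bases, n_samples)
best = np.array([next((i for i in range(len(bases)) if preds[i, j] == y[j]), 0)
                 for j in range(len(y))])

# Step 3: train a meta-classifier that selects a base classifier
# from an instance's features.
selector = DecisionTreeClassifier(random_state=0).fit(X, best)

def predict(x):
    """Route the instance to the base classifier chosen by the selector."""
    chosen = selector.predict(x.reshape(1, -1))[0]
    return bases[chosen].predict(x.reshape(1, -1))[0]

y_hat = np.array([predict(x) for x in X])
```

In a cross-project setting, the base classifiers and the selector would be trained on source projects and applied to a target project; the sketch above only shows the within-sample mechanics of the selection step.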
Original language: English
Title of host publication: Proceedings - International Conference on Software Engineering
DOIs
Publication status: Published - 2018

