Evaluating the evaluations of code recommender systems: a reality check

S Proksch, S Amann, S Nadi, M Mezini - Proceedings of the 31st IEEE …, 2016 - dl.acm.org
While researchers develop many exciting new code recommender systems, such as method-call completion, code-snippet completion, or code search, accurately evaluating such systems remains a challenge. We analyzed the current literature and found that most existing evaluations rely on artificial queries extracted from released code, which raises the question: Do such evaluations reflect real-life usage? To answer this question, we capture 6,189 fine-grained development histories from real IDE interactions. We use them as a ground truth and extract 7,157 real queries for a specific method-call recommender system. We compare the results of these real queries with different artificial evaluation strategies and check several assumptions that are repeatedly used in research but have never been empirically evaluated. We find that an evolving context, which is often observed in practice, has a major effect on the prediction quality of recommender systems, but is not commonly reflected in artificial evaluations.