Abstract
Assessing the quality of scientific conferences is an important and useful service that can be provided by digital libraries and similar systems. This is especially true for fields such as Computer Science and Electrical Engineering, in which conference publications are crucial. However, most of the existing quality metrics, particularly those relying on bibliographic citations, have been proposed for measuring the quality of journals. In this article we study the relative performance of existing journal metrics in assessing the quality of scientific conferences. More importantly, starting from a detailed analysis of the deficiencies of these metrics, we propose a new set of quality metrics specifically designed to capture intrinsic and important aspects of conferences, such as longevity, popularity, prestige, and periodicity. To demonstrate the effectiveness of the proposed metrics, we have conducted two sets of experiments that contrast their results against a “gold standard” produced by a large group of specialists. Our metrics obtained gains of more than 12% when compared to the most consistent journal quality metric and of up to 58% when compared to standard metrics such as Thomson’s Impact Factor.
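For reference, the standard two-year Impact Factor mentioned above as a baseline is commonly defined, for a venue v and a reference year y (v and y are just notation introduced here, not symbols from the article), as:

\[
\mathrm{IF}(v, y) \;=\; \frac{\text{citations received in year } y \text{ by items of } v \text{ published in } y-1 \text{ and } y-2}{\text{number of citable items published in } v \text{ in } y-1 \text{ and } y-2}
\]

Because this citation window was designed with journals in mind, it does not capture conference-specific aspects such as longevity, popularity, prestige, and periodicity, which is what motivates the metrics proposed in the article.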
Notes
We will use the term “conference” to also denote other types of scientific meetings such as symposia, workshops, etc.
Now called Thomson Reuters.
These researchers receive this grant based on the quality of their scientific production and are considered top researchers or leaders in their respective fields.
Acknowledgements
This research is partially funded by the Brazilian National Institute of Science and Technology for the Web (MCT/CNPq Grant Number 573871/2008-6), by the InfoWeb project (grant number 55.0874/2007-0), and by the authors’ individual research grants from CNPq.
Cite this article
Martins, W.S., Gonçalves, M.A., Laender, A.H.F. et al. Assessing the quality of scientific conferences based on bibliographic citations. Scientometrics 83, 133–155 (2010). https://doi.org/10.1007/s11192-009-0078-y