Agreement, the f-measure, and reliability in information retrieval

J Am Med Inform Assoc. 2005 May-Jun;12(3):296-8. doi: 10.1197/jamia.M1733. Epub 2005 Jan 31.

Abstract

Information retrieval studies that involve searching the Internet or marking phrases usually lack a well-defined number of negative cases. This prevents the use of traditional interrater reliability metrics like the kappa statistic to assess the quality of expert-generated gold standards. Such studies often quantify system performance as precision, recall, and F-measure, or as agreement. It can be shown that the average F-measure among pairs of experts is numerically identical to the average positive specific agreement among experts and that kappa approaches these measures as the number of negative cases grows large. Positive specific agreement, or the equivalent F-measure, may be an appropriate way to quantify interrater reliability and therefore to assess the reliability of a gold standard in these studies.
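
The following Python sketch is not part of the article; it is a minimal illustration of the abstract's central claim, using hypothetical counts a (phrases marked by both raters), b and c (phrases marked by only one rater), and d (negative cases marked by neither). It shows that the pairwise F-measure equals positive specific agreement, 2a / (2a + b + c), and that Cohen's kappa approaches this value as d grows large.

    def f_measure(a, b, c):
        """Pairwise F-measure, treating rater 2's marks as the reference set.
        a = marked by both raters; b = only rater 1; c = only rater 2."""
        precision = a / (a + b)   # fraction of rater 1's marks confirmed by rater 2
        recall = a / (a + c)      # fraction of rater 2's marks also found by rater 1
        return 2 * precision * recall / (precision + recall)

    def positive_specific_agreement(a, b, c):
        """Positive specific agreement: 2a / (2a + b + c)."""
        return 2 * a / (2 * a + b + c)

    def kappa(a, b, c, d):
        """Cohen's kappa from a 2x2 agreement table with d negative (both-unmarked) cases."""
        n = a + b + c + d
        observed = (a + d) / n
        expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
        return (observed - expected) / (1 - expected)

    a, b, c = 40, 10, 15                         # hypothetical counts for two raters
    print(f_measure(a, b, c))                    # 0.7619...
    print(positive_specific_agreement(a, b, c))  # identical: 0.7619...
    for d in (10, 1_000, 1_000_000):             # kappa tends toward 0.7619 as d grows
        print(d, round(kappa(a, b, c, d), 4))

Because the number of phrases that neither expert marks (d) is undefined or effectively unbounded in these studies, only the d-free quantities, F-measure and positive specific agreement, remain well defined; the loop above simply illustrates that kappa converges to them as d becomes large.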

Publication types

  • Research Support, N.I.H., Extramural
  • Research Support, U.S. Gov't, P.H.S.

MeSH terms

  • Humans
  • Information Services
  • Information Storage and Retrieval / standards
  • Information Storage and Retrieval / statistics & numerical data*
  • Internet
  • Medical Informatics / statistics & numerical data
  • Observer Variation
  • Reproducibility of Results*