DOI: 10.1145/2047403.2047406

Experimental evaluation and hypertexts

Published: 06 June 2011

Abstract

The Information Retrieval (IR) field has a strong and long tradition, dating back to the 1960s, of experimentally evaluating IR systems in order to assess their performance in a scientifically sound and comparable way. In this context, large-scale international evaluation campaigns, such as the Text REtrieval Conference (TREC), the Cross-Language Evaluation Forum (CLEF), and the NII-NACSIS Test Collection for IR Systems (NTCIR), have been the vehicle for advancing state-of-the-art techniques and developing innovative information systems through common evaluation procedures, regular and systematic evaluation cycles, comparison and benchmarking of the adopted approaches and solutions, and the spread and exchange of knowledge and know-how.
Hypertexts play an important role in the IR field, especially since the growth of the Web has given rise to an unprecedented need for effective information access techniques that take into account the multilinguality, multimodality, and hypertextual nature of the relevant information resources. This has posed novel challenges for experimental evaluation, which has had to devise techniques for building experimental collections that mimic the scale of the Web and for designing evaluation tasks that are representative of user needs on the Web.
This talk will discuss open issues concerning how experimental evaluation and hypertext could better benefit each other. On the one hand, it is time for experimental evaluation to explicitly take the hypertextual nature of the resources into account when assessing performance based on retrieved items, rather than only treating systems as black boxes that internally exploit the existing hypertext. On the other hand, experimental evaluation produces huge amounts of scientific data that would be better understood and interpreted if they were enriched with links to one another, to other resources, and to user-generated content, such as annotations explaining them.

Published In

PMHR '11: Proceedings of the First Workshop on Personalised Multilingual Hypertext Retrieval
June 2011
61 pages
ISBN: 9781450308977
DOI: 10.1145/2047403

Publisher

Association for Computing Machinery

New York, NY, United States

Conference

HT '11