Crowdsourcing for information retrieval: introduction to the special issue
This introduction to the special issue summarizes and contextualizes six novel research contributions at the intersection of information retrieval (IR) and crowdsourcing (also overlapping crowdsourcing’s closely related sibling, human computation)...
Implementing crowdsourcing-based relevance experimentation: an industrial perspective
Crowdsourcing has emerged as a viable platform for conducting different types of relevance evaluation. The main reason behind this trend is that it makes it possible to conduct experiments extremely quickly, with good results at low cost. However, ...
Increasing cheat robustness of crowdsourcing tasks
Crowdsourcing is steadily becoming a widely used means of collecting large-scale scientific corpora. Many research fields, including Information Retrieval, rely on this novel way of acquiring data. However, it appears to be undermined by a ...
An analysis of human factors and label accuracy in crowdsourcing relevance judgments
Crowdsourcing relevance judgments for the evaluation of search engines is increasingly used to overcome the scalability issue that hinders traditional approaches, which rely on a fixed group of trusted expert judges. However, the benefits of ...
Identifying top news using crowdsourcing
The influential Text REtrieval Conference (TREC) has always relied upon specialist assessors, or occasionally participating groups, to create relevance judgements for the tracks that it runs. Recently, however, crowdsourcing has ...
Crowdsourcing and the crisis-affected community: Lessons learned and looking forward from Mission 4636
This article reports on Mission 4636, a real-time humanitarian crowdsourcing initiative that processed 80,000 text messages (SMS) sent from within Haiti following the 2010 earthquake. It was the first time that crowdsourcing (microtasking) had ...
Crowdsourcing interactions: using crowdsourcing for evaluating interactive information retrieval systems
In the field of information retrieval (IR), researchers and practitioners often face a demand for valid approaches to evaluating the performance of retrieval systems. The Cranfield experiment paradigm has been dominant for the in-vitro ...