- research-article, January 2024
Report on the 3rd Workshop on the Perspectives on the Evaluation of Recommender Systems (PERSPECTIVES 2023) at RecSys 2023
ACM SIGIR Forum (SIGIR), Volume 57, Issue 2, December 2023, Article No.: 18, Pages 1–4, https://doi.org/10.1145/3642979.3643000
Evaluation is a central step when developing, optimizing, and deploying recommender systems. The PERSPECTIVES 2023 workshop, held as part of the 17th ACM Conference on Recommender Systems (RecSys 2023), served as a forum where researchers from both ...
- research-article, January 2024
Report on the 14th Conference and Labs of the Evaluation Forum (CLEF 2023): Experimental IR Meets Multilinguality, Multimodality, and Interaction
- Mohammad Aliannejadi,
- Avi Arampatzis,
- Guglielmo Faggioli,
- Nicola Ferro,
- Anastasia Giachanou,
- Evangelos Kanoulas,
- Dan Li,
- Theodora Tsikrika,
- Michalis Vlachos,
- Stefanos Vrochidis
ACM SIGIR Forum (SIGIR), Volume 57, Issue 2, December 2023, Article No.: 16, Pages 1–16, https://doi.org/10.1145/3642979.3642998
This is a report on the fourteenth edition of the Conference and Labs of the Evaluation Forum (CLEF 2023), held on September 18--21, 2023, in Thessaloniki, Greece. CLEF was a four-day hybrid event combining a conference and an evaluation forum. The ...
- research-article, January 2024
Report on the 46th ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2023): Reflections from the Program Co-Chairs
ACM SIGIR Forum (SIGIR), Volume 57, Issue 2, December 2023, Article No.: 9, Pages 1–20, https://doi.org/10.1145/3642979.3642991
The ACM SIGIR Conference on Research and Development in Information Retrieval has been experiencing significant growth over the past few years. In 2023, SIGIR received a total of 822 full paper submissions and 1,787 papers/proposals. This has brought ...
- research-article, January 2024
Report on the 13th Italian Information Retrieval Workshop (IIR 2023)
ACM SIGIR Forum (SIGIR), Volume 57, Issue 2, December 2023, Article No.: 8, Pages 1–12, https://doi.org/10.1145/3642979.3642990
The 13th Italian Information Retrieval Workshop is the thirteenth edition of the annual conference of the Italian information retrieval and recommender systems communities. The two days of the conference gathered interesting studies and research work on ...
- research-article, December 2023
Modelling and Explaining IR System Performance Towards Predictive Evaluation
ACM SIGIR Forum (SIGIR), Volume 57, Issue 1, June 2023, Article No.: 15, Pages 1–2, https://doi.org/10.1145/3636341.3636361
Information Retrieval (IR) systems play a fundamental role in many modern commodities, including search engines (SEs), digital libraries, recommender systems, and social networks.
The IR task is particularly challenging because of the volatility of IR ...
- research-article, December 2023
Report on the 1st Workshop on Query Performance Prediction and Its Evaluation in New Tasks (QPP++ 2023) at ECIR 2023
ACM SIGIR Forum (SIGIR), Volume 57, Issue 1, June 2023, Article No.: 12, Pages 1–7, https://doi.org/10.1145/3636341.3636356
Query Performance Prediction (QPP) is currently primarily applied to ad-hoc retrieval tasks. The Information Retrieval (IR) field is reaching new heights thanks to recent advances in large language models and neural networks, as well as emerging new ...
- research-article, December 2023
Report on the 45th European Conference on Information Retrieval (ECIR 2023)
ACM SIGIR Forum (SIGIR), Volume 57, Issue 1, June 2023, Article No.: 11, Pages 1–11, https://doi.org/10.1145/3636341.3636355
This paper reports on the 45th European Conference on Information Retrieval (ECIR 2023), held in Dublin, Ireland, during April 2--6, 2023. The conference was the largest ECIR ever, and brought together hundreds of researchers from Europe and abroad. For ...
- research-article, March 2022
Report on the 15th round of NII testbeds and community for information access research (NTCIR-15)
ACM SIGIR Forum (SIGIR), Volume 55, Issue 2, December 2021, Article No.: 21, Pages 1–6, https://doi.org/10.1145/3527546.3527570
This is a report on the NTCIR-15 conference held online in December 2020. NTCIR is a sesquiannual research project designed to evaluate various information access technologies, including information retrieval, information recommendation, question ...
- research-article, March 2022
Report on the 1st simulation for information retrieval workshop (Sim4IR 2021) at SIGIR 2021
ACM SIGIR Forum (SIGIR), Volume 55, Issue 2, December 2021, Article No.: 10, Pages 1–16, https://doi.org/10.1145/3527546.3527559
Simulation is used as a low-cost and repeatable means of experimentation. As Information Retrieval (IR) researchers, we are no strangers to the idea of using simulation within our own field---such as the traditional means of IR system evaluation as ...
- research-article, August 2021
Effective collection construction for information retrieval evaluation and optimization
ACM SIGIR Forum (SIGIR), Volume 54, Issue 2, December 2020, Article No.: 15, Pages 1–2, https://doi.org/10.1145/3483382.3483401
The availability of test collections in the Cranfield paradigm has significantly benefited the development of models, methods and tools in information retrieval. Such test collections typically consist of a set of topics, a document collection and a set of ...
- research-article, August 2021
Cheap IR evaluation: fewer topics, no relevance judgements, and crowdsourced assessments
ACM SIGIR Forum (SIGIR), Volume 54, Issue 2, December 2020, Article No.: 14, Pages 1–2, https://doi.org/10.1145/3483382.3483400
To evaluate Information Retrieval (IR) effectiveness, a possible approach is to use test collections, which are composed of a collection of documents, a set of descriptions of information needs (called topics), and a set of relevant documents for each ...
- research-article, August 2021
ARQMath: a new benchmark for math-aware CQA and math formula retrieval
ACM SIGIR Forum (SIGIR), Volume 54, Issue 2, December 2020, Article No.: 4, Pages 1–9, https://doi.org/10.1145/3483382.3483388
The Answer Retrieval for Questions on Math (ARQMath) evaluation was run for the first time at CLEF 2020. ARQMath is the first Community Question Answering (CQA) shared task for math, retrieving existing answers from Math Stack Exchange (MSE) that can ...
- research-article, August 2021
Proof by experimentation?: towards better IR research
ACM SIGIR Forum (SIGIR), Volume 54, Issue 2, December 2020, Article No.: 2, Pages 1–4, https://doi.org/10.1145/3483382.3483385
Most IR experiments lack both internal and external validity. Top performance is mostly an illusion, given the lack of solid statistical evidence. Thus, reviewers should ignore performance numbers in their judgement, and pay more attention to proper ...
- research-article, August 2021
Coopetition in IR research
ACM SIGIR Forum (SIGIR), Volume 54, Issue 2, December 2020, Article No.: 1, Pages 1–3, https://doi.org/10.1145/3483382.3483384
Coopetition is defined as competitors cooperating for the common good, an apt description of community evaluations such as TREC that build infrastructure to support information retrieval (IR) research. This note summarizes the main points of a ...
- research-article, July 2021
Report on the CHIIR 2021 third workshop on evaluation of personalisation in information retrieval (WEPIR 2021)
ACM SIGIR Forum (SIGIR), Volume 55, Issue 1, June 2021, Article No.: 7, Pages 1–11, https://doi.org/10.1145/3476415.3476422
The Third Workshop on Evaluation of Personalisation in Information Retrieval (WEPIR 2021) was held in conjunction with the ACM SIGIR Conference on Human Information Interaction & Retrieval (CHIIR 2021) in Canberra, Australia, as a virtual event. WEPIR ...
- research-article, February 2021
On Fuhr's guideline for IR evaluation
ACM SIGIR Forum (SIGIR), Volume 54, Issue 1, June 2020, Article No.: 12, Pages 1–8, https://doi.org/10.1145/3451964.3451976
In the December 2017 issue of SIGIR Forum, Fuhr presented ten "Thou Shalt Not"s (i.e., warnings against bad practices) for IR experimenters. While his article provides a lot of good materials for discussion, the objective of the present article is to ...
- research-article, February 2021
TREC-COVID: constructing a pandemic information retrieval test collection
- Ellen Voorhees,
- Tasmeer Alam,
- Steven Bedrick,
- Dina Demner-Fushman,
- William R. Hersh,
- Kyle Lo,
- Kirk Roberts,
- Ian Soboroff,
- Lucy Lu Wang
ACM SIGIR Forum (SIGIR), Volume 54, Issue 1, June 2020, Article No.: 1, Pages 1–12, https://doi.org/10.1145/3451964.3451965
TREC-COVID is a community evaluation designed to build a test collection that captures the information needs of biomedical researchers using the scientific literature during a pandemic. One of the key characteristics of pandemic search is the ...
- column, August 2017
Relevance-Based Language Models
ACM SIGIR Forum (SIGIR), Volume 51, Issue 2, July 2017, Pages 260–267, https://doi.org/10.1145/3130348.3130376
We explore the relation between classical probabilistic models of information retrieval and the emerging language modeling approaches. It has long been recognized that the primary obstacle to effective performance of classical models is the need to ...
- column, August 2017
IR evaluation methods for retrieving highly relevant documents
ACM SIGIR Forum (SIGIR), Volume 51, Issue 2, July 2017, Pages 243–250, https://doi.org/10.1145/3130348.3130374
This paper proposes evaluation methods based on the use of non-dichotomous relevance judgements in IR experiments. It is argued that evaluation methods should credit IR methods for their ability to retrieve highly relevant documents. This is desirable ...
- column, August 2017
Evaluating Evaluation Measure Stability
ACM SIGIR Forum (SIGIR), Volume 51, Issue 2, July 2017, Pages 235–242, https://doi.org/10.1145/3130348.3130373
This paper presents a novel way of examining the accuracy of the evaluation measures commonly used in information retrieval experiments. It validates several of the rules-of-thumb experimenters use, such as the number of queries needed for a good ...