DOI: 10.1145/3498366.3505832 (short paper)

A User Study on Clarifying Comparative Questions

Published: 14 March 2022

Abstract

Vague or ambiguous queries can make it difficult for a search engine to correctly interpret a user’s underlying information need. A relatively “simple” remedy is to diversify the results so that they cover different interpretations; in more “conversational” search interfaces, the user can instead be prompted to clarify the original request. We study clarification in the scenario of comparative questions, i.e., questions that ask to compare several options. In our experiment, which reflects a conversational search interface with a clarification component, 70% of the participants found clarifications useful for retrieving relevant results for questions with unclear comparison aspects (e.g., “Which is better, Bali or Phuket?”) or without explicit comparison objects and aspects (e.g., “What is the best antibiotic?”).


Cited By

  • (2024) Center-retained fine-tuning for conversational question ranking through unsupervised center identification. Information Processing and Management 61(2). DOI: 10.1016/j.ipm.2023.103578


Published In

CHIIR '22: Proceedings of the 2022 Conference on Human Information Interaction and Retrieval
March 2022, 399 pages
ISBN: 9781450391863
DOI: 10.1145/3498366


      Publisher

Association for Computing Machinery, New York, NY, United States


      Author Tags

      1. Clarification
      2. Comparative questions
      3. Comparison aspects
      4. Comparison objects
      5. User experience
      6. User study

      Qualifiers

      • Short-paper
      • Research
      • Refereed limited

Acceptance Rates

Overall acceptance rate: 55 of 163 submissions, 34%

