DOI: 10.1145/3626772.3657982
Short paper · Open access

Gen-IR @ SIGIR 2024: The Second Workshop on Generative Information Retrieval

Published: 11 July 2024

Abstract

    Generative information retrieval (Gen-IR) is a fast-growing interdisciplinary research area that investigates how to leverage advances in generative artificial intelligence (AI) to improve information retrieval systems. Gen-IR has attracted interest from the information retrieval, natural language processing, and machine learning communities, among others. Since the field emerged last year, Gen-IR systems have launched at a rapid pace and are now widely used, and interest across academia and industry is expected to keep growing as new research challenges and application opportunities arise. The goal of this workshop, The Second Workshop on Generative Information Retrieval (Gen-IR @ SIGIR 2024), is to provide an interactive venue for exploring a broad range of foundational and applied Gen-IR research. The workshop focuses on tasks such as generative document retrieval, grounded answer generation, generative recommendation, and generative knowledge graphs, all through the lens of model training, model behavior, and broader issues. It is designed to be highly interactive, favoring panel discussions, poster sessions, and roundtable discussions over one-sided keynotes and paper talks.



      Published In

      SIGIR '24: Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval
      July 2024, 3164 pages
      ISBN: 9798400704314
      DOI: 10.1145/3626772
      This work is licensed under a Creative Commons Attribution-NoDerivs 4.0 International License.

      Publisher

      Association for Computing Machinery, New York, NY, United States


      Author Tags

      1. generative models
      2. information retrieval
      3. large language models


      Conference

      SIGIR 2024

      Acceptance Rates

      Overall Acceptance Rate: 792 of 3,983 submissions, 20%
