DOI: 10.1145/3209978.3210186

Fusion in Information Retrieval: SIGIR 2018 Half-Day Tutorial

Published: 27 June 2018

Abstract

Fusion is an important and central concept in Information Retrieval. The goal of fusion methods is to merge different sources of information so as to address a retrieval task. For example, in the ad hoc retrieval setting, fusion methods have been applied to merge multiple document lists retrieved for a query. The lists could be retrieved using different query representations, document representations, ranking functions, and corpora. The goal of this half-day, intermediate-level tutorial is to provide a methodological view of the theoretical foundations of fusion approaches, the numerous fusion methods that have been devised, and the variety of applications to which fusion techniques have been applied.
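
To make the ad hoc fusion setting concrete, the Python sketch below implements reciprocal rank fusion (RRF), one of the unsupervised rank-fusion methods surveyed in the tutorial (cf. Cormack et al. [29]). It is an illustrative sketch only: the run contents and the constant k = 60 are assumptions for the example, not material from the tutorial itself.

    # Minimal sketch of reciprocal rank fusion (RRF), assuming each run is an
    # ordered list of document ids (best document first) retrieved for one query.
    def rrf_fuse(runs, k=60):
        """Merge several ranked document lists into a single fused ranking."""
        scores = {}
        for run in runs:
            for rank, doc_id in enumerate(run, start=1):
                # Every list contributes 1 / (k + rank) for each document it retrieves;
                # documents ranked highly by several lists accumulate the largest scores.
                scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
        return sorted(scores, key=scores.get, reverse=True)

    # Hypothetical runs for the same query, e.g. produced by different ranking
    # functions or query representations over the same corpus.
    run_a = ["d3", "d1", "d7", "d2"]
    run_b = ["d1", "d7", "d5", "d3"]
    print(rrf_fuse([run_a, run_b]))  # ['d1', 'd3', 'd7', 'd5', 'd2']

Score-based fusion methods such as CombSUM and CombMNZ (cf. Fox and Shaw [40]) follow the same outline but sum normalized retrieval scores rather than reciprocal ranks.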

References

[1]
N. Ailon . 2010. Aggregation of Partial Rankings, p-Ratings and Top-m Lists. Algorithmica, Vol. 57, 2 (2010), 284--300.
[2]
G. Amati, C. Carpineto, and G. Romano . 2004. Query difficulty, robustness, and selective application of query expansion Proc. SIGIR. 127--137.
[3]
Y. Anava, A. Shtok, O. Kurland, and E. Rabinovich . 2016. A Probabilistic Fusion Framework. In Proc. CIKM. 1463--1472.
[4]
A. Arampatzis and J. Kamps . 2009. A signal-to-noise approach to score normalization. Proc. CIKM. 797--806.
[5]
A. Arampatzis and S. Robertson . 2011. Modeling score distributions in information retrieval. Inf. Retr., Vol. 14, 1 (2011), 26--46.
[6]
J. A. Aslam and M. Montague . 2001. Models for metasearch Proc. SIGIR. 276--284.
[7]
J. A. Aslam and V. Pavlu . 2007. Query Hardness Estimation Using Jensen-Shannon Divergence Among Multiple Scoring Functions Proc. ECIR. 198--209.
[8]
J. A. Aslam, V. Pavlu, and R. Savell . 2003. A unified model for metasearch and the efficient evaluation of retrieval systems via the hedge algorithm. In Proc. SIGIR. 393--394.
[9]
J. A. Aslam, V. Pavlu, and E. Yilmaz . 2005. Measure-based Metasearch. In Proc. SIGIR. 571--572.
[10]
P. Bailey, A. Moffat, F. Scholer, and P. Thomas . 2016. UQV100: A test collection with query variability Proc. SIGIR. 725--728.
[11]
P. Bailey, A. Moffat, F. Scholer, and P. Thomas . 2017. Retrieval consistency in the presence of query variations Proc. SIGIR. 395--404.
[12]
N. Balasubramanian and J. Allan . 2010. Learning to select rankers. In Proc. SIGIR. 855--856.
[13]
J. Bartholdi, C. A. Tovey, and M. A. Trick . 1989. Voting schemes for which it can be difficult to tell who won the election. Social Choice and Welfare Vol. 6, 2 (1989), 157--165.
[14]
S. M. Beitzel, E. C. Jensen, A. Chowdhury, O. Frieder, D. A. Grossman, and N. Goharian . 2003. Disproving the Fusion Hypothesis: An Analysis of Data Fusion via Effective Information Retrieval Strategies. In Proc. SAC. 823--827.
[15]
S. M. Beitzel, E. C. Jensen, O. Frieder, A. Chowdhury, and G. Pass . 2005. Surrogate scoring for improved metasearch precision Proc. SIGIR. 583--584.
[16]
N. J. Belkin, C. Cool, W. B. Croft, and J. P. Callan . 1993. The Effect of Multiple Query Variations on Information Retrieval System Performance Proc. SIGIR. 339--346.
[17]
N. J. Belkin, P. Kantor, E. A. Fox, and J. A. Shaw . 1995. Combining the evidence of multiple query representations for information retrieval. Inf. Proc. & Man., Vol. 31, 3 (1995), 431--448.
[18]
R. Benham and J. S. Culpepper. 2017. Risk-reward Trade-offs in Rank Fusion. In Proc. ADCS. Article 1, 1:1--1:8.
[19]
R. Benham, L. Gallagher, J. Mackenzie, T. T. Damessie, R.-C. Chen, F. Scholer, A. Moffat, and J. S. Culpepper . 2017. RMIT at the TREC 2017 CORE Track. In Proc. TREC.
[20]
F. Brandt, V. Conitzer, U. Endriss, J. Lang, and A. D. Procaccia (Eds.). 2016. Handbook of Computational Social Choice. Cambridge University Press.
[21]
C. Buckley, D. Dimmick, I. Soboroff, and E. M. Voorhees . 2007. Bias and the limits of pooling for large collections. Inf. Retr. (2007), 491--508.
[22]
C. Buckley and J. Walz . 1999. The TREC-8 query track. In Proc. TREC.
[23]
C. J. C. Burges. 2010. From RankNet to LambdaRank to LambdaMART: An Overview. Technical Report MSR-TR-2010-82, Microsoft Research.
[24]
J. Callan. 2000. Distributed information retrieval. In Advances in Information Retrieval, W. B. Croft (Ed.). Kluwer Academic Publishers, Chapter 5, 127--150.
[25]
B. Carterette, V. Pavlu, E. Kanoulas, J. A. Aslam, and J. Allan . 2008. Evaluation over thousands of queries. In Proc. SIGIR. 651--658.
[26]
R.-C. Chen, L. Gallagher, R. Blanco, and J. S. Culpepper . 2017. Efficient cost-aware cascade ranking in multi-stage retrieval Proc. SIGIR. 445--454.
[27]
F. M. Choudhury, Z. Bao, J. S. Culpepper, and T. Sellis . 2017. Monitoring the Top-m Rank Aggregation of Spatial Objects in Streaming Queries Proc. ICDE. 585--596.
[28]
K. Collins-Thompson and J. Callan . 2007. Estimation and use of uncertainty in pseudo-relevance feedback Proc. SIGIR. 303--310.
[29]
G. V. Cormack, C. L. A. Clarke, and S. Büttcher . 2009. Reciprocal rank fusion outperforms Condorcet and individual rank learning methods Proc. SIGIR. 758--759.
[30]
N. Craswell, D. Hawking, and P. B. Thistlewaite . 1999. Merging results from isolated search engines. In Proc. ADC. 189--200.
[31]
W. B. Croft (Ed.). 2000a. Advances in Information Retrieval: Recent Research from the Center for Intelligent Information Retrieval. Number 7 in The Kluwer International Series on Information Retrieval. Kluwer.
[32]
W. B. Croft. 2000b. Combining approaches to information retrieval. In Advances in Information Retrieval [31], Chapter 1, 1--36.
[33]
S. Cronen-Townsend, Y. Zhou, and W. B. Croft . 2004. A Language Modeling Framework for Selective Query Expansion. Technical Report IR-338. Center for Intelligent Information Retrieval, University of Massachusetts.
[34]
J. C. de Borda. 1784. Mémoire sur les élections au scrutin. Histoire de l'Académie Royale des Sciences pour 1781 (Paris, 1784).
[35]
F. Diaz . 2007. Regularizing query-based retrieval scores. Inf. Retr., Vol. 10, 6 (2007), 531--562.
[36]
T. G. Dietterich. 2000. Ensemble methods in machine learning. In International Workshop on Multiple Classifier Systems. Springer, 1--15.
[37]
B. T. Dinçer, C. Macdonald, and I. Ounis. 2014. Hypothesis testing for the risk-sensitive evaluation of retrieval systems. Proc. SIGIR. 23--32.
[38]
C. Dwork, R. Kumar, M. Naor, and D. Sivakumar . 2001. Rank Aggregation Methods for the Web. In Proc. WWW. 613--622.
[39]
M. Efron . 2009. Generative model-based metasearch for data fusion in information retrieval Proc. JCDL. 153--162.
[40]
E. A. Fox and J. A. Shaw . 1994. Combination of multiple searches. In Proc. TREC.
[41]
D. F. Hsu and I. Taksa. 2005. Comparing rank and score combination methods for data fusion in information retrieval. Inf. Retr., Vol. 8, 3 (2005), 449--480.
[42]
L. Gallagher, J. Mackenzie, R. Benham, R.-C. Chen, F. Scholer, and J. S. Culpepper . 2017. RMIT at the NTCIR-13 We Want Web task. Proc. NTCIR.
[43]
N. P. Gopalan and K. Batri. 2007. Adaptive Selection of Top-m Retrieval Strategies for Data Fusion in Information Retrieval. Intl. J. of Soft Computing, Vol. 2, 1 (2007).
[44]
S. Huo, M. Zhang, Y. Liu, and S. Ma . 2014. Improving tail query performance by fusion model. Proc. CIKM. 559--658.
[45]
A. Juárez-González, M. Montes-y-Gómez, L. V. Pineda, and D. O. Arroyo . 2009. On the Selection of the Best Retrieval Result Per Query - An Alternative Approach to Data Fusion Proc. FQAS. 111--121.
[46]
A. Juárez-González, M. Montes-y-Gómez, L. V. Pineda, D. P. Avendaño, and M. A. Pérez-Coutiño. 2010. Selecting the N-Top Retrieval Result Lists for an Effective Data Fusion. Proc. CICLing. 580--589.
[47]
Y. Kim, J. Callan, J. S. Culpepper, and A. Moffat . 2017. Efficient distributed selective search. Inf. Retr., Vol. 20, 3 (2017), 221--252.
[48]
A. Klementiev, D. Roth, and K. Small . 2008. Unsupervised rank aggregation with distance-based models Proc. ICML. 472--479.
[49]
A. K. Kozorovitzky and O. Kurland . 2009. From "Identical" to "Similar": Fusing Retrieved Lists Based on Inter-document Similarities Proc. ICTIR. 212--223.
[50]
A. K. Kozorovitzky and O. Kurland. 2011a. Cluster-based fusion of retrieved lists. In Proc. SIGIR. 893--902.
[51]
A. K. Kozorovitzky and O. Kurland. 2011b. From "Identical" to "Similar": Fusing Retrieved Lists Based on Inter-document Similarities. J. of AI Res., Vol. 41 (2011).
[52]
K.-L. Kwok, L. Grunfeld, and P. Deng . 2005. Improving weak ad-hoc retrieval by web assistance and data fusion Proc. AIRS. 17--30.
[53]
M. Lalmas . 2002. A Formal Model for Data Fusion. In Proc. FQAS. 274--288.
[54]
G. Lebanon and J. D. Lafferty . 2002. Cranking: Combining Rankings Using Conditional Probability Models on Permutations Proc. ICML. 363--370.
[55]
C.-J. Lee, Q. Ai, W. B. Croft, and D. Sheldon . 2015. An Optimization Framework for Merging Multiple Result Lists Proc. CIKM. 303--312.
[56]
J. H. Lee . 1995. Combining multiple evidence from different properties of weighting schemes Proc. SIGIR. 180--188.
[57]
J. H. Lee . 1997. Analyses of multiple evidence combination. In Proc. SIGIR. 267--276.
[58]
O. Levi, F. Raiber, O. Kurland, and I. Guy . 2016. Selective Cluster-Based Document Retrieval. In Proc. CIKM. 1473--1482.
[59]
J. Li, C. Huang, X. Wang, and S. Wu. 2015. Balancing efficiency and effectiveness for fusion-based search engines in the 'big data' environment. Information Research, 21(2), paper 710 (2015).
[60]
S. Liang and M. de Rijke . 2015. Burst-aware data fusion for microblog search. Inf. Proc. & Man., Vol. 51, 2 (2015), 89--113.
[61]
S. Liang, M. de Rijke, and M. Tsagkias . 2013. Late Data Fusion for Microblog Search. In Proc. ECIR. 743--746.
[62]
S. Liang, I. Markov, Z. Ren, and M. de Rijke . 2018. Manifold Learning for Rank Aggregation. In Proc. WWW. 1735--1744.
[63]
S. Liang, Z. Ren, and M. de Rijke. 2014a. Fusion helps diversification. In Proc. SIGIR. 303--312.
[64]
S. Liang, Z. Ren, and M. de Rijke. 2014b. The Impact of Semantic Document Expansion on Cluster-Based Fusion for Microblog Search. Proc. ECIR. 493--499.
[65]
D. Lillis, F. Toolan, R. W. Collier, and J. Dunnion . 2006. ProbFuse: a probabilistic approach to data fusion. Proc. SIGIR. 139--146.
[66]
D. Lillis, F. Toolan, R. W. Collier, and J. Dunnion . 2008. Extending Probabilistic Data Fusion Using Sliding Windows Proc. ECIR. 358--369.
[67]
D. Lillis, L. Zhang, F. Toolan, R. W. Collier, D. Leonard, and J. Dunnion . 2010. Estimating Probabilities for Effective Data Fusion Proc. SIGIR. 347--354.
[68]
D. E. Losada, J. Parapar, and A. Barreiro . 2017. Multi-armed bandits for adjudicating documents in pooling-based evaluation of information retrieval systems. Inf. Proc. & Man., Vol. 53, 5 (2017), 1005--1025.
[69]
X. Lu, A. Moffat, and J. S. Culpepper. 2016a. The effect of pooling and evaluation depth on IR metrics. Inf. Retr., Vol. 19, 4 (2016), 416--445.
[70]
X. Lu, A. Moffat, and J. S. Culpepper. 2016b. Modeling relevance as a function of retrieval rank. Proc. AIRS. 3--15.
[71]
X. Lu, A. Moffat, and J. S. Culpepper . 2017. Can deep effectiveness metrics be evaluated using shallow judgment pools? Proc. SIGIR. 35--44.
[72]
J. Mackenzie, F. M. Choudhury, and J. S. Culpepper . 2015. Efficient location-aware web search. In Proc. ADCS. 4.1--4.8.
[73]
R. Manmatha and H. Sever . 2002. A formal approach to score normalization for meta-search Proc. of HLT. 98--103.
[74]
I. Markov, A. Arampatzis, and F. Crestani . 2012. Unsupervised linear score normalization revisited. Proc. SIGIR. 1161--1162.
[75]
G. Markovits, A. Shtok, O. Kurland, and D. Carmel . 2012. Predicting query performance for fusion-based retrieval Proc. CIKM.
[76]
D. Metzler and W. B. Croft . 2005. A Markov random field model for term dependencies Proc. SIGIR. 472--479.
[77]
M. Montague and J. A. Aslam . 2002. Condorcet fusion for improved retrieval. In Proc. CIKM. 538--548.
[78]
M. H. Montague and J. A. Aslam . 2001. Relevance Score Normalization for Metasearch. In Proc. CIKM. 427--433.
[79]
A. Mourao, F. Martins, and J. Magalhaes . 2013. NovaSearch at TREC 2013 Federated Web Search Track: Experiments with rank fusion. Proc. TREC.
[80]
A. Mourao, F. Martins, and J. Magalhaes . 2014. Inverse square rank fusion for multimodal search. Proc. CBMI. 1--6.
[81]
K. B. Ng and P. P. Kantor . 1998. An Investigation of the Preconditions for Effective Data Fusion in Information Retrieval: A Pilot Study. (1998).
[82]
D. Parikh and R. Polikar . 2007. An ensemble-based incremental learning approach to data fusion. IEEE Trans. on Systems, Man, and Cybernetics, Part B (Cybernetics), Vol. 37, 2 (2007), 437--450.
[83]
T. Qin, X. Geng, and T.-Y. Liu . 2010. A New Probabilistic Model for Rank Aggregation. In Proc. NIPS. 1948--1956.
[84]
E. Rabinovich, O. Rom, and O. Kurland . 2014. Utilizing relevance feedback in fusion-based retrieval Proc. SIGIR. 313--322.
[85]
F. Raiber and O. Kurland . 2014. Query-performance prediction: setting the expectations straight Proc. SIGIR. 13--22.
[86]
M. Sanderson . 2010. Test Collection Based Evaluation of Information Retrieval Systems. Found. Trends in Inf. Ret. Vol. 4, 4 (2010), 247--375.
[87]
S. B. Selvadurai. 2007. Implementing a metasearch framework with content-directed result merging. Master's thesis, North Carolina State University.
[88]
D. Sheldon, M. Shokouhi, M. Szummer, and N. Craswell . 2011. LambdaMerge: Merging the results of query reformulations Proc. WSDM. 795--804.
[89]
M. Shokouhi . 2007. Segmentation of Search Engine Results for Effective Data-Fusion Proc. ECIR. 185--197.
[90]
M. Shokouhi and L. Si . 2011. Federated Search. Found. Trends in Inf. Ret. Vol. 5, 1 (2011), 1--102.
[91]
X. M. Shou and M. Sanderson . 2002. Experiments on data fusion using headline information Proc. SIGIR. 413--414.
[92]
A. Shtok, O. Kurland, and D. Carmel . 2016. Query Performance Prediction Using Reference Lists. ACM Trans. Inf. Sys., Vol. 34, 4 (2016), 19:1--19:34.
[93]
S. Tulyakov, S. Jaeger, V. Govindaraju, and D. S. Doermann . 2008. Review of Classifier Combination Methods. Machine Learning in Document Analysis and Recognition. 361--386.
[94]
C. C. Vogt . 2000. How much more is better? Characterising the effects of adding more IR Systems to a combination. In Proc. RIAO. 457--475.
[95]
C. C. Vogt and G. W. Cottrell . 1998. Predicting the Performance of Linearly Combined IR Systems Proc. SIGIR. 190--196.
[96]
C. C. Vogt and G. W. Cottrell . 1999. Fusion via linear combination of scores. Inf. Retr., Vol. 1, 3 (1999), 151--173.
[97]
E. M. Voorhees and D. K. Harman . 2005. TREC: Experiment and Evaluation in Information Retrieval. The MIT Press.
[98]
W. Webber, A. Moffat, and J. Zobel . 2010. The Effect of Pooling and Evaluation Depth on Metric Stability Proc. EVIA. 7--15.
[99]
S. Wu . 2007. A Geometric probabilistic framework for data fusion in information retrieval Proc. FUSION. 1--8.
[100]
S. Wu . 2009. Applying statistical principles to data fusion in information retrieval. Expert Syst. Appl., Vol. 36, 2 (2009), 2997--3006.
[101]
S. Wu. 2012a. Applying the data fusion technique to blog opinion retrieval. Expert Syst. Appl., Vol. 39, 1 (2012), 1346--1353.
[102]
S. Wu. 2012b. Linear combination of component results in information retrieval. Data Knowl. Eng., Vol. 71, 1 (2012), 114--126.
[103]
S. Wu. 2013. The weighted Condorcet fusion in information retrieval. Inf. Proc. & Man., Vol. 49, 1 (2013), 108--122.
[104]
S. Wu, Y. Bi, and S. I. McClean . 2007. Regression Relevance Models for Data Fusion. In Proc. DEXA. 264--268.
[105]
S. Wu, Y. Bi, and X. Zeng . 2011. The Linear Combination Data Fusion Method in Information Retrieval Proc. DEXA. 219--233.
[106]
S. Wu, Y. Bi, X. Zeng, and L. Han . 2009. Assigning appropriate weights for the linear combination data fusion method in information retrieval. Inf. Proc. & Man., Vol. 45, 4 (2009), 413--426.
[107]
S. Wu and F. Crestani . 2002. Data fusion with estimated weights. In Proc. CIKM. 648--651.
[108]
S. Wu, F. Crestani, and Y. Bi . 2006. Evaluating Score Normalization Methods in Data Fusion Proc. AIRS. 642--648.
[109]
S. Wu and C. Huang . 2014. Search result diversification via data fusion. In Proc. SIGIR. 827--830.
[110]
S. Wu, J. Li, X. Zeng, and Y. Bi . 2014. Adaptive data fusion methods in information retrieval. JASIST, Vol. 65, 10 (2014), 2048--2061.
[111]
S. Wu and S. McClean . 2006. Performance prediction of data fusion for information retrieval. Inf. Proc. & Man., Vol. 42, 4 (2006), 899--915.
[112]
J. Xia, C. Xu, and S. Wu . 2016. Differential Evolution-Based Fusion and Its Properties for Web Search Proc. WISA. 67--70.
[113]
X. Xue and W. B. Croft . 2013. Modeling reformulation using query distributions. ACM Trans. Inf. Sys., Vol. 31, 2 (2013), 6:1--6:34.
[114]
M. Yasukawa, J. S. Culpepper, and F. Scholer. 2015. Data fusion for Japanese term and character n-gram search. Proc. ADCS. 10.1--10.4.
[115]
H. P. Young . 1988. Condorcet's theory of voting. American Political Science Review Vol. 82, 4 (1988), 1231--1244.
[116]
C. Zhang and Y. Ma . 2012. Ensemble machine learning: methods and applications. Springer.


    Published In

    SIGIR '18: The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval
    June 2018
    1509 pages
    ISBN: 9781450356572
    DOI: 10.1145/3209978
    Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 27 June 2018


    Author Tags

    1. ad hoc retrieval
    2. fusion
    3. retrieval methods

    Qualifiers

    • Tutorial

    Funding Sources

    • ARC (Australian Research Council)

    Conference

    SIGIR '18

    Acceptance Rates

    SIGIR '18 Paper Acceptance Rate: 86 of 409 submissions, 21%
    Overall Acceptance Rate: 792 of 3,983 submissions, 20%


    Cited By

    • (2024) Leveraging LLMs for Unsupervised Dense Retriever Ranking. Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, 1307-1317. DOI: 10.1145/3626772.3657798. Online publication date: 10-Jul-2024.
    • (2024) Towards Reliable and Factual Response Generation: Detecting Unanswerable Questions in Information-Seeking Conversations. Advances in Information Retrieval, 336-344. DOI: 10.1007/978-3-031-56063-7_25. Online publication date: 23-Mar-2024.
    • (2023) Retrieval-based Diagnostic Decision Support (Preprint). JMIR Medical Informatics. DOI: 10.2196/50209. Online publication date: 25-Jun-2023.
    • (2023) Selective Query Processing: A Risk-Sensitive Selection of Search Configurations. ACM Transactions on Information Systems, 42(1), 1-35. DOI: 10.1145/3608474. Online publication date: 21-Aug-2023.
    • (2023) On the problem of recommendation for sensitive users and influential items. Knowledge-Based Systems, 275(C). DOI: 10.1016/j.knosys.2023.110699. Online publication date: 5-Sep-2023.
    • (2023) Data Fusion Performance Prophecy: A Random Forest Revelation. Information Integration and Web Intelligence, 192-200. DOI: 10.1007/978-3-031-48316-5_20. Online publication date: 22-Nov-2023.
    • (2023) Active Semantic Localization with Graph Neural Embedding. Pattern Recognition, 216-230. DOI: 10.1007/978-3-031-47634-1_17. Online publication date: 5-Nov-2023.
    • (2022) Explainable Multimedia Feature Fusion for Medical Applications. Journal of Imaging, 8(4), 104. DOI: 10.3390/jimaging8040104. Online publication date: 8-Apr-2022.
    • (2022) ranx.fuse: A Python Library for Metasearch. Proceedings of the 31st ACM International Conference on Information & Knowledge Management, 4808-4812. DOI: 10.1145/3511808.3557207. Online publication date: 17-Oct-2022.
    • (2022) Preferences on a Budget: Prioritizing Document Pairs when Crowdsourcing Relevance Judgments. Proceedings of the ACM Web Conference 2022, 319-327. DOI: 10.1145/3485447.3511960. Online publication date: 25-Apr-2022.
