DOI: 10.1145/3511808.3557501

Fairness of Machine Learning in Search Engines

Published: 17 October 2022

Abstract

Fairness has gained increasing importance in a variety of AI and machine learning contexts. As one of the most ubiquitous applications of machine learning, search engines mediate much of the information experience of members of society. Consequently, understanding and mitigating potential algorithmic unfairness in search has become crucial for both users and systems. In this tutorial, we will introduce the fundamentals of fairness in machine learning, covering both supervised learning tasks such as classification and ranking, and unsupervised learning tasks such as clustering. We will then present existing work on fairness in search engines, including fairness definitions, evaluation metrics, and taxonomies of methodologies. This tutorial will help orient information retrieval researchers to algorithmic fairness, provide an introduction to the growing literature on this topic, and gather researchers and practitioners interested in this research direction.
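To make the evaluation metrics mentioned above concrete, here is a minimal Python sketch (not drawn from the tutorial itself) of one widely used family of ranking-fairness measures: position-discounted group exposure, in the spirit of exposure-based fairness of rankings. The item ids, group labels, and the function name are hypothetical; the log-based position discount is the common DCG-style choice.

```python
import math

def group_exposure(ranking, groups):
    """Average position-discounted exposure per group in a ranked list.

    ranking: list of item ids, best first.
    groups:  dict mapping item id -> group label.
    The exposure of rank i (0-based) uses the DCG-style discount 1/log2(i+2),
    so top positions contribute more attention than lower ones.
    """
    totals, counts = {}, {}
    for i, item in enumerate(ranking):
        g = groups[item]
        totals[g] = totals.get(g, 0.0) + 1.0 / math.log2(i + 2)
        counts[g] = counts.get(g, 0) + 1
    # Average exposure per group; a fairness audit compares these values.
    return {g: totals[g] / counts[g] for g in totals}

# Hypothetical 4-item ranking with two groups, F and M.
exp = group_exposure(["a", "b", "c", "d"],
                     {"a": "F", "b": "M", "c": "F", "d": "M"})
disparity = abs(exp["F"] - exp["M"])  # smaller is "fairer" under this metric
```

A fairness-aware ranker would then constrain or regularize this disparity, trading it off against relevance; the tutorial surveys the taxonomy of such pre-, in-, and post-processing approaches.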


Cited By

  • (2024) "FATE: Learning Effective Binary Descriptors With Group Fairness." IEEE Transactions on Image Processing, 33, 3648-3661. DOI: 10.1109/TIP.2024.3406134
  • (2024) "Mitigating Demographic Bias of Federated Learning Models via Robust-Fair Domain Smoothing: A Domain-Shifting Approach." 2024 IEEE 44th International Conference on Distributed Computing Systems (ICDCS), 785-796. DOI: 10.1109/ICDCS60910.2024.00078
  • (2023) "Fair oversampling technique using heterogeneous clusters." Information Sciences, 640:C. DOI: 10.1016/j.ins.2023.119059

      Published In

CIKM '22: Proceedings of the 31st ACM International Conference on Information & Knowledge Management
October 2022, 5274 pages
ISBN: 9781450392365
DOI: 10.1145/3511808
General Chairs: Mohammad Al Hasan, Li Xiong

Publisher

Association for Computing Machinery, New York, NY, United States



      Author Tags

      1. fairness
      2. machine learning
      3. search engines

      Qualifiers

      • Tutorial

      Conference

      CIKM '22

      Acceptance Rates

      CIKM '22 Paper Acceptance Rate 621 of 2,257 submissions, 28%;
      Overall Acceptance Rate 1,861 of 8,427 submissions, 22%



