DOI: 10.1145/3593013.3594075
Research Article

Disentangling and Operationalizing AI Fairness at LinkedIn

Published: 12 June 2023

Abstract

Operationalizing AI fairness at LinkedIn’s scale is challenging not only because there are multiple mutually incompatible definitions of fairness, but also because determining what is fair depends on the specifics and context of the product where AI is deployed. Moreover, AI practitioners need clarity on which fairness expectations should be addressed at the AI level. In this paper, we present the evolving AI fairness framework used at LinkedIn to address these three challenges. The framework disentangles AI fairness by separating out equal treatment and equitable product expectations. Rather than imposing a trade-off between these two often-opposing interpretations of fairness, the framework provides clear guidelines for operationalizing equal AI treatment complemented by a product equity strategy. This paper focuses on the equal AI treatment component of LinkedIn’s AI fairness framework, shares the principles that support it, and illustrates their application through a case study. We hope this paper will encourage other big tech companies to join us in sharing their approaches to operationalizing AI fairness at scale, so that together we can keep advancing this constantly evolving field.

Published In

FAccT '23: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency
June 2023
1929 pages
ISBN: 9798400701924
DOI: 10.1145/3593013

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. AI Fairness strategy
  2. equity
  3. large-organizational process
  4. operationalization

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

FAccT '23
