Research article · DOI: 10.1145/3617694.3623256

Unraveling the Interconnected Axes of Heterogeneity in Machine Learning for Democratic and Inclusive Advancements

Published: 30 October 2023

Abstract

The growing use of machine learning (ML) in decision-making processes raises questions about its benefits to society. In this study, we identify and analyze three axes of heterogeneity that significantly influence the trajectory of ML products: i) values, culture, and regulations; ii) data composition; and iii) resource and infrastructure capacity. We demonstrate how these axes are interdependent and mutually influence one another, emphasizing the need to consider and address them jointly. Unfortunately, the current research landscape falls short in this regard, often failing to adopt a holistic approach. We examine the prevalent practices and methodologies that skew these axes in favor of a select few, resulting in power concentration, homogenized control, and increased dependency. We discuss how this fragmented study of the three axes poses a significant challenge, leading to an impractical solution space that does not reflect real-world scenarios. Addressing these issues is crucial to fostering a more comprehensive understanding of the interconnected nature of society and the democratic and inclusive development of ML systems that are aligned with real-world complexities and their diverse requirements.


Cited By

  • (2024) Position. Proceedings of the 41st International Conference on Machine Learning, 10.5555/3692070.3692594, 13072–13085. Online publication date: 21-Jul-2024.
  • (2024) Breaking Barriers: Overcoming Resistance to Curriculum Indigenisation. Proceedings of the 2024 on ACM Virtual Global Computing Education Conference V. 1, 10.1145/3649165.3690104, 53–59. Online publication date: 5-Dec-2024.

Published In

EAAMO '23: Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization
October 2023
498 pages
ISBN:9798400703812
DOI:10.1145/3617694
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. Democratic advancements
  2. FATE
  3. Inclusive and accessible ML
  4. Responsible AI

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

EAAMO '23

Article Metrics

  • Downloads (last 12 months): 34
  • Downloads (last 6 weeks): 1
Reflects downloads up to 03 Feb 2025
