DOI: 10.1145/3617694.3623227
Research article · Open access

A Classification of Feedback Loops and Their Relation to Biases in Automated Decision-Making Systems

Published: 30 October 2023

Abstract

Prediction-based decision-making systems are becoming increasingly prevalent in various domains. Previous studies have demonstrated that such systems are vulnerable to runaway feedback loops that exacerbate existing biases, e.g., when police are repeatedly sent back to the same neighborhoods regardless of the actual rate of criminal activity. In practice, automated decisions have dynamic feedback effects on the system itself (sometimes referred to in the ML literature as performative prediction) that persist over time, making it difficult to control the system's evolution through short-sighted design choices. While researchers have started proposing longer-term solutions to prevent adverse outcomes (such as bias against certain groups), these interventions largely depend on ad hoc modeling assumptions, and a rigorous theoretical understanding of the feedback dynamics in ML-based decision-making systems is currently missing. In this paper, we use the language of dynamical systems theory, a branch of applied mathematics that deals with the analysis of interconnected systems with dynamic behaviors, to rigorously classify the different types of feedback loops in the ML-based decision-making pipeline. By reviewing existing scholarly work, we show that this classification covers many examples discussed in the algorithmic fairness community, thereby providing a unifying and principled framework for studying feedback loops. Through qualitative analysis and a simulation example of recommender systems, we show which specific types of ML biases are affected by each type of feedback loop. We find that feedback loops in the ML-based decision-making pipeline can perpetuate, reinforce, or even reduce ML biases.
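To make the runaway dynamic concrete, the following minimal Python sketch (our illustration, not the paper's model or code) simulates the predictive-policing example from the abstract under two simplifying assumptions: each day a patrol is dispatched to the neighborhood with the most historically recorded incidents, and incidents are observed only where patrols go. The variable names (TRUE_CRIME_RATE, discovered) are hypothetical.

    import random

    # Hypothetical illustration of a runaway feedback loop in predictive
    # policing (not the paper's model). Both neighborhoods have the SAME
    # true crime rate, but patrols follow historical data and incidents
    # are only observed where patrols are present.

    TRUE_CRIME_RATE = [0.3, 0.3]  # ground truth: identical across neighborhoods
    discovered = [1, 1]           # seed counts of historically recorded incidents

    random.seed(0)
    for day in range(10_000):
        # Decision step: patrol the neighborhood whose past data looks "hotter".
        target = 0 if discovered[0] >= discovered[1] else 1
        # Feedback step: a new incident is recorded only in the patrolled area.
        if random.random() < TRUE_CRIME_RATE[target]:
            discovered[target] += 1

    print(discovered)
    # Neighborhood 0 wins the initial tie and then receives every patrol,
    # so its recorded count grows to roughly 3,000 while the other stays
    # at 1, even though the true crime rates are equal.

Because the decision rule feeds on data that the decisions themselves generate, the recorded statistics diverge from the (identical) ground truth; this closed loop between decisions and observations is exactly the kind of interconnection the paper analyzes with dynamical systems theory.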



Published In

EAAMO '23: Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization
October 2023
498 pages
ISBN: 9798400703812
DOI: 10.1145/3617694
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. automated decision-making
  2. bias
  3. dynamical systems theory
  4. feedback loops
  5. machine learning
  6. performative prediction
  7. sequential decision-making

Cited By

  • (2024) Exploring Generative AI as Personally Effective Decision-Making Tools. In Enhancing Automated Decision-Making Through AI, 451–492. https://doi.org/10.4018/979-8-3693-6230-3.ch014. Online publication date: 29-Nov-2024.
  • (2024) Harm Mitigation in Recommender Systems under User Preference Dynamics. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 255–265. https://doi.org/10.1145/3637528.3671925. Online publication date: 25-Aug-2024.
  • (2024) From the Fair Distribution of Predictions to the Fair Distribution of Social Goods: Evaluating the Impact of Fair Machine Learning on Long-Term Unemployment. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 1984–2006. https://doi.org/10.1145/3630106.3659020. Online publication date: 3-Jun-2024.
  • (2024) Toward a Systems Theory of Algorithms. IEEE Control Systems Letters 8, 1198–1210. https://doi.org/10.1109/LCSYS.2024.3406943. Online publication date: 2024.
  • (2024) Control Strategies for Recommendation Systems in Social Networks. IEEE Control Systems Letters 8, 634–639. https://doi.org/10.1109/LCSYS.2024.3400701. Online publication date: 2024.
  • (2024) Fairness and Bias in Robot Learning. Proceedings of the IEEE 112, 4, 305–330. https://doi.org/10.1109/JPROC.2024.3403898. Online publication date: Apr-2024.
  • (2024) Policy advice and best practices on bias and fairness in AI. Ethics and Information Technology 26, 2. https://doi.org/10.1007/s10676-024-09746-w. Online publication date: 29-Apr-2024.
  • (2024) Cognitive Digital Twins for Improving Security in IT-OT Enabled Healthcare Applications. In HCI for Cybersecurity, Privacy and Trust, 153–163. https://doi.org/10.1007/978-3-031-61382-1_10. Online publication date: 29-Jun-2024.
  • (2023) The Impact of Recommendation Systems on Opinion Dynamics: Microscopic Versus Macroscopic Effects. In 2023 62nd IEEE Conference on Decision and Control (CDC), 4824–4829. https://doi.org/10.1109/CDC49753.2023.10383957. Online publication date: 13-Dec-2023.
