The Societal Impacts of Algorithmic Decision-Making
September 2023

  • Author: Manish Raghavan
  • Publisher: Association for Computing Machinery, New York, NY, United States
  • ISBN: 979-8-4007-0861-9
  • Published: 08 September 2023
  • Pages: 366
  • Appears in: ACM Books
Abstract

This book demonstrates the need for, and the value of, interdisciplinary research in addressing important societal challenges associated with the widespread use of algorithmic decision-making. Algorithms are increasingly used to make decisions in domains such as criminal justice, medicine, and employment. While algorithmic tools have the potential to make decision-making more accurate, consistent, and transparent, they also pose serious challenges to societal interests: for example, they can perpetuate discrimination, cause representational harm, and deny opportunities.

The Societal Impacts of Algorithmic Decision-Making presents several contributions to the growing body of literature that seeks to respond to these challenges, drawing on techniques and insights from computer science, economics, and law. The author develops tools and frameworks to characterize the impacts of decision-making and incorporates models of behavior to reason about decision-making in complex environments. These technical insights are leveraged to deepen the qualitative understanding of the impacts of algorithms on problem domains including employment and lending.

The social harms of algorithmic decision-making are far from resolved. While this book offers no easy solutions, it provides actionable insights for those who seek to deploy algorithms responsibly. The research presented here will, it is hoped, contribute to broader efforts to safeguard societal values while still taking advantage of the promise of algorithmic decision-making.

References

  1. Y. Abbasi-Yadkori, D. Pál, and C. Szepesvári. 2011. Improved algorithms for linear stochastic bandits. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), Vol. 24. Curran Associates. Retrieved from https://proceedings.neurips.cc/paper_files/paper/2011/file/e1d5be1c7f2f456670de3d53c7b54f4a-Paper.pdf.Google ScholarGoogle Scholar
  2. R. Abebe, S. Barocas, J. Kleinberg, K. Levy, M. Raghavan, and D. G. Robinson. 2020. Roles for computing in social change. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. ACM, 252–260. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  3. A. Agarwal, D. Hsu, S. Kale, J. Langford, L. Li, and R. Schapire. 2014. Taming the monster: A fast and simple algorithm for contextual bandits. In Proceedings of the 31st International Conference on Machine Learning (ICML), Vol. 32. PMLR, 1638–1646. https://proceedings.mlr.press/v32/agarwalb14.html.Google ScholarGoogle Scholar
  4. A. Agarwal, S. Bird, M. Cozowicz, M. Dudik, J. Langford, L. Li, L. Hoang, D. Melamed, S. Sen, R. Schapire, and A. Slivkins. 2016. Multiworld Testing: A System for Experimentation, Learning, and Decision-Making. A white paper, https://github.com/Microsoft/mwt-ds/raw/master/images/MWT-WhitePaper.pdf.Google ScholarGoogle Scholar
  5. A. Agarwal, S. Bird, M. Cozowicz, L. Hoang, J. Langford, S. Lee, J. Li, D. Melamed, G. Oshri, O. Ribas, S. Sen, and A. Slivkins. 2017. Making contextual decisions with low technical debt. arXiv:1606.03966. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  6. I. Ajunwa. 2020. The paradox of automation as anti-bias intervention. Cardozo Law Rev. 41, 1671. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  7. I. Ajunwa. 2021. The auditing imperative for automated hiring. Harv. J. Law Technol. 34, 80. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  8. T. Alon, M. Dobson, A. D. Procaccia, I. Talgam-Cohen, and J. Tucker-Foltz. 2020. Multiagent evaluation mechanisms. In Proceedings of the Conference on Artificial Intelligence, Vol. 34. AAAI, 1774–1781. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  9. R. Alonso and N. Matouschek. 2008. Optimal delegation. Rev. Econ. Stud. 75, 1, 259–293. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  10. J. Angwin and J. Larson. 2022. Bias in criminal risk scores is mathematically inevitable, researchers say. Ethics of Data and Analytics. Auerbach Publications, 265–267.Google ScholarGoogle Scholar
  11. J. Angwin, J. Larson, S. Mattu, and L. Kirchner. 2022. Machine bias. Ethics of Data and Analytics. Auerbach Publications, 254–264.Google ScholarGoogle Scholar
  12. K. J. Arrow. 1963. Uncertainty and the welfare economics of medical care. Am. Econ. Rev. 53, 5, 941–973. http://www.jstor.org/stable/1812044.Google ScholarGoogle Scholar
  13. K. J. Arrow. 1968. The economics of moral hazard: Further comment. Am. Econ. Rev. 58, 3, 537–539. http://www.jstor.org/stable/1813786.Google ScholarGoogle Scholar
  14. Article 29 Data Protection Working Party. 2017. Guidelines on Automated Individual Decision-Making and Profiling for the Purposes of Regulation 2016/679. https://ec.europa.eu/newsroom/article29/item-detail.cfm?item_id=612053.Google ScholarGoogle Scholar
  15. P. Auer. 2002. Using confidence bounds for exploitation–exploration trade-offs. J. Mach. Learn. Res. 3, 397–422.Google ScholarGoogle Scholar
  16. P. Auer, N. Cesa-Bianchi, and P. Fischer. 2002. Finite-time analysis of the multiarmed bandit problem. Mach. Learn. 47, 2–3, 235–256. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  17. H. Azari Soufiani, D. C. Parkes, and L. Xia. 2012. Random utility theory for social choice. In Proceedings of the 25th International Conference on Neural Information Processing Systems (NIPS’12). Curran Associates, 126–134.Google ScholarGoogle Scholar
  18. H. Azari Soufiani, H. Diao, Z. Lai, and D. C. Parkes. 2013. Generalized random utility models with multiple types. In Proceedings of the Advances in Neural Information Processing Systems (NIPS 2013), Vol. 26. Curran Associates, 73–81.Google ScholarGoogle Scholar
  19. L. Baker, D. Weisberger, D. Diamond, M. Ward, and J. Naso. 2018. audit-AI. Retrieved from https://github.com/pymetrics/audit-ai.Google ScholarGoogle Scholar
  20. J. M. Balkin. 2015. Information fiduciaries and the First Amendment. Univ. Calif. Davis Law Rev. 49, 1183–1234.Google ScholarGoogle Scholar
  21. I. Ball. 2020. Scoring strategic agents. Working paper. arXiv:1909.01888. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  22. J. Bambauer and T. Zarsky. 2018. The algorithm game. Notre Dame Law Rev. 94, 1, 1–48.Google ScholarGoogle Scholar
  23. L. Baritz. 1960. The Servants of Power: A History of the Use of Social Science in American Industry. Wesleyan University Press.Google ScholarGoogle Scholar
  24. S. Barocas and A. D. Selbst. 2016. Big data’s disparate impact. Calif. Law Rev. 104, 671–732. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  25. S. Barocas, A. D. Selbst, and M. Raghavan. 2020. The hidden assumptions behind counterfactual explanations and principal reasons. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT* ’20). ACM, 80–89. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  26. L. F. Barrett, R. Adolphs, S. Marsella, A. M. Martinez, and S. D. Pollak. 2019. Emotional expressions reconsidered: Challenges to inferring emotion from human facial movements. Psychol. Sci. Public Interest 20, 1, 1–68. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  27. H. Bastani, M. Bayati, and K. Khosravi. 2020. Mostly exploration-free algorithms for contextual bandits. Manage. Sci. 67, 3, 1329–1349. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  28. Y. Bechavod, C. Jung, and Z. S. Wu. 2020. Metric-free individual fairness in online learning. In Proceedings of the 34th International Conference on Neural Information Processing Systems (NIPS’20). Curran Associates, 11214–11225.Google ScholarGoogle Scholar
  29. J. Beel, B. Gipp, and E. Wilde. 2009. Academic search engine optimization (ASEO) optimizing scholarly literature for Google Scholar & Co. J. Sch. Publ. 41, 2, 176–190. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  30. S. Ben-David, J. Blitzer, K. Crammer, A. Kulesza, F. Pereira, and J. W. Vaughan. 2010. A theory of learning from different domains. Mach. Learn. 79, 1–2, 151–175. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  31. M. Bendick, Jr. and A. P. Nunes. 2012. Developing the research basis for controlling bias in hiring. J. Soc. Issues 68, 2, 238–262. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  32. M. Bendick, Jr., C. W. Jackson, and J. H. Romero. 1997. Employment discrimination against older workers: An experimental study of hiring practices. J. Aging Soc. Policy 8, 4, 25–46. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  33. J. R. Bent. 2020. Is algorithmic affirmative action legal? Georgetown Law J. 108, 4, 803.Google ScholarGoogle Scholar
  34. D. Bergemann and S. Morris. 2019. Information design: A unified perspective. J. Econ. Lit. 57, 1, 44–95. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  35. R. Berk. 2016. A primer on fairness in criminal justice risk assessments. Criminologist 41, 6, 6–9.Google ScholarGoogle Scholar
  36. R. Berk, H. Heidari, S. Jabbari, M. Kearns, and A. Roth. 2018. Fairness in criminal justice risk assessments: The state of the art. Sociol Methods Res. 50, 1, 3–44. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  37. M. Bertrand and S. Mullainathan. 2004. Are Emily and Greg more employable than Lakisha and Jamal? A field experiment on labor market discrimination. Am. Econ. Rev. 94, 4, 991–1013. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  38. D. A. Biddle. 2008. Are the uniform guidelines outdated? Federal guidelines, professional standards, and validity generalization (VG). Ind. Organ. Psychol. 45 4, 17–23.Google ScholarGoogle Scholar
  39. A. Bietti, A. Agarwal, and J. Langford. 2018. A contextual bandit bake-off. J. Mach. Learn. Res. 22, 1, 5928–5976.Google ScholarGoogle Scholar
  40. S. Bird, S. Barocas, K. Crawford, F. Diaz, and H. Wallach. 2016. Exploring or Exploiting? Social and Ethical Implications of Autonomous Experimentation in AI. Available at SSRN: https://ssrn.com/abstract=2846909, also appeared at the Workshop on Fairness, Accountability, and Transparency in Machine Learning.Google ScholarGoogle Scholar
  41. K. P. Birman and F. B. Schneider. 2009. The monoculture risk put into context. IEEE Secur. Priv. 7, 1, 14–17. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  42. J. Blass. 2019. Algorithmic advertising discrimination. Northwest. Univ. Law Rev. 114, 2, 415–467.Google ScholarGoogle Scholar
  43. H. D. Block and J. Marschak. 1960. Random orderings and stochastic theories of responses. In I. Olkin, S. G. Ghurye, W. Hoeffding, W. G. Madow, and H. B. Mann (Eds.), Contributions to Probability and Statistics. Stanford University Press, 97–132.Google ScholarGoogle Scholar
  44. M. Bogen and A. Rieke. 2018. Help Wanted: An Exploration of Hiring Algorithms, Equity, and Bias. Technical report, Upturn. https://www.upturn.org/static/reports/2018/hiring-algorithms/files/Upturn%20–%20Help%20Wanted%20-%20An%20Exploration%20of%20Hiring%20Algorithms,%20Equity%20and%20Bias.pdf.Google ScholarGoogle Scholar
  45. I. Bohnet, A. van Geen, and M. Bazerman. 2016. When performance trumps gender bias: Joint vs. separate evaluation. Manage. Sci. 62, 5, 1225–1234. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  46. R. Boleslavsky and K. Kim. 2018. Bayesian Persuasion and Moral Hazard. Available at SSRN 2913669. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  47. S. Bornstein. 2018. Antidiscriminatory algorithms. Ala. Law Rev. 70, 2, 519.Google ScholarGoogle Scholar
  48. D. Braess. 1968. Über ein paradoxon aus der verkehrsplanung. Unternehmensforschung 12, 1, 258–268. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  49. M. Braverman and E. Mossel. 2009. Sorting from noisy information. arXiv:0910.1191. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  50. M. Brkan. 2019. Do algorithms rule the world? Algorithmic decision-making and data protection in the framework of the GDPR and beyond. Int. J. Law Inf. Technol. 27, 2, 91–121. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  51. M. Brückner and T. Scheffer. 2011. Stackelberg games for adversarial prediction problems. In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD ’11). ACM, 547–555. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  52. S. Bubeck and N. Cesa-Bianchi. 2012. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Found. Trends Mach. Learn. 5, 1, 1–122. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  53. J. Buolamwini and T. Gebru. 2018. Gender shades: Intersectional accuracy disparities in commercial gender classification. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency. PMLR, 77–91.Google ScholarGoogle Scholar
  54. T. Calders and S. Verwer. 2012. Three naive Bayes approaches for discrimination-free classification. Data Min. Knowl. Discov. 21, 277–292. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  55. California State Legislature. 1959. Fair Employment and Housing Act.Google ScholarGoogle Scholar
  56. G. Çapan, Ö. Bozal, Ý. Gündoğdu, and A. T. Cemgil. 2020. Towards fair personalization by avoiding feedback loops. arXiv:2012.12862. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  57. G. Carroll. 2015. Robustness and linear contracts. Am. Econ. Rev. 105, 2, 536–563. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  58. R. Caruana, S. Lundberg, M. T. Ribeiro, H. Nori, and S. Jenkins. 2020. Intelligible and explainable machine learning: Best practices and practical challenges. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. ACM, 3511–3512. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  59. D. V. Carvalho, E. M. Pereira, and J. S. Cardoso. 2019. Machine learning interpretability: A survey on methods and metrics. Electronics 8, 8, 832. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  60. B. Casey, A. Farhangi, and R. Vogl. 2019. Rethinking explainable machines: The GDPR’s “right to explanation” debate and the rise of algorithmic audits in enterprise. Berkeley Technol. Law J. 34, 1, 143–188. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  61. M. Cavicchia. 2015. How to fight implicit bias? With conscious thought, diversity expert tells NABE. American Bar Association: Bar Leader 40, 1.Google ScholarGoogle Scholar
  62. L. E. Celis and N. K. Vishnoi. 2017. Fair personalization. arXiv:1707.02260, also appeared at the Workshop on Fairness, Accountability, and Transparency in Machine Learning. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  63. L. E. Celis, D. Straszak, and N. K. Vishnoi. 2018. Ranking with fairness constraints. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018), July 9–13, 2018, Prague, Czech Republic. Schloss Dagstuhl–Leibniz-Zentrum für Informatik, 28:1–28:15. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  64. T. Chamorro-Prezumic and R. Akhtar. 2019. Should companies use AI to assess job candidates. Harv. Bus. Rev. https://hbr.org/2019/05/should-companies-use-ai-to-assess-job-candidates.Google ScholarGoogle Scholar
  65. T. Chamorro-Premuzic, D. Winsborough, R. A. Sherman, and R. Hogan. 2016. New talent signals: Shiny new objects or a brave new world? Ind. Organ. Psychol. 9, 3, 621–640. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  66. V. Chandrasekaran, B. Recht, P. A. Parrilo, and A. S. Willsky. 2012. The convex geometry of linear inverse problems. Found. Comput. Math. 12, 6, 805–849. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  67. Y.-K. Che and J. Hörner. 2018. Recommender systems as mechanisms for social learning. Q. J. Econ. 133, 2, 871–925. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  68. Y. Chen, C. Podimata, A. D. Procaccia, and N. Shah. 2018. Strategyproof linear regression in high dimensions. In Proceedings of the 2018 ACM Conference on Economics and Computation (EC ’18). ACM, 9–26. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  69. S. N. S. Cheung. 1969. The Theory of Share Tenancy. Arcadia Press.Google ScholarGoogle Scholar
  70. S. Chiappa. 2019. Path-specific counterfactual fairness. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33. AAAI Press, 7801–7808. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  71. A. Chouldechova. 2017. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data 5, 2, 153–163. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  72. A. Chouldechova, D. Benavides-Prado, O. Fialko, and R. Vaithianathan. 2018. A case study of algorithm-assisted decision making in child maltreatment hotline screening decisions. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency, Vol. 18. PMLR, 134–148.Google ScholarGoogle Scholar
  73. W. Chu, L. Li, L. Reyzin, and R. E. Schapire. 2011. Contextual bandits with linear payoff functions. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics (AISTATS), Vol. 15. PMLR, 208–214.Google ScholarGoogle Scholar
  74. D. K. Citron and F. Pasquale. 2014. The scored society: Due process for automated predictions. Wash. Law Rev. 89, 1.Google ScholarGoogle Scholar
  75. A. Clauset, C. R. Shalizi, and M. E. J. Newman. 2009. Power-law distributions in empirical data. SIAM Rev. 51, 4, 661–703. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  76. S. Coate and G. C. Loury. 1993. Will affirmative-action policies eliminate negative stereotypes? Am. Econ. Rev. 83, 5, 1220–1240.Google ScholarGoogle Scholar
  77. I. N. Cofone. 2018. Algorithmic discrimination is an information problem. Hastings Law J. 70, 1389.Google ScholarGoogle Scholar
  78. J. E. Cohen. 2012a. Configuring the Networked Self: Law, Code, and the Play of Everyday Practice. Yale University Press.Google ScholarGoogle Scholar
  79. J. E. Cohen. 2012b. What privacy is for. Harv. Law Rev. 126, 7, 1904–1933.Google ScholarGoogle Scholar
  80. R. M. Cohn. 1979a. Statistical laws and the use of statistics in law: A rejoinder to Professor Shoben. Indiana Law J. 55, 3, 537.Google ScholarGoogle Scholar
  81. R. M. Cohn. 1979b. On the use of statistics in employment discrimination cases. Indiana Law J. 55, 3, 493.Google ScholarGoogle Scholar
  82. B. W. Collins. 2007. Tackling unconscious bias in hiring practices: The plight of the Rooney Rule. N. Y. Univ. Law Rev. 82, 3, 870.Google ScholarGoogle Scholar
  83. J. D. Cook. 2009. Upper and Lower Bounds for the Normal Distribution Function. https://www.johndcook.com/blog/norm-dist-bounds/.Google ScholarGoogle Scholar
  84. S. Corbett-Davies, E. Pierson, A. Feller, S. Goel, and A. Huq. 2017. Algorithmic decision making and the cost of fairness. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD ’17). ACM, 797–806. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  85. Covington and Burling. 13 June. 2017. Recommendations to Uber. https://conferences.law.stanford.edu/vcs/wp-content/uploads/sites/11/2017/11/Uber-Report.pdf. Archived version: http://web.archive.org/web/20230607183413/https://conferences.law.stanford.edu/vcs/wp-content/uploads/sites/11/2017/11/Uber-Report.pdf.Google ScholarGoogle Scholar
  86. B. Cowgill. 2018. Bias and productivity in humans and machines. Columbia Business School, Columbia University, 29. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  87. H. Cravens. 1978. The Triumph of Evolution: The Heredity–Environment Controversy, 1900–1941. Johns Hopkins University Press.Google ScholarGoogle Scholar
  88. K. Crawford. 2017. The trouble with bias. In Proceedings of the Conference on Neural Information Processing Systems, invited speaker. https://blog.revolutionanalytics.com/2017/12/the-trouble-with-bias-by-kate-crawford.html.Google ScholarGoogle Scholar
  89. C. S. Crowson, E. J. Atkinson, and T. M. Therneau. 2016. Assessing calibration of prognostic risk scores. Stat. Methods Med. Res. 25, 4, 1692–1706. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  90. R. Cummings, S. Ioannidis, and K. Ligett. 2015. Truthful linear regression. In Proceedings of the 28th Conference on Learning Theory, COLT 2015, Vol. 40. PMLR, 448–483.Google ScholarGoogle Scholar
  91. N. N. Dalvi, P. M. Domingos, Mausam, S. K. Sanghai, and D. Verma. 2004. Adversarial classification. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 99–108. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  92. H. E. Daniels. 1950. Rank correlation and population models. J. R. Stat. Soc. Ser. B (Methodological) 12, 2, 171–181. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  93. S. Das and Z. Li. 2014. The role of common and private signals in two-sided matching with interviews. In Proceedings of the International Conference on Web and Internet Economics. Springer, Vol. 8877: Lecture Notes in Computer Science. Springer, 492–497. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  94. S. Dasgupta and A. Gupta. 2003. An elementary proof of a theorem of Johnson and Lindenstrauss. Random Struct. Algorithms 22, 1, 60–65. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  95. A. Datta, M. C. Tschantz, and A. Datta. 2015. Automated experiments on ad privacy settings: A tale of opacity, choice, and discrimination. In Proc. Priv. Enhanc. Technol. 1, 92–112. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  96. H. A. David and H. N. Nagaraja. 2005. Basic Distribution Theory. John Wiley & Sons, 9–32. ISBN 9780471722168.Google ScholarGoogle Scholar
  97. H. Davis. 2006. Search Engine Optimization. O’Reilly Media.Google ScholarGoogle Scholar
  98. A. P. Dawid. 1982. The well-calibrated Bayesian. J. Am. Stat. Assoc. 77, 379, 605–610. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  99. M. De-Arteaga, R. Fogliato, and A. Chouldechova. 2020. A case for humans-in-the-loop: Decisions in the presence of erroneous algorithmic scores. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI ’20). ACM, 1–12. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  100. M. De-Arteaga, A. Dubrawski, and A. Chouldechova. 2021. Leveraging expert consistency to improve algorithmic decision support. arXiv:2101.09648. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  101. O. Dekel, F. Fischer, and A. D. Procaccia. 2010. Incentive compatible regression learning. J. Comput. Syst. Sci. 76, 8, 759–777. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  102. A. Dhurandhar, P.-Y. Chen, R. Luss, C.-C. Tu, P. Ting, K. Shanmugam, and P. Das. 2018. Explanations based on the missing: Towards contrastive explanations with pertinent negatives. In Advances in Neural Information Processing Systems, 592–603.Google ScholarGoogle Scholar
  103. W. Dieterich, C. Mendoza, and T. Brennan. July. 2016. COMPAS Risk Scales: Demonstrating Accuracy Equity and Predictive Parity. Technical Report. Northpointe. https://go.volarisgroup.com/rs/430-MBX-989/images/%20ProPublica_Commentary_Final_070616.pdf.Google ScholarGoogle Scholar
  104. J. Dong, A. Roth, Z. Schutzman, B. Waggoner, and Z. S. Wu. 2018. Strategic classification from revealed preferences. In Proceedings of the 2018 ACM Conference on Economics and Computation (EC ’18). ACM, 55–70. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  105. P. H. DuBois. 1970. A History of Psychological Testing. Allyn and Bacon, Boston.Google ScholarGoogle Scholar
  106. C. DuBois. 2017. What the NFL can teach Congress about hiring more diverse staffs. FiveThirtyEight.Google ScholarGoogle Scholar
  107. C. DuBois and D. W. Schanzenbach. 2017. The Effect of Court-Ordered Hiring Guidelines on Teacher Composition and Student Achievement. Technical Report. National Bureau of Economic Research.Google ScholarGoogle Scholar
  108. M. D. Dunnette and W. C. Borman. 1979. Personnel selection and classification systems. Annu. Rev. Psychol. 30, 1, 477–525. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  109. C. Dwork, M. Hardt, T. Pitassi, O. Reingold, and R. Zemel. 2012. Fairness through awareness. In Proceedings of the 3rd Innovations in Theoretical Computer Science (ITCS ’12). ACM, 214–226. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  110. H. Edwards and A. J. Storkey. 2016. Censoring representations with an adversary. In Proceedings of the 4th International Conference on Learning Representations (ICLR 2016), San Juan, Puerto Rico, May 2–4, 2016, Conference Track Proceedings.Google ScholarGoogle Scholar
  111. L. Edwards and M. Veale. 2017. Slave to the algorithm: Why a right to an explanation is probably not the remedy you are looking for. Duke Law Technol. Rev. 16, 18–74.Google ScholarGoogle Scholar
  112. Equal Credit Opportunity Act, Public Law 93-495. 1974. Codified at 15 U.S.C. § 1691, et seq.Google ScholarGoogle Scholar
  113. Equal Employment Opportunity Commission. 1978. Uniform guidelines on employee selection procedures. Fed. Regist. 43, 166, 38290–38315.Google ScholarGoogle Scholar
  114. V. Eubanks. 2018a. Automating bias. Sci. Am. 319, 5, 68–71. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  115. V. Eubanks. 2018b. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin’s Press.Google ScholarGoogle ScholarDigital LibraryDigital Library
  116. Executive Office of the President. May. 2016. Big Data: A Report on Algorithmic Systems, Opportunity, and Civil Rights. Technical Report.Google ScholarGoogle Scholar
  117. Fair Credit Reporting Act, Public Law 91-508. 1970. Codified at 15 U.S.C. § 1681, et seq.Google ScholarGoogle Scholar
  118. Federal Register. 1985. 50 Fed. Reg. 10915.Google ScholarGoogle Scholar
  119. U. Feige, P. Raghavan, D. Peleg, and E. Upfal. 1994. Computing with noisy information. SIAM J. Comput. 23, 5, 1001–1018. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  120. M. Feldman, S. A. Friedler, J. Moeller, C. Scheidegger, and S. Venkatasubramanian. 2015. Certifying and removing disparate impact. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD ’15). ACM, 259–268. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  121. J. Finocchiaro, R. Maio, F. Monachou, G. K. Patro, M. Raghavan, A.-A. Stoica, and S. Tsirtsis. 2021. Bridging machine learning and mechanism design towards algorithmic fairness. In Proceedings of the 2021 Conference on Fairness, Accountability, and Transparency (FACCT ’21). ACM, 489–503. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  122. A. Flores, C. Lowenkamp, and K. Bechtel. September. 2016. False Positives, False Negatives, and False Analyses: A Rejoinder to “Machine Bias: There’s Software used Across the Country to Predict Future Criminals. and it’s Biased Against Blacks.” Technical Report, Crime & Justice Institute. http://www.crj.org/assets/2017/07/9_Machine_bias_rejoinder.pdf.Google ScholarGoogle Scholar
  123. Y.-F. Fong and J. Li. 2016. Information revelation in relational contracts. Rev. Econ. Stud. 84, 1, 277–299. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  124. D. P. Foster and R. V. Vohra. 1998. Asymptotic calibration. Biometrika 85, 2, 379–390. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  125. D. Foust and A. Pressman. 2008. Credit scores: Not-so-magic numbers. Bus. Week 7.Google ScholarGoogle Scholar
  126. R. H. Frank. 2000. Why is cost–benefit analysis so controversial? J. Legal Stud. 29, S2, 913–930. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  127. P. Frazier, D. Kempe, J. M. Kleinberg, and R. Kleinberg. 2014. Incentivizing exploration. In Proceedings of the Fifteenth ACM Conference on Economics and Computation (EC ‘14). ACM, 5–22. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  128. J. Friedman, T. Hastie, and R. Tibshirani. 2001. The Elements of Statistical Learning. Springer, New York.Google ScholarGoogle Scholar
  129. J. Fruchterman and J. Melllea. 2018. Expanding Employment Success for People with Disabilities. Technical Report. Benetech.Google ScholarGoogle Scholar
  130. R. G. Fryer, Jr. and G. C. Loury. 2013. Valuing diversity. J. Polit. Econ. 121, 4, 747–774. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  131. Q. Fu and J. Lu. 2012. Micro foundations of multi-prize lottery contests: A perspective of noisy performance ranking. Soc. Choice Welf. 38, 3, 497–517. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  132. H. N. Garb. 1997. Race bias, social class bias, and gender bias in clinical judgment. Clin. Psychol. Sci. Pract. 4, 2, 99–120. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  133. S. S. Garr and C. Jackson. 2019. Diversity & Inclusion Technology: The Rise of a Transformative Market. Technical Report. RedThread Research. https://info.mercer.com/rs/521-DEV-513/images/Mercer_DI_Report_Digital.pdf.Google ScholarGoogle Scholar
  134. T. Gebru, J. Morgenstern, B. Vecchione, J. W. Vaughan, H. Wallach, H. Daumé III, and K. Crawford. 2021. Datasheets for datasets. Commun. ACM 64, 12, 86–92. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  135. P. W. Gerhardt. 1916. Scientific selection of employees. Electric Railway J. 47, 21, 943–945.Google ScholarGoogle Scholar
  136. S. C. Geyik, S. Ambler, and K. Kenthapadi. 2019. Fairness-aware ranking in search & recommendation systems with application to LinkedIn talent search. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD ’19). ACM, 2221–2231. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  137. T. B. Gillis and J. L. Spiess. 2019. Big data and discrimination. Univ. Chic. Law Rev. 86, 2, 459–488.Google ScholarGoogle Scholar
  138. L. H. Gilpin, D. Bau, B. Z. Yuan, A. Bajwa, M. Specter, and L. Kagal. 2018. Explaining explanations: An overview of interpretability of machine learning. In Proceedings of the 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA). IEEE, 80–89. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  139. K. Goddard, A. Roudsari, and J. C. Wyatt. 2012. Automation bias: A systematic review of frequency, effect mediators, and mitigators. J. Am. Med. Inform. Assoc. 19, 1, 121–127. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  140. G. Goh, A. Cotter, M. Gupta, and M. P. Friedlander. 2016. Satisfying real-world goals with dataset constraints. In Advances in Neural Information Processing Systems, Vol. 29. Curran Associates, 2415–2423.Google ScholarGoogle Scholar
  141. A. Gong. July. 2016. Ethics for Powerful Algorithms (1 of 4). Medium. https://medium.com/@AbeGong/ethics-for-powerful-algorithms-1-of-3-a060054efd84#.dhsd2ut3i.Google ScholarGoogle Scholar
  142. R. M. Grath, L. Costabello, C. L. Van, P. Sweeney, F. Kamiab, Z. Shen, and F. Lecue. 2018. Interpretable credit application predictions with counterfactual explanations. arXiv:1811.05245. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  143. B. Green. 2021. Data science as political action: Grounding data science in a politics of justice. J. Soc. Comput. 2, 249–265. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  144. A. R. Green, D. R. Carney, D. J. Pallin, L. H. Ngo, K. L. Raymond, L. I. Iezzoni, and M. R. Banaji. 2007. Implicit bias among physicians and its prediction of thrombolysis decisions for black and white patients. J. Gen. Intern. Med. 22, 9, 1231–1238. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  145. A. G. Greenwald and M. R. Banaji. 1995. Implicit social cognition: Attitudes, self-esteem, and stereotypes. Psychol. Rev. 102, 1, 4–27. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  146. A. G. Greenwald and L. H. Krieger. 2006. Implicit bias: Scientific foundations. Calif. Law Rev. 94, 4, 945–967. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  147. J. Grimmett. 2017. Veterinary practitioners—Personal characteristics and professional longevity. VetScript 30, 11, 54–57.Google ScholarGoogle Scholar
  148. S. J. Grossman and O. D. Hart. 1983. An analysis of the principal–agent problem. Econometrica 51, 1, 7–45. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  149. C. Guo, G. Pleiss, Y. Sun, and K. Q. Weinberger. 2017. On calibration of modern neural networks. In Proceedings of the 34th International Conference on Machine Learning, Vol. 70. PMLR, 1321–1330.Google ScholarGoogle Scholar
  150. A. Guo, E. Kamar, J. W. Vaughan, H. Wallach, and M. R. Morris. October. 2019. Toward fairness in AI for people with disabilities: A research roadmap. ACM SIGACCESS Access. Comput. 125. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  151. R. A. Guzzo, A. A. Fink, E. King, S. Tonidandel, and R. S. Landis. 2015. Big data recommendations for industrial–organizational psychology: Are we in whoville? Ind. Organ. Psychol. 8, 4, 515–520. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  152. N. Haghtalab, N. Immorlica, B. Lucier, and J. Z. Wang. 2020. Maximizing welfare with incentive-aware evaluation mechanisms. In Proceedings of the 29th International Joint Conference on Artificial Intelligence. IJCAI, 160–166Google ScholarGoogle Scholar
  153. S. Hajian and J. Domingo-Ferrer. 2013. A methodology for direct and indirect discrimination prevention in data mining. IEEE Trans. Knowl. Data Eng. 25, 7, 1445–1459. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  154. P. Hall, W. Phan, and S. Ambati. 2017. Ideas on interpreting machine learning. O’Reilly.Google ScholarGoogle Scholar
  155. C. Haney. 1982. Employment tests and employment discrimination: A dissenting psychological opinion. Ind. Relat. Law J. 5, 1, 1. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  156. M. Hardt, N. Megiddo, C. H. Papadimitriou, and M. Wootters. 2016a. Strategic classification. In Proceedings of the 2016 ACM Conference on Innovations in Theoretical Computer Science. ACM, 111–122. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  157. M. Hardt, E. Price, and S. Nathan. 2016b. Equality of opportunity in supervised learning. In Advances in Neural Information Processing Systems, Vol. 29. Curran Associates, 3323–3331.Google ScholarGoogle Scholar
  158. Z. Harned and H. Wallach. 2019. Stretching human laws to apply to machines: The dangers of a “colorblind” computer. Fla. State Univ. Law Rev. 47, 617.Google ScholarGoogle Scholar
  159. K. D. Harris, P. Murray, and E. Warren. 2018. Letter to U.S. Equal Employment Opportunity Commission. https://www.scribd.com/embeds/388920670/content#from_embed.Google ScholarGoogle Scholar
  160. D. Hellman. 2020. Measuring algorithmic fairness. Va. Law Rev. 106, 4, 811–866.Google ScholarGoogle Scholar
  161. L. A. Hendricks, R. Hu, T. Darrell, and Z. Akata. 2018. Generating counterfactual explanations with natural language. arXiv:1806.09809. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  162. B. E. Hermalin and M. L. Katz. 1991. Moral hazard and verifiability: The effects of renegotiation in agency. Econometrica 59, 6, 1735–1753. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  163. M. Hildebrandt. 2006. Profiling: From data to knowledge. Datenschutz und Datensicherheit-DuD 30, 9, 548–552. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  164. B. Holmström. 1999. Managerial incentive problems: A dynamic perspective. Rev. Econ. Stud. 66, 1, 169–182. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  165. B. Holmström and P. Milgrom. 1987. Aggregation and linearity in the provision of intertemporal incentives. Econometrica 55, 2, 303–328. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  166. B. Holmström and P. Milgrom. 1991. Multitask principal–agent analyses: Incentive contracts, asset ownership, and job design. J. Law Econ. Org. 7, 24–52. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  167. J. Hörner and N. S. Lambert. 2020. Motivational ratings. Rev. Econ. Stud. 88, 4, 1892–1935. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  168. K. Houser. 2019. Can AI solve the diversity problem in the tech industry? Mitigating noise and bias in employment decision-making. Stanford Technol. Law Rev. 22, 290.Google ScholarGoogle Scholar
  169. L. Hu and Y. Chen. 2018. A short-term intervention for long-term fairness in the labor market. In Proceedings of the 2018 World Wide Web Conference on World Wide Web, WWW 2018. IW3C2, 1389–1398. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  170. L. Hu, N. Immorlica, and J. W. Vaughan. 2019. The disparate effects of strategic manipulation. In Proceedings of the Conference on Fairness, Accountability, and Transparency, FAT* 2019. ACM, 259–268. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  171. A. E. Hurley-Hanson and C. M. Giannantonio (Eds.). 2016. Journal of Business Management. Autism in the Workplace, Vol. 22. Chapman University Argyros School of Business and Economics. https://www.chapman.edu/business/_files/journals-and-essays/jbm-editions/jbm-vol-22-no-1-autism-in-the-workplace.pdf.Google ScholarGoogle Scholar
  172. B. Hutchinson and M. Mitchell. 2019. 50 years of test (un)fairness: Lessons for machine learning. In Proceedings of the Conference on Fairness, Accountability, and Transparency. ACM, 49–58. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  173. Illinois General Assembly. 2019. Artificial Intelligence Video Interview Act.Google ScholarGoogle Scholar
  174. C. Ilvento. 2020. Metric learning for individual fairness. In 1st Symposium on Foundations of Responsible Computing, FORC 2020, Vol. 156. Schloss Dagstuhl–Leibniz-Zentrum fur Informatik, 2:1–2:11. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  175. S. Janson. 2018. Tail bounds for sums of geometric and exponential variables. Stat. Probab. Lett. 135, 1–6. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  176. J. Jarrett and S. Croft. 2018. The Science Behind the Koru Model of Predictive Hiring for Fit. Technical Report. Koru.Google ScholarGoogle Scholar
  177. M. C. Jensen and W. H. Meckling. 1976. Theory of the firm: Managerial behavior, agency costs and ownership structure. J. Financ. Econ. 3, 4, 305–360. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  178. H. Joe. 2000. Inequalities for random utility models, with applications to ranking and subset choice data. Methodol. Comput. Appl. Probab. 2, 4, 359–372. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  179. S. K. Johnson, D. R. Hekman, and E. T. Chan. April. 2016. If there’s only one woman in your candidate pool, there’s statistically no chance she’ll be hired. Harv. Bus. Rev. https://hbr.org/2016/04/if-theres-only-one-woman-in-your-candidate-pool-theres-statistically-no-chance-shell-be-hired.Google ScholarGoogle Scholar
  180. C. Jolls and C. R. Sunstein. 2006. The law of implicit bias. Calif. Law Rev. 94, 969–996. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  181. M. Joseph, M. Kearns, J. H. Morgenstern, and A. Roth. 2016. Fairness in learning: Classic and contextual bandits. In Advances in Neural Information Processing Systems, Vol. 29. Curran Associates. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  182. D. Kahneman and A. Tversky. 1977. Intuitive Prediction: Biases and Corrective Procedures. Technical Report. Decisions and Designs, Inc., Mclean, VA.Google ScholarGoogle Scholar
  183. E. Kamenica. 2019. Bayesian persuasion and information design. Annu. Rev. Econ. 11, 249–272. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  184. E. Kamenica and M. Gentzkow. 2011. Bayesian persuasion. Am. Econ. Rev. 101, 6, 2590–2615. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  185. M. E. Kaminski. 2019. The right to explanation, explained. Berkeley Techno. Law J. 34, 1, 189–218. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  186. F. Kamiran and T. Calders. 2009. Classifying without discriminating. In 2nd International Conference on Computer Control and Communication. IEEE, 1–6. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  187. T. Kamishima, S. Akaho, H. Asoh, and J. Sakuma. 2012. Fairness-aware classifier with prejudice remover regularizer. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Springer, 35–50. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  188. S. Kannan, J. H. Morgenstern, A. Roth, B. Waggoner, and Z. S. Wu. 2018. A smoothed analysis of the greedy algorithm for the linear contextual bandit problem. In Advances in Neural Information Processing Systems, Vol. 31. Curran Associates, 2231–2241. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  189. A.-H. Karimi, G. Barthe, B. Balle, and I. Valera. 2020. Model-agnostic counterfactual explanations for consequential decisions. In Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics, AISTATS 2020, Vol. 108: Proceedings of Machine Learning Research. PMLR, 895–905. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  190. M. Kearns, A. Roth, and Z. S. Wu. 2017. Meritocratic fairness for cross-population selection. In Proceedings of the 34th International Conference on Machine Learning, Vol. 70. PMLR, 1828–1836.Google ScholarGoogle Scholar
  191. W. F. Kemble. 1916. Testing the fitness of your employees. Ind. Manage. 52, 149–164Google ScholarGoogle Scholar
  192. M. G. Kendall. 1938. A new measure of rank correlation. Biometrika 30, 1/2, 81–93. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  193. S. Kerr. 1975. On the folly of rewarding A, while hoping for B. Acad. Manage. J. 18, 4, 769–783. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  194. L. M. Khan and D. E. Pozen. 2019. A skeptical view of information fiduciaries. Harv. Law Rev. 133, 2, 497–541.Google ScholarGoogle Scholar
  195. M. P. Kim, A. Korolova, G. N. Rothblum, and G. Yona. 2020. Preference-informed fairness. In 11th Innovations in Theoretical Computer Science Conference, Vol. 151. Schloss Dagstuhl–Leibniz-Zentrum für Informatik, 16:1–16:23.Google ScholarGoogle Scholar
  196. P. T. Kim. 2016. Data-driven discrimination at work. William Mary Law Rev. 58, 3, 857–936.Google ScholarGoogle Scholar
  197. P. T. Kim. 2017. Auditing algorithms for discrimination. Univ. Pa. Law Rev. Online 166, 1, 189.Google ScholarGoogle Scholar
  198. P. T. Kim. 2018. Big data and artificial intelligence: New challenges for workplace equality. Univ. Louisv. Law Rev. 57, 2, 313–328.Google ScholarGoogle Scholar
  199. P. T. Kim. 2020. Manipulating opportunity. Va. Law Rev. 106, 4, 867–935.Google ScholarGoogle Scholar
  200. B. Kiviat. 2019. Prediction and the Moral Order: Contesting Fairness in Consumer Data Capitalism. Ph.D. thesis. Harvard University.Google ScholarGoogle Scholar
  201. R. F. Kizilcec and H. Lee. 2022. Algorithmic fairness in education. In The Ethics in Artificial Intelligence in Education. Routledge. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  202. J. Kleinberg and M. Raghavan. 2018. Selection problems in the presence of implicit bias. In 9th Innovations in Theoretical Computer Science Conference, Vol. 94. Schloss Dagstuhl–Leibniz-Zentrum fur Informatik, 33:1–33:17. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  203. J. Kleinberg and M. Raghavan. 2020. How do classifiers induce agents to invest effort strategically? ACM Trans. Econ. Comput. 8, 4, 1–23. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  204. J. Kleinberg and M. Raghavan. 2021. Algorithmic monoculture and social welfare. Proc. Natl. Acad. Sci. U. S. A. 118, 22, e2018340118. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  205. R. Kleinberg, A. Niculescu-Mizil, and Y. Sharma. 2010. Regret bounds for sleeping experts and bandits. Mach. Learn. 80, 245–272. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  206. J. Kleinberg, S. Mullainathan, and M. Raghavan. 2017. Inherent trade-offs in the fair determination of risk scores. In Innovations in Theoretical Computer Science, Vol. 67. Schloss Dagstuhl–Leibniz-Zentrum fur Informatik, 43:1–43:23.Google ScholarGoogle Scholar
  207. J. Kleinberg, H. Lakkaraju, J. Leskovec, J. Ludwig, and S. Mullainathan. 2018. Human decisions and machine predictions. Q. J. Econ. 133, 1, 237–293. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  208. J. Kleinberg, J. Ludwig, S. Mullainathan, and C. R. Sunstein. 2019. Discrimination in the age of algorithms. J. Leg. Anal. 10, 113–174. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  209. A. Koenecke, A. Nam, E. Lake, J. Nudell, M. Quartey, Z. Mengesha, C. Toups, J. R. Rickford, D. Jurafsky, and S. Goel. 2020. Racial disparities in automated speech recognition. Proc. Natl. Acad. Sci. U. S. A. 117, 14, 7684–7689. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  210. D. M. Koretz. 2008. Measuring Up. Harvard University Press.Google ScholarGoogle Scholar
  211. D. Koretz, R. Linn, S. Dunbar, and L. Shepard. 1991. The effects of high-stakes testing on achievement: Preliminary findings about generalization across tests. In Annual Meetings of the American Educational Research Association and the National Council on Measurement in Education.Google ScholarGoogle Scholar
  212. R. S. S. Kramer and R. Ward. 2010. Internal facial features are signals of personality and health. Q. J. Exp. Psychol. 63, 11, 2273–2287. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  213. I. Kremer, Y. Mansour, and M. Perry. 2014. Implementing the “wisdom of the crowd.” J. Polit. Econ. 122, 5, 988–1012. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  214. J. A. Kroll, J. Huey, S. Barocas, E. W. Felten, J. R. Reidenberg, D. G. Robinson, and H. Yu. 2016. Accountable algorithms. Univ. Pa. Law Rev. 165, 3, 633.Google ScholarGoogle Scholar
  215. M. J. Kusner, J. R. Loftus, C. Russell, and R. Silva. 2017. Counterfactual fairness. In Advances in Neural Information Processing Systems, Vol. 30. Curran Associates.Google ScholarGoogle Scholar
  216. J.-J. Laffont and D. Martimort. 2009. The Theory of Incentives: The Principal–Agent Model. Princeton University Press.Google ScholarGoogle Scholar
  217. A. Lambrecht and C. Tucker. 2019. Algorithmic bias? An empirical study of apparent gender-based discrimination in the display of STEM career ads. Manage. Sci. 65, 7, 2966–2981. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  218. J. Langford and T. Zhang. 2007. The epoch-greedy algorithm for contextual multi-armed bandits. In Advances in Neural Information Processing Systems (NeurIPS). Curran Associates, 817–824.Google ScholarGoogle Scholar
  219. J. Larson, S. Mattu, L. Kirchner, and J. Angwin. May. 2016. How we analyzed the COMPAS recidivism algorithm. ProPublica. https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm.Google ScholarGoogle Scholar
  220. L. Li, W. Chu, J. Langford, and R. E. Schapire. 2010. A contextual-bandit approach to personalized news article recommendation. In Proceedings of the 19th International Conference on World Wide Web (WWW ’10). ACM, 661–670. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  221. M. Lichman. 2013. UCI Machine Learning Repository. https://archive.ics.uci.edu.Google ScholarGoogle Scholar
  222. Z. C. Lipton. 2018. The mythos of model interpretability: In machine learning, the concept of interpretability is both important and slippery. Queue 16, 3, 31–57. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  223. Z. C. Lipton, J. McAuley, and A. Chouldechova. 2018. Does mitigating ML’s impact disparity require treatment disparity? In Advances in Neural Information Processing Systems, Vol. 31. Curran Associates, 8125–8135.Google ScholarGoogle Scholar
  224. Y. Liu, G. Radanovic, C. Dimitrakakis, D. Mandal, and D. C. Parkes. 2017. Calibrated fairness in bandits. arXiv:1707.01875, also appeared at the Workshop on Fairness, Accountability, and Transparency in Machine Learning. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  225. L. T. Liu, S. Dean, E. Rolf, M. Simchowitz, and M. Hardt. 2018. Delayed impact of fair machine learning. In Proceedings of the 35th International Conference on Machine Learning, Vol. 80. PMLR, 3150–3158. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  226. M. Lopez and J. Marengo. 2011. An upper bound for the expected difference between order statistics. Math. Mag. 84, 5, 365–369. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  227. Y. Lou, R. Caruana, and J. Gehrke. 2012. Intelligible models for classification and regression. In Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD ’12). ACM, 150–158. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  228. T. Lu and C. Boutilier. 2011. Learning Mallows models with pairwise preferences. In Proceedings of the 28th International Conference on International Conference on Machine Learning (ICML ’11). Omnipress, 145–152.Google ScholarGoogle Scholar
  229. R. D. Luce. 1959. Individual Choice Behavior: A Theoretical Analysis. Wiley.Google ScholarGoogle Scholar
  230. G. F. Madaus and M. Clarke. 2001. The Adverse Impact of High Stakes Testing on Minority Students: Evidence from 100 Years of Test Data. Technical Report. ERIC.Google ScholarGoogle Scholar
  231. D. Madras, E. Creager, T. Pitassi, and R. Zemel. 2018. Learning adversarially fair and transferable representations. In Proceedings of the 35th International Conference on Machine Learning, Vol. 80, Stockholmsmässan, Stockholm, Sweden, July 10–15. PMLR, 3384–3393.Google ScholarGoogle Scholar
  232. R. Makhijani and J. Ugander. 2019. Parametric models for intransitivity in pairwise rankings. In Proceedings of the World Wide Web Conference (WWW ’19). ACM, 3056–3062. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  233. G. Malgieri and G. Comandé. 2017. Why a right to legibility of automated decision-making exists in the General Data Protection Regulation. Int. Data Priv. Law 7, 4, 243–265. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  234. H. J. Malik. 1966. Exact moments of order statistics from the Pareto distribution. Scand. Actuar. J. 1966, 3–4, 144–157. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  235. C. L. Mallows. 1957. Non-null ranking models. I. Biometrika 44, 1–2, 114–130. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  236. F. Manjoo. 2019. This summer stinks. But at least we’ve got “Old Town Road.” New York Times.Google ScholarGoogle Scholar
  237. C. F. Manski. 1977. The structure of random utility models. Theory Decis. 8, 3, 229–254. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  238. Y. Mansour, A. Slivkins, and V. Syrgkanis. 2015. Bayesian incentive-compatible bandit exploration. In Proceedings of the 16th ACM Conference on Economics and Computation (EC ’15). ACM, 565–582. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  239. Y. Mansour, A. Slivkins, and Z. S. Wu. 2018. Competing bandits: Learning under competition. In Proceedings of the 9th Innovations in Theoretical Computer Science (ITCS), Vol. 94. Schloss Dagstuhl–Leibniz-Zentrum fur Informatik, 48:1–48:27. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  240. A. Mariotti. 2017. Talent Acquisition Benchmarking Report. Technical Report. Society for Human Resource Management. http://web.archive.org/web/20230207163352/https://www.shrm.org/hr-today/trends-and-forecasting/research-and-surveys/Documents/2017-Talent-Acquisition-Benchmarking.pdf.Google ScholarGoogle Scholar
  241. D. Martens and F. Provost. 2014. Explaining data-driven document classifications. MIS Q. 38, 1, 73–100. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  242. R. P. McAfee and J. McMillan. 1986. Bidding for contracts: A principal–agent analysis. Rand J. Econ. 17, 3, 326–338. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  243. M. A. Mcdaniel, S. Kepes, and G. C. Banks. 2011. The Uniform Guidelines are a detriment to the field of personnel selection. Ind. Organ. Psychol. 4, 4, 494–514. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  244. N. Mehrabi, F. Morstatter, N. Saxena, K. Lerman, and A. Galstyan. 2021. A survey on bias and fairness in machine learning. ACM Comput. Surv. 54, 1–35. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  245. I. Mendoza and L. A. Bygrave. 2017. The right not to be subject to automated decisions based on profiling. In EU Internet Law. Springer, 77–98. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  246. J. Miller, S. Milli, and M. Hardt. 2020. Strategic classification is causal modeling in disguise. In Proceedings of the 37th International Conference on Machine Learning, Vol. 119. PMLR, 6917–6926.Google ScholarGoogle Scholar
  247. S. Milli, J. Miller, A. D. Dragan, and M. Hardt. 2019. The social cost of strategic classification. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* ’19). ACM, 230–239. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  248. J. P. Mills. 1926. Table of the ratio: Area to bounding ordinate, for any portion of normal curve. Biometrika 18, 3–4, 395–400. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  249. M. Mitchell, S. Wu, A. Zaldivar, P. Barnes, L. Vasserman, B. Hutchinson, E. Spitzer, I. D. Raji, and T. Gebru. 2019. Model cards for model reporting. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* ’19). ACM, 220–229. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  250. A. Morse and K. Pence. 2020. Technological Innovation and Discrimination in Household Finance. Technical Report 26739. National Bureau of Economic Research, Cambridge, MA. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  251. R. K. Mothilal, A. Sharma, and C. Tan. 2020. Explaining machine learning classifiers through diverse counterfactual explanations. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT* ’20). ACM, 607–617. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  252. H. Munsterberg. 1998. Psychology and Industrial Efficiency, Vol. 49. A&C Black.Google ScholarGoogle Scholar
  253. I. B. Myers. 1962. The Myers–Briggs Type Indicator. Consulting Psychologists Press. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  254. B. K. Natarajan. 1995. Sparse approximate solutions to linear systems. SIAM J. Comput. 24, 2, 227–234. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  255. National Research Council. 1989. Fairness in Employment Testing: Validity Generalization, Minority Issues, and the General Aptitude Test Battery. The National Academies Press. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  256. National Research Council. 2013. New Directions in Assessing Performance Potential of Individuals and Groups: Workshop Summary. The National Academies Press. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  257. D. Neumark, R. J. Bank, and K. D. Van Nort. Sex discrimination in restaurant hiring: An audit study. Q. J. Econ. 111, 3, 915–941, 1996. DOI: .Google ScholarGoogle ScholarCross RefCross Ref
  258. New York City Council. 2021. A Local Law to Amend the Administrative Code of the City of New York, in Relation to the Sale of Automated Employment Decision Tools. https://legistar.council.nyc.gov/LegislationDetail.aspx?ID=4344524&GUID=B051915D-A9AC-451E-81F8-6596032FA3F9&Options=Advanced&Search.Google ScholarGoogle Scholar
  259. A. Niculescu-Mizil and R. Caruana. 2005. Predicting good probabilities with supervised learning. In Proceedings of the 22nd International Conference on Machine Learning (ICML ’05). ACM, 625–632. DOI: .Google ScholarGoogle ScholarDigital LibraryDigital Library
  260. H. Nissenbaum. 2009. Privacy in Context: Technology, Policy, and the Integrity of Social Life. Stanford University Press.Google ScholarGoogle ScholarDigital LibraryDigital Library
  261. S. U. Noble. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press.Google ScholarGoogle Scholar
  262. W. T. Norman. 1963. Toward an adequate taxonomy of personality attributes: Replicated factor structure in peer nomination personality ratings. J. Abnorm. Soc. Psychol. 66, 6, 574–583.
  263. Z. Obermeyer and S. Mullainathan. 2019. Dissecting racial bias in an algorithm that guides health decisions for 70 million people. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* ’19). ACM, 89.
  264. N. G. Packin and Y. Lev-Aretz. 2016. On social credit and the right to be unnetworked. Columbia Bus. Law Rev. 2016, 2, 339–425.
  265. Y. Papanastasiou, K. Bimpikis, and N. Savva. 2018. Crowdsourcing exploration. Manage. Sci. 64, 4, 1727–1746.
  266. C. Passariello. 27 September 2016. Tech firms borrow football play to increase hiring of women. Wall Street J. https://www.wsj.com/articles/tech-firms-borrow-football-play-to-increase-hiring-of-women-1474963562.
  267. S. Passi and S. Barocas. 2019. Problem formulation and fairness. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* ’19). ACM, 39–48.
  268. M. V. Pauly. 1968. The economics of moral hazard: Comment. Am. Econ. Rev. 58, 3, 531–537.
  269. D. Pedreshi, S. Ruggieri, and F. Turini. 2008. Discrimination-aware data mining. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD ’08). ACM, 560–568.
  270. E. Perez-Richet and V. Skreta. 2022. Test design under falsification. Econometrica 90, 1109–1142.
  271. J. Platt. 1999. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. Advances in Large Margin Classifiers 10, 3, 61–74.
  272. G. Pleiss, M. Raghavan, F. Wu, J. Kleinberg, and K. Q. Weinberger. 2017. On fairness and calibration. In Proceedings of the 31st International Conference on Neural Information Processing Systems (NIPS ’17). Curran Associates, 5684–5693.
  273. J. Porter. 2020. UK ditches exam results generated by biased algorithm after student protests. The Verge.
  274. J. F. Power and R. F. Follett. 1987. Monoculture. Sci. Am. 256, 3, 78–87.
  275. J. Powles and H. Nissenbaum. 2018. The seductive diversion of ‘solving’ bias in artificial intelligence. Medium.com.
  276. R. Puri. 2018. Mitigating bias in AI models. IBM Research Blog.
  277. L. Quillian, D. Pager, O. Hexel, and A. H. Midtbøen. 2017. Meta-analysis of field experiments shows no change in racial discrimination in hiring over time. Proc. Natl. Acad. Sci. U. S. A. 114, 41, 10870–10875.
  278. S. Ragain and J. Ugander. 2016. Pairwise choice Markov chains. In Advances in Neural Information Processing Systems, 3198–3206.
  279. M. Raghavan. 2020. Testimony on New York City Int. No. 1894.
  280. M. Raghavan. 2021. The Societal Impacts of Algorithmic Decision-Making. Ph.D. thesis. Cornell University.
  281. M. Raghavan and S. Barocas. 2019. Challenges for Mitigating Bias in Algorithmic Hiring. Brookings.
  282. M. Raghavan, A. Slivkins, J. Wortman Vaughan, and Z. S. Wu. 2018. The externalities of exploration and how data diversity helps exploitation. In Proceedings of the 31st Conference on Learning Theory, Vol. 75. PMLR, 1724–1738.
  283. M. Raghavan, S. Barocas, J. Kleinberg, and K. Levy. 2020. Mitigating bias in algorithmic hiring: Evaluating claims and practices. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT* ’20). ACM, 469–481.
  284. I. D. Raji and J. Buolamwini. 2019. Actionable auditing: Investigating the impact of publicly naming biased performance results of commercial AI products. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (AIES ’19). ACM, 429–435.
  285. M. Raub. 2018. Bots, bias and big data: Artificial intelligence, algorithmic bias and disparate impact liability in hiring practices. Ark. Law Rev. 71, 529.
  286. Regulation B. 1975. 12 C.F.R. § 1002 et seq.
  287. J. H. Reiman. 1976. Privacy, intimacy, and personhood. Philos. Public Aff. 6, 26–44.
  288. L. Rhue. 2018. Racial influence on automated perceptions of emotions. Available at SSRN 3281765.
  289. P. A. Riach and J. Rich. 2002. Field experiments of discrimination in the market place. Econ. J. 112, 483, F480–F518.
  290. M. T. Ribeiro, S. Singh, and C. Guestrin. 2016. “Why should I trust you?”: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD ’16). ACM, 1135–1144.
  291. P. Rigollet and A. Zeevi. 2010. Nonparametric bandits with covariates. In Proceedings of the Conference on Learning Theory (COLT).
  292. J. Roach. 2018. Microsoft improves facial recognition technology to perform well across all skin tones, genders. The AI Blog.
  293. D. Rodina and J. Farragut. 2020. Inducing effort through grades. CRC TR 224 Discussion Paper Series crctr224_2020_221. University of Bonn and University of Mannheim, Germany.
  294. M. C. Rodriguez and Y. Maeda. 2006. Meta-analysis of coefficient alpha. Psychol. Methods 11, 3, 306–322.
  295. S. A. Ross. 1973. The economic theory of agency: The principal’s problem. Am. Econ. Rev. 63, 2, 134–139.
  296. E. Ruda and L. E. Albright. 1968. Racial differences on selection instruments related to subsequent job performance. Pers. Psychol. 21, 1, 31–41.
  297. C. Russell. 2019. Efficient search for diverse coherent explanations. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* ’19). ACM, 20–28.
  298. C. Russell, M. J. Kusner, J. R. Loftus, and R. Silva. 2017. When worlds collide: Integrating different counterfactual assumptions in fairness. In Proceedings of the 31st International Conference on Neural Information Processing Systems (NIPS ’17), Vol. 30. Curran Associates, 6417–6426.
  299. E. Salas. 2011. Reply to request for public comment on plan for retrospective analysis of significant regulations pursuant to Executive Order 13563.
  300. M. J. Salganik, P. S. Dodds, and D. J. Watts. 2006. Experimental study of inequality and unpredictability in an artificial cultural market. Science 311, 5762, 854–856.
  301. M. R. Sampford. 1953. Some inequalities on Mill’s ratio and related functions. Ann. Math. Stat. 24, 1, 130–132.
  302. J. Sanchez-Monedero, L. Dencik, and L. Edwards. 2020. What does it mean to solve the problem of discrimination in hiring? Social, technical and legal perspectives from the UK on automated hiring systems. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* ’20). ACM, 458–468.
  303. H. Schuler, J. L. Farr, and M. Smith. 1993. Personnel Selection and Assessment: Individual and Organizational Perspectives. Psychology Press.
  304. A. D. Selbst. 2019. A new HUD rule would effectively encourage discrimination by algorithm. Slate. https://slate.com/technology/2019/08/hud-disparate-impact-discrimination-algorithm.html.
  305. A. D. Selbst and J. Powles. 2017. Meaningful information and the right to explanation. Int. Data Priv. Law 7, 4, 233–242.
  306. A. D. Selbst and S. Barocas. 2018. The intuitive appeal of explainable machines. Fordham Law Rev. 87, 3, 1085. https://ir.lawnet.fordham.edu/flr/vol87/iss3/11.
  307. A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi. 2019. Fairness and abstraction in sociotechnical systems. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* ’19). ACM, 59–68.
  308. Senate Report No. 94-589, 1976.
  309. H. Shaban. 13 June 2017. What is the “Rooney Rule” that Uber just adopted? Washington Post.
  310. Y. Shavit, B. Edelman, and B. Axelrod. 2020. Causal strategic linear regression. In Proceedings of the 37th International Conference on Machine Learning (ICML ’20), Vol. 119. PMLR, 8676–8686.
  311. E. W. Shoben. 1978. Differential pass–fail rates in employment testing: Statistical proof under Title VII. Harvard Law Rev. 91, 4, 793–813.
  312. E. W. Shoben. 1979. In defense of disparate impact analysis under Title VII: A reply to Dr. Cohn. Indiana Law J. 55, 3, 515.
  313. J. Sidanius and M. Crane. 1989. Job evaluation and gender: The case of university faculty. J. Appl. Soc. Psychol. 19, 2, 174–197.
  314. J. Skeem, N. Scurich, and J. Monahan. 2020. Impact of risk assessment on judges’ fairness in sentencing relatively poor defendants. Law Hum. Behav. 44, 1, 51–59.
  315. Society for Industrial and Organizational Psychology. 2018. Principles for the Validation and Use of Personnel Selection Procedures. American Psychological Association.
  316. D. J. Solove. 2006. A taxonomy of privacy. Univ. PA Law Rev. 154, 477–560.
  317. M. Spence. 1973. Job market signaling. Q. J. Econ. 87, 3, 355–374.
  318. D. A. Spielman and S.-H. Teng. 2004. Smoothed analysis of algorithms: Why the simplex algorithm usually takes polynomial time. J. ACM 51, 3, 385–463.
  319. M. Stevenson. 2018. Assessing risk assessment in action. Minn. Law Rev. 103, 303.
  320. J. E. Stiglitz. 1974. Incentives and risk sharing in sharecropping. Rev. Econ. Stud. 41, 2, 219–255.
  321. D. Strauss. 1979. Some results on random utility models. J. Math. Psychol. 20, 1, 35–52.
  322. L. Sweeney. 2013. Discrimination in online ad delivery. Commun. ACM 56, 5, 44–54.
  323. P. Tambe, P. Cappelli, and V. Yakubovich. 2019. Artificial intelligence in human resources management: Challenges and a path forward. Calif. Manage. Rev. 61, 4, 15–42.
  324. I. Taneva. 2019. Information design. Am. Econ. J. Microecon. 11, 4, 151–185.
  325. W. Tang, C.-J. Ho, and Y. Liu. 2021. Linear models are robust optimal under strategic behavior. In Proceedings of the 24th International Conference on Artificial Intelligence and Statistics, Vol. 130. PMLR, 2584–2592.
  326. P. Tantri. 2020. Fintech for the poor: Financial intermediation without discrimination. Rev. Financ. 25, 2, 561–593.
  327. L. M. Terman. 1916. The Measurement of Intelligence: An Explanation of and a Complete Guide for the Use of the Stanford Revision and Extension of the Binet–Simon Intelligence Scale. Houghton Mifflin.
  328. L. L. Thurstone. 1927. A law of comparative judgment. Psychol. Rev. 34, 4, 273.
  329. F. Tramèr, F. Zhang, A. Juels, M. K. Reiter, and T. Ristenpart. 2016. Stealing machine learning models via prediction APIs. In Proceedings of the 25th USENIX Conference on Security Symposium (SEC ’16). USENIX Association, 601–618.
  330. F. G. Tricomi and A. Erdélyi. 1951. The asymptotic expansion of a ratio of gamma functions. Pacific J. Math. 1, 1, 133–142.
  331. J. A. Tropp. 2012. User-friendly tail bounds for sums of random matrices. Found. Comput. Math. 12, 4, 389–434.
  332. A. B. Tsybakov. 2009. Introduction to Nonparametric Estimation. Springer.
  333. L. E. Tyler. 1947. The Psychology of Human Differences. D. Appleton-Century Company.
  334. T. R. Tyler. 2006. Why People Obey the Law. Princeton University Press.
  335. E. L. Uhlmann and G. L. Cohen. 2005. Constructed criteria: Redefining merit to justify discrimination. Psychol. Sci. 16, 6, 474–480.
  336. U.S. Congress. 1964. Civil Rights Act.
  337. U.S. Congress. 1990. Americans with Disabilities Act.
  338. U.S. Congress. 1991. Civil Rights Act.
  339. U.S. Department of Housing and Urban Development. 2018. Charge of discrimination, HUD v. Facebook. Retrieved from https://archives.hud.gov/news/2019/HUD_v_Facebook.pdf.
  340. U.S. Department of Housing and Urban Development. 2019. HUD’s implementation of the Fair Housing Act’s disparate impact standard. Retrieved from https://www.federalregister.gov/documents/2019/08/19/2019-17542/huds-implementation-of-the-fair-housing-acts-disparate-impact-standard.
  341. B. Ustun, A. Spangher, and Y. Liu. 2019. Actionable recourse in linear classification. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* ’19). ACM, 10–19.
  342. L. van den Bergh, E. Denessen, L. Hornstra, M. Voeten, and R. W. Holland. 2010. The implicit prejudiced attitudes of teachers: Relations to teacher expectations and the ethnic achievement gap. Am. Educ. Res. J. 47, 2, 497–527.
  343. S. Wachter, B. Mittelstadt, and L. Floridi. 2017. Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation. Int. Data Priv. Law 7, 2, 76–99.
  344. S. Wachter, B. Mittelstadt, and C. Russell. 2018. Counterfactual explanations without opening the black box: Automated decisions and the GDPR. Harv. J. Law Technol. 31, 2, 841–887.
  345. C. Wenneras and A. Wold. 1997. Nepotism and sexism in peer-review. Nature 387, 341–343.
  346. D. R. Williams and S. A. Mohammed. 2009. Discrimination and racial disparities in health: Evidence and needed research. J. Behav. Med. 32, 1, 20–47.
  347. C. Wilson, A. Ghosh, S. Jiang, A. Mislove, L. Baker, J. Szary, K. Trindel, and F. Polli. 2021. Building and auditing fair algorithms: A case study in candidate screening. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21). ACM, 666–677.
  348. B. Woodworth, S. Gunasekar, M. I. Ohannessian, and N. Srebro. 2017. Learning non-discriminatory predictors. In Proceedings of the 2017 Conference on Learning Theory, Vol. 65. PMLR, 1920–1953.
  349. K. Yang and J. Stoyanovich. 2017. Measuring fairness in ranked outputs. In Proceedings of the 29th International Conference on Scientific and Statistical Database Management (SSDBM ’17). ACM, Article 22, 1–6.
  350. J. I. Yellott Jr. 1977. The relationship between Luce’s choice axiom, Thurstone’s theory of comparative judgment, and the double exponential distribution. J. Math. Psychol. 15, 2, 109–144.
  351. J. W. Young. 2001. Differential Validity, Differential Prediction, and College Admission Testing: A Comprehensive Review and Analysis. Research Report No. 2001-6. College Entrance Examination Board.
  352. B. Zadrozny and C. Elkan. 2001. Obtaining calibrated probability estimates from decision trees and naive Bayesian classifiers. In Proceedings of the 18th International Conference on Machine Learning (ICML ’01). Morgan Kaufmann, 609–616.
  353. M. B. Zafar, I. Valera, M. G. Rodriguez, and K. P. Gummadi. 2017a. Fairness beyond disparate treatment & disparate impact: Learning classification without disparate mistreatment. In Proceedings of the 26th International Conference on World Wide Web (WWW ’17). IW3C2, 1171–1180.
  354. M. B. Zafar, I. Valera, M. G. Rodriguez, and K. P. Gummadi. 2017b. Fairness constraints: Mechanisms for fair classification. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, Vol. 54. PMLR, 962–970.
  355. A. Zapechelnyuk. 2020. Optimal quality certification. Am. Econ. Rev. Insights 2, 2, 161–176.
  356. T. Z. Zarsky. 2008. Law and online social networks: Mapping the challenges and promises of user-generated information flows. Fordham Intellect. Prop. Media Entertain. Law J. 18, 3, 741–783.
  357. M. Zehlike, F. Bonchi, C. Castillo, S. Hajian, M. Megahed, and R. Baeza-Yates. 2017. FA*IR: A fair top-k ranking algorithm. In Proceedings of the 2017 ACM Conference on Information and Knowledge Management (CIKM ’17). ACM, 1569–1578.
  358. R. S. Zemel, Y. Wu, K. Swersky, T. Pitassi, and C. Dwork. 2013. Learning fair representations. In Proceedings of the 30th International Conference on Machine Learning, Vol. 28. PMLR, 325–333.
  359. Z. Zhao, T. Villamil, and L. Xia. 2018. Learning mixtures of random utility models. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence. AAAI Press, 4530–4538.
  360. D. Zhou, J. Luo, V. M. B. Silenzio, Y. Zhou, J. Hu, G. Currier, and H. Kautz. 2015. Tackling mental health by integrating unobtrusive multimodal sensing. In Proceedings of the 29th AAAI Conference on Artificial Intelligence (AAAI ’15). AAAI Press, 1401–1408.
  361. M. Ziewitz. 2019. Rethinking gaming: The ethical work of optimization in web search engines. Soc. Stud. Sci. 49, 5, 707–731.
Contributors
  • Massachusetts Institute of Technology
