Abstract
Bias detection and mitigation are active areas of research in machine learning. This work extends previous research by the authors, Van Busum and Fang (Proceedings of the 38th ACM/SIGAPP Symposium on Applied Computing, 2023), to provide a more rigorous and complete analysis of the bias found in AI predictive models. Admissions data spanning six years were used to build an AI model that determines whether a given student would be directly admitted into the School of Science under various scenarios at a large urban research university. During this period, submission of standardized test scores as part of a student's application became optional, which raised interesting questions about the impact of standardized test scores on admission decisions. We developed and analyzed AI models to understand which variables are important in admissions decisions and how the decision to exclude test scores affects the demographics of the admitted students. We then evaluated the predictive models to detect and analyze the biases they may carry with respect to three variables chosen to represent sensitive populations: gender, race, and whether a student was the first in their family to attend college. We also extended our analysis to show that the detected biases were persistent. Finally, we included several fairness metrics in our analysis and discussed the uses and limitations of these metrics.
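To give a concrete sense of the kind of group-fairness metrics the abstract refers to, the sketch below computes two common ones, demographic parity difference and equal opportunity difference, on toy admission predictions for a binary sensitive attribute. This is a minimal illustration of the general technique, not the authors' actual analysis; the data and function names are hypothetical.

```python
def demographic_parity_diff(y_pred, group):
    """P(predicted admit | group=1) - P(predicted admit | group=0)."""
    def admit_rate(g):
        preds = [p for p, s in zip(y_pred, group) if s == g]
        return sum(preds) / len(preds)
    return admit_rate(1) - admit_rate(0)

def equal_opportunity_diff(y_true, y_pred, group):
    """True-positive-rate gap: among truly admitted students,
    how much more often does the model admit group 1 than group 0?"""
    def tpr(g):
        preds = [p for t, p, s in zip(y_true, y_pred, group) if s == g and t == 1]
        return sum(preds) / len(preds)
    return tpr(1) - tpr(0)

# Toy data: 1 = admitted, group is a hypothetical binary sensitive attribute.
y_true = [1, 1, 0, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]

print(demographic_parity_diff(y_pred, group))          # 0.0: equal admit rates
print(equal_opportunity_diff(y_true, y_pred, group))   # 1/3: TPR gap favoring group 1
```

A value of zero on either metric indicates parity between the groups; here the model admits both groups at the same overall rate yet still favors group 1 among qualified applicants, illustrating why multiple metrics are needed.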
Data availability
The datasets generated during and/or analyzed during this study are not publicly available because they are the property of IUPUI.
References
Van Busum K, Fang S (2023) Analysis of AI models for student admissions: a case study. In: Proceedings of the 38th ACM/SIGAPP Symposium on Applied Computing (SAC ’23), March 2023, pp. 17–22. https://doi.org/10.1145/3555776.3577743
Hammer LB, Grigsby TD, Woods S (1998) The conflicting demands of work, family, and school among students at an urban university. J Psychol 132(1):220–226. https://doi.org/10.1080/00223989809599161
Harper SR, Smith EJ, Davis CHF III (2018) A critical race case analysis of black undergraduate student success at an urban university. Urban Edu 53(1):3–25. https://doi.org/10.1177/0042085916668956
Sotomayor L, Tarhan D, Vieta M, McCartney S, Mas A (2022) When students are house-poor: Urban universities, student marginality, and the hidden curriculum of student housing. Cities 124:103572. https://doi.org/10.1016/j.cities.2022.103572
Rodriguez-Planas N (2022) Hitting where it hurts most: COVID-19 and low-income urban college students. Econ Educ Rev 87:102233. https://doi.org/10.1016/j.econedurev.2022.102233
Edelman J (2022) Survey: Test-Optional is Appealing to Minority Students. Retrieved Oct. 12, 2022 from https://www.diverseeducation.com/students/article/15292751/survey-finds-testoptional-policies-a-significant-motivator-for-minority-college-applicants
Bennett CT (2022) Untested admissions: examining changes in application behavior and student demographics under test-optional policies. Am Edu Res J 59(1):180–216. https://doi.org/10.3102/00028312211003526
Belasco AS, Rosinger KO, Hearn JC (2015) The test-optional movement at America's selective liberal arts colleges: a boon for equity or something else? Edu Evalu Policy Anal 37(2):206–223. https://doi.org/10.3102/0162373714537350
Liu Z, Garg N (2021) Test-optional policies: overcoming strategic behavior and informational gaps. In: Proceedings of the ACM Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO ’21), Oct. 5–9, 2021, ACM Inc., New York, NY, pp. 1–13. https://doi.org/10.1145/3465416.3483293
Waters A, Miikkulainen R (2014) GRADE: machine learning support for graduate admissions. AI Mag 35(1):64–75. https://doi.org/10.1609/aimag.v35i1.2504
Young NT, Caballero MD (2019) Using machine learning to understand physics graduate school admissions. arXiv: 1907.01570. Retrieved from https://arxiv.org/abs/1907.01570
Jamison J (2017) Applying machine learning to predict Davidson college’s admissions yield. In: Proceedings of the 2017 ACM SIGCSE Technical Symposium on Computer Science Education (SIGCSE ’17), Mar. 8–11, 2017, Seattle, Washington. ACM Inc., New York, NY, pp. 765–766. https://doi.org/10.1145/3017680.3022468
Acharya MS, Armaan A, Antony AS (2019) A comparison of regression models for prediction of graduate admissions. In: 2019 International Conference on Computational Intelligence in Data Science (ICCIDS), Feb. 21–23, 2019, Chennai, India. IEEE, pp. 1–5. https://doi.org/10.1109/ICCIDS.2019.8862140
Raghavendran CV, Pavan Venkata Vamsi C, Veerraju T, Veluri RK (2021) Predicting student admissions rate into university using machine learning models. In: Bhattacharyya D, Thirupathi Rao N (eds) Machine intelligence and soft computing. Advances in Intelligent Systems and Computing. Springer, Singapore
Alvero AJ, Arthurs N, Antonio AL, Domingue BW, Gebre-Medhin B, Giebel S, Stevens ML (2020) AI and holistic review: informing human reading in college admissions. In: Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (AIES ’20), Feb. 7–9, 2020, New York, NY. ACM Inc., New York, NY, pp. 200–206. https://doi.org/10.1145/3375627.3375871
Martinez Neda B, Zeng Y, Gago-Masague S (2021) Using machine learning in admissions: reducing human and algorithmic bias in the selection process. In: Proceedings of the 52nd ACM Technical Symposium on Computer Science Education (SIGCSE ’21), Mar. 13–20, 2021. Virtual. ACM Inc., New York, NY, p. 1323. https://doi.org/10.1145/3408877.3439664
d’Alessandro B, O’Neil C, LaGatta T (2017) Conscientious classification: a data scientist’s guide to discrimination-aware classification. Big Data 5(2):120–134. https://doi.org/10.1089/big.2016.0048
Danks D, London AJ (2017) Algorithmic bias in autonomous systems. In: Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI ’17), Aug. 19–25, 2017. Melbourne, Australia, pp. 4691–4697. https://doi.org/10.24963/ijcai.2017/654
Mehrabi N, Morstatter F, Saxena N, Lerman K, Galstyan A (2019) A Survey on Bias and Fairness in Machine Learning. arXiv: 1908.09635. Retrieved from https://arxiv.org/abs/1908.09635
Caton S, Haas C (2020) Fairness in Machine Learning: A Survey. arXiv: 2010.04053. Retrieved from https://arxiv.org/abs/2010.04053
Marcinkowski F, Kieslich K, Starke C, Lünich M (2020) Implications of AI (un-)fairness in higher education admissions: the effects of perceived AI (un-)fairness on exit, voice and organizational reputation. In: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT* ’20), Jan. 2020, pp. 122–130. https://doi.org/10.1145/3351095.3372867
Kordzadeh N, Ghasemaghaei M (2021) Algorithmic bias: review, synthesis, and future research directions. Eur J Inform Syst 31:388–409. https://doi.org/10.1080/0960085X.2021.1927212
Memarian B, Doleck T (2023) Fairness, accountability, transparency, and ethics (FATE) in artificial intelligence (AI) and higher education: a systematic review. Comput Edu Artif Intell 5:100152. https://doi.org/10.1016/j.caeai.2023.100152
Belenguer L (2022) AI bias: exploring discriminatory algorithmic decision-making models and the application of possible machine-centric solutions adapted from the pharmaceutical industry. AI Ethics 2(4):771–787. https://doi.org/10.1007/s43681-022-00138-8
Kleinberg J, Mullainathan S, Raghavan M (2016) Inherent Trade-Offs in the Fair Determination of Risk Scores. arXiv: 1609.05807v1. Retrieved from https://arxiv.org/pdf/1609.05807v1.pdf
Acknowledgements
The authors want to thank the following people at IUPUI for their help in providing the dataset used in this study, for answering questions about the dataset format, and for explaining policies related to admissions: Jane Williams, Joe Thompson, Steve Graunke, Matt Moody, Norma Fewell, and Lori Hart.
Author information
Authors and Affiliations
Contributions
Both authors contributed equally to all parts of this work.
Corresponding author
Ethics declarations
Conflict of interest
The authors have no relevant financial or non-financial interests to disclose.
Ethical approval
This study was reviewed by the IUPUI Institutional Review Board.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Van Busum, K., Fang, S. Bias analysis of AI models for undergraduate student admissions. Neural Comput & Applic (2024). https://doi.org/10.1007/s00521-024-10762-6
Received:
Accepted:
Published:
DOI: https://doi.org/10.1007/s00521-024-10762-6