Intersectional AI: A Study of How Information Science Students Think about Ethics and Their Impact

Published: 15 October 2020

Abstract

Recent literature has demonstrated the limited and, in some instances, waning role of ethical training in computing classes in the US. The capacity for artificial intelligence (AI) to be inequitable or harmful is well documented, yet the issue continues to lack apparent urgency or effective mitigation. The question we raise in this paper is how to prepare future generations to recognize and grapple with the ethical concerns of the range of issues plaguing AI, particularly when AI is combined with surveillance technologies in ways that have grave implications for social participation and restriction: from risk assessment and bail assignment in criminal justice, to public benefits distribution and access to housing and other critical resources that enable security and success within society. The US is a mecca of information and computer science (IS and CS) learning for Asian students, whose experiences as minorities render them familiar with, and vulnerable to, the societal bias that feeds AI bias. Our goal was to better understand how students who are being educated to design AI systems think about these issues and, in particular, their sensitivity to intersectional considerations that heighten risk for vulnerable groups. In this paper we report on findings from qualitative interviews with 20 graduate students, 11 from an AI class and 9 from a Data Mining class. We find that students are not predisposed to think deeply about the implications of AI design for the privacy and well-being of others unless explicitly encouraged to do so. When they do, their thinking is focused through the lens of personal identity and experience, but their reflections tend to center on bias, an intrinsic feature of design, rather than on fairness, an outcome that requires them to imagine the consequences of AI. While they are, in fact, equipped to think about fairness when prompted by discussion and by design exercises that explicitly invite consideration of intersectionality and structural inequalities, many need help to do this empathy 'work.' Notably, the students who more frequently reflect on intersectional problems related to bias and fairness are also more likely to consider the connection between model attributes and bias, and the interaction with context. Our findings suggest that experience with identity-based vulnerability promotes more analytically complex thinking about AI, lending further support to the argument that identity-related ethics should be integrated into IS and CS curricula rather than positioned as a stand-alone course.



      Published In

Proceedings of the ACM on Human-Computer Interaction, Volume 4, Issue CSCW2 (CSCW)
October 2020, 2310 pages
EISSN: 2573-0142
DOI: 10.1145/3430143

      Publisher

Association for Computing Machinery, New York, NY, United States


      Author Tags

      1. algorithm bias
      2. artificial intelligence
      3. education
      4. ethics
      5. intersectionality
