DOI: 10.1145/3626252.3630815

Crafting Disability Fairness Learning in Data Science: A Student-Centric Pedagogical Approach

Published: 07 March 2024
Abstract

    Ensuring the fairness of machine learning (ML) systems for individuals with disabilities is crucial: proactive measures are needed to identify and mitigate biases in data and models before they harm people with disabilities. While previous research on ML fairness education has concentrated primarily on gender and race, disability fairness has received comparatively little attention. To address this gap, we took a student-centric approach to designing a disability fairness teaching intervention. A focus group of students experienced in ML and accessible computing underscored the importance of engagement and scaffolding strategies for effectively learning intricate topics. We therefore developed a hands-on programming assignment that uncovers disability bias through an intersectional lens, tailored for an introductory undergraduate data science (DS) course. We used reflective questions and surveys to gauge the effectiveness of our approach. The findings indicate that it promotes a deeper understanding of disability fairness within the context of DS education.
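    The kind of bias probe the assignment describes (uncovering disability bias in a model's outputs) can be illustrated with a term-substitution test: fill neutral sentence templates with disability-related and baseline identity terms, score each sentence with a sentiment model, and compare the group means. The sketch below is illustrative only and is not taken from the paper's assignment; the templates, term lists, and the tiny stand-in lexicon scorer (a placeholder for a real model such as VADER) are all assumptions.

    ```python
    # Hedged sketch: probing a sentiment scorer for disability bias via term
    # substitution. TOY_LEXICON stands in for a real sentiment model; the
    # templates and identity-term groups are hypothetical examples.
    TOY_LEXICON = {"great": 1.0, "friend": 0.5, "problem": -1.0, "suffers": -1.0}

    def toy_sentiment(sentence: str) -> float:
        """Sum lexicon scores over lowercased tokens; 0.0 means neutral."""
        return sum(TOY_LEXICON.get(tok.strip(".,"), 0.0)
                   for tok in sentence.lower().split())

    # Neutral templates filled with terms from each group.
    TEMPLATES = ["My {} friend is great.", "A {} person asked a question."]
    GROUPS = {"disability": ["blind", "deaf"], "baseline": ["tall", "young"]}

    def mean_group_score(terms):
        sents = [t.format(term) for t in TEMPLATES for term in terms]
        return sum(map(toy_sentiment, sents)) / len(sents)

    gap = mean_group_score(GROUPS["disability"]) - mean_group_score(GROUPS["baseline"])
    print(f"score gap (disability - baseline): {gap:+.2f}")
    ```

    With this toy lexicon the gap is zero, since none of the identity terms carry a score; the probe becomes informative once a trained model replaces `toy_sentiment`, where a nonzero gap would flag that mentions of disability alone shift the model's predictions.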




        Published In

        SIGCSE 2024: Proceedings of the 55th ACM Technical Symposium on Computer Science Education V. 1
        March 2024, 1583 pages
        ISBN: 9798400704239
        DOI: 10.1145/3626252

        Publisher

        Association for Computing Machinery, New York, NY, United States



        Author Tags

        1. cs education
        2. data science
        3. disability
        4. fairness
        5. machine learning

        Qualifiers

        • Research-article

        Funding Sources

        • SIGCSE Special Projects Grant

        Conference

        SIGCSE 2024

        Acceptance Rates

        Overall Acceptance Rate: 1,595 of 4,542 submissions, 35%
