DOI: 10.1145/3670947.3670976

Beyond Predictive Algorithms in Child Welfare

Published: 21 September 2024

Abstract

Caseworkers in the child welfare (CW) sector use predictive decision-making algorithms built on risk assessment (RA) data to guide and support CW decisions. Researchers have highlighted that RAs can contain biased signals that flatten CW case complexities, and that the algorithms may benefit from incorporating contextually rich case narratives, i.e., the casenotes written by caseworkers. To investigate this hypothesized improvement, we quantitatively deconstructed two commonly used RAs from a United States CW agency. We trained classifier models to compare the predictive validity of RAs with and without casenote narratives, and applied computational text analysis to surface the topics present in the casenotes. Our study finds that common risk metrics used to assess families and build CW predictive risk models (PRMs) are unable to predict discharge outcomes for children who are not reunified with their birth parent(s). We also find that although casenotes cannot predict discharge outcomes, they contain contextual case signals. Given the lack of predictive validity of RA scores and casenotes, we propose moving beyond quantitative risk assessments for public-sector algorithms and towards using contextual sources of information, such as narratives, to study public sociotechnical systems.
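
The paper's own models and data are not shown on this page. As a rough illustration of the comparison the abstract describes (scoring a classifier on risk-assessment features alone versus the same features augmented with casenote topic mixtures), here is a minimal Python sketch using scikit-learn on synthetic stand-in data. Every variable name, feature count, and model choice below is an assumption made for illustration, not the authors' pipeline.

# Hypothetical sketch, not the paper's code: compare the predictive
# validity of risk-assessment (RA) features alone against RA features
# plus LDA topic mixtures derived from casenotes. All data is synthetic.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_cases = 500

# Stand-in RA item scores (e.g., ten Likert-style items scored 1-5).
ra_features = rng.integers(1, 6, size=(n_cases, 10)).astype(float)

# Stand-in casenotes: short free-text narratives, one per case.
words = ["visit", "housing", "school", "therapy", "court", "placement"]
casenotes = [" ".join(rng.choice(words, size=30)) for _ in range(n_cases)]

# Stand-in binary discharge outcome (e.g., reunified vs. not reunified).
y = rng.integers(0, 2, size=n_cases)

# Topic-model the casenotes and use each case's topic mixture as extra
# features. (For simplicity the topic model is fit on the full corpus;
# a careful evaluation would refit it inside each cross-validation fold.)
counts = CountVectorizer().fit_transform(casenotes)
topic_mix = LatentDirichletAllocation(
    n_components=5, random_state=0).fit_transform(counts)

# Score the same classifier with and without the narrative features.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
auc_ra = cross_val_score(clf, ra_features, y, cv=5, scoring="roc_auc").mean()
auc_all = cross_val_score(clf, np.hstack([ra_features, topic_mix]), y,
                          cv=5, scoring="roc_auc").mean()
print(f"RA features only:      mean AUC = {auc_ra:.2f}")
print(f"RA + casenote topics:  mean AUC = {auc_all:.2f}")

On synthetic data both scores hover near chance (AUC around 0.5); the sketch only shows the shape of the comparison, fitting the same classifier once with and once without the narrative-derived features, not any substantive result.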

Published In

GI '24: Proceedings of the 50th Graphics Interface Conference
June 2024
437 pages
ISBN: 9798400718281
DOI: 10.1145/3670947

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. algorithmic bias
  2. algorithmic decision-making
  3. child welfare
  4. public sector
  5. risk assessments

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Funding Sources

  • Connaught New Researcher award
  • NSERC Discovery Early Career Researcher Grant

Conference

GI '24: Graphics Interface
June 3-6, 2024
Halifax, NS, Canada

Acceptance Rates

Overall acceptance rate: 206 of 508 submissions, 41%
