DOI: 10.1145/3351095.3372871
Research article
Open access

Roles for computing in social change

Published: 27 January 2020

Abstract

A recent normative turn in computer science has brought concerns about fairness, bias, and accountability to the core of the field. Yet recent scholarship has warned that much of this technical work treats problematic features of the status quo as fixed, and fails to address deeper patterns of injustice and inequality. While acknowledging these critiques, we posit that computational research has valuable roles to play in addressing social problems --- roles whose value can be recognized even from a perspective that aspires toward fundamental social change. In this paper, we articulate four such roles, through an analysis that considers the opportunities as well as the significant risks inherent in such work. Computing research can serve as a diagnostic, helping us to understand and measure social problems with precision and clarity. As a formalizer, computing shapes how social problems are explicitly defined --- changing how those problems, and possible responses to them, are understood. Computing serves as rebuttal when it illuminates the boundaries of what is possible through technical means. And computing acts as synecdoche when it makes long-standing social problems newly salient in the public eye. We offer these paths forward as modalities that leverage the particular strengths of computational work in the service of social change, without overclaiming computing's capacity to solve social problems on its own.


Published In

FAT* '20: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency
January 2020
895 pages
ISBN:9781450369367
DOI:10.1145/3351095
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 27 January 2020

Author Tags

  1. discrimination
  2. inequality
  3. social change
  4. societal implications of AI

Qualifiers

  • Research-article

Conference

FAT* '20

