DOI: 10.1145/3593013.3594104
Research article | Open access

Interrogating the T in FAccT

Published: 12 June 2023

Abstract

Fairness, accountability, and transparency are the three conceptual foundations of the FAccT conference. Transparency, however, has yet to be scrutinized to the same degree as accountability and fairness. As a result, we don't know: What does this community mean when it talks about transparency? How are we doing transparency? And to what ends? What commitments does (or should) the T in FAccT signify? This paper interrogates the T in FAccT using perspectives from critical transparency literature. Subsequently, we argue that FAccT might be better off dropping the T from its title for two reasons: (1) transparency can often be counterproductive to FAccT's primary objectives and (2) it is misleading as FAccT is mainly preoccupied with explainability rather than actual transparency. If we want to keep the T, we need to reframe how we think about and do transparency by making transparency contingent, reclaiming it from explainability, and bringing people into transparency processes.




Published In

FAccT '23: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency
June 2023
1929 pages
ISBN:9798400701924
DOI:10.1145/3593013
This work is licensed under a Creative Commons Attribution International 4.0 License.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 12 June 2023


Author Tags

  1. Critical Transparency Studies
  2. Explainability
  3. Interpretability
  4. Transparency

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

FAccT '23



Article Metrics

  • Downloads (last 12 months): 529
  • Downloads (last 6 weeks): 74
Reflects downloads up to 28 Feb 2025


Cited By

  • (2024) From Model Performance to Claim: How a Change of Focus in Machine Learning Replicability Can Help Bridge the Responsibility Gap. SSRN Electronic Journal. DOI: 10.2139/ssrn.4806609. Online publication date: 2024.
  • (2024) Reparations of the horse? Algorithmic reparation and overspecialized remedies. Big Data & Society, 11(3). DOI: 10.1177/20539517241270670. Online publication date: 24-Sep-2024.
  • (2024) Transparency in the Wild: Navigating Transparency in a Deployed AI System to Broaden Need-Finding Approaches. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 1494-1514. DOI: 10.1145/3630106.3658985. Online publication date: 3-Jun-2024.
  • (2024) Algorithmic Transparency and Participation through the Handoff Lens: Lessons Learned from the U.S. Census Bureau's Adoption of Differential Privacy. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 1150-1162. DOI: 10.1145/3630106.3658962. Online publication date: 3-Jun-2024.
  • (2024) From Model Performance to Claim: How a Change of Focus in Machine Learning Replicability Can Help Bridge the Responsibility Gap. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 1002-1013. DOI: 10.1145/3630106.3658951. Online publication date: 3-Jun-2024.
  • (2024) Mind The Gap: Designers and Standards on Algorithmic System Transparency for Users. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1-16. DOI: 10.1145/3613904.3642531. Online publication date: 11-May-2024.
  • (2024) Signs of the Smart City: Exploring the Limits and Opportunities of Transparency. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1-14. DOI: 10.1145/3613904.3641931. Online publication date: 11-May-2024.
  • (2023) Terms-we-serve-with: Five dimensions for anticipating and repairing algorithmic harm. Big Data & Society, 10(2). DOI: 10.1177/20539517231211553. Online publication date: 15-Nov-2023.
