Research article
Open access

Image Cropping on Twitter: Fairness Metrics, their Limitations, and the Importance of Representation, Design, and Agency

Published: 18 October 2021

Abstract

Twitter uses machine learning to crop images, centering each crop on the part of the image predicted to be most salient. In fall 2020, Twitter users raised concerns that the automated cropping system favored light-skinned over dark-skinned individuals, and that it favored cropping women's bodies over their heads. To address these concerns, we conduct an extensive analysis using formalized group fairness metrics. We find systematic disparities in cropping and identify contributing factors, including the fact that cropping based on the single most salient point can amplify disparities, an effect we term argmax bias. However, we demonstrate that formalized fairness metrics and quantitative analysis on their own are insufficient for capturing the risk of representational harm in automatic cropping. We suggest removing saliency-based cropping in favor of a solution that better preserves user agency. To develop a new solution that sufficiently addresses concerns of representational harm, our critique motivates combining quantitative and qualitative methods, including human-centered design.
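To make the argmax-bias effect concrete, here is a minimal, hypothetical Python sketch (not Twitter's production pipeline; the image size, saliency values, and score gap are invented for illustration). It centers a fixed-size crop on the single most salient pixel, then simulates two subjects whose saliency peaks differ by an amount comparable to the noise; because argmax is winner-take-all, that tiny gap becomes a large disparity in which subject the crop retains.

```python
import numpy as np

def argmax_crop(saliency: np.ndarray, crop_h: int, crop_w: int) -> tuple:
    """Center a fixed-size crop on the single most salient pixel.

    `saliency` is a 2D array of predicted per-pixel saliency scores.
    Returns the (top, left) corner of the crop, clipped to the image bounds.
    """
    h, w = saliency.shape
    # Winner-take-all: one pixel decides the crop, however narrow its lead.
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    top = int(np.clip(y - crop_h // 2, 0, h - crop_h))
    left = int(np.clip(x - crop_w // 2, 0, w - crop_w))
    return top, left

# Toy scenario: a 100x200 image with two "faces" whose saliency peaks differ
# by 0.01, comparable to the per-pixel noise. Only one face fits in the crop.
rng = np.random.default_rng(0)
trials, kept_a = 1000, 0
for _ in range(trials):
    saliency = rng.normal(0.0, 0.01, size=(100, 200))
    saliency[50, 50] += 0.50   # face A's saliency peak
    saliency[50, 150] += 0.49  # face B's saliency peak, barely lower
    _, left = argmax_crop(saliency, 100, 100)
    kept_a += left < 50        # crop window [left, left + 100) covers face A
print(f"face A kept in {100 * kept_a / trials:.0f}% of crops")  # ~75%, not ~50%
```

The numbers are illustrative only, but the mechanism is the one the abstract names: because the crop keeps exactly one winner, any systematic difference in predicted saliency between groups, however small, is magnified at the level of cropping outcomes.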





Published In

Proceedings of the ACM on Human-Computer Interaction, Volume 5, Issue CSCW2
October 2021
5376 pages
EISSN: 2573-0142
DOI: 10.1145/3493286
This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 18 October 2021
Published in PACMHCI Volume 5, Issue CSCW2


Author Tags

  1. demographic parity
  2. ethical HCI
  3. fairness in machine learning
  4. image cropping
  5. representational harm

Qualifiers

  • Research-article


Article Metrics

  • Downloads (last 12 months): 394
  • Downloads (last 6 weeks): 32

Reflects downloads up to 10 Sep 2024.

Cited By
  • (2024) Beyond Behaviorist Representational Harms: A Plan for Measurement and Mitigation. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 933-946. DOI: 10.1145/3630106.3658946. Published 3 Jun 2024.
  • (2024) The Fall of an Algorithm: Characterizing the Dynamics Toward Abandonment. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 337-358. DOI: 10.1145/3630106.3658910. Published 3 Jun 2024.
  • (2024) Cruising Queer HCI on the DL: A Literature Review of LGBTQ+ People in HCI. Proceedings of the CHI Conference on Human Factors in Computing Systems, 1-21. DOI: 10.1145/3613904.3642494. Published 11 May 2024.
  • (2024) Wikibench: Community-Driven Data Curation for AI Evaluation on Wikipedia. Proceedings of the CHI Conference on Human Factors in Computing Systems, 1-24. DOI: 10.1145/3613904.3642278. Published 11 May 2024.
  • (2024) Can Generative AI improve social science? Proceedings of the National Academy of Sciences, 121(21). DOI: 10.1073/pnas.2314021121. Published 9 May 2024.
  • (2023) AuCFSR: Authentication and Color Face Self-Recovery Using Novel 2D Hyperchaotic System and Deep Learning Models. Sensors, 23(21), 8957. DOI: 10.3390/s23218957. Published 3 Nov 2023.
  • (2023) Ecologies of Violence on Social Media: An Exploration of Practices, Contexts, and Grammars of Online Harm. Social Media + Society, 9(3). DOI: 10.1177/20563051231196882. Published 8 Sep 2023.
  • (2023) Sociotechnical Harms of Algorithmic Systems: Scoping a Taxonomy for Harm Reduction. Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, 723-741. DOI: 10.1145/3600211.3604673. Published 8 Aug 2023.
  • (2023) Discrimination through Image Selection by Job Advertisers on Facebook. Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 1772-1788. DOI: 10.1145/3593013.3594115. Published 12 Jun 2023.
  • (2023) AI's Regimes of Representation: A Community-centered Study of Text-to-Image Models in South Asia. Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 506-517. DOI: 10.1145/3593013.3594016. Published 12 Jun 2023.
