What Makes an Image Tagger Fair?

Published: 07 June 2019

Abstract

Image analysis algorithms have been a boon to personalization in digital systems and are now widely available via easy-to-use APIs. However, it is important to ensure that they behave fairly in applications that involve processing images of people, such as dating apps. We conduct an experiment to shed light on the factors influencing the perception of "fairness." Participants are shown a photo along with two descriptions (human- and algorithm-generated). They are then asked to indicate which is "more fair" in the context of a dating site, and explain their reasoning. We vary a number of factors, including the gender, race and attractiveness of the person in the photo. While participants generally found human-generated tags to be more fair, API tags were judged as being more fair in one setting - where the image depicted an "attractive," white individual. In their explanations, participants often mention accuracy, as well as the objectivity/subjectivity of the tags in the description. We relate our work to the ongoing conversation about fairness in opaque tools like image tagging APIs, and their potential to result in harm.
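
The "API-generated" descriptions discussed above come from off-the-shelf image-tagging services. As a rough illustration only (the paper does not publish its tagging pipeline), the Python sketch below posts a person photo to a hypothetical tagging endpoint and keeps the highest-confidence labels; the endpoint URL, API key, and response schema are placeholder assumptions, not the specific service studied.

import base64
import requests

# Hypothetical endpoint and credential; substitute a real image-tagging service.
TAGGER_URL = "https://api.example-vision.com/v1/tag"
API_KEY = "YOUR_API_KEY"

def tag_image(path, max_tags=10):
    """Return up to max_tags labels for the image at `path`, best first."""
    with open(path, "rb") as f:
        payload = {"image": base64.b64encode(f.read()).decode("ascii")}
    resp = requests.post(
        TAGGER_URL,
        json=payload,
        headers={"Authorization": "Bearer " + API_KEY},
        timeout=30,
    )
    resp.raise_for_status()
    # Assumed response shape: {"tags": [{"label": ..., "confidence": ...}, ...]}
    tags = sorted(resp.json().get("tags", []),
                  key=lambda t: t.get("confidence", 0.0), reverse=True)
    return [t["label"] for t in tags[:max_tags]]

if __name__ == "__main__":
    print(tag_image("profile_photo.jpg"))

In the study, a list of tags like the one returned here would be shown alongside a human-written description of the same photo, and participants would judge which of the two descriptions is "more fair" in a dating-site context.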




Published In

UMAP '19: Proceedings of the 27th ACM Conference on User Modeling, Adaptation and Personalization
June 2019, 377 pages
ISBN: 9781450360210
DOI: 10.1145/3320435
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].


Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. algorithmic bias
  2. computer vision
  3. fairness
  4. image analysis

Qualifiers

  • Research-article


Acceptance Rates

UMAP '19 Paper Acceptance Rate: 30 of 122 submissions, 25%
Overall Acceptance Rate: 162 of 633 submissions, 26%


