DOI: 10.1145/3604915.3608841
Short paper · Open access

Group Fairness for Content Creators: the Role of Human and Algorithmic Biases under Popularity-based Recommendations

Published: 14 September 2023

Abstract

The Creator Economy faces concerning levels of unfairness. Content creators (CCs) publicly accuse platforms of purposefully reducing the visibility of their content based on protected attributes, while platforms place the blame on viewer biases. Meanwhile, prior work warns about the “rich-get-richer” effect perpetuated by existing popularity biases in recommender systems: any initial advantage in visibility is likely to be exacerbated over time. What remains unclear is how platform- and viewer-side biases based on protected attributes interact and contribute to the observed inequality in the context of popularity-biased recommender systems. The difficulty of the question lies in the complexity and opacity of the system. To overcome this challenge, we design a simple agent-based model (ABM) that unifies the platform systems which allocate the visibility of CCs (e.g., recommender systems, moderation) into a single popularity-based function, which we call the visibility allocation system (VAS). Through simulations, we find that although viewers’ homophilic biases alone create inequalities, small levels of additional bias in the VAS are more harmful. From the perspective of interventions, our results suggest that (a) attempts to reduce attribute biases in moderation and recommendations should precede those reducing viewers’ homophilic tendencies, (b) decreasing the popularity bias in the VAS reduces but does not eliminate inequalities, (c) boosting the visibility of protected CCs to overcome viewers’ homophily with respect to one fairness metric is unlikely to produce fair outcomes with respect to all metrics, and (d) the process is also unfair to viewers, and this unfairness could be overcome through the same interventions. More generally, this work demonstrates the potential of ABMs for understanding the causes and effects of biases and interventions within complex sociotechnical systems.
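To make the setup concrete, the following is a minimal toy sketch of the kind of dynamic the abstract describes, not the paper's actual model: all names, parameter values, and functional forms (the popularity exponent, the multiplicative attribute penalty in the VAS, the homophily probability) are assumptions for illustration only.

```python
import random

random.seed(0)

N_CREATORS, N_VIEWERS, STEPS = 50, 500, 200
BETA = 1.0          # popularity exponent of the VAS ("rich-get-richer" strength)
VAS_BIAS = 0.9      # hypothetical visibility penalty for protected creators
HOMOPHILY = 0.7     # chance a viewer restricts attention to same-attribute creators

# Creators alternate between the unprotected (0) and protected (1) group.
creators = [{"attr": i % 2, "followers": 1} for i in range(N_CREATORS)]
viewers = [{"attr": random.random() < 0.5} for _ in range(N_VIEWERS)]

def visibility(c):
    """VAS sketch: visibility grows with popularity, scaled down for group 1."""
    w = c["followers"] ** BETA
    return w * VAS_BIAS if c["attr"] == 1 else w

for _ in range(STEPS):
    viewer = random.choice(viewers)
    # Homophilic viewers only consider creators sharing their attribute.
    pool = creators
    if random.random() < HOMOPHILY:
        pool = [c for c in creators if c["attr"] == int(viewer["attr"])]
    # The VAS allocates attention proportionally to (biased) visibility.
    chosen = random.choices(pool, weights=[visibility(c) for c in pool])[0]
    chosen["followers"] += 1

by_group = [sum(c["followers"] for c in creators if c["attr"] == a) for a in (0, 1)]
print("followers by group (0 = unprotected, 1 = protected):", by_group)
```

Even in this crude sketch, raising `BETA` amplifies early advantages while `VAS_BIAS < 1` compounds with homophily over time, which is the interaction the paper studies systematically.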


Cited By

  • RobustRecSys @ RecSys2024: Design, Evaluation and Deployment of Robust Recommender Systems. In Proceedings of the 18th ACM Conference on Recommender Systems (2024), 1265–1269. https://doi.org/10.1145/3640457.3687106
  • FAER: Fairness-Aware Event-Participant Recommendation in Event-Based Social Networks. IEEE Transactions on Big Data 10, 5 (2024), 655–668. https://doi.org/10.1109/TBDATA.2024.3372409

Published In

RecSys '23: Proceedings of the 17th ACM Conference on Recommender Systems
September 2023
1406 pages
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike International 4.0 License.

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. agent-based modeling
  2. algorithmic fairness
  3. network formation
  4. popularity bias

Qualifiers

  • Short-paper
  • Research
  • Refereed limited

Funding Sources

  • NCCR Automation

Conference

RecSys '23: Seventeenth ACM Conference on Recommender Systems
September 18 - 22, 2023
Singapore, Singapore

Acceptance Rates

Overall acceptance rate: 254 of 1,295 submissions (20%)

Article Metrics

  • Downloads (last 12 months): 547
  • Downloads (last 6 weeks): 63

Reflects downloads up to 25 January 2025.

