DOI: 10.1145/3442381.3450077
Research article
Open access

Auditing for Discrimination in Algorithms Delivering Job Ads

Published: 03 June 2021

Abstract

Ad platforms such as Facebook, Google, and LinkedIn promise value for advertisers through their targeted advertising. However, multiple studies have shown that ad delivery on such platforms can be skewed by gender or race due to hidden algorithmic optimization by the platforms, even when the advertisers do not request it. Building on prior work measuring skew in ad delivery, we develop a new methodology for black-box auditing of algorithms for discrimination in the delivery of job advertisements. Our first contribution is to distinguish skew in ad delivery due to protected categories such as gender or race from skew due to differences in qualification among people in the targeted audience. This distinction is important in U.S. law, where ads may be targeted based on qualifications, but not on protected categories. Second, we develop an auditing methodology that separates skew explainable by differences in qualifications from skew due to other factors, such as the ad platform’s optimization for engagement or training its algorithms on biased data. Our method controls for job qualification by comparing the delivery of two concurrent ads for similar jobs, placed for a pair of companies with different de facto gender distributions of employees. We describe the careful statistical tests that establish evidence of non-qualification skew in the results. Third, we apply our proposed methodology to two prominent platforms for targeted job ads: Facebook and LinkedIn. We confirm skew by gender in ad delivery on Facebook and show that it cannot be justified by differences in qualifications. We do not find such skew in ad delivery on LinkedIn. Finally, we suggest improvements to ad platform practices that could make external auditing of their algorithms in the public interest more feasible and accurate.
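The kind of statistical comparison the abstract describes can be illustrated with a simple two-proportion z-test on the gender breakdowns of two concurrent ads' delivery audiences. This is only a sketch: the paper's actual tests are more careful, and the function name and example counts below are hypothetical, not taken from the paper's data.

```python
from math import sqrt
from statistics import NormalDist


def two_proportion_z_test(men_a: int, total_a: int, men_b: int, total_b: int):
    """Test whether the fraction of men reached by ad A differs from ad B.

    Returns the z statistic and the two-sided p-value. A small p-value for
    two ads targeting identical audiences suggests the platform's delivery
    optimization, not the targeting, produced the gender skew.
    """
    p_a = men_a / total_a
    p_b = men_b / total_b
    # Pooled proportion under the null hypothesis of equal delivery rates.
    p_pool = (men_a + men_b) / (total_a + total_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value


# Hypothetical delivery statistics for two concurrent, similarly targeted job ads.
z, p = two_proportion_z_test(700, 1000, 500, 1000)
```

With these illustrative counts (70% vs. 50% male delivery over 1,000 impressions each), the test rejects the null hypothesis of equal delivery at any conventional significance level.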



Published In

WWW '21: Proceedings of the Web Conference 2021
April 2021, 4054 pages
ISBN: 9781450383127
DOI: 10.1145/3442381
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

Publisher

Association for Computing Machinery, New York, NY, United States

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

WWW '21: The Web Conference 2021
April 19–23, 2021, Ljubljana, Slovenia

Acceptance Rates

Overall Acceptance Rate 1,899 of 8,196 submissions, 23%

Article Metrics

  • Downloads (Last 12 months)1,032
  • Downloads (Last 6 weeks)138
Reflects downloads up to 03 Oct 2024

Cited By

  • Empowerment of Women Through Education and Training in Artificial Intelligence. In AI Tools and Applications for Women’s Safety, ch. 8, pp. 132–149. DOI: 10.4018/979-8-3693-1435-7.ch008 (19 Jan 2024)
  • Platformization of Inequality: Gender and Race in Digital Labor Platforms. Proceedings of the ACM on Human-Computer Interaction 8(CSCW1), pp. 1–22. DOI: 10.1145/3637385 (26 Apr 2024)
  • Auditing for Racial Discrimination in the Delivery of Education Ads. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, pp. 2348–2361. DOI: 10.1145/3630106.3659041 (3 Jun 2024)
  • Fairness in Online Ad Delivery. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, pp. 1418–1432. DOI: 10.1145/3630106.3658980 (3 Jun 2024)
  • The Dark Side of Dataset Scaling: Evaluating Racial Classification in Multimodal Models. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, pp. 1229–1244. DOI: 10.1145/3630106.3658968 (3 Jun 2024)
  • Explanations, Fairness, and Appropriate Reliance in Human-AI Decision-Making. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, pp. 1–18. DOI: 10.1145/3613904.3642621 (11 May 2024)
  • “I know even if you don’t tell me”: Understanding Users’ Privacy Preferences Regarding AI-based Inferences of Sensitive Information for Personalization. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, pp. 1–21. DOI: 10.1145/3613904.3642180 (11 May 2024)
  • Sourcing algorithms: Rethinking fairness in hiring in the era of algorithmic recruitment. International Journal of Selection and Assessment. DOI: 10.1111/ijsa.12499 (2 Sep 2024)
  • AI auditing: The Broken Bus on the Road to AI Accountability. In 2024 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML), pp. 612–643. DOI: 10.1109/SaTML59370.2024.00037 (9 Apr 2024)
  • Mutual Information-Based Counterfactuals for Fair Graph Neural Networks. In 2024 International Joint Conference on Neural Networks (IJCNN), pp. 1–8. DOI: 10.1109/IJCNN60899.2024.10650076 (30 Jun 2024)
