DOI: 10.1145/3461702.3462540 · AIES Conference Proceedings
Research article · Open access

Fairness for Unobserved Characteristics: Insights from Technological Impacts on Queer Communities

Published: 30 July 2021

Abstract

Advances in algorithmic fairness have largely omitted sexual orientation and gender identity. We explore queer concerns in privacy, censorship, language, online safety, health, and employment to study the positive and negative effects of artificial intelligence on queer communities. These issues underscore the need for new directions in fairness research that take into account a multiplicity of considerations, from privacy preservation, context sensitivity and process fairness, to an awareness of sociotechnical impact and the increasingly important role of inclusive and participatory research processes. Most current approaches for algorithmic fairness assume that the target characteristics for fairness---frequently, race and legal gender---can be observed or recorded. Sexual orientation and gender identity are prototypical instances of unobserved characteristics, which are frequently missing, unknown or fundamentally unmeasurable. This paper highlights the importance of developing new approaches for algorithmic fairness that break away from the prevailing assumption of observed characteristics.
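As an illustrative sketch (not code from the paper), the observed-characteristics assumption the abstract describes is visible in how standard group-fairness metrics are computed: a demographic parity gap, for example, cannot even be evaluated without a recorded group label for every individual. The function and data below are hypothetical, introduced only to make that dependency concrete.

```python
# Hypothetical sketch: group-fairness metrics presuppose an observed
# protected attribute for every individual -- exactly the assumption the
# paper questions for characteristics like sexual orientation.
def demographic_parity_gap(predictions, groups):
    """Absolute difference in positive-prediction rates between groups 0 and 1.

    `groups` must be fully observed; for unobserved characteristics this
    input is missing, unknown, or fundamentally unmeasurable.
    """
    def positive_rate(g):
        members = [p for p, grp in zip(predictions, groups) if grp == g]
        return sum(members) / max(1, len(members))
    return abs(positive_rate(0) - positive_rate(1))

preds = [1, 1, 0, 0, 0, 1]
groups = [0, 0, 0, 1, 1, 1]  # observed group labels: unavailable for unobserved traits
print(demographic_parity_gap(preds, groups))  # prints 0.3333333333333333
```

If the `groups` column is absent or unreliable, the metric is undefined, which motivates the paper's call for fairness approaches that do not rely on observed characteristics.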

References

[1]
Mohsen Abbasi, Sorelle A Friedler, Carlos Scheidegger, and Suresh Venkatasubramanian. 2019. Fairness in representation: Quantifying stereotyping as a representational harm. In Proceedings of the 2019 SIAM International Conference on Data Mining. SIAM, 801--809.
[2]
Roberto L Abreu and Maureen C Kenny. 2018. Cyberbullying and LGBTQ youth: A systematic literature review and recommendations for prevention and intervention. Journal of Child & Adolescent Trauma, Vol. 11, 1 (2018), 81--97.
[3]
William Agnew, Samy Bengio, Os Keyes, Margaret Mitchell, Shira Mitchell, Vinodkumar Prabhakaran, David Vazquez, and Raphael Gontijo Lopes. 2018. Queer in AI 2018 Workshop. https://queerai.github.io/QueerInAI/QinAIatNeurIPS.html. Accessed: 2020-09--10.
[4]
William Agnew, Natalia Bilenko, and Raphael Gontijo Lopes. 2019. Queer in AI 2019 Workshop. https://sites.google.com/corp/view/queer-in-ai/neurips-2019/. Accessed: 2020-09--10.
[5]
Mohammad Al-Rubaie and J Morris Chang. 2019. Privacy-preserving machine learning: Threats and solutions. IEEE Security & Privacy, Vol. 17, 2 (2019), 49--58.
[6]
Amnesty International. n.d. Troll Patrol. https://decoders.amnesty.org/projects/troll-patrol. Accessed: 2020-09--10.
[7]
Amnesty International Canada. 2015. LGBTI Rights. https://www.amnesty.ca/our-work/issues/lgbti-rights. Accessed: 2020-09--10.
[8]
McKane Andrus, Elena Spitzer, Jeffrey Brown, and Alice Xiang. 2020. `What we can't measure, we can't understand': Challenges to demographic data procurement in the pursuit of fairness. arXiv preprint arXiv:2011.02282 (2020), 1--12.
[9]
Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner. 2016. Machine bias: There's software used across the country to predict future criminals. And it's biased against blacks. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
[10]
Sherry R Arnstein. 1969. A ladder of citizen participation. Journal of the American Institute of Planners, Vol. 35, 4 (1969), 216--224.
[11]
Florence Ashley. 2019. Gatekeeping hormone replacement therapy for transgender patients is dehumanising. Journal of Medical Ethics, Vol. 45, 7 (2019), 480--482.
[12]
Imran Awan. 2014. Islamophobia and Twitter: A typology of online hate against Muslims on social media. Policy & Internet, Vol. 6, 2 (2014), 133--150.
[13]
Kimberly F Balsam, Yamile Molina, Blair Beadnell, Jane Simoni, and Karina Walters. 2011. Measuring multiple minority stress: the LGBT People of Color Microaggressions Scale. Cultural Diversity and Ethnic Minority Psychology, Vol. 17, 2 (2011), 163.
[14]
David Bamman, Brendan O'Connor, and Noah Smith. 2012. Censorship and deletion practices in Chinese social media. First Monday (2012).
[15]
Solon Barocas, Moritz Hardt, and Arvind Narayanan. 2019. Fairness and Machine Learning. http://www.fairmlbook.org.
[16]
Tom L Beauchamp and James F Childress. 2001. Principles of Biomedical Ethics .Oxford University Press, USA.
[17]
Tom Begley, Tobias Schwedes, Christopher Frye, and Ilya Feige. 2020. Explainability for fair machine learning. arXiv preprint arXiv:2010.07389 (2020), 1--15.
[18]
Joel Bellenson. 2019. 122 Shades of Gray. https://www.geneplaza.com/app-store/72/preview. Accessed: 2020-09--10.
[19]
Nikhil Bhattasali and Esha Maiti. 2015. Machine `gaydar': Using Facebook profiles to predict sexual orientation.
[20]
Reuben Binns. 2018. Fairness in machine learning: Lessons from political philosophy. In Conference on Fairness, Accountability and Transparency. 149--159.
[21]
Kuteesa R Bisaso, Godwin T Anguzu, Susan A Karungi, Agnes Kiragga, and Barbara Castelnuovo. 2017. A survey of machine learning applications in HIV clinical research and care. Computers in Biology and Medicine, Vol. 91 (2017), 366--371.
[22]
Kuteesa R Bisaso, Susan A Karungi, Agnes Kiragga, Jackson K Mukonzo, and Barbara Castelnuovo. 2018. A comparative study of logistic regression based machine learning techniques for prediction of early virological suppression in antiretroviral initiating HIV patients. BMC Medical Informatics and Decision Making, Vol. 18, 1 (2018), 1--10.
[23]
Raphaël Bize, Erika Volkmar, Sylvie Berrut, Denise Medico, Hugues Balthasar, Patrick Bodenmann, and Harvey J Makadon. 2011. Access to quality primary care for LGBT people. Revue Medicale Suisse, Vol. 7, 307 (2011), 1712--1717.
[24]
Ragnhildur I Bjarnadottir, Walter Bockting, Sunmoo Yoon, and Dawn W Dowding. 2019. Nurse documentation of sexual orientation and gender identity in home healthcare: A text mining study. CIN: Computers, Informatics, Nursing, Vol. 37, 4 (2019), 213--221.
[25]
Miranda Bogen, Aaron Rieke, and Shazeda Ahmed. 2020. Awareness in practice: Tensions in access to sensitive attribute data for antidiscrimination. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. 492--500.
[26]
Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In Advances in Neural Information Processing Systems. 4349--4357.
[27]
Keith Bonawitz, Vladimir Ivanov, Ben Kreuter, Antonio Marcedone, H Brendan McMahan, Sarvar Patel, Daniel Ramage, Aaron Segal, and Karn Seth. 2017. Practical secure aggregation for privacy-preserving machine learning. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. 1175--1191.
[28]
Michael J Bosia, Sandra M McEvoy, and Momin Rahman. 2020. The Oxford Handbook of Global LGBT and Sexual Diversity Politics .Oxford University Press.
[29]
Lisa Bowleg. 2020. We're not all in this together: On COVID-19, intersectionality, and structural inequality. American Journal of Public Health, Vol. 110, 7 (2020), 917.
[30]
Brook. n.d. Gender: A few definitions. https://www.brook.org.uk/your-life/gender-a-few-definitions/. Accessed: 2020--10-01.
[31]
Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. arXiv preprint arXiv:2005.14165 (2020), 1--75.
[32]
Joy Buolamwini and Timnit Gebru. 2018. Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on Fairness, Accountability and Transparency. 77--91.
[33]
Ian Burn. 2018. Not all laws are created equal: Legal differences in state non-discrimination laws and the impact of LGBT employment protections. Journal of Labor Research, Vol. 39, 4 (2018), 462--497.
[34]
Joseph Burridge. 2004. `I am not homophobic but...': Disclaiming in discourse resisting repeal of Section 28. Sexualities, Vol. 7, 3 (2004), 327--344.
[35]
Judith Butler. 2011. Bodies That Matter: On the Discursive Limits of Sex .Taylor & Francis.
[36]
Tapabrata Chakraborti, Arijit Patra, and J Alison Noble. 2020. Contrastive fairness in machine learning. IEEE Letters of the Computer Society, Vol. 3, 2 (2020), 38--41.
[37]
Hongyan Chang and Reza Shokri. 2020. On the privacy risks of algorithmic fairness. arXiv preprint arXiv:2011.03731 (2020), 1--14.
[38]
Bobby Chesney and Danielle Citron. 2019. Deep fakes: A looming challenge for privacy, democracy, and national security. Calif. L. Rev., Vol. 107 (2019), 1753.
[39]
Jill M Chonody, Scott Edward Rutledge, and Scott Smith. 2012. `That's so gay': Language use and antigay bias among heterosexual college students. Journal of Gay & Lesbian Social Services, Vol. 24, 3 (2012), 241--259.
[40]
Alexandra Chouldechova, Diana Benavides-Prado, Oleksandr Fialko, and Rhema Vaithianathan. 2018. A case study of algorithm-assisted decision making in child maltreatment hotline screening decisions. In Conference on Fairness, Accountability and Transparency. 134--148.
[41]
Jennifer Cobbe. 2019. Algorithmic censorship on social platforms: Power, legitimacy, and resistance. Legitimacy, and Resistance (2019).
[42]
Sarah Cook. 2017. The Battle for China's Spirit: Religious Revival, Repression, and Resistance under Xi Jinping .Rowman & Littlefield.
[43]
Sam Corbett-Davies and Sharad Goel. 2018. The measure and mismeasure of fairness: A critical review of fair machine learning. arXiv preprint arXiv:1808.00023 (2018), 1--25.
[44]
Marta R Costa-jussà. 2019. An analysis of gender bias studies in natural language processing. Nature Machine Intelligence, Vol. 1, 11 (2019), 495--496.
[45]
Elliot Creager, Jörn-Henrik Jacobsen, and Richard Zemel. 2020. Exchanging lessons between algorithmic fairness and domain generalization. arXiv preprint arXiv:2010.07249 (2020).
[46]
Kimberlé Crenshaw. 1989. Demarginalizing the intersection of race and sex: A black feminist critique of antidiscrimination doctrine, feminist theory and antiracist politics. University of Chicago Legal Forum (1989), 139.
[47]
J Crocker, B Major, and C Steele. 1998. Social stigma. In The Handbook of Social Psychology, Daniel Todd Gilbert, Susan T Fiske, and Gardner Lindzey (Eds.), Vol. 1. Oxford University Press.
[48]
Natasha Culzac. 2014. Egypt's police ?using social media and apps like Grindr to trap gay people'. https://www.independent.co.uk/news/world/africa/egypt-s-police-using-social-media-and-apps-grindr-trap-gay-people-9738515.html.
[49]
Christina DeJong and Eric Long. 2014. The death penalty as genocide: The persecution of `homosexuals' in Uganda. In Handbook of LGBT Communities, Crime, and Justice. Springer, 339--362.
[50]
Laure Delisle, Alfredo Kalaitzis, Krzysztof Majewski, Archy de Berker, Milena Marin, and Julien Cornebise. 2019. A large-scale crowdsourced analysis of abuse against women journalists and politicians on Twitter. arXiv preprint arXiv:1902.03093 (2019), 1--13.
[51]
Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. 2012. Fairness through awareness. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference. 214--226.
[52]
Wayne R Dynes. 2014. The Homophobic Mind .Lulu.com.
[53]
Jesse M Ehrenfeld, Keanan Gabriel Gottlieb, Lauren Brittany Beach, Shelby E Monahan, and Daniel Fabbri. 2019. Development of a natural language processing algorithm to identify and evaluate transgender patients in electronic health record systems. Ethnicity & Disease, Vol. 29, Suppl 2 (2019), 441.
[54]
Tom Embury-Dennis. 2020. Bullied and blackmailed: Gay men in Morocco falling victims to outing campaign sparked by Instagram model. https://www.independent.co.uk/news/world/africa/gay-men-morocco-dating-apps-grindr-instagram-sofia-taloni-a9486386.html. Accessed: 2020-09--10.
[55]
Kawin Ethayarajh. 2020. Is your classifier actually biased? Measuring fairness under uncertainty with Bernstein bounds. arXiv preprint arXiv:2004.12332 (2020), 1--6.
[56]
European Commission. n.d. What personal data is considered sensitive? https://ec.europa.eu/info/law/law-topic/data-protection/reform/rules-business-and-organisations/legal-grounds-processing-data/sensitive-data/what-personal-data-considered-sensitive_en. Accessed: 2020--10-07.
[57]
Drew Fudenberg and David K Levine. 2012. Fairness, risk preferences and independence: Impossibility theorems. Journal of Economic Behavior & Organization, Vol. 81, 2 (2012), 606--612.
[58]
Andrea Ganna, Karin JH Verweij, Michel G Nivard, Robert Maier, Robbee Wedow, Alexander S Busch, Abdel Abdellaoui, Shengru Guo, J Fah Sathirapongsasuti, Paul Lichtenstein, et al. 2019. Large-scale GWAS reveals insights into the genetic architecture of same-sex sexual behavior. Science, Vol. 365, 6456 (2019), eaat7693.
[59]
Andrew Gelman, G Marrson, and Daniel Simpson. 2018. Gaydar and the fallacy of objective measurement. Unpublished manuscript. Retrieved from http://www.stat.columbia.edu/gelman/research/unpublished/gaydar2.pdf.
[60]
Oguzhan Gencoglu. 2020. Cyberbullying detection with fairness constraints. arXiv preprint arXiv:2005.06625 (2020), 1--11.
[61]
Alessandra Gomes, Dennys Antonialli, and Thiago Dias Oliva. 2019. Drag queens and artificial intelligence: Should computers decide what is `toxic' on the internet? https://www.internetlab.org.br/en/freedom-of-expression/drag-queens-and-artificial-intelligence-should-computers-decide-what-is-toxic-on-the-internet/. Accessed: 2020-09--10.
[62]
Vincent Grari, Sylvain Lamprier, and Marcin Detyniecki. 2020. Adversarial learning for counterfactual fairness. arXiv preprint arXiv:2008.13122 (2020), 1--11.
[63]
Maya Gupta, Andrew Cotter, Mahdi Milani Fard, and Serena Wang. 2018. Proxy fairness. arXiv preprint arXiv:1806.11212 (2018), 1--12.
[64]
Foad Hamidi, Morgan Klaus Scheuerman, and Stacy M Branham. 2018. Gender recognition or gender reductionism? The social implications of embedded gender recognition systems. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. 1--13.
[65]
Alex Hanna, Emily Denton, Andrew Smart, and Jamila Smith-Loud. 2020. Towards a critical race methodology in algorithmic fairness. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. 501--512.
[66]
Alexander Hans, Daniel Schneegaß, Anton Maximilian Sch"afer, and Steffen Udluft. 2008. Safe exploration for reinforcement learning. In ESANN. 143--148.
[67]
Tatsunori Hashimoto, Megha Srivastava, Hongseok Namkoong, and Percy Liang. 2018. Fairness without demographics in repeated loss minimization. In International Conference on Machine Learning. PMLR, 1929--1938.
[68]
Hoda Heidari and Andreas Krause. 2018. Preventing disparate treatment in sequential decision making. In International Joint Conference on Artificial Intelligence. 2248--2254.
[69]
Dan Hendrycks, Collin Burns, Steven Basart, Andrew Critch, Jerry Li, Dawn Song, and Jacob Steinhardt. 2020 a. Aligning AI with shared human values. arXiv preprint arXiv:2008.02275 (2020), 1--29.
[70]
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2020 b. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300 (2020), 1--27.
[71]
Gregory M Herek, Douglas C Kimmel, Hortensia Amaro, and Gary B Melton. 1991. Avoiding heterosexist bias in psychological research. American Psychologist, Vol. 46, 9 (1991), 957.
[72]
Leora Hoshall. 2012. Afraid of who you are: No promo homo laws in public school sex education. Tex. J. Women & L., Vol. 22 (2012), 219.
[73]
Kimberly A Houser. 2019. Can AI solve the diversity problem in the tech ondustry: Mitigating noise and bias in employment decision-making. Stan. Tech. L. Rev., Vol. 22 (2019), 290.
[74]
Ning Hsieh and Matt Ruther. 2016. Sexual minority health and health risk factors: Intersection effects of gender, race, and sexual identity. American Journal of Preventive Medicine, Vol. 50, 6 (2016), 746--755.
[75]
Human Rights Watch. 2018. US: LGBT people face healthcare barriers. https://www.hrw.org/news/2018/07/23/us-lgbt-people-face-healthcare-barriers. Accessed: 2020-09--10.
[76]
Human Rights Watch. n.d. Maps of anti-LGBT laws country by country. http://internap.hrw.org/features/features/lgbt_laws/. Accessed: 2020--10-06.
[77]
Intertech LGBT+ Diversity Forum. n.d. Intertech LGBT+ Diversity Forum. https://intertechlgbt.interests.me/. Accessed: 2020--10-07.
[78]
Abigail Z Jacobs and Hanna Wallach. 2019. Measurement and fairness. arXiv preprint arXiv:1912.05511 (2019), 1--11.
[79]
Matthew Jagielski, Michael Kearns, Jieming Mao, Alina Oprea, Aaron Roth, Saeed Sharifi-Malvajerdi, and Jonathan Ullman. 2019. Differentially private fair learning. In International Conference on Machine Learning. PMLR, 3000--3008.
[80]
Emma A Jane. 2020. Online abuse and harassment. The International Encyclopedia of Gender, Media, and Communication (2020), 1--16.
[81]
Bargav Jayaraman and David Evans. 2019. Evaluating differentially private machine learning in practice. In 28th USENIX Security Symposium (USENIX Security 19). 1895--1912.
[82]
Disi Ji, Padhraic Smyth, and Mark Steyvers. 2020. Can I trust my fairness metric? Assessing fairness with unlabeled data and Bayesian inference. Advances in Neural Information Processing Systems, Vol. 33 (2020).
[83]
Ti John, William Agnew, Alex Markham, Anja Meunier, Manu Saraswat, Andrew McNamara, and Raphael Gontijo Lopes. 2020. Queer in AI 2020 Workshop. https://sites.google.com/corp/view/queer-in-ai/icml-2020. Accessed: 2020-09--10.
[84]
Christopher Jung, Michael Kearns, Seth Neel, Aaron Roth, Logan Stapleton, and Zhiwei Steven Wu. 2019. Eliciting and enforcing subjective individual fairness. arXiv preprint arXiv:1905.10660 (2019), 1--33.
[85]
Shanna K Kattari, Miranda Olzman, and Michele D Hanna. 2018. `You look fine!' Ableist experiences by people with invisible disabilities. Affilia, Vol. 33, 4 (2018), 477--492.
[86]
Michael Kearns, Seth Neel, Aaron Roth, and Zhiwei Steven Wu. 2018. Preventing fairness gerrymandering: Auditing and learning for subgroup fairness. In International Conference on Machine Learning. PMLR, 2564--2572.
[87]
Os Keyes. 2018. The misgendering machines: Trans/HCI implications of automatic gender recognition. Proceedings of the ACM on Human-Computer Interaction, Vol. 2, CSCW (2018), 1--22.
[88]
Aparup Khatua, Erik Cambria, Kuntal Ghosh, Nabendu Chaki, and Apalak Khatua. 2019. Tweeting in support of LGBT? A deep learning approach. In Proceedings of the ACM India Joint International Conference on Data Science and Management of Data. 342--345.
[89]
Michael Kim, Omer Reingold, and Guy Rothblum. 2018. Fairness through computationally-bounded awareness. Advances in Neural Information Processing Systems, Vol. 31 (2018), 4842--4852.
[90]
Tania King, Zoe Aitken, Allison Milner, Eric Emerson, Naomi Priest, Amalia Karahalios, Anne Kavanagh, and Tony Blakely. 2018. To what extent is the association between disability and mental health in adolescents mediated by bullying? A causal mediation analysis. International Journal of Epidemiology, Vol. 47, 5 (2018), 1402--1413.
[91]
Alexander Kondakov. 2019. The censorship `propaganda' legislation in Russia. State-Sponsored Homophobia (2019).
[92]
Victoria Krakovna, Jonathan Uesato, Vladimir Mikulik, Matthew Rahtz, Tom Everitt, Ramana Kumar, Zac Kenton, Jan Leike, and Shane Legg. 2020. Specification gaming: The flip side of AI ingenuity. https://deepmind.com/blog/article/Specification-gamingthe-flip-side-of-AI-ingenuity.
[93]
Michael W Kraus, Brittany Torrez, Jun Won Park, and Fariba Ghayebi. 2019. Evidence for the reproduction of social class in brief speech. Proceedings of the National Academy of Sciences, Vol. 116, 46 (2019), 22998--23003.
[94]
Matt J Kusner, Joshua Loftus, Chris Russell, and Ricardo Silva. 2017. Counterfactual fairness. In Advances in Neural Information Processing Systems. 4066--4076.
[95]
Preethi Lahoti, Alex Beutel, Jilin Chen, Kang Lee, Flavien Prost, Nithum Thain, Xuezhi Wang, and Ed H Chi. 2020. Fairness without demographics through adversarially reweighted learning. arXiv preprint arXiv:2006.13114 (2020), 1--15.
[96]
Victoria R LeCroy and Joshua S Rodefer. 2019. The influence of job candidate LGBT association on hiring decisions. North American Journal of Psychology, Vol. 21, 2 (2019).
[97]
Jong Soo Lee, Elijah Paintsil, Vivek Gopalakrishnan, and Musie Ghebremichael. 2019. A comparison of machine learning techniques for classification of HIV patients with antiretroviral therapy-induced mitochondrial toxicity from those without mitochondrial toxicity. BMC Medical Research Methodology, Vol. 19, 1 (2019), 1--10.
[98]
Lesbians Who Tech. n.d. Lesbians Who Tech. https://lesbianswhotech.org/. Accessed: 2020--10-07.
[99]
LGBT Technology Institute. n.d. LGBT Technology Institute. https://www.lgbttech.org/. Accessed: 2020--10-07.
[100]
Danielle Li, Lindsey R Raymond, and Peter Bergman. 2020. Hiring as exploration. Technical Report. National Bureau of Economic Research.
[101]
Chen Liang, Dena Abbott, Y Alicia Hong, Mahboubeh Madadi, and Amelia White. 2019. Clustering help-seeking behaviors in LGBT online communities: A prospective trial. In International Conference on Human-Computer Interaction. Springer, 345--355.
[102]
William M Liu and Saba R Ali. 2008. Social class and classism: Understanding the psychological impact of poverty and inequality. Handbook of Counseling Psychology, Vol. 4 (2008), 159--175.
[103]
Yujia Liu, Weiming Zhang, and Nenghai Yu. 2017. Protecting privacy in shared photos via adversarial examples based stealth. Security and Communication Networks, Vol. 2017 (2017).
[104]
Yi-Ling Liu. 2020. How a dating app helped a generation of Chinese come out of the closet. https://www.nytimes.com/2020/03/05/magazine/blued-china-gay-dating-app.html.
[105]
Shen Lu and Katie Hunt. 2016. China bans same-sex romance from TV screens. http://www.https://edition.cnn.com/2016/03/03/asia/china-bans-same-sex-dramas/index.html. Accessed: 2020-09--10.
[106]
Julia L Marcus, Leo B Hurley, Douglas S Krakower, Stacey Alexeeff, Michael J Silverberg, and Jonathan E Volk. 2019. Use of electronic health record data and machine learning to identify candidates for HIV pre-exposure prophylaxis: A modelling study. The Lancet HIV, Vol. 6, 10 (2019), e688--e695.
[107]
Donald Martin Jr., Vinod Prabhakaran, Jill Kuhlberg, Andrew Smart, and William S Isaac. 2020. Participatory problem formulation for fairer machine learning through community based system dynamics. arXiv preprint arXiv:2005.07572 (2020), 1--6.
[108]
Vickie M Mays and Susan D Cochran. 2001. Mental health correlates of perceived discrimination among lesbian, gay, and bisexual adults in the United States. American Journal of Public Health, Vol. 91, 11 (2001), 1869--1876.
[109]
Joseph McCormick. 2015. WARNING: These Grindr profiles are actually robots trying to steal your info. https://www.pinknews.co.uk/2015/08/11/warning-these-grindr-profiles-are-actually-robots-trying-to-steal-your-info/. Accessed: 2020-09--10.
[110]
Melissa D McCradden, Shalmali Joshi, Mjaye Mazwi, and James A Anderson. 2020. Ethical limitations of algorithmic fairness solutions in health care machine learning. The Lancet Digital Health, Vol. 2, 5 (2020), e221--e223.
[111]
Elizabeth McDermott. 2015. Asking for help online: Lesbian, gay, bisexual and trans youth, self-harm and articulating the `failed' self. Health, Vol. 19, 6 (2015), 561--577.
[112]
Mental Health Foundation. 2020. Mental health statistics: LGBT people. https://www.mentalhealth.org.uk/statistics/mental-health-statistics-lgbt-people. Accessed: 2020-09--10.
[113]
Ilan H Meyer. 1995. Minority stress and mental health in gay men. Journal of Health and Social Behavior (1995), 38--56.
[114]
Ilan H Meyer. 2003. Prejudice, social stress, and mental health in lesbian, gay, and bisexual populations: Conceptual issues and research evidence. Psychological Bulletin, Vol. 129, 5 (2003), 674.
[115]
Eva Moore, Amy Wisniewski, and Adrian Dobs. 2003. Endocrine treatment of transsexual people: A review of treatment regimens, outcomes, and adverse effects. The Journal of Clinical Endocrinology & Metabolism, Vol. 88, 8 (2003), 3467--3473.
[116]
G Cristina Mora. 2014. Making Hispanics: How activists, bureaucrats, and media constructed a new American. University of Chicago Press.
[117]
Kevin L Nadal, Marie-Anne Issa, Jayleen Leon, Vanessa Meterko, Michelle Wideman, and Yinglee Wong. 2011. Sexual orientation microaggressions: `Death by a thousand cuts' for lesbian, gay, and bisexual youth. Journal of LGBT Youth, Vol. 8, 3 (2011), 234--259.
[118]
Milad Nasr, Reza Shokri, and Amir Houmansadr. 2018. Machine learning with membership privacy using adversarial regularization. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. 634--646.
[119]
National Health Service. n.d. Gender dysphoria. https://www.nhs.uk/conditions/gender-dysphoria/. Accessed: 2020-09--10.
[120]
Kathryn O'Neill. 2020. Health vulnerabilities to COVID-19 among LGBT adults in California. https://escholarship.org/uc/item/8hc4z2gb. Accessed: 2020-09--10.
[121]
Nancy Ordover. 2003. American Eugenics: Race, Queer Anatomy, and the Science of Nationalism .U of Minnesota Press.
[122]
Out in Tech. n.d. Out in Tech. https://outintech.com/. Accessed: 2020--10-07.
[123]
Mike C Parent, Cirleen DeBlaere, and Bonnie Moradi. 2013. Approaches to research on intersectionality: Perspectives on gender, LGBT, and racial/ethnic identities. Sex Roles, Vol. 68, 11--12 (2013), 639--645.
[124]
Ji Ho Park, Jamin Shin, and Pascale Fung. 2018. Reducing gender bias in abusive language detection. arXiv preprint arXiv:1808.07231 (2018), 1--6.
[125]
Vinicius Gomes Pereira. 2018. Using supervised machine learning and sentiment analysis techniques to predict homophobia in Portuguese tweets. Ph.D. Dissertation. Fundacc ao Getulio Vargas.
[126]
Adam Poulsen, Eduard Fosch-Villaronga, and Roger Andre Søraa. 2020. Queering machines. Nature Machine Intelligence, Vol. 2, 3 (2020), 152--152.
[127]
Mattia Prosperi, Yi Guo, Matt Sperrin, James S Koopman, Jae S Min, Xing He, Shannan Rich, Mo Wang, Iain E Buchan, and Jiang Bian. 2020. Causal inference and counterfactual prediction in machine learning for actionable healthcare. Nature Machine Intelligence, Vol. 2, 7 (2020), 369--375.
[128]
Meghan Romanelli and Kimberly D Hudson. 2017. Individual and systemic barriers to health care: Perspectives of lesbian, gay, bisexual, and transgender adults. American Journal of Orthopsychiatry, Vol. 87, 6 (2017), 714.
[129]
Andreas Rossler, Davide Cozzolino, Luisa Verdoliva, Christian Riess, Justus Thies, and Matthias Nießner. 2019. Faceforensics+: Learning to detect manipulated facial images. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 1--11.
[130]
Jan A Roth, Gorjan Radevski, Catia Marzolini, Andri Rauch, Huldrych F Günthard, Roger D Kouyos, Christoph A Fux, Alexandra U Scherrer, Alexandra Calmy, Matthias Cavassini, et al. 2020. Cohort-derived machine learning models for individual prediction of chronic kidney disease in people living with Human Immunodeficiency Virus: A prospective multicenter cohort study. The Journal of Infectious Diseases (2020).
[131]
Punyajoy Saha, Binny Mathew, Pawan Goyal, and Animesh Mukherjee. 2019. HateMonitors: Language agnostic abuse detection in social media. arXiv preprint arXiv:1909.12642 (2019), 1--8.
[132]
Maria C Sanchez and Linda Schlossberg. 2001. Passing: Identity and Interpretation in Sexuality, Race, and Religion. Vol. 29. NYU Press.
[133]
Morgan Klaus Scheuerman, Kandrea Wade, Caitlin Lustig, and Jed R Brubaker. 2020. How we've taught algorithms to see identity: Constructing race and gender in image databases for facial analysis. Proceedings of the ACM on Human-Computer Interaction, Vol. 4, CSCW1 (2020), 1--35.
[134]
Anna Schmidt and Michael Wiegand. 2017. A survey on hate speech detection using natural language processing. In Proceedings of the Fifth International Workshop on Natural Language Processing for Social Media. 1--10.
[135]
Jason S Schneider, Vincent MB Silenzio, and Laura Erickson-Schroth. 2019. The GLMA Handbook on LGBT Health. ABC-CLIO.
[136]
Dominic Scicchitano. 2019. The `real' Chechen man: Conceptions of religion, nature, and gender and the persecution of sexual minorities in postwar Chechnya. Journal of Homosexuality (2019), 1--18.
[137]
Brad Sears and Christy Mallory. 2011. Documented evidence of employment discrimination & its effects on LGBT people.
[138]
Andrew D Selbst, Danah Boyd, Sorelle A Friedler, Suresh Venkatasubramanian, and Janet Vertesi. 2019. Fairness and abstraction in sociotechnical systems. In Proceedings of the Conference on Fairness, Accountability, and Transparency. 59--68.
[139]
John Semerdjian, Konstantinos Lykopoulos, Andrew Maas, Morgan Harrell, Julie Priest, Pedro Eitz-Ferrer, Connor Wyand, and Andrew Zolopa. 2018. Supervised machine learning to predict HIV outcomes using electronic health record and insurance claims data. In AIDS 2018 Conference.
[140]
Katherine Sender. 2018. The gay market is dead, long live the gay market: From identity to algorithm in predicting consumer behavior. Advertising & Society Quarterly, Vol. 18, 4 (2018).
[141]
Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng. 2020. Towards controllable biases in language generation. arXiv preprint arXiv:2005.00268 (2020), 1--16.
[142]
Mark Sherry. 2019. Disablist hate speech online. Disability Hate Speech: Social, Cultural and Political Contexts (2019).
[143]
Sonia Singh, Ruiguang Song, Anna Satcher Johnson, Eugene McCray, and H Irene Hall. 2018. HIV incidence, prevalence, and undiagnosed infections in US men who have sex with men. Annals of Internal Medicine, Vol. 168, 10 (2018), 685--694.
[144]
Brij Mohan Lal Srivastava, Aurélien Bellet, Marc Tommasi, and Emmanuel Vincent. 2019 a. Privacy-preserving adversarial representation learning in ASR: Reality or illusion?. In 20th Annual Conference of the International Speech Communication Association.
[145]
Megha Srivastava, Hoda Heidari, and Andreas Krause. 2019 b. Mathematical notions vs. human perception of fairness: A descriptive approach to fairness for machine learning. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2459--2468.
[146]
Stonewall. n.d. The truth about trans. https://www.stonewall.org.uk/truth-about-trans. Accessed: 2021-04--19.
[147]
Yolande Strengers, Lizhen Qu, Qiongkai Xu, and Jarrod Knibbe. 2020. Adhering, steering, and queering: Treatment of gender in natural language generation. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1--14.
[148]
Derek H Suite, Robert La Bril, Annelle Primm, and Phyllis Harrison-Ross. 2007. Beyond misdiagnosis, misunderstanding and mistrust: Relevance of the historical perspective in the medical and mental health treatment of people of color. Journal of the National Medical Association, Vol. 99, 8 (2007), 879.
[149]
Seyed Amin Tabatabaei, Mark Hoogendoorn, and Aart van Halteren. 2018. Narrowing reinforcement learning: Overcoming the cold start problem for personalized health interventions. In International Conference on Principles and Practice of Multi-Agent Systems. Springer, 312--327.
[150]
Rima S Tanash, Zhouhan Chen, Tanmay Thakur, Dan S Wallach, and Devika Subramanian. 2015. Known unknowns: An analysis of Twitter censorship in Turkey. In Proceedings of the 14th ACM Workshop on Privacy in the Electronic Society. 11--20.
[151]
Margit Tavits and Efrén O Pérez. 2019. Language influences mass opinion toward gender and LGBT equality. Proceedings of the National Academy of Sciences, Vol. 116, 34 (2019), 16781--16786.
[152]
Elliot A Tebbe and Bonnie Moradi. 2016. Suicide risk in trans populations: An application of minority stress theory. Journal of Counseling Psychology, Vol. 63, 5 (2016), 520.
[153]
The Trevor Project. 2020. National survey on LGBTQ youth mental health 2020. https://www.thetrevorproject.org/survey-2020/. Accessed: 2020-09-10.
[154]
The Trevor Project. n.d. The Trevor Project. https://www.thetrevorproject.org/. Accessed: 2020-09-10.
[155]
Crispin Thurlow. 2001. Naming the “outsider within”: Homophobic pejoratives and the verbal abuse of lesbian, gay and bisexual high-school pupils. Journal of Adolescence, Vol. 24, 1 (2001), 25--38.
[156]
R Jay Turner and Samuel Noh. 1988. Physical disability and depression: A longitudinal analysis. Journal of Health and Social Behavior (1988), 23--37.
[157]
Aaron van Dorn, Rebecca E Cooney, and Miriam L Sabin. 2020. COVID-19 exacerbating inequalities in the US. Lancet, Vol. 395, 10232 (2020), 1243.
[158]
Michael Veale and Reuben Binns. 2017. Fairer machine learning in the real world: Mitigating discrimination without collecting sensitive data. Big Data & Society, Vol. 4, 2 (2017), 1--17.
[159]
Athanasios Voulodimos, Nikolaos Doulamis, Anastasios Doulamis, and Eftychios Protopapadakis. 2018. Deep learning for computer vision: A brief review. Computational Intelligence and Neuroscience, Vol. 2018 (2018).
[160]
Barbara C Wallace and Erik Santacruz. 2017. Addictions and substance abuse in the LGBT community: New approaches. LGBT Psychology and Mental Health: Emerging Research and Advances (2017), 153--175.
[161]
Yuanyuan Wang, Zhishan Hu, Ke Peng, Ying Xin, Yuan Yang, Jack Drescher, and Runsen Chen. 2019. Discrimination against LGBT populations in China. The Lancet Public Health, Vol. 4, 9 (2019), e440--e441.
[162]
Yilun Wang and Michal Kosinski. 2018. Deep neural networks are more accurate than humans at detecting sexual orientation from facial images. Journal of Personality and Social Psychology, Vol. 114, 2 (2018), 246.
[163]
Michael Weinberg. 2009. LGBT-inclusive language. English Journal, Vol. 98, 4 (2009), 50.
[164]
Gregor Wolbring. 2001. Where do we draw the line? Surviving eugenics in a technological world. Disability and the Life Course: Global Perspectives (2001), 38--49.
[165]
Mengzhou Xia, Anjalie Field, and Yulia Tsvetkov. 2020. Demoting racial bias in hate speech detection. arXiv preprint arXiv:2005.12246 (2020), 1--8.
[166]
Elad Yom-Tov, Guy Feraru, Mark Kozdoba, Shie Mannor, Moshe Tennenholtz, and Irit Hochberg. 2017. Encouraging physical activity in patients with diabetes: Intervention using a reinforcement learning system. Journal of Medical Internet Research, Vol. 19, 10 (2017), e338.
[167]
Jillian York. 2015. Privatising censorship online. Global Information Society Watch 2015: Sexual rights and the internet (2015), 26--29.
[168]
Sean D Young, Wenchao Yu, and Wei Wang. 2017. Toward automating HIV identification: Machine learning for rapid identification of HIV-related social media data. Journal of Acquired Immune Deficiency Syndromes, Vol. 74, Suppl 2 (2017), S128.
[169]
Jiaming Zhang, Jitao Sang, Xian Zhao, Xiaowen Huang, Yanfeng Sun, and Yongli Hu. 2020. Adversarial privacy-preserving filter. In Proceedings of the 28th ACM International Conference on Multimedia. 1423--1431.

          Published In

AIES '21: Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, July 2021, 1077 pages. ISBN: 9781450384735. DOI: 10.1145/3461702.
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery, New York, NY, United States

          Publication History

          Published: 30 July 2021

          Author Tags

          1. algorithmic fairness
          2. gender identity
          3. machine learning
          4. marginalised groups
          5. queer communities
          6. sexual orientation

          Acceptance Rates

          Overall Acceptance Rate 61 of 162 submissions, 38%
