Sheilla Njoto is a PhD scholar at the University of Melbourne working at the intersection of technology, society, and ethics. Based at the Centre for AI and Digital Ethics within the School of Social and Political Sciences, she researches how semi-automated hiring processes may discriminate against women and against the feminine language used in CVs.

In the professional realm, Sheilla serves as Strategy Lead at Nation Insights in Jakarta, where she draws on human and cultural insights to shape business strategy, helping businesses recognise and actualise their core purposes and translate abstract ideas into actionable plans.

Beyond academia and the corporate world, Sheilla holds several leadership roles. She is Co-Founder and Strategic Advisor of Gen: Politics Ltd, a Melbourne-based not-for-profit organisation that fosters connections between young people and policymaking. Her dedication to youth employment is further reflected in her role as Sous-Chair of the Youth Employment agenda of Y20 Indonesia 2022, where she chaired plenaries on strategies to improve youth employment conditions.

With a multidisciplinary approach that integrates her academic research with her professional and social commitments, Sheilla's work contributes to the scholarly understanding of technology's societal impact and shapes practical responses to pressing contemporary challenges.

Supervisors: Prof Leah Ruppanner

Papers by Sheilla M Njoto
This paper explores the extent to which gender bias is introduced in the deployment of automation for hiring practices. We use an interdisciplinary methodology to test our hypotheses: observing a human-led recruitment panel and building an explainable algorithmic prototype from the ground up to quantify gender bias. The key findings of this study are threefold: identifying potential sources of human bias in a recruitment panel's ranking of CVs; identifying sources of bias in an algorithmic pipeline that simulates human decision-making; and recommending ways to mitigate bias on both fronts. Our research provides an innovative design that combines social science and data science to theorise how automation may introduce bias into hiring practices, and to pinpoint where it is introduced. It also furthers current scholarship on gender bias in hiring practices by providing key empirical inferences on the factors contributing to bias.
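As an illustration of what quantifying rank-level gender bias with a fully inspectable model might look like, here is a minimal sketch. The CVs, weights and ranking rule are invented for illustration; they are not the paper's prototype or data.

```python
# Invented CVs: (tokenised text, self-identified gender of the author).
cvs = [
    ("led team managed budget delivered", "M"),
    ("led project managed stakeholders delivered", "M"),
    ("assisted team supported delivery coordinated", "F"),
    ("assisted research supported analysis coordinated", "F"),
]

# Transparent scorer: every weight is visible, so any bias it introduces
# can be traced back to a specific feature, unlike a black-box ranker.
weights = {"led": 1.0, "managed": 0.9, "delivered": 0.8,
           "assisted": 0.3, "supported": 0.2, "coordinated": 0.2}

def score(text: str) -> float:
    return sum(weights.get(tok, 0.0) for tok in text.split())

# Rank all CVs, then compare the mean rank (1 = best) by gender.
ranked = sorted(cvs, key=lambda cv: score(cv[0]), reverse=True)
for g in ("M", "F"):
    ranks = [i + 1 for i, (_, gg) in enumerate(ranked) if gg == g]
    print(g, "mean rank:", sum(ranks) / len(ranks))
```

Even though this toy scorer never sees gender, a rank gap emerges whenever one group's CVs use more low-weighted verbs; because every weight is inspectable, the source of the gap can be pinpointed, which is the kind of explainability the abstract refers to.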
This paper discusses the impact of biases in Machine Learning and how they can lead to unintended consequences. It aims to put into perspective the importance of data and algorithms when employing Machine Learning in predictive analytics. The paper provides an overview of Data Analytics and Machine Learning as emerging topics in today's information era. In particular, it describes what is meant by Machine Learning and how Machine Learning can be utilized for automated decision making in the Accounting discipline. It also presents a case on the use of Machine Learning for Accounting job recruitment in Australia and the United States. Our results show that female candidates are disadvantaged in two ways: (1) the Machine Learning algorithm viewed gender-specific names more favorably; and (2) human recruiters were more likely to view and call male applicants for interview. We demonstrate two clear pathways through which female job applicants experience occupational discrimination in the job recruitment process.
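The first pathway can be illustrated with a hypothetical name-substitution audit: the same CV is scored under female- and male-coded applicant names, and the score gap is measured. The model, names and weights below are invented; the paper's actual algorithm is not reproduced here.

```python
# Per-token weights a model might absorb when applicant names are left in
# the training features -- a known source of proxy discrimination.
learned_name_weight = {"james": 0.4, "michael": 0.35, "emily": 0.1, "sarah": 0.05}

def model_score(application: str) -> float:
    first_token = application.split()[0].lower()
    base_quality = 5.0  # stand-in for whatever the real CV features contribute
    return base_quality + learned_name_weight.get(first_token, 0.0)

cv_body = "accountant five years audit experience CPA qualified"

def mean(xs):
    return sum(xs) / len(xs)

# Identical CV text; only the applicant name changes.
male = [model_score(f"{name} {cv_body}") for name in ("James", "Michael")]
female = [model_score(f"{name} {cv_body}") for name in ("Emily", "Sarah")]
print("male - female score gap:", mean(male) - mean(female))  # 0.3
```

A non-zero gap on otherwise identical applications is direct evidence that the name, not the qualifications, is driving the score.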
Gender discrimination in hiring is a pertinent and persistent bias in society, and a common motivating example for exploring bias in NLP. However, the manifestation of gendered language in application materials has received limited attention. This paper investigates the framing of skills and background in the CVs of self-identified men and women. We introduce a data set of 1.8K authentic, English-language CVs from the US, covering 16 occupations, allowing us to partially control for the confound of occupation-specific gender base rates. We find that (1) women use more verbs evoking impressions of low power; and (2) classifiers capture gender signal even after data balancing and the removal of pronouns and named entities, and this holds for both transformer-based and linear classifiers.
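A minimal sketch of this kind of probing experiment, assuming scikit-learn is available and using a few invented CV snippets in place of the paper's 1.8K-CV corpus: pronouns are scrubbed, then a linear classifier is trained to predict author gender; any above-chance accuracy on held-out text would indicate that gender signal survives in framing and word choice alone.

```python
import re

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

PRONOUNS = re.compile(r"\b(he|she|him|her|his|hers)\b", re.IGNORECASE)

def scrub(text: str) -> str:
    """Remove explicit gender markers; a full pipeline would also strip
    named entities before probing for residual gender signal."""
    return PRONOUNS.sub(" ", text)

texts = [  # invented stand-ins, not the paper's data
    "She led the audit team and delivered her reports on time",
    "He managed the budget and his team spearheaded the rollout",
    "Assisted colleagues and supported the delivery of projects",
    "Directed strategy and drove the expansion of the business",
]
labels = ["F", "M", "F", "M"]

# Linear probe: TF-IDF features into logistic regression.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit([scrub(t) for t in texts], labels)
print(clf.predict([scrub("Supported the team and assisted with planning")]))
```

The same probe can be repeated with a transformer-based classifier in place of the linear one; the paper's finding is that both families still pick up the signal after scrubbing and balancing.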
This paper examines how the adoption of hiring algorithms expands gender inequality in the labour force. It hypothesises that, because they predict the future from historical data, hiring algorithms risk discriminating against women. Their propensity to repeat, if not amplify, societal bias is predominantly driven by their blind reliance on (1) misrepresentative data, (2) correlational errors, and (3) the limitations of datasets. The study finds, firstly, that despite identical qualifications, skills and experience, hiring algorithms rank male candidates higher than female candidates; this extends to 'passive' submissions via online candidate profiling. Secondly, despite the overrepresentation of women in parenthood and the non-traditional workforce, hiring algorithms discriminate against both male and female candidates who have taken parental leave, compared with those who have not. Thirdly, hiring algorithms are significantly more prone to gender discrimination when assessing resumés containing gender-coded keywords than when assessing otherwise identical resumés. The paper argues that the rise of digitalisation should prompt a redefinition of 'fairness', 'discrimination' and 'accountability'. Despite the seriousness of these problems, however, the lack of cross-disciplinary study of this issue persists. The paper's contentions reprise existing arguments and complex theories; its aim is to start a new conversation about the acute problems faced by women.
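The parental-leave finding can be illustrated with a hypothetical proxy-feature sketch: a scorer that has "learned" from biased historical outcomes to penalise employment gaps will penalise any candidate with a leave gap, regardless of gender. The penalty weight and career histories below are invented.

```python
def gap_years(career: list[tuple[int, int]]) -> int:
    """Total years not covered by any employment interval (start, end)."""
    worked = set()
    for start, end in career:
        worked.update(range(start, end))
    span = range(min(s for s, _ in career), max(e for _, e in career))
    return sum(1 for year in span if year not in worked)

def score(base_quality: float, career: list[tuple[int, int]]) -> float:
    GAP_PENALTY = 0.5  # weight "learned" from biased historical outcomes
    return base_quality - GAP_PENALTY * gap_years(career)

continuous = [(2012, 2022)]
with_leave = [(2012, 2017), (2019, 2022)]  # two years of parental leave
print(score(8.0, continuous), score(8.0, with_leave))  # 8.0 vs 7.0
```

Because the penalty attaches to the gap itself rather than to gender, it hits any candidate who has taken leave, which matches the paper's observation that both male and female candidates with parental leave are disadvantaged.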