Thesis: S.M., Massachusetts Institute of Technology, Department of Comparative Media Studies, 2015. Cataloged from PDF version of thesis. Includes bibliographical references (pages 177-183). In recent years, significant resources and attention have been directed at increasing the diversity of the high-tech workforce in the United States. Generally speaking, the underrepresentation of minorities and women in tech has been understood as an "educational pipeline problem": for a variety of reasons, these groups lack the social supports and resources needed to develop marketable technical literacies. In this thesis I complicate the educational pipeline narrative by taking a close look at the perspectives and practices of three different groups. First, I explore widespread assumptions and recruitment practices found in the tech industry, based on interviews I conducted with over a dozen leaders and founders of tech companies. I found that widespread notions of what merit...
This chapter discusses contemporary debates regarding the use of artificial intelligence as a vehicle for criminal justice reform. It closely examines two general approaches to what has been widely branded as "algorithmic fairness" in criminal law: the development of formal fairness criteria and accuracy measures that illustrate the trade-offs of different algorithmic interventions, and the development of "best practices" and managerialist standards for maintaining a baseline of accuracy, transparency, and validity in these systems. Attempts to render AI-branded tools more accurate by addressing narrow notions of bias miss the deeper methodological and epistemological issues regarding the fairness of these tools. The key question is whether predictive tools reflect and reinforce the punitive practices that drive disparate outcomes, and how data regimes interact with penal ideology to naturalize these practices. The chapter then calls for a radically different understanding of the role and function of the carceral state as a starting place for re-imagining the role of "AI" as a transformative force in the criminal legal system.
Research within the social sciences and humanities has long characterized the work of data science as a sociotechnical process, comprising a set of logics and techniques that are inseparable from specific social norms, expectations, and contexts of development and use. Yet all too often the assumptions and premises underlying data analysis remain unexamined, even in contemporary debates about the fairness of algorithmic systems. This blind spot exists in part because the methodological toolkit used to evaluate the fairness of algorithmic systems remains limited to a narrow set of computational and legal modes of analysis. In this paper, we expand on Elish and Boyd's [17] call for data scientists to develop more robust frameworks for understanding their work as situated practice by examining a specific methodological debate within the field of anthropology, frequently referred to as the practice of "studying up". We reflect on the contributions that the call to "st...
Actuarial risk assessments might be unduly perceived as a neutral way to counteract implicit bias and increase the fairness of decisions made at almost every juncture of the criminal justice system, from pretrial release to sentencing, parole, and probation. In recent times these assessments have come under increased scrutiny, as critics claim that the statistical techniques underlying them might reproduce existing patterns of discrimination and historical biases reflected in the data. Much of this debate centers on competing notions of fairness and predictive accuracy, resting on the contested use of variables that act as "proxies" for characteristics legally protected against discrimination, such as race and gender. We argue that a core ethical debate surrounding the use of regression in risk assessments is not simply one of bias or accuracy. Rather, it is one of purpose. If machine learning is operationalized merely in the service of predicting indivi...
Data-driven decision-making regimes, often branded as “artificial intelligence,” are rapidly proliferating across the US criminal justice system as a means of predicting and managing the risk of crime and addressing accusations of discriminatory practices. These data regimes have come under increased scrutiny, as critics point out the myriad ways that they can reproduce or even amplify pre-existing biases in the criminal justice system. This essay examines contemporary debates regarding the use of “artificial intelligence” as a vehicle for criminal justice reform by closely examining two general approaches to what has been widely branded as “algorithmic fairness” in criminal law: 1) the development of formal fairness criteria and accuracy measures that illustrate the trade-offs of different algorithmic interventions, and 2) the development of “best practices” and managerialist standards for maintaining a baseline of accuracy, transparency, and validity in these systems. The essay argues that attempts to render AI-branded tools more accurate by addressing narrow notions of “bias” miss the deeper methodological and epistemological issues regarding the fairness of these tools. The key question is whether predictive tools reflect and reinforce the punitive practices that drive disparate outcomes, and how data regimes interact with penal ideology to naturalize these practices. The article concludes by calling for an abolitionist understanding of the role and function of the carceral state, in order to fundamentally reformulate the questions we ask, the way we characterize existing data, and how we identify and fill gaps in the data regimes of the carceral state.