Part III Rogerian Argument Essay
For Discrimination in AI
PART III: ROGERIAN ARGUMENT
A. Stakeholder I – Viewpoints
B. Stakeholder II – Viewpoints
C. Interpretations / Interwoven Perspectives / Deeper Meanings
Artificial intelligence (AI) is being applied to almost every field, increasing the efficiency and accuracy of many tasks. While AI has many promising benefits, these programs have also been shown to discriminate against individuals when making decisions. It seems counterintuitive that programs meant to increase efficiency can be biased, but the biases can stem from several stages of development, including the data, the algorithm, and the developer. With AI being adopted in increasingly high-stakes settings, the consequences of biased algorithms increase. For example, AI is being used in the judicial system as a predictive tool that forecasts future criminal activity, and court decisions are made with that in mind. Of course, solutions to discrimination have been proposed. On the technical side, there is already bias mitigation software (often built into the AI) used to spot bias in the data and outputs. On the social side, greater transparency would allow for more opportunities to point out flaws that challenge an AI program’s ethicality (Rossi, 2019). However, despite such solutions having been proposed, discrimination is still a prominent issue in AI. At this point, it is crucial to evaluate the effectiveness of such solutions and act accordingly. Two of the most commonly implemented solutions are creating ethical guidelines and increasing diversity in the AI community.
Many companies create ethical guidelines in response to concerns about the ethical soundness of the AI technologies they use or put out. These guidelines give general rules for how developers and other involved employees should create and deploy the company’s AI programs. With discrimination in AI becoming a prominent issue, AI4People, under the Atomium – European Institute for Science, Media, and Democracy, has stepped up to the plate. Started in June of 2017, the organization’s goal is to create a common public space for laying out the founding principles, policies, and practices for a good AI society. In 2018, one of its key members, Prof. Luciano Floridi, presented a unified ethical framework for building and using AI. Under this ethical framework, those creating AI under the principles outlined will create fair, non-discriminatory programs. The organization has also formed subcommittees for specific sectors, including the Legal Services Industry and Media & Technology. Each of these subcommittees will provide more industry-specific recommendations rather than the general rules that many companies offer.
The second commonly implemented solution is increasing diversity in the AI community. With more diverse AI teams that include a range of identities on all
spectrums, the technologies can be created with a broader societal context and various
perspectives. The hope is that perspectives from groups who have been historically
discriminated against will be able to point out possible issues in which the program
might become biased such as using a predominantly white dataset, or neglecting to test
the algorithm on females. The follow-up question to this solution is how we can increase diversity in the first place. One organization working toward this goal, AI4ALL, runs summer programs through several universities known for their technology programs, including Carnegie Mellon University, Princeton University, Georgia Tech, and more. The summer programs aim to serve groups historically excluded from AI: Indigenous peoples; Black, Hispanic or Latinx, Pacific Islander, and Southeast Asian people; trans and non-binary people; two-spirit people; cis women and girls; lesbian, gay, bisexual, and queer people; students who demonstrate financial need; and future first-generation
college students. Clearly, AI4ALL targets all underrepresented groups, and seeks to
broaden AI’s diversity in all spectrums of identity. According to AI4ALL’s website, the
organization has “impacted 12,300 people in all 50 states around the world through our
programs and our alumni outreach”. Educating underrepresented groups about AI and supporting their entry into the field is a direct way to diversify the community.
Both the preventative ethical approach and the increasing diversity approach are
valid means of reducing discrimination in AI, but the ethical approach is simply not as effective. Research has shown that many of these guidelines are ineffective, do not go into enough detail, and do not cover many important concerns. Many guidelines do not go into enough specifics to actually enforce principles such as fairness and transparency, and workers under those guidelines have reported that they do not
adhere to them strictly. More specifically, “The effectiveness of guidelines or ethical
codes is almost zero and they do not change the behavior of professionals from the
tech community” (Hagendorff, 2020). While guidelines are important to building ethical
AI programs, this alone will not be enough to combat the complex issue of discrimination, which can stem from the developer, the training data sets, and/or the algorithm. The fact that there are several causes of discrimination in AI is just one reason why this issue is such a daunting one; another is implicit bias, the unconscious bias that affects our opinions (Lin, 2020). Implicit bias is unintentional and hides under the norms
of society, and it often makes its way into AI without the developers realizing. Therefore,
even with the most detailed ethical guidelines, AI will reflect and further perpetuate the
implicit biases of society. However, this does not mean that developers and companies should abandon ethical guidelines; rather, the diversity approach is simply more effective than the ethical guidelines approach. Between the examples of AI4People and AI4ALL, AI4ALL already shows promising results through the sheer reach of its programs, while AI4People’s framework cannot guarantee that any, let alone all, AI programs will be made with such guidelines in mind.
More diverse teams will be more likely to have an ethical mindset from the beginning, as
it is only in their best interest, and the programs can undergo the critique of multiple
perspectives. A common issue with AI is the neglect to test the program on diverse
people, as in face recognition software. Face recognition applications were reported
to have difficulty identifying non-Caucasian faces, and voice recognition systems could
not recognize a woman’s voice as accurately as a man’s. If the programs were tested
with a diverse set of consumers in mind, these issues could have been spotted and
corrected. Even more efficiently, if the developing team were itself diverse, including such diversity in testing would be the automatic choice. However, diversity itself is not a
catch-all solution. As mentioned above, implicit bias is a tricky issue and can affect even diverse teams, so the best path forward is an optimized combination of the many solutions, including diversity and AI ethics. The diversity solution still has a limitation: there aren’t many ways to increase diversity itself beyond education and outreach programs like AI4ALL’s. Even then, the students and alumni of AI4ALL aren’t guaranteed to follow through with a career in AI, nor are they guaranteed to create ethical AI programs, as they could still be just as likely to discriminate against other groups. Creating more robust and enforced ethical guidelines, along with actively hiring diverse teams on AI projects, would combine these two approaches to reduce discrimination in AI with the greatest efficiency.
In conclusion, discrimination has become a prominent issue alongside the expanding applications of AI. Several algorithms have been found to make biased decisions based on race, gender, and other factors without this being the intention of their developers. These programs reflect the implicit biases of society, and there are multiple possible causes of the bias in each program, so it is crucial to evaluate whether the current solutions to discrimination in AI are sufficient. On one hand, several developers and companies who
use AI are proposing ethical guidelines to be incorporated in the development of the
algorithms. These guidelines cover important topics that will work toward eliminating bias,
such as explainability, transparency, and bias mitigation. On the other hand, it has been
shown that these ethical guidelines are not effective, and even after their implementation, professionals’ behavior does not change. Increasing diversity is a more effective solution for AI, as the teams creating the programs themselves will have a
broader societal context in mind, but even this approach has its limitations. Therefore,
AI communities should seek to create more robust ethical guidelines and actively hire
diverse teams to work on the programs. AI technologies can and will be applied to
almost every field, and will increase the efficiency and accuracy of many processes.
This fast-growing pace is why it is important to ensure that our approaches to reducing discrimination in AI are as effective as possible.
BIBLIOGRAPHY
AI4ALL, ai-4-all.org/.
Hagendorff, Thilo. “The Ethics of AI Ethics: An Evaluation of Guidelines.” Minds & Machines, vol. 30, no. 1, Mar. 2020, pp. 99–120. EBSCOhost, doi:10.1007/s11023-020-09517-8.
Lin, Ying-Tung, et al. “Engineering Equity: How AI Can Help Reduce the Harm of
Implicit Bias.” Philosophy & Technology, July 2020, pp. 1–26. EBSCOhost,
doi:10.1007/s13347-020-00406-7.
Rossi, Francesca. “Building Trust in Artificial Intelligence.” Journal of International Affairs, vol. 72, no. 1, 2019. EBSCOhost, search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=134748798&site=ehost-live&scope=site.