
Ethical Issues in Artificial Intelligence in Healthcare
HAN20090214
British University Vietnam

 
Abstract
AI raises legal, ethical, and philosophical questions about privacy, surveillance, bias, and human
judgment, and new digital technologies have heightened concerns about inaccuracy and data
breaches. Failures in healthcare can harm patients, who encounter clinicians at their most
vulnerable. Although artificial intelligence in healthcare raises legal and ethical difficulties,
there are as yet no clear rules governing its use. This evaluation emphasizes algorithmic
transparency, the privacy and security of all stakeholders, and the cybersecurity of connected
vulnerabilities.

Introduction
Healthcare systems face growing medical demand, chronic illness, and resource constraints. As
digital health technologies see wider use, healthcare data is expanding. If that data is exploited
correctly, health professionals can concentrate on disease causes and monitor preventive
actions and therapies. Decision-makers, including policymakers and legislators, should
therefore be educated about it. Computer scientists, statisticians, and clinical entrepreneurs
agree that AI, and especially machine learning, will be critical to the success of healthcare
reform (Morley and Floridi, 2021). Computer programs that can reason and learn are called
artificially intelligent; the term covers adaptability, sensory comprehension, and social
engagement. By extracting useful insights from the masses of digital data generated at every
level of healthcare delivery, artificial intelligence (AI) could radically alter the industry
(Drukker, 2020).

Generally, artificial intelligence is built from software and hardware systems. An artificial
neural network (ANN) provides a theoretical basis for the development of AI algorithms: it is a
simulation of the human brain, with weighted channels transferring information between
individual neurons. AI programs find complicated, non-linear associations in vast datasets. By
identifying and fixing algorithmic failures, training can boost the accuracy of the predictive
framework (Rong, 2020; Miller, 2018).
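The weighted-channel idea above can be illustrated with a minimal sketch, not drawn from any cited source: a single artificial neuron whose weights are nudged by gradient descent until its predictions match a toy dataset. The data, learning rate, and epoch count are arbitrary choices for illustration only.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_neuron(data, epochs=2000, lr=0.5):
    """Train a single artificial neuron by gradient descent: each pass
    nudges the weighted input channels against the prediction error."""
    random.seed(0)                      # reproducible initial weights
    w = [random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5)]
    b = 0.0
    for _ in range(epochs):
        for x, target in data:
            y = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
            err = y - target            # error drives the correction
            w[0] -= lr * err * x[0]
            w[1] -= lr * err * x[1]
            b -= lr * err
    return w, b

# Toy task: fire only when both input signals are present (logical AND)
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_neuron(data)
preds = [round(sigmoid(w[0] * x[0] + w[1] * x[1] + b)) for x, _ in data]
```

Each training pass moves the weights against the prediction error, which is how identifying and fixing algorithmic failures improves the predictive framework.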

New technologies may introduce inaccuracies and data breaches. Errors in high-risk healthcare
may have severe repercussions for patients. This is crucial because patients encounter
professionals at their most vulnerable. If harnessed properly, AI can deliver evidence-based
guidance and decision support to clinicians. It offers diagnostics, medication development,
epidemiology, individualized treatment, and operational efficiency. Integrating AI solutions into
medical practice requires a robust governance structure to safeguard people from harm,
including unethical conduct (Lim, 2017).

Utilization of AI in Medical Research

A significant area of study in artificial intelligence and healthcare is the use of data from
electronic health records. If the underlying database and IT architecture do not minimize the
propagation of inconsistent or poor-quality data, it may be difficult to put the gathered data to
good use.

Despite this, AI has the ability to advance clinical care, care quality, and research as a
component of electronic health records (EHRs). AI that has been properly built and trained on
healthcare data may aid in identifying clinically optimal procedures outside the customary
channels of scientific publication, guideline development, and clinical support systems. Clinical
practice patterns derived from electronic health data may also be studied by AI, which could
help in the creation of new medical methodologies for health services (Char, 2020).

Economist Antonio Argandoña argues that proper moral standards should be enforced on
information and data when emerging technologies depend on and are built from them
(Hartman et al., 2021). He prescribes the following components:

- Truthfulness: those providing information must guarantee that it is true and reliable, at
least to a fair degree.
- Respect for privacy: when collecting or using someone else's data, the ethical
boundaries surrounding that use must be observed.
- Respect for property and safety rights: areas of possible vulnerability, such as network
security, sabotage, theft of information, and impersonation, must be strengthened and
secured.
- Responsibility: technology's increased anonymity and separation from its users
increases the importance of each individual taking responsibility for their actions.

Incorporating AI into the Pharmaceutical Research and Development Process

Future AI implementation is anticipated to streamline and speed up pharmaceutical
development. Using AI in the form of robots and models of genetic targets, medications,
organs, and illnesses, as well as of their development, pharmacology, safety, and performance,
might reduce the time and resources needed for drug development, making the research and
development process faster and more cost-effective. Although, as with any medication trial,
finding a lead chemical does not ensure the creation of a safe and effective therapeutic,
artificial intelligence has already been used to discover prospective Ebola virus therapies
(Char, 2020).
Ethical Difficulties

There are four major ethical problems that must be solved before the full potential of AI in
healthcare can be realized: data privacy, algorithmic fairness, bias, and transparency (Gerke,
2020). There is also a political dimension to the question of whether AI systems may be
considered legitimate (Rodrigues, 2020).

The goal is to provide policymakers with the tools they need to proactively address the ethically
complex challenges raised by mandatory AI use in healthcare settings (Machado et al., 2020).
Most of the legal discussion around AI has been motivated by worries about the lack of
information on how algorithms work. As AI is increasingly deployed in potentially harmful
settings, there is a rising need for responsible, ethical, and visible AI development and
administration. Information availability and understanding are the two most crucial aspects of
transparency, yet details of algorithm performance are often hidden from public view
(Albrecht, 2013).

It has been suggested that machines which operate according to uncorrected principles and
acquire new behavioural patterns may compromise the human capacity to identify the creator
or operator responsible for a violation. This is troubling because it threatens the foundation of
society's morals and the legal system's premise of responsibility. If AI is utilized, there may be
no way to determine who is responsible for any damage done. However, it is difficult to assess
the seriousness of the threat, since the widespread adoption of robots will drastically reduce
direct human involvement (Tigard, 2020).

The use of AI in a healthcare setting necessitates the capacity to maintain professionalism and
integrity in the face of frequent interruptions and changing priorities (Mirbabaie, 2021). The
capacity to evaluate a program and understand how it could fail is a basic and vital component
of assessing the safety of any medical software. The software development process, for
instance, can be compared to the methods used to create pharmaceuticals or mechanical
systems, both in its constituent parts and in the processes involved.

Why is responsibility necessary?

Artificial intelligence systems are vulnerable to sudden and catastrophic failure when the
environment or circumstances change: an AI may go from being very reliable to profoundly
untrustworthy in a short time. Every AI technology has limits, even when bias is minimal, and to
make good choices humans need to be aware of, and comfortable with, those limits. In
addition, people sometimes use decision-support tools without questioning their validity. The
court system is not immune to this sort of mistake; judges have revised their verdicts based on
risk assessments that later proved to be wrong (Mannes, 2020).
The use of AI without human involvement prompts concerns about cybersecurity. RAND
Perspectives warns that "data diet" vulnerabilities might open up a new attack vector if AI is
used for monitoring or network security in the area of national security. The research also
addresses domestic security problems, such as the increasing use of artificial agents by
governments for citizen monitoring, which has been identified as a possible threat to
fundamental rights. These problems are serious because they endanger essential
infrastructure, and with it people's lives, safety, and access to what they need. Since many
cybersecurity flaws are difficult to spot until after the damage has been done, they pose a
potentially serious risk (Rodrigues, 2020).

Bias in the datasets used to develop algorithms is a commonplace problem in artificial
intelligence (AI) research and development. According to Buolamwini and Gebru, the datasets
used in automated face recognition are biased, making such systems less effective at
identifying people with darker skin tones, particularly women. To be effective, machine
learning relies on large datasets, and the great majority of currently used datasets come from
clinical trial research drawn from predetermined groups. For this reason, it is possible that
underserved and, by extension, underrepresented patient groups would fare worse under the
resulting algorithms (Safdar et al., 2019).
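The subgroup disparity Buolamwini and Gebru describe can be made visible with a simple per-group accuracy check; the records below are invented purely for illustration.

```python
def subgroup_accuracy(records):
    """Compute accuracy separately for each demographic subgroup,
    exposing performance gaps hidden by a single aggregate score."""
    totals, correct = {}, {}
    for group, predicted, actual in records:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (predicted == actual)
    return {g: correct[g] / totals[g] for g in totals}

# Hypothetical predictions: group "B" is underrepresented in training data
records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 0),
    ("B", 1, 0), ("B", 0, 0),
]
acc = subgroup_accuracy(records)   # {"A": 1.0, "B": 0.5}
```

An aggregate accuracy of 5/6 would hide the fact that group "B" fares markedly worse, which is exactly the failure mode the cited studies warn about.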

Who Bears the Responsibility?

There are a variety of situations in which individuals fail to follow through on the ethical
decisions they make, and in which responsible decision-making goes horribly wrong. There are,
of course, situations where individuals deliberately act dishonestly; however unlikely it may
seem, unethical decisions and actions are always a possibility. Testing AISs is therefore advised:
before adoption, such robots and AI systems must be developed, tested, evaluated, and
analyzed logically and statistically for reliability, efficiency, stability, and ethical adherence.
Verification and validation may assist clinicians in justifying AIS use. Clinical ethics prohibit
unaccountable behaviour, yet both physicians and AIS may be opaque, and an AIS that cannot
be held accountable cannot work in human care. Managers of organizations utilizing AIS should
make it very clear to their medical staff that blaming the technology is not an acceptable means
of escaping accountability (Smith, 2020).
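The verification-and-validation step described above might, in practice, take the form of an explicit acceptance gate before deployment; the metric names and thresholds below are hypothetical choices for illustration, not from any cited source.

```python
def approve_for_clinical_use(metrics, thresholds):
    """Return the list of threshold checks an AI system fails; an empty
    list means the system may proceed to the next review stage."""
    failures = []
    for name, minimum in thresholds.items():
        if metrics.get(name, 0.0) < minimum:
            failures.append(name)
    return failures

# Hypothetical acceptance thresholds set by a governance board
thresholds = {"sensitivity": 0.95, "specificity": 0.90, "stability": 0.99}
metrics = {"sensitivity": 0.97, "specificity": 0.85, "stability": 0.995}
failed = approve_for_clinical_use(metrics, thresholds)  # ["specificity"]
```

A system failing any check would be returned for further development rather than deployed, giving clinicians a documented basis for justifying AIS use.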

AI and Bias

AI systems have been shown to absorb and act on human and social biases, even at large scale.
The fault often lies not in the method itself but in the data it employs. Models may be trained
on a wide variety of data, including human assessments and data reflecting the downstream
effects of social or historical injustices. There is also a risk of bias in data collection and use, and
user-generated data can act as a feedback loop that reinforces prejudice. We are unaware of
any guidelines or standards for documenting and assessing these models; nonetheless, they
should serve as a foundation for future work by scientists and medical professionals (Nelson,
2019; Shah et al., 2020).
As our dependence on these systems grows, it becomes ever more important that judgments
made by AI be ethical and free of prejudice. What humans need is an open, understandable,
and accountable AI. In several domains, AI algorithms already surpass humans at improving
patient pathways and surgical results. Given that AI is expected to supplement, coexist with, or
replace existing systems, entering the next era of healthcare without using AI would arguably
be unscientific and immoral (Parikh, 2019).

Evaluation
The ethical theory we will discuss is utilitarianism, which has its origins in 18th and 19th century
social and political philosophy but whose central premise is just as important in the 21st
century (Hartman et al., 2021). The core concept of utilitarianism is that results matter, and that
we should make decisions based on how those results will affect the greater good.
Utilitarianism is a consequentialist theory of ethics and social policy because its proponents
argue that we should choose courses of action that have the greatest net benefit to society
(Hartman et al., 2021).

The utilitarian approach has made important contributions to sound ethical decision-making,
despite the criticism it has received. When assessing the merits of future utilitarian decision-
making, it can be helpful to first consider some of the more general criticisms of the theory.

One set of issues is that the effects of actions are hard to count, measure, compare, and
quantify, as utilitarian reasoning requires. To follow the utilitarian principle that judgments
should be made by weighing the relative benefits and costs of several courses of action, we
need some kind of comparative framework. In reality, however, certain comparisons and
measurements can be challenging to make.
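One common comparative framework is expected utility: weigh each outcome's benefit by its probability and choose the option with the highest total. The sketch below uses invented numbers purely to illustrate the calculation, and also its difficulty: the benefit figures themselves are the hard part to justify.

```python
def expected_utility(option):
    """Weigh each possible outcome's benefit by its probability, giving
    the comparative measure utilitarian reasoning requires."""
    return sum(p * benefit for p, benefit in option["outcomes"])

# Hypothetical triage choice: benefit measured as lives saved
options = [
    {"name": "treatment_x", "outcomes": [(0.8, 10), (0.2, 0)]},   # EU = 8.0
    {"name": "treatment_y", "outcomes": [(0.5, 20), (0.5, -2)]},  # EU = 9.0
]
best = max(options, key=expected_utility)
```

The arithmetic is trivial; the ethical burden lies in assigning the probabilities and benefit values in the first place, which is exactly the criticism raised above.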

In an industry like healthcare, where human life matters above all else, decisions should be
made to save the most lives possible.

Conclusion
There is a growing need for morally sound AI in the medical field. Data bias may be avoided by
training algorithms on objective, real-time information, and both the method and its
implementation in a system need to be evaluated. Machine learning cannot take the place of
doctors' experience, but it can help them make more informed choices. Artificial intelligence
might also be used for screening and evaluation where medical professionals are scarce in
low-resource settings. Since all AI decisions are made by algorithms, even the quickest ones are
methodical compared with human decision-making. Therefore, even where actions carry no
legal consequences, it is not the technologies themselves but the minds behind them, and
those who use them, that must shoulder the burden of accountability. Even though there are
ethical concerns associated with AI, it is expected to integrate with or replace existing
healthcare systems. Refusing to adopt AI to help humanity advance might be both unscientific
and unethical, and further research should be conducted from different perspectives.

(2163 words)
REFERENCE LIST

1. Hartman, L., DesJardins, J. and MacDonald, C. (2021) Business Ethics: Decision Making for
Personal Integrity & Social Responsibility. Available at:
https://read.kortext.com/reader/epub/616112 (Accessed: December 8, 2022).

2. Morley, J. and Floridi, L. (2021) An ethically mindful approach to AI for Health Care, SSRN.
Available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3830536 (Accessed:
December 2, 2022).
3. Drukker, L., Noble, J.A. and Papageorghiou, A.T. (2020) Introduction to artificial intelligence
in ultrasound imaging in obstetrics and gynecology, Ultrasound in Obstetrics & Gynecology.
Available at: https://obgyn.onlinelibrary.wiley.com/doi/full/10.1002/uog.22122 (Accessed:
December 2, 2022).
4. Rong, G. et al. (2020) Artificial Intelligence in Healthcare: Review and Prediction case
studies, Engineering. Elsevier. Available at:
https://www.sciencedirect.com/science/article/pii/S2095809919301535 (Accessed:
December 3, 2022). 
5. Lim, S.S. et al. (2017) Artificial Intelligence in medical practice: The question to the
answer?, The American Journal of Medicine. Elsevier. Available at:
https://www.sciencedirect.com/science/article/pii/S0002934317311178 (Accessed:
December 2, 2022). 
6. Char, D., Abramoff, M. and Feudtner, C. (2020) Identifying ethical considerations for
Machine Learning Healthcare Applications, Taylor & Francis. Available at:
https://www.tandfonline.com/doi/abs/10.1080/15265161.2020.1819469 (Accessed:
December 2, 2022). 
7. Stephenson, J. (2021) WHO offers guidance on use of Artificial Intelligence in medicine,
JAMA Health Forum. JAMA Network. Available at:
https://jamanetwork.com/journals/jama-health-forum/article-abstract/2782125 (Accessed:
December 2, 2022).
8. Mirbabaie, M. et al. (2021) Artificial Intelligence in hospitals: Providing a status quo of
ethical considerations in academia to guide future research, AI & Society. Springer London.
Available at: https://link.springer.com/article/10.1007/s00146-021-01239-4 (Accessed:
December 3, 2022).
9. Rodrigues, R. (2020) Legal and human rights issues of AI: Gaps, challenges and
vulnerabilities, Journal of Responsible Technology. Elsevier. Available at:
https://www.sciencedirect.com/science/article/pii/S2666659620300056 (Accessed:
December 3, 2022).
10. Albrecht, J.P. (2013) Report on the proposal for a regulation of the European Parliament
and of the Council on the protection of individuals with regard to the processing of personal
data and on the free movement of such data (General Data Protection Regulation),
A7-0402/2013. European Parliament. Available at:
https://www.europarl.europa.eu/doceo/document/A-7-2013-0402_EN.html (Accessed:
December 3, 2022).
11. Tigard, D.W. (2020) There is no techno-responsibility gap - philosophy &
technology, SpringerLink. Springer Netherlands. Available at:
https://link.springer.com/article/10.1007/s13347-020-00414-7 (Accessed: December 3,
2022). 
12. Machado, C.V. et al. (2020) The ethics of AI in health care: A mapping review, Social
Science & Medicine. Pergamon. Available at:
https://www.sciencedirect.com/science/article/pii/S0277953620303919 (Accessed:
December 3, 2022).
13. Mannes, A. (2020) Governance, risk, and Artificial Intelligence, AI Magazine. Available
at: https://ojs.aaai.org/index.php/aimagazine/article/view/5200 (Accessed: December 3,
2022). 
14. Taylor, I. (2020) Who Is Responsible for Killer Robots? Autonomous Weapons, Group
Agency, and the Military-Industrial Complex, Wiley Online Library. Available at:
https://onlinelibrary.wiley.com/doi/abs/10.1111/japp.12469 (Accessed: December 3,
2022). 
15. Safdar, N.M. et al. (2019) Ethical considerations in Artificial Intelligence, European
Journal of Radiology. Elsevier. Available at:
https://www.sciencedirect.com/science/article/pii/S0720048X19304188 (Accessed:
December 3, 2022).
16. Smith, H. (2020) Clinical AI: Opacity, accountability, responsibility and liability - ai &
society, SpringerLink. Springer London. Available at:
https://link.springer.com/article/10.1007/s00146-020-01019-6 (Accessed: December 3,
2022). 
17. Nelson, G.S. (2019) Bias in Artificial Intelligence, North Carolina Medical Journal.
North Carolina Medical Journal. Available at:
https://www.ncmedicaljournal.com/content/80/4/220?
utm_source=TrendMD&utm_medium=cpc&utm_campaign=North_Carolina_Medical_J
ournal_TrendMD_1 (Accessed: December 3, 2022). 
18. Shah, M. et al. (2020) Artificial Intelligence (AI) in urology-current use and future
directions: An itrue study, Turkish journal of urology. U.S. National Library of Medicine.
Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7731952/ (Accessed:
December 3, 2022). 
19. Parikh, R.B. (2019) Addressing bias in artificial intelligence in health care, JAMA. JAMA
Network. Available at: https://jamanetwork.com/journals/jama/article-abstract/2756196
(Accessed: December 3, 2022).
