Abstract
Artificial intelligence (AI) is now widely used in many fields, from intelligent virtual assistants to medical diagnosis, yet there is still no consensus on how to deal with the ethical issues it raises. Using a systematic literature review and an analysis of recent real-world news about AI-infused systems, we cluster existing and emerging AI ethics and responsibility issues into six groups: broken systems, hallucinations, intellectual property rights violations, privacy and regulation violations, enabling malicious actors and harmful actions, and environmental and socioeconomic harms. We discuss their implications and conclude that the problem needs to be reflected upon and addressed across five partially overlapping dimensions: Research, Education, Development, Operation, and Business Model. This reflection may help caution against potential dangers and frame further research at a time when products and services based on AI are growing explosively. Moreover, exploring effective ways to involve users and civil society in discussions on the impact and role of AI systems could help increase trust in, and understanding of, these technologies.
Acknowledgments
This work was partially funded by FCT-Foundation for Science and Technology, I.P./MCTES through national funds (PIDDAC), within the scope of CISUC R&D Unit-UIDB/00326/2020 or project code UIDP/00326/2020.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Cunha, P.R., Estima, J. (2023). Navigating the Landscape of AI Ethics and Responsibility. In: Moniz, N., Vale, Z., Cascalho, J., Silva, C., Sebastião, R. (eds) Progress in Artificial Intelligence. EPIA 2023. Lecture Notes in Computer Science, vol. 14115. Springer, Cham. https://doi.org/10.1007/978-3-031-49008-8_8
DOI: https://doi.org/10.1007/978-3-031-49008-8_8
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-49007-1
Online ISBN: 978-3-031-49008-8
eBook Packages: Computer Science (R0)