Extended Abstract
DOI: 10.1145/3597512.3597529
Responsible Agency Through Answerability: Cultivating the Moral Ecology of Trustworthy Autonomous Systems

Published: 11 July 2023

Abstract

The decades-old debate over so-called ‘responsibility gaps’ in intelligent systems has recently been reinvigorated by rapid advances in machine learning techniques that are delivering many of the capabilities of machine autonomy that Matthias [1] originally anticipated. The emerging capabilities of intelligent learning systems highlight and exacerbate existing challenges with meaningful human control of, and accountability for, the actions and effects of such systems. The related challenge of human ‘answerability’ for system actions and harms has come into focus in recent literature on responsibility gaps [2, 3]. We describe a proposed interdisciplinary approach to designing for answerability in autonomous systems, grounded in an instrumentalist framework of ‘responsible agency cultivation’ drawn from moral philosophy and cognitive sciences as well as empirical results from structured interviews and focus groups in the application domains of health, finance and government. We outline a prototype dialogue agent informed by these emerging results and designed to help bridge the structural gaps in organisations that typically impede the human agents responsible for an autonomous sociotechnical system from answering to vulnerable patients of responsibility.

References

[1] Andreas Matthias. 2004. The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics Inf Technol 6, 175–183.
[2] Mark Coeckelbergh. 2020. Artificial intelligence, responsibility attribution, and a relational justification of explainability. Sci Eng Ethics 26, 2051–2068.
[3] Daniel W. Tigard. 2021. There is no techno-responsibility gap. Philos. Technol. 34, 589–607.
[4] Deborah G. Johnson. 2011. Software agents, anticipatory ethics, and accountability. In G. Marchant, B. Allenby and J. Herkert (eds.), The Growing Gap Between Emerging Technologies and Legal-Ethical Oversight. Springer, Dordrecht, 61–76.
[5] Johannes Himmelreich. 2019. Responsibility for killer robots. Ethical Theory and Moral Practice 22(3), 731–747.
[6] Peter Königs. 2022. Artificial intelligence and responsibility gaps: What is the problem? Ethics Inf Technol 24.
[7] S. Köhler, H. Sauer and N. Roughley. 2017. Technologically blurred accountability? Technology, responsibility gaps and the robustness of our everyday conceptual scheme. In C. Ulbert, P. Finkenbusch, E. Sondermann and T. Debiel (eds.), Moral Agency and the Politics of Responsibility. Routledge, 51–67.
[8] Daniel W. Tigard. 2021. Technological answerability and the severance problem: Staying connected by demanding answers. Sci Eng Ethics 27(59), 1–20.
[9] Maximilian Kiener. 2022. Can we bridge AI's responsibility gap at will? Ethical Theory and Moral Practice 25, 575–593.
[10] R. A. Duff. 2009. Legal and moral responsibility. Philosophy Compass 4(6), 978–986.
[11] Shannon Vallor and Bhargavi Ganesh. 2023. AI and the imperative of responsibility: Reconceiving AI governance as social care. In M. Kiener (ed.), The Routledge Handbook of Philosophy of Responsibility, Chapter 31. Routledge.
[12] Lyria Bennett Moses. 2007. Recurring dilemmas: The law's race to keep up with technological change. Illinois Journal of Law, Technology and Policy 2007(2), 239–285.
[13] David Nersessian and Ruben Mancha. 2020. From automation to autonomy: Legal and ethical responsibility gaps in artificial intelligence innovation. Michigan Technology Law Review 27, 55.
[14] Johan Ordish. 2023. Large language models and software as a medical device. MedRegs blog, MHRA. Retrieved from https://medregs.blog.gov.uk/2023/03/03/large-language-models-and-software-as-a-medical-device/
[15] Martin Sand, Juan Manuel Duran and Karin Rolanda Jongsma. 2021. Responsibility beyond design: Physicians' requirements for ethical medical AI. Bioethics 36, 162–169.
[16] Umang Bhatt, McKane Andrus, Adrian Weller and Alice Xiang. 2020. Machine learning explainability for external stakeholders. arXiv:2007.05408. Retrieved from https://arxiv.org/abs/2007.05408
[17] Pouyan Esmaeilzadeh. 2020. Use of AI-based tools for healthcare purposes: A survey study from consumers' perspectives. BMC Med Inform Decis Mak 20, 170.
[18] NHS AI Lab and Health Education England. 2022. Understanding healthcare workers' confidence in AI. Report 1 of 2. 90 pages. Retrieved from https://digital-transformation.hee.nhs.uk/binaries/content/assets/digital-transformation/dart-ed/understandingconfidenceinai-may22.pdf
[19] Jordan P. Richardson, Cambray Smith, Susan Curtis, Sara Watson and Richard R. Sharp. 2021. Patient apprehensions about the use of artificial intelligence in healthcare. npj Digit. Med. 4, 140.
[20] Virginia Braun and Victoria Clarke. 2006. Using thematic analysis in psychology. Qualitative Research in Psychology 3(2), 77–101.

Cited By

(2024) In Whose Voice?: Examining AI Agent Representation of People in Social Interaction through Generative Speech. In Proceedings of the 2024 ACM Designing Interactive Systems Conference, 224–245. DOI: 10.1145/3643834.3661555. Online publication date: 1 July 2024.


      Published In

TAS '23: Proceedings of the First International Symposium on Trustworthy Autonomous Systems
July 2023, 426 pages
ISBN: 9798400707346
DOI: 10.1145/3597512

      Publisher

Association for Computing Machinery, New York, NY, United States


      Author Tags

      1. AI ethics
      2. Agency
      3. Answerability
      4. Dialogue agents
      5. Responsibility gaps
      6. Sociotechnical systems design
