How should self-driving vehicles react when an accident can no longer be averted in dangerous situations? The complex issue of designing crash algorithms has been discussed intensively in recent research literature. This paper refines the discourse around a new perspective which reassesses the underlying dilemma structures in the light of a metaethical analysis. It aims at enhancing the critical understanding of both the conceptual nature and specific practical implications that relate to the problem of crash algorithms. The ultimate aim of the paper is to open up a way to building a bridge between the inherent structural issues of dilemma cases on the one hand and the characteristics of the practical decision context related to driving automation scenarios on the other. Based on a reconstruction of the metaethical structure of crash dilemmas, a pragmatic orientation towards the ethical design of crash algorithms is sketched and critically examined along two central particularities of the practical problem. Firstly, pertinent research on the social nature of crash dilemmas is found to be merely heuristic. Secondly, existing work from ethics of risk hardly offers explicit ethical solutions to relevant and urgent challenges. Further investigation regarding both aspects is ultimately formulated as a research desideratum.
Menschsein in einer technisierten Welt. Interdisziplinäre Perspektiven auf den Menschen im Zeichen der digitalen Transformation, 2022
With increasing autonomy, artificial systems are ever more frequently confronted with situations that require complex moral decisions about how to act. But are machines capable of autonomous moral agency at all? This essay first explains the extent to which artificial systems can be regarded as capable of agency in a primitive form. Building on this, relevant positions in machine ethics are outlined along various criteria of moral agency, and it is argued why artificial moral agents (AMAs) can count as moral agents only to a very limited degree. While the pertinent research literature in this context primarily discusses machines' lack of capacity for responsibility, this essay foregrounds their limited capacity for context sensitivity. On the basis of a particularist understanding of morality, it is argued that systems based on machine learning in particular can take the specific circumstances of complex moral decision situations into account only to a limited extent. This is finally illustrated by the case study of moral dilemmas in the application context of autonomous vehicles.
How should driverless vehicles respond to situations of unavoidable personal harm? This paper takes up the case of self-driving cars as a prominent example of algorithmic moral decision-making, an emergent type of morality that is evolving at a high pace in a digitised business world. As its main contribution, it juxtaposes dilemma decision situations relating to ethical crash algorithms for autonomous cars to two edge cases: the case of manually driven cars facing real-life, mundane accidents, on the one hand, and the dilemmatic situation in theoretically constructed trolley cases, on the other. The paper identifies analogies and disanalogies between the three cases with regard to decision makers, decision design, and decision outcomes. The findings are discussed from the angle of three perspectives: aspects where analogies could be found, those where the case of self-driving cars has turned out to lie in between both edge cases, and those where it entirely departs from either edge case. As a main result, the paper argues that manual driving as well as trolley cases are suitable points of reference for the issue of designing ethical crash algorithms only to a limited extent. Instead, a fundamental epistemic and conceptual divergence of dilemma decision situations in the context of self-driving cars and the used edge cases is substantiated. Finally, the areas of specific need for regulation on the road to introducing autonomous cars are pointed out and related thoughts are sketched through the lens of the humanistic paradigm.
Artificial Intelligence. Reflections in Philosophy, Theology, and the Social Sciences, 2020
The impending introduction of self-driving cars poses a new stage of complexity not only in technical requirements but in the ethical challenges it evokes. The question of which ethical principles to use for the programming of crash algorithms, especially in response to so-called dilemma situations, is one of the most controversial moral issues discussed. This paper critically investigates the rationale behind rule utilitarianism as to whether and how it might be adequate to guide the ethical behaviour of autonomous cars in driving dilemmas. Three core aspects related to the rule utilitarian concept are discussed with regard to their relevance for the given context: the universalization principle, the ambivalence of compliance issues, and the demandingness objection. It is concluded that a rule utilitarian approach might be useful for solving driverless car dilemmas only to a limited extent. In particular, it cannot provide the exclusive ethical criterion when evaluated from a practical point of view. However, it might still be of conceptual value in the context of a pluralist solution.
Zeitschrift für Ethik und Moralphilosophie (Journal for Ethics and Moral Philosophy), 2020
Wie sollen sich autonome Fahrzeuge verhalten, wenn ein Unfall nicht mehr abwendbar ist? Die Komplexität spezifischer moralischer Dilemmata, die in diesem Kontext auftreten können, lässt bewährte ethische Denktraditionen an ihre Grenzen stoßen. Dieser Aufsatz versteht sich als Versuch, neue Lösungsperspektiven mithilfe einer risikoethischen Sichtweise auf die Problematik zu eröffnen und auf diese Weise deren Relevanz für die Programmierung von ethischen Unfallalgorithmen aufzuzeigen. Im Zentrum steht dabei die Frage, welche Implikationen sich aus einer Auffassung von Dilemma-Situationen als risikoethische Verteilungsprobleme im Hinblick auf die Zulässigkeit von entsprechenden Risikoübertragungen ergeben. Dabei wird zunächst eine risikoethische Interpretation des zugrundeliegenden Entscheidungsproblems skizziert, welches durch seine dilemmatische Struktur eine besondere Risikokonstellation begründet. Ausgehend von den Positionen von Sven Ove Hansson und Julian Nida-Rümelin wird für einen deontologisch-risikoethischen Ansatz argumentiert, der auf Individualrechten einerseits und einer interpersonell gerechten Verteilung der entstehenden Schadensrisiken andererseits basiert. Diese beiden Kriterien werden für den Anwendungskontext des autonomen Fahrens konkretisiert. Zum einen wird in Bezug auf das erste Kriterium argumentiert, dass individuelle Rechte genau dann als angemessen gewahrt gelten können, wenn die resultierenden Risikoübertragungen auf die Einzelnen in ihrer absoluten Höhe jeweils zumutbar sind (absolutes Prinzip). Zum anderen werden Schwierigkeiten skizziert, die sich hinsichtlich der konkreten Umsetzung des zweiten Kriteriums der Verteilungsgerechtigkeit (relatives Prinzip) ergeben.
In diesem Zusammenhang werden beispielsweise ethische Herausforderungen in Bezug auf einen möglichen Vorteilsausgleich, das Prinzip der Schadensminimierung sowie individuell unterschiedliche Ausgangsbedingungen der persönlichen Schadensreduktion kritisch in den Blick genommen.
(English version:) How should self-driving cars react in cases of unavoidable collisions? The complexity of specific dilemma situations that might arise in the context of autonomous driving pushes well-established ethical traditions of thought to their limits. This paper attempts to open up new opportunities for approaching this issue. By reframing the underlying decision problem from the perspective of ethics of risk, it is argued that the latter is highly relevant for the programming of ethical crash algorithms. The paper’s main contribution lies in providing an interpretation of dilemma situations as ethical problems of risk distribution as well as an outline of the resulting implications in terms of the permissibility of imposing these risks. Initially, moral dilemmas are shown to constitute a particularly challenging risk constellation. Drawing upon the positions of Sven Ove Hansson and Julian Nida-Rümelin, the paper makes the case for a deontological approach based on individual rights on the one hand and a fair distribution of the resulting risks of damage on the other hand. These two criteria are applied to and further elaborated in the context of self-driving car dilemmas. With regard to the first criterion, it is argued that individual rights could be considered to be adequately safeguarded if, and only if, the resulting levels of risk imposition on the individual are acceptable (absolute principle). Besides, some difficulties that emerge in the context of the second criterion of distributive justice (relative principle) are outlined. At this point, various ethical challenges such as the compensation of potential benefits, the principle of harm minimisation, and fundamental differences in individuals’ possibilities of reducing personal harm are critically examined.
Envisioning Robots in Society – Power, Politics, and Public Space, Proceedings of Robophilosophy 2018 / TRANSOR 2018, Series: Frontiers in Artificial Intelligence and Applications, 2018
Although potentially able to reduce the number of severe road accidents, self-driving vehicles will still face situations where harming someone cannot be avoided. Therefore, there is a need for an ethical investigation into the programming of crash-optimization algorithms: which ethical principles are suitable to guide decisions in dilemma situations and to morally justify them? This paper presents an in-depth overview of research articles revealing the difficulties of a potential utilitarian solution. It evaluates an aggregative consequentialist approach that is adapted to the specific characteristics of dilemmas in autonomous driving, building upon the notions of negative utilitarianism and prioritarianism.
Published in: M. Coeckelbergh, J. Loh, M. Funk, J. Seibt, M. Nørskov (eds.). 2018. Envisioning Robots in Society – Power, Politics, and Public Space, Proceedings of Robophilosophy 2018 / TRANSOR 2018, Series: Frontiers in Artificial Intelligence and Applications, IOS Press, Amsterdam, 327-335.
Papers by Vanessa Schäffner