I am a PhD candidate at the University of Vienna working on the FWF-funded project 'Forms of Normativity – Transitions and Intersections'. My research centres on the ethical concerns arising within human-AI interaction. My project concentrates on a) the influenceability of human agents, and b) a subsequent elaboration of three approaches to the relations found within the construct of human-AI interaction.
Fluffy baby seal robots entertaining our grandparents, Barbie robots taking care of our children, or a doll enabling us to actually feel and see the bodily expressions of another person with whom we are communicating via Skype: PARO, Hello Barbie, and Telenoid are just a few examples of how robots are no longer thought of as mere tools. It is the desire to implement more complex and further-reaching robots, such as driverless cars and fully autonomous care robots, that underlines the present and future integration of artificial agents within the human lifeworld. It is this growing, broader-reaching human-robot interaction (HRI) that brings up new social and legal challenges, new interpretations, and new questions. Robots have implications for human feelings, for perceptions, and for the understanding of concepts. This brings forth the fundamental enquiry to be elaborated within this thesis: is the ascription of responsibility to robots an actual possibility, mere utopia, or dystopia?
The notion of trust has evolved to become one of the many nebulous buzzwords centring around AI. This paper aims to show that, with regard to human influenceability through AI, trust in AI is to be seen as problematic. Based on the notion of socio-technical epistemic systems, I will argue that the trust human agents place in AI is strongly related to what could be understood as algorithmic authority. The second part of the paper then translates the elaborated line of argument to the field of social robotics.
Communications in Computer and Information Science, 2021
The use of robots in combination with artificial intelligence (AI) is a trend that promises to relieve humans of difficult, time-consuming, or dangerous work. Intelligent robots aim to solve tasks more efficiently, more safely, or, in part, more reliably. Independent of the domain-specific challenge, the configuration of both (a) the robot and (b) the AI currently requires expert knowledge in robot implementation, security and safety regulations, legal and ethical assessment, and expertise in AI. In order to enable the co-creation of domain-specific solutions for robots with AI, we performed a laboratory survey – consisting of stakeholder interaction, literature research, proof-of-concept experiments using OMiLAB, and prototypes using a Robot Laboratory – to elicit requirements for an assistant system that (i) simplifies and abstracts robot interaction, (ii) enables the co-creative assessment and approval of the robot configuration using AI, and (iii) ensures reliable execution. A model-based approach has been elaborated in the nationally funded project complAI that demonstrates the key components of such an assistance system. The main concepts paving the way for a shift from research and innovation into real-world applications are discussed as an outlook.
AI as decision support supposedly helps human agents make 'better' decisions more efficiently. However, research shows that it can, sometimes greatly, influence the decisions of its human users. While there has been a fair amount of research on intended AI influence, there seem to be great gaps in both theoretical and practical studies concerning unintended AI influence. In this paper I aim to address some of these gaps and hope to shed some light on the ethical and moral concerns that arise with unintended AI influence. I argue that unintended AI influence has important implications for the way we perceive and evaluate human-AI interaction. To make this point approachable from both the theoretical and the practical side, and to avoid anthropocentrically laden ambiguities, I introduce the notion of decision points. Based on this, the main argument of the paper is presented in two consecutive steps: i) unintended AI influence does not allow for an appropriate determination of decision points – this will be introduced as the decision-point dilemma – and ii) this has important implications for the ascription of responsibility.
In the book Unfit for the Future, Persson and Savulescu portray the problems and challenges humanity will have to cope with in the near future. The problems that technological progress and demographic growth have evoked cannot be solved through our common moral psychology, which was adapted to small, non-technological societies long ago. Moral enhancement is therefore needed for humanity to be able to cope with present problems. It is Persson and Savulescu's view that humanity is 'ill-equipped' (p. 12) through so-called 'common-sense morality': the moral attitudes of various societies all over the world can be brought to one common denominator, which the authors call 'common-sense morality'. This 'common-sense morality' is not capable of giving us the moral psychology to cope with the problems modern societies have to face. Further in the book, Persson and Savulescu illustrate the components of this 'common-sense morality'. For example, it is said that we care more about what happen...