
Affective Corners as a Problematic for Design Interactions

Published: 23 June 2023

Abstract

Domestic robots are already commonplace in many homes, while humanoid companion robots like Pepper are increasingly becoming part of different kinds of care work. Drawing on fieldwork at a robotics lab, as well as on our personal encounters with domestic robots, we use the metaphor of “hard-to-reach corners” to explore the socio-technical limitations of companion robots and our differing abilities to respond to these limitations. This article presents “hard-to-reach corners” as a problematic for design interactions, offering them as an opportunity for thinking about context and the intersectional aspects of adaptation.

1 Introduction

One of us, Ericka, has a robot lawnmower, named Robbie. He cuts the grass in straight lines, criss-crossing the yard until he bumps up against an object or the guide wire, at which point he turns around and heads in another direction. He cuts a pretty random pattern, but he manages to keep the grass well trimmed and has made, according to Ericka, yard work much less labour-intensive. Ericka's partner, who has done the majority of the physical labour needed to install the guide wires, level off steep inclines, and re-adjust flower-bed borders, would probably not agree with the labour-saving comment, but is also pretty happy with the way the lawn looks, now that it is properly adapted to Robbie's needs.
The other of us, Katherine, has a robot vacuum cleaner, named Fido. Katherine's oldest son is enthralled with it and thinks of it as a pet. Katherine's youngest child, however, was initially terrified of Fido, screaming every time Fido started up. The family had to adapt Fido's runtimes to avoid the youngest child's awake times and has, progressively, worked on helping their daughter overcome her fear so that Fido can be a part of the family home without terrifying its youngest member.
Both of us have been working together1 on a third robot, Pepper.2 Pepper is, of course, a very familiar robot, at least within the robotics community. Even Pepper, however, took some adapting to. Ericka initially tried to interact by poking at Pepper's blinking eyes, a faux pas that did not work at all. Katherine approached Pepper with verbal communication instead, noting that Pepper gave a different impression in Swedish and in English. And while we were observing Pepper being used in an experiment to teach older people aerobics,3 we saw that Pepper did not always communicate very well with the humans who were supposed to follow her. In some of these interactions, Pepper and the person were able to understand each other and complete the exercise just fine. In others, however, there were breakdowns in communication, and on occasion the human visibly wanted to stop early and not continue interacting with Pepper. Adaptation to Pepper's style of communication did not always occur smoothly.
Inspired by the difficulties that Robbie and Fido had in reaching some of the literal corners of our homes and gardens, we use the metaphor of the “hard-to-reach corner” to explore the socio-technical limitations of companion robots and our responses to these limitations. Two of these opening anecdotes have described interactions with robots “in the wild,” highlighting the specificity of the context in which the robots operate as well as hinting at the adjustments that we have learned to make to integrate these new members of our households. The third is an example of how those corners also appeared in a laboratory experiment. This article is a critical exploration of robot design and a reflexive exercise on our attitudes toward robots (and our willingness to learn to live with them). More precisely, we want to use our homely reflections on Fido and Robbie to think critically about robots like Pepper. Marketed as a “robot designed to interact with humans,” Pepper is explicitly intended to offer affective, embodied interaction, meaning that our attention here spans not just the physical “corners” of robot design, but the affective ones as well.
Domestic robots like Fido and Robbie are already commonplace in many homes, while companion robots like Pepper are increasingly becoming part of different kinds of care work in the Global North (often imagined to be potentially useful for health care both in the home and at care facilities [1–3]). We assert that this requires attention to several aspects of these new interlocutors. First, a critical attention to the design and programming of these creatures and to the interactions this presupposes and produces [4–6]. Following on from that, and informed by discussions of relational ontological intra-actions with technology [7–9], an exploration of how the humans in this interaction “learn” to work together with these creatures: when we realise they cannot reach the corners, what do we do [10]? When we suspect they may be reaching corners we would rather keep hidden, what do we do [6, 11]? Then, a long hard look at our own reactions to their presence—do we instinctively lean in and learn how to interact, or do we get frustrated/bored by the species gap, and why [12]? And, finally, a discussion about which intersectional aspects of our positions allow us to choose whether or not to adapt to a robot, which brings up the issue of privilege. Taken together, these questions and concerns highlight the importance of context and contexting [13] when studying the adoption of technologies, and the need for theoretical perspectives that lift up how inequalities are re/produced in the design and development of technologies [14–21].
Inspired by these bodies of work—and especially attuned to theoretical critiques that point to the power-loaded and contingent complexity of social–technological relations, as well as to discussions about the intersectional elements of the social structures and relations engaged in and by affective responses—we rethink our experiences with robots by examining hard-to-reach corners in human–robot interaction (HRI). This scholarship draws attention not only to the necessity of adaptation and learning in a successful HRI but also to the conditions for this adaptive work and the scaffolding it engages. Who is able to adapt, and what happens to those who cannot/will not adapt?
In what follows, we situate our experiences within the HRI scholarship on adaptation and learning. Drawing on our experience with Pepper, together with our private encounters with robots, we use these qualitative accounts as the basis for an examination of the “corners” of these robots. We suggest that “affective corners” could serve as a problematic for design interactions, encouraging HRI design to think about the intersectional power and positionality that shape the human–robot relation in context, or “in the wild.”

2 Learning and Adaptation

There is no doubt that the promise of autonomous social robots so beloved of science fiction is still a long way out of reach [24]. Our interactions with Robbie and Fido lifted up some of the limitations associated with the design of these robot bodies (for example, level ground or open spaces are required for effective functioning). Meanwhile, the interaction with Pepper that we describe in the next sections highlights a few of the challenges surrounding interaction with embodied affective agents (such as speech recognition and smoothness of movement). The literature within HRI acknowledges these limitations, and others, and has engaged extensively with different aspects of the learning and adaptation process that result from these physical and affective “corners.”
Cutting-edge technical development in social robotics focuses on making interactions more humanlike through improvements in speech, movement, and emotion programming [32, 34]. Developers design robots to perform humanlike emotional gestures to build trust and empathy with human “users” (up to a point; see Reference [35] on “the uncanny valley”). Recent work by Jones et al., for example, examined how robot tutors may be used to support primary school children in developing self-regulated learning techniques that would allow them to learn more effectively. However, their study also highlighted how the efficacy of the robot's support was highly dependent on its being able to model positive social behaviours that create trust and engagement with the student. Forming good social interactions and being able to adapt to different levels of learner ability were noted by the authors as areas requiring further investigation [36].
HRI has explored many aspects of a successful interaction between humans and robots, often taking as a starting point the myriad aspects involved in successful human teamwork [44, 45] but also taking inspiration from human-animal teams [46]. These aspects include both emotional labour (for example, making an effort [47] or emotional scaffolding [48]), as well as a focus on specific physical activities (such as handovers [49] or adapting walking speed of a humanoid robot [50]).
Adaptation of both robot to human [51, 52] and human to robot [53, 54] plays an important role in ensuring long-term adoption of robots into human lives. In recent years, this has included developing models to understand mutual adaptation [55, 56]. From our perspective, the work on adaptation is particularly interesting in terms of how variation in adaptation is understood. “We define the adaptability as the probability of the human switching from their mode to the robot mode. It would be unrealistic to assume that all users are equally likely to adapt to the robot. Instead, we account for individual differences by parameterizing the transition function P by the adaptability α of an individual” [56].
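To make concrete how a single parameter like α operates in such models, the following minimal sketch (our own illustration in Python, not the published model from Reference [56]; the function name and the single-parameter dynamics are our assumptions) simulates a transition rule in which a human whose mode differs from the robot's switches to the robot's mode with probability α at each step, and otherwise persists in their own.

```python
import random

def simulate_interaction(alpha, human_mode, robot_mode, steps=10, seed=None):
    """Toy sketch of an adaptability-parameterized transition rule.

    At each step, a human whose mode differs from the robot's switches
    to the robot's mode with probability `alpha` (their adaptability);
    otherwise they keep their own mode. This is an illustrative
    assumption, not the full formalization in Reference [56].
    """
    rng = random.Random(seed)
    history = [human_mode]
    for _ in range(steps):
        if human_mode != robot_mode and rng.random() < alpha:
            human_mode = robot_mode  # the human adapts to the robot
        history.append(human_mode)
    return history

# A highly adaptable user (alpha = 0.8) usually converges quickly;
# a reluctant one (alpha = 0.05) may never switch within the session.
print(simulate_interaction(0.8, "human_strategy", "robot_strategy", seed=1))
print(simulate_interaction(0.05, "human_strategy", "robot_strategy", seed=1))
```

Even in this toy form, the point we return to below is visible: everything about the individual is compressed into one number, and two users who differ only in α follow entirely different interaction trajectories.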
Models such as the one outlined above are necessary to standardize the programming and development of social robots whilst at the same time accounting for variance in human responses to such robots. However, with an eye to the Science and Technology Studies (STS) literature, we wonder what kind(s) of individuals and individual uses are assumed as the basis for this measure of adaptability [25, 57]. To what extent are intersections of gender, age, ability, ethnicity, or socioeconomic status taken into account when modelling for variance in how willing/able humans are to adapt [25]?

3 Learning to Live with Our Robots

During our first encounters with Pepper, both Ericka and Katherine instinctively tried to adapt their behaviour to elicit better responses from Pepper, just as our households have done with the robotic lawnmower and vacuum cleaner. In this section, we draw attention to the different kinds of learning/negotiation that are necessary for smooth interaction with a social robot by reflecting on our own experiences with Robbie, Fido, and Pepper. Through a critical attention to context and privilege, we highlight the many small negotiations that we make and reflect on what makes it possible for us to perform them.
Take Robbie, for example. The labour-saving lawnmower is not labour-free, but you would have to find the person doing the labour to hear this. Ericka is not the right one—or at least not the only one—to ask about this. Her partner would tell you a different story. This may be a lesson we should take into our analysis of care robots (inspired by References [26, 27], but also taking lessons learned from STS theories about care [28–30] and from studies of the evaluation and ethics of implementing care through technologies like telemedicine [31]).
Like Ericka's partner with Robbie, Katherine and her partner also do a fair amount of labour to facilitate Fido's work—picking up toys and strategically placing cushions where they know Fido will get stuck under furniture. Meanwhile, to cater to their daughter's fear, they have also adapted the use of Fido—instead of leaving him switched on and charging at his base station all the time, he is turned off to prevent spontaneous movements. They now only use him when they are out of the house—no more companionable pottering about for this housebot. This means that their son no longer interacts with Fido and the imaginary4 around Fido has decreased. There are no more long discussions about Fido “eating” crumbs or tickling feet. Katherine and her partner created an imaginary around Fido's actions to help their son make sense of him, and they learned after the arrival of their daughter to adapt the use of him to fit around her needs. Responses to robots are both highly individual and adaptable, if one is prepared to learn and to do the emotional/physical work involved.
One never forgets that Pepper is a robot. She hums and whirrs as she moves, and sometimes she needs to be rebooted. But she is sufficiently humanlike that Katherine wanted to interact and found herself mirroring Pepper's movements and writing “her” when she started work on this article. There was a desire to learn what to do to make the interaction with Pepper smoother and to know which gestures registered with her and which questions prompted a “reasonable” response. As such, the human–robot encounter can be seen as both an issue of design [32, 33] and an issue of human learning [8, 37].
The encounters with Pepper, Robbie, and Fido suggest that Ericka and Katherine (and their partners) are prepared to do quite a lot of learning and adaptation to accommodate these non-humans into their personal spaces. This includes physical adaptation of the environment (garden landscaping), temporal adaptation of daily life (scheduling Fido's movements to avoid distress), and affective adaptation (creation of a narrative that “makes sense” of the robot's presence and actions for both ourselves and more vulnerable family members). This labour makes clear the necessity of human mediation and learning to smooth the introduction of affective embodied robots into daily life. However, it also—for us—poses questions about the conditions for this work. We are able to give time, energy, and money to make these changes and were physically able and emotionally confident enough to perform them. We were also willing to have robots in our lives, proactively choosing them to help us and thus willing to adapt. Given the intended context of use for robots such as Pepper, what should robot designers and programmers be taking into consideration when planning for learning? How might this differ for those who have not chosen a robot carer but rather have been given one to relieve human caregivers? Who in this context does the adaptation work, and how might this be accounted for?

4 Affective Corners—Hard-to-reach?

Commercial rhetoric around Pepper (and her peers) imagines her to be a robot that can integrate into a domestic environment and assist in the tasks needed to run the place, including providing some form of companionship [38]. But we all know that domestic environments are not only full of physical and emotional kitchen corners; they are also rife with differing desires, power structures, and power struggles.
Pepper's integration into this sphere would hardly be a neutral event that only served objective, mutually agreed-upon tasks. Surely there would be corners that some members of a domestic space would like cleaned out but others would not. And surely Pepper would be a way for outsiders to see into that space and impose their wills where insiders would rather they did not. This seems particularly probable and problematic when one realizes that Pepper and other companion robots are often touted as a “solution” to the loneliness and need for care imagined in the elderly (strongly critiqued in References [39, 40]). This need for care may exist, but it is also imagined by those who have some responsibility for providing that care, people who, through a sense of familial responsibility or employment or professional position, feel they are tasked with ensuring the well-being of another person. How can those people know their charge is feeling ok, eating correctly, ingesting the appropriate meds at the right times, visiting the bathroom, exercising, sleeping, or breathing? Surely a robot like Pepper would be useful to them.
We suggest that there are stakes in learning to interact with robots. We bring our personal positions, privileges, and insecurities into the interaction with the robot. How can we program for these? One can have an uneven power relation even when one of the interlocutors is a robot, and recognition of these power dynamics—especially in the wild—should be a part of robotics research design.
This brings us back to the tension between standardization in robot programming and the highly specific and personal situations of human interlocutors, with their differing abilities to adapt to the robot interaction. In a successful interaction, the humans must do emotional smoothing work, physical adjustment work, and the work of protecting vulnerable others. Why do we bother? Why do we want to believe that an interaction with a robot can be anything more than an automatable exchange? Are we also, perhaps, doing this work to fathom the depths of the robot's emotional, affective knowledge? Are we concerned not so much with what the robot is feeling as with how much the robot knows about our feelings? There is the risk that humans will feel pressured by robots to do certain things, but perhaps there is also the opportunity for humans to learn new methods of showing or hiding their affective responses.
We are used to theorizing about affect as something that occurs in relational practice [41–43], both between humans and in the human/non-human relation. This approach raises questions about where we make the cuts that decide which parts of an affective encounter are human and which are non-human; who is responsible for these cuts, and how and where they are drawn, are political and important questions [8, 9]. But those questions focus on the presence and legibility of affective responses. What we want to point out with this article is that it may be just as interesting to become attuned to the hiding away and illegibility (intentional or unintentional) of affective responses in the human/non-human relation. It may be easy to think of a breakdown in communication—the current difficulty that Pepper has in reading emotions, for example—as a technical problem to be addressed with better sensors and more sensitive programming. But perhaps that affective corner is filled with things someone intends to keep hidden there. And knowing when our robot companions should look away is something we could appreciate in these new interlocutors, just as knowing how to distract their vision from a corner is a skill we will be forced to learn.
In brief, if we accept that “corners” in robot ability exist—be it in dexterity, language, or emotion—then learning/adaptation on the part of the human is necessary. This much is widely discussed and assumed within HRI. What we need to consider more carefully now are the assumed conditions for that adaptation, and which humans cannot or will not adapt. With that in mind, we propose the following questions as a basis for development:
Is it possible to design in a way that assists with learning/adaptation?
What bodies/abilities are currently assumed in the process?
Is it possible to design in a way that sees “corners” as opportunities? [22]
How can we develop an ethics of “corners” that acknowledges their role as both respite and failure?
And how can we more explicitly design for the human “scaffolding” that is necessary in human–robot encounters?

5 Conclusion: Designing for Connection

In this article we have used the metaphor of hard-to-reach corners to help us think through the limitations that currently exist in the design and technical capacities of social robots. In recounting and reflecting upon our experiences with Robbie, Fido, and Pepper, we aim to contribute to the existing wealth of technical literature about adaptation/learning within HRI by encouraging further work that engages a qualitative, context-sensitive analysis of robotic encounters. We want to draw attention to the specificity of contexts and bodies by thinking of corners not only as limitations in robot capacity that demand adaptation, but potentially as places of respite for humans who feel more ambivalent about robot care than we do. We have highlighted how bodies and affects that fall outside the assumed (normative) range may not find adapting to life with a robot to be the pleasurable, curiosity-driven experience that we did. Our curiosity about these technologies reflects our privileged position, as we are able/willing to make the adaptations for ourselves (and others in our households) to co-exist harmoniously with our robots. However, this is not the case for everyone—especially those who are unable or unwilling to adapt (due to age, sickness, or lack of confidence with technology). This seems particularly pertinent given the care contexts in which Pepper is used.
Care and companion robots such as Pepper, whom we met, or Paro, the famous baby seal, are premised on a mutual learning experience in which robot and human must gradually adjust to one another [37]—much as in a human–human interaction. But if many users are incapable of really knowing the robot, and easily bored when it does not do what is expected, then this mutual learning may be only partially successful. Successful interactions require some framing and a commitment to adapting human behaviour, in return for the promise of a “better” life. Or the interactions may simply remain unsuccessful. We suggest that these are emotional corners that the robots cannot reach, and that some people's corners might be more acute and difficult to get into than others. To approach these, we suggest sensitivity to the intersectional aspects of context.
Our reflections here span both laboratory study and “in the wild” encounters and prompt three important discussions around design and adaptation:
“In the wild” encounters introduce a higher level of variation in both participant response and environmental challenges—how might existing models bridge that gap in variation, while engaging a sensitivity to the privileged positions that are capable and willing to adapt and those that are not?
Lab studies tend to enrol participants who are positive toward robots, while “in the wild” encounters may include less-welcomed encounters—how then can power and agency be accounted for in designing social robots for care contexts?
“Scaffolding” of the interaction by another human is particularly important in the case of vulnerable humans, who may be rendered scared, confused, uncomfortable, or frustrated by the robot—how can we plan for the presence of additional humans, and what impact does this have on the promise of robots taking over humans’ work?
We hope that this perspective can be read as an invitation to further interdisciplinary collaborations, in which technical models of adaptation might be tested or enriched through interaction with qualitative analysis, or through dialogue around the assumed uses and users of social robots. We have noted a recurring tension in our various interactions with social robots and wish to advocate for interdisciplinary work in which qualitative, context-sensitive analysis of interactions and technologies may complement technical advances. The scholarship we have discussed above, for example, provides such analyses/case studies, exploring how interactions with robots designed to care for humans may produce widely varying experiences, particularly in vulnerable populations [58–61]. We suggest that another possible pathway would be to design around the concept of contexts rather than just users, carefully thinking through how structural aspects of a context (like power asymmetries in workplaces) produce “users” who are afforded (sometimes limited) privileges around adaptation. Interdisciplinary teams that engage more qualitative, critical work from social science fields outside robotics could be one way to start down such a path.
We propose to bring to the conversation a critical, qualitative attention to the conditions for adaptation that seem to lie outside the current models for adaptation. Our attention concerns, first, individuals’ capacity for adaptation based on their personal position, and, second, the context-specific conditions for the interaction, particularly relevant to “in the wild” encounters. We offer this as an opportunity to pay attention to the power structures in place within an encounter, which may be hard to show in quantitative models but have a significant impact on people's affective responses toward robots and their own sense of agency in the encounter.

Acknowledgments

The authors thank Professor Amy Loutfi of the Machine Perception and Interaction Lab in Örebro, Sweden, for allowing us to observe the experiments with Pepper. An early draft of this article was presented and discussed at the “P6–Body, Knowledge, Subjectivity” seminar group at Department of Thematic Studies, Linköping University, Sweden, and the authors acknowledge the productive feedback received from this group.

Footnotes

1
The ethics and social consequences of AI and caring robots. Learning trust, empathy and accountability. https://liu.se/en/research/caring-robots.
3
Our collaborator's main focus is on “perceived safety in HRI” [23]. This involves exploring the ways in which an interaction could be safe physically, but the user may still feel unsafe. Perceived safety depends on several factors: comfort, sense of control, trust, experience, and familiarity with the robot and the environment.
4
The “imaginary” around robots is a set of collective ideas, expectations, and fantasies about what robots are and how they might interact with humans (see Reference [4]). In the specific case of Fido, the imaginary constitutes a fantasy narrative about Fido's personality and role in the home (including naming and gendering the robot) that allows broader cultural notions about robots to be integrated into a “safe” domestic context.

References

[1]
A. DeFalco. 2020. Towards a theory of posthuman care: Real humans and caring robots. Body Soc. 26, 3 (2020), 31–60.
[2]
R. Sparrow. 2016. Robots in aged care: A dystopian future? AI Soc. 31, 4 (2016), 445–454.
[3]
J. Wright. 2019. Robots vs migrants? Reconfiguring the future of Japanese institutional eldercare. Crit. Asian Stud. 51, 3 (2019), 331–354.
[4]
J. Rhee. 2018. The Robotic Imaginary: The Human and the Price of Dehumanized Labour. University of Minnesota Press.
[5]
B. Fischer, B. Östlund, and A. Peine. 2020. Of robots and humans: Creating user representations in practice. Soc. Stud. Sci. 50, 2 (2020), 221–244.
[6]
Y. Strengers and J. Kennedy. 2020. The Smart Wife. MIT Press, Cambridge, MA.
[7]
C. Thompson. 2005. Making Parents. MIT Press, Cambridge, MA.
[8]
L. Suchman. 2007. Human-Machine Reconfigurations: Plans and Situated Actions. Cambridge University Press, Cambridge, UK.
[9]
K. Barad. 2007. Meeting the Universe Halfway. MIT Press, Cambridge, MA.
[10]
W. Davies. 2017. How are we now? Real-time mood-monitoring as valuation. J. Cult. Econ. 10, 1 (2017), 34–48.
[11]
I. McEwan. 2019. Machines Like Me. Penguin Books.
[12]
H. Katsuno. 2011. The robot's heart. Jpn. Stud. 31, 1 (2011), 93–109.
[13]
K. Asdal and I. Moser. 2012. Experiments in context and contexting. Sci. Technol. Hum. Val. 37, 4 (2012), 291–306.
[14]
R. Benjamin. 2019. Race After Technology. Polity, New York, NY.
[15]
C. D'Ignazio and L. Klein. 2019. Data Feminism. MIT Press, Cambridge, MA.
[16]
F. Eyssel and F. Hegel. 2012. (S)he's got the look: Gender stereotyping of robots. J. Appl. Soc. Psychol. 42, 9 (2012), 2213–2230.
[17]
S. U. Noble. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press, New York, NY.
[18]
J. Robertson. 2010. Gendering humanoid robots: Robo-sexism in Japan. Body Soc. 16, 2 (2010), 1–36.
[19]
J. Robertson. 2017. Robo Sapiens Japanicus. University of California Press, Los Angeles, CA.
[20]
R. Soraa. 2017. Mechanical genders: How do humans gender robots? Gender Technol. Dev. 21, 1-2 (2017), 99–115.
[21]
R. Sparrow. 2020. Robotics has a race problem. Sci. Technol. Hum. Val. 45, 3 (2020).
[22]
K. Harrison, K. Somasundaram, and A. Loutfi. (forthcoming). The imperfectly relatable robot: An interdisciplinary approach to failures in human-robot relations. In What That Robot Made Me Feel. MIT Press, Cambridge, MA.
[23]
N. Akalin, A. Kristoffersson, and A. Loutfi. 2019. Evaluating the sense of safety and security in human–robot interaction with older people. In Social Robots: Technological, Societal and Ethical Aspects of Human-Robot Interaction. Springer, Cham, 237–264.
[24]
R. Aylett and P. A. Vargas. 2021. Living with Robots: What Every Anxious Human Needs to Know. MIT Press, Cambridge, MA.
[25]
M. Arnelid, E. Johnson, and K. Harrison. 2022. What does it mean to measure a smile? Assigning numerical values to emotions. Valuat. Stud. 9, 1 (2022), 79–107.
[26]
C. Cockburn. 1983. Brothers: Male Dominance and Technological Change. Pluto Press Limited, London, England.
[27]
R. S. Cowan. 1983. More Work for Mother: The Ironies of Household Technology from the Open Hearth to the Microwave. Basic Books, New York, NY.
[28]
M. Puig de la Bellacasa. 2011. Matters of care in technoscience: Assembling neglected things. Soc. Stud. Sci. 41, 1 (2011), 85–106.
[29]
M. Murphy. 2015. Unsettling care: Troubling transnational itineraries of care in feminist health practices. Soc. Stud. Sci. 45, 5 (2015), 717–737.
[30]
A. Martin, N. Myers, and A. Viseu. 2015. The politics of care in technoscience. Soc. Stud. Sci. 45, 5 (2015), 625–641.
[31]
M. Mort, C. Roberts, J. Pols, M. Domenech, and I. Moser. 2015. Ethical implications of home telecare for older people: A framework derived from a multisited participative study. Health Expect. 18, 3 (2015), 438–449.
[32]
J. Bütepage, H. Kjellström, and D. Kragic. 2018. Anticipating many futures: Online human motion prediction and generation for human-robot interaction. In Proceedings of the IEEE International Conference on Robotics and Automation.
[33]
M. Alirezaie and A. Loutfi. 2015. Reasoning for sensor data interpretation: An application to air quality monitoring. J. Amb. Intell. Smart Environ. 7, 4 (2015), 579–597.
[34]
D. McColl, A. Hong, N. Hatakeyama, G. Nejat, and B. Benhabib. 2016. A survey of autonomous human affect detection methods for social robots engaged in natural HRI. J. Intell. Robot Syst. 82.
[35]
M. Mori. 2012. The uncanny valley. IEEE Robot. Autom. Mag. (June 2012).
[36]
A. Jones, S. Bull, and G. Castellano. 2018. “I know that now, I'm going to learn this next”: Promoting self-regulated learning with a robotic tutor. Int. J. Soc. Robot. 10, 4 (2018).
[37]
C. Breazeal. 2002. Designing Sociable Robots. MIT Press, Cambridge, MA.
[38]
J. Robertson. 2018. Robo Sapiens Japanicus: Robots, Gender, Family and the Japanese Nation. University of California Press, Los Angeles, CA.
[39]
R. Sparrow and L. Sparrow. 2006. In the hands of machines? The future of aged care. Mind Mach. (2006).
[40]
R. Calo. 2012. Robots and privacy. In Robot Ethics, P. Lin, K. Abney, and G. A. Bekey (Eds.).
[41]
S. Ahmed. 2004. The Cultural Politics of Emotion. Edinburgh University Press, Edinburgh, UK.
[42]
S. Ahmed. 2010. The Promise of Happiness. Duke University Press, Durham, NC.
[43]
M. Gregg and G. J. Seigworth (Eds.). 2010. The Affect Theory Reader. Duke University Press, Durham, NC.
[44]
J. E. Mathieu et al. 2000. The influence of shared mental models on team process and performance. J. Appl. Psychol. (2000).
[45]
S. Nikolaidis, P. Lasota, R. Ramakrishnan, et al. 2015. Improved human–robot team performance through cross-training, an approach inspired by human team training practices. Int. J. Robot. Res. 34, 14 (2015), 1711–1730.
[46]
E. Phillips, K. E. Schaefer, D. R. Billings, F. Jentsch, and P. A. Hancock. 2016. Human-animal teams as an analog for future human-robot teams: Influencing design and fostering trust. J. Hum.-Robot Interact. 5, 1 (2016), 100–125.
[47]
A. Vignolo, H. Powell, F. Rea, A. Sciutti, L. Mcellin, J. Michael. 2021. A humanoid robot's effortful adaptation boosts partners’ commitment to an interactive teaching task. J. Hum.-Robot Interact. 11, 1 (2021), Article 9.
[48]
J. Páez and E. González. 2021. Human-robot scaffolding. (unpublished).
[49]
K. Strabala, M. K. Lee, A. Dragan, J. Forlizzi, S. S. Srinivasa, M. Cakmak, and V. Micelli. 2013. Toward seamless human-robot handovers. J. Hum.-Robot Interact. 2, 1 (2013), 112–132.
[50]
E. Sviestins, N. Mitsunaga, T. Kanda, H. Ishiguro, and N. Hagita. 2007. Speed adaptation for a robot walking with a human. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI'07). Association for Computing Machinery, 349–356.
[51]
H. R. Lee, J. Sung, S. Šabanović, and J. Han. 2012. Cultural design of domestic robots: A study of user expectations in Korea and the United States. In Proceedings of the 21st IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN’12). 803–808.
[52]
S. Reig, M. Luria, J. Wang, D. Oltman, E. Carter, A. Steinfeld, J. Forlizzi, and J. Zimmerman. 2020. Not some random agent: Multiperson interaction with a personalizing service robot. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction. Association for Computing Machinery, 23–26.
[53]
V. Groom and C. Nass. 2007. Can robots be teammates? Benchmarks in human-robot teams. Interact. Stud. 8 (2007), 483–500.
[54]
M. Kwon, M. F. Jung, and R. A. Knepper. 2016. Human expectations of social robots. In Proceedings of the 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI’16). 463–464.
[55]
S. Nikolaidis, A. Kuznetsov, D. Hsu, et al. 2016. Formalizing human-robot mutual adaptation via a bounded memory based model. In Proceedings of the International Conference on Human-Robot Interaction. 75–78.
[56]
S. Nikolaidis, D. Hsu, and S. Srinivasa. 2017. Human-robot mutual adaptation in collaborative tasks: Models and experiments. Int. J. Robot. Res. 36, 5-7 (2017), 618–634.
[57]
M. Akrich. 1992. The de-scription of technical objects. In Shaping Technology/Building Society, Studies in Sociotechnical Change. MIT Press, Cambridge, MA, 205–224.
[58]
N. C. M. Nickelsen. 2020. ‘Active citizenship’ and feeding assistive robots. In Designing Robots, Designing Humans. Routledge, Milton Park, UK.
[59]
A. Sharkey, N. Wood, and R. Aminuddin. 2020. Robot companions for children and older people: Ethical issues and evidence. In Designing Robots, Designing Humans. Routledge, Milton Park, UK.
[60]
A. Mol. 2008. The Logic of Care: Health and the Problem of Patient Choice. Routledge, Milton Park, UK.
[61]
A. Mol, I. Moser, and J. Pols. 2010. Care in Practice: On Tinkering in Clinics, Homes and Farms. Transcript Verlag, Bielefeld, Germany.

Published In

ACM Transactions on Human-Robot Interaction, Volume 12, Issue 4
December 2023, 374 pages
EISSN: 2573-9522
DOI: 10.1145/3604628
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 23 June 2023
Online AM: 15 May 2023
Accepted: 13 April 2023
Revised: 28 February 2023
Received: 24 May 2021
Published in THRI Volume 12, Issue 4

Author Tags

  1. Social robotics
  2. design
  3. affect


Funding Sources

  • Wallenberg AI, Autonomous Systems and Software Program – Humanities and Society (WASP-HS)
  • Marianne and Marcus Wallenberg Foundation
