1 Introduction
One of us, Ericka, has a robot lawnmower, named Robbie. He cuts the grass in straight lines, criss-crossing the yard until he bumps up against an object or the guide wire, at which point he turns around and heads in another direction. He cuts a pretty random pattern, but he manages to keep the grass well trimmed and has made, according to Ericka, yard work much less labour intensive. Ericka's partner, who has done the majority of the physical labour needed to install the guide wires, level off steep inclines, and re-adjust flower bed borders, would probably not agree about the labour-saving comment, but is also pretty happy with the way the lawn looks, now that it is properly adapted to Robbie's needs.
The other of us, Katherine, has a robot vacuum cleaner, named Fido. Katherine's oldest son is enthralled with it and thinks of it as a pet. Katherine's youngest child, however, was initially terrified of Fido, screaming every time Fido started up. The family had to adapt Fido's runtimes to avoid the youngest child's awake times and has, progressively, worked on helping their daughter overcome her fear so that Fido can be a part of the family home without terrifying its youngest member. Both of us have been working together
on a third robot, Pepper.
Pepper is, of course, a very familiar robot, at least within the robot community. However, even Pepper took some adapting to. Ericka initially tried to interact by poking at Pepper's blinking eyes, a
faux pas that did not work at all. Katherine approached Pepper with verbal communication instead, noting that Pepper gave a different impression in Swedish and English. And while we were observing Pepper being used in an experiment to teach older people aerobics,
we saw that Pepper did not always communicate very well with the humans who were supposed to follow her. For some of these interactions, Pepper and the person were able to understand each other and complete the exercise just fine. In other interactions, however, there were breakdowns in communication and, on occasion, the human in the interaction visibly wanted to stop early and not continue interacting with Pepper. Adaptation to Pepper's style of communication did not always occur smoothly.
Inspired by the difficulties that Robbie and Fido had in reaching some of the literal corners of our homes and gardens, we use the metaphor of the “hard-to-reach corner” to explore the socio-technical limitations of companion robots and our responses to these limitations. Two of these opening anecdotes have described interactions with robots “in the wild,” highlighting the specificity of the context in which the robots operate as well as hinting at the adjustments that we have learned to make to integrate these new members of our households. The third is an example of how those corners also appeared in a laboratory experiment. This article is a critical exploration of robot design and a reflexive exercise on our attitudes toward robots (and our willingness to learn to live with them). More precisely, we want to use our homely reflections on Fido and Robbie to think critically about robots like Pepper. Marketed as a “robot designed to interact with humans,” Pepper is explicitly intended to offer affective, embodied interaction, meaning that our attention here spans not just the physical “corners” of robot design, but the affective ones as well.
Domestic robots like Fido and Robbie are already commonplace in many homes, while companion robots like Pepper are increasingly becoming part of different kinds of care work in the Global North (often imagined to be potentially useful for health care both in the home and at care facilities [1–3]). We assert that this requires attention to several aspects of these new interlocutors. First, a critical attention to the design and programming of these creatures and what interactions this presupposes and produces [4–6]. Following on from that, and informed by discussions of relational ontological intra-actions with technology [7–9], an exploration of how the humans in this interaction “learn” to work together with these creatures; when we realise they cannot reach the corners, what do we do [10]? When we suspect they may be reaching corners we would rather keep hidden, what do we do [6, 11]? Then, a long hard look at our own reactions to their presence—do we instinctively lean in and learn how to interact or do we get frustrated/bored by the species gap, and why [12]? And, finally, a discussion about what intersectional aspects of our positions allow us to choose to adapt or not to a robot, which brings up the issue of privilege. Taken together, these questions and concerns highlight the importance of context and contexting [13] when studying the adoption of technologies, and the need for theoretical perspectives that lift up how inequalities are re/produced in the design and development of technologies [14–21].
Inspired by these bodies of work—and especially attuned to theoretical critiques that point to the power-loaded and contingent complexity of social-technological relations, as well as discussions about the intersectional elements of social structures and relations engaged in and by affective responses—we are rethinking our experiences with robots by examining hard-to-reach corners in human–robot interaction (HRI). This scholarship draws attention not only to the necessity of adaptation and learning in a successful HRI but also to the conditions for this adaptive work and the scaffolding it engages. Who is able to adapt, and what happens to those who cannot/will not adapt?
In what follows, we situate our experiences within the HRI scholarship around adaptation and learning. Drawing on our experience with Pepper, together with our private encounters with robots, we use these qualitative accounts as the basis for an examination of the “corners” of these robots. We suggest that “affective corners” could serve as a problematic for interaction design, encouraging HRI design to think about the intersectional power and positionality that shape the human–robot relation in context, or “in the wild.”
2 Learning and Adaptation
There is no doubt that the promise of autonomous social robots so beloved of science fiction is still a long way out of reach [24]. Our interactions with Robbie and Fido lifted up some of the limitations associated with the design of these robot bodies (for example, level ground or open spaces are required for effective functioning). Meanwhile, the interaction with Pepper that we describe in the next sections highlights a few of the challenges surrounding interaction with embodied affective agents (such as speech recognition and smoothness of movement). The literature within HRI acknowledges these limitations, and others, and has engaged extensively with different aspects of the learning and adaptation process that results from these physical and affective “corners.”
Cutting-edge technical development in social robotics focuses on making interactions more humanlike through improvements in speech, movement, and emotion programming [32, 34]. Developers design robots to perform humanlike emotional gestures to build trust and empathy with human “users” (up to a point; see Reference [35] on “The Uncanny Valley”). Recent work by Jones et al., for example, examined how robot tutors may be used to support primary school children in developing self-regulated learning techniques that would allow them to learn more effectively. However, their study also highlighted how the efficacy of the robot's support was highly dependent on its being able to model positive social behaviours that create trust and engagement with the student. Forming good social interactions and being able to adapt to different levels of learner ability were noted by the authors as areas requiring further investigation [36].
HRI has explored many aspects of a successful interaction between humans and robots, often taking as a starting point the myriad aspects involved in successful human teamwork [44, 45] but also taking inspiration from human–animal teams [46]. These aspects include both emotional labour (for example, making an effort [47] or emotional scaffolding [48]) and a focus on specific physical activities (such as handovers [49] or adapting the walking speed of a humanoid robot [50]).
Adaptation of both robot to human [51, 52] and human to robot [53, 54] plays an important role in ensuring long-term adoption of robots into human lives. In recent years, this has included developing models to understand mutual adaptation [55, 56]. From our perspective, the work on adaptation is particularly interesting in terms of how variation in adaptation is understood. “We define the adaptability as the probability of the human switching from their mode to the robot mode. It would be unrealistic to assume that all users are equally likely to adapt to the robot. Instead, we account for individual differences by parameterizing the transition function P by the adaptability α of an individual” [56].
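To make the quoted definition concrete, the following is a minimal sketch, entirely our own, of a toy two-mode process in which an individual adaptability parameter α is read as the probability of the human switching from their own mode to the robot's mode. The function names, the assumption that an adapted human stays adapted, and the simulation loop are our illustrative choices and are not taken from the cited model.

```python
import random

# Minimal, illustrative sketch (not the cited authors' implementation):
# a toy two-mode process in which adaptability alpha is the probability
# that the human switches from their own mode to the robot's mode.

def human_mode_transition(current_mode: str, alpha: float, rng: random.Random) -> str:
    """Return the human's next mode, given adaptability alpha in [0, 1]."""
    if current_mode == "human":
        # With probability alpha the human adapts to the robot's mode.
        return "robot" if rng.random() < alpha else "human"
    # Toy assumption: once the human has adapted, they stay adapted.
    return "robot"

def simulate(alpha: float, steps: int = 10, seed: int = 0) -> list:
    """Simulate one interaction and record the human's mode at each step."""
    rng = random.Random(seed)
    mode, trace = "human", []
    for _ in range(steps):
        mode = human_mode_transition(mode, alpha, rng)
        trace.append(mode)
    return trace

# Two hypothetical individuals with very different adaptability values.
print(simulate(alpha=0.8))  # likely to switch to the robot's mode early
print(simulate(alpha=0.1))  # may never switch within ten steps
```

Even in such a toy rendering, a single number α carries the whole weight of “individual differences,” which is precisely where the questions below begin.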
Models such as the one outlined above are necessary to standardize programming and development of social robots whilst at the same time accounting for variance in human responses to such robots. However, with an eye to the Science and Technology Studies (STS) literature, we wonder what kind(s) of individuals and individual uses are assumed as the basis for this measure of adaptability [25, 57]? To what extent are intersections of gender, age, ability, ethnicity, or socioeconomic status taken into account when modelling for variance in terms of how willing/able humans are to adapt [25]?
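To show what is at stake in that question, here is a second, purely hypothetical sketch (our own invention, not drawn from any cited model) of what it would even mean to let an adaptability parameter vary with individual or contextual features. The feature names, weights, and logistic form are illustrative assumptions; the point of the sketch is to expose, not to resolve, the design decisions such a parameterization would embed.

```python
import math

# Hypothetical illustration only: conditioning an adaptability parameter on
# individual/contextual features via a logistic function. The features and
# weights are invented; who gets to define and estimate them is exactly the
# kind of question we are raising.

def adaptability(features: dict, weights: dict, bias: float = 0.0) -> float:
    """Map feature values to an adaptability alpha in (0, 1)."""
    score = bias + sum(weights.get(name, 0.0) * value
                       for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-score))

# An invented "user": every feature chosen here encodes an assumption about
# whose adaptation counts and how it is measured.
alpha = adaptability(
    features={"prior_tech_experience": 1.0, "chose_the_robot": 0.0},
    weights={"prior_tech_experience": 0.8, "chose_the_robot": 1.2},
    bias=-0.5,
)
print(round(alpha, 2))  # a single number standing in for a person's position
```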
3 Learning to Live with Our Robots
During our first encounters with Pepper, both Ericka and Katherine instinctively tried to adapt their behaviour to elicit better responses from Pepper, just as our households have done with the robotic lawn mower and vacuum cleaner. In this section, we draw attention to the different kinds of learning/negotiation that are necessary for smooth interaction with a social robot through reflecting on our own experiences with Robbie, Fido, and Pepper. Through a critical attention to context and privilege, we draw attention to the many small negotiations that we make and reflect on what makes it possible for us to perform such negotiations.
Take Robbie, for example. The labour-saving lawnmower is not labour free, but you would have to find the person doing the labour to hear this. Ericka is not the right one—or at least not the only one—to ask about this. Her partner would tell you a different story. This may be a lesson we should take into our analysis of care-robots (inspired by References [26, 27] but also taking lessons learned from STS theories about care [28–30] and studies of the evaluation and ethics of implementing care through technologies like telemedicine [31]).
Like Ericka's partner with Robbie, Katherine and her partner also do a fair amount of labour to facilitate Fido's work—picking up toys and strategically placing cushions where they know Fido will get stuck under furniture. Meanwhile, to cater to their daughter's fear, they also adapted the use of Fido—instead of leaving him switched on and charging at his base station all the time, he is turned off to prevent spontaneous movements. They now only use him when they are out of the house—no more companionable pottering about for this housebot. This means that their son no longer interacts with Fido and the imaginary
around Fido has decreased. There are no more long discussions about Fido “eating” crumbs or tickling feet. Katherine and her partner created an imaginary around Fido's actions to help their son make sense of him, and they learned after the arrival of their daughter to adapt use of him to fit around her needs. Responses to robots are both highly individual and adaptable if one is prepared to learn and do the emotional/physical work of this.
One never forgets that Pepper is a robot. She hums and whirrs as she moves, and sometimes she needs to be rebooted. But she is sufficiently humanlike that Katherine wanted to interact and found herself mirroring Pepper's movements and writing “her” when she started work on this article. There was a desire to learn what to do to make the interaction with Pepper smoother and to know which gestures registered with her and which questions prompted a “reasonable” response. As such, the human–robot encounter can be seen as both an issue of design [32, 33] and an issue of human learning [8, 37]. The encounters with Pepper, Robbie, and Fido suggest that Ericka and Katherine (and their partners) are prepared to do quite a lot of learning and adaptation to accommodate these non-humans into their personal spaces. This includes physical adaptation of the environment (garden landscaping), temporal adaptation of daily life (scheduling Fido's movements to avoid distress), and affective adaptation (creation of a narrative that “makes sense” of the robot's presence and actions for both ourselves and more vulnerable family members). This labour makes clear the necessity of human mediation and learning to smooth the introduction of affective embodied robots into daily life. However, it also—for us—poses questions about the conditions for this work. We are able to give time, energy, and money to make these changes and were physically able and emotionally confident enough to perform them. We were also willing to have robots in our lives, proactively choosing them to help us and thus willing to adapt. Given the intended context of use for robots such as Pepper, what should robot designers and programmers be taking into consideration when planning for learning? How might this differ for those who have not chosen a robot carer but rather have been given one to relieve human caregivers? Who in this context does the adaptation work and how might this be accounted for?
4 Affective Corners—Hard-to-reach?
Commercial rhetoric around Pepper (and her peers) imagines her to be a robot that can integrate into a domestic environment and assist in tasks needed to run the place, including providing some form of companionship [38]. But we all know that domestic environments are not only full of physical and emotional kitchen corners; they are also rife with differing desires, power structures, and power struggles.
Pepper's integration into this sphere would hardly be a neutral event that only served objective, mutually agreed-upon tasks. Surely there would be corners that some members of a domestic space would like cleaned out but others would not. And surely Pepper would be a way for outsiders to see into that space and impose their wills where insiders would rather they did not. This seems to be particularly probable and problematic when one realizes that Pepper and other companion robots are often touted as a “solution” to the loneliness and need for care imagined in the elderly (strongly critiqued in References [39, 40]). This need for care may exist, but it is also imagined by those who have some responsibility for providing that care, people who, through a sense of familial responsibility or employment or professional position, feel they are tasked with ensuring the well-being of another person. How can those people know their charge is feeling OK, eating correctly, ingesting the appropriate meds at the right times, visiting the bathroom, exercising, sleeping, or breathing? Surely a robot like Pepper would be useful to them.
We suggest that there are stakes in learning to interact with robots. We bring our personal positions and our privileges and insecurities into the interaction with the robot. How can we program for these? One can have an uneven power relation even when one of the interlocutors is a robot, and recognition of these power dynamics—especially in the wild—should be a part of robotics research design.
This brings us back to the tension between standardization in robot programming and the highly specific and personal situations of human interlocutors and their differing abilities to adapt to the robot interaction. In a successful interaction, the humans must do emotional smoothing work, physical adjustment work, and the work of protecting vulnerable others. Why do we bother? Why do we want to believe that an interaction with a robot can be anything more than an automatable exchange? Are we also, perhaps, doing this work to fathom the depths of the robot's emotional, affective knowledge? Are we concerned not so much with what the robot is feeling as with how much the robot knows about our feelings? There is the risk that humans will feel pressured by robots to do certain things, but perhaps there is also the opportunity for humans to learn new methods of showing or hiding their affective responses.
We are used to theorizing about affect as something that occurs in relational practice [41–43], both between humans and in the human/non-human relation. This approach raises questions about where we make cuts in deciding which parts of an affective encounter are human and which are non-human; who is responsible for these cuts, and how and where they are drawn, are political and important questions [8, 9]. But those questions focus on the presence and legibility of affective responses. What we want to point out with this article is that it may be just as interesting to become attuned to the hiding away and illegibility (intentional or unintentional) of affective responses in the human/non-human relation. It may be easy to think of a breakdown in communication—the current difficulty that Pepper has in reading emotions, for example—as a technical problem to be addressed by better sensors and more sensitive programming. But perhaps that affective corner is filled with things someone intends to keep hidden there. And knowing when our robot companions should look away is something we could appreciate in these new interlocutors, just as knowing how we should distract their vision from a corner is a skill we will be forced to learn.
In brief, if we accept that “corners” in robot ability exist—be it dexterity, language or emotion—then learning/adaptation on the part of the human is necessary. This much is widely discussed and assumed within HRI. What we need to consider more carefully now are the assumed conditions for that adaptation and which humans cannot or will not adapt. With that in mind, we propose the following questions as a basis for development:
Is it possible to design in a way that assists with learning/adaptation?
What bodies/abilities are currently assumed in the process?
Is it possible to design in a way that sees “corners” as opportunities? [22]
How can we develop an ethics of “corners” that acknowledges their role as both respite and failure?
And how can we more explicitly design for the human “scaffolding” that is necessary in human–robot encounters?
5 Conclusion: Designing for Connection
In this article we have used the metaphor of hard-to-reach corners to help us think through the limitations that currently exist in the design and technical capacities of social robots. In recounting and reflecting upon our experiences with Robbie, Fido, and Pepper, we aim to contribute to the existing wealth of technical literature about adaptation/learning within HRI by encouraging further work that engages a qualitative, context-sensitive analysis of robotic encounters. We want to draw attention to the specificity of contexts and bodies by thinking of corners as not only limitations in robot capacity that demand adaptation, but potentially places of respite for humans who feel more ambivalent than us about robot care. We have highlighted how bodies and affects that fall outside the assumed (normative) range may not find adapting to life with a robot to be the pleasurable, curiosity-driven experience that we did. Our curiosity about these technologies is reflective of our privileged position, as we are able/willing to make the adaptations for ourselves (and others in our households) to co-exist harmoniously with our robots. However, this is not the case for everyone—especially those who are unable or unwilling to adapt (due to age, sickness, or lack of confidence with technology). This seems particularly pertinent given the care contexts in which Pepper is used.
Care and companion robots such as the one we met, Pepper, or Paro, the famous baby seal, are premised on a mutual learning experience in which robot and human must gradually adjust to one another [37]—much as in a human–human interaction. But if many of the users are incapable of really knowing the robot and easily bored when it does not do what is expected, then this mutual learning may be only partially successful. Successful interactions require some framing and a commitment to adapting human behaviour, in return for the promise of a “better” life. Or they may continue to be unsuccessful. We suggest that these are emotional corners that the robots cannot reach and that some people's corners might be more acute and difficult to get into than others. To approach these, we suggest sensitivity to the intersectional aspects of context.
Our reflections here span both laboratory study and “in the wild” encounters and prompt three important discussions around design and adaptation:
“In the wild” encounters introduce a higher level of variation in both participant response and environmental challenges—how might existing models bridge that gap in variation, while remaining sensitive to the privileged positions that are able and willing to adapt and to those that are not?
Lab studies tend to enrol participants who are positive toward robots, while encounters “in the wild” may include people who welcome the robot far less—how then can power and agency be accounted for in designing social robots for care contexts?
“Scaffolding” of the interaction by another human is particularly important in the case of vulnerable humans, who may be rendered scared, confused, uncomfortable, or frustrated by the robot—how can we plan for the presence of additional humans, and what impact does this have on the promise of robots taking over humans' work?
We hope that this perspective can be read as an invitation to further interdisciplinary collaborations, in which technical models of adaptation might be tested or enriched through interaction with qualitative analysis, or in dialogue around assumed uses and users of social robots. We have noted a recurring tension in our various interactions with social robots and wish to advocate for interdisciplinary work in which qualitative, context-sensitive analysis of interactions and technologies may complement technical advances. The scholarship we have discussed above, for example, provides such analyses/case studies that explore how interactions with robots designed to care for humans may produce widely varying experiences, particularly in vulnerable populations [58–61]. We suggest that another possible pathway to address this would be to design around the concept of contexts rather than just users, carefully thinking through how structural aspects of a context (like power asymmetries in workplaces) produce “users” who are afforded (sometimes limited) privileges around adaptation. Interdisciplinary teams that engage more qualitative, critical work from social science fields outside robotics could be one way to start down such a path.
We propose to bring to the conversation a critical, qualitative attention to the conditions for adaptation that seem to lie outside the current models of adaptation. Our attention concerns, first, individuals' capacity for adaptation based on their personal position, and, second, the context-specific conditions for the interaction, particularly relevant to “in the wild” encounters. We offer this as an opportunity to pay attention to the power structures in place within an encounter that may be hard to show in quantitative models, but that have a significant impact on people's affective responses toward robots and their own sense of agency in the encounter.