
The Effects of Healthcare Robot Empathy Statements and Head Nodding on Trust and Satisfaction: A Video Study

Published: 15 February 2023

    Abstract

    Clinical empathy has been associated with many positive outcomes, including patient trust and satisfaction. Physicians can demonstrate clinical empathy through verbal statements and non-verbal behaviors, such as head nodding. The use of verbal and non-verbal empathy behaviors by healthcare robots may also positively affect patient outcomes. The current study examined whether the use of robot verbal empathy statements and head nodding during a video-recorded interaction between a healthcare robot and a patient improved participant trust and satisfaction. One hundred participants took part in the experiment online through Amazon Mechanical Turk. They were randomized to watch one of four videos depicting an interaction between a ‘patient’ and a Nao robot that (1) either made empathetic or neutral statements, and (2) either nodded its head when listening to the patient or did not. Results showed that the use of empathetic statements by the healthcare robot significantly increased participant perceptions of robot empathy, trust, and satisfaction, and reduced robot distrust. No significant findings were revealed in relation to robot head nodding. The positive effects of empathy statements support the model of Robot-Patient Communication, which theorizes that robot use of recommended clinical empathy behaviors can improve patient outcomes. The effects of healthcare robot nodding behavior need to be further investigated.

    1 Introduction

    Clinical empathy is a fundamental aspect of patient care, associated with numerous positive patient health outcomes. The opportunity exists to make healthcare robots display clinical empathy behaviors as a default mode. However, research into the development and display of empathetic behaviors in healthcare robots is in its infancy, and little is known about their effects on trust, distrust, and satisfaction. These are important outcomes because of the vulnerable nature of patient populations, and because they affect patient acceptance of healthcare recommendations. The following paper presents a video study in which a healthcare robot's use of verbal and non-verbal clinical empathy is experimentally manipulated. The results will inform the design of conversational behaviors in healthcare robots and future research. To begin, the paper provides a brief background on clinical empathy with regard to physician-patient interactions, as well as related work within the area of social and healthcare robotics.

    2 Background

    Clinical empathy is defined as “the ability to understand the patient's situation, perspectives and feelings, and to communicate that understanding to the patient” [1, p. 221]. Research has demonstrated that patients have little difficulty in identifying the use of empathetic behaviors by healthcare professionals [2, 3], and that these behaviors in turn are associated with a number of positive patient health outcomes. In diabetes research, for example, patients of physicians high in empathy were found to be significantly more likely to have effective control of their illness, compared to patients of physicians low in empathy [4].
    Clinical empathy has been shown to affect patients’ psychological health and/or treatment outcomes across a number of conditions. For example, physician empathy has been associated with increased patient satisfaction and psychological adjustment and decreased psychological distress in cancer care [5]. Even in trauma centers, where the focus is on fast and effective delivery of medical services, patients who perceived their physician as having higher empathy had better self-reported treatment outcomes [6].
    Physician empathy is a key factor in ensuring patient adherence to medical treatment [7]. This may be because physician empathy enhances physician-patient agreement regarding treatment decisions made during medical consultations [7, 8]. Physician empathy may also impact patient adherence through facilitation of patient coping, management, and understanding of illness [9], as well as increasing patient satisfaction and trust. In fact, numerous studies have shown a positive association between the use of empathy by a physician, and higher patient satisfaction and trust [5, 6, 9–13].
    Theoretically, it has been proposed that patients have higher trust in physicians who respond to their healthcare issues with the appropriate demonstration of understanding and concern [14]. It is further theorized that empathy behaviors increase patient perceptions of physician care [11] and physician-patient collaboration [12]. The use of physician empathy is thought to increase patient satisfaction by enhancing perceptions of physician commitment and collaborative care, and by reducing patient frustration, disempowerment, and distress [5].
    Clinical empathy can be demonstrated through both verbal and non-verbal behaviors. An important aspect of clinical empathy is employing a feedback focus. Utilization of physician phrasing such as “Let me see if I have this right…” and “Can you tell me more about this…” allows opportunity for patient correction or addition of further information [1, p. 221–222]. Employing a feedback focus also provides a patient with concrete evidence that their physician is actively listening to their medical concerns [15].
    Non-verbal behaviors can also be employed to express attention and understanding [15, 16]. In physician-patient interactions, demonstrations of behavioral empathy include the use of eye contact, smiling, and a forward lean [17–20]. Other non-verbal behaviors found to be associated with empathy include head nodding, active listening, facial mimicry, and tone of voice [15, 19].
    Head nodding is seen to portray agreement, acceptance, and acknowledgement in many cultures. Head nodding has also been associated with empathy within the context of clinical interactions. For example, in video-taped interactions between medical students and patients, head nodding by medical students was significantly associated with increased ratings of empathy by observing clinicians [18]. In a review of clinician non-verbal behaviors during interactions with patients, in which ‘clinicians’ included both medical physicians and psychotherapists, clinician head nodding was associated with higher patient ratings of clinician ‘empathetic qualities’ [19].
    Further support for these findings is demonstrated in research in which head nodding is purposefully avoided or absent. For example, Marci and Orr [20] conducted research in which psychiatrists were instructed to deliberately suppress eye gaze and head nodding behaviors during clinician-patient interactions. The researchers found that, compared with clinicians who used these behaviors, clinicians who suppressed them were rated significantly lower in perceived empathy by patients. Psychophysiological concordance (measured through simultaneous skin conductance) was also significantly lower for patients in this group. Absence of head nodding, as well as other non-verbal behaviors such as smiling and eye contact, has also been found to be associated with lower patient satisfaction [21], and both short- and long-term decreases in patient functioning [22].

    3 Related Work

    The model of Robot-Patient Communication [23] theorizes that demonstration of empathetic behaviors by healthcare robots will result in similar benefits for patients as demonstration of empathetic behaviors by human clinicians. Empathetic robot behaviors could include the demonstration of listening, appropriate reflection in regard to users’ emotional disclosures, and the demonstration of understanding through verbal and non-verbal communication.
    Aspects of the Robot-Patient Communication model have been tested in human-robot interactions, showing that use of robot smiling, forward lean, and humor can improve user perceptions of healthcare robots [24–26]. However, empathetic statements and head nodding by a robot have received very little specific attention within human-robot interactions in healthcare.
    A handful of studies have shown that robot use of empathetic statements is associated with positive user outcomes in other social applications. For example, a robot that made empathetic statements about a lost bag was rated higher in empathy, emotion, and overall behavior by users than a robot that made neutral statements [27]. A robotic cat that made empathetic statements and facial expressions towards its play-mate during a game of chess was rated more helpful, and higher in engagement and self-validation, than a neutral robotic cat [28]. In another study, the robotic cat was rated as more friendly by individuals who had received empathetic statements from the cat while engaged in a game of chess with another player, compared to players who received only neutral statements from the cat [29]. A later study found that while robot use of empathetic statements is easily recognized by participants, affective facial expressions are often mismatched [30].
    There is some evidence that the effects of robot empathy behaviors may not be sustained over long periods of time. In a recent study, robot verbal empathy statements and non-verbal empathy behaviors (such as eye gaze) were associated with an increase in meaningful discussions during an initial session between a robot that taught about sustainability and student users [31]. However, when the experiment was repeated over the course of two months, robot empathy was not found to be associated with any significant long-term learning outcomes. More research is needed in order to understand the long-term effects of robot empathy.
    In a healthcare vignette (a hypothetical situation described to participants in text form), a hypothetical robot that used patient-centered speech received higher ratings of emotional intelligence than a robot that used task-centered speech, although this study did not measure perceived empathy (Chita-Tegmark et al., 2019). Examples of patient-centered and task-centered speech are, respectively: “It is currently difficult for the patient to observe the treatment plan” versus “The patient currently shows high levels of treatment non-compliance”. Interestingly, participants also rated the hypothetical patient significantly more favorably in conditions where the robot used patient-centered speech, as opposed to conditions in which the robot used task-centered speech. Contrary to the authors' hypotheses, the robot in this study was not rated lower in terms of trust or acceptance when using task-centered speech. Given the positive relationship between empathy and emotional intelligence (Ioannidou & Konstantikaki, 2008), it is possible that the robot in this study would also have been rated as more empathetic when using patient-centered speech, had this been measured.
    To our knowledge, only one study has investigated the effect of robot empathy in a healthcare-based human-robot interaction [32]. In this pilot study, 31 children who were to have an intravenous (IV) line placed during a hospital stay were randomized to one of three groups: one in which there was a play specialist and no robot present, one in which there was a play specialist and a non-empathetic robot present, and a final condition in which there was a play specialist and an empathetic robot during IV placement. In the empathy condition, the robot changed its verbal responses and facial affect based on the child's expressed level of fear and pain, and helped the child practice deep breathing and prepare for the procedure. Children in the empathy condition were significantly more likely to report that the robot had ‘feelings’ and that the procedure ‘hurt less’. Non-significant results with regard to observed pain and distress and parental satisfaction may be due to the small sample size.
    Two studies have specifically examined the effect of robot empathy on user ratings of robot trust or user satisfaction. First, a pilot study with a chimpanzee robot [33] found that users who conversed with an empathic version were more satisfied than those who conversed with a neutral version of the robot (robot empathy was demonstrated through facial mimicry and head gestures). A second pilot study found a non-significant trend for participants to rate a humanoid social robot (Pepper) that used verbal statements and large gestures when a participant was inattentive as higher in trustworthiness, compared to a robot that did not show these behaviors [34].
    Preliminary research on robot head nodding suggests beneficial effects. For example, human head nodding significantly increased during human-robot interactions when the robot was able to ‘understand’ human head nodding and nodded in return [35]. Robot nodding may increase user perceptions of the robot's comprehension or increase user engagement. Robot head nodding in response to hearing instructions, such as making a cup of tea, has also been shown to increase user perceptions of robot engagement, comprehension, and likeability [36]. However, no studies could be found investigating the effects of robot head nodding on user perceptions in a healthcare context.

    3.1 Rationale

    To our knowledge, only one study has specifically examined the effect of healthcare robot empathy, via facial and verbal expressions, on the perceptions of human users in a healthcare-based human-robot interaction [32]. While this study offers preliminary evidence for the use of healthcare robot empathy in increasing children's perceptions of robot ‘feeling’ and decreasing children's perceptions of pain, further research is needed in adults and in other scenarios. In addition, it is necessary to understand how healthcare robot empathy affects trust and satisfaction. This is critical given the growing body of evidence demonstrating significant associations between physician use of empathy and increases in patient perceptions of physician trust and satisfaction [5, 6, 9–13]. When considering the study of trust in particular, researchers argue that distrust is not simply the opposite of trust, and as such, trust and distrust should be treated as separate constructs [37, 38]. Thus, both trust and distrust were examined in the current study.
    Head nodding is a non-verbal behavior that has been found to be associated with patient satisfaction and perceptions of empathy when used by clinicians [18–22]. Given the absence of research investigating the effect of healthcare robot nodding on patient trust and satisfaction, research on this behavior may prove promising. As previous research has shown head nodding to be not only recognized, but also associated with physician empathy, when observed during a video-taped physician-patient interaction [18], the use of robot head nodding in an experimental video study is warranted.

    3.2 Aim

    The aim of this study was to examine the effect of verbal and non-verbal empathetic behaviors by a healthcare robot, during a video-recorded interaction with a patient, on participant perceptions of robot empathy, trust, distrust, and satisfaction. Empathy was demonstrated by the healthcare robot through use of empathetic statements (verbal empathy) and head nodding (non-verbal empathy) behaviors.

    3.3 Hypotheses

    (1) The use of empathetic statements by the robot would be associated with increased participant ratings of empathy, trust, and satisfaction, and decreased ratings of distrust, compared to conditions in which empathetic statements were not used.
    (2) The use of head-nodding by the robot would be associated with increased participant ratings of empathy, trust, and satisfaction, and decreased ratings of distrust, compared to conditions in which head-nodding was not used.
    (3) The combined use of empathetic statements and head-nodding by the robot would be associated with increased participant ratings of empathy, trust, and satisfaction, and decreased ratings of distrust, compared to conditions in which empathetic statements or head-nodding alone were used.

    4 Method

    4.1 Experimental Set-up and Materials

    This study was originally designed as an experiment involving face-to-face, human-robot interactions within the context of a healthcare scenario. Unfortunately, due to social distancing requirements introduced in the wake of the Covid-19 pandemic, this was not possible, and it was uncertain when the original design could be carried out. It was therefore decided that the current study would move forward using an online video study design. The authors felt it important that research continue, despite uncertainty around the pandemic, and that an online design would still allow for valuable insight into user perceptions of a healthcare robot's use of empathy.
    In line with the restrictions in place at the time of this work, a mixed between-within experimental video design was employed. The study was carried out online, using both the Amazon Mechanical Turk (AMT) and Qualtrics platforms. AMT is a public crowd-sourcing website that connects researchers and businesses with individuals willing to take part in research and other work tasks. Potential participants (registered with AMT) were notified of the current study by AMT via their online profile. The use of AMT participants, as opposed to a patient population, was chosen for this study due to the preliminary nature of this work and consideration of patient circumstances. Patient populations are generally unwell, and it is reasonable to first test perceptions of healthcare robot behaviors on healthy populations, in order to avoid adding unintentional stress to these individuals through exposure to robot behaviors that may be associated with negative user outcomes.
    In order to be included in the current study, participants were required to be over 16 years old, fluent in English, and to be listed as a ‘Master Worker’ on AMT. A Master Worker is a registered ‘worker’ on AMT who has consistently demonstrated a high level of output across a variety of tasks. Participants who met eligibility criteria and chose to take part in the study were directed by AMT to a secure hyperlink, which connected them to Qualtrics. Qualtrics is an online platform enabling researchers to run online surveys. The current study used Qualtrics in order to gather informed consent, and as a platform through which to run the experimental sessions. Ethics approval for the current study was granted by the University of Auckland Human Participants Ethics Committee (Ref. 024431).
    The Nao robot was chosen for the current study (see Figure 1). Nao is a programmable humanoid robot by SoftBank (Japan). The Nao robot is able to ‘speak’ and can also perform a number of physical movements using its hands, arms, head, torso, and legs.
    Fig. 1. The Nao robot used in the study.

    4.2 Procedure

    Once informed consent was obtained, participants were directed to complete a baseline demographics questionnaire. This questionnaire asked about age, gender, ethnicity, occupation, and any previous experience interacting with robots. All participants were then instructed to view the first of two separate online videos. The initial video was approximately 2 minutes in duration and presented an interaction between a patient (‘Sam’) and robot nurse (‘Jane’). In this first interaction, the patient is seen asking the nurse robot for information regarding a general health check and what this entails. When the nurse robot offers to book a health check for the patient, the patient agrees to undertake the check with a doctor. After participants had viewed this initial video, they were asked to complete the time-point one post-interaction measures (see ‘Measures’ section for further details).
    Following completion of time-point one measures, participants were randomized to view a second video. In this second video, the healthcare robot was seen to behave in one of the following ways:
    (1) The robot uses empathetic statements and head nodding during the interaction with the patient.
    (2) The robot uses empathetic statements and no head nodding during the interaction with the patient.
    (3) The robot uses no empathetic statements and head nodding during the interaction with the patient.
    (4) The robot uses no empathetic statements and no head nodding during the interaction with the patient.
    Randomization was performed using Research Randomizer (randomizer.org) and kept blinded to the researcher until the data analysis phase. The second video depicted a second interaction between the same patient (‘Sam’) and the same nurse robot (‘Jane’) (see Figure 2). In this second interaction video, the patient is seen to ask the nurse robot to take their blood pressure as part of the health check. The patient is then seen to discuss their symptoms and emotional state with the robot nurse, including the fact that they are feeling tired, having trouble sleeping, and “really need the Doctor to get to the bottom of things”.
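    For concreteness, the blinded allocation step described above can be sketched in Python as follows. The study itself used Research Randomizer, so this version is illustrative only, and the seed value is an arbitrary assumption.

```python
# Illustrative sketch of blinded block randomization to the four video
# conditions (25 participants each). The study used Research Randomizer
# (randomizer.org); this is not the actual allocation procedure.
import random

conditions = ["empathy + nodding", "empathy only", "nodding only", "neutral"]
allocation = conditions * 25   # 25 participants per condition
random.seed(42)                # arbitrary seed, for reproducibility only
random.shuffle(allocation)

# The shuffled list would be held apart from the researcher until the
# data analysis phase, preserving blinding to group membership.
print(allocation[:4])
```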
    Fig. 2. ‘Sam’ the Patient interacts with ‘Jane’ the Healthcare Robot (clip from the video).
    In the head-nodding conditions, the nurse robot is seen nodding to the patient as the patient discusses their symptoms (resulting in three separate head-nods during the interaction). In the verbal empathy conditions, the robot uses empathetic statements throughout the interaction in response to the patient's disclosures about their symptoms and emotional state (e.g., “That sounds really hard. I can imagine that anyone in your situation would want to get some answers.”). In the non-empathy verbal conditions, only neutral statements are made. Care was taken to ensure that robot statements in both conditions were similar in length.
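    For illustration, the sketch below shows how an empathetic statement and listening nods of this kind can be produced on a Nao using the NAOqi Python SDK. The robot's network address, the joint angles, and the timings are illustrative assumptions rather than the study's actual control scripts; the behaviors actually used are documented in Appendix A and the accompanying videos.

```python
# Illustrative sketch (NAOqi Python SDK): one empathetic statement and
# the three listening nods used in the nodding conditions. The IP
# address, angles, and timings are assumptions, not the study's scripts.
from naoqi import ALProxy

ROBOT_IP = "192.168.1.10"  # hypothetical robot address
PORT = 9559

tts = ALProxy("ALTextToSpeech", ROBOT_IP, PORT)
motion = ALProxy("ALMotion", ROBOT_IP, PORT)

def nod_once():
    # Pitch the head down, then back to neutral (angles in radians;
    # True = treat the targets as absolute joint angles).
    motion.setStiffnesses("Head", 1.0)
    motion.angleInterpolation(["HeadPitch"], [[0.25, 0.0]], [[0.6, 1.2]], True)

# Listening phase: three nods while the 'patient' describes symptoms.
for _ in range(3):
    nod_once()

# Verbal empathy condition: one of the scripted empathetic statements.
tts.say("That sounds really hard. I can imagine that anyone "
        "in your situation would want to get some answers.")
```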
    Appendix A includes the ‘script’ for all four experimental conditions, inclusive of head nodding behaviors. The accompanying media files include all four of the experimental videos, as well as the initial interaction video. The decision to include an initial interaction video was based on previous research that found few significant differences between groups with regard to user perceptions of robot behavior [24]. It was hypothesized that this lack of difference was potentially due to the vast majority of participants having never interacted with a robot before, leaving them without a basis of comparison when completing self-report perception measures. Thus, the current study utilized an initial interaction video in order to provide participants with a basis of comparison when completing self-report measures.
    Once participants had viewed the second interaction video, they were asked to complete the time-point two measures. Measures completed following the first and second interactions (time-point one and time-point two) were identical. Upon completion of time-point two measures, participants were given an authorization code which could be entered through AMT in order to claim a US$4 ‘thank you’ payment. A procedural outline of the study is provided below (Figure 3).
    Fig. 3. Procedural outline.

    5 Power Analysis and Sample Size

    A power analysis was undertaken using the program G*Power [39] with the following parameters: repeated-measures, between-factors ANOVA; 0.05 alpha error probability; 0.9 power; and 0.35 effect size (f). This effect size was based on a study of verbal empathy statements [28], described above. The analysis revealed that a sample size of 92 (23 participants in each group) was needed. One hundred participants were recruited in order to provide a buffer in the event that surveys were unusable.
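    As a rough cross-check, this calculation can be approximated in Python. statsmodels implements power analysis for a one-way ANOVA only, so the sketch below applies the repeated-measures correction by hand, assuming G*Power's default correlation of 0.5 between the two measurements (a value not reported in the paper); the result therefore only approximates the figure of 92 reported above.

```python
# Approximate cross-check of the G*Power sample-size calculation.
# statsmodels covers one-way ANOVA power only, so the repeated-measures
# adjustment (m measurements, correlation rho) is applied manually.
# rho = 0.5 is an assumption (G*Power's default), not a reported value.
from statsmodels.stats.power import FTestAnovaPower

f = 0.35         # effect size (Cohen's f), from [28]
alpha = 0.05
power = 0.90
k_groups = 4     # 2 x 2 between-subjects conditions
m, rho = 2, 0.5  # repeated measurements and their assumed correlation

# Total sample size for a plain one-way ANOVA with these parameters...
n_oneway = FTestAnovaPower().solve_power(
    effect_size=f, alpha=alpha, power=power, k_groups=k_groups)

# ...then shrink it by the repeated-measures efficiency factor
# m / (1 + (m - 1) * rho), as in G*Power's between-factors routine.
n_rm = n_oneway / (m / (1 + (m - 1) * rho))
print(round(n_oneway), round(n_rm))  # roughly 117 and 88; G*Power reported 92
```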

    5.1 Measures

    As discussed above, a number of studies have shown a strong association between physician empathy and patient perceptions of satisfaction and trust [2, 11–13]. Therefore, participant perceptions of robot empathy, trust, and satisfaction were measured at time-points one and two.

    5.1.1 Empathy.

    A 13-item empathy questionnaire was created from a combination of seven items from the McGill Friendship Questionnaire [40] and five items from the Consultation and Relational Empathy measure (CARE measure) [41], plus a 13th item, “Jane makes a good healthcare nurse”. An adaptation of the McGill Friendship Questionnaire has previously been used in research investigating perceived robot empathy [29]. The CARE measure assesses patient-perceived empathy in relation to clinical encounters, and its validity and reliability have been established across a range of clinical environments [42, 43].
    This 13-item empathy questionnaire has previously been used in research with healthcare robots [24–26]. All questions are measured using a 5-point Likert scale, ranging from ‘strongly agree’ to ‘strongly disagree’. Items on the empathy questionnaire were totaled in order to obtain a ‘total score’ for empathy at each of the two time-points. Cronbach's alpha for the empathy scale was found to be .94 and .95 at time-point one and time-point two, respectively.
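    Internal consistency of this kind can be computed directly from the item responses. The sketch below implements the standard Cronbach's alpha formula in Python on simulated stand-in data; it is illustrative, not the study's analysis code.

```python
# Hedged sketch: Cronbach's alpha computed from an items matrix
# (participants x items). The demo data are random stand-ins, so alpha
# will be near zero here; the study reported .94-.95 for this scale.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, one row per participant, one column per item."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

rng = np.random.default_rng(0)
demo = rng.integers(1, 6, size=(100, 13)).astype(float)  # 13 items, 1-5 Likert
print(cronbach_alpha(demo))
```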

    5.1.2 Trust and Distrust.

    The Jian et al. [44] trust scale has been validated and shown to have two distinct subscales for trust and distrust [45]. Originally developed to measure user trust in automated systems, the scale has since been used to measure user trust in online virtual human agents, by substituting the word “system” with “instructor” [46]. To adapt the scale for use within the current study, the word “system” was changed to “Jane” (Jane being the name of the robot in the current study). The measure used a 7-point Likert scale ranging from ‘not at all’ to ‘extremely’.
    Trust and distrust items were totaled in order to obtain ‘total scores’ for both trust and distrust at each of the two time-points. Cronbach's alpha was revealed to be .95 for both the trust and distrust sub-scales (respectively) at time-point one, and .97 for both the trust and distrust subscales (respectively) at time-point two.

    5.1.3 Satisfaction.

    An adapted version of the Scale of Patient Overall Satisfaction with Primary Care Physicians was used [4]. This measure utilizes a 7-point Likert scale ranging from ‘strongly agree’ to ‘strongly disagree’ and asks respondents questions in relation to interactions with physicians. The satisfaction scale was adapted in the current study to ask participants questions relating to the video interactions they viewed as part of the current study's online survey. For example, “My Doctor cares about me as a person” was changed to “Jane cares about Sam as a person” (Sam being the name of the ‘patient’ in the current study). The Scale of Patient Overall Satisfaction with Primary Care Physicians has been validated by the researchers who designed it [4]. Items on the satisfaction scale were totaled in order to obtain a ‘total score’ for satisfaction at both time-points. Cronbach's alpha was .94 and .93 at time-point one and time-point two, respectively.

    5.1.4 Desire to Interact with the Robot.

    Finally, a stand-alone question was added to the post-interaction measure, asking: “If offered the chance, would you want to interact with Jane face-to-face?”. The answers were ‘yes’ or ‘no’.

    5.2 Statistical Analyses

    Data were analyzed by conducting four 2 × 2 × 2 ANOVAs with time-point as a repeated-measures variable and head-nodding and empathy statements as between-subjects factors. The desire to interact again was analyzed using a three-way loglinear analysis (question × head-nodding × empathetic-statements) for each of the two time-points.
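    Because there are only two time-points, each time-by-condition interaction reported below is mathematically equivalent to the corresponding between-subjects effect on change scores (time-point two minus time-point one). The sketch below illustrates this equivalent analysis with statsmodels, including the partial eta squared values reported in the Results; the data are simulated stand-ins, and this is not the software used for the study's analyses.

```python
# Equivalent test of the time x condition interactions: with only two
# time-points, the mixed ANOVA's time-by-factor interaction equals the
# factor's between-subjects effect on the change score (t2 - t1).
# All data below are simulated stand-ins, not the study's data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "empathy": np.repeat(["yes", "no"], 50),
    "nodding": np.tile(np.repeat(["yes", "no"], 25), 2),
    "t1": rng.normal(58, 10, 100),
})
# Simulate a benefit of empathy statements at time-point two.
df["t2"] = df["t1"] + rng.normal(0, 5, 100) + np.where(df["empathy"] == "yes", 4, -4)
df["change"] = df["t2"] - df["t1"]

model = ols("change ~ C(empathy) * C(nodding)", data=df).fit()
table = sm.stats.anova_lm(model, typ=2)  # Type II sums of squares

# Partial eta squared = SS_effect / (SS_effect + SS_error); the value
# produced in the Residual row is a by-product and should be ignored.
ss_error = table.loc["Residual", "sum_sq"]
table["partial_eta_sq"] = table["sum_sq"] / (table["sum_sq"] + ss_error)
print(table)
```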

    6 Results

    6.1 Manipulation Check

    In order to determine whether robot head nodding was noticeable, a manipulation check was performed with a separate convenience sample (n = 18). Participants undertaking the manipulation check were asked to view one of two experimental videos (no verbal empathy and no head-nodding, or no verbal empathy and head-nodding). The videos in which the robot did not use verbal empathy were chosen in order to minimize confounds. After viewing the video, participants completed a short questionnaire asking whether the robot gave its name, nodded, and leaned towards the patient during the video-recorded interaction. Head-nodding was the variable of interest. In the no verbal empathy and no head-nodding group, 78% of participants (7/9) correctly stated that the robot did not nod. In the no verbal empathy and head-nodding group, 78% of participants (7/9) correctly stated that the robot did nod during the interaction.
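    The check is reported here as percentages only; as an illustration, an exact binomial test of 7 correct responses out of 9 against chance responding can be computed as below (this test is not part of the paper's analyses).

```python
# Illustrative exact binomial test for the manipulation check: 7 of 9
# correct against chance responding (p = 0.5). Not part of the paper's
# reported analyses, which give percentages only.
from scipy.stats import binomtest

result = binomtest(k=7, n=9, p=0.5, alternative="greater")
print(result.pvalue)  # ~0.090, i.e., 7/9 does not exceed chance at alpha = .05
```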

    6.1.1 Participants.

    One hundred participants took part in this study (25 per group). Fifty-nine percent of participants identified as male (N = 59), and 41 percent of participants identified as female (N = 41). The mean age of participants was 40.27 years, with a minimum age of 24 and a maximum age of 69. Participants reported their current country of residency to be the USA (N = 90, 90%), India (N = 9, 9%) and Thailand (N = 1, 1%). Most participants were engaged in full time employment (N = 76, 76%), followed by part time employment (N = 11, 11%), and those not currently employed (N = 9, 9%). Forty percent (N = 40) of participants had interacted with a robot in the past.

    6.1.2 Empathy.

    In line with the first hypothesis, there was a significant time by condition interaction effect for robot verbal empathy, F(1,96) = 16.01, p < .001, partial eta squared = .14. After the second interaction, participants increased their perceptions of robot empathy if it made empathy statements but reduced their perceptions of empathy if it did not (see Table 1 and Figure 4(A)). Contrary to the second and third hypotheses, there were no significant time by condition interaction effects for robot head nodding, F(1,96) = 0.21, p = .884, partial eta squared = .00, or for the interaction between robot head nodding and verbal empathy, F(1,96) = 0.68, p = .410, partial eta squared = .01.
    Fig. 4. The effects of robot verbal empathy statements and head nodding on participants’ ratings of (A) perceived robot empathy, (B) perceived robot trust, (C) perceived robot distrust, and (D) satisfaction. Bars show the mean and standard error of scores at time-point two. Note: ** p < .01. * p = .01.
    Table 1.

    Verbal Empathy  Head Nodding  Time-point  Mean   SE    95% CI Lower  95% CI Upper

    Perceived Robot Empathy
    Yes             Yes           1           56.24  2.10  52.07         60.41
    Yes             Yes           2           61.44  2.61  56.25         66.63
    Yes             No            1           60.12  2.10  55.96         64.21
    Yes             No            2           64.56  2.61  59.37         69.76
    No              Yes           1           60.04  2.10  55.87         64.21
    No              Yes           2           56.04  2.61  50.85         61.23
    No              No            1           60.00  2.10  55.83         64.17
    No              No            2           56.12  2.61  50.93         61.31

    Perceived Robot Trust
    Yes             Yes           1           32.08  1.43  29.25         34.91
    Yes             Yes           2           33.16  1.70  29.79         36.53
    Yes             No            1           33.96  1.43  31.13         36.80
    Yes             No            2           36.24  1.70  32.87         39.61
    No              Yes           1           33.84  1.43  31.01         36.67
    No              Yes           2           29.52  1.70  26.15         32.35
    No              No            1           34.52  1.43  31.69         37.35
    No              No            2           31.20  1.70  27.83         34.57

    Perceived Robot Distrust
    Yes             Yes           1           8.20   1.31  5.61          10.79
    Yes             Yes           2           8.00   1.55  4.93          11.07
    Yes             No            1           9.92   1.31  7.33          12.51
    Yes             No            2           9.88   1.55  6.81          12.95
    No              Yes           1           7.80   1.31  5.21          10.39
    No              Yes           2           11.28  1.55  8.21          14.35
    No              No            1           10.12  1.31  7.53          12.71
    No              No            2           13.20  1.55  10.13         16.27

    Perceived Robot Satisfaction
    Yes             Yes           1           52.92  2.21  48.54         57.30
    Yes             Yes           2           56.84  2.71  51.47         62.21
    Yes             No            1           56.72  2.21  52.34         61.10
    Yes             No            2           60.68  2.71  55.31         66.05
    No              Yes           1           55.44  2.21  52.06         59.82
    No              Yes           2           52.80  2.71  47.43         58.17
    No              No            1           56.36  2.21  51.98         60.74
    No              No            2           52.16  2.71  46.79         57.53

    Table 1. Mean perceived robot empathy, trust, distrust, and participant satisfaction scores at time-point one (after the initial interaction) and time-point two (after the interaction in which the robot displayed head nods and/or verbal empathy, or did not).

    6.1.3 Trust and Distrust.

    In line with hypothesis one, a significant time by condition interaction was found for robot verbal empathy on robot trust, F(1,96) = 13.78, p < .001, partial eta squared = .13. After the second interaction, robot trust increased if the robot used verbal empathy but decreased if it did not (see Table 1 and Figure 4(B)). Contrary to hypothesis two, there was no significant time by condition interaction for robot head nodding on trust scores, F(1,96) = .55, p = .460, partial eta squared = .01. Contrary to hypothesis three, there was no significant interaction effect of robot head nodding and verbal empathy on trust scores, F(1,96) = .22, p = .639, partial eta squared = .00.
    Also in line with the first hypothesis, there was a significant time by condition interaction for robot verbal empathy on distrust, F(1,96) = 6.90, p = .010, partial eta squared = .07 (see Table 1 and Figure 4(C)). After the second interaction, distrust increased if the robot did not use verbal empathy, but stayed stable when it did. Contrary to hypotheses two and three, there was no significant time by condition interaction effect for robot head nodding, F(1,96) = .01, p = .926, partial eta squared = .00, and no significant interaction effect of robot head nodding and verbal empathy, F(1,96) = 0.02, p = .901, partial eta squared = .00.

    6.1.4 Satisfaction.

    In line with the first hypothesis, a significant time by condition interaction was found for robot verbal empathy, F(1,96) = 10.44, p = .002, partial eta squared = .10 (see Table 1 and Figure 4(D)). After the second interaction, satisfaction increased if the robot made verbal empathy statements but decreased if it did not. Contrary to the second and third hypotheses, there was no significant time by condition interaction effect for robot head nodding, F(1,96) = 1.41, p = .238, partial eta squared = .01, nor an interaction effect of robot head nodding and verbal empathy, F(1,96) = 0.71, p = .403, partial eta squared = .01.

    6.2 Would you Interact with Jane

    A three-way loglinear analysis was used in order to analyze the question: “If offered the chance, would you want to interact with Jane face-to-face?” at each time-point. The analysis revealed a non-significant highest-order interaction (question × head-nodding × empathy-statements) at both time-point one, χ²(1) = –.175, p = .861, and time-point two, χ²(1) = –1.040, p = .298. There were no significant results for head nodding × question, χ²(1) = –1.342, p = .179, or verbal empathy × question, χ²(1) = .270, p = .787, at time-point one. There were also no significant results for head nodding × question, χ²(1) = –.853, p = .393, or verbal empathy × question, χ²(1) = .465, p = .642, at time-point two.
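    A loglinear analysis of this kind can be reproduced as a Poisson regression on the 2 × 2 × 2 table of counts, comparing the saturated model against the model without the highest-order term. The sketch below illustrates this with statsmodels; the cell counts are hypothetical stand-ins, since the frequencies are not reported above.

```python
# Hedged sketch: a three-way loglinear analysis expressed as Poisson
# regression on the 2 x 2 x 2 table of counts. The cell counts below
# are hypothetical; the paper does not report the frequencies.
from itertools import product

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import glm
from scipy.stats import chi2

cells = list(product(["yes", "no"], repeat=3))
df = pd.DataFrame(cells, columns=["question", "nodding", "empathy"])
df["count"] = [14, 11, 13, 12, 12, 13, 11, 14]  # hypothetical counts

# Saturated model versus the model without the three-way term; the
# deviance difference is the likelihood-ratio chi-square (df = 1) for
# the highest-order interaction, mirroring the analysis reported above.
saturated = glm("count ~ C(question) * C(nodding) * C(empathy)",
                data=df, family=sm.families.Poisson()).fit()
reduced = glm("count ~ (C(question) + C(nodding) + C(empathy)) ** 2",
              data=df, family=sm.families.Poisson()).fit()

lr = reduced.deviance - saturated.deviance
print(lr, chi2.sf(lr, df=1))
```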

    7 Discussion

    This study investigated the effect of a healthcare robot's use of empathetic statements and head nodding on participant perceptions of robot empathy, trust, distrust, and satisfaction. Verbal empathy statements resulted in greater perceptions of the robot's empathy, trust, and satisfaction, and lower perceptions of distrust. Head nodding had no significant effects on empathy, trust, distrust, or satisfaction scores, and there were no significant interaction effects of verbal empathy and head nodding on any outcome. It is possible that robot head nodding has no effect on user perceptions of robot empathy in healthcare, but more research is needed on this behavior. Nodding may have stronger effects if it is directed at the user themselves, more frequent, or longer in duration. Neither empathetic statements nor head nodding had an effect on participants’ desire to interact with the robot face-to-face.
    The findings of the current study provide support for the Robot-Patient Communication model [23], which theorizes that robot communication behaviors can affect patient-related health outcomes. However, this was only true for empathetic statements, not head-nodding. Findings also align with research demonstrating that physician empathy can increase patient satisfaction [5, 10] and patient trust [11–13].
    The results also align with previous research on human-robot interactions, in which verbal robot empathy increased participant ratings of helpfulness and engagement, both of which are measured as part of the satisfaction scale [28]. They also align with findings that verbal robot empathy can increase user perceptions of robot trustworthiness [34] and of a robot's ‘feelings’ (i.e., empathy) [32].
    The current study found that distrust scores were significantly higher following conditions in which the robot did not use verbal empathy, compared to conditions in which it did. To our knowledge, this is the first study that has examined the use of robot empathy in relation to participant distrust.

    8 Limitations

    This research has several limitations. First, the study had an online experimental design utilizing video clips. Participants would likely have perceived that the interaction between the patient and robot was staged for the purposes of the study. Therefore, results may have differed had participants seen a natural interaction between a patient and robot. Second, viewing a human-robot interaction is different from engaging in one. Thus, results may have differed had participants been able to interact with the robot face-to-face.
    Third, participants were recruited online through AMT. Research has shown that many individuals undertaking tasks on AMT do so as a source of income, with most completing 20 to 100 tasks per week [47]. It may therefore be that individuals registered as workers on AMT are more experienced in undertaking research studies and approach surveys in a way that differs from the general population. Finally, AMT workers registered in the United States of America (USA) have been found to have a higher level of education than the general USA population [47]. The majority of participants were from the USA (N = 90, 90%). It may be that individuals with higher education are more open and positive towards interactions between robots and patients. This may limit the generalizability of the results.

    9 Conclusions and Suggestions for Future Work

    The current study provides preliminary support for the incorporation of verbal empathy as a key communication behavior in the design and implementation of healthcare robots in home and medical environments. Future research should consider replicating this experiment using an experimental design that utilizes face-to-face, human-robot interactions.
    Appendix
    A Scripts

    VERBAL Empathy WITH Head Nod

    VERBAL Empathy and NO Head Nod

    NO Verbal Empathy WITH Head Nod

    No Verbal Empathy and NO Head Nod

    References

    [1]
    John L. Coulehan, Frederic W. Platt, Barry E. Egener, Richard Frankel, Chen-Tan Lin, Beth Lown, and William H. Salazar. 2001. ‘Let me see if I have this right...’: Words that build empathy. Annals of Internal Medicine 135, 3 (Aug. 2001), 221–227.
    [2]
    Frans Derksen, Tim O. Hartman, Annelies Van Dijk, Annette Plouvier, Jozien Bensing, and Antoine Lagro-Janssen. 2016. Consequences of the presence and absence of empathy during consultations in primary care: A focus group study with patients. Patient Education and Counseling 100, 5 (Dec. 2016), 987–993.
    [3]
    Carl D. Marci, Jacob Ham, Erin Morgan, and Scott P. Orr. 2007. Physiologic correlates of perceived therapist empathy and social-emotional process during psychotherapy. The Journal of Nervous and Mental Disease 195, 2 (Feb. 2007), 103–111.
    [4]
    Mohammadreza Hojat, Daniel Z. Louis, Fred W. Markham, Richard Wender, Carol Rabinowitz, and Joseph S. Gonnella. 2011. Physicians' empathy and clinical outcomes for diabetic patients. Academic Medicine 86, 3 (Mar. 2011), 359–364.
    [5]
    Sophie Lelorain, Anne Bredart, Sylvie Dolbeault, and Serge Sultan. 2012. A systematic review of the associations between empathy measures and patient outcomes in cancer care. Psycho-Oncology 21, 12 (Dec. 2012), 1255–1264.
    [6]
    Simone Steinhausen, Oliver Ommen, Sonja Thum, Rolf Lefering, Thorsten Koehler, Edmund Neugebauer, and Holger Pfaff. 2014. Physician empathy and subjective evaluation of medical treatment outcomes in trauma surgery patients. Patient Education and Counseling 95, 1 (Apr. 2014), 53–60.
    [7]
    Moira Stewart, Judith Belle Brown, Heather Boon, Joanne Galajda, Leslie Meredith, and Mark Sangster. 1999. Evidence on patient-doctor communication. Cancer Prevention and Control 3, 1 (Feb. 1999), 25–30. PMID 10474749.
    [8]
    Tracey Parkin, Anne de Looy, and Paul Farrand. 2014. Greater professional empathy leads to higher agreement about decisions made in the consultation. Patient Education and Counseling 96, 2 (Aug. 2014), 144–150.
    [9]
    Stewart W. Mercer, Bhautesh D. Jani, Margaret Maxwell, Samuel Y. S. Wong, and Graham C. M. Watt. 2012. Patient enablement requires physician empathy: A cross-sectional study of general practice consultations in areas of high and low socioeconomic deprivation in Scotland. BMC Family Practice 13 (Feb. 2012), Article 6.
    [10]
    Frans Derksen, Jozien Bensing, and Antoine Lagro-Janssen. 2013. Effectiveness of empathy in general practice: A systematic review. British Journal of General Practice 63, 606 (Jan. 2013), e76–e84.
    [11]
    Mohammadreza Hojat, Daniel Z. Louis, Kaye Maxwell, Fred Markham, Richard Wender, and Joseph S. Gonnella. 2010. Patient perceptions of physician empathy, satisfaction with physician, interpersonal trust, and compliance. International Journal of Medical Education 1, 1 (Dec. 2010), 83–87.
    [12]
    Sung S. Kim, Stan Kaplowitz, and Mark V. Johnston. 2004. The effects of physician empathy on patient satisfaction and compliance. Evaluation & the Health Professions 27, 3 (Sep. 2004), 237–251.
    [13]
    Yu L. Lan and Yu H. Yan. 2017. The impact of trust, interaction, and empathy in doctor-patient relationship on patient satisfaction. Journal of Nursing and Health Studies 2, 1 (Jan. 2017), 1–7.
    [14]
    Jodi Halpern. 2003. What is clinical empathy? Journal of General Internal Medicine 18, 8 (Aug. 2003), 670–674.
    [15]
    Helen Riess and Gordon Kraft-Todd. 2014. E.M.P.A.T.H.Y.: A tool to enhance nonverbal communication between clinicians and their patients. Academic Medicine 89, 8 (Aug. 2014), 1108–1112.
    [16]
    Stewart W. Mercer and William J. Reynolds. 2002. Empathy and quality of care. British Journal of General Practice 52, Suppl. (Oct. 2002), S9–S13. PMID 12389763.
    [17]
    Janice M. Morse, Gwen Anderson, Joan L. Bottorff, Olive Yonge, Beverly O'Brien, Shirley M. Solberg, and Kathleen H. McIlveen. 1992. Exploring empathy: A conceptual fit for nursing practice? Journal of Nursing Scholarship 24, 4 (Dec. 1992), 273–280.
    [18]
    Adeline M. Deladisma, Mac C. Cohen, Amy Stevens, Peggy Wagner, Benjamin Lok, Thomas Bernard, Christopher Oxedine, Lori Schumacher, Kyle Johnsen, Robert Dickerson, Andrew Raij, Rebecca Wells, Margaret Duerson, Garrett Harper, and Scott Lind. 2007. Do medical students respond empathetically to a virtual patient? The American Journal of Surgery 193, 6 (Jun. 2007), 756–760.
    [19]
    Judith A. Hall, Jinni A. Harrigan, and Robert Rosenthal. 1995. Nonverbal behavior in clinician-patient interaction. Applied & Preventive Psychology 4, 1 (Nov. 1995), 21–37.
    [20]
    Carl D. Marci and Scott P. Orr. 2006. The effects of emotional distance on physiologic concordance and perceived empathy between patient and interviewer. Applied Psychophysiology and Biofeedback 31, 2 (Jun. 2006), 115–128.
    [21]
    Charles H. Griffith, John F. Wilson, Shelby Langer, and Steven A. Haist. 2003. House staff nonverbal communication skills and standardized patient satisfaction. Journal of General Internal Medicine 18, 3 (Mar. 2003), 170–174.
    [22]
    Nalini Ambady, Jasook Koo, Robert Rosenthal, and Carol H. Winograd. 2002. Physical therapists’ nonverbal communication predicts geriatric patients’ health outcomes. Psychology and Aging 17, 3 (Sep. 2002), 443–452.
    [23]
    Elizabeth Broadbent, Deborah L. Johanson, and Julie Shah. 2018. A new model to enhance robot-patient communication: Applying insights from the medical world. Proceedings of the International Conference on Social Robotics 11357 (Nov. 2018), 303–317.
    [24]
    Deborah L. Johanson, Ho Seok Ahn, Bruce A. MacDonald, Byeong-Kyu Ahn, JongYoon Lim, Eddie Hwang, Craig J. Sutherland, and Elizabeth Broadbent. 2019. The effect of robot attentional behaviours on user perceptions and behaviours in a simulated health care interaction: Randomized controlled trial. Journal of Medical Internet Research 21, 10 (Oct. 2019), e13667.
    [25]
    Deborah L. Johanson, Ho Seok Ahn, JongYoon Lim, Christopher Lee, Gabrielle Sebaratnam, Bruce A. MacDonald, and Elizabeth Broadbent. 2020. Use of humour by a healthcare robot positively affects user perceptions. Technology, Mind, and Behavior 1, 2 (Oct. 2020), online.
    [26]
    Deborah L. Johanson, Ho Seok Ahn, Craig J. Sutherland, Bianca Brown, Byeong-Kyu Ahn, and Elizabeth Broadbent. 2020. Smiling and use of first-name by a healthcare receptionist robot: Effects on user perceptions, attitudes, and behaviours. Paladyn, Journal of Behavioural Robotics 11, 1 (Jan. 2020), 40–51.
    [27]
    Andreea Niculescu, Betsy van Dijk, Anton Nijholt, Haizhou Li, and Swee L. See. 2013. Making social robots more attractive: The effects of voice pitch, humour, and empathy. International Journal of Social Robotics 5, 2 (Jan. 2013), 171–191.
    [28]
    Andre Pereira, Ginevra Castellano, Samuel Mascarenhas, Carlos Martinho, and Ana Paiva. 2011. Modelling empathy in social robotic companions. Advances in User Modelling 7138 (Jun. 2011), 135–147.
    [29]
    Iolanda Leite, Andre Pereira, Samuel Mascarenhas, Carlos Martinho, Rui Prada, and Ana Paiva. 2013. The influence of empathy in human-robot relations. International Journal of Human-Computer Studies 71, 3 (Jan. 2013), 250–260.
    [30]
    Berardina De Carolis, Stefano Ferilli, and Giuseppe Palestra. 2017. Stimulating empathetic behaviour in a social assistive robot. Multimedia Tools and Applications 76, 4 (Sep. 2017), 5073–5094.
    [31]
    Patricia Alves-Oliveira, Pedro Sequeira, Francisca S. Melo, Ginevra Castellano, and Ana Paiva. 2019. Empathetic robot for group learning: A field study. ACM Transactions on Human-Robot Interaction 8, 1 (Mar. 2019), Article 3.
    [32]
    Margaret J. Trost, Grace Chrysilla, Jeffrey I. Gold, and Maja Mataric. 2020. Socially-assistive robots using empathy to reduce pain and distress during peripheral IV placement in children. Pain Research and Management. (Apr. 2020). Retrieved January 30, 2021 from
    [33]
    Laurel D. Riek and Peter Robinson. 2008. Real time empathy: Facial mimicry on a robot. In Workshop on Affective Interaction in Natural Environments, the International ACM Conference on Multimodal Interfaces. Retrieved January 2, 2021 from https://www.cl.cam.ac.uk/~pr10/publications/affine08.pdf
    [34]
    Laurianne Charrier, Alexandre Galdeano, Amelie Cordier, and Mathieu Lefort. 2018. Empathy display influence on human-robot interactions: A pilot study. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, October 8–12, 2018, Madrid, Spain, 7–13. Retrieved January 30, 2021 from https://behaviors.ai/wp-content/uploads/sites/5/2019/05/Empathy-Display-Influence-o-n-Human-Robot-Interactions-a-Pilot-Study.pdf
    [35]
    Candace L. Sidner, Christopher Lee, Louis-Philippe Morency, and Clifton Forlines. 2006. The effect of head-nod recognition in human-robot conversation. In Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction, March 2–3, 2006, Salt Lake City, Utah. ACM, New York, NY, 290–296.
    [36]
    Joanna Hall, Terry Tritton, Angela Rowe, Anthony Pipe, Chris Melhuish, and Ute Leonards. 2014. Perceptions of own and robot engagement in human-robot interactions and their dependence on robotics knowledge. Robotics and Autonomous Systems 62, 3 (Mar. 2014), 392–399.
    [37]
    Roy L. Lewicki, Daniel J. McAllister, and Robert J. Bies. 1998. Trust and distrust: New relationships and realities. The Academy of Management Review 23, 3 (Jul. 1998), 438–458.
    [38]
    Steven Van de Walle and Frederique Six. 2014. Trust and distrust as distinct concepts: Why studying distrust in institutions is important. Journal of Comparative Policy Analysis: Research and Practice 16, 2 (Apr. 2014), 158–174.
    [39]
    Franz Faul, Edgar Erdfelder, Albert-Georg Lang, and Axel Buchner. 2007. G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods 39, 2 (May 2007), 175–191.
    [40]
    Morton J. Mendelson and Frances E. Aboud. 1999. Measuring friendship quality in late adolescents and young adults: McGill friendship questionnaires. Canadian Journal of Behavioural Science 31, 2 (Apr. 1999), 130–132.
    [41]
    Stewart W. Mercer, Margaret Maxwell, David Heaney, and Graham Watt. 2004. The consultation and relational empathy (CARE) measure: Development and preliminary validation and reliability of an empathy-based consultation process measure. Family Practice 21, 6 (Dec. 2004), 699–705.
    [42]
    Annemieke P. Bikker, Bridie Fitzpatrick, Douglas Murphy, and Stewart W. Mercer. 2015. Measuring empathic, person-centred communication in primary care nurses: Validity and reliability of the consultation and relational empathy (CARE) measure. BMC Family Practice, 16 (Oct. 2015), Article 149.
    [43]
    Markus Wirtz, Maren Boecker, Thomas Forkmann, and Melanie Neumann. 2011. Evaluation of the “Consultation and relational empathy” (CARE) measure by means of Rasch-analysis at the example of cancer patients. Patient Education and Counseling 82, 3 (Mar. 2011), 298–306.
    [44]
    Jiun-Yin Jian, Ann M. Bisantz, Colin G. Drury, and James Llinas. 2000. Foundations for an empirically determined scale of trust in automated systems. International Journal of Cognitive Ergonomics 4, 1 (Mar. 2000), 53–71.
    [45]
    Randall D. Spain, Ernesto A. Bustamante, and James P. Bliss. 2008. Towards an empirically developed scale for system trust: Take two. Human Factors and Ergonomics Society Annual Meeting Proceedings 52, 19 (Sep. 2008), 1335–1339.
    [46]
    Erin K. Chiou, Noah L. Schroeder, and Scotty D. Craig. 2020. How we trust, perceive, and learn from virtual humans: The influence of voice quality. Computers & Education 146 (Mar. 2020), 103756.
    [47]
    Gabriele Paolacci, Jesse Chandler, and Panagiotis G. Ipeirotis. 2010. Running experiments on Amazon Mechanical Turk. Judgment and Decision Making 5, 5 (Aug. 2010), 411–419. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1626226

    Published In

    ACM Transactions on Human-Robot Interaction, Volume 12, Issue 1 (March 2023), 454 pages. EISSN 2573-9522. DOI 10.1145/3572831.
    Publisher: Association for Computing Machinery, New York, NY, United States.

    Publication History

    Received: 29 January 2021; Revised: 12 November 2021; Accepted: 16 June 2022; Online AM: 21 July 2022; Published: 15 February 2023 (THRI Volume 12, Issue 1).

    Funding Sources

    • Technology Innovation Program funded by the Ministry of Trade, Industry & Energy (MI, Korea)
