
A Computational Model of Coupled Human Trust and Self-confidence Dynamics

Published: 23 June 2023

Abstract

Autonomous systems that can assist humans with increasingly complex tasks are becoming ubiquitous. Moreover, it has been established that a human’s decision to rely on such systems is a function of both their trust in the system and their own self-confidence as it relates to executing the task of interest. Given that both under- and over-reliance on automation can pose significant risks to humans, there is motivation for developing autonomous systems that could appropriately calibrate a human’s trust or self-confidence to achieve proper reliance behavior. In this article, a computational model of coupled human trust and self-confidence dynamics is proposed. The dynamics are modeled as a partially observable Markov decision process without a reward function (POMDP/R) that leverages behavioral and self-report data as observations for estimation of these cognitive states. The model is trained and validated using data collected from 340 participants. Analysis of the transition probabilities shows that the proposed model captures the probabilistic relationship between trust, self-confidence, and reliance for all discrete combinations of high and low trust and self-confidence. The use of the proposed model to design an optimal policy to facilitate trust and self-confidence calibration is a goal of future work.

1 Introduction

The complexity of human interactions with autonomous systems is increasing, as evidenced in applications including intelligent transportation systems [11], autonomous vehicles [10], military operations [28, 29], and medical imaging systems [8]. In turn, this necessitates a greater understanding of these interactions and how they affect outcomes in terms of metrics such as performance [19, 32, 46, 67]. It is well established that knowledge of a human's cognitive factors, or states, during their interactions with robots or other autonomous systems is vital to the design of effective human-automation interaction (HAI) [38, 57]. In particular, the cognitive factors of human trust and self-confidence play a substantial role in the human's willingness, and decision, to rely on automation [25, 31, 33, 34, 43, 44, 62]. Interestingly, the inclusion of automation support presents the potential for automation bias, an over-reliance on an automated decision aid in which the human attributes more authority to the automation than to other sources [55]. This often results in the human neglecting prior knowledge and contradictory evidence to follow incorrect advice. Consequences of improper reliance, relying too much or too little, can be dire [56]. For example, it is well supported that miscalibration of trust to automation capabilities is the cause of misuse and disuse of automation [44]. This motivates the need for calibration of cognitive factors to achieve appropriate reliance. For example, models enabling cognitive state estimation and prediction could be used by automation to appropriately trigger system responses through methods such as transparency adaptation, automation behavior adaptation, or flexible autonomy [5, 38]. However, accomplishing this often requires mathematical models of human cognitive state evolution that are suitable for algorithm design.
Several conceptual frameworks have been proposed to model HAI and specifically the role of different cognitive factors in human behavior and decision-making, particularly as it relates to human reliance on automation [12, 15, 21, 25, 26, 34, 44, 48, 50, 56, 71]. A majority of these frameworks are centered around human trust in automation [12, 15, 25, 34] and its effect on reliance. Trust is well established as a cognitive factor that can be defined in an HAI context as the belief that the automation will help the human achieve their goal(s) in an uncertain situation [44]. Early qualitative models of trust establish that human trust in automation is dependent on factors including interaction with the operator, context, automation performance, and the user interface [44]. Another widely referenced qualitative model by Hoff and Bashir [34] identifies three stratified layers of trust: dispositional trust—derived from individual characteristics and remains characteristically constant over time; situational trust—derived from the environment; and learned trust—derived from preexisting knowledge and the system’s performance. In turn, researchers have highlighted factors that affect the human’s trust, including system transparency [77], anthropomorphism [18], and automation reliability [13, 20]. However, in addition to trust, it has been established that the self-confidence of the human also affects their reliance on automation [16, 22, 42, 43, 53, 56, 73]. For example, over-reliance on automation can arise as a result of a human with low self-confidence in their skill to manually execute a particular task [56]. Additionally, biases in one’s self-confidence (over- or under-confidence) can lead to improper reliance [43].
There has been a significant effort over the last decade to develop computational models for predicting reliance behavior or the dynamics of trust and self-confidence. An overview of computational models of human trust or self-confidence is provided in Table 1. Notably, more recent models of trust are aimed at capturing the probabilistic nature of human behavior using a variety of mathematical techniques. Computational cognitive models of trust include auto-regressive moving average vector (ARMAV) derivations and other linear models [9, 35, 37, 42], decision analytical models based on decision or game theory [76], dynamic Bayesian networks [27, 30, 72], and partially observable Markov decision process (POMDP) models [5, 6, 17]. However, despite several conceptual frameworks supporting the relationship between trust and self-confidence, comparatively fewer computational models have been developed to capture this relationship [31, 43, 65]. Many of these models are based upon the "confidence vs. trust" hypothesis, originally developed in [43], which assumes that a human's reliance on a given system depends on the difference between the human's trust in the automation and their confidence in their own ability to execute the task manually, a quantity also known as relative trust [71]. In other words, this hypothesis states that a person whose self-confidence exceeds their trust in the automation will choose to perform the task manually, and vice versa. However, some researchers have published results that contradict this hypothesis [61, 75]. For example, in [75], the authors show that in a signal detection task, participants relied on the system instead of completing the task manually despite their trust in the system being lower than their self-confidence. Furthermore, the authors of [61] suggest that operators who have both high trust and high self-confidence tend to prefer a higher level of automation. Therefore, further investigation of the coupling between trust and self-confidence is needed to characterize how different combinations of these cognitive states affect human reliance decisions and subsequent performance. This "coupling" refers to models that capture the relationship between trust and self-confidence while also recognizing that these two individual states affect one another dynamically. While prior work has explored the coupled relationship between trust and workload [3], to the knowledge of the authors, there are no existing models that mathematically characterize the dynamic coupling between trust and self-confidence.
Table 1.
Papers | Trust | Self-confidence | T-SC Coupling | Probabilistic
Lee and Moray, 1992 [42] | \(\checkmark\) | | |
Lee and Moray, 1994* [43] | \(\checkmark\) | \(\checkmark\) | |
Gao and Lee, 2006* [31] | \(\checkmark\) | \(\checkmark\) | | \(\checkmark\)
Maanen et al., 2011 [47] | \(\checkmark\) | | |
Mikulski et al., 2012 [49] | \(\checkmark\) | | | \(\checkmark\)
Saeidi and Wang, 2015* [64] | \(\checkmark\) | \(\checkmark\) | |
Juvina et al., 2015 [40] | \(\checkmark\) | | | \(\checkmark\)
Xu and Dudek, 2015 [76] | \(\checkmark\) | | | \(\checkmark\)
Floyd et al., 2015 [27] | \(\checkmark\) | | |
Hu et al., 2016 [36] | \(\checkmark\) | | |
Akash et al., 2017 [7] | \(\checkmark\) | | |
Akash et al., 2018 [4] | \(\checkmark\) | | |
DeVisser et al., 2018 [19] | \(\checkmark\) | | |
Chen et al., 2018 [17] | \(\checkmark\) | | | \(\checkmark\)
Sadrfaridpour et al., 2018 [63] | \(\checkmark\) | | | \(\checkmark\)
Wagner et al., 2018 [72] | \(\checkmark\) | | |
Hu et al., 2019 [37] | \(\checkmark\) | | |
Juvina et al., 2019 [39] | \(\checkmark\) | | |
Saeidi and Wang, 2019 [65] | \(\checkmark\) | \(\checkmark\) | | \(\checkmark\)
Tao et al., 2020 [70] | | \(\checkmark\) | | \(\checkmark\)
Akash et al., 2020 [5] | \(\checkmark\) | | | \(\checkmark\)
Azevedo-Sa et al., 2020 [9] | \(\checkmark\) | | |
Soh et al., 2020 [69] | \(\checkmark\) | | | \(\checkmark\)
Table 1. Summary of Models of Trust or Self-confidence
“T-SC Coupling” refers to models that capture the relationship between trust and self-confidence while recognizing that these two individual states affect one another dynamically. *denotes models that are based upon the “confidence vs. trust” hypothesis.
The primary contribution of this article is a probabilistic discrete-state model of human trust and self-confidence dynamics as they relate to a human’s repeated interactions with automation assistance. An important feature of the model is its interpretability, which is achieved by first defining a model structure grounded in cognitive psychology and human factors literature, and then parameterizing it using human subject data collected in the context of a game-based task. The model considers coupling between the states themselves, as well as coupling between the human’s reliance on the automation assistance and the cognitive states. Furthermore, the model leverages both behavioral and self-report data for model parameter estimation, collected from 340 human subjects. It is shown that the model’s predictions are consistent with the findings of [61, 75] in that the “confidence vs. trust” hypothesis does not account for all scenarios of trust and self-confidence interactions. Instead, the coupled effect of human trust and self-confidence on reliance is captured by the state transition probabilities of the trained model and underscores the need for computational models that can be used for algorithm design for improved HAI.
The article is organized as follows. In Section 2, existing computational models of trust and self-confidence are presented and discussed in greater detail, along with a comparison to the proposed approach. In Section 3, the formulation of the trust and self-confidence modeling framework is presented. The human subject study, including experimental design and implementation, is outlined in Section 4. The modeling, training, and validation process is discussed in Section 5. The trained model is analyzed in Section 6, followed by a discussion of the implications of the results on the design of human-responsive automation and limitations of the work. Finally, conclusions and future research directions are discussed in Section 7.

2 Related Work

Before presenting our modeling approach, we describe in greater detail the existing computational models that relate human trust and self-confidence in HAI contexts. A review of applied quantitative models of trust is available in [66]. Lee and Moray [42] first developed an ARMAV time series model in 1992 to model trust as a function of performance efficiency and system faults. This model was extended in 1994 to capture the relationship between trust, self-confidence, and reliance on automation, after identifying that the use of automatic control was strongly correlated with the difference between users' trust and self-confidence. This model, given in Equation (1), predicts the operators' allocation strategy by means of the percentage of automatic control [43]. The model accounts for past automation dependence, the difference between the operators' trust and self-confidence states, and individual operator bias. The constant \(\phi_1\) captures the dependence of the current use of automation on its past use. The coefficients \(A_1\) and \(A_2\) weight the difference between trust and self-confidence (\(T-SC\)) and the individual bias toward manual operation, respectively. Normally distributed, independent fluctuations at time \(t\) are captured by \(a(t)\).
\begin{equation} \%Automatic(t) = \phi_1 \times Automatic(t-1) + A_1 \times (T-SC)(t) + A_2 \times Individual\ Bias + a(t) \end{equation}
(1)
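As a concrete illustration, the following Python sketch evaluates Equation (1) for a single time step. The coefficient values are arbitrary placeholders for illustration only, not the values identified in [43].

```python
import numpy as np

def percent_automatic(prev_automatic, trust, self_confidence,
                      phi1=0.8, a1=0.3, a2=5.0, bias=1.0, noise_std=2.0,
                      rng=np.random.default_rng(0)):
    """One step of the ARMAV allocation model in Equation (1).

    prev_automatic: % automatic control used in the previous period.
    trust, self_confidence: subjective ratings at the current period.
    phi1, a1, a2, bias, noise_std: placeholder coefficients (not the
    values identified by Lee and Moray).
    """
    a_t = rng.normal(0.0, noise_std)  # independent fluctuation a(t)
    return phi1 * prev_automatic + a1 * (trust - self_confidence) + a2 * bias + a_t

# Example: trust exceeding self-confidence pushes the allocation toward automation.
print(percent_automatic(prev_automatic=50.0, trust=80.0, self_confidence=60.0))
```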
Gao and Lee [31] developed an alternative model that, like that of Lee and Moray [43], utilizes the difference between trust and self-confidence to determine reliance behavior. The EDFT model is an extended decision field theory (DFT) model [14] used to characterize multiple decisions made sequentially, as opposed to the single decisions addressed by DFT. The EDFT model structure utilizes a closed-loop relationship between the context (autonomous \(C_A\) and manual \(C_M\)), the information available, the operator's belief about the context (autonomous \(B_{CA}\) and manual \(B_{CM}\)), cognitive state (trust T and self-confidence SC), intention (P), and decision (reliance). The preference for a mode, PR, is defined as the difference between trust and self-confidence (Equation (2)) and is updated in Equation (3) as a function of the context and a noise term \(\epsilon\) representing the uncertainty in trust or self-confidence. The model is then used to predict the user's decision to rely on automation or to use manual control when the preference evolves beyond a given threshold \(\theta\).
\begin{equation} PR(n) = T(n) - SC(n) \end{equation}
(2)
\begin{equation} PR(n) = (1 - s) \times PR(n - 1) + s \times [C_A(n - 1) - C_M(n - 1)] + \epsilon (n) \end{equation}
(3)
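A minimal sketch of the EDFT preference update in Equations (2) and (3) is shown below. The step size, decision threshold, noise level, and context signals are illustrative assumptions, not the values used in [31].

```python
import numpy as np

def simulate_edft(c_auto, c_manual, s=0.2, theta=0.5, noise_std=0.05, seed=0):
    """Evolve the preference PR(n) per Equation (3) and return mode decisions.

    c_auto, c_manual: arrays of context signals for automatic and manual control.
    s, theta, noise_std: illustrative step size, decision threshold, and noise level.
    """
    rng = np.random.default_rng(seed)
    pr = 0.0                       # initial preference, PR(0)
    decisions = []
    for n in range(1, len(c_auto)):
        pr = (1 - s) * pr + s * (c_auto[n - 1] - c_manual[n - 1]) + rng.normal(0, noise_std)
        if pr > theta:
            decisions.append("rely on automation")
        elif pr < -theta:
            decisions.append("manual control")
        else:
            decisions.append("no change")
    return decisions

# Example: automation context gradually outperforms manual control.
print(simulate_edft(c_auto=np.linspace(0.2, 1.0, 10), c_manual=np.full(10, 0.4)))
```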
In 2015, Saeidi and Wang [64] developed a performance-based computational trust and self-confidence model, TSC (Equation (4)), for autonomy allocation in a UAV context. The TSC model is a function of the human's (\(P_h\)) and robot's (\(P_r\)) performance at time step k, with performance level constants \(a_T\) and \(b_T\). Similar to the models of [31, 43], this model incorporates a difference between the cognitive states of human-to-robot trust and self-confidence to achieve optimal allocation, with consideration of the Yerkes-Dodson law [23] and robot performance decay. To reduce the effects of excessive human workload or poor robot performance, the level of autonomy is switched so as to maintain the difference TSC, given in Equation (4), within prescribed thresholds.
\begin{equation} TSC(k) = a_T P_r (k) - b_T P_h (k) \end{equation}
(4)
In 2019, Saeidi and Wang [65] improved upon their trust and self-confidence allocation strategy by incorporating a TSC-based switching control for manual and fully autonomous mode allocation. They model the difference between trust and self-confidence as a direct function of human and robot performance. This is similar to how Lee and Moray [42] model trust as a function of performance efficiency. Lee and Moray [43] denote T-SC as the difference between subjective ratings of trust and self-confidence, and Gao and Lee [31] treat trust and self-confidence as functions of the operator's belief in the automation and in their manual control capability. In contrast, the model proposed in this article treats task performance as an action, or input, that affects the dynamic evolution of the cognitive states of trust and self-confidence, along with other environmental and task context factors that are expanded upon in Section 3. Furthermore, the existing computational models that incorporate both trust and self-confidence share a key similarity in the basis of their frameworks: the "confidence vs. trust" hypothesis, which assumes that the human's reliance on a given system depends on the difference between the human's trust in the automation and their confidence in their individual ability. By instead incorporating coupling between the human's trust and self-confidence states, the model proposed in this article is unique in its ability to capture the relationship between trust and self-confidence while recognizing that these two individual states affect one another dynamically.

3 Model Definition

A POMDP is an extension of a Markov decision process (MDP) and is defined as a 7-tuple, (\(\mathcal {S},\mathcal {A},\mathcal {O},\mathcal {T},\mathcal {E},\mathcal {R},\mathcal {\gamma }\)), where \(\mathcal {S}\) is a finite set of states, \(\mathcal {A}\) is a finite set of actions, and \(\mathcal {O}\) is a finite set of observations [68]. The transition probability function \(\mathcal {T}\) governs the transition from the current state s to the next state \(s^{\prime }\), given the action a. The emission probability function \(\mathcal {E}\) governs the likelihood of observing o, given that the process is in state s. Finally, the reward function \(\mathcal {R}\) and discount factor \(\mathcal {\gamma }\) can be used to synthesize an optimal action (control) policy given the state dynamics. However, designing such a policy is outside the scope of this work; therefore, throughout the remainder of the article, we will refer to the 5-tuple (\(\mathcal {S},\mathcal {A},\mathcal {O},\mathcal {T},\mathcal {E}\)) as a POMDP/R.
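For readers who prefer code, a minimal container for the POMDP/R tuple is sketched below in Python. The array shapes follow the definitions above, and the concrete sizes (4 state combinations, 18 action combinations, 4 observation combinations) anticipate the dimensions reported in Section 5; the uniform probabilities are placeholders.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class POMDPR:
    """A POMDP without a reward function: the 5-tuple (S, A, O, T, E)."""
    states: list[str]          # S: finite set of states
    actions: list[str]         # A: finite set of actions (controllable and uncontrollable)
    observations: list[str]    # O: finite set of observations
    T: np.ndarray              # T[a, s, s']: transition probabilities, rows sum to 1
    E: np.ndarray              # E[s, o]: emission probabilities, rows sum to 1

    def validate(self) -> None:
        assert np.allclose(self.T.sum(axis=2), 1.0), "each T[a, s, :] must be a distribution"
        assert np.allclose(self.E.sum(axis=1), 1.0), "each E[s, :] must be a distribution"

# Placeholder dimensions: 4 state combinations, 18 action combinations, 4 observation combinations.
n_s, n_a, n_o = 4, 18, 4
model = POMDPR(
    states=[f"s{i}" for i in range(n_s)],
    actions=[f"a{i}" for i in range(n_a)],
    observations=[f"o{i}" for i in range(n_o)],
    T=np.full((n_a, n_s, n_s), 1.0 / n_s),
    E=np.full((n_s, n_o), 1.0 / n_o),
)
model.validate()
```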
A POMDP accounts for observability through hidden states; this is particularly useful in the modeling of human cognitive dynamics, which cannot always be directly measured or observed. The POMDP is used here to establish a gray-box modeling framework for estimation and prediction of human trust and self-confidence that can be parameterized using human subject data. This promotes interpretability of the model. The model definition is supported by existing literature establishing key relationships between the cognitive states of interest, available observations, and relevant actions, as described in more detail below. It is worth noting that POMDPs are often used in robotic contexts in which the states are the robot’s current position, the actions are the possible directions the robot can travel in, and the observation is the robot’s future position [58]. However, here we model a human’s cognitive behavior using a POMDP, as is done in [5]. To do so, we define relevant human cognitive factors as the states of the POMDP, actions are the measures that influence the cognitive states (namely characteristics of the automation’s input as well as the human’s experience with it), and observations are the observable characteristics of the human’s decision.
First, the set of states \(\mathcal {S}\) is defined as tuples containing the Trust state \(s_T\) and the Self-Confidence state \(s_{SC}\), in which each state is attributed either a low (\(\downarrow\)) or high (\(\uparrow\)) value. This discrete state definition has been employed in prior POMDP models of human cognitive dynamics and was shown to be sufficient for real-time trust calibration [5]. Next, the set of actions \(\mathcal {A}\) is defined as those variables that affect the state evolution. For HAI contexts, this includes the automation input (to the task environment) as well as the human’s experience with the automation. The latter is characterized here as the system performance, which reflects the calculated score earned by the participant in the previous trial. For example, a participant’s trust is a function of their performance in the previous trial. This means that transitions in the trust state are driven by the change in performance between the previous and current trials. It should be noted that because the model states are defined as factors of the human’s cognition, both uncontrollable and controllable actions affect the state dynamics [5]. The Automation Input \(a_A\) from the agent is controllable and belongs to the controllable action set \(\mathcal {A}_{c}\). However, the system Performance \(a_P\) is considered uncontrollable from the agent’s perspective as it is driven in part by the human’s behavior, and therefore belongs to the uncontrollable action set \(\mathcal {A}_{uc}\). In other words, the POMDP/R in this article is a 6-tuple, (\(\mathcal {S},\mathcal {A}_{uc},\mathcal {A}_{c},\mathcal {O},\mathcal {T},\mathcal {E}\)). Nevertheless, for consistency with the standard definition of a POMDP/R, we will combine the controllable and uncontrollable action sets into one action set such that \(\mathcal {A}=\lbrace \mathcal {A}_{uc},\mathcal {A}_{c}\rbrace\). Supported by the literature discussed in Section 1 citing the coupling between human trust and self-confidence, the states are assumed to be coupled according to the following transition probability functions: \(\mathcal {T}(s^{\prime }_T|s_T,s_{SC},a)\) and \(\mathcal {T}(s^{\prime }_{SC}|s_T,s_{SC},a)\).
Finally, the set of observations \(\mathcal {O}\) is defined as the observable characteristics of the human’s behavior and decision-making. As discussed earlier, it is well established in the literature that human reliance on automation is affected by both the human’s trust in the automation and their self-confidence [24, 44, 56]. In other words, reliance is specifically defined as an observation (as opposed to an action) in the POMDP/R, with the emission probability function for reliance defined as \(\mathcal {E}(o_R|s_T,s_{SC})\). It is worth noting that although a user’s past reliance decision could be construed as a predictor of their future trust in the automation, it is their performance resulting from a reliance decision that actually influences their state of trust. This further underscores the choice of reliance as an observation and performance (as a proxy of experience with the automation) as an uncontrollable action.
While a POMDP/R can be trained with fewer observations than states, doing so makes interpretation of the states difficult. Instead, self-reported self-confidence is used as a second observation for estimating the human’s self-confidence state; this is described by the following emission probability function: \(\mathcal {E}(o_{srSC}|s_{SC})\). The use of self-reported self-confidence here is supported by its use in work concerning the application of intelligent tutoring system (ITS) automation to train a self-confidence model [70]. This creates asymmetry in the emission probability function that aids interpretability of the model, as discussed in Section 5. The proposed POMDP model definition is summarized in Table 2 and depicted in Figure 1. For ease of notation, we will denote uncontrollable actions \(\mathcal {A}_{uc}\) as \(A_p\) such that \(a_{uc,P} = a_{P}\) and controllable actions \(\mathcal {A}_{c}\) as \(A_A\) such that \(a_{c,A}=a_{A}\) going forward.
Fig. 1.
Fig. 1. A representation of the proposed POMDP/R model of trust and self-confidence. The transition probabilities of trust and self-confidence depend on the previous states of trust and self-confidence. The reliance observation is dependent on both the trust state and self-confidence state. However, the self-reported self-confidence observation is dependent on only the self-confidence state.
Fig. 2.
Fig. 2. A screenshot of the web-deployed game platform in which the participant must guide a penguin across the computer screen to its home while avoiding obstacles placed in its path.
Table 2.
Component | Set Definition | Values
States \(s \in \mathcal{S}\) | \(\mathcal{S} = \lbrace \text{Trust} \ s_T, \ \text{Self-confidence} \ s_{SC} \rbrace\) | \(s_T \in T = \lbrace \text{Low Trust} \ T{\downarrow}, \ \text{High Trust} \ T{\uparrow} \rbrace\); \(s_{SC} \in SC = \lbrace \text{Low Self-confidence} \ SC{\downarrow}, \ \text{High Self-confidence} \ SC{\uparrow} \rbrace\)
Actions \(a \in \mathcal{A}\) | \(\mathcal{A} = \lbrace \mathcal{A}_c, \mathcal{A}_{uc} \rbrace\), with \(\mathcal{A}_{uc} := \text{Performance} \ a_{uc,P}\) and \(\mathcal{A}_{c} := \text{Automation Input} \ a_{c,A}\) | \(a_{uc,P} \in \mathcal{A}_{uc} = \lbrace \text{Performance Deterioration} \ P^-, \ \text{Performance Improvement} \ P^+ \rbrace\); \(a_{c,A} \in \mathcal{A}_{c}\): context specific
Observations \(o \in \mathcal{O}\) | \(\mathcal{O} = \lbrace \text{Reliance} \ o_R, \ \text{Self-reported Self-confidence} \ o_{srSC} \rbrace\) | \(o_R \in R = \lbrace \text{No Reliance} \ R_{NR}, \ \text{Reliance} \ R_R \rbrace\); \(o_{srSC} \in srSC = \lbrace \text{Low Self-confidence} \ srSC{\downarrow}, \ \text{High Self-confidence} \ srSC{\uparrow} \rbrace\)
Table 2. Definition of the Human Trust–Self-confidence (T-SC) POMDP/R Model
Human trust and self-confidence are modeled as hidden states. The hidden states are affected by actions corresponding to the user’s performance and the input provided by the automation. The observable characteristics of the user’s chosen reliance and self-reported self-confidence are modeled as the observations of the POMDP/R.
Using the transition and emission probabilities, the probability distribution over the states, otherwise known as the belief state \(b(s)\), can be calculated using Equation (5), in which \(P(\cdot)\) denotes probability.
\begin{equation} b^{\prime }(s^{\prime }) = P(s^{\prime }|o,a,b(s)) = \frac{P(o|s^{\prime },a) \sum \nolimits _{s \in S}^{} P(s^{\prime }|s,a)b(s)}{\sum \nolimits _{s^{\prime } \in S}^{} P(o|s^{\prime },a) \sum \nolimits _{s \in S}^{} P(s^{\prime }|s,a) b(s)} \end{equation}
(5)
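The belief update in Equation (5) can be implemented in a few lines. The sketch below assumes the transition and emission arrays are indexed as T[a, s, s'] and E[s', o], consistent with the placeholder container sketched earlier in this section.

```python
import numpy as np

def belief_update(b, a, o, T, E):
    """Bayesian belief update of Equation (5).

    b: current belief over states, shape (n_s,)
    a: index of the action taken
    o: index of the observation received
    T: transition probabilities, T[a, s, s']
    E: emission probabilities, E[s', o]
    """
    predicted = T[a].T @ b            # sum_s P(s'|s,a) b(s), for every s'
    unnormalized = E[:, o] * predicted
    return unnormalized / unnormalized.sum()

# Example with the 4 T-SC state combinations and uniform placeholder parameters.
n_s, n_a, n_o = 4, 18, 4
T = np.full((n_a, n_s, n_s), 1.0 / n_s)
E = np.full((n_s, n_o), 1.0 / n_o)
b0 = np.array([0.25, 0.25, 0.25, 0.25])
print(belief_update(b0, a=0, o=1, T=T, E=E))   # stays uniform for uniform parameters
```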

4 Human Subject Study

In Section 4.1, the design and intent of the human subject study for model training data collection is described. The implementation of the study is discussed in Section 4.2, and analysis of behavioral and self-report data collected from the experiment is presented in Section 4.3.

4.1 Study Design

Human subject data is collected in the context of a game-based task to parameterize the human trust–self-confidence (T-SC) model. The experimental platform is an online obstacle avoidance game in which participants must perform the task of maneuvering an avatar (depicted as a penguin) across the screen in the shortest amount of time while avoiding collisions with six obstacles. The participants are also informed that an automation assistant is available to help them play the game. Note that in reality, the “automation assistant” simply scales the user’s mouse input by a pre-assigned parameter \(\theta\). The scaling factor \(\theta\) can take on values belonging to any one of three sets: \({\Theta _L} = \lbrace 0.7, 0.8, 0.9\rbrace\), \({\Theta _M} = \lbrace 1.0, 1.1, 1.2\rbrace\), and \({\Theta _H} = \lbrace 1.3, 1.4, 1.5\rbrace\), where \(\theta \in \Theta _j\) for \(j=\lbrace L,M,H\rbrace\). In particular, when \(\theta \lt 1\), the user will experience an attenuation of their mouse input, and when \(\theta \gt 1\), their input will be amplified. In order to obtain training data that is agnostic to the dynamics of a specific automation assistance algorithm, the value of \(\theta\) experienced by each participant is assigned to them according to the between-subjects study design described below. In other words, the scaling factor is not responsive to the human’s performance. Rather, the goal of the experiment is to obtain a set of training data that captures the effect of a range of values of the automation assistant’s input on participants’ behavior. Whether a particular value of \(\theta\) helps or hinders the participant is a function of their skill level. For example, automation input values belonging to \(\Theta _L\) scale the user’s input down. While this may be beneficial for a user whose mouse input is over-reacting, the assistance may not help a user who is already playing well. This is by design to stimulate changes in the user’s trust and, in turn, reliance on the automation assistance. For example, we expect that a user whose performance is being aided by the automation assistance will choose to continue to rely on it, whereas a user who finds the automation to be inhibiting their performance will do the opposite. Stimulating both increases and decreases in trust is critical for collecting training data that covers the state space of interest—in this case, all discrete combinations of low and high trust and self-confidence.
In the game shown in Figure 2, the penguin avatar moves at a constant speed, and its position is controlled by the participant’s mouse movement. The penguin’s x and y positions are governed by the following dynamical equations:
\begin{equation} \begin{split} x_{t+1} = x_t + \Delta t V \cos (\theta _k u_t) + \phi (y_t) \\ y_{t+1} = y_t + \Delta t V \sin (\theta _k u_t), \end{split} \end{equation}
(6)
where \([x_t, y_t]^T \in \mathbb {R}^2\) are the penguin’s position at time t, \(u_t \in \mathbb {R}\) is the participant’s (mouse) input, and \(\theta _k \in \mathbb {R}\) is the scaling factor provided by the autonomous assistant in the \(k^{th}\) trial for \(k=1,\ldots ,10\). During the practice round, participants do not receive any input scaling, so \(\theta _0 = 1\). The game update discrete time interval is \(\Delta t\), V is the constant speed, and \(\phi (y)\) is an added “wind” effect that increases in the upward vertical direction and is defined relative to the maximum vertical position, \(y_{max}\). Table 3 provides the specific parameter values used in the experiment. It is important to note that the automation never takes control away from the participant.
Table 3.
Parameter | Value
\(x_0\) | [0, 200]
\(V\) | 75 pixel/sec
\(\Delta t\) | 0.02 sec
\(\theta _0\) | 1
\(\phi (y)\) | 0.75 for \(y\lt \frac{1}{3}{y_{max}}\); 1.25 for \(\frac{1}{3}{y_{max}} \le y\lt \frac{2}{3}{y_{max}}\); 1.75 for \(y \ge \frac{2}{3}{y_{max}}\)
Table 3. Game Parameters
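To make the dynamics in Equation (6) and Table 3 concrete, the following Python sketch advances the penguin one time step. The piecewise wind term follows the \(\phi(y)\) values in Table 3, while the maximum vertical position and the constant mouse input are made-up values for illustration.

```python
import numpy as np

# Game parameters from Table 3; Y_MAX is an assumed screen height.
V, DT, Y_MAX = 75.0, 0.02, 600.0   # speed (pixel/sec), time step (sec)

def wind(y):
    """Piecewise 'wind' effect phi(y) from Table 3."""
    if y < Y_MAX / 3:
        return 0.75
    elif y < 2 * Y_MAX / 3:
        return 1.25
    return 1.75

def step(x, y, u, theta_k):
    """One update of the penguin position per Equation (6)."""
    x_next = x + DT * V * np.cos(theta_k * u) + wind(y)
    y_next = y + DT * V * np.sin(theta_k * u)
    return x_next, y_next

# Example: constant upward mouse input, automation scaling theta_k = 1.3.
x, y = 0.0, 200.0
for _ in range(50):
    x, y = step(x, y, u=0.4, theta_k=1.3)
print(round(x, 1), round(y, 1))
```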
A between-subjects study is designed to elicit changes in each participant’s trust in the automation assistant and confidence in their ability to play the game (i.e., their self-confidence) over the course of 10 game trials. Figure 3 shows the sequence of events for each trial in the user study. Participants are asked to decide whether to rely or not rely on the automation assistant prior to every trial, as shown in Figure 4(a). Regardless of their reliance choice, prior to the first trial, each participant is randomly assigned to one of the three \({\Theta }\) sets. Then, for their first five trials, a single \(\theta _1\) value is randomly selected within the given \({\Theta }\) set. In this way, each participant experiences a constant input from the autonomous assistant for five repeated trials. Note that the participant is not informed of the specific \(\theta\) value that is being applied to their input; they only know that the automation assistance is available and that they can turn it on or off. Moreover, for any game trial for which they choose not to rely on the automation assistant, \(\theta _k=1\). Similarly to [43], after each trial, participants are prompted to rate their trust (in the automation assistant) and self-confidence as shown in Figure 4(b). Participants are provided with definitions of the cognitive states prior to rating their trust and self-confidence on a numerical scale of 0–100. Trust is defined as assured reliance on the character, ability, strength, or truth of someone or something. Self-confidence is defined as confidence in oneself and in one’s powers and abilities. While both trust and self-confidence self-report data are collected, only self-confidence self-report is used explicitly as an observation in the POMDP/R model as described in Section 3. Self-reported trust is utilized in validating the model’s predictive capability.
Fig. 3.
Fig. 3. The sequence of events in the experiment. The participant completes a practice trial prior to completing 10 trials of the game.
Fig. 4.
Fig. 4. Example screenshots of the survey questions participants answer after each trial of the web-deployed experiment platform. (a) The reliance selection page in which participants are asked to select to either disable or enable the automation assistance. (b) The survey questions in which participants are asked to rate their trust and self-confidence on a numerical scale from 0 to 100.
At the sixth trial, a step change in the \({\Theta }\) set is introduced. The purpose of this step change is to further stimulate changes in the participant's trust or self-confidence. Note that to avoid introducing too large of a step change for some participants relative to others, no participant for whom \(\theta _k \in {\Theta _L} \cup {\Theta _H}\) for trials \(k= \lbrace 1,2,3,4,5\rbrace\) experiences \(\theta _k\in {\Theta _L} \cup {\Theta _H}\) for trials \(k = \lbrace 6,7,8,9,10\rbrace\). The choice of introducing the step change after five trials was based on analysis of data collected through pilot experiments. For the remaining five trials, a single \(\theta _2\) value is then randomly selected within the new \(\Theta\) set. Again, \(\theta _k=1\) for any trial k during which the participant chooses not to rely on the automation assistant.
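One way to implement this assignment constraint is sketched below. The assignment logic is our reading of the study design (participants who start in \(\Theta_L\) or \(\Theta_H\) switch to \(\Theta_M\), and participants who start in \(\Theta_M\) switch to either extreme set); it is not code from the study itself.

```python
import random

THETA = {"L": [0.7, 0.8, 0.9], "M": [1.0, 1.1, 1.2], "H": [1.3, 1.4, 1.5]}

def assign_thetas(seed=None):
    """Pick theta values for trials 1-5 and 6-10 under the step-change constraint."""
    rng = random.Random(seed)
    first_set = rng.choice(["L", "M", "H"])
    # Avoid L<->H jumps: extreme sets step to M, and M steps to an extreme set.
    second_set = "M" if first_set in ("L", "H") else rng.choice(["L", "H"])
    theta_1 = rng.choice(THETA[first_set])    # constant for trials 1-5
    theta_2 = rng.choice(THETA[second_set])   # constant for trials 6-10
    return [theta_1] * 5 + [theta_2] * 5

print(assign_thetas(seed=1))
```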

4.2 Implementation

A total of 367 individuals participated in, and completed, the study. These participants were recruited from the Amazon Mechanical Turk platform [1] and completed the study online. To ensure the collection of quality data, the following criteria were applied to participant selection: participants must reside in the United States, have completed more than 500 Human Intelligence Tasks (HITs), and have a minimum HIT approval rate of 95%. Each participant provided their consent electronically and was compensated US$1.34 for their participation. The Institutional Review Board at Purdue University approved the study. Due to the online nature of the study and the lack of participant supervision, it is assumed that some participants were not adequately engaged in the study. This was reflected in their unusually low game completion times and high rates of collisions. To remove outlying participants, the data from participants who had at least three trials with game times below the 25th percentile and four or more collisions were removed. These conditions were chosen because they suggested that the participant dragged the penguin across the screen without attempting to avoid the obstacles. As a result, 27 participants were removed from the dataset. The resulting dataset consists of 340 participants from the United States (145 females, 190 males, 5 preferred not to disclose or did not identify within either gender), ranging in age from 18 to 77 (mean 39.0 and standard deviation 11.9; two participants did not disclose their age).
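A pandas sketch of this exclusion rule is shown below. The column names are hypothetical, and the 25th-percentile threshold is computed over all trials as we understand the description.

```python
import pandas as pd

def filter_participants(trials: pd.DataFrame) -> pd.DataFrame:
    """Drop participants flagged as disengaged.

    trials: one row per (participant_id, trial) with hypothetical columns
    'participant_id', 'game_time', and 'collisions'.
    A participant is removed if they have >= 3 trials that are both below the
    25th percentile of game time and have >= 4 collisions.
    """
    time_q25 = trials["game_time"].quantile(0.25)
    flagged = (trials["game_time"] < time_q25) & (trials["collisions"] >= 4)
    flags_per_participant = flagged.groupby(trials["participant_id"]).sum()
    keep = flags_per_participant[flags_per_participant < 3].index
    return trials[trials["participant_id"].isin(keep)]
```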

4.3 Behavioral and Self-reported Data

Prior to training the POMDP/R model, the self-reported data is analyzed to identify behavioral trends. First, each participant's trust and self-confidence are labeled as high or low by comparing the participant's self-reported value to the 50th percentile of all data. In Figure 5, the average number of collisions across all data points pertaining to each self-reported state combination is plotted on the left y-axis. On the right y-axis, the reliance rate is plotted, computed as the number of instances in which participants chose to rely divided by the total number of data points in each self-reported state combination. Clear distinctions exist between the cognitive state combinations in terms of both the number of collisions and the reliance rate associated with each. From Figure 5, it can be seen that the state combinations \(T{\downarrow }SC{\downarrow }\) and \(T{\uparrow }SC{\downarrow }\) correspond to poorer performance, i.e., more collisions on average. The established relationship between trust and reliance captured in previously published trust models is further underscored in Figure 5: when trust is high, the reliance rate is high, and vice versa. However, as expected, the addition of self-confidence affects the user's likelihood of relying on the autonomous assistant. When trust is low, users with low self-confidence are 12% more likely to rely on the autonomous assistant than those with high self-confidence. It should also be noted that when both trust and self-confidence are high, \(T{\uparrow }SC{\uparrow }\), it would have been expected that users would not rely on the assistant as often. However, participants who reported being in the \(T{\uparrow }SC{\uparrow }\) state demonstrated a high reliance rate and a low number of collisions. Finally, the data show an almost inverse relationship between the \(T{\uparrow }SC{\uparrow }\) and \(T{\downarrow }SC{\downarrow }\) states. These findings are used to aid model state sorting, as discussed in Section 5.
Fig. 5.
Fig. 5. Average collisions (left y-axis) and reliance rate (right y-axis) corresponding to the four combinations of trust and self-confidence, \(T{\downarrow }SC{\downarrow }\), \(T{\downarrow }SC{\uparrow }\), \(T{\uparrow }SC{\downarrow }\), and \(T{\uparrow }SC{\uparrow }\), as self-reported by participants. The error bars of the average collisions represent the standard error of the mean across participants.
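The quantities plotted in Figure 5 can be reproduced from trial-level data with a short pandas aggregation; the column names below are hypothetical stand-ins for whatever the dataset actually uses.

```python
import pandas as pd

def state_summaries(trials: pd.DataFrame) -> pd.DataFrame:
    """Average collisions and reliance rate per self-reported T-SC combination.

    trials: rows with hypothetical columns 'trust', 'self_confidence'
    (0-100 ratings), 'collisions', and 'relied' (boolean).
    """
    trust_high = trials["trust"] >= trials["trust"].median()
    sc_high = trials["self_confidence"] >= trials["self_confidence"].median()
    labels = trust_high.map({True: "T_hi", False: "T_lo"}) + sc_high.map({True: "_SC_hi", False: "_SC_lo"})
    return trials.groupby(labels).agg(
        avg_collisions=("collisions", "mean"),
        reliance_rate=("relied", "mean"),
    )
```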

4.4 Linear Regression Analysis

In order to further investigate the relationship between performance metrics and cognitive states, multi-variable linear regression analyses were applied to the data, with the self-reported numerical self-confidence and trust ratings each serving in turn as the response variable and the other cognitive state included among the regressors.
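A sketch of the self-confidence regression (Table 4) using statsmodels is shown below. The data frame columns and the dummy coding of collisions are assumptions about how such an analysis could be set up, not the authors' code.

```python
import pandas as pd
import statsmodels.formula.api as smf

def fit_self_confidence_model(trials: pd.DataFrame):
    """Regress numerical self-confidence on trial, trust, collisions, time, and automation.

    trials: hypothetical columns 'self_confidence', 'trial', 'trust',
    'collisions' (0-6, treated as categorical), 'time', 'automation_enabled'.
    """
    model = smf.ols(
        "self_confidence ~ trial + trust + C(collisions) + time + C(automation_enabled)",
        data=trials,
    )
    result = model.fit()
    return result  # result.params, result.pvalues, result.rsquared mirror Table 4
```

Swapping the roles of trust and self-confidence in the formula gives the analogous trust regression summarized in Table 5.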
Performance Metrics. In Table 4, the estimated values show that as collisions and game time decrease, self-confidence increases. While all performance factors are significant for self-confidence, the categorical collision factors are far more significant than game time. This may be because, as users progress through the trials and try to improve, avoiding obstacles is their priority. The intercept in Table 4 indicates that when automation is disabled and a user has not collided with any obstacles, the baseline numerical self-confidence is 65.88. In the trust regression analysis in Table 5, the estimates show that trust decreases when users collide with four to six obstacles and increases when users collide with one to three obstacles. Additionally, as game time increases, trust increases. Collisions are not found to be as significant to trust, whereas game time is; this may be because avoiding more obstacles typically implied that more time was spent navigating the penguin avatar across the screen. Additionally, the intercept in Table 5, which corresponds to a user who avoids all obstacles with automation disabled, is highly significant to trust. Overall, these results suggest that self-confidence and trust have a positive relationship with absolute performance metrics as well as with improving performance metrics.
Table 4.
Regressor | Estimate | p-value | Significance
Intercept | 65.8800 | \(6.5870\text{e}-233\) | ***
Trial | 0.5140 | \(1.095\text{e}-04\) | ***
Trust | 0.2377 | \(3.6684\text{e}-63\) | ***
1 Collision | \(-7.5242\) | \(6.2399\text{e}-15\) | ***
2 Collisions | \(-15.1810\) | \(6.3625\text{e}-37\) | ***
3 Collisions | \(-18.1590\) | \(5.8977\text{e}-39\) | ***
4 Collisions | \(-23.4750\) | \(2.7911\text{e}-41\) | ***
5 Collisions | \(-24.7240\) | \(4.5472\text{e}-36\) | ***
6 Collisions | \(-21.3430\) | \(6.1672\text{e}-18\) | ***
Time | \(-0.2015\) | 0.0171 | *
Automation Enabled | \(-4.8704\) | \(6.5045\text{e}-06\) | ***
\(R^{2}\) | 0.213 | |
Adjusted \(R^{2}\) | 0.211 | |
Table 4. Estimator, P-values and Significance of Self-confidence Linear Regression Analysis
Note: *\(p \lt 0.05\), **\(p \lt 0.01\), ***\(p \lt 0.001\).
Table 5.
Regressor | Estimate | p-value | Significance
Intercept | 10.1720 | \(1.1573\text{e}-04\) | ***
Trial | \(-0.4378\) | 0.0056 | **
Self-Confidence | 0.3353 | \(3.6684\text{e}-63\) | ***
1 Collision | 1.0460 | 0.3634 |
2 Collisions | 1.6460 | 0.2521 |
3 Collisions | 0.2924 | 0.85124 |
4 Collisions | \(-1.6333\) | 0.4366 |
5 Collisions | \(-1.4227\) | 0.5481 |
6 Collisions | \(-3.4316\) | 0.2453 |
Time | 0.3073 | 0.0022 | *
Automation Enabled | 28.7120 | \(7.2951\text{e}-198\) | ***
\(R^{2}\) | 0.310 | |
Adjusted \(R^{2}\) | 0.308 | |
Table 5. Estimator, P-values, and Significance of Trust Linear Regression Analysis
Note: *\(p \lt 0.05\), **\(p \lt 0.01\), ***\(p \lt 0.001\).
Cognitive States. In both analyses, the corresponding cognitive state is also highly significant. In other words, self-confidence is a significant factor of trust, and vice versa. Both numerical self-confidence and trust take on values from 0 to 100. Therefore, based on the resulting regression estimates, a numerical trust rating of 100 translates to 23.77 points of self-confidence, and a numerical self-confidence rating of 100 translates to 33.53 points of trust. This is interesting because it not only quantitatively suggests that self-confidence and trust affect each other, but also indicates that the relationship between trust and self-confidence is positive and roughly proportional. If trust and self-confidence are proportional to one another, the "confidence vs. trust" hypothesis may not be able to adequately predict reliance behavior when both cognitive states are high or when both are low, further supporting the need for a model that captures the nuances between trust and self-confidence. The proposed model is discussed in the next section.

5 Model Training and Validation

The adaptation of the model to the specific HAI context considered in this article is first discussed in Section 5.1. This is followed by a description of the methods used for model training (Section 5.2) and model validation (Section 5.3).

5.1 Model Definition

Recall the T-SC cognitive state model defined in Table 2. In the context of the experimental platform used for data collection, there are two relevant performance metrics: the number of collisions between the penguin and the obstacles, and the time taken to navigate the penguin to its home in the game environment. Therefore, the uncontrollable performance action set \(\mathcal {A}_{uc,P}\) is further divided into tuples containing the number of Collisions \(a_C\) and Game Time \(a_G\), as shown in Equation (7). Additionally, the automation input \(a_A\) is the assistance value \(\theta\), discretized into the sets \(\Theta _L,\Theta _M\), and \(\Theta _H\) as described in Section 4 and referenced in Equation (8). Recall that \(a_A\) is a controllable action in the context of the POMDP/R.
\begin{equation} \begin{split} \mathcal {A}_{uc} = \lbrace a_C, a_G\rbrace \\ a_C \in {C} = \lbrace \text{Collision Decrease} \ C^-,\ \text{Collision No Change} \ C^0,\ \text{Collision Increase} \ C^+\rbrace \\ a_G \in {G} = \lbrace \text{Game Time Decrease} \ G^-,\ \text{Game Time Increase} \ G^+\rbrace \end{split} \end{equation}
(7)
\begin{equation} a_{A} \in \mathcal {A}_{c} = \lbrace \Theta _L, \Theta _M, \Theta _H \rbrace \end{equation}
(8)
The transition probabilities for trust \({\mathcal {T}}_T: {\mathcal {S}} \times {T} \times {\mathcal {A}} \rightarrow [0,1]\) and self-confidence \({\mathcal {T}}_{SC}: {\mathcal {S}} \times {SC} \times {\mathcal {A}} \rightarrow [0,1]\) are each represented by \(4\times 2\times 18\) matrices that map the probability of transitioning from combinations of states \(\mathcal {S}\) of trust \(s_T\in {T}\) and self-confidence \(s_{SC}\in {SC}\) to the next states of trust and self-confidence, respectively, given an action \(a\in {\mathcal {A}}\). The state combination transition probabilities are the product of the individual transition probabilities of trust and self-confidence, as given by
\begin{align} \mathcal {T}(s^{\prime }|s,a) = \mathcal {T}(s^{\prime }_T|s_T,s_{SC},a)\mathcal {T}(s^{\prime }_{SC}|s_T,s_{SC},a). \end{align}
(9)
The emission probability function for reliance \({\mathcal {E}}_R: {\mathcal {S}} \times {R} \rightarrow [0,1]\) is represented by a \(4\times 2\) matrix that maps the probability of reliance on automation \(o_R\in R\) given the current trust and self-confidence belief states. The emission probability function for self-reported self-confidence \({\mathcal {E}}_{srSC}: {SC} \times {srSC} \rightarrow [0,1]\) is represented by a \(2\times 2\) matrix that maps the probability of low or high self-reported self-confidence \(o_{srSC} \in {srSC}\) given the current self-confidence state. The overall emission probabilities are the product of the individual reliance and self-reported self-confidence emission probabilities, given by
\begin{align} \mathcal {E}(o|s) = \mathcal {E}(o_R|s_{T},s_{SC})\mathcal {E}(o_{srSC}|s_{SC}). \end{align}
(10)
Finally, the initial state probabilities for trust \({\pi }_{T}: {T} \rightarrow [0,1]\) and self-confidence \({\pi }_{SC}: {SC} \rightarrow [0,1]\) are both given by \(1\times 2\) matrices that represent the probability of the initial trust state \(s_T\) and self-confidence state \(s_{SC}\), respectively. As shown in Figure 1, the reliance observation is dependent on both the current trust and self-confidence states. However, the self-reported self-confidence observation is only dependent on the current self-confidence state. In total, there are 152 effective parameters. There are 18 combinations of actions, consisting of the three collision performance distinctions, two game time performance distinctions, and three automation input value distinctions. There are four combinations of states, consisting of combinations of low and high levels of trust and self-confidence. Finally, there are four combinations of observations, consisting of the two levels of self-reported self-confidence as well as the two levels of reliance.
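A sketch of how the parameter arrays in this section can be stored and combined per Equations (9) and (10) is given below. The uniform values are placeholders that training would overwrite, and the state-combination ordering is the one used in Section 5.2.

```python
import numpy as np

N_A = 18  # 3 collision changes x 2 game-time changes x 3 automation input sets

# Trust and self-confidence transition probabilities, each indexed as
# [action, (s_T, s_SC) combination, next individual state (low/high)].
T_trust = np.full((N_A, 4, 2), 0.5)
T_sc = np.full((N_A, 4, 2), 0.5)

# Emission probabilities: reliance depends on the full state combination;
# self-reported self-confidence depends only on the self-confidence state.
E_rel = np.full((4, 2), 0.5)    # [(s_T, s_SC), {no reliance, reliance}]
E_srsc = np.full((2, 2), 0.5)   # [s_SC, {low, high} self-report]

# State combinations ordered as (T_lo,SC_lo), (T_lo,SC_hi), (T_hi,SC_lo), (T_hi,SC_hi).
SC_OF = np.array([0, 1, 0, 1])  # self-confidence component of each combination

def joint_transition(a):
    """T(s'|s,a) = T_T(s'_T|s,a) * T_SC(s'_SC|s,a), Equation (9); shape (4, 4)."""
    return np.einsum("si,sj->sij", T_trust[a], T_sc[a]).reshape(4, 4)

def joint_emission():
    """E(o|s) = E_R(o_R|s) * E_srSC(o_srSC|s_SC), Equation (10); shape (4, 4)."""
    return np.einsum("sr,sc->src", E_rel, E_srsc[SC_OF]).reshape(4, 4)

print(joint_transition(0).sum(axis=1), joint_emission().sum(axis=1))  # rows sum to 1
```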
It should be noted that a limitation of the model is that the action space does not consider absolute performance. Ideally, the performance actions would be combinations of both change in performance and absolute performance. However, this would significantly increase the number of parameters in the model and, in turn, make model training computationally expensive. An analysis of models trained with performance defined either in absolute terms or as a delta between trials showed that the POMDP/R based upon change in performance actions leads to better predictability of the cognitive states and reliance behavior. Therefore, only change in performance is considered for the model presented here.

5.2 Model Parameter Estimation

It is assumed that trust and self-confidence behavior for the general population can be represented by a common model. Therefore, the aggregated data of all participants is utilized in estimating the model parameters, resulting in 340 sequences of data. Previously, an extended version of the Baum-Welch algorithm was used to estimate the parameters of a discrete observation-space cognitive model [5]. However, the literature suggests that the genetic algorithm is less sensitive to parameter initialization and less susceptible to local optima than the Baum-Welch algorithm [59]. Therefore, the genetic algorithm in MATLAB's Optimization Toolbox [2] is used to optimize the model parameters to maximize the likelihood of the sequences given the model parameters. The forward algorithm is used to calculate the likelihood of the sequences [60]: the algorithm computes, recursively over time, the joint probability of a state \(s_k\) at time k and the series of observations \(o_{1:k}\) and actions \(a_{1:k}\) up to that time, i.e., \(P(s_k, o_{1:k}, a_{1:k})\). Summing \(P(s_N , o_{1:N} , a_{1:N})\) over all states at the end of the sequence at time N yields the likelihood of the action-observation sequence, \(P(o_{1:N}, a_{1:N})\). The model was trained several times using randomized initialization. The resulting probabilities within each final trained model were identical up to at least four significant figures, with a final log-likelihood of −3,446.4. Further model validation is included in Section 5.3.
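For reference, a minimal implementation of the forward recursion used to score one participant's sequence is sketched below. It reuses the joint transition and emission conventions from the earlier sketches and treats the actions as given inputs; it is a generic scaled forward algorithm, not the authors' MATLAB implementation.

```python
import numpy as np

def sequence_log_likelihood(pi, T, E, actions, observations):
    """Forward-algorithm log-likelihood of one action-observation sequence.

    pi: initial state distribution, shape (n_s,)
    T: joint transition probabilities, T[a, s, s']
    E: joint emission probabilities, E[s, o]
    actions, observations: equal-length integer index sequences
    """
    alpha = pi * E[:, observations[0]]   # incorporate the first observation
    log_like = np.log(alpha.sum())
    alpha /= alpha.sum()                 # rescale to avoid numerical underflow
    for a, o in zip(actions[1:], observations[1:]):
        alpha = (T[a].T @ alpha) * E[:, o]
        log_like += np.log(alpha.sum())
        alpha /= alpha.sum()
    return log_like

# The training objective is the sum of such log-likelihoods over all 340 sequences,
# which the genetic algorithm maximizes over the entries of pi, T, and E.
```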
Prior to training the model, the order of the action combinations and observation combinations is established. More specifically, the action combinations are ordered so that each of the transition probability matrices associated with these combinations can be distinguished prior to training. Similarly, the observation combinations are ordered for each of the emission probability matrices. In turn, this enables the state combination labels to be assigned a posteriori to the transition and emission probabilities, which ultimately enables interpretability and analysis of the trained probabilities. The assignment is based on the well-established trust-reliance relationship [24, 44] and context-specific knowledge, such as the expected likelihood of the human’s self-reported self-confidence matching the model’s prediction of self-confidence.
The state combinations of the resulting transition, emission, and initial probability matrices are sorted into the order \(T{\downarrow }SC{\downarrow }\), \(T{\downarrow }SC{\uparrow }\), \(T{\uparrow }SC{\downarrow }\), and \(T{\uparrow }SC{\uparrow }\) after training the model by using established behavioral trends. Identifying the state combination of each row is possible due to the asymmetrical nature of the emission probability functions. The self-reported self-confidence emission probabilities are used to determine the self-confidence state order. The reliance emission probabilities are used to sort the trust state order by applying the well-known correlation between trust and reliance [42, 45, 51, 52]. After identifying the corresponding state combination of each row in the emission probability matrix, all rows and columns associated with states in the initial, transition, and emission probability matrices are re-ordered to match the prescribed state combination order.

5.3 Validation

To test the predictive capability of the model and check for over-fitting, two validation methods are used. First, a 5 \(\times\) 2-fold cross-validation is applied in which the data is divided randomly into two equal sets, or folds. The model is trained with one fold and validated using the other. The entire process is then repeated for five iterations to increase the robustness of the validation log-likelihood values to variations in the training and testing datasets. The average log-likelihood across the 10 trained models from the 5 \(\times\) 2-fold cross-validation is \(-1,\!770.9\pm 18.5\). In other words, the average log-likelihood of the 5 \(\times\) 2-fold cross-validation varied by \(1.1\%\), suggesting that the model is not overfitting the data.
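A sketch of the 5 \(\times\) 2-fold procedure is shown below; `train_pomdp` and `score` are hypothetical callables standing in for the genetic-algorithm training and the forward-algorithm scoring described above.

```python
import numpy as np

def five_by_two_cv(sequences, train_pomdp, score, seed=0):
    """5 x 2-fold cross-validation over participant sequences.

    sequences: list of per-participant action/observation sequences.
    train_pomdp: callable that fits model parameters to a list of sequences.
    score: callable returning the log-likelihood of a sequence under a model.
    """
    rng = np.random.default_rng(seed)
    val_log_likes = []
    for _ in range(5):                                    # five random 2-fold splits
        idx = rng.permutation(len(sequences))
        folds = (idx[: len(idx) // 2], idx[len(idx) // 2:])
        for train_idx, test_idx in (folds, folds[::-1]):  # each fold trains once, tests once
            model = train_pomdp([sequences[i] for i in train_idx])
            val_log_likes.append(sum(score(model, sequences[i]) for i in test_idx))
    return np.mean(val_log_likes), np.std(val_log_likes)
```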
Next, receiver operating characteristic (ROC) curves are utilized to illustrate the performance of the model in predicting the cognitive states and reliance decision of each participant. The cognitive state ROC curves (Figure 6(b)) are generated by comparing the self-reported cognitive states to the predicted belief state, as calculated using Equation (5), for all 340 participants’ data. The belief state probability of high trust or self-confidence is first compared to a threshold probability, in which the predicted state is classified as high if the belief state probability is greater than the classification threshold probability. Then, the predicted state is compared to the self-reported state. As shown in Figure 6(a), this results in a true positive (TP), false positive (FP), true negative (TN), or false negative (FN), depending on if the predicted state is high or low and if the predicted state matches the self-report data. For classification thresholds of 0–100% in increments of 1%, this process is repeated for all data to find the true-positive rate (TPR) and false-positive rate (FPR) for each threshold probability. The TPRs and FPRs of each threshold are plotted, resulting in the ROC curve. The reliance ROC curve (Figure 6(d)) is generated using a similar method, but instead, the maximum belief state probability is used to determine the corresponding emission probability. The emission probability is compared to a classification threshold probability to predict the participant’s choice of reliance. TPRs and FPRs are found by comparing the predicted reliance to the participant’s actual chosen reliance, as shown in Figure 6(c). The model can predict both cognitive state levels and reliance choice better than a random guess as shown in Figures 6(b) and 6(d). This is further supported by the area under the curve (AUC), an aggregate performance measure across all thresholds. A higher AUC corresponds to a better model classification performance. The trained model achieves a trust AUC of 0.69, self-confidence AUC of 0.62, and reliance AUC of 0.72.
Fig. 6.
Fig. 6. Receiver Operating Characteristic (ROC) curves for cognitive state and reliance prediction. The given model classification performance is determined by the area under the curve (AUC), which is denoted in the legends of plots (b) and (d). As noted, the model achieves a trust AUC of 0.69, self-confidence AUC of 0.62, and reliance AUC of 0.72. The predicted reliance ROC curve using the “confidence vs. trust” hypothesis is also plotted in (d) and achieves an AUC of 0.58.
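The cognitive state ROC construction can be written compactly. The sketch below sweeps a classification threshold over the belief-state probability of the high state and compares against binarized self-reports; it uses only numpy, and the column/array names are assumptions.

```python
import numpy as np

def roc_curve_points(p_high, reported_high, thresholds=np.linspace(0, 1, 101)):
    """TPR/FPR pairs for predicting a high cognitive state from belief probabilities.

    p_high: belief-state probability of the high state for each data point.
    reported_high: boolean array of binarized self-reports (ground truth).
    """
    p_high, reported_high = np.asarray(p_high), np.asarray(reported_high, dtype=bool)
    tpr, fpr = [], []
    for thr in thresholds:
        predicted_high = p_high > thr
        tp = np.sum(predicted_high & reported_high)
        fp = np.sum(predicted_high & ~reported_high)
        tpr.append(tp / max(reported_high.sum(), 1))
        fpr.append(fp / max((~reported_high).sum(), 1))
    return np.array(fpr), np.array(tpr)

def auc(fpr, tpr):
    """Area under the ROC curve via the trapezoidal rule."""
    order = np.argsort(fpr)
    x, y = np.asarray(fpr)[order], np.asarray(tpr)[order]
    return float(np.sum(np.diff(x) * (y[1:] + y[:-1]) / 2.0))
```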

5.4 Comparison Against “Confidence vs. Trust” Hypothesis

As discussed in Section 2, existing models of the relationship between human trust in automation, human self-confidence, and reliance on automation are based upon the "confidence vs. trust" hypothesis. Therefore, we compare the proposed model against that hypothesis. Using the self-reported trust and self-confidence values, an ROC curve using the "confidence vs. trust" hypothesis to predict participants' reliance behavior is generated and plotted in Figure 6(d). The true-positive rate is plotted against the false-positive rate using thresholds ranging from the minimum to the maximum difference between participants' self-reported trust and self-confidence. The ROC curve for predicted reliance using the "confidence vs. trust" hypothesis results in an AUC of 0.58, compared to an AUC of 0.72 for the proposed model. From these results, we can conclude that the predictive capability of the proposed model, with respect to the user's reliance decision, is greater. From this metric alone, however, it is not possible to discern what aspect of the proposed model is responsible for this improvement in reliance prediction. Hence, differences between the models are discussed further in Section 6.1.2.

6 Results and Discussion

In Section 6.1, the identified emission and transition probabilities are presented and interpreted in the context of the specific HAI scenario under consideration. This is followed by a discussion of the implications of the model for improving HAI (Section 6.2) and a review of limitations (Section 6.3). Note that for ease of readability, the details of the model parameters are provided in Appendix A. Moreover, to verify that the model converged to a consistent solution, 10 iterations of the POMDP/R were trained and the standard error of each parameter was computed. The uncertainties of the initial, transition, and emission probabilities were small compared to the parameter values themselves, and for several of the parameters, the standard error was below the smallest value representable in MATLAB.

6.1 Results and Analysis

6.1.1 Initial State Probabilities.

The initial state probabilities are provided in Table 7 (see Appendix A.1). From these probabilities it can be inferred that participants tend to initially have high trust in the autonomous assistant (81.22%) and low self-confidence (60.70%). The initial high trust is consistent with existing literature that states that humans tend to have positivity bias toward automation, in which they trust automation prior to having any experience with it [24].

6.1.2 Emission Probabilities.

Next the identified emission probabilities, visually depicted in Figures 7(a) and 7(b), are analyzed. Figure 7(a) shows the probability of reliance given the trust and self-confidence states, and Figure 7(b) shows the probability of self-reported self-confidence given the self-confidence state. The first observation from Figure 7(a) is that when the participant’s self-confidence is high, the resulting probabilities behave similarly to the established trust and reliance relationship in which low and high trust lead to low and high reliance, respectively. For example, when participants are in a state of low trust and high self-confidence (\(T{\downarrow }SC{\uparrow }\)), they are highly likely (89.54%) to not rely on the automation. When they are in the \(T{\uparrow }SC{\uparrow }\) state, they are highly likely (89.17%) to rely on it. Interestingly, this relationship is not exhibited when self-confidence is low. Instead, when participants are in the \(T{\downarrow }SC{\downarrow }\) state, the likelihood that they will disable (48.62%) or enable (51.38%) the automation assistance is nearly equally distributed. The same is true when participants are in the \(T{\uparrow }SC{\downarrow }\) state. This suggests that self-confidence may be a more significant factor in reliance decisions when the user is in a state of low self-confidence rather than high self-confidence.
Fig. 7. The emission probability functions for reliance \(\mathcal{E}(o_{R}|s_{T},s_{SC})\) and self-reported self-confidence \(\mathcal{E}(o_{srSC}|s_{SC})\). The probabilities are shown next to the arrows.
It is also helpful to compare these probabilities directly to the reliance behavior predicted by models that build upon the “confidence vs. trust” hypothesis. The computational models discussed in Section 2 predict reliance based on the difference between the trust and self-confidence states. For example, under that hypothesis, the \(T{\uparrow }SC{\downarrow }\) state would result in the participant relying on the automation and the \(T{\downarrow }SC{\downarrow }\) state would result in them not relying on it. However, the emission probabilities shown in Figure 7(a) contradict this; instead, the likelihood of relying or not relying on the automation when self-confidence is low is nearly 50%. It is worth noting that the proposed model is probabilistic, whereas existing models are deterministic. Given the stochastic nature of human behavior, the proposed model may better predict reliance behavior because it inherently allows for stochasticity in the prediction. In particular, it appears that when the human is in a state of low self-confidence, their behavior may be more stochastic than when they are in a state of high self-confidence. Recall the validation results shown earlier in Section 5.4 (see Figure 6(d)), in which the proposed model was a better predictor of reliance than a model based upon the “confidence vs. trust” hypothesis.
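The contrast between the two prediction styles can be made concrete with a small sketch: the deterministic rule from the “confidence vs. trust” hypothesis is compared against reliance sampled from the emission probabilities identified here for the two low self-confidence states (values copied from Figure 7(a)/Table 11); the simulation itself is illustrative only.

```python
# Deterministic "confidence vs. trust" prediction vs. the probabilistic reliance
# emission identified here, for the two low self-confidence states.
import numpy as np

p_rely = {"T_low_SC_low": 0.5138, "T_high_SC_low": 0.5017}   # from Table 11
hypothesis = {"T_low_SC_low": False, "T_high_SC_low": True}  # rely iff trust exceeds SC

rng = np.random.default_rng(1)
for state, p in p_rely.items():
    simulated = rng.random(10_000) < p                       # stochastic reliance draws
    print(f"{state}: hypothesis predicts rely={hypothesis[state]}, "
          f"model emits reliance with p={p:.2f} "
          f"(simulated frequency {simulated.mean():.2f})")
```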

6.1.3 Transition Probabilities.

Given that the POMDP/R has three discrete-valued actions whose values yield 18 distinct action combinations, there are 18 different transition probability functions describing the state transitions. The transition probability functions are factored to separate the probabilities of trust state transitions from the probabilities of self-confidence state transitions. A complete review of all transition probabilities can be found in Appendix A.2. For clarity of exposition, a subset of these probabilities is analyzed here. Specifically, the actions associated with participants’ performance (changes in the number of collisions and game time) are grouped into cases of performance improvement or deterioration, and the effect of the third action, the autonomous assistance, is analyzed within these groupings.
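For concreteness, the 18 action combinations can be enumerated as the cross product of the three discrete action variables; the sketch below assumes three values for the change in collisions, two for the change in game time, and three assistance levels, consistent with the combinations listed in Tables 8–10, although the exact encoding used in the trained model is not prescribed here.

```python
# Sketch of an 18-way action index: the joint action is the cross product of the
# collision change, game-time change, and assistance level (encoding assumed).
from itertools import product

collision_change = ["C-", "C0", "C+"]            # decrease, no change, increase
time_change = ["G-", "G+"]                       # decrease, increase
assistance = ["Theta_L", "Theta_M", "Theta_H"]   # low, medium, high assistance input

actions = list(product(collision_change, time_change, assistance))
action_index = {a: i for i, a in enumerate(actions)}
print(len(actions))                              # 18 joint actions
print(action_index[("C-", "G-", "Theta_L")])     # index used to slice the transition arrays
```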
Overall Performance Improvement. The overall performance improvement case is that in which the number of collisions decreases (\(C^-\)) and game time decreases (\(G^-\)). When \(a_A \in \Theta _L\), as shown in Figures 8(a) and 8(d), self-confidence is likely to remain the same at the next trial (>80%) for all state combinations. Moreover, when the participant is in the \(T{\downarrow }SC{\downarrow }\) state, they are very likely to transition to a state of high trust (99.81%), suggesting that they attribute the performance improvement to the automation rather than to themselves. For ease of reference, these probabilities are also listed in Table 6.
Fig. 8. The transition probability functions for trust \(\mathcal{T}_T(s^{\prime}_T|s_T,s_{SC},a)\) and self-confidence \(\mathcal{T}_{SC}(s^{\prime}_{SC}|s_T,s_{SC},a)\) for the overall improvement case, in which the number of collisions decreases (\(C^-\)) and game time decreases (\(G^-\)). The transition probabilities are shown next to the appropriate arrows. Panels (a)–(c) show the trust transition probabilities for \(a_A \in \Theta_L\), \(\Theta_M\), and \(\Theta_H\), respectively; panels (d)–(f) show the corresponding self-confidence transition probabilities.
Table 6. Transition Probabilities for \(a_A \in \Theta_L\), Decreasing Collisions, and Decreasing Time

From state  |  Next trust: \(T{\downarrow}\)  \(T{\uparrow}\)  |  Next self-confidence: \(SC{\downarrow}\)  \(SC{\uparrow}\)
\(T{\downarrow}SC{\downarrow}\)  |  0.0019  0.9981  |  0.9992  0.0008
\(T{\downarrow}SC{\uparrow}\)  |  0.9990  0.0010  |  0.0037  0.9963
\(T{\uparrow}SC{\downarrow}\)  |  0.7308  0.2692  |  0.8219  0.1781
\(T{\uparrow}SC{\uparrow}\)  |  0.0403  0.9597  |  0.0298  0.9702
This is not the case, however, for most participants in the \(T{\uparrow }SC{\downarrow }\) state. Participants’ cognitive state responses when they are in the \(T{\uparrow }SC{\downarrow }\) state are similar for all \(a_A\), as shown in Figures 8(a) to 8(f). They are likely to transition to a state of low trust (73.08%, 77.59%, 99.35%) while remaining in a state of low self-confidence (82.19%, 66.08%, 99.92%), suggesting that the decrease in trust may be a result of the user attributing the performance improvement more toward themselves than toward the automation. Upon closer analysis, when \(a_A \in \Theta _L\vee \Theta _M\), participants had a 26.92% and 22.41% chance, respectively, of remaining in a state of high trust, and a 17.81% and 33.92% chance, respectively, of transitioning to a state of high self-confidence. The different values of \(a_A\) may result in different attributions of performance between the user and the automation, which in turn affect the participants’ cognitive state responses. When \(a_A \in \Theta _H\), as shown in Figures 8(c) and 8(f), and the participant is in the \(T{\downarrow }SC{\downarrow }\) state, the probability of them transitioning to a state of high trust (55.29%) or remaining in a state of low trust (44.71%) is approximately equally distributed. On the other hand, they are more likely to remain in a state of low self-confidence (74.65%) than to transition to a state of high self-confidence. These participants may attribute the cause of the performance improvement slightly more to the automation than to themselves.
Interestingly, for all levels of automation assistance, when participants are in a state of high self-confidence and experience an overall improvement in performance, they are very likely to remain in a state of high self-confidence as well as maintain the same level of trust in the autonomous assistant at the next trial. In other words, a participant’s self-confidence affects their interpretation of their performance metrics, which in turn affects their trust in the automation.
Partial Performance Improvement. For performance improvement, another case of interest is that in which the number of collisions does not change but the participants’ game time decreases. This represents a case of partial improvement. When \(a_A \in \Theta _L\), as shown in Table 8 (see Appendix A.2), and the participant is in the \(T{\downarrow }SC{\downarrow }\) state, their likelihood of transitioning to a state of high trust (45.72%) or low trust (54.28%) is nearly equally distributed. However, they are likely to remain in a state of low self-confidence (79.49%). This is similar to when participants are in the \(T{\uparrow }SC{\downarrow }\) state and \(a_A \in \Theta _M\), as shown in Table 9. When \(a_A \in \Theta _H\), as shown in Table 10, and the participant is in the \(T{\downarrow }SC{\downarrow }\) state, they are highly likely (99.86%) to remain in a state of low self-confidence. However, their likelihood of transitioning to a state of high trust is only 29.52%. When \(a_A \in \Theta _L \vee \Theta _H\) and participants are in the \(T{\downarrow }SC{\downarrow }\) state, the increase in trust suggests that they attribute the slight improvement in performance to the automation rather than to themselves. However, when \(a_A \in \Theta _M\), the fact that participants in a state of high trust are equally likely to remain in their current state or transition to a state of low trust, while their low self-confidence is likely to be maintained (84.12%), suggests that they are unsure whether to attribute the improvement in performance to themselves or to the automation.
In comparing these results to the overall improvement case, participants in a state of low self-confidence are still unlikely to gain confidence and transition to \(SC{\uparrow }\), but they are now not as likely to attribute any improvement to the automation. This underscores the consequences, from the perspective of HAI, of a human being in a state of low self-confidence. In other words, participants in a state of low self-confidence may have more difficulty in calibrating their trust in the automation than those with high self-confidence. An analysis of absolute collision and time performance data (see Figure 9(a)) shows that as the game progressed, on average, participants’ performance improved and participants’ self-confidence increased (see Figure 9(b)). In turn, these observations suggest that in addition to trust calibration, correct calibration of self-confidence is important for improved HAI, as discussed further in Section 6.2.
Overall Performance Deterioration. Next, cases in which participants’ performance deteriorates between game trials are analyzed. For all \(a_A\), when performance deteriorates and participants are in the \(T{\downarrow }SC{\downarrow }\) state, their trust is highly likely to increase (99.78%, 99.87%, 98.40%) at the next trial. However, they are likely to remain in a state of low self-confidence (99.92%, 99.84%, 99.98%). This suggests that these participants attribute the performance deterioration to themselves rather than to the automation. On the other hand, the autonomous assistance input has a greater effect on participants in states of high trust (either \(T{\uparrow }SC{\downarrow }\) or \(T{\uparrow }SC{\uparrow }\)). When \(a_A \in \Theta _M \vee \Theta _H\) (Figures 10(b) and 10(c)), participants in a state of high trust are very likely (\(\gt\)90%) to transition to a state of low trust, regardless of their state of self-confidence. This suggests that they strongly attribute the decrease in performance to the autonomous assistant. This is not true when \(a_A \in \Theta _L\), in which case participants in the \(T{\uparrow }SC{\downarrow }\) state are likely to remain in a state of high trust at the next trial. These results highlight that while self-confidence affects participants’ attribution of changes in performance, so does the user’s experience with the autonomous assistant.
Fig. 9. Performance and self-reported trust and self-confidence over time.
Fig. 10. The transition probability functions for trust \(\mathcal{T}_T(s^{\prime}_T|s_T,s_{SC},a)\) and self-confidence \(\mathcal{T}_{SC}(s^{\prime}_{SC}|s_T,s_{SC},a)\) for the overall deterioration case, in which the number of collisions increases (\(C^+\)) and game time increases (\(G^+\)). The transition probabilities are shown next to the appropriate arrows. Panels (a)–(c) show the trust transition probabilities for \(a_A \in \Theta_L\), \(\Theta_M\), and \(\Theta_H\), respectively; panels (d)–(f) show the corresponding self-confidence transition probabilities.
Partial Performance Deterioration. Next, the case in which the number of collisions does not change but the participants’ game time increases is considered. For \(a_A \in \Theta _L \vee \Theta _M \vee \Theta _H\), shown in Tables 8–10, respectively, when participants are in the \(T{\downarrow }SC{\downarrow }\) state, their trust is likely to increase (99.98%, 99.70%, 99.90%) at the next trial and they are likely to remain in a state of low self-confidence (95.12%, 99.76%, 100%). These results are consistent with those observed for the overall performance deterioration case. When \(a_A \in \Theta _H\), however, and participants are in the \(T{\uparrow }SC{\downarrow }\) state, their likelihood of transitioning to a state of low trust (57.68%) or high trust (42.32%) is more evenly distributed than in the overall performance deterioration case. Therefore, the extent of the change in performance also affects participants’ trust and self-confidence dynamics.

6.2 Implications on the Design of Human-aware Autonomous Systems

As discussed in the previous section, depending on their performance and the input from the autonomous assistant, participants may attribute their successes and failures to either the automation or themselves. These observations are a demonstration of attribution theory, which is concerned with how humans explain the causes of behaviors and events [71, 74]. Understanding these attributions is important because reliance is affected not only by participants’ beliefs about the automation’s performance or reliability but also by the cognitive factors affecting this performance [44], in this case, participants’ trust in the automation and their own self-confidence. Importantly, for the purpose of improving performance and safety outcomes in different HAI contexts, the proposed probabilistic model can be used to design cognitive state-based feedback policies that help humans correctly attribute changes in performance to themselves or the automation and, in turn, better calibrate their trust in the automation and their self-confidence. Calibration of human trust in HAI is critical to preventing the pitfalls associated with humans under-trusting or over-trusting autonomous systems [41, 44, 54, 73]. However, to date, less emphasis has been placed on calibration of self-confidence in HAI, despite the fact that a human who is incorrectly over-confident in their skills may under-trust the automation they are interacting with, and vice versa. The model analysis presented here shows that both states must be calibrated correctly to improve HAI. With knowledge of how the human’s cognitive dynamics evolve, autonomous systems can be designed to facilitate this calibration, for example, through the use of automation transparency [6, 77]. Finally, through the comparison of the AUCs from the reliance ROC curves, it was observed that the trained model outperforms the “confidence vs. trust” hypothesis. This supports the need to understand the nuanced relationship between trust and self-confidence when predicting human reliance on automation.

6.3 Limitations

It is worthwhile to acknowledge some of the limitations of the proposed model for capturing human trust and self-confidence dynamics. It is assumed that the cognitive state dynamics evolve based on the change in the participant’s performance rather than their absolute performance. In other words, in training the model, the behavior of a skilled participant who experienced slight improvement was not distinguished from that of a poor-performing participant who likewise had a slight performance improvement. In future work, this limitation can be mitigated by considering absolute performance in addition to the change in performance. Furthermore, as is the case with any model trained using human data, the conclusions drawn in this article are specific to the HAI scenario under consideration. However, given the generalized definition of the POMDP/R states, observations, and actions, future work should investigate how well the transition and emission probability functions translate to other HAI scenarios and the extent to which new human data is needed for doing so.
Finally, while a POMDP modeling framework was chosen here for several benefits it offers in capturing the probabilistic nature of human cognitive dynamics, a limitation of POMDPs is their scalability. Modest increases in the number of actions, states, or observations can lead to parameter explosion, thereby increasing the amount of data needed for parameter estimation. Therefore, the proposed framework may not scale well to more complex HAI scenarios in which additional actions may need to be defined, for example, to capture the nature of the automation’s input. Similarly, further discretizing the trust or self-confidence states beyond two discrete values will also lead to increased model complexity. Therefore, characterizing classes of HAI scenarios in which this model structure works well, or model adaptations for scenarios in which it does not, is another direction of future work. Future work may extend the given model to use a continuous state space to more accurately characterize trust and self-confidence dynamics. This would allow for incremental changes in these cognitive states to be accounted for [9]. Depending on the context, actions and observations may also be extended to the continuous space.
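To make the scalability point concrete, the following back-of-the-envelope count shows how quickly the number of free parameters grows with the state and action discretization. It assumes the factored structure of Equations (11) and (12) and counts one free parameter per independent outcome of each conditional distribution; the counting convention is an assumption here, not taken from the article.

```python
# Back-of-the-envelope free-parameter count for the factored POMDP/R
# (counting convention assumed: each conditional distribution over k values
# contributes k - 1 free parameters).
def n_free_params(n_trust, n_sc, n_actions, n_reliance=2, n_sr_sc=2):
    joint_states = n_trust * n_sc
    transition = joint_states * n_actions * ((n_trust - 1) + (n_sc - 1))
    emission = joint_states * (n_reliance - 1) + n_sc * (n_sr_sc - 1)
    initial = (n_trust - 1) + (n_sc - 1)
    return transition + emission + initial

print(n_free_params(2, 2, 18))   # current structure: 152 free parameters
print(n_free_params(3, 3, 18))   # three levels per cognitive state: 664
print(n_free_params(3, 3, 27))   # ... and a richer action set: 988
```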

7 Conclusion

The contribution of this article is a probabilistic model of coupled human trust and self-confidence dynamics as they evolve during a human’s interaction with automation. The dynamics are modeled as a partially observable Markov decision process without a reward function that leverages behavioral and self-report data as observations for estimation of the cognitive states. Trust and self-confidence are modeled as separate discrete states with coupled transition probability functions. By doing so, the model is able to capture the nuanced effects of various combinations of the states on the participant’s reliance on autonomous assistance. A study was designed and implemented to collect human behavioral and self-report data during their repeated interactions with an autonomous assistant in an obstacle avoidance game scenario. Using data collected from 340 human participants, the cognitive model was trained and validated. Analysis of the state transition probabilities suggests that participants’ attribution of changes in performance to either themselves or the autonomous assistant varies depending on their states of trust and self-confidence. This underscores the importance of the proposed model for the design of human-aware automation, particularly in the context of human trust and self-confidence calibration in HAI.
The takeaways of this work are as follows. First, attribution processes play a critical role when humans interact with automation. Second, calibration of both trust and self-confidence is important to avoid misattribution of skills in HAI, particularly in learning contexts. Lastly, by accounting for the coupling between trust and self-confidence, the proposed model outperforms the “confidence vs. trust” hypothesis with respect to the prediction of human reliance on automation. This validates the need to understand the relationship between trust and self-confidence when humans decide whether to rely on automation. Future work includes validation of the model for other HAI scenarios, investigation of individual differences that may lead to distinct trust or self-confidence dynamics, and model-based control algorithm design aimed at, for example, optimally allocating control authority between the human and the automation based on calibration of the human’s trust and self-confidence.

Acknowledgments

We thank Sooyung Byeon (Purdue University) for initial development of the game platform that was adapted for the human subject experiment.

A Trained Model Results

This appendix presents the trained parameters of the POMDP/R model of coupled human trust and self-confidence dynamics discussed in Section 5.

A.1 Initial State Probabilities

The initial state probabilities for trust \({\pi }_{T}: 1\times {T} \rightarrow [0,1]\) and self-confidence \({\pi }_{SC}: 1\times {SC} \rightarrow [0,1]\) are each represented by a \(1\times 2\) row vector giving the probability of the initial trust state \(s_T\) and self-confidence state \(s_{SC}\), respectively. The initial state probabilities are provided in Table 7.
Table 7. Initial Trust State \(s_T\) and Self-confidence State \(s_{SC}\) Probabilities

Trust: \(T{\downarrow}\)  \(T{\uparrow}\)  |  Self-confidence: \(SC{\downarrow}\)  \(SC{\uparrow}\)
0.1878  0.8122  |  0.6070  0.3930
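For reference, the initial distributions in Table 7 can be represented as two categorical distributions; the sketch below additionally assumes that the initial joint belief is their outer product (i.e., initial independence of trust and self-confidence), which parallels the factorizations in Equations (11) and (12) but is an assumption on our part.

```python
# The initial distributions from Table 7 as categorical distributions; forming
# the joint initial belief as an outer product assumes initial independence.
import numpy as np

pi_T = np.array([0.1878, 0.8122])     # P(T low), P(T high)
pi_SC = np.array([0.6070, 0.3930])    # P(SC low), P(SC high)

b0 = np.outer(pi_T, pi_SC)            # rows: trust, columns: self-confidence
print(b0)                             # joint initial belief over the four state combinations
print(b0.sum())                       # 1.0
```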

A.2 Transition Probabilities

The transition probabilities for trust \({\mathcal {T}}_T: {\mathcal {S}} \times {T} \times {\mathcal {A}} \rightarrow [0,1]\) and self-confidence \({\mathcal {T}}_{SC}: {\mathcal {S}} \times {SC} \times {\mathcal {A}} \rightarrow [0,1]\) are each represented by a \(4\times 2\times 18\) array that gives the probability of transitioning from a combination of states in \(\mathcal {S}\), comprising trust \(s_T\in {T}\) and self-confidence \(s_{SC}\in {SC}\), to the next state of trust and self-confidence, respectively, given an action \(a\in {\mathcal {A}}\). The state combination transition probabilities are the product of the individual transition probabilities of trust and self-confidence, as given by
\begin{align} \mathcal{T}(s^{\prime}|s,a) = \mathcal{T}_T(s^{\prime}_T|s_T,s_{SC},a)\,\mathcal{T}_{SC}(s^{\prime}_{SC}|s_T,s_{SC},a). \tag{11} \end{align}
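The following sketch illustrates how Equation (11) composes the factored transition arrays, using array shapes that match this appendix (4 joint states, 2 next values, 18 actions); the random arrays stand in for the learned probabilities in Tables 8–10.

```python
# Sketch of Equation (11): the joint next-state distribution is the product of
# the factored trust and self-confidence transitions. Random arrays stand in
# for the learned probabilities in Tables 8-10.
import numpy as np

rng = np.random.default_rng(0)
T_T = rng.random((4, 2, 18));  T_T /= T_T.sum(axis=1, keepdims=True)    # trust transitions
T_SC = rng.random((4, 2, 18)); T_SC /= T_SC.sum(axis=1, keepdims=True)  # self-confidence transitions

def joint_transition(s, a):
    """P(s' | s, a) over next joint states, ordered (T-low,SC-low), (T-low,SC-high),
    (T-high,SC-low), (T-high,SC-high)."""
    return np.outer(T_T[s, :, a], T_SC[s, :, a]).reshape(-1)

p_next = joint_transition(s=0, a=5)
print(p_next, p_next.sum())   # the four probabilities sum to 1
```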
The transition probabilities are provided in Tables 8–10. The transition probability tables are separated by the action \(a_A\), and each table is further divided such that the transition probabilities can be identified based upon the change in performance metrics.
Table 8. Transition Probabilities for \(a_A \in \Theta_L\) and Performance Metric Combinations
Columns in each block: Next trust \(T{\downarrow}\), \(T{\uparrow}\)  |  Next self-confidence \(SC{\downarrow}\), \(SC{\uparrow}\)

Collision Decrease, Time Decrease
\(T{\downarrow}SC{\downarrow}\)  |  0.0019  0.9981  |  0.9992  0.0008
\(T{\downarrow}SC{\uparrow}\)  |  0.9990  0.0010  |  0.0037  0.9963
\(T{\uparrow}SC{\downarrow}\)  |  0.7308  0.2692  |  0.8219  0.1781
\(T{\uparrow}SC{\uparrow}\)  |  0.0403  0.9597  |  0.0298  0.9702

Collision Decrease, Time Increase
\(T{\downarrow}SC{\downarrow}\)  |  0.9959  0.0041  |  0.8142  0.1858
\(T{\downarrow}SC{\uparrow}\)  |  0.8518  0.1482  |  0.0003  0.9997
\(T{\uparrow}SC{\downarrow}\)  |  0.0011  0.9989  |  0.9696  0.0304
\(T{\uparrow}SC{\uparrow}\)  |  0.0001  0.9999  |  0.0158  0.9842

Collision No Change, Time Decrease
\(T{\downarrow}SC{\downarrow}\)  |  0.4572  0.5428  |  0.7949  0.2051
\(T{\downarrow}SC{\uparrow}\)  |  0.9738  0.0262  |  0.0030  0.9970
\(T{\uparrow}SC{\downarrow}\)  |  0.9997  0.0003  |  0.9612  0.0388
\(T{\uparrow}SC{\uparrow}\)  |  0.0013  0.9987  |  0.0074  0.9926

Collision No Change, Time Increase
\(T{\downarrow}SC{\downarrow}\)  |  0.0002  0.9998  |  0.9512  0.0488
\(T{\downarrow}SC{\uparrow}\)  |  0.9534  0.0466  |  0.0635  0.9365
\(T{\uparrow}SC{\downarrow}\)  |  0.0296  0.9704  |  0.9999  0.0001
\(T{\uparrow}SC{\uparrow}\)  |  0.0074  0.9926  |  0.0266  0.9734

Collision Increase, Time Decrease
\(T{\downarrow}SC{\downarrow}\)  |  0.9990  0.0010  |  0.9552  0.0448
\(T{\downarrow}SC{\uparrow}\)  |  0.9982  0.0018  |  0.1574  0.8426
\(T{\uparrow}SC{\downarrow}\)  |  0.0844  0.9156  |  0.9960  0.0040
\(T{\uparrow}SC{\uparrow}\)  |  0.4409  0.5591  |  0.1010  0.8990

Collision Increase, Time Increase
\(T{\downarrow}SC{\downarrow}\)  |  0.0022  0.9978  |  0.9992  0.0008
\(T{\downarrow}SC{\uparrow}\)  |  0.9965  0.0035  |  0.0054  0.9946
\(T{\uparrow}SC{\downarrow}\)  |  0.0004  0.9996  |  0.9988  0.0012
\(T{\uparrow}SC{\uparrow}\)  |  0.6822  0.3178  |  0.0133  0.9867
Table 9. Transition Probabilities for \(a_A \in \Theta_M\) and Performance Metric Combinations
Columns in each block: Next trust \(T{\downarrow}\), \(T{\uparrow}\)  |  Next self-confidence \(SC{\downarrow}\), \(SC{\uparrow}\)

Collision Decrease, Time Decrease
\(T{\downarrow}SC{\downarrow}\)  |  0.9838  0.0162  |  0.9963  0.0037
\(T{\downarrow}SC{\uparrow}\)  |  0.9973  0.0027  |  0.0021  0.9979
\(T{\uparrow}SC{\downarrow}\)  |  0.7759  0.2241  |  0.6608  0.3392
\(T{\uparrow}SC{\uparrow}\)  |  0.0621  0.9379  |  0.0034  0.9966

Collision Decrease, Time Increase
\(T{\downarrow}SC{\downarrow}\)  |  0.9940  0.0060  |  0.9919  0.0081
\(T{\downarrow}SC{\uparrow}\)  |  0.9232  0.0768  |  0.0019  0.9981
\(T{\uparrow}SC{\downarrow}\)  |  0.1517  0.8483  |  0.7768  0.2232
\(T{\uparrow}SC{\uparrow}\)  |  0.0753  0.9247  |  0.0293  0.9707

Collision No Change, Time Decrease
\(T{\downarrow}SC{\downarrow}\)  |  0.9788  0.0212  |  0.9720  0.0280
\(T{\downarrow}SC{\uparrow}\)  |  0.9922  0.0078  |  0.0015  0.9985
\(T{\uparrow}SC{\downarrow}\)  |  0.5040  0.4960  |  0.8412  0.1588
\(T{\uparrow}SC{\uparrow}\)  |  0.0323  0.9677  |  0.0230  0.9770

Collision No Change, Time Increase
\(T{\downarrow}SC{\downarrow}\)  |  0.0030  0.9970  |  0.9976  0.0024
\(T{\downarrow}SC{\uparrow}\)  |  0.9983  0.0017  |  0.0033  0.9967
\(T{\uparrow}SC{\downarrow}\)  |  0.0018  0.9982  |  0.9599  0.0401
\(T{\uparrow}SC{\uparrow}\)  |  0.0000  1.0000  |  0.0462  0.9538

Collision Increase, Time Decrease
\(T{\downarrow}SC{\downarrow}\)  |  0.9989  0.0011  |  0.9998  0.0002
\(T{\downarrow}SC{\uparrow}\)  |  0.9740  0.0260  |  0.1244  0.8756
\(T{\uparrow}SC{\downarrow}\)  |  0.7311  0.2689  |  0.9735  0.0265
\(T{\uparrow}SC{\uparrow}\)  |  0.0531  0.9469  |  0.1092  0.8908

Collision Increase, Time Increase
\(T{\downarrow}SC{\downarrow}\)  |  0.0013  0.9987  |  0.9984  0.0016
\(T{\downarrow}SC{\uparrow}\)  |  0.9933  0.0067  |  0.1167  0.8833
\(T{\uparrow}SC{\downarrow}\)  |  0.9959  0.0041  |  0.9084  0.0916
\(T{\uparrow}SC{\uparrow}\)  |  0.0601  0.9399  |  0.1858  0.8142
Table 10. Transition Probabilities for \(a_A \in \Theta_H\) and Performance Metric Combinations
Columns in each block: Next trust \(T{\downarrow}\), \(T{\uparrow}\)  |  Next self-confidence \(SC{\downarrow}\), \(SC{\uparrow}\)

Collision Decrease, Time Decrease
\(T{\downarrow}SC{\downarrow}\)  |  0.4471  0.5529  |  0.7465  0.2535
\(T{\downarrow}SC{\uparrow}\)  |  1.0000  0.0000  |  0.0014  0.9986
\(T{\uparrow}SC{\downarrow}\)  |  0.9935  0.0065  |  0.9992  0.0008
\(T{\uparrow}SC{\uparrow}\)  |  0.0027  0.9973  |  0.0487  0.9513

Collision Decrease, Time Increase
\(T{\downarrow}SC{\downarrow}\)  |  0.4109  0.5891  |  0.6672  0.3328
\(T{\downarrow}SC{\uparrow}\)  |  0.9199  0.0801  |  0.0011  0.9989
\(T{\uparrow}SC{\downarrow}\)  |  0.0005  0.9995  |  1.0000  0.0000
\(T{\uparrow}SC{\uparrow}\)  |  0.0382  0.9618  |  0.0048  0.9952

Collision No Change, Time Decrease
\(T{\downarrow}SC{\downarrow}\)  |  0.7048  0.2952  |  0.9986  0.0014
\(T{\downarrow}SC{\uparrow}\)  |  0.9958  0.0042  |  0.0013  0.9987
\(T{\uparrow}SC{\downarrow}\)  |  0.0071  0.9929  |  0.8432  0.1568
\(T{\uparrow}SC{\uparrow}\)  |  0.0003  0.9997  |  0.0012  0.9988

Collision No Change, Time Increase
\(T{\downarrow}SC{\downarrow}\)  |  0.0010  0.9990  |  1.0000  0.0000
\(T{\downarrow}SC{\uparrow}\)  |  0.9453  0.0547  |  0.0061  0.9939
\(T{\uparrow}SC{\downarrow}\)  |  0.5768  0.4232  |  0.8019  0.1981
\(T{\uparrow}SC{\uparrow}\)  |  0.0127  0.9873  |  0.0021  0.9979

Collision Increase, Time Decrease
\(T{\downarrow}SC{\downarrow}\)  |  0.0020  0.9980  |  0.9677  0.0323
\(T{\downarrow}SC{\uparrow}\)  |  0.9923  0.0077  |  0.1525  0.8475
\(T{\uparrow}SC{\downarrow}\)  |  0.8208  0.1792  |  0.9524  0.0476
\(T{\uparrow}SC{\uparrow}\)  |  0.0683  0.9317  |  0.0828  0.9172

Collision Increase, Time Increase
\(T{\downarrow}SC{\downarrow}\)  |  0.0160  0.9840  |  0.9998  0.0002
\(T{\downarrow}SC{\uparrow}\)  |  0.9853  0.0147  |  0.0015  0.9985
\(T{\uparrow}SC{\downarrow}\)  |  0.9973  0.0027  |  0.9979  0.0021
\(T{\uparrow}SC{\uparrow}\)  |  0.0350  0.9650  |  0.0456  0.9544

A.3 Emission Probabilities

The emission probability function for reliance \({\mathcal {E}}_R: {\mathcal {S}} \times {R} \rightarrow [0,1]\) is represented by a \(4\times 2\) matrix that gives the probability of reliance on automation \(o_R\in R\) given the current trust and self-confidence states. The emission probability function for self-reported self-confidence \({\mathcal {E}}_{srSC}: {SC} \times {srSC} \rightarrow [0,1]\) is represented by a \(2\times 2\) matrix that gives the probability of low or high self-reported self-confidence \(o_{srSC} \in {srSC}\) given the current self-confidence state. The overall emission probabilities are the product of the reliance and self-reported self-confidence emission probabilities, given by
\begin{align} \mathcal{E}(o|s) = \mathcal{E}_R(o_R|s_T,s_{SC})\,\mathcal{E}_{srSC}(o_{srSC}|s_{SC}). \tag{12} \end{align}
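As an illustration of how the factored emission in Equation (12) would be used, the sketch below performs a standard POMDP belief update with the reliance and self-report emissions copied from Table 11; the joint transition matrix is a uniform stand-in rather than a value from Tables 8–10.

```python
# Belief update b'(s') ∝ E(o|s') Σ_s T(s'|s,a) b(s), using the factored emission
# of Equation (12) with values copied from Table 11. The joint transition matrix
# is a uniform stand-in rather than a learned value.
import numpy as np

# Joint state order: (T-low,SC-low), (T-low,SC-high), (T-high,SC-low), (T-high,SC-high)
E_rely = np.array([[0.4862, 0.5138],    # columns: no reliance, reliance
                   [0.8954, 0.1046],
                   [0.4983, 0.5017],
                   [0.1083, 0.8917]])
E_srSC = np.array([[0.9448, 0.0552],    # rows: SC-low, SC-high; columns: srSC-low, srSC-high
                   [0.0898, 0.9102]])
sc_of_state = np.array([0, 1, 0, 1])    # self-confidence component of each joint state

def belief_update(b, T_joint, o_rely, o_srSC):
    predicted = T_joint.T @ b                                   # sum_s T(s'|s,a) b(s)
    likelihood = E_rely[:, o_rely] * E_srSC[sc_of_state, o_srSC]
    posterior = likelihood * predicted
    return posterior / posterior.sum()

b0 = np.full(4, 0.25)                    # uniform prior belief
T_joint = np.full((4, 4), 0.25)          # stand-in for one action's T(s'|s,a)
print(belief_update(b0, T_joint, o_rely=1, o_srSC=0))
```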
The emission probabilities are provided in Table 11.
Table 11. Emission Probabilities of the Reliance Observation \(o_{R}\) and Self-reported Self-confidence Observation \(o_{srSC}\)

Reliance
State  |  NR  R
\(T{\downarrow}SC{\downarrow}\)  |  0.4862  0.5138
\(T{\downarrow}SC{\uparrow}\)  |  0.8954  0.1046
\(T{\uparrow}SC{\downarrow}\)  |  0.4983  0.5017
\(T{\uparrow}SC{\uparrow}\)  |  0.1083  0.8917

Self-reported self-confidence
State  |  \(srSC{\downarrow}\)  \(srSC{\uparrow}\)
\(SC{\downarrow}\)  |  0.9448  0.0552
\(SC{\uparrow}\)  |  0.0898  0.9102

NR and R denote no reliance and reliance, respectively, while high and low self-reported self-confidence are denoted by \(srSC{\uparrow}\) and \(srSC{\downarrow}\), respectively.

References

[1]
2018. Amazon Mechanical Turk. https://www.mturk.com/.
[3]
Kumar Akash. 2020. Reimagining Human-machine Interactions through Trust-based Feedback. Thesis. Purdue University Graduate School.
[4]
Kumar Akash, Wan-Lin Hu, Neera Jain, and Tahira Reid. 2018. A classification model for sensing human trust in machines using EEG and GSR. ACM Transactions on Interactive Intelligent Systems 8, 4 (Nov. 2018), 1–20. DOI: arXiv:1803.09861 [cs].
[5]
Kumar Akash, Griffon McMahon, Tahira Reid, and Neera Jain. 2020. Human trust-based feedback control: Dynamically varying automation transparency to optimize human-machine interactions. IEEE Control Systems Magazine 40, 6 (Dec. 2020), 98–116. DOI:
[6]
Kumar Akash, Tahira Reid, and Neera Jain. 2019. Improving human-machine collaboration through transparency-based feedback – Part II: Control design and synthesis. IFAC-PapersOnLine 51, 34 (Jan. 2019), 322–328. DOI:
[7]
Kumar Akash, Wan-Lin Hu, Tahira Reid, and Neera Jain. 2017. Dynamic modeling of trust in human-machine interactions. In 2017 American Control Conference (ACC’17). IEEE, Seattle, WA, 1542–1548. DOI:
[8]
Susan M. Astley. 2005. Evaluation of computer-aided detection (CAD) prompting techniques for mammography. British Journal of Radiology 78, Suppl_1 (Jan. 2005), S20–S25. DOI:
[9]
Hebert Azevedo-Sa, Suresh Kumaar Jayaraman, Connor T. Esterwood, X. Jessie Yang, Lionel P. Robert, and Dawn M. Tilbury. 2020. Real-time estimation of drivers’ trust in automated driving systems. International Journal of Social Robotics 13 (Sept. 2020), 1911-1927. DOI:
[10]
Victoria A. Banks, Neville A. Stanton, and Catherine Harvey. 2014. Sub-systems on the road to vehicle automation: Hands and feet free but not ‘mind’ free driving. Safety Science 62 (Feb. 2014), 505–514. DOI:
[11]
Woodrow Barfield and Thomas A. Dingus. 2014. Human Factors in Intelligent Transportation Systems. (1st ed.) Taylor & Francis, New York.
[12]
Jayson G. Boubin, Christina F. Rusnock, and Jason M. Bindewald. 2017. Quantifying compliance and reliance trust behaviors to influence trust in human-automation teams. Proceedings of the Human Factors and Ergonomics Society Annual Meeting 61, 1 (Sept. 2017), 750–754. DOI:
[13]
Erik Brockbank, Haoliang Wang, Justin Yang, Suvir Mirchandani, Erdem Bıyık, Dorsa Sadigh, and Judith E. Fan. 2022. How do people incorporate advice from artificial agents when making physical judgments? (May 2022). DOI:
[14]
Jerome R. Busemeyer and James T. Townsend. 1993. Decision field theory: A dynamic-cognitive approach to decision making in an uncertain environment. Psychological Review 100 (1993), 432–459. DOI:
[15]
Eric T. Chancey, James P. Bliss, Yusuke Yamani, and Holly A. H. Handley. 2017. Trust and the compliance–reliance paradigm: The effects of risk, error bias, and reliability on trust and dependence. Human Factors 59, 3 (May 2017), 333–345. DOI:
[16]
Jessie Y. C. Chen and Peter I. Terrence. 2009. Effects of imperfect automation and individual differences on concurrent performance of military and robotics tasks in a simulated multitasking environment. Ergonomics 52, 8 (Aug. 2009), 907–920. DOI:
[17]
Min Chen, Stefanos Nikolaidis, Harold Soh, David Hsu, and Siddhartha Srinivasa. 2018. Planning with trust for human-robot collaboration. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction (HRI’18). Association for Computing Machinery, New York, NY, 307–315. DOI:
[18]
Ewart de Visser, Samuel Monfort, Ryan Mckendrick, Melissa Smith, Patrick McKnight, Frank Krueger, and Raja Parasuraman. 2016. Almost human: Anthropomorphism increases trust resilience in cognitive agents. Journal of Experimental Psychology: Applied 22 (Aug. 2016), 331-349. DOI:
[19]
Ewart de Visser, Richard Pak, and Tyler Shaw. 2018. From “automation” to “autonomy”: The importance of trust repair in human-machine interaction. Ergonomics 61 (March 2018), 1–33. DOI:
[20]
Ewart de Visser and Raja Parasuraman. 2011. Adaptive aiding of human-robot teaming: Effects of imperfect automation on performance, trust, and workload. Journal of Cognitive Engineering and Decision Making 5 (June 2011), 209–231. DOI:
[21]
Ewart J. de Visser, Marieke M. M. Peeters, Malte F. Jung, Spencer Kohn, Tyler H. Shaw, Richard Pak, and Mark A. Neerincx. 2020. Towards a theory of longitudinal trust calibration in human–robot teams. International Journal of Social Robotics 12, 2 (May 2020), 459–478. DOI:
[22]
Peter de Vries, Cees Midden, and Don Bouwhuis. 2003. The effects of errors on system trust, self-confidence, and the allocation of control in route planning. International Journal of Human-Computer Studies 58, 6 (June 2003), 719–735. DOI:
[23]
J. D. Dodson. 1915. The relation of strength of stimulus to rapidity of habit-formation in the kitten. Journal of Animal Behavior 5 (1915), 330–336. DOI:
[24]
Mary T. Dzindolet, Scott A. Peterson, Regina A. Pomranky, Linda G. Pierce, and Hall P. Beck. 2003. The role of trust in automation reliance. International Journal of Human-Computer Studies 58, 6 (June 2003), 697–718. DOI:
[25]
Mica R. Endsley. 2017. From here to autonomy: Lessons learned from human–automation research. Human Factors: The Journal of the Human Factors and Ergonomics Society 59, 1 (Feb. 2017), 5–27. DOI:
[26]
Karen M. Feigh, Michael C. Dorneich, and Caroline C. Hayes. 2012. Toward a characterization of adaptive systems: A framework for researchers and system designers. Human Factors: The Journal of the Human Factors and Ergonomics Society 54, 6 (Dec. 2012), 1008–1024. DOI:
[27]
Michael W. Floyd, Michael Drinkwater, and David W. Aha. 2015. Improving trust-guided behavior adaptation using operator feedback. In Case-Based Reasoning Research and Development (Lecture Notes in Computer Science), Eyke Hüllermeier and Mirjam Minor (Eds.). Springer International Publishing, Cham, 134–148. DOI:
[28]
US Air Force. 2010. Report on Technology Horizons: A Vision for Air Force Science & Technology during 2010–203. Technical Report. https://www.airuniversity.af.edu/AUPress/Book-Reviews/Display/Article/1194559/report-on-technology-horizons-a-vision-for-air-force-science-technology-during/.
[29]
Ulrike Esther Franke. 2014. Drones, drone strikes, and US policy: The politics of unmanned aerial vehicles. Parameters 44, 1 (2014), 121–130.
[30]
Amos Freedy, Ewart DeVisser, Gershon Weltman, and Nicole Coeyman. 2007. Measurement of trust in human-robot collaboration. In 2007 International Symposium on Collaborative Technologies and Systems. 106–114. DOI:
[31]
Ji Gao and John D. Lee. 2006. Extending the decision field theory to model operators’ reliance on automation in supervisory control situations. IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans 36, 5 (Sept. 2006), 943–959. DOI:
[32]
Michael. A. Goodrich and Mary L. Cummings. 2015. Human factors perspective on next generation unmanned aerial systems. In Handbook of Unmanned Aerial Vehicles, Kimon P. Valavanis and George J. Vachtsevanos (Eds.). Springer Netherlands, Dordrecht, 2405–2423. DOI:
[33]
Peter A. Hancock, Richard J. Jagacinski, Raja Parasuraman, Christopher D. Wickens, Glenn F. Wilson, and David B. Kaber. 2013. Human-automation interaction research: Past, present, and future. Ergonomics in Design 21, 2 (April 2013), 9–14. DOI:
[34]
Kevin Anthony Hoff and Masooda Bashir. 2015. Trust in automation: Integrating empirical evidence on factors that influence trust. Human Factors: The Journal of the Human Factors and Ergonomics Society 57, 3 (2015), 407–434. DOI:
[35]
Mark Hoogendoorn, Syed Waqar Jaffry, Peter Paul Van Maanen, and Jan Treur. 2013. Modelling biased human trust dynamics. Web Intelligence and Agent Systems 11, 1 (Aug. 2013), 21–40.
[36]
Wan-Lin Hu, Kumar Akash, Neera Jain, and Tahira Reid. 2016. Real-time sensing of trust in human-machine interactions. IFAC-PapersOnLine 49, 32 (Jan. 2016), 48–53. DOI:
[37]
Wan-Lin Hu, Kumar Akash, Tahira Reid, and Neera Jain. 2019. Computational modeling of the dynamics of human trust during human–machine interactions. IEEE Transactions on Human-Machine Systems 49, 6 (Dec. 2019), 485–497. DOI:
[38]
Aya Hussein, Sondoss Elsawah, and Hussein Abbass. 2020. Towards trust-aware human-automation interaction: An overview of the potential of computational trust models. In Proceedings of the 53rd Hawaii International Conference on System Sciences. DOI:
[39]
Ion Juvina, Michael G. Collins, Othalia Larue, William G. Kennedy, Ewart de Visser, and Celso De Melo. 2019. Toward a unified theory of learned trust in interpersonal and human-machine interactions. ACM Transactions on Interactive Intelligent Systems 9, 4 (Oct. 2019), 24:1–24:33. DOI:
[40]
Ion Juvina, Christian Lebiere, and Cleotilde Gonzalez. 2015. Modeling trust dynamics in strategic interaction. Journal of Applied Research in Memory and Cognition 4, 3 (Sept. 2015), 197–211. DOI:
[41]
Christian Lebiere, Leslie M. Blaha, Corey K. Fallon, and Brett Jefferson. 2021. Adaptive cognitive mechanisms to maintain calibrated trust and reliance in automation. Frontiers in Robotics and AI 8 (2021). https://www.frontiersin.org/article/10.3389/frobt.2021.652776.
[42]
John D. Lee and Neville Moray. 1992. Trust, control strategies and allocation of function in human-machine systems. Ergonomics 35, 10 (1992), 1243–1270. DOI:
[43]
John D. Lee and Neville Moray. 1994. Trust, self-confidence, and operators’ adaptation to automation. International Journal of Human-computer Studies 40, 1 (1994), 153–184. DOI:
[44]
John D. Lee and Katrina A. See. 2004. Trust in automation: Designing for appropriate reliance. Human Factors: The Journal of the Human Factors and Ergonomics Society 46, 1 (2004), 50–80. DOI:
[45]
Stephan Lewandowsky, Michael Mundy, and Gerard P. A. Tan. 2000. The dynamics of trust: Comparing humans to automation. Journal of Experimental Psychology: Applied 6, 2 (2000), 104–123. DOI:
[46]
Morten Lind. 1999. Plant modelling for human supervisory control. Transactions of the Institute of Measurement and Control 21, 4–5 (Oct. 1999), 171–180. DOI:
[47]
Peter-Paul Maanen, Francien Wisse, Jurriaan Diggelen, and Robbert Jan Beun. 2011. Effects of reliance support on team performance by advising and adaptive autonomy. In Proceedings of the 2011 IEEE/WIC/ACM International Conference on Intelligent Agent Technology. IEEE, 287. DOI:
[48]
P. Madhavan and D. A. Wiegmann. 2007. Similarities and differences between human–human and human–automation trust: An integrative review. Theoretical Issues in Ergonomics Science 8, 4 (July 2007), 277–301. DOI:
[49]
Dariusz Mikulski, Frank Lewis, Edward Gu, and Greg Hudas. 2012. Trust method for multi-agent consensus. In Unmanned Systems Technology XIV, Vol. 8387. SPIE, Baltimore, MD, 146–159. DOI:
[50]
Bonnie M. Muir. 1987. Trust between humans and machines, and the design of decision aids. International Journal of Man-Machine Studies 27, 5 (1987), 527–539. DOI:
[51]
Bonnie M. Muir. 1990. Operators’ Trust in and Use of Automatic Controllers in a Supervisory Process Control Task. Ph.D. Thesis. University of Toronto, Toronto, ON, Canada.
[52]
Bonnie M. Muir and Neville Moray. 1996. Trust in automation. Part II. Experimental studies of trust and human intervention in a process control simulation. Ergonomics 39, 3 (March 1996), 429–460. DOI:
[53]
Heather Neyedli, Justin Hollands, and Greg Jamieson. 2009. Human reliance on an automated combat ID system: Effects of display format. Human Factors and Ergonomics Society Annual Meeting Proceedings 53 (Oct. 2009), 212–216. DOI:
[54]
Kazuo Okamura and Seiji Yamada. 2020. Adaptive trust calibration for human-AI collaboration. PLOS ONE 15, 2 (Feb. 2020), e0229132. DOI:
[55]
Raja Parasuraman and Dietrich H. Manzey. 2010. Complacency and bias in human use of automation: An attentional integration. Human Factors 52, 3 (June 2010), 381–410. DOI:
[56]
Raja Parasuraman and Victor Riley. 1997. Humans and automation: Use, misuse, disuse, abuse. Human Factors: The Journal of the Human Factors and Ergonomics Society 39, 2 (June 1997), 230–253. DOI:
[57]
Jeffrey R. Peters, Vaibhav Srivastava, Grant S. Taylor, Amit Surana, MIguel P. Eckstein, and Francesco Bullo. 2015. Human supervisory control of robotic teams: Integrating cognitive modeling with engineering design. IEEE Control Systems Magazine 35, 6 (Dec. 2015), 57–80. DOI:
[58]
Joelle Pineau and Geoffrey J. Gordon. 2007. POMDP planning for robust robot control. In Robotics Research (Springer Tracts in Advanced Robotics), Sebastian Thrun, Rodney Brooks, and Hugh Durrant-Whyte (Eds.). Springer, Berlin, 69–82. DOI:
[59]
Óscar Pérez, Massimo Piccardi, Jesús García, Miguel Ángel Patricio, and José Manuel Molina. 2007. Comparison between genetic algorithms and the Baum-Welch algorithm in learning HMMs for human activity classification. In Applications of Evolutionary Computing, Mario Giacobini (Ed.). Lecture Notes in Computer Science, Vol. 4448. Springer, Berlin, 399–406.
[60]
Lawrence Rabiner and Biing-Hwang Juang. 1986. An introduction to hidden Markov models. IEEE ASSP Magazine 3, 1 (1986), 4–16. DOI:
[61]
Mat R. Abdul Rani Rani, Murray A. Sinclair, and Keith Case. 2000. Human mismatches and preferences for automation. International Journal of Production Research 38, 17 (Nov. 2000), 4033–4039. DOI:
[62]
Victor Riley. 1996. Operator reliance on automation: Theory and data. In Automation and Human Performance: Theory and Applications (1st ed.), Raja Parasuraman and Mustapha Mouloua (Eds.). CRC Press, Mahwah, NJ, 19–35.
[63]
Behzad Sadrfaridpour, Maziar Fooladi Mahani, Zhanrui Liao, and Yue Wang. 2018. Trust-based impedance control strategy for human-robot cooperative manipulation. In ASME 2018 Dynamic Systems and Control Conference. American Society of Mechanical Engineers, V001T04A015 (8 pages). DOI:
[64]
Hamed Saeidi and Yue Wang. 2015. Trust and self-confidence based autonomy allocation for robotic systems. In 2015 54th IEEE Conference on Decision and Control (CDC’15). IEEE, 6052–6057. DOI:
[65]
Hamed Saeidi and Yue Wang. 2019. Incorporating trust and self-confidence analysis in the guidance and control of (semi)autonomous mobile robotic systems. IEEE Robotics and Automation Letters 4, 2 (April 2019), 239–246. DOI:
[66]
Nathan E. Sanders and Chang S. Nam. 2021. Chapter 19 - Applied quantitative models of trust in human-robot interaction. In Trust in Human-Robot Interaction, Chang S. Nam and Joseph B. Lyons (Eds.). Academic Press, 449–476. DOI:
[67]
Thomas B. Sheridan. 1992. Telerobotics, Automation, and Human Supervisory Control. MIT Press, Cambridge, MA.
[68]
Olivier Sigaud and Olivier Buffet. 2013. Markov Decision Processes in Artificial Intelligence. John Wiley & Sons, Hoboken, NJ. 2009048651
[69]
Harold Soh, Yaqi Xie, Min Chen, and David Hsu. 2020. Multi-task trust transfer for human–robot interaction. International Journal of Robotics Research 39, 2–3 (March 2020), 233–249. DOI:
[70]
Yudong Tao, Erik Coltey, Tianyi Wang, Miguel Alonso, Mei-Ling Shyu, Shu-Ching Chen, Hadi Alhaffar, Albert Elias, Biayna Bogosian, and Shahin Vassigh. 2020. Confidence estimation using machine learning in immersive learning environments. In 2020 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR’20). IEEE, 247–252. DOI:
[71]
Kees van Dongen and Peter-Paul van Maanen. 2013. A framework for explaining reliance on decision aids. International Journal of Human-Computer Studies 71, 4 (April 2013), 410–424. DOI:
[72]
Alan R. Wagner, Paul Robinette, and Ayanna Howard. 2018. Modeling the human-robot trust phenomenon: A conceptual framework based on risk. ACM Transactions on Interactive Intelligent Systems 8, 4 (Nov. 2018), 26:1–26:24. DOI:
[73]
Lu Wang, Greg A. Jamieson, and Justin G. Hollands. 2011. The effects of design features on users’ trust in and reliance on a combat identification system. Proceedings of the Human Factors and Ergonomics Society Annual Meeting 55, 1 (Sept. 2011), 375–379. DOI:
[74]
Bernard Weiner. 1986. An Attribution Theory of Motivation and Emotion. Vol. 92. Springer-Verlag, New York, NY.
[75]
Rebecca Wiczorek and Joachim Meyer. 2019. Effects of trust, self-confidence, and feedback on the use of decision automation. Frontiers in Psychology 10 (2019), 519. DOI:
[76]
Anqi Xu and Gregory Dudek. 2015. OPTIMo: Online probabilistic trust inference model for asymmetric human-robot collaborations. In Proceedings of the 10th Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI’15). ACM, New York, NY, 221–228. DOI:
[77]
X. Jessie Yang, Vaibhav V. Unhelkar, Kevin Li, and Julie A. Shah. 2017. Evaluating effects of user experience and system transparency on trust in automation. In Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction (HRI’17). Association for Computing Machinery, New York, NY, 408–416. DOI:
