6.1.2 Emission Probabilities.
Next, the identified emission probabilities, visually depicted in Figures
7(a) and
7(b), are analyzed. Figure
7(a) shows the probability of reliance given the trust and self-confidence states, and Figure
7(b) shows the probability of self-reported self-confidence given the self-confidence state. The first observation from Figure
7(a) is that when the participant’s self-confidence is high, the resulting probabilities behave similarly to the established trust and reliance relationship in which low and high trust lead to low and high reliance, respectively. For example, when participants are in a state of low trust and high self-confidence (
\(T{\downarrow }SC{\uparrow }\)), they are highly likely (89.54%) to
not rely on the automation. When they are in the
\(T{\uparrow }SC{\uparrow }\) state, they are highly likely (89.17%) to rely on it. Interestingly, this relationship is
not exhibited when self-confidence is low. Instead, when participants are in the
\(T{\downarrow }SC{\downarrow }\) state, the likelihood that they will disable (48.62%) or enable (51.38%) the automation assistance is nearly equally distributed. The same is true when participants are in the
\(T{\uparrow }SC{\downarrow }\) state. This suggests that self-confidence may be a more significant factor in reliance decisions when it is low than when it is high.
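To make the structure of these emissions concrete, the following minimal Python sketch encodes the probabilities quoted above as a lookup table and samples a reliance decision. The state labels and function names are illustrative rather than the paper's notation, and the \(T{\uparrow }SC{\downarrow }\) entry is approximated as 0.5 because the text describes it only as nearly equally distributed (the exact value appears in Figure 7(a)).

```python
import numpy as np

rng = np.random.default_rng(0)

# Identified emission probabilities P(rely | trust, self-confidence) from
# Figure 7(a). Keys are (trust, self-confidence) states; values are P(rely).
p_rely = {
    ("T_low",  "SC_high"): 1 - 0.8954,  # T-down/SC-up: 89.54% likely to NOT rely
    ("T_high", "SC_high"): 0.8917,      # T-up/SC-up: 89.17% likely to rely
    ("T_low",  "SC_low"):  0.5138,      # T-down/SC-down: enable 51.38% / disable 48.62%
    ("T_high", "SC_low"):  0.50,        # T-up/SC-down: approximation ("nearly equal")
}

def sample_reliance(state):
    """Sample a binary reliance decision (1 = rely) for a given latent state."""
    return int(rng.random() < p_rely[state])

print(sample_reliance(("T_low", "SC_high")))  # 0 most of the time
```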
It is also helpful to compare these probabilities directly to the reliance behavior predicted by models that build upon the “confidence vs. trust” hypothesis. The computational models discussed in Section
2 predict reliance based on a difference between the trust and self-confidence states. For example, using the hypothesis, it would be assumed that the
\(T{\uparrow }SC{\downarrow }\) state results in the participant relying and the
\(T{\downarrow }SC{\downarrow }\) state results in them not relying on the automation. However, the emission probabilities shown in Figure
7(a) contradict this; instead, the likelihood of relying on or not relying on the automation, when self-confidence is low, is nearly 50%. It is worth noting that the proposed model is probabilistic, whereas existing ones are deterministic. Given the stochastic nature of human behavior, it is possible that the proposed model is able to better predict reliance behavior by inherently allowing for stochasticity in the prediction. In particular, it appears that when the human is in a state of low self-confidence, their behavior may be more stochastic than when they are in a state of high self-confidence. Recall the validation results shown earlier in Section
5.4 (see Figure
6(d)) in which the proposed model was a better predictor of reliance than a model based upon the “confidence vs. trust” hypothesis.
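For contrast, a deterministic predictor built on the “confidence vs. trust” hypothesis reduces to a single comparison. The boolean encoding of the high/low states and the tie-breaking rule below are assumptions made for illustration, not the exact formulation of the models cited in Section 2.

```python
def deterministic_rely(trust_high: bool, sc_high: bool) -> int:
    # Rely iff trust exceeds self-confidence (one common reading of the
    # "confidence vs. trust" hypothesis); breaking ties toward "not rely"
    # is an assumption made for illustration.
    return int(trust_high > sc_high)

# For the T-down/SC-down state, the deterministic rule commits to "not rely":
print(deterministic_rely(trust_high=False, sc_high=False))  # -> 0
# The identified emissions instead assign P(rely) = 0.5138 to this state,
# behavior that a deterministic rule cannot reproduce.
```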
6.1.3 Transition Probabilities.
Given that the POMDP/R consists of three discrete-valued actions whose values yield 18 distinct action combinations, there are a total of 18 different transition probability functions that describe the state transitions. The transition probability functions are divided to separate the probabilities of trust state transitions from the probabilities of self-confidence state transitions. A complete review of all transition probabilities can be found in Appendix
A.2. For clarity of exposition, a subset of these probabilities is analyzed here. Specifically, the actions associated with participants’ performance—changes in the number of collisions and game time—are grouped into cases of performance improvement or deterioration, and the effect of the third action, the autonomous assistance, is analyzed within these groupings.
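The bookkeeping implied by these combinations can be sketched as follows. The value sets assumed for the two performance actions are illustrative only; the actual definitions and the complete set of combinations appear in Appendix A.2.

```python
from itertools import product

# Assumed factorization of the action space, for illustration only: three
# collision-change values, two game-time-change values, and three assistance
# levels give the 18 combinations discussed in the text.
collision_change = ["C-", "C0", "C+"]            # decrease / unchanged / increase
time_change = ["G-", "G+"]                       # decrease / increase
assistance = ["Theta_L", "Theta_M", "Theta_H"]   # low / medium / high assistance

action_combos = list(product(collision_change, time_change, assistance))
assert len(action_combos) == 18

# One transition function per combination, e.g. a mapping
# T[(dC, dG, a_A)][state] -> distribution over next (trust, SC) states.
```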
Overall Performance Improvement. The overall performance improvement case is that in which the number of collisions decreases (\(C^-\)) and game time decreases (\(G^-\)). When
\(a_A \in \Theta _L\), as shown in Figures
8(a) and
8(d), and for all state combinations, self-confidence is likely to remain the same at the next trial (>80%). Moreover, when the participant is in the
\(T{\downarrow }SC{\downarrow }\) state, they are very likely to transition to a state of high trust (99.81%), suggesting that
they attribute the performance improvement to the automation rather than to themselves. For easier interpretation, the referenced probabilities are in bold in Table
6.
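As a sketch of how one such transition function can be represented and sampled, the following fills in only the values quoted above for the overall-improvement, \(a_A \in \Theta _L\) case. All names are illustrative, and the complete tables appear in Table 6 and Appendix A.2; the remaining entries are omitted rather than invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Quoted probabilities for action (C-, G-, Theta_L), factored into trust and
# self-confidence transitions as in the paper.
p_next_trust_high = {("T_low", "SC_low"): 0.9981}  # bolded value in Table 6
p_sc_unchanged = {("T_low", "SC_low"): 0.80}       # ">80%" lower bound from the text

def sample_next_state(state):
    """Sample the next (trust, SC) state under action (C-, G-, Theta_L)."""
    trust = "T_high" if rng.random() < p_next_trust_high[state] else "T_low"
    if rng.random() < p_sc_unchanged[state]:
        sc = state[1]
    else:
        sc = "SC_high" if state[1] == "SC_low" else "SC_low"
    return (trust, sc)

print(sample_next_state(("T_low", "SC_low")))  # most often ('T_high', 'SC_low')
```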
Such attribution to the automation is not observed, however, for most participants in the \(T{\uparrow }SC{\downarrow }\) state. Participants’ cognitive state responses when they are in the
\(T{\uparrow }SC{\downarrow }\) state are similar for all
\(a_A\) as shown in Figures
8(a) to
8(f). They are likely to transition to a state of low trust (73.08%, 77.59%, 99.35%) while remaining in a state of low self-confidence (82.19%, 66.08%, 99.92%), suggesting that the decrease in trust may result from the user attributing the performance improvement more to themselves than to the automation. Upon closer analysis, when
\(a_A \in \Theta _L\vee \Theta _M\), participants had a 26.91% and 22.41% chance, respectively, of remaining in a state of high trust, and a 17.81% and 33.92% chance, respectively, of transitioning to a state of high self-confidence. The different values of
\(a_A\) may result in different attributions of performance between the user and automation, which then affect the participants’ cognitive state responses. When
\(a_A \in \Theta _H\), as shown in Figures
8(c) and
8(f), and when the participant is in the
\(T{\downarrow }SC{\downarrow }\) state, the probability that they transition to a state of high trust (55.29%) or remain in a state of low trust (44.71%) is approximately equally distributed. On the other hand, they are more likely to remain in a state of low self-confidence (75%) than to transition to a state of high self-confidence. These participants may attribute the cause of performance improvement slightly more to the automation than to themselves.
Interestingly, for all levels of automation assistance, when participants are in a state of high self-confidence and experience an overall improvement in performance, they are very likely to remain in a state of high self-confidence as well as maintain the same level of trust in the autonomous assistant at the next trial. In other words, a participant’s self-confidence affects their interpretation of their performance metrics, which in turn affects their trust in the automation.
Partial Performance Improvement. For performance improvement, another case of interest is that in which the number of collisions does not change but the participants’ game time decreases. This represents a case of partial improvement. When
\(a_A \in \Theta _L\), as shown in Table
8 (see Appendix
A.2), and when the participant is in the
\(T{\downarrow }SC{\downarrow }\) state, their likelihood of transitioning to a state of high trust (45.72%) or low trust (54.28%) is nearly equally distributed. However, they are likely to remain in a state of low self-confidence (79.49%). This is similar to when participants are in the
\(T{\uparrow }SC{\downarrow }\) state and
\(a_A \in \Theta _M\), as shown in Table
9. When
\(a_A \in \Theta _H\), as shown in Table
10, and the participant is in the
\(T{\downarrow }SC{\downarrow }\) state, they are highly likely (99.86%) to remain in a state of low self-confidence. However, their likelihood of transitioning to a state of high trust is only 29.52%. When
\(a_A \in \Theta _L \vee \Theta _H\) and participants are in the
\(T{\downarrow }SC{\downarrow }\) state, an increase in trust suggests that they attribute the slight improvement in performance to the automation rather than to themselves. However, when
\(a_A \in \Theta _M\), the fact that participants in a state of high trust are equally likely to remain in that state or transition to a state of low trust, while their low self-confidence is likely to be maintained (84.12%), suggests that they are unsure to whom they should attribute the improvement in performance.
In comparing these results to the overall improvement case, participants in a state of low self-confidence are still unlikely to gain confidence and transition to
\(SC{\uparrow }\), but they are now not as likely to attribute any improvement to the automation. This underscores the consequences, from the perspective of HAI, of a human being in a state of low self-confidence.
In other words, participants in a state of low self-confidence may have more difficulty calibrating their trust in the automation than those with high self-confidence. An analysis of absolute collision and time performance data (see Figure
9(a)) shows that as the game progressed, on average, participants’ performance improved and participants’ self-confidence increased (see Figure
9(b)). In turn, these observations suggest that in addition to trust calibration, correct calibration of self-confidence is important for improved HAI, as discussed further in Section
6.2.
Overall Performance Deterioration. Next, cases in which participants’ performance deteriorates between game trials are analyzed. For all
\(a_A\), when performance
deteriorates and participants are in the
\(T{\downarrow }SC{\downarrow }\) state, their trust is highly likely to increase (99.78%, 99.87%, 98.40%) at the next trial. However, they are likely to remain in a state of low self-confidence (99.92%, 99.84%, 99.98%). This suggests that these participants attribute performance deterioration to themselves rather than to the automation. On the other hand, the autonomous assistance input does have a greater effect on participants in states of high trust (either
\(T{\uparrow }SC{\downarrow }\) or
\(T{\uparrow }SC{\uparrow }\)). When
\(a_A \in \Theta _M \vee \Theta _H\) (Figures
10(b) and
10(c)), participants in a state of high trust are very likely (>90%) to transition to a state of low trust, regardless of their state of self-confidence. This suggests that they strongly attribute the decrease in performance to the autonomous assistant. This is not true when
\(a_A \in \Theta _L\), in which case participants in the \(T{\uparrow }SC{\downarrow }\) state are likely to remain in a state of high trust at the next trial. These results highlight that while self-confidence affects participants’ attribution of changes in performance, so does the user’s experience with the autonomous assistant.
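The moderating effect of the assistance level on trust loss can be summarized in a small lookup, shown below. The \(\Theta _M\) and \(\Theta _H\) entries are the >90% lower bounds read from Figures 10(b) and 10(c); the \(\Theta _L\) entry is left unspecified because the text describes it only qualitatively.

```python
# P(next trust = low | current trust = high) after overall performance
# deterioration, by assistance level. Values are bounds or qualitative
# readings of the results quoted above, not exact table entries.
p_trust_drop_given_high_trust = {
    "Theta_L": None,  # "likely to remain in a state of high trust" (no value quoted)
    "Theta_M": 0.90,  # lower bound (">90%", Figure 10(b))
    "Theta_H": 0.90,  # lower bound (">90%", Figure 10(c))
}
```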
Partial Performance Deterioration. Next, the case in which the number of collisions does not change but the participants’ game time increases is considered. For
\(a_A \in \Theta _L \vee \Theta _M \vee \Theta _H\), shown in Tables
8–
10, respectively, and when participants are in the
\(T{\downarrow }SC{\downarrow }\) state, their trust is likely to increase (99.98%, 99.70%, 99.90%) at the next trial, and they are likely to remain in a state of low self-confidence (95.12%, 99.76%, 100%). These results are consistent with those observed for the overall performance deterioration case. When
\(a_A \in \Theta _H\), however, and participants are in the
\(T{\uparrow }SC{\downarrow }\) state, their likelihood of transitioning to a state of low trust (57.68%) or high trust (42.32%) is more equally distributed than in the overall performance deterioration case. Therefore, the extent of the change in performance also affects participants’ trust and self-confidence dynamics.