Our voice provides salient cues about how confident we sound, which promotes inferences about how believable we are. However, the neural mechanisms involved in these social inferences are largely unknown. Employing functional magnetic resonance imaging, we examined the brain networks and individual differences underlying the evaluation of speaker believability from vocal expressions. Participants (n = 26) listened to statements produced in a confident, unconfident, or "prosodically unmarked" (neutral) voice, and judged how believable the speaker was on a 4-point scale. We found frontal–temporal networks were activated for different levels of confidence, with the left superior and inferior frontal gyri more activated for confident statements, the right superior temporal gyrus for unconfident expressions, and the bilateral cerebellum for statements in a neutral voice. Based on listeners' believability judgments, we observed increased activation in the right superior parietal lobule (SPL) associated with higher believability, while increased activation in the left postcentral gyrus (PoCG) was associated with lower believability. A psychophysiological interaction analysis found that the anterior cingulate cortex and bilateral caudate were connected to the right SPL when higher believability judgments were made, while the supplementary motor area was connected with the left PoCG when lower believability judgments were made. Personal characteristics, such as interpersonal reactivity and the individual tendency to trust others, modulated the brain activations and the functional connectivity when making believability judgments. In sum, our data pinpoint neural mechanisms that are involved when inferring a speaker's believability from their voice, and establish ways that these mechanisms are modulated by individual characteristics of the listener. Hum Brain Mapp, 2017. © 2017 Wiley Periodicals, Inc.
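The psychophysiological interaction (PPI) analysis mentioned above can be illustrated with a minimal sketch: a PPI regressor is the element-wise product of a (demeaned) seed-region timecourse and a psychological condition regressor, entered into a GLM alongside both main effects. All names and data below are hypothetical, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_scans = 120

# Psychological regressor: alternating high- vs low-believability blocks (hypothetical design).
psych = np.tile([1.0] * 10 + [-1.0] * 10, 6)
# Seed timecourse (e.g., a right-SPL region-of-interest signal; synthetic here).
seed = rng.standard_normal(n_scans)
# The PPI term: demeaned seed signal multiplied by the psychological regressor.
ppi = (seed - seed.mean()) * psych

# Simulate a target voxel whose coupling with the seed depends on condition.
target = 0.5 * seed + 0.8 * ppi + 0.1 * rng.standard_normal(n_scans)

# GLM: target ~ intercept + psych + seed + ppi
X = np.column_stack([np.ones(n_scans), psych, seed, ppi])
beta, *_ = np.linalg.lstsq(X, target, rcond=None)
ppi_beta = beta[3]  # a reliably positive weight indicates condition-dependent coupling
```

The key point is that the interaction regressor, not the seed main effect, carries the evidence for condition-dependent functional connectivity.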
Feeling of knowing (or expressed confidence) reflects a speaker's certainty or commitment to a statement and can be associated with one's trustworthiness or persuasiveness in social interaction. We investigated the perceptual-acoustic correlates of expressed confidence and doubt in spoken language, with a focus on both linguistic and vocal speech cues. In Experiment 1, utterances subserving different communicative functions (e.g., stating facts, making judgments) were produced in a confident, close-to-confident, unconfident, and neutral-intending voice by six speakers, and then rated for perceived confidence by 72 native listeners. As expected, speaker confidence ratings increased with the intended level of expressed confidence; neutral-intending statements were frequently judged as relatively high in confidence. The communicative function of the statement, and the presence vs. absence of an utterance-initial probability phrase (e.g., Maybe, I'm sure), further modulated speaker confidence ratings. In Experiment 2, acoustic analysis of perceptually valid tokens rated in Experiment 1 revealed distinct patterns of pitch, intensity, and temporal features according to perceived confidence levels: confident expressions were highest in fundamental frequency (f0) range, mean amplitude, and amplitude range, whereas unconfident expressions were highest in mean f0 and slowest in speaking rate, with more frequent pauses. Dynamic analyses of f0 and intensity changes across the utterance uncovered distinctive patterns of expression as a function of confidence level at different positions in the utterance. Our findings provide new information on how metacognitive states such as confidence and doubt are communicated by vocal and linguistic cues, which permit listeners to arrive at graded impressions of a speaker's feeling of (un)knowing.
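The summary measures described above (mean f0, f0 range, pause frequency) can be sketched as simple statistics over a frame-wise pitch track. This is a minimal illustration with a toy track, where 0.0 marks unvoiced or silent frames; the actual analyses would rely on a dedicated tool such as Praat.

```python
def summarize_pitch_track(f0_hz, frame_s=0.01):
    """Return mean f0, f0 range (voiced frames only), and pause proportion.

    f0_hz: per-frame pitch estimates in Hz; 0.0 marks unvoiced/silent frames.
    """
    voiced = [f for f in f0_hz if f > 0.0]
    if not voiced:
        return {"mean_f0": 0.0, "f0_range": 0.0, "pause_prop": 1.0}
    return {
        "mean_f0": sum(voiced) / len(voiced),
        "f0_range": max(voiced) - min(voiced),
        "pause_prop": 1.0 - len(voiced) / len(f0_hz),
    }

# Toy track: six voiced frames around 200 Hz with two unvoiced frames.
track = [210.0, 205.0, 0.0, 195.0, 200.0, 0.0, 190.0, 200.0]
stats = summarize_pitch_track(track)
```

On this toy input, mean f0 is 200 Hz, f0 range is 20 Hz, and a quarter of the frames are unvoiced; per-utterance measures like these are what get compared across confidence levels.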
A critical issue in the study of language communication is how extra-linguistic information, such as the social status of the communicators, is taken into account by the online comprehension system. In Mandarin Chinese, the second-person pronoun (you/your) can be in a respectful form (nin/nin-de) when the addressee is of higher status than the speaker, or in a less respectful form (ni/ni-de) when the addressee is of equal or lower status. We conducted an event-related potential (ERP) study to investigate how social status information affects pronoun resolution during utterance comprehension. Participants read simple conversational scenarios for comprehension, with each scenario including a context describing a speaker and an addressee, and a directly quoted utterance beginning with the second-person pronoun. The relative status between the speaker and the addressee was varied, creating conditions in which the second-person pronoun was either consistent or inconsistent with the relationship between conversants, or in which the two conversants were of equal status. ERP results showed that, compared with the status-consistent and status-equal conditions, the status-inconsistent condition elicited an anterior N400-like effect on nin-de (over-respectful) and a broadly distributed N400 on ni-de (disrespectful). In a later time window, both the status-reversed and the status-equal conditions elicited a sustained positivity effect on nin-de and a sustained negativity effect on ni-de. These findings suggest that the comprehender builds up expectations about the upcoming pronoun based on the perceived social status of the conversants. While the inconsistent pronoun causes semantic integration difficulty in an earlier stage of processing, the strategy used to resolve the inconsistency, and the corresponding brain activity, vary according to the pragmatic implications of the pronoun.
Verbal communication is often ambiguous. By employing the event-related potential (ERP) technique, this study investigated how a comprehender resolves referential ambiguity by using information concerning the social status of communicators. Participants read a conversational scenario which included a minimal conversational context, describing a speaker and two other persons of the same or different social status, and a directly quoted utterance. A singular, second-person pronoun in the respectful form (nin/nin-de in Chinese) in the utterance could be ambiguous with respect to which of the two persons was the addressee (the "Ambiguous condition"). Alternatively, the pronoun was not ambiguous, either because one of the two persons was of higher social status and hence should be the addressee according to social convention (the "Status condition"), or because a word referring to the status of a person was additionally inserted before the pronoun to help indicate the referent of the pronoun (the "Referent condition"). Results showed that perceived ambiguity decreased over the Ambiguous, Status, and Referent conditions. Electrophysiologically, the pronoun elicited an increased N400 in the Referent condition relative to the Status and Ambiguous conditions, reflecting an increased integration demand due to the necessity of linking the pronoun to both its antecedent and the status word. Relative to the Referent condition, a late, sustained positivity was elicited for the Status condition starting from 600 ms, while a more delayed, anterior negativity was elicited for the Ambiguous condition. Moreover, the N400 effect was modulated by individuals' sensitivity to social status information, while the late positivity effect was modulated by individuals' empathic ability. These findings highlight the neurocognitive flexibility of contextual bias in referential processing during utterance comprehension.
Listeners often encounter conflicting verbal and vocal cues about a speaker's feeling of knowing; these "mixed messages" can reflect online shifts in one's mental state as a statement is uttered, or serve different social-pragmatic goals of the speaker. Using a cross-splicing paradigm, we investigated how conflicting cues about a speaker's feeling of (un)knowing change listeners' perceptions. Listeners rated the confidence of speakers of utterances containing an initial verbal phrase congruent or incongruent with vocal cues in the subsequent statement, while their brain potentials were tracked. Different forms of conflict modulated the perceived confidence of the speaker, an effect that was stronger for female listeners. A confident phrase followed by an unconfident voice enlarged an anteriorly maximized negativity for female listeners and a late positivity for male listeners, suggesting that mental representations of another's feeling of knowing in the face of this conflict were hampered by increased demands on integration for females and increased demands on updating for males. An unconfident phrase followed by a confident voice elicited a delayed sustained positivity (from 900 ms) in female participants only, suggesting females generated inferences to moderate the conflicting message about speaker knowledge. We highlight ways that verbal and vocal cues are integrated in real time to access a speaker's feeling of (un)knowing, while arguing that females are more sensitive to the social relevance of conflicting speaker cues.
In speech communication, listeners must accurately decode vocal cues that refer to the speaker's mental state, such as their confidence or 'feeling of knowing'. However, the time course and neural mechanisms associated with online inferences about speaker confidence are unclear. Here, we used event-related potentials (ERPs) to examine the temporal neural dynamics underlying a listener's ability to infer speaker confidence from vocal cues during speech processing. We recorded listeners' real-time brain responses while they evaluated statements wherein the speaker's tone of voice conveyed one of three levels of confidence (confident, close-to-confident, unconfident) or were spoken in a neutral manner. Neural responses time-locked to event onset show that the perceived level of speaker confidence could be differentiated at distinct time points during speech processing: unconfident expressions elicited a weaker P2 than all other expressions of confidence (or neutral-intending utterances), whereas close-to-confident expressions elicited a reduced negative response in the 330–500 msec and 550–740 msec time windows. Neutral-intending expressions, which were also perceived as relatively confident, elicited a more delayed, larger sustained positivity than all other expressions in the 980–1270 msec window for this task. These findings provide the first piece of evidence of how quickly the brain responds to vocal cues signifying the extent of a speaker's confidence during online speech comprehension; first, a rough dissociation between unconfident and confident voices occurs as early as 200 msec after speech onset. At a later stage, further differentiation of the exact level of speaker confidence (i.e., close-to-confident, very confident) is evaluated via an inferential system to determine the speaker's meaning under current task settings.
These findings extend three-stage models of how vocal emotion cues are processed in speech comprehension (e.g., Schirmer & Kotz, 2006) by revealing how a speaker's mental state (i.e., feeling of knowing) is simultaneously inferred from vocal expressions.
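The window-based ERP comparisons above amount to averaging epoched voltage within a latency window. A minimal sketch, on entirely synthetic data (the window bounds echo those reported, but trial counts, channel counts, and effect sizes are illustrative assumptions):

```python
import numpy as np

def window_mean(epochs, times, t_start, t_end):
    """Mean amplitude over all epochs and channels within [t_start, t_end) seconds."""
    mask = (times >= t_start) & (times < t_end)
    return epochs[:, :, mask].mean()

sfreq = 250.0                      # sampling rate in Hz (illustrative)
n_samp = 375                       # 1.5 s of samples post speech onset
times = np.arange(n_samp) / sfreq

rng = np.random.default_rng(1)
# Synthetic epoched data: 30 trials x 32 channels x samples of noise.
epochs = rng.standard_normal((30, 32, n_samp))
# Inject a positive deflection around 150-250 ms to mimic a P2-like component.
p2_mask = (times >= 0.15) & (times < 0.25)
epochs[:, :, p2_mask] += 2.0

p2_amp = window_mean(epochs, times, 0.15, 0.25)    # captures the injected component
late_amp = window_mean(epochs, times, 0.33, 0.50)  # noise-only window, near zero
```

Condition effects like those reported (e.g., a weaker P2 for unconfident voices) would then be tested by comparing such window means across conditions.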
During interpersonal communication, listeners must rapidly evaluate verbal and vocal cues to arrive at an integrated meaning about the utterance and about the speaker, including a representation of the speaker's 'feeling of knowing' (i.e., how confident they are in relation to the utterance). In this study, we investigated the time course and neural responses underlying a listener's ability to evaluate speaker confidence from combined verbal and vocal cues. We recorded real-time brain responses as listeners judged statements conveying three levels of confidence with the speaker's voice (confident, close-to-confident, unconfident), which were preceded by meaning-congruent lexical phrases (e.g., I am positive, Most likely, Perhaps). Event-related potentials to utterances with combined lexical and vocal cues about speaker confidence were compared to responses elicited by utterances without the verbal phrase in a previous study (Jiang and Pell, 2015). Utterances with combined cues about speaker confidence elicited reduced N1, P2, and N400 responses when compared to corresponding utterances without the phrase. When compared to confident statements, close-to-confident and unconfident expressions elicited reduced N1 and P2 responses and a late positivity from 900 to 1250 ms; unconfident and close-to-confident expressions were differentiated later, in the 1250–1600 ms time window. The effect of lexical phrases on confidence processing differed for male and female participants, with evidence that female listeners incorporated information from the verbal and vocal channels in a distinct manner. Individual differences in trait empathy and trait anxiety also moderated neural responses during confidence processing. Our findings showcase the cognitive processing mechanisms and individual factors governing how we infer a speaker's mental (knowledge) state from the speech signal.