
    Philipp Norton

    Research on the evolution of human speech and phonology benefits from the comparative approach: structural, spectral, and temporal features can be extracted and compared across species in an attempt to reconstruct the evolutionary history of human speech. Here we focus on analytical tools to measure and compare temporal structure in human speech and animal vocalizations. We introduce the reader to a range of statistical methods that can be used, on the one hand, to quantify rhythmic complexity in single vocalizations and, on the other hand, to compare rhythmic structure between multiple vocalizations. These methods include: time series analysis, distributional measures, variability metrics, Fourier transform, auto- and cross-correlation, phase portraits, and circular statistics. Using computer-generated data, we apply a range of techniques, walking the reader through the necessary software and its functions. We describe which techniques are most appropriate for testing particular hypotheses on rhythmic structure, and provide possible interpretations of the tests. These techniques can be applied equally well to find rhythmic structure in gesture, movement, and any other behavior developing over time, when the research focus lies on its temporal structure. This introduction to quantitative techniques for rhythm and timing analysis will hopefully spur additional comparative research and produce comparable results across all disciplines working on the evolution of speech, ultimately advancing the field.
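As an illustration of the kind of analysis the abstract describes (not code from the paper itself), the sketch below simulates a jittered isochronous event sequence, derives inter-onset intervals (IOIs), and computes two of the listed measures: a variability metric (coefficient of variation) and a lag-1 autocorrelation. All names and parameter values here are assumptions chosen for the example.

```python
import math
import random

random.seed(42)

# Simulated onset times: an isochronous rhythm (period 0.5 s, an assumed
# value) perturbed by Gaussian timing jitter.
period = 0.5
onsets = [i * period + random.gauss(0, 0.02) for i in range(50)]

# Inter-onset intervals (IOIs): the basic time series for rhythm analysis.
iois = [b - a for a, b in zip(onsets, onsets[1:])]

# Distributional / variability measures: mean IOI and its coefficient of
# variation (CV). A low CV indicates a near-isochronous rhythm.
mean_ioi = sum(iois) / len(iois)
sd_ioi = math.sqrt(sum((x - mean_ioi) ** 2 for x in iois) / (len(iois) - 1))
cv = sd_ioi / mean_ioi

def autocorr(xs, lag):
    """Normalized autocorrelation of the series xs at the given lag."""
    m = sum(xs) / len(xs)
    num = sum((xs[i] - m) * (xs[i + lag] - m) for i in range(len(xs) - lag))
    den = sum((x - m) ** 2 for x in xs)
    return num / den

# Lag-1 autocorrelation of the IOI series: for independent onset jitter it
# tends to be negative (a long interval is usually followed by a short one).
r1 = autocorr(iois, 1)

print(f"mean IOI = {mean_ioi:.3f} s, CV = {cv:.3f}, lag-1 r = {r1:.3f}")
```

The same IOI series could equally be fed into the other tools the abstract lists, e.g. a Fourier transform to recover the underlying tempo, or circular statistics on event phases relative to the estimated period.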
    Multimodal signalling can improve or maximize information exchange. A challenge is to show that two independent signals, such as vocalizations and visual displays, are deliberately coordinated. Male zebra finches, Taeniopygia guttata, signal visually and acoustically during courtship, performing a stereotyped dance while singing. The male approaches the female hopping in a zigzag pattern, turning his body axis, and wiping his beak repeatedly on or above the perch. The only previous quantitative study of song and dance choreography in zebra finches revealed that the distribution of all movements during song was not strongly patterned across birds but very similar in fathers and sons. This raises the possibility that particular movements may follow a choreography. Here we report that three operationally defined dance movements, 'beak wipe' (BW), 'turn-around' (TA) and 'hop', occurred with different frequencies and speed during singing than during silence. BW, TA and hops clustered significantly at the start and end of song bouts and were arranged in a nonrandom fashion. In addition, BW, but not TA, were performed faster during song than nonsong. Finally, hops coincided significantly more often than expected by chance with particular notes. Together, these results suggest that male zebra finches integrate their song and dance during courtship. This may help females to identify courting males in a noisy environment and evaluate the intensity and quality of the courtship performance. Our results underscore that the choreography of movement gestures with learned vocalizations, such as hand gestures accompanying speech, is a further parallel between human and avian signalling. They invite future investigations into the underlying neural mechanisms and consequences for mate choice.