Professor Diane Brentari is one of three Directors of the Center for Gesture, Sign, and Language. Her current work addresses cross-linguistic variation among sign languages, particularly in the parameters of handshape and movement. She is also interested in how the mental lexicon emerges in historical time, which includes the relationship between gesture, homesign systems, and well-established sign languages. In addition, Brentari has developed the Prosodic Model of sign language phonology, and her work addresses the prosodic structure of signed and spoken languages.

Phone: 773-702-5725
Address: Linguistics Department, University of Chicago, 1115 E. 58th Street, Chicago, IL 60637
Responses to stimulus vignettes with an agent (video example 1) and without an agent (video example 2).
Does knowledge of language transfer across language modalities? For example, can speakers who have had no sign language experience spontaneously project grammatical principles of English to American Sign Language (ASL) signs? To address this question, here we explore a grammatical illusion. Using spoken language, we first show that a single word with doubling (e.g., trafraf) can elicit conflicting linguistic responses, depending on the level of linguistic analysis (phonology vs. morphology). We next show that speakers with no command of a sign language extend these same principles to novel ASL signs. Remarkably, the morphological analysis of ASL signs depends on the morphology of participants' spoken language. Speakers of Malayalam (a language with rich reduplicative morphology) prefer XX signs when doubling signals morphological plurality, whereas no such preference is seen in speakers of Mandarin (a language with no productive plural morphology). Our conclusions open up the p…
This chapter presents an analysis of the phonology of ASL, the signed language of the American Deaf community, which focuses on the information structure of the sign. General phonological theory should operate in a uniform fashion across modalities and provide theoretical units that play similar, though not identical, roles in the various modalities that may underlie human language. The chapter focuses on the analogy that should be established between the coda of a spoken-language syllable and the second/weak hand of a two-handed sign. If the proposal is correct, then the cross-modality analogies that are to be made by the theory of phonology are, in a sense, more abstract than previous researchers had been led to believe. If there is in sign language something analogous to the syllable of spoken language, then it does not carry over the sequential character of the spoken-language syllable.
Over the history of research on sign languages, much scholarship has highlighted the pervasive presence of signs whose forms relate to their meaning in a non-arbitrary way. The presence of these forms suggests that sign language vocabularies are shaped, at least in part, by a pressure toward maintaining a link between form and meaning in wordforms. We use a vector space approach to test the ways this pressure might shape sign language vocabularies, examining how non-arbitrary forms are distributed within the lexicons of two unrelated sign languages. Vector space models situate the representations of words in a multi-dimensional space where the distance between words indexes their relatedness in meaning. Using phonological information from the vocabularies of American Sign Language (ASL) and British Sign Language (BSL), we tested whether increased similarity between the semantic representations of signs corresponds to increased phonological similarity. The results of the computational…
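The abstract above describes testing whether signs that are closer in meaning are also closer in form. As a rough illustration only (not the authors' actual pipeline), the following Python sketch shows how such a test could be set up: signs receive toy semantic embeddings and binary phonological feature codes, pairwise distances are computed in each space, and the two sets of distances are rank-correlated. All vectors, feature encodings, and glosses here are hypothetical placeholders.

# Minimal sketch: correlate pairwise semantic similarity with pairwise
# phonological similarity for a toy sign lexicon (illustrative data only).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Hypothetical semantic vectors (e.g., rows from a word-embedding model),
# one row per sign gloss.
semantic = np.array([
    [0.8, 0.1, 0.3],   # e.g., "CAT"
    [0.7, 0.2, 0.4],   # e.g., "DOG"
    [0.1, 0.9, 0.2],   # e.g., "THINK"
    [0.2, 0.8, 0.1],   # e.g., "KNOW"
])

# Hypothetical binary phonological feature codes for the same signs
# (handshape, location, and movement features coded as 0/1).
phonological = np.array([
    [1, 0, 1, 0, 1],
    [1, 0, 1, 1, 0],
    [0, 1, 0, 0, 1],
    [0, 1, 0, 1, 1],
])

# Pairwise distances: cosine for meaning, Hamming for form.
sem_dist = pdist(semantic, metric="cosine")
phon_dist = pdist(phonological, metric="hamming")

# A positive rank correlation would indicate that signs closer in meaning
# also tend to be closer in form; significance would normally be assessed
# with a permutation (Mantel-style) test, omitted here.
rho, p = spearmanr(sem_dist, phon_dist)
print(f"Spearman rho = {rho:.3f} (p = {p:.3f})")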
In this article, we analyze the grammatical incorporation of demonstratives in a tactile language emerging in communities of DeafBlind signers in the US who communicate via reciprocal, tactile channels—a practice known as “protactile.” In the first part of the paper, we report on a synchronic analysis of recent data, identifying four types of “taps,” which have taken on different functions in protactile language and communication. In the second part of the paper, we report on a diachronic analysis of data collected over the past 8 years. This analysis reveals the emergence of a new kind of “propriotactic” tap, which has been co-opted by the emerging phonological system of protactile language. We link the emergence of this unit to both demonstrative taps and backchanneling taps, both of which emerged earlier. We show how these forms are all undergirded by an attention-modulation function, more or less backgrounded, and operating across different semiotic systems. In doing so, we co…
Table 1 provides the numbers of clips and of fingerspelling segments in the datasets used in our work. Note that the number of fingerspelling segments is not exactly the same as in [7, 8] because of the 75-frame overlap when we split raw video into 300-frame clips. On average there are 1.9/1.8 fingerspelling segments per clip for ChicagoFSWild/ChicagoFSWild+. The distributions of durations are shown in Figure 1.
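For concreteness, here is a minimal Python sketch of the clip-splitting step described above: a raw video of N frames is cut into fixed 300-frame clips with a 75-frame overlap (i.e., a stride of 225 frames). The function and constant names are illustrative assumptions, not code from the ChicagoFSWild release.

# Minimal sketch of splitting a video into overlapping fixed-length clips.
from typing import List, Tuple

CLIP_LEN = 300   # frames per clip
OVERLAP = 75     # frames shared by consecutive clips
STRIDE = CLIP_LEN - OVERLAP

def split_into_clips(num_frames: int) -> List[Tuple[int, int]]:
    """Return (start, end) frame indices for each clip of a video."""
    clips = []
    start = 0
    while start < num_frames:
        end = min(start + CLIP_LEN, num_frames)
        clips.append((start, end))
        if end == num_frames:
            break
        start += STRIDE
    return clips

# Because consecutive clips overlap, a fingerspelling segment that straddles
# a clip boundary can fall into two clips, which is why per-clip segment
# counts need not match the original per-video counts.
print(split_into_clips(700))
# [(0, 300), (225, 525), (450, 700)]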