Derivation and Definitions: The Phonology of English
Phonology is a branch of linguistics concerned with the systematic organization of sounds
in languages. The term also refers to the phonological system (sound system) of a given
language. This is one of the fundamental systems which
a language is considered to comprise, like its syntax and its vocabulary.
Phonology is often distinguished from phonetics. While phonetics concerns the physical
production, acoustic transmission and perception of the sounds of speech,[1][2] phonology
describes the way sounds function within a given language or across languages to encode
meaning. For many linguists, phonetics belongs to descriptive linguistics, and phonology
to theoretical linguistics, although establishing the phonological system of a language is
necessarily an application of theoretical principles to analysis of phonetic evidence. Note
that this distinction was not always made, particularly before the development of the
modern concept of the phoneme in the mid 20th century. Some subfields of modern
phonology have a crossover with phonetics in descriptive disciplines such as
psycholinguistics and speech perception, resulting in specific areas like articulatory
phonology or laboratory phonology.
History
The history of phonology may be traced back to the Ashtadhyayi, the Sanskrit grammar
composed by Pāṇini in the 4th century BC. In particular the Shiva Sutras, an auxiliary
text to the Ashtadhyayi, introduces what can be considered a list of the phonemes of the
Sanskrit language, with a notational system for them that is used throughout the main
text, which deals with matters of morphology, syntax and semantics.
The Polish scholar Jan Baudouin de Courtenay (together with his students, Mikołaj
Kruszewski and Lev Shcherba) shaped the modern usage of the term phoneme in 1876–77,[5]
which had been coined in 1873 by the French linguist A. Dufriche-Desgenettes,[6] who
proposed it as a one-word equivalent for the German Sprachlaut.[7] Baudouin de
Courtenay's work, though often unacknowledged, is considered to be the starting point of
modern phonology. He also worked on the theory of phonetic alternations (what is now
called allophony and morphophonology), and may have had an influence on the work of
Saussure according to E. F. K. Koerner.[8]
Natural phonology, based on the work of David Stampe, holds that phonology rests on a
set of universal phonological processes which apply simultaneously (though the output of
one process may be the input to another). The second most prominent natural phonologist is
Patricia Donegan (Stampe's wife); there are many natural phonologists in Europe, and a
few in the U.S., such as Geoffrey Nathan. The principles of natural phonology were
extended to morphology by Wolfgang U. Dressler, who founded natural morphology.
In 1976 John Goldsmith introduced autosegmental phonology. Phonological phenomena
are no longer seen as operating on one linear sequence of segments, called phonemes or
feature combinations, but rather as involving some parallel sequences of features which
reside on multiple tiers. Autosegmental phonology later evolved into feature geometry,
which became the standard theory of representation for theories of the organization of
phonology as different as lexical phonology and optimality theory.
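The tier-based representation can be pictured with a small data-structure sketch. The
Python fragment below is a toy illustration only, not any published formalism: the word,
the tones, and the association lines are all invented for the example.

```python
# A toy sketch of an autosegmental-style representation: the segmental
# tier and the tone tier are separate parallel sequences, linked by
# association lines rather than bundled into one linear string.

segmental_tier = ["b", "a", "l", "a"]   # invented two-syllable word
tone_tier = ["H", "L"]                  # one tone per syllable

# Association lines pair tone positions with segment positions:
# tone 0 (H) links to segment 1 (first "a"),
# tone 1 (L) links to segment 3 (second "a").
associations = [(0, 1), (1, 3)]

def realize(segments, tones, links):
    """Spell out each segment, attaching any tone linked to it."""
    tone_for = {seg_i: tones[tone_i] for tone_i, seg_i in links}
    return [
        seg + ("\u0301" if tone_for.get(i) == "H" else
               "\u0300" if tone_for.get(i) == "L" else "")
        for i, seg in enumerate(segments)
    ]

print("".join(realize(segmental_tier, tone_tier, associations)))  # bálà
```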
Government phonology, which originated in the early 1980s as an attempt to unify
theoretical notions of syntactic and phonological structures, is based on the notion that all
languages necessarily follow a small set of principles and vary according to their
selection of certain binary parameters. That is, all languages' phonological structures are
essentially the same, but there is restricted variation that accounts for differences in
surface realizations. Principles are held to be inviolable, though parameters may
sometimes come into conflict. Prominent figures in this field include Jonathan Kaye, Jean
Lowenstamm, Jean-Roger Vergnaud, Monik Charette, and John Harris.
In a course at the LSA summer institute in 1991, Alan Prince and Paul Smolensky
developed optimality theory, an overall architecture for phonology according to which
languages choose a pronunciation of a word that best satisfies a list of constraints ordered
by importance; a lower-ranked constraint can be violated when the violation is necessary
in order to obey a higher-ranked constraint. The approach was soon extended to
morphology by John McCarthy and Alan Prince, and has become a dominant trend in
phonology. The appeal to phonetic grounding of constraints and representational elements
(e.g. features) in various approaches has been criticized by proponents of 'substance-free
phonology', especially Mark Hale and Charles Reiss.[9][10]
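The ranked-constraint evaluation just described can be made concrete with a short
sketch. The Python below is a toy illustration with invented candidates and simplified
versions of three well-known constraint families (NoCoda, Dep-IO, Max-IO); it is not
Prince and Smolensky's formal system, only the "higher-ranked constraints decide first"
comparison.

```python
# A toy sketch of optimality-theoretic evaluation. Each constraint
# assigns violation counts to candidates; the ranking turns those
# counts into a tuple, and Python's tuple comparison is exactly the
# lexicographic comparison OT requires.

def no_coda(cand, inp):   # markedness: penalize a word-final consonant
    return 0 if cand[-1] in "aeiou" else 1

def max_io(cand, inp):    # faithfulness: penalize deleting input segments
    return max(0, len(inp) - len(cand))

def dep_io(cand, inp):    # faithfulness: penalize inserting new segments
    return max(0, len(cand) - len(inp))

def evaluate(inp, candidates, ranking):
    """Return the candidate whose violation profile is best under the
    given ranking (lower tuples win, compared left to right)."""
    return min(candidates, key=lambda c: tuple(f(c, inp) for f in ranking))

# For input /pat/ with NoCoda >> Dep-IO >> Max-IO, deleting the coda
# (violating only low-ranked Max-IO) beats keeping it or inserting a vowel.
print(evaluate("pat", ["pat", "pa", "pata"], [no_coda, dep_io, max_io]))  # pa
```

With the ranking reversed (Max-IO highest), the faithful candidate "pat" would win
instead; reranking the same constraint set is how the theory models cross-linguistic
variation.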
Broadly speaking, government phonology (or its descendant, strict-CV phonology) has a
greater following in the United Kingdom, whereas optimality theory is predominant in
the United States.
An integrated approach to phonological theory that combines synchronic and diachronic
accounts of sound patterns was initiated in recent years with Evolutionary Phonology.[11]
Analysis of phonemes
An important part of traditional, pre-generative schools of phonology is studying which
sounds can be grouped into distinctive units within a language; these units are known as
phonemes. For example, in English, the "p" sound in pot is aspirated (pronounced [pʰ])
while that in spot is not aspirated (pronounced [p]). However, English speakers
intuitively treat both sounds as variations (allophones) of the same phonological category,
that is of the phoneme /p/. (Traditionally, it would be argued that if an aspirated [pʰ] were
interchanged with the unaspirated [p] in spot, native speakers of English would still hear
the same words; that is, the two sounds are perceived as "the same" /p/.) In some other
languages, however, these two sounds are perceived as different, and they are
consequently assigned to different phonemes. For example, in Thai, Hindi, and Quechua,
there are minimal pairs of words for which aspiration is the only contrasting feature (two
words can have different meanings but with the only difference in pronunciation being
that one has an aspirated sound where the other has an unaspirated one).
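The English pattern described above is predictable from context, which is what makes
[pʰ] and [p] allophones rather than separate phonemes. The sketch below is a toy rule
with invented, simplified transcriptions; it derives the phonetic forms of pot-like and
spot-like words from a single phonemic /p/.

```python
# A toy sketch of the English aspiration rule: voiceless stops are
# aspirated word-initially but plain after /s/. Because the choice of
# allophone is fully predictable, it can never distinguish two words,
# which is why both sounds are grouped under one phoneme.

VOICELESS_STOPS = {"p", "t", "k"}

def realize(phonemes):
    """Map a phonemic segment list to phonetic segments by rule."""
    out = []
    for i, seg in enumerate(phonemes):
        if seg in VOICELESS_STOPS and i == 0:
            out.append(seg + "\u02b0")   # word-initial: aspirated [pʰ]
        else:
            out.append(seg)              # e.g. after /s/: plain [p]
    return "".join(out)

print(realize(["p", "a", "t"]))        # pʰat  (pot-like)
print(realize(["s", "p", "a", "t"]))   # spat  (spot-like)
```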
[Figure: The vowels of modern (Standard) Arabic and (Israeli) Hebrew from the phonemic
point of view. The two circles intersect: the distinction between short a, i and u is
made by both sets of speakers, but Arabic lacks the mid articulation of short vowels,
while Hebrew lacks the distinction of vowel length.]
[Figure: The vowels of modern (Standard) Arabic and (Israeli) Hebrew from the phonetic
point of view. The two circles are totally separate: none of the vowel sounds made by
speakers of one language is made by speakers of the other.]
Part of the phonological study of a language therefore involves looking at data (phonetic
transcriptions of the speech of native speakers) and trying to deduce what the underlying
phonemes are and what the sound inventory of the language is. The presence or absence
of minimal pairs, as mentioned above, is a frequently used criterion for deciding whether
two sounds should be assigned to the same phoneme. However, other considerations
often need to be taken into account as well.
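As a rough illustration of the minimal-pair criterion, the sketch below scans a toy list
of transcriptions (the forms are invented, with one segment per tuple slot) for pairs
that differ in exactly one segment; any such pair is evidence that the two differing
sounds contrast phonemically.

```python
# A toy minimal-pair finder: two same-length transcriptions that differ
# in exactly one segment show that the differing sounds can carry a
# meaning contrast, i.e. belong to different phonemes.

from itertools import combinations

def minimal_pairs(words):
    pairs = []
    for w1, w2 in combinations(words, 2):
        if len(w1) != len(w2):
            continue
        diffs = [(a, b) for a, b in zip(w1, w2) if a != b]
        if len(diffs) == 1:
            pairs.append((w1, w2, diffs[0]))
    return pairs

# Invented Hindi-like forms: aspiration alone separates the first two,
# so [pʰ] and [p] must be distinct phonemes in such a language.
words = [("p\u02b0", "a", "l"), ("p", "a", "l"), ("b", "a", "l")]
for w1, w2, (a, b) in minimal_pairs(words):
    print("".join(w1), "~", "".join(w2), ":", a, "vs", b)
```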
The particular contrasts which are phonemic in a language can change over time. At one
time, [f] and [v], two sounds that have the same place and manner of articulation and
differ in voicing only, were allophones of the same phoneme in English, but later came to
belong to separate phonemes. This is one of the main factors of historical change of
languages as described in historical linguistics.
The findings and insights of speech perception and articulation research complicate the
traditional and somewhat intuitive idea of interchangeable allophones being perceived as
the same phoneme. First, interchanged allophones of the same phoneme can result in
unrecognizable words. Second, actual speech, even at the word level, is highly
coarticulated, so it is problematic to expect to be able to splice words into simple segments
without affecting speech perception.
Different linguists therefore take different approaches to the problem of assigning sounds
to phonemes. For example, they differ in the extent to which they require allophones to
be phonetically similar. There are also differing ideas as to whether this grouping of
sounds is purely a tool for linguistic analysis, or reflects an actual process in the way the
human brain processes a language.
Since the early 1960s, theoretical linguists have moved away from the traditional concept
of a phoneme, preferring to consider basic units at a more abstract level, as a component
of morphemes; these units can be called morphophonemes, and analysis using this
approach is called morphophonology.
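A standard textbook illustration of this idea (not one given here) is the English
plural: a single underlying morphophoneme, often written |z|, surfaces as [s], [z], or
[ɪz] depending on the stem-final sound. The sketch below uses deliberately simplified,
toy segment classes.

```python
# A toy sketch of a morphophonemic analysis: the English plural ending
# is one underlying unit |z|, and its surface shape is derived by rule
# from the stem's final sound.

SIBILANTS = {"s", "z", "ʃ", "ʒ", "tʃ", "dʒ"}
VOICELESS = {"p", "t", "k", "f", "θ"}

def plural(stem_segments):
    """Attach the plural morphophoneme |z| and return its surface form."""
    final = stem_segments[-1]
    if final in SIBILANTS:
        return stem_segments + ["ɪ", "z"]   # bus  -> buses: [ɪz]
    if final in VOICELESS:
        return stem_segments + ["s"]        # cat  -> cats:  [s]
    return stem_segments + ["z"]            # dog  -> dogs:  [z]

print(plural(["k", "æ", "t"]))   # ['k', 'æ', 't', 's']
print(plural(["d", "ɒ", "g"]))   # ['d', 'ɒ', 'g', 'z']
print(plural(["b", "ʌ", "s"]))   # ['b', 'ʌ', 's', 'ɪ', 'z']
```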