
MODELS OF COGNITIVE-LINGUISTIC PROCESS

By: Niya Mathew


Submitted to: Dr. R. Rajasudhakar

INTERACTIVE MODELS

(a) TRACE Model:
The TRACE model is a network with several layers for processing speech: input enters at one end and recognized words emerge at the other. A simplified breakdown:

1. Input Layer:
- This is where speech sounds enter the model.
- They're converted into features
2. Feature Layer:
- These features activate specific phonemes
- For example, if the input includes a "b" sound, it activates the corresponding phoneme units.
3. Phoneme Layer:
- Here, phonemes activate units representing words.
- So, if the model hears a sequence of phonemes that matches a word, it activates that word unit.
4. Word Layer:
- This layer represents recognized words.
- Activation of a word unit means that the model has recognized that word.

Connectivity:
- Feedforward Connections:
- These connections pass information from one layer to the next, like a flow of activation.
- Lateral Inhibitory Connections:
- Within each layer, units can inhibit each other, helping in selecting the most appropriate options.
- Top-Down Feedback Connections:
- Words can send feedback to phonemes, helping to refine the recognition process.

Operation:
- Input Processing:
- The model receives speech sounds bit by bit to mimic real-time processing.
- Each chunk of input activates different parts of the network, changing activation levels in the layers.

- Recognition:
- There's no set rule for when a word or phoneme is recognized.
- Typically, a recognition threshold is used, and when a unit's activation exceeds this threshold, it's
considered recognized.

Level 1: Auditory features
● Place and manner of articulation are determined from acoustic features.
● These features are represented as traces.
Level 2: Phonemes
● A complete set of phoneme candidates for each position in a word.
● Phonemes within a set inhibit each other (lateral inhibition of competing phonemes).
● Winner-take-all configuration: only one phoneme in each set is identified.
Level 3: Words
● Reflects the lexicon, limited to monosyllabic words.
● Lateral inhibition ensures only one word is selected.
● Words are activated in the mental lexicon via bottom-up processing.
● Feedback is sent to phonemes through top-down processing.
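The layer dynamics above can be sketched as a toy interactive-activation loop. Everything here is illustrative: the unit sets, the connection strengths (`exc`, `inh`, `fb`), and the update rule are invented stand-ins for TRACE's published parameters.

```python
# Toy sketch of TRACE-style interactive activation: two phoneme units
# compete via lateral inhibition while word units feed activation back.

def step(phonemes, words, bottom_up, exc=0.5, inh=0.3, fb=0.2):
    """One update cycle: feedforward excitation, lateral inhibition,
    top-down feedback. All connection strengths are illustrative."""
    new_ph = {}
    for p, act in phonemes.items():
        net = bottom_up.get(p, 0.0) * exc                            # input -> phoneme
        net -= inh * sum(a for q, a in phonemes.items() if q != p)   # lateral inhibition
        net += fb * sum(a for w, a in words.items() if p in w)       # word -> phoneme feedback
        new_ph[p] = max(0.0, min(1.0, act + net))
    new_w = {}
    for w, act in words.items():
        net = exc * sum(new_ph[p] for p in w if p in new_ph)         # phoneme -> word
        net -= inh * sum(a for v, a in words.items() if v != w)      # lateral inhibition
        new_w[w] = max(0.0, min(1.0, act + net))
    return new_ph, new_w

phonemes = {"b": 0.0, "p": 0.0}
words = {"bat": 0.0, "pat": 0.0}
evidence = {"b": 1.0}            # acoustic input favours /b/
for _ in range(5):
    phonemes, words = step(phonemes, words, evidence)
```

After a few cycles the evidence for /b/ suppresses /p/ through lateral inhibition, and the word unit consistent with /b/ dominates its competitor.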

(b) Cohort Model:

● Proposed by William Marslen-Wilson (1980).
● Explains how auditory or visual input is mapped onto a word in the listener’s lexicon.
● Processing starts with the first phoneme of a word, not only after the word has finished (as shown by the shadowing effect).
● After recognition, the word is selected and integrated into the context (e.g., into a sentence).
● Accounts for both bottom-up and top-down processing.
● Multiple words in a sentence are processed in parallel (parallel processing).
● It accounts for visual input as well.
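The incremental narrowing of the cohort can be sketched as below: candidates sharing the first phoneme are pruned with each incoming segment until one remains (the uniqueness point). The mini-lexicon and the `recognize` helper are invented for illustration.

```python
# Toy sketch of cohort-style recognition: the cohort forms at the first
# phoneme and shrinks as each segment arrives.

def recognize(input_phonemes, lexicon):
    cohort = [w for w in lexicon if w.startswith(input_phonemes[0])]
    for i in range(1, len(input_phonemes) + 1):
        prefix = "".join(input_phonemes[:i])
        cohort = [w for w in cohort if w[:i] == prefix]   # prune mismatches
        if len(cohort) == 1:
            return cohort[0], i       # recognized before the word's offset
    return (cohort[0], len(input_phonemes)) if cohort else (None, len(input_phonemes))

lexicon = ["trespass", "tread", "treat", "trellis"]
word, point = recognize(list("tresp"), lexicon)
```

Here "trespass" is identified at its fourth segment, once "tres" rules out every competitor, illustrating recognition before word offset.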

(c) Logogen Model:


● Proposed by Morton (1969).
● In Greek, logos means ‘word’.
● The heart of the logogen model was a set of processing units that receive input (from either the auditory or visual modality) and fire when their excitatory inputs exceed some criterion level (threshold).
● This model explains word recognition using a new type of unit known as a ‘logogen’.
● Each logogen is responsible for recognizing one specific word.
● Each word in a person’s vocabulary is represented by a logogen.
● A given stimulus can affect the activation level of multiple words at once, usually words that are similar to each other.
● When this happens, whichever word reaches the threshold level is sent to the output unit.
● Evidence from semantically related words is then combined through the cognitive system, and signals are finally sent to an output buffer.
● The response is thus the output of the interaction between sensory and semantic cues.
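A logogen can be sketched as a simple threshold unit that sums sensory and contextual evidence. The words, thresholds, and evidence values below are invented; the point is only that whichever unit crosses threshold fires.

```python
# Toy sketch of logogens as threshold units: evidence from sensory and
# semantic sources accumulates until a unit fires.

class Logogen:
    def __init__(self, word, threshold):
        self.word, self.threshold, self.activation = word, threshold, 0.0

    def receive(self, evidence):
        self.activation += evidence                 # cues combine additively
        return self.activation >= self.threshold    # fires past threshold

logogens = [Logogen("doctor", 1.0), Logogen("docile", 1.0)]
fired = []
for lg in logogens:
    lg.receive(0.6)            # shared partial sensory match for "doc..."
for lg in logogens:
    # semantic context (e.g., "nurse") boosts only the related logogen
    if lg.word == "doctor" and lg.receive(0.5):
        fired.append(lg.word)
```

Sensory evidence alone leaves both candidates below threshold; the semantic boost pushes only "doctor" over it, mirroring the sensory-semantic interaction described above.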

(d) Semantic Feature Comparison Model:

● This model is used to derive predictions about categorization in situations where a subject must rapidly decide whether a test item is a member of a particular target category.
● Using words from the animal kingdom, Collins and Quillian (1972) developed a hierarchical network model of semantic memory.
● Its classification system has superordinate, basic, and subordinate levels.
● The time taken to verify sentences indicates how far apart the features are placed (distance effect).
● Two types of features: defining features and characteristic features.
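The defining/characteristic distinction can be sketched as a two-stage verification procedure: overall feature overlap gives fast answers at the extremes, while intermediate overlap triggers a slower check of defining features only. The feature lists, thresholds, and `verify` helper are all invented for illustration.

```python
# Toy sketch of two-stage feature comparison for category verification.

concepts = {
    "robin":   {"defining": {"animal", "bird", "feathers"}, "characteristic": {"flies", "small"}},
    "penguin": {"defining": {"animal", "bird", "feathers"}, "characteristic": {"swims"}},
    "bat":     {"defining": {"animal", "mammal"},           "characteristic": {"flies", "small"}},
}
bird = {"defining": {"animal", "bird", "feathers"}, "characteristic": {"flies", "small"}}

def verify(instance, category, fast=0.7, reject=0.3):
    inst = concepts[instance]
    inst_feats = inst["defining"] | inst["characteristic"]
    cat_feats = category["defining"] | category["characteristic"]
    overlap = len(inst_feats & cat_feats) / len(cat_feats)
    if overlap >= fast:
        return True, "fast"          # stage 1: high similarity, quick "yes"
    if overlap <= reject:
        return False, "fast"         # stage 1: low similarity, quick "no"
    # stage 2: slower comparison restricted to defining features
    return category["defining"] <= inst["defining"], "slow"
```

A typical member (robin) is accepted fast; an atypical member (penguin) and a similar non-member (bat) both need the slower second stage, which is the latency pattern the model predicts.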

(e) Spreading Activation Model:

● Proposed by Collins and Loftus (1975).


● Words are represented as nodes, as in the earlier network models, and these nodes are connected to each other; the connections are not necessarily hierarchical but are determined by the strength of associations.
● Connections form between units such that thinking of one automatically activates the other.
● These connections are based on personal experience.
● Activation spreads first to directly linked nodes and then to more remote nodes.
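Spreading activation can be sketched as a small weighted graph in which activating one node passes a decaying share of activation to its neighbours, and from there to more remote nodes. The network, association strengths, and decay value are invented.

```python
# Toy sketch of spreading activation over an association network.

def spread(network, start, steps=2, decay=0.5):
    activation = {node: 0.0 for node in network}
    activation[start] = 1.0
    for _ in range(steps):
        incoming = {node: 0.0 for node in network}
        for node, links in network.items():
            for neighbour, strength in links.items():
                # stronger associations carry more (decayed) activation
                incoming[neighbour] += activation[node] * strength * decay
        for node in network:
            activation[node] += incoming[node]
    return activation

network = {
    "red":    {"fire": 0.8, "apple": 0.6},
    "fire":   {"red": 0.8, "engine": 0.7},
    "apple":  {"red": 0.6},
    "engine": {"fire": 0.7},
}
act = spread(network, "red")
```

Directly linked nodes ("fire", "apple") end up more active than remote ones ("engine"), which is the distance-and-strength gradient the model describes.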

(f) Neighborhood Activation Model:


● Proposed by Luce et al. (1990).
● NAM makes several predictions regarding the effects of neighborhood density and word frequency on spoken word recognition.
● First, if the stimulus input activates a relatively large number of similar acoustic-phonetic patterns in memory, word recognition is predicted to be slower and less accurate.
● That is, words with many similar-sounding neighbors should be responded to less quickly and accurately than words with few similar-sounding neighbors.
● Second, NAM predicts that the frequency of the neighborhood should affect recognition. In particular, the model predicts that, all things being equal, words with high-frequency neighbors should be responded to less quickly and accurately than words with low-frequency neighbors.
● Finally, NAM predicts processing advantages for high-frequency words.
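The neighborhood notion above can be sketched with the common one-segment rule: a neighbor differs from the target by a single substitution, addition, or deletion. The mini-lexicon and its frequency counts are invented; density and mean neighbor frequency are the two quantities NAM's predictions turn on.

```python
# Toy sketch of NAM-style neighborhood computation.

def one_segment_apart(a, b):
    if len(a) == len(b):
        return sum(x != y for x, y in zip(a, b)) == 1          # substitution
    if abs(len(a) - len(b)) == 1:                              # addition/deletion
        longer, shorter = (a, b) if len(a) > len(b) else (b, a)
        return any(longer[:i] + longer[i + 1:] == shorter for i in range(len(longer)))
    return False

def neighborhood(word, lexicon):
    return [w for w in lexicon if w != word and one_segment_apart(w, word)]

# invented frequency counts
lexicon = {"cat": 90, "bat": 40, "hat": 55, "mat": 20, "at": 10, "dog": 75}
neighbors = neighborhood("cat", lexicon)
density = len(neighbors)
mean_neighbor_freq = sum(lexicon[w] for w in neighbors) / density
```

Under NAM, a word like "cat", with a dense neighborhood, should be recognized more slowly than a word with few neighbors, and a high mean neighbor frequency slows it further.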

HIERARCHICAL MODELS

1. Jackson's Model
John Hughlings Jackson, in 1874, proposed a model that viewed behavior and neural activity as the
superimposition of increasingly complex functions upon basic capacities. Here's a breakdown of his model:
Basic Functions Level: Consists of automatic and involuntary functions such as respiration, cardiac rhythm,
endocrine functions, and sleep.
Intermediate Level: Involves postures, gait, and responses to painful stimuli.
Higher Level: Encompasses voluntary functions such as language use.
Symptom Differentiation:
 Jackson differentiated symptoms into positive and negative categories.
 Negative symptoms indicate functions that patients cannot carry out, reflecting damaged brain areas.
 Positive symptoms refer to residual abilities, reflecting the operation of remaining brain portions.
Language Perspective:
 Jackson emphasized that language's critical feature was its ability to form "propositions," which express
relationships between objects and events.
 Propositions are flexible and not bound to stimuli.
 "True propositions" are those not used in stereotyped or automatic ways.
 Disturbances in propositional language use lead to restricted, stereotyped, and automatic speech.
Broca's Aphasia:
 Patients with Broca's aphasia understand spoken language to a considerable extent but produce sparse,
restricted speech.
 They may retain automatic words like obscenities and express stereotyped words.
 Jackson suggested that these words existed in the patients' minds before the lesion causing aphasia.
 Non-words produced are derivatives of these words.

2. JAKOBSON’S MODEL (Roman Jakobson):
● Jakobson’s hypothesis (1964) was based on case reports and studies in the literature. He observed the pattern of loss and retention of phonemes in aphasic language and compared that pattern to the sequence in which phonemes are acquired in normal language development. Children show an obligatory sequence of mastery of the phonemes of language.
● Jakobson restricted his hypothesis regarding the inverse relationship between aphasic disorders of language and the stages of language acquisition to the phonemic inventory.
● He describes how phonemic contrasts develop in any language via place of articulation, nasality, degree of closure or openness of the vocal tract, and the position of the sides and body of the tongue.
● Only when the basic contrasts (maximal contrasts) exist does the rest of the phonemic inventory develop.
● In aphasics, this pattern is reversed in language breakdown.
● Patients lose the ability to produce more complex phonemes before simpler ones, and when their language returns, they regain phonemes in a pattern similar to that of children.
● Language consists of 2 types of operations:
a. choice of units
b. combination of units.
● So, 2 types of aphasia: (Jakobson, 1964)
● Problems choosing the right unit- Paraphasias & Anomia.
● Problems combining units- Agrammatism
Merits:
● Reflected the development of linguistic units in the brain.
● Gives specific content to the notion of the hierarchical organization of linguistic units.
Demerits:
● Does not apply to other aspects of language (Caramazza and Zurif, 1978)
● Hypothesis restricted only to the inverse relationship between aphasic disorders of language and stages of
acquisition to the phonemic inventory.
● This does not advance our knowledge of the anatomical basis of the hierarchical features of language.

3. JASON BROWN’S MODEL:


● Jason Brown’s theory of language and brain is based on the neuroanatomical aspect of language
processing.
● Language tasks such as speaking and comprehension involve realizing successive processing levels in sequential order.
● Subcortical areas are involved in the primitive stages of language production; cortical areas are involved in the later stages.
● He classified the language processing system into two systems: anterior and posterior.
● With respect to the posterior system of language processing for speech production, he explained semantic paraphasia and phonemic paraphasia.
● Problems with categorical judgement (selecting a particular word within a narrow linguistic category) cause semantic paraphasia, associated with lesions of the association cortex.
● Problems in selecting phonological entities lead to phonemic paraphasia, associated with lesions of Wernicke’s area.

PROCESS MODELS
Luria's Process Model Overview:
Alexander Luria, in 1947 and later in 1973, proposed a comprehensive model of brain function and
organization. This model emphasizes the dynamic and interconnected nature of brain processes:
Three Functional Units:
Luria identified three functional units of the brain, responsible for:
1. Reception: Involves the intake of sensory information.
2. Association: Integrates and processes sensory information, forming perceptions and concepts.
3. Expression: Controls motor functions and produces behavior in response to processed information.

1. First functional unit - brainstem (reticular formation), hippocampus, and limbic system. This unit regulates tone, waking, and mental states, and is referred to as the arousal and attentional unit.
2. Second functional unit - posterior cortical areas (including the occipital, parietal, and temporal lobes). This unit receives, analyzes, and stores information (the sensory input and integration unit).
3. Third functional unit - association cortex located in the frontal and prefrontal cortex. The unit for programming, regulation, and verification of activity (the executive planning and organization unit); it also deals with the capacity for intentions, plans, asking new questions, solving problems, and self-monitoring.
Three Stages of Information Processing:
 Luria described three stages of information processing that occur across these functional
units:
1. Input: Sensory information is received and processed.
2. Integration: Information is analyzed, compared, and combined to form perceptions
and concepts.
3. Output: Motor responses are generated based on processed information.
Brain Regions and Functional Systems:
 Luria proposed that different brain regions and functional systems are responsible for specific
cognitive processes.
 He emphasized the importance of distributed processing and the dynamic interactions
between brain areas in supporting complex behaviors.
Dysfunction and Compensation:
 Luria's model also accounts for how brain damage or dysfunction can lead to deficits in
specific cognitive processes.
 He described compensatory mechanisms by which intact brain areas may take on additional
functions to compensate for damage in other regions.

COMPUTATIONAL MODELS
IMPORTANCE:
1. Clarity through Specificity: Computational models compel researchers to precisely
articulate their theories in quantitative and algorithmic terms, enhancing clarity and enabling
rigorous testing and prediction.
2. Controlled Variable Manipulation: Computational models facilitate the manipulation of
specific variables while keeping others constant, overcoming the complexity of natural
environments. This control is particularly valuable in studying language learning where
manipulating variables like input quantity is challenging in empirical studies.
3. Teasing Apart Confounding Factors: Computational models provide a systematic approach
to disentangle confounding factors, such as age of acquisition and proficiency in bilingual
language learning, which are difficult to separate in empirical studies. By adjusting variables
like L2 onset time or amount of input, researchers can isolate their effects more effectively.
4. Direct Examination of Processes: Unlike verbal models, computational approaches allow
direct examination of underlying processes rather than inferring them from input-output
relations. For instance, techniques like hierarchical clustering analysis unveil the internal
representations and their evolution during learning, offering insights into cognitive processes.
5. Visualization of Internal Representations: Computational models and modern data analysis
methods make internal representations and their developmental changes visible and
accessible, aiding comprehension of complex cognitive phenomena like language learning.
Computational modeling research is based on the metaphor of the human brain as a computational information
processing system. From an external observer viewpoint, this system perceives the environment using a number
of input channels (senses), processes the information using some type of processing steps (the nervous system),
and creates outputs (motor actions) based on the available sensory information and other internal states of the
system. This input/output-relationship is affected by developmental factors and learning from earlier
sensorimotor experience, realized as changes in the connectivity and structure of the central nervous system.
Computational research attempts to understand the components of this perception-action loop by replacing the
human physiology and neurophysiology with computational algorithms for sensory (or sensorimotor)
information processing. Typically the aim is not to replicate information processing of the brain at the level of
individual neurons, but to focus on the computational and algorithmic principles of the process, i.e. the
information representation, flow, and transformation within the system.

Approaches different from the more “classical” view of cognition and language have led to the development of computational modeling of language along the following two directions:
1. Probabilistic approach
Because of empirical discoveries, computational researchers have begun to explore computational frameworks of language based on probabilistic principles, such as Bayesian statistics and co-occurrence statistics (Chater & Manning, 2006; Jones, Willits, & Dennis, 2015; Perfors, Tenenbaum, Griffiths, & Xu, 2011, for reviews).
The Yu and Ballard Model (2007) represents an example of computational probabilistic models applied to developmental psycholinguistics. The model focuses on semantic learning and begins by calculating co-occurrence statistics between linguistic labels (words) in spoken utterances and real-world objects (referents) in their direct extralinguistic contexts. The input data of the model were extracted from two video clips of caregiver-infant interactions from the CHILDES database (MacWhinney, 2000). Specifically, Yu and Ballard focused on two components of the input: the language stream, which included the transcripts of caregivers’ speech, and the meaning stream, which included the set of objects shown in the video as the potential referents. The task of the model was to find the correct word-referent pairs based on statistical regularities in these two streams of the input. Simple frequency counting of single word-object pairs is not the best way to find the correct referent of a word, because there were too many high-frequency function words in the spoken utterances (such as you, the) that could outweigh the content words (such as cat) in the input speech stream, leading to incorrect mappings to the referents (such as the image of a cat) in the context. To solve this problem, the authors first estimated the association probabilities of all the possible word-referent pairs using an expectation-maximization (EM) algorithm. They then identified the best word-referent pairs, whose association probabilities jointly “maximize the likelihood of the audio-visual observations in natural interaction”.
The authors demonstrated that, with the convergence of the EM algorithm, the association probabilities of relevant word-referent pairs increased and those of irrelevant pairs decreased. Eventually, correct referents for several words could be successfully identified given the higher association probabilities between words and referents. An important feature of the Yu and Ballard model is the incorporation of certain non-linguistic (social) contextual cues into its statistical learning. Yu and Ballard’s study demonstrates a salient feature of computational modeling, which is that researchers can systematically manipulate the variables in the simulations. Adding or removing certain factors from the simulation (e.g., adding social cues to the current model) allows researchers to clearly identify their causal role and systematically investigate their effect and impact on learning or processing. The model clearly shows the significance of cross-situational statistics in the learning of word meanings. However, it only learned a small number (about 40-60) of relevant word-referent pairs.
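The EM idea described above can be sketched on a toy corpus. This is not Yu and Ballard's implementation: the scenes, smoothing constant, and iteration count are invented, but the sketch shows why EM beats raw co-occurrence counting when function words like "the" co-occur with everything.

```python
# Toy sketch of EM-based cross-situational word learning: each scene pairs
# an utterance with candidate referents; EM re-estimates p(referent | word).

scenes = [
    (["look", "at", "the", "cat"], ["CAT"]),
    (["the", "cat", "sleeps"],     ["CAT"]),
    (["look", "at", "the", "dog"], ["DOG"]),
    (["the", "dog", "runs"],       ["DOG"]),
]
words = {w for utt, _ in scenes for w in utt}
refs = {r for _, rs in scenes for r in rs}
# uniform initial association probabilities p(referent | word)
p = {w: {r: 1.0 / len(refs) for r in refs} for w in words}

for _ in range(20):                         # EM iterations
    counts = {w: {r: 1e-6 for r in refs} for w in words}
    for utterance, referents in scenes:
        for r in referents:
            total = sum(p[w][r] for w in utterance)
            for w in utterance:             # E-step: expected alignments
                counts[w][r] += p[w][r] / total
    for w in words:                         # M-step: renormalize
        norm = sum(counts[w].values())
        p[w] = {r: c / norm for r, c in counts[w].items()}

best_for_cat = max(p["cat"], key=p["cat"].get)
```

Because "cat" occurs only in CAT scenes while "the" occurs in all scenes, the association probability for the cat-CAT pair rises toward 1 while "the" stays split between referents, so the content word wins the mapping.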
2. Connectionist approach
Since the 1980s, the classical view of the mind as a serial symbolic computational system has been
challenged by the resurgence of connectionism or Parallel Distributed Processing (PDP), also known as
artificial neural networks. Connectionism advocates that language learning and processing are parallel,
distributed, and interactive in nature, just as other cognitive systems are. Specifically, connectionist language
models embrace the philosophy that static linguistic representations (e.g., words, concepts, syntactic
structures) are emergent properties that can be dynamically acquired from the input environment (e.g., the
speech data received by the learner).
The DevLex-II model, as formulated in Li, Zhao, and MacWhinney (2007), is a scalable SOM-based connectionist language model designed to simulate a wide range of processes in both first and second language learning. The model is “scalable” because it can be used to simulate a large realistic lexicon, in single or multiple languages, and for various bilingual language pairs (Li, 2009; Zhao & Li, 2010, 2013). Since the model was designed to simulate language development at the vocabulary level, it includes three basic levels for the representation and organization of words: phonological content, semantic content, and the articulatory output sequence. The core of the model is a SOM (self-organizing map) that handles lexical-semantic representation. This SOM is connected to two other SOMs, one for input (auditory) phonology, and another for articulatory sequences of output phonology. During training of the network, the semantic representation, input phonology, and output phonemic sequence of a word are simultaneously presented to the network.
This process is analogous to a child hearing a word and performing analyses of its semantic, phonological, and phonemic information. On the semantic and phonological levels, DevLex-II constructs the representations from the corresponding linguistic input according to the standard SOM algorithm. On the phonemic output level, the model uses a temporal sequence learning network (based on SARDNET of James and Miikkulainen, 1995). Given the challenge that the language learner faces in articulatory control of the phonemic sequences of words, the use of a temporal sequence network allows word production to be modeled more realistically. In DevLex-II, the associative connections between maps are trained via the Hebbian learning rule. As training progresses, the weights of the associative connections between concurrently activated nodes on two maps become increasingly stronger.
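The two mechanisms DevLex-II combines can be sketched in miniature: a 1-D self-organizing map that clusters input vectors, plus Hebbian links that strengthen between concurrently active nodes on two maps. Map sizes, learning rates, and the toy "phonological"/"semantic" vectors are all invented; real DevLex-II maps are far larger.

```python
import random

# Toy sketch of a 1-D SOM plus Hebbian cross-map association.

random.seed(0)

def train_som(data, n_nodes=4, epochs=60, lr=0.3, radius=1):
    dim = len(data[0])
    weights = [[random.random() for _ in range(dim)] for _ in range(n_nodes)]
    for _ in range(epochs):
        for x in data:
            # best-matching unit = node whose weight vector is closest to x
            winner = min(range(n_nodes),
                         key=lambda i: sum((w - v) ** 2 for w, v in zip(weights[i], x)))
            for i in range(n_nodes):
                if abs(i - winner) <= radius:    # update BMU and neighbours
                    weights[i] = [w + lr * (v - w) for w, v in zip(weights[i], x)]
    return weights

def bmu(weights, x):
    return min(range(len(weights)),
               key=lambda i: sum((w - v) ** 2 for w, v in zip(weights[i], x)))

phon = [[0.0, 0.1], [0.1, 0.0], [0.9, 1.0], [1.0, 0.9]]   # toy phonological forms
sem = [[0.0, 0.0], [0.1, 0.1], [1.0, 1.0], [0.9, 0.9]]    # toy meanings
phon_map, sem_map = train_som(phon), train_som(sem)

# Hebbian associative links: co-activation of a phonological BMU and a
# semantic BMU strengthens the connection between them.
links = {}
for p_vec, s_vec in zip(phon, sem):
    pair = (bmu(phon_map, p_vec), bmu(sem_map, s_vec))
    links[pair] = links.get(pair, 0.0) + 0.5
```

After training, dissimilar forms settle on different map nodes, and the Hebbian links record which phonological and semantic nodes fire together, which is the comprehension/production pathway the model relies on.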

NEURAL NETWORK MODELS


 Neural networks are networks or circuits of neurons, of two broad types:
 Biological neural networks.
 Artificial neural networks.

Biological neural network

 McCulloch & Pitts (1943) introduced the first mathematical model of the biological neuron.


 Biological neural networks are made up of real biological neurons that are connected or functionally
related in the peripheral nervous system or the central nervous system.

Artificial Neural Network Model

 Proposed by Frank Rosenblatt in the late 1950s.


 In the mid-20th century, Frank Rosenblatt invented the Perceptron for performing certain calculations to detect input data capabilities or business intelligence.


 Perceptron is a building block of an Artificial Neural Network.
 Perceptron model is also treated as one of the best and simplest types of Artificial Neural networks.
 Performance of cognitive tasks (e.g., speech) simultaneously activates several separate cortical areas, and stimulation of a limited area may elicit any number of different responses.
 Language and speech functions can be interpreted as the coordinated activity of different assemblies within a single unitary network (cortical or neural).
 It is believed that basic functions are localized in specific locations in the cerebral cortex, whereas complex functions involve parallel processing of information in widely distributed networks that spread over cortical and sub-cortical structures.

Artificial Neural Network

 Deep learning methods arose from the concept of biological neural networks in the human brain.
 The development of ANNs was the result of an attempt to replicate the workings of the human brain.
 Artificial Neural Networks (ANNs) are algorithms based on brain function and are used to model complicated patterns and forecast issues.
 The workings of ANNs are similar to those of biological neural networks, although they are not identical.

Perceptron

We can consider it as a single-layer neural network with four main parameters, i.e., input values, weights
and Bias, net sum, and an activation function. These are as follows:

Input Nodes or Input Layer:


This is the primary component of Perceptron which accepts the initial data into the system for further
processing. Each input node contains a real numerical value.

Weight and Bias:


The weight parameter represents the strength of the connection between units and is another important component of the Perceptron. Weight is directly proportional to the strength of the associated input neuron in deciding the output. Further, bias can be considered as the intercept in a linear equation.

Activation Function:
This final component determines whether the neuron will fire or not. The activation function can be considered primarily as a step function. Types of activation functions: sign function, step function, and sigmoid function.
The choice of activation function (e.g., sign, step, or sigmoid) in a perceptron model depends on the problem, for instance on whether the learning process is slow or suffers from vanishing or exploding gradients.

Output nodes:
Final output.
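The four components above (inputs, weights and bias, net sum, activation function) can be sketched as a working perceptron. The logical-AND task, learning rate, and epoch count are invented training choices, not part of the original description.

```python
# Toy sketch of a perceptron: net sum -> step activation, trained with the
# classic perceptron learning rule.

def step(net):
    return 1 if net >= 0 else 0                    # step activation function

def predict(weights, bias, inputs):
    net = sum(w * x for w, x in zip(weights, inputs)) + bias   # net sum
    return step(net)

def train(samples, lr=0.1, epochs=20):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - predict(weights, bias, inputs)
            # nudge weights toward reducing the error
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

and_samples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train(and_samples)
```

After training, the learned weights and bias classify all four AND patterns correctly; a perceptron can learn any such linearly separable function but not, for example, XOR.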

 Neural network model consist of 3 layers:


1. The input layer (units)
2. The hidden layer (units)
3. The output layer (units)
 The units (which represent the neurons) are connected, and these connections facilitate or inhibit the activation levels of the receiving unit.
 Each unit receives input from the preceding layer and relays its signals to the next layer. Each connection has a value, i.e., a weight, and the activation level of each unit depends on the weighted sum of its inputs.
 The hidden units provide the information necessary for correctly mapping the input layer to the output layer by changing the strength of connections between units.
 During the training phase, the neural network learns to respond to a given input with a particular output by modifying the connections between the units. Each actual output is compared with the target output; if there are discrepancies, the weights are changed by the backpropagation algorithm (the error signal is relayed through the network in the opposite direction).
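The three-layer training loop just described can be sketched end to end. The layer sizes, learning rate, epoch count, and the XOR task are invented for illustration; the point is that output error is propagated backwards to adjust both layers of weights.

```python
import math, random

# Toy sketch of a 3-layer network (input, hidden, output) trained with
# backpropagation of the output error.

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

n_in, n_hid = 2, 3
w1 = [[random.uniform(-1, 1) for _ in range(n_in + 1)] for _ in range(n_hid)]  # +1 bias
w2 = [random.uniform(-1, 1) for _ in range(n_hid + 1)]

def forward(x):
    hid = [sigmoid(sum(w * v for w, v in zip(row, x + [1.0]))) for row in w1]
    out = sigmoid(sum(w * v for w, v in zip(w2, hid + [1.0])))
    return hid, out

def epoch_loss(data):
    return sum((t - forward(x)[1]) ** 2 for x, t in data)

data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0), ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]
initial_loss = epoch_loss(data)
lr = 0.5
for _ in range(2000):
    for x, target in data:
        hid, out = forward(x)
        d_out = (out - target) * out * (1 - out)       # error signal at output
        for j in range(n_hid):                         # propagate error backwards
            d_hid = d_out * w2[j] * hid[j] * (1 - hid[j])
            for i in range(n_in):
                w1[j][i] -= lr * d_hid * x[i]
            w1[j][n_in] -= lr * d_hid                  # hidden bias
        for j in range(n_hid):
            w2[j] -= lr * d_out * hid[j]
        w2[n_hid] -= lr * d_out                        # output bias
final_loss = epoch_loss(data)
```

Comparing each actual output with the target and pushing the error backwards shrinks the discrepancy over training, which a single perceptron could not achieve on XOR.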

 Features of neural network:


 Generalize the response patterns
 Highly interconnected
 Adaptable
 Can differentiate among patterns of input stimuli
 Information in the neural networks is distributed.
 The least useful neuron dies away gradually

 Neural networks involve simultaneous activation of internal nodes.


 Nodes are assumed to be numerical processors, and the output of a node is assumed to be the numerical sum of its inputs. Nodes interconnect to either facilitate or inhibit the activation of other nodes until a threshold point has been reached.
 The strength of the connection between two nodes can be given a numerical value, representing the probability that one node will co-occur with another.
 Example: ‘It is running’ (correct) vs. ‘They is run’ (wrong), because the verb form ‘is’ is highly frequent after ‘it’ and not after ‘they’. The input node ‘it’ would activate a node for ‘is’, which would activate a node for ‘-ing’.
Serial processing

 Is language processing serial (first identify sounds → combine them into words → then into sentences), or should it be seen as a number of different processes acting at the same time and at different levels?
 In serial processing, information is taken in via the senses and various features are then extracted through a series of memory stores.
 The symbolistic approach (following from linguistic descriptions, which are hierarchical in nature) also suggests serial processing of language input. The brain first decodes the input from a rule-governed syntactic viewpoint, which then accesses a semantic representation (for language comprehension). The brain uses a similar path in reverse for language production: the semantics generate the syntax, which then produces output.

Parallel Distributed Processing

 Brain contains number of neurons connected into neural networks which carry out myriad simultaneous
and complex operations.
 PDP also known as interactive activation or spreading activation.
 This theory postulates that the brain is able to carry out multiple levels of activity simultaneously, so that several processes can take place at the same time and not in a serial order, spreading activation through many parts of the brain via a highly complex system of neural networks.
E.g., consider someone such as Mahatma Gandhi. There are all sorts of facts we know about him: how he looks, his personality, father of the nation, freedom fighter.
These are all connected together in a web of information, or a neural network. Accessing any one of these pieces of information will activate all the other pieces to some degree.
The strength of these activations will depend on the strength of the information held (we may not have met him very often), but they will all be activated simultaneously, e.g., on seeing a photograph or hearing his name.
 In the same way, PDP envisages different processes being carried out in different parts of the brain
simultaneously.
 It was a neural network approach that stressed the parallel nature of neural processing, and the
distributed nature of neural representations. It provided a general mathematical framework for researchers
to operate in.

 The framework involved 8 major aspects:


1. A set of processing units.
2. A state of activation for each unit.
3. An output function for each unit.
4. A pattern of connectivity among units.
5. A propagation rule for propagating patterns of activation through the network of connections.
6. An activation rule for combining the inputs to a unit with its current activation to determine its new activation.
7. A learning rule for modifying connections based on experience.
8. An environment which provides the system with experience.
Two fundamental components of ANN
ANNs incorporate the two fundamental components of biological neural nets:
 Neurones (nodes)
 Synapses (weights)

 The synapses can be excitatory or inhibitory, and they can be either fixed or adaptive. In the latter case, the process of adjusting the synaptic weights is referred to as learning, as it is an approximation of learning and memory formation in the brain.
 ANN explains that learning is possible through differing wiring of the connections between the simple units.
 Connections are the most important aspect of the model: the structure of the network is mostly fixed, but the efficiency of the connections can be modified by learning rules, which enable the network to learn tasks.

 Artificial neural nets (ANNs) have been used to model many of the functions the brain performs – to
recognize patterns, to plan actions in robots, learn new information, and use feedback to improve
performance.
 Cognitive neuroscientists commonly focus on biologically plausible neural net models, those that are
based on the known properties of a specific set of neurons and their connections. However, artificial
neural nets often provide useful approximations to the reality.
 Neural nets can learn and store information in much the way real neurons are believed to. Since the basic
types of ANNs were worked out in detail over the last several decades, they have been applied to many
practical uses, from computer-based face recognition to predicting the stock market.

Advantage

 A neural network can perform tasks that a linear program cannot.


 When an element of the neural network fails, it can continue without any problem owing to its parallel nature.
 A neural network learns and does not need to be reprogrammed.
 It can be implemented in any application without any problem.

Limitations

 The neural network needs training to operate.


 The architecture of a neural network is different from the architecture of microprocessors and therefore needs to be emulated.
 Requires high processing time for large neural networks.
BILINGUAL MODELS
2 MODELS:

1. Bilingual Interactive Activation (BIA): A connectionist model that extends the McClelland and Rumelhart (1981) Interactive Activation (IA) model to the bilingual case.
2. Hierarchical Models : Word Association and concept mediation models
-Revised Hierarchical
-Re-Revised Hierarchical

Hierarchical Models of Bilingual Memory:


Word Association and Concept Mediation Models (Potter et al., 1984)
This model assumes that second-language (L2) words gain access to concepts only through first-language (L1) mediation.
1. The Word Association Model: The bilingual’s two languages interact at the lexical level, based on translation equivalents. The bilingual’s L2 is subordinated to the L1. Access to the general conceptual system via the L2 is not possible unless the L2 word is translated into L1.
2. The Concept Mediation Model: Assumes the bilingual’s two languages operate independently of each other. Both lexicons are connected independently to a conceptual memory store common to both languages. A bilingual can activate the meaning of a particular concept regardless of the language or whether the word is translated.

Revised Hierarchical Model (Kroll & Stewart, 1990, 1994)


 This model describes the asymmetrical link that appears to be present in bilingual language
representation
 L1 is seen as being larger than L2 since it is assumed that bilinguals would have a larger
vocabulary in their native language than in their second language.
 As the model suggests, there are two separate lexical systems in the bilingual mental
dictionary, but a shared conceptual system.
 L1 and L2 words are not only linked at the lexical level, but also at the shared conceptual
level.
 The link between L1 and concepts appears to be bidirectional and strong.
 When a person acquires a second language, especially later in life, L2 words are integrated into memory by developing a pathway attached to the first-language lexicon.
 During L2 acquisition, bilinguals learn to associate every L2 word with its L1 equivalent (e.g., they learn house and associate it with casa), thus forming a lexical-level association that remains active and strong (Kroll and Stewart, 1994).
 Stronger lexical links from L2 to L1 than in the reverse direction reflect the bilingual’s ease of translation in that direction.
 Greater latencies are observed when translating from L1 to L2 than vice versa, and when translating a categorized list versus a randomized list from L1 to L2 (Kroll and Stewart, 1994).
 A longer latency implies that translation is being modulated by conceptual memory, whereas a shorter latency implies direct translation between lexicons.
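The latency asymmetry predicted by the RHM can be sketched with two routes: a direct lexical link from L2 to L1, and a concept-mediated route from L1 to L2. All of the link costs below are invented toy numbers, chosen only to show that the concept-mediated route adds a processing step.

```python
# Toy sketch of RHM's asymmetric translation routes (all costs invented).

lexical_link = {("L2", "L1"): 80}        # strong direct lexical link (toy cost)
conceptual_step = 120                    # access to the shared conceptual store
lexicon_access = {("L1", "concept"): 60, ("concept", "L2"): 90}

def translation_latency(source, target):
    if (source, target) in lexical_link:            # direct lexical route
        return lexical_link[(source, target)]
    # conceptually mediated route: word -> concept -> word
    return (lexicon_access[("L1", "concept")]
            + conceptual_step
            + lexicon_access[("concept", "L2")])

l2_to_l1 = translation_latency("L2", "L1")   # direct, fast
l1_to_l2 = translation_latency("L1", "L2")   # concept-mediated, slower
```

The extra conceptual step makes L1-to-L2 translation slower than L2-to-L1, which is exactly the asymmetry Kroll and Stewart report.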

Re- Revised Hierarchical Model

 The main difference between this model and the RHM is that this model avoids the terms
“L1” and “L2”.
 This is done to avoid the misconception that the L1 or native language has special status and
that the L2 is subordinated to the L1.
 For this reason, R2-HM depicts the bilingual lexicon in terms of “most dominant language”
(MDL) and the “least dominant language” (LdL) lexicons.
 Words in the language, used more frequently will be responded to more quickly.
 Words in the language that is used less frequently will be responded to more slowly.
 According to R2-HM, regardless of which language is learnt first, the more active/dominant language determines which lexicon is accessed faster.
 Heredia’s Spanish-English bilinguals were faster in accessing their L2 lexicon simply because it was the language they used more frequently, i.e., their most dominant language.
 It is theoretically possible for the bilingual’s L2 to become the dominant language and the L1 to become the less dominant language.
 Regardless of which language is learned first, the more frequently used language will dominate the other.
REFERENCES:

1. Li, P., & Xu, Q. (2022). Computational modeling of bilingual language learning: Current models and future directions. Language Learning. https://doi.org/10.1111/lang.12529
2. Mildner, V. (2008). The Cognitive Neuroscience of Human Communication.
3. Bhatia, T. K., & Ritchie, W. C. (Eds.) (2004). The Handbook of Bilingualism. Blackwell Publishing.
