
László Drienkó

The present study reports results from a series of computer experiments seeking to combine word-based Largest Chunk (LCh) segmentation and Agreement Groups (AG) sequence processing. The AG model is based on groups of similar utterances that enable combinatorial mapping of novel utterances. LCh segmentation is concerned with cognitive text segmentation, i.e. with detecting word boundaries in a sequence of linguistic symbols. Our observations are based on the text of Le petit prince (The little prince) by Antoine de Saint-Exupéry in three languages: French, English, and Hungarian. The data suggest that word-based LCh segmentation is not very efficient with respect to utterance boundaries, however, it can provide useful word combinations for AG processing. Typological differences between the languages are also reflected in the results.
Based on findings on short text segmentation our work aims to draw attention to a distributional dimension of speech segmentation. Using the algorithm of Drienkó (2016), we segment CHILDES texts in four languages: English, Hungarian, Mandarin, and Spanish. The algorithm looks for subsequent largest chunks that occur at least twice in the text. Then adjacent fragments below an arbitrary length bound k are merged. By assigning various values to k, we get a picture of how precision values change as chunks grow longer. Our results suggest that looking for largest recurring chunks may be a powerful cognitive strategy cross-linguistically as well.
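The chunking-and-merging procedure described above can be sketched as follows. This is a minimal reading of the algorithm, not Drienkó's (2016) exact implementation; in particular, the rule for merging adjacent short fragments is one plausible interpretation of the length bound k.

```python
def largest_chunk_segment(text, k=3):
    """Greedy largest-chunk segmentation: at each position, take the
    longest substring starting there that occurs at least twice in the
    whole text, then merge runs of adjacent fragments shorter than k."""
    chunks = []
    i, n = 0, len(text)
    while i < n:
        best = 1  # fall back to a single symbol if nothing recurs
        length = 1
        while i + length <= n and text.count(text[i:i + length]) >= 2:
            best = length
            length += 1
        chunks.append(text[i:i + best])
        i += best
    # merge adjacent fragments that are both below the length bound k
    merged = []
    for c in chunks:
        if merged and len(merged[-1]) < k and len(c) < k:
            merged[-1] += c
        else:
            merged.append(c)
    return merged
```

With the CHILDES-style input of the study, `text` would be a single symbol sequence with spaces and punctuation removed, and varying `k` gives the precision-versus-chunk-length picture described above.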
We apply the largest-chunk segmentation algorithm to texts consisting of syllables as smallest units. The algorithm was proposed in Drienkó (2016, 2017a), where it was used for texts considered to have letters/characters as smallest units. The present study investigates whether the largest chunk segmentation strategy can result in higher precision of boundary inference when syllables are processed rather than characters. The algorithm looks for subsequent largest chunks that occur at least twice in the text, where text means a single sequence of characters, without punctuation or spaces. The results are quantified in terms of four precision metrics: Inference Precision, Alignment Precision, Redundancy, and Boundary Variability. We segment CHILDES texts in four languages: English, Hungarian, Mandarin, and Spanish. The data suggest that syllable-based segmentation enhances inference precision. Thus, our experiments (i) provide further support for the possible role of a cognitive largest-chunk segmentation strategy, and (ii) point to the syllable as a more optimal unit for segmentation than the letter/phoneme/character, (iii) in a cross-linguistic context.
The present paper summarises findings from a series of experiments using the ‘agreement groups’ method, a distributional framework for analysing linguistic data. First, the method was applied to short mother-child utterances in order to directly investigate how group formation can affect language processing. Next, it was examined how longer utterances could be processed with the help of the groups of the previous analyses, i.e. how utterance fragments compatible with ‘agreement groups’ could ‘cover’ longer utterances. After recapitulating our previous results from English, Hungarian, and Spanish analyses, we report our findings on the ‘coverage’ of English utterances.  Furthermore, we extend the coverage mechanism by employing discontinuous fragments and point out some theoretical implications, outlining a formal “continuum” model of linguistic generalisation. Convergence points with usage-based and constructionist approaches will also be discussed. Our method is “computationist” in that it emphasises the computational aspects of linguistic processing.
Keywords: agreement groups, coverage, generalisation, usage-based, constructionist, distributional
The present work aims at shedding further light on how Agreement Groups (AG) processing (e.g. Drienkó, 2020a) and Largest Chunk (LCh) segmentation (e.g. Drienkó, 2018) can be combined to model the emergence of language.
We mean to demonstrate how various structural priming phenomena can be interpreted in the Agreement Groups model (AGM) of linguistic processing (e.g. Drienkó, 2020). The AGM is a usage-based distributional cognitive framework operating with memorised groups (AGs) of similar utterances as basic processing units and a combinatorial mapping mechanism defined over them. Similarity means that utterances within an AG differ from a base utterance in exactly one word. AGs consist of 2-5-word-long utterances. For the processing of longer utterances the model applies a coverage apparatus.
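The two AGM ingredients just described, group formation over utterances differing from a base in exactly one word, and combinatorial mapping of novel utterances, can be sketched as follows. This is a simplified illustration, not the model's published implementation; it assumes a novel utterance maps onto a group whenever each of its words is attested in the same position somewhere within the group.

```python
def build_agreement_groups(utterances):
    """Group each base utterance with all same-length utterances that
    differ from it in exactly one position."""
    groups = {}
    for base in utterances:
        members = [u for u in utterances
                   if len(u) == len(base) and u != base
                   and sum(a != b for a, b in zip(base, u)) == 1]
        if members:
            groups[base] = [base] + members
    return groups

def maps_onto(group, novel):
    """Combinatorial mapping: accept a novel utterance if every one of
    its words is attested at its position somewhere in the group."""
    base = group[0]
    return (len(novel) == len(base) and
            all(any(u[i] == w for u in group)
                for i, w in enumerate(novel)))
```

For example, given the stored utterances "the cat sleeps", "the dog sleeps", and "the cat runs", the group around the first base accepts the novel combination "the dog runs", since each word is attested in its position.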
We assume two major levels: i) direct mappings onto AGs for processing holophrases, shorter utterances, or “formulaic” expressions; ii) coverage, i.e. the selection of optimal combinations of AG-compatible fragments to “grammatically” cover more complex utterances. This duality is reflected in the coverage structure of a given utterance.
The AGM is compatible with findings in cognitive linguistic processing including (over)generalisation, categorisation, a semantic/syntactic categorical less-is-more principle (Newport 1990) and its relationship to U-shaped learning (Strauss 1982), parallelisms with the dual-process model of Van Lancker Sidtis (2009), neurolinguistic processing (Bahlmann et al. 2006), and the processing of complex linguistic structures such as long-distance dependencies, crossing dependencies, or embeddings. Beyond syntax, the approach might be applicable to morphological, historical/evolutional, semantic/conceptual and analogical aspects of language.
Structural priming in the AGM can arise from the repetitive usage of previously activated AGs, or AG configurations (coverage structures, schemas). Here we propose possible analyses for a selection of priming phenomena involving thematic role assignment with prepositions and word order (prepositional/double-object dative, passive, etc.) – e.g. Chang et al. (2003), Rowland et al. (2012), Pickering et al. (2002), Goldwater et al. (2011); locative vs. agentive by-phrases – Bock & Loebell (1990); closed-class variation – Pickering & Branigan (1998); subordinate vs. main clauses – Branigan et al. (2006); relative clause attachment (high vs. low) – Scheepers (2003); object-raising/object-control – Griffin & Weinstein-Tull (2003); coerced sentences – Raffray et al. (2014).


References:

Bahlmann, J., Gunter, T. C., & Friederici, A. D. (2006). Hierarchical and linear sequence processing: An electrophysiological exploration of two different grammar types. Journal of Cognitive Neuroscience, 18(11), 1829-1842.
Bock, K., & Loebell, H. (1990). Framing sentences. Cognition, 35, 1-39.
Branigan, H. P., Pickering, M. J., McLean, J. F., & Stewart, A. J. (2006). The role of global and local syntactic structure in language production: Evidence from syntactic priming. Language and Cognitive Processes, 21, 974-1010.
Chang, F., Bock, K., and Goldberg, A. E. (2003). Can thematic roles leave traces of their places? Cognition 90, 29–49. doi: 10.1016/S0010-0277(03)00123-9
Drienkó, L. (2020). Agreement Groups and dualistic syntactic processing. In Haselow, A. and Kaltenböck, G. (eds.) Grammar and Cognition: Dualistic models of language structure and language processing, [HCP 70]. 310-354. John Benjamins P. C.
Goldwater, M. B., Tomlinson, M. T., Echols, C. H., & Love, B. C. (2011). Structural priming as structure-mapping: Children use analogies from previous utterances to guide sentence production. Cognitive Science, 35, 156-170.
Griffin, Z. M., Weinstein-Tull, J. (2003). Conceptual structure modulates structural priming in the production of complex sentences. Journal of Memory and Language. 49. 537-555
Newport, E. L. (1990). Maturational constraints on language learning. Cogn. Sci. 14, 11-28.
Pickering, M. J., & Branigan, H. P. (1998). The representation of verbs: Evidence from syntactic priming in language production. Journal of Memory and Language, 39, 633-651.
Pickering, M. J., Branigan, H. P., McLean, J. F., (2002). Constituent structure is formulated in one stage. Journal of Memory and Language, 46. 586-605.
Raffray, C. N., Pickering, M. J., Cai, Z. G., & Branigan, H. P. (2014). The production of coerced expressions: Evidence from priming. Journal of Memory and Language, 74, 91-106.
Rowland, C. F., Chang, F., Ambridge, B., Pine, J. M., & Lieven, E. V. M. (2012). The development of abstract syntax: Evidence from structural priming and the lexical boost. Cognition, 125(1), 49-63. https://doi.org/10.1016/j.cognition.2012.06.008
Scheepers, C. (2003). Syntactic priming of relative clause attachments: Persistence of structural configuration in sentence production. Cognition, 89, 179-205.
Strauss, S. (1982). Ancestral and descendent behaviours: The case of U-shaped behavioural growth. In Bever, T. G. (Ed.), Regressions in mental development: Basic phenomena and theories (191-220). Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.
Van Lancker Sidtis, D. (2009). Formulaic and novel language in a ‘dual process’ model of language competence: evidence from surveys, speech samples, and schemata. In R. L. Corrigan, E. A.  Moravcsik, H. Ouali, & K. M. Wheatley (Eds.), Formulaic Language: Volume 2. Acquisition, loss, psychological reality, functional applications (151–176). Amsterdam: Benjamins Publishing Co.
The present work investigates how syntactic processing is affected within the “agreement groups” (AG) framework, a language processing model based on forming groups of similar utterances (Drienkó, 2014, 2013, 2015, 2016, 2018), when semantic category information is available for the group formation mechanism. The AG model is claimed to be usage-based and distributional since its fundamental processing units are groups of utterances differing from a base utterance in only one word. It has been shown that AGs can account for novel utterances of mother-child speech, may facilitate categorisation (lexical/syntactic, semantic), and might serve as a basis for ‘real’ agreement relations (Drienkó 2014). For the processing of longer utterances the idea of ‘coverage’ was proposed in Drienkó (2015, 2016).

Previous work demonstrated that AG processing is initially reduced by information on lexical/syntactic categories, then improves with growing training set size, i.e. with more and more category information involved in the group formation process (Drienkó 2017). In examining the effects of semantic information on AGs we followed Pulvermüller & Knoblauch (2009) by creating two-word-long utterances out of flying-related and non-flying-related verbs and nouns, e.g. bird flies, child sleeps. The semantic results are similar to the effects found with syntactic categories. In the absence of any category information – the uninformed case, C(0,0) – there are larger groups than in the informed case, and more novel utterances can be mapped onto the AGs, cf. Table 1. However, processing capacity increases with category information, i.e. more utterances are mappable onto AGs when e.g. all words are assigned their proper semantic categories – condition C(10,10) – than when the categories of e.g. only six verbs and six nouns are known, C(6,6).

                          Uninformed  Informed
                          C(0,0)      C(6,6)  C(6,8)  C(6,10)  C(8,10)  C(10,10)
Group space               292         80      116     150      170      190
Average group size        7.7         2.1     3.1     3.95     4.47     5
Novel utterances mapped   62          5       10      12       14       24

Table 1. Results from the experiments


The regression in processing skills followed by an improving tendency is reminiscent of the U-shaped learning curve documented in various fields of cognitive development (Strauss, 1982). Gopnik & Meltzoff (1987) found a specific relation between the categorisation skills of 18-month-olds and the vocabulary spurt. This suggests that category information in the AG model (a prerequisite for generalisation) might be related to accelerated vocabulary acquisition. Newport (1990) claims that the development of certain cognitive capacities may cause a reduction in others. This developmental “less is more” feature is echoed by our results: the development of syntactic-semantic category processing may cause a reduction in the capacity to freely combine words. A rise-fall-rise developmental curve might also be linked to localisation issues. The appearance of categorically more informed, more precise AGs may condition the foundations of more “analytic”, or “propositional”, speech associated with strong left lateralisation (e.g. Sidtis, Sidtis, Dhawan, & Eidelberg, 2018). Our results also exhibit a semantic dissociation of processable utterances, analogously to what Pulvermüller & Knoblauch (2009) found for their combinatorial network with low and high activation.

References

Drienkó, L. (2013). Agreement groups coverage of mother-child language. Talk presented at the Child Language Seminar, Manchester, UK, 23-25 June 2013.
Drienkó, L. (2014). Agreement groups analysis of mother-child discourse. In Rundblad, G., Tytus, A., Knapton, O., and Tang, C. (eds.) Selected Papers from the 4th UK Cognitive Linguistics Conference. London: UK Cognitive Linguistics Association. pp. 52-67.
Drienkó, L. (2015). Discontinuous coverage of English mother-child speech. Talk presented at the Budapest Linguistics Conference, 18-20 June 2015, Budapest, Hungary.
Drienkó, L. (2016). Agreement groups coverage of English mother-child utterances for modelling linguistic generalisations. Journal of Child Language Acquisition and Development– JCLAD. Vol. 4, Issue 3, pp. 113-158.
Drienkó, L. (2017). Agreement groups processing of context-free utterances: coverage, structural precision, and category information. Talk presented at the 2nd Budapest Linguistics Conference, 1-3 June 2017, Budapest, Hungary. Online: http://www.academia.edu/20646809/Agreement_groups_processing_of_context-free_utterances_coverage_structural_precision_and_category_information
Drienkó, L. (2018). Agreement groups and dualistic syntactic processing. Talk presented at the “One Brain – Two Grammars? Examining dualistic approaches to language and cognition” international workshop, 1-2 March 2018, Rostock, Germany. https://independent.academia.edu/LaszloDrienko/Conference-Presentations
Gopnik, A., & Meltzoff, A. (1987). The development of categorization in the second year and its relations to other cognitive and linguistic developments. Child Development, 58, 1523–1531.
Newport, E. L. (1990). Maturational constraints on language learning. Cognitive Science, 14,  11-28.
Pulvermüller, F., & Knoblauch, A. (2009). Discrete combinatorial circuits emerging in neural networks: A mechanism for rules of grammar in the human brain? Neural Networks, 22(2), 161-172.
Sidtis, J.J., Sidtis, D.V., Dhawan, V., & Eidelberg, D. (2018). Switching Language Modes: Complementary Brain Patterns for Formulaic and Propositional Language. Brain connectivity. 189-196.
Strauss, S. (1982). Ancestral and descendent behaviours: The case of U-shaped behavioural growth. In Bever, T. G. (Ed.), Regressions in mental development: Basic phenomena and theories (191-220). Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.
The present work aims to contribute to the argumentation for language and cognition to be viewed in a dynamic systems context (e.g. Van Gelder, 1998; De Bot et al. 2007) by identifying some dynamic properties of linguistic generalisation, as connected to the Agreement Groups (AG) framework of Drienkó (2014, 2016, 2018) in particular. Linguistic processing in the AG model is based on forming groups of similar utterances and combinatorial generalisation. In the experiments that we report we assume a simple learning process: the learner receives random utterances (2-word combinations) from the environment and stores them in memory. Equipped with a capacity to categorise and generalise, at a certain point the learner begins to be able to process (understand and/or produce) novel combinations. In the meantime, the learner acquires vocabulary and realises that words fall into semantic categories besides their (syntactic) categories as specified by their positions in utterances. Viewed from our present perspective, the primary purpose of learning is to gain access to each point/region of the 'utterance space' that is accessible to the speakers in the environment. Initially these points become available via memorisation; later on, however, generalisation has to come into play. At each stage in our experiments, the learner receives and stores a constant number of random word combinations represented as points in the 'utterance space', a hypothetical coordinate system with individual words along the axes. Plotting the number of accessible points at each stage yields a learning curve characterising the temporal dimension of the learning process. (Note that the actual words and their order along the axes are immaterial. The axes correspond to utterance positions.) For simulating generalisation we use a simple analogical inference rule 'if (AB and AC) and DB then DC' that is encoded in AGs. Our basic finding is a logistic (S-shaped) curve with a transitional region.

Initially, there are only memorised combinations. When a 'critical' number of memorised word combinations is reached, generalisation begins. This generalisation phase lasts until another 'critical' number of memorised utterances is reached. At this value all points of the utterance space are accessible to the learner and further memorisation is impractical. The transition between the 'no generalisation' and 'full access' phases is reminiscent of phase transition phenomena observed in other areas of language processing (e.g. Spivey et al., 2009). When semantic similarity information is simulated with a co-occurrence threshold parameter and is taken into consideration by the generalisation mechanism, a regression in generalisation capacity is reflected in the development curve. As the parameter approaches 1 from 0, generalisation becomes more and more disabled. When we reverse the learning process by systematically deleting memorised combinations we observe hysteresis (cf. e.g. Tuller et al. 1994). Since there can be several ways to generalise to a novel word combination, an AG system may also exhibit plasticity effects (cf. e.g. Bates, 1999).

References

Bates, E. (1999). Plasticity, localization and language development. In S. Broman & J. M. Fletcher (eds.) The changing nervous system: Neurobehavioral consequences of early brain disorders. New York: Oxford University Press, 214-253.
De Bot, K., Lowie, W., & Verspoor, M. (2007). A Dynamic Systems Theory approach to second language acquisition.
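The analogical rule 'if (AB and AC) and DB then DC' can be simulated directly: starting from a set of memorised two-word combinations, new points of the utterance space are inferred repeatedly until a fixed point is reached. The sketch below is our own illustration of the rule, not the paper's code; counting accessible points while memorised pairs accumulate is what produces the S-shaped curve described above.

```python
def accessible(pairs):
    """Close a set of two-word combinations under the analogical rule
    'if (AB and AC) and DB then DC'."""
    known = set(pairs)
    while True:
        # map each first word to the set of second words it occurs with
        seconds = {}
        for a, b in known:
            seconds.setdefault(a, set()).add(b)
        new = set()
        for a, bs in seconds.items():
            for d, ds in seconds.items():
                if d != a and bs & ds:  # A and D share some second word B
                    new |= {(d, c) for c in bs - ds}  # infer DC from AC
        if not new:
            return known
        known |= new
```

For instance, from the memorised pairs "bird flies", "bird sings", and "plane flies", the rule makes the novel combination "plane sings" accessible.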
We would like to draw attention to the dualistic characteristics of the Agreement Groups (AG) model of linguistic processing, a distributional approach based on the cognitive mechanisms of storing groups of similar utterances in memory, and mechanisms for mapping utterances onto such groups. Agreement groups, i.e. groups of utterances differing from a base utterance in only one word, provide a means for processing novel utterances on the basis of utterances already encountered. Analysing 2-5 word long English mother-child utterances, Drienkó (2012a, 2014) found that at any stage of linguistic development the agreement groups extracted from the body of utterances encountered up to that point can account for a certain proportion of the utterances (novel and non-novel) of the stage in question. Similar results were reported for Hungarian and Spanish (Drienkó 2013a). The maximum proportion of English utterances that were compatible with AGs was 41%. For the processing of longer sequences the notion of ‘coverage’ was introduced in Drienkó (2013b, 2015). The basic idea is to break down utterances into shorter (2-5 word long) fragments which are compatible with AGs. Fragments then can “cover” the longer utterance. The author found 78% and 83% average coverage values for the “continuous” and the “discontinuous” case, respectively.
In terms of linguistic modelling the results suggest at least two basic levels of processing. The first level corresponds to direct mappings onto AGs. Shorter utterances, holophrases, and formulaic expressions can be handled relatively readily here. The second level requires more computational effort, since first legal (i.e. AG-compatible) fragments have to be found (a Level 1 operation), then an optimal combination of fragments must be selected in order to effect grammaticality. This duality is reflected in the “coverage structure” of utterances. Drienkó (2016) proposes a “continuum” model of linguistic generalisation based on the operations and generalisation objects associated with the two major levels and their possible sublevels. The model may have parallelisms with the dual-process model of Van Lancker Sidtis (2008), based on holistic and analytic levels of processing for formulaic and novel utterances, respectively, and schemata representing the interplay of the two levels.
The AG approach may accord with neurological findings. For instance, Bahlmann & Friederici (2006) documented different ERP components for the processing of sequences of different structural types, namely, (AB)^n sequences from a Finite State Grammar and A^nB^n sequences from a Phrase Structure Grammar. In the AG framework this dichotomy could be explained in terms of ’continuous’ and/or ’discontinuous’ coverage: as for (AB)^n utterances, continuous AB fragments can cover any sequence, whereas A^nB^n sequences require n-1 discontinuous fragments. Discontinuity, in turn, involves a computationally more complex process in the AG model. Cf. Table 1 and Table 2; note also the difference in coverage effectiveness, 78% vs. 83%, for the experiment mentioned above.
Table 1
Table 2
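The contrast between the two sequence types can be made concrete. In this hypothetical sketch (our notation, not the paper's), a continuous AB fragment tiles an (AB)^n sequence, while an A^nB^n sequence is covered by pairing each A with its mirror-image B, so that all but the innermost pair are discontinuous.

```python
def tiles(seq, frag):
    """True if repeating the continuous fragment exactly covers seq."""
    k = len(frag)
    return len(seq) % k == 0 and all(
        tuple(seq[i:i + k]) == frag for i in range(0, len(seq), k))

def mirror_cover(seq):
    """Cover an A^n B^n sequence with index pairs (i, len-1-i); only the
    innermost pair is adjacent, the other n-1 fragments are
    discontinuous."""
    n = len(seq) // 2
    if list(seq[:n]) != ["A"] * n or list(seq[n:]) != ["B"] * n:
        return None
    return [(i, len(seq) - 1 - i) for i in range(n)]
```

So AABB, an A^2B^2 sequence, is covered by the pairs (0, 3) and (1, 2), of which (0, 3) is discontinuous, matching the n-1 count above.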
We test the coverage potential of the “agreement groups” (AG) approach, a language processing model based on forming groups of similar utterances, on corpora generated by a context-free grammar. As natural language corpora do not allow generic immediate methods of comparing structural correctness, or “precision”, with the output of language processing models, the advantage of formal grammars is that the language generating mechanism is well-defined, i.e. the correct structure for any utterance is known. We compare the “coverage structure” of utterances, as output by our model, to their constituent structure. We are interested in what insights the AG model can offer about language learning and processing. The experiments focus on the relationship between corpus size and coverage, as well as structural precision. The role of explicit information about syntactic categories is also considered. We find that i) the AG framework may be useful in modelling general developmental processes: larger corpora effect higher coverage and precision, and comprehension precedes production; ii) information on syntactic categories hinders processing, i.e. “less is more”; iii) agreement groups can code structural information.
In the present study we demonstrate how the processing of mother-child utterances can be enhanced by allowing discontinuity. In particular, we employ the Agreement Groups framework as proposed in Drienkó (2014, 2013). The data suggest that higher coverage can be achieved by exploiting discontinuous utterance fragments. The notion of “coverage structure” and its possible relationship to constituent structure is also discussed.
We propose an algorithm for inferring boundaries of utterance fragments in relatively small unsegmented texts. The algorithm looks for subsequent largest chunks that occur at least twice in the text. Then adjacent fragments below an arbitrary length bound are merged. In our pilot experiment three types of English text were segmented: mother-child language from the CHILDES database, excerpts from Gulliver’s Travels by Jonathan Swift, and Now We Are Six, a children’s poem by A. A. Milne. The results are interpreted in terms of four precision metrics: Inference Precision, Alignment Precision, Redundancy, and Boundary Variability. We find that i) Inference Precision grows with merge-length, whereas Alignment Precision decreases – i.e. the longer a segment is, the more probable that its two boundaries are correct; ii) Redundancy and Boundary Variability also decrease with the merge-length bound – i.e. the fewer boundaries we insert, the closer they are to the ideal boundaries.
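The exact definitions of the four metrics are given in the underlying paper; as a hedged illustration only, Inference Precision can plausibly be read as the share of inferred boundary positions that coincide with true boundaries. The helper below is our reconstruction under that assumption, not necessarily the paper's definition.

```python
def inference_precision(inferred, true):
    """Fraction of inferred boundary positions that coincide with true
    boundaries (a plausible reconstruction, not the paper's exact
    formula)."""
    inferred, true = set(inferred), set(true)
    return len(inferred & true) / len(inferred) if inferred else 0.0
```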
The present paper reports the findings from an experiment applying the ‘agreement groups’ method, a distributional framework for analysing linguistic data, to Hungarian utterances. Originally, the method was applied to short mother-child utterances in order to directly investigate how group formation can affect language processing. The ‘coverage’ framework was designed for examining how longer utterances could be processed with the help of the groups of the previous analyses, i.e. how utterance fragments compatible with ‘agreement groups’ could ‘cover’ longer utterances. After outlining the method and recapitulating previous results from English, Hungarian, and Spanish analyses, we present our findings on the ‘coverage’ of Hungarian utterances.
The present paper seeks to highlight qualitative congruencies between empirical data from behavioural experiments on linguistic structural priming and insights obtained from the Agreement Groups (AG) approach, a cognitive, usage-based, distributional framework for modelling linguistic processing. Specifically, we demonstrate that a wide variety of experimental observations can be given theoretically consistent interpretation when the AG model is situated in a cognitive circuitry-architecture of nodes and connections. With the AG method, structural priming naturally emerges as a consequence of structural similarity via repeated activation of basic structural units, i.e. AGs. The analysed phenomena include structural priming of particular linguistic constructions, lexical/semantic facilitation (boost), cross-linguistic priming, anomalous utterances, and developmental aspects of structural priming. We also point out experimentally testable issues that come up along the hypothetical discussions.
The Discrete Combinatorial Neuronal Assemblies (DCNA) model as proposed in Pulvermüller & Knoblauch (2009) is a network of sequence detectors capable of processing linguistic sequences by simulating brain connectivity. The emergent circuits in the network may correspond to the neural substrate of linguistic “rules”. The Agreement Groups (AG) model of linguistic processing outlined in Drienkó (2014) is a usage-based distributional approach based on cognitive mechanisms of storing groups of similar utterances in memory, and mechanisms for mapping utterances onto such groups. The present work reports parallelisms between the two models. We show that mapping word sequences onto AGs may be analogous to processing in DCNA circuits. Analogies in semantic dissociation suggest a parallelism between explicit semantic category information for AGs and activation thresholds for DCNAs. The results also converge with language acquisition data.
The present analysis is based on forming groups of similar speech fragments, and the assumption that shorter speech fragments are contained in longer ones. We use punctuation as a cue for segmenting sentences into smaller units. The segmented text will then be analysed by examining what proportion of the test set units can be “covered” by the “agreement groups” of the training corpus. We find that, in accord with previous findings, discontinuous processing yields better coverage values than continuous processing. The results may also point to the role of punctuation in text segmentation and processing.