SCIENTIFIC CHALLENGES
Almost all levels of language knowledge and processing, from phonology to syntax and semantics, are known to be affected to varying degrees by knowledge of word structure. A better understanding of the human strategies involved in learning and processing word structure lies at the heart of our comprehension of the basic mechanisms serving both language and cognition, and is key to addressing three fundamental challenges for the study of the physiology of grammar, as detailed below.
Lexicon & Grammar
According to dual-route approaches to word structure, recognition of a morphologically complex input word involves two interlocking steps: i) preliminary full-form access to the lexicon; ii) optional morpheme-based access to the sub-word constituents of the input word, resulting from the application of combinatorial rules responsible for on-line word segmentation. Such a view, recently challenged by several scholars, rests on the hypothesis of a direct correspondence between principles of grammar organisation (lexicon vs rules), processing correlates (storage vs computation) and localisation of the cortical areas functionally involved in word processing. Other theoretical models have put forward a more nuanced indirect correspondence hypothesis. For instance, in the Word-and-Paradigm tradition, fully inflected forms are associatively related through possibly recursive paradigmatic structures defining entailment relations between forms. Any serious appraisal of such an indirect correspondence requires extensive empirical testing on a wide array of morphologically rich languages of the sort spoken in Europe, and is likely to exceed the limits of both human intuition and box-and-arrow models of cognition. The increasing availability of multilingual data sets and computer models of language learning and processing will have much to say in this respect in the near future.
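To make the dual-route contrast concrete, the sketch below implements both steps over a toy lexicon: a whole-word lookup table and a crude stem-plus-suffix segmenter. The word lists are invented for illustration and stand in for no particular psycholinguistic resource.

```python
# Minimal sketch of dual-route recognition over a toy lexicon.
# FULL_FORMS, STEMS and SUFFIXES are invented illustrative sets.
FULL_FORMS = {"walked", "walk", "ran", "run", "books", "book"}
STEMS = {"walk", "run", "book", "talk"}
SUFFIXES = {"ed", "s", "ing"}

def recognise(word):
    """Route 1: whole-word lexical access; Route 2: rule-based segmentation."""
    if word in FULL_FORMS:                       # full-form access
        return ("whole-word", word)
    for i in range(1, len(word)):                # on-line word segmentation
        stem, suffix = word[:i], word[i:]
        if stem in STEMS and suffix in SUFFIXES:
            return ("decomposed", (stem, suffix))
    return ("unrecognised", word)

print(recognise("walked"))   # ('whole-word', 'walked')
print(recognise("talking"))  # ('decomposed', ('talk', 'ing'))
```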
Another fundamental open issue is how theoretical models relate to neurobiologically grounded models of word structure. Recent evidence of automatic sublexical segmentation of monomorphemic words triggered by pseudo-inflectional endings lends support to a less deterministic and modular view of the interaction between stored word knowledge and on-line processing, based on simultaneously activated patterns of cortical connectivity reflecting (possibly redundant) distributional regularities in the input at the phonological, morphosyntactic and morphosemantic levels. At the same time, this evidence argues for a more complex and differentiated neurobiological substrate for human language than current models are ready to acknowledge, suggesting that brain areas devoted to language processing maximise the opportunity of using both general and specific information simultaneously, rather than maximise processing efficiency and economy of storage.
Such a dynamic view of the brain language processor makes contact with the human ability to retain symbolic sequences in Short Term Memory. Elements that are frequently sequenced in the subject’s input are stored in Long Term Memory as single chunks, and accessed and executed in Short Term Memory as though they had no internal structure. Such an interaction between Short Term and Long Term Memory structures points to a profound continuity between word repetition/learning and other levels of grammatical processing in language.
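A minimal illustration of this chunking idea, under the assumption that any symbol pair recurring above an arbitrary frequency threshold is merged into a single Long Term Memory chunk, might look as follows; the corpus and threshold are invented.

```python
from collections import Counter

def learn_chunks(sequences, threshold=2):
    """Store frequently recurring symbol pairs as single LTM chunks."""
    pairs = Counter()
    for seq in sequences:
        pairs.update(zip(seq, seq[1:]))
    return {p for p, n in pairs.items() if n >= threshold}

def rehearse(seq, chunks):
    """Re-encode a sequence in STM, treating learned chunks as single units."""
    out, i = [], 0
    while i < len(seq):
        if i + 1 < len(seq) and (seq[i], seq[i + 1]) in chunks:
            out.append(seq[i] + seq[i + 1])   # one STM slot for the chunk
            i += 2
        else:
            out.append(seq[i])
            i += 1
    return out

corpus = [list("banana"), list("bandana"), list("ban")]
chunks = learn_chunks(corpus)
print(rehearse(list("banana"), chunks))   # ['ba', 'na', 'na']: 3 slots, not 6
```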
Word Knowledge & Word Use
People are known to understand, memorise and parse words in a context-sensitive and opportunistic way. Not only can speakers take advantage of token-based information such as the frequency of individual, holistically stored words, but they are also able to organise words into paradigmatic structures (or word families) whose overall size and frequency are important determinants of ease of lexical access and interpretation. Quantitative and analogy-based approaches to word interpretation lend support to this view, capitalising on stable correlation patterns linking the distributional entrenchment of lexical units with their productivity, internal structure and ease of interpretation.
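As a rough illustration of such paradigmatic measures, the sketch below computes morphological family size (number of family members) and family frequency (summed token frequency) over an invented frequency list, using a deliberately crude substring criterion for family membership.

```python
# Toy illustration of word-family measures; the token frequencies are
# invented and the membership test is a crude substring heuristic.
TOKEN_FREQ = {"build": 120, "builder": 40, "building": 300,
              "buildings": 90, "rebuild": 15, "quark": 3}

def family(stem, freqs):
    """Collect all listed forms containing the stem."""
    return {w: f for w, f in freqs.items() if stem in w}

def family_measures(stem, freqs):
    fam = family(stem, freqs)
    size = len(fam)              # family size: number of members
    freq = sum(fam.values())     # family frequency: summed token counts
    return size, freq

print(family_measures("build", TOKEN_FREQ))   # (5, 565)
```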
These aspects accord with well-established psycholinguistic evidence that language comprehension is highly incremental, with readers and listeners continuously updating their interpretation of an utterance as they parse it. Much recent research suggests that comprehension can also be highly predictive, provided the linguistic and non-linguistic context supports such predictions. Prediction can further compensate for noisy or ambiguous input, and may explain the human advantage in parsing morphologically irregular forms (where morphosyntactic and morpholexical features are marked through extended exponence) over morphologically regular forms (where a morphological exponent systematically follows a full stem).
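One simple way to make the notion of predictability operational is surprisal, the negative log probability of a word given its context. The sketch below estimates bigram surprisal from a toy corpus; both the corpus and the bigram assumption are illustrative simplifications of what a realistic predictive model would use.

```python
import math
from collections import Counter

# Toy corpus; a real predictive model would use far richer context.
corpus = "the dog chased the cat the cat chased the dog".split()

bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus[:-1])

def surprisal(prev, word):
    """Surprisal in bits: -log2 P(word | prev) under the bigram model."""
    p = bigrams[(prev, word)] / unigrams[prev] if unigrams[prev] else 0.0
    return -math.log2(p) if p > 0 else float("inf")

sentence = "the dog chased the cat".split()
for prev, word in zip(sentence, sentence[1:]):
    print(f"{word!r} after {prev!r}: {surprisal(prev, word):.2f} bits")
```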
A parsimonious explanation of the anticipatory mechanisms of language comprehension is that prediction recruits components of the language production system. There is indirect empirical evidence pointing in this direction: listeners activate the articulatory cortical areas appropriate to tongue and lips while listening to speech, and brain areas associated with production are engaged during aspects of comprehension ranging from phonology to narrative structure. This is in keeping with evidence that mirror neurons in monkeys are activated both by perceptual predictions and by perceived actions, but may also be understood as involving context-sensitive language “emulators”. In turn, anticipatory mechanisms of language comprehension may be closely related to mechanisms for rehearsing Short Term Memory content, such as Baddeley’s phonological loop.
All of this points to a converging trend between computational and cognitive lines of scientific inquiry, supporting the view that grammar and lexical competence are acquired through minimal steps, shaped by performance-driven factors such as memory limitations, frequency-based sensitivity and modality-specific constraints, ultimately blurring the dichotomy between language knowledge and language usage.
Words & Meanings
By exchanging words in ecological settings, we share, assess, modify, extend and structure our “semantic memory”. Yet the nature and content of such memory, the principles of its associative organisation and internal structure, and the developmental role of the dynamic interaction between linguistic form, meaning and sensory experience are among the most controversial issues in the current linguistic and neurocognitive debate.
Suggestions in the literature range from relatively abstract representations, including hierarchical semantic networks and lexical conceptual structures, to more concrete perceptual- or motor-based representations. Each of these approaches faces difficulties. Abstract representations elude the issue of symbol interpretation by severing meaning from our system of experiences of the external world. Grounded representations, on the other hand, must contend with the fact that linguistic units can combine and behave distributionally in ways that are not strictly predictable from their semantic properties: inferences, sense extensions, metaphors and processes of concept composition and coercion show that grounded sensory-motor knowledge does not suffice to account for our ability to extract meaning from language. Intermediate hypotheses need to be entertained and empirically assessed, casting meaning as abstract, schematic representations that are based on linguistically articulated, structured knowledge and word co-occurrences in large text samples, but are nonetheless embodied in human perceptual and motor systems. Researchers working in a neurocomputational framework have recently addressed issues of semantic knowledge arising from patterns of combinatorial information using more brain-like neural network simulations.
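As a schematic example of meaning representations derived from word co-occurrences in text, the sketch below builds co-occurrence vectors within a two-word window over a toy corpus and compares them by cosine similarity; the corpus and window size are arbitrary illustrative choices.

```python
import math
from collections import Counter, defaultdict

# Toy corpus; distributional models are normally estimated from
# large text samples rather than a single invented sentence pair.
corpus = ("the cat drinks milk the dog drinks water "
          "the cat chases the dog").split()

# Count co-occurrences within a +/-2 word window around each token.
vectors = defaultdict(Counter)
for i, word in enumerate(corpus):
    for j in range(max(0, i - 2), min(len(corpus), i + 3)):
        if j != i:
            vectors[word][corpus[j]] += 1

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv)

print(cosine(vectors["cat"], vectors["dog"]))    # similar contexts
print(cosine(vectors["cat"], vectors["milk"]))   # less similar contexts
```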
The interpretation of Noun-Noun compounds seems to require integration of the meaning representations associated with the two constituent nouns, independently accessed from the lexicon. However, it has recently been shown that access to conceptual representations is considerably more dynamic and context-sensitive: the whole construction appears to prompt a process of selective activation of contextually relevant semantic properties. From a computational standpoint, constraint-satisfaction approaches have made the interesting suggestion that the interpretation of a complex construction makes use of pre-compiled, schematised information, memorised in the mental lexicon and applied probabilistically.
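The following sketch caricatures such a probabilistic constraint-satisfaction account: each constituent noun contributes a preference distribution over a small, invented inventory of semantic relations, and the preferred interpretation is the relation that best satisfies both preferences. All relation labels and probabilities are placeholders, not empirical estimates.

```python
# Invented relation preferences; a real model would estimate these
# from corpus counts or behavioural norms.
RELATION_PRIORS = {   # P(relation | head noun)
    "knife": {"INSTRUMENT": 0.6, "MADE_OF": 0.3, "LOCATED_AT": 0.1},
}
MODIFIER_PRIORS = {   # P(relation | modifier noun)
    "kitchen": {"LOCATED_AT": 0.7, "INSTRUMENT": 0.2, "MADE_OF": 0.1},
    "steel": {"MADE_OF": 0.8, "INSTRUMENT": 0.15, "LOCATED_AT": 0.05},
}

def interpret(modifier, head):
    """Rank relations by the product of head and modifier preferences."""
    scores = {r: RELATION_PRIORS[head][r] * MODIFIER_PRIORS[modifier][r]
              for r in RELATION_PRIORS[head]}
    total = sum(scores.values())
    return sorted(((r, s / total) for r, s in scores.items()),
                  key=lambda x: -x[1])

print(interpret("kitchen", "knife"))  # INSTRUMENT wins, LOCATED_AT competes
print(interpret("steel", "knife"))    # MADE_OF clearly dominates
```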
These aspects raise the issue of the interactive negotiation of referential and intentional word meanings in the process of learning word usages in daily communicative exchanges. Lexical pragmatics investigates the processes by which linguistically specified (i.e. literal) word meanings are modified in use on the basis of factors related to pragmatic competence, such as knowledge of the specific communicative context, knowledge about the co-conversant(s), knowledge about the specific ongoing task, and general knowledge of the world. The mediation of all these factors is key to understanding the ontogenesis of word meaning and its creative usage in daily conversation.
Collaboration in NetWordS unfolds through the following activities:
♦ discuss and develop consensual word representations in context
♦ establish common experimental protocols and suggest novel ones
♦ take stock of and integrate multilingual evidence based on the large array of European languages spoken and investigated in the Network
♦ transfer best practice in use of new computational and statistical techniques for lexicon modeling
♦ share experimental data, software and equipment
♦ facilitate, through community building, the development of optimum cross-disciplinary and cross-linguistic research strategies
♦ prompt and extend collaboration between partners
♦ link European activities with the wider community world-wide.