From Mind to Brain
Principal investigators: T. Scheer and Fabien Mathy (BCL)
Funding: IDEX UCA Jedi (project stemming from the 2016 Appel à Manifestation d'Intérêt [call for expressions of interest])
Duration: 4 years
1. Phonologists don’t know what a phonological phenomenon is
In current phonological theory, a major problem is that there is no agreement on the set of phenomena that are phonological in kind and hence constitute the input to theory-building. Perspectives diverge significantly even on a simple phenomenon such as the alternation between [k] and [s] in the word pair electri[k] – electri[s]-ity. Some consider it phonological in kind (i.e. there is a single underlier, and phonological computation turns k into s upon each production), while others place it outside the purview of phonology: there are two distinct lexical entries, either of whole words (electric and electricity) or of allomorphs (electri[k]- and electri[s]-), which are then selected in the appropriate morphological context.
As a result, the Popperian competition among theories is biased: a theory that accounts for the k-s alternation in the phonology cannot be compared to a theory on which k-s has nothing to do with phonology. The set of things to be explained is not the same, and diverges wildly at the scale of a language, or of phonology as a whole. Before theories can compete, the question of what counts as a truly phonological phenomenon thus needs to be addressed.
That is, phonologists are currently in the position of, say, geologists who aim to build a theory of the characteristics of stone but are unable to distinguish stone from plastic. They collect samples on which they build their theory, some of which contain 10% plastic, others 30%, still others 60%, and so on. Unsurprisingly, competing theories built on these wildly varying sets of empirical material diverge significantly – not because of the theorizing itself but because of the plastic.
The issue has been identified since at least the early 1970s and produced a sizeable body of literature at the time (evaluation metrics), but nothing could ever be concluded: there is no pre-theoretical criterion that would allow us to decide whether a given alternation is stone or plastic. Since the early 1980s, phonologists have stopped investigating this problem, and the regular sources of evidence (elicitation, phonetics, corpora / big data, typology, conceptual arguments) stand little chance of offering any advances.
2. How to tell plastic from stone
We believe that telling plastic from stone may be achieved by looking at the presence or absence of relevant neuro-physiological correlates of phonological activity (rather than examining indirect reflections thereof in the production or intuition of speakers).
Based on electrophysiological evidence from a speech production task, Sahin et al. (2009) showed that at least three linguistically distinct processes can be separated in time and space in the brain: first, pieces (morphemes, words) are retrieved from long-term memory (lexical access); then they are concatenated (morpho-syntax); finally, they are assigned a pronunciation (phonology). The study was performed using intracranial electrophysiology, i.e. by recording local field potentials from neuron populations through electrodes implanted (for clinical evaluation) in language-related brain regions of patients with epilepsy; the patients silently pronounced words or produced inflected versions of non-inflected stimuli.
Sahin et al.’s (2009) results revealed that lexical access produced a significant neural response (ERP, Event-Related Potential) at ~200 ms, concatenative activity provoked an ERP at ~320 ms, and phonological computation induced an ERP at ~450 ms. Sahin et al. thus provide an instrument able to detect the presence of phonological computation in language production (although this was not necessarily their goal). Whether or not the pronunciation of contentious cases such as electricity involves phonological computation can now be tested: we predict that if neural activity around 450 ms is absent in the relevant experimental condition, the production of electricity does not involve phonological computation.
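The logic of this test can be sketched computationally: an ERP is obtained by averaging the EEG signal over trials, time-locked to stimulus onset, and the prediction amounts to asking whether the averaged signal shows a reliable deflection in a window around 450 ms. The sketch below uses synthetic data and illustrative names (`erp`, `window_mean` are our own, not part of any analysis pipeline mentioned here); it illustrates the principle, not the actual protocol.

```python
import numpy as np

def erp(epochs):
    """Average single-trial EEG epochs (trials x samples) into an ERP."""
    return epochs.mean(axis=0)

def window_mean(signal, times, t_start, t_end):
    """Mean amplitude of a signal within a time window (in seconds)."""
    mask = (times >= t_start) & (times <= t_end)
    return signal[mask].mean()

# Synthetic example: 100 trials of 1 s sampled at 500 Hz, with
# (condition A) or without (condition B) a component near 450 ms.
rng = np.random.default_rng(0)
fs, n_trials = 500, 100
times = np.arange(fs) / fs
component = 2.0 * np.exp(-((times - 0.45) ** 2) / (2 * 0.03 ** 2))
cond_a = rng.normal(0, 1, (n_trials, fs)) + component  # component present
cond_b = rng.normal(0, 1, (n_trials, fs))              # component absent

a = window_mean(erp(cond_a), times, 0.40, 0.50)
b = window_mean(erp(cond_b), times, 0.40, 0.50)
# a is substantially larger than b: the 450 ms component is detected
# in condition A and absent in condition B.
```

In the real setting, condition A would correspond to a phenomenon that is uncontroversially phonological, and condition B to a contentious case such as velar softening, with statistical testing replacing the simple window comparison.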
We have adapted Sahin et al.’s experimental setup to non-invasive scalp-level EEG, and a pilot study shows that their results can be reproduced with this modality. We will now run a number of contentious phenomena from a variety of languages (including the English k-s alternation known as velar softening) through our protocol in order to adjudicate their status. Particularly exotic and contentious phenomena are identified by a PhD project on so-called Crazy Rules.
In all our experiments, production-based scalp-level EEG (with MRI-supported source localization) will be flanked by behavioural data based on the same protocol and stimuli (vocal key-based reaction times), as well as by a study of the anatomical underpinnings (dMRI-based diffusion of information in the brain, subject-specific variation of the white matter).
3. Information compression, storage and learning
Sahin et al. also controlled for two additional factors: word frequency and word length. Their results show that upon lexical access (~200 ms), length is irrelevant but frequency plays a role: less frequent words produce a higher-amplitude signal. When phonological computation is performed (~450 ms), the reverse pattern is observed: frequency is irrelevant but longer words produce a higher-amplitude signal. In both cases, extra labour impacts the amount of energy used rather than the timeline, which is stable across all conditions.
Word length thus appears to be neutralized upon lexical access (all words behave alike no matter how long they are), but does impact phonological computation once a word is loaded into active memory. One way of interpreting this situation is through information compression: words are not stored as such in long-term memory but in compressed form, and are decompressed when loaded into active memory.
A well-known mechanism of information compression is chunking, a cognitive mechanism of memory optimization that shapes aspects of our cognitive system by extracting statistical regularities present in our environment, which helps re-encode information. Chunking is a fundamental aspect of our everyday cognitive life since it allows us to drastically increase our memory capacity and to accelerate processing. The concept of a chunk was originally developed to deal with the fact that the amount of information that can be stored varies considerably depending on the nature of the material. Since Miller (1956), short-term memory capacity is usually thought to be limited to about 7 chunks of elements such as letters or words, capturing the idea that an individual can retain about 7 "things". For instance, "USA" is a chunk because it is made of three non-independent letters. Since about 7 slots are available in short-term memory, chunking U-S-A leaves 6 free slots (out of 7) instead of 4, as would be the case had the three letters been stored separately. Chunking is therefore an efficient cognitive strategy to overcome capacity limitations in memory.
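The slot arithmetic above can be made concrete with a toy model. The chunk inventory and the `slots_used` function below are illustrative assumptions, not Miller's actual model: a known chunk fills one short-term-memory slot, an unmatched letter fills one slot.

```python
# Toy model of chunking as compression of short-term-memory slots.
KNOWN_CHUNKS = {"USA", "FBI", "CIA"}  # hypothetical chunk inventory
CAPACITY = 7                          # Miller's (1956) approximate limit

def slots_used(letters, chunks=KNOWN_CHUNKS):
    """Greedy left-to-right parse: a known chunk occupies one slot,
    an unmatched letter occupies one slot."""
    used, i = 0, 0
    while i < len(letters):
        for c in sorted(chunks, key=len, reverse=True):
            if letters.startswith(c, i):
                i += len(c)
                break
        else:
            i += 1
        used += 1
    return used

print(slots_used("USA"))             # 1: one chunk instead of 3 letters
print(CAPACITY - slots_used("USA"))  # 6 free slots instead of 4
print(slots_used("XYZ"))             # 3: no known chunk, one slot per letter
```

The same logic scales to the lexical case discussed above: a word stored as a few chunks occupies less storage than its segment-by-segment representation, which is one way to read the length-neutralization effect at lexical access.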
We will thus control for word length and frequency in all experiments and pursue the hypothesis that the storage of lexical material involves information compression. Since information compression occurs upon perception, we will also conduct artificial-language experiments pursuing the question of whether phonological regularities that occur in unknown words are matched with prior knowledge encoded in the computational system of phonology (application of an existing process to a new word), or are rather extracted from the environment independently and stored (lexicalized) as such. That is, if there is a phonological operation upon production that turns stored k into surface s in electri[s]-ity (which is the first thing we want to find out), will speakers use this process to store and later produce new words when they are exposed to, say, nonce denic – denic‑ity?