
Niels O. Schiller

Professor of Psycho- and Neurolinguistics

Projects

Representation and processing of pitch in tonal languages

PhD student: Jessie Nixon
Supervisors: Yiya Chen and Niels O. Schiller 

I work within the ERC project The representation and processing of pitch in tonal languages. My research focuses on how native speakers of Beijing Mandarin store lexical tone in the brain and access it during speech. Beijing Mandarin has four lexical tones. These tones distinguish between words in the same way that 'phonemes' (approximately letter-sized units of sound) do in other languages (e.g. cat versus bat in English; mao1 'cat' versus mao2 'reed' in Chinese).

I am investigating how abstract or specific our representations of speech sounds are. When we retrieve words from memory, how much detail is retained in our phonological (sound) representations of these words? Although we may not be aware of it, there is a huge amount of variation in the pronunciation of even a single word, due to factors such as speech rate, repetition, familiarity and context. This is also true of Mandarin tones. For example, the '3rd tone' is usually low; but when two 3rd tones occur together, the first has a rising contour.
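To make the kind of variability described above concrete, the sketch below applies a simplified version of the third-tone sandhi rule to a sequence of underlying tones. It is a deliberately minimal toy example in Python and not part of the project's materials.

    # Minimal sketch (not from the project) of Mandarin third-tone sandhi:
    # when two 3rd tones are adjacent, the first surfaces with a rising,
    # tone-2-like contour.

    def apply_third_tone_sandhi(tones):
        """Map underlying lexical tones (1-4) to simplified surface tones."""
        surface = list(tones)
        for i in range(len(surface) - 1):
            if surface[i] == 3 and surface[i + 1] == 3:
                surface[i] = 2  # first T3 is realized with a rising contour
        return surface

    # 'ni3 hao3' ("hello") is underlyingly T3 T3 but pronounced T2 T3
    print(apply_third_tone_sandhi([3, 3]))  # -> [2, 3]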

So, how do speakers store this variability? Many early accounts proposed that words were stored as series of abstract 'phonemes'. On this view, Mandarin tones would be stored and processed as abstract representations of one of the four lexical tones. However, my latest experiments have shown that seeing printed words activates detailed acoustic representations of tone. Ongoing experiments will examine whether there is also an abstract level of representation. In addition, electrophysiological methods will be used to investigate exactly when each of the various levels of representation is activated.

MODOMA: A Computer-Simulated Laboratory-Approach towards Language Acquisition

PhD student: David Shakouri
Supervisors: Crit Cremers, Claartje Levelt and Niels O. Schiller

The goal of the MODOMA project is to create a computer model of language acquisition. The resulting computer program sets out to construct linguistic knowledge (e.g. lexical and/or grammatical information) when presented with utterances such as sentences. MODOMA is an acronym for Moeder-Dochter-Machine (Dutch for "Mother-Daughter-Machine"). A MODOMA is a language acquisition automaton. Acquisition is the result of an ongoing, online interaction between two conversation partners: a mother-entity and a daughter-entity. The mother-entity will be based on Delilah, the Leiden parser and generator of Dutch, whereas the daughter-entity sets out to acquire the mother's grammar in the course of the interaction. An online demo version of Delilah can be found at: www.delilah.eu.
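To make the mother-daughter setup more tangible, here is a deliberately minimal Python sketch of such an interaction loop. All class and method names are hypothetical and purely illustrative: the actual mother-entity is built on the Delilah parser and generator, and the daughter's acquisition procedure is far richer than the word-counting stand-in used here.

    # Toy sketch of a mother-daughter interaction loop (illustrative only).

    class Mother:
        """Stands in for the Delilah-based generator of Dutch utterances."""
        def __init__(self, corpus):
            self.corpus = corpus

        def utter(self):
            for sentence in self.corpus:
                yield sentence

    class Daughter:
        """Toy learner that merely counts word occurrences."""
        def __init__(self):
            self.lexicon = {}

        def listen(self, sentence):
            for word in sentence.split():
                self.lexicon[word] = self.lexicon.get(word, 0) + 1

    mother = Mother(["de kat slaapt", "de hond blaft"])
    daughter = Daughter()
    for utterance in mother.utter():
        daughter.listen(utterance)  # acquisition as ongoing, online interaction
    print(daughter.lexicon)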

This project will provide a language acquisition lab to researchers. The process and results are reproducible, measurable and verifiable. The architecture of the program will be completely parametrized: researchers are given control of the configuration for a particular experiment. On the other hand, a lab entails a simplification of reality, as the daughter is acquiring language in a controlled interaction space instead of the real world. In this way, the project will provide a research tool for future studies into language acquisition. One of the advantages of a computer model is that it enables experiments that would be impossible to do with human subjects. A test program based on the same principles and the same architecture has already been developed successfully. This program sets out to acquire a small grammar of subject and object morphology when presented with texts. The MODOMA project is funded by an NWO 'PhD in the Humanities' grant.

Grapheme-to-phoneme conversion in first and second language reading aloud

PhD student: Kalinka Timmer
Supervisor: Niels O. Schiller

I combine the fields of Psychology and Linguistics with research methods that give us the opportunity to understand what happens in the brain (e.g. electroencephalography, EEG). My research interests are in the process of reading aloud, specifically the conversion of printed text ('orthography') into sounds ('phonology'). Using EEG in addition to speech onset measures (reaction times, RTs) to study reading aloud gives us the opportunity to look at processes that occur before speech output.

I am interested in the similarities and differences in reading aloud across languages (e.g. Dutch, English, Persian, Russian, and Spanish). The Persian language has the interesting feature that 3 out of 6 vowels are not written. Words with unwritten vowels are read more slowly than words in which all the vowels are written, because for the former you need to know the meaning of the word before you can pronounce it. However, EEG reveals that the early processes for both word types are the same. Another interest of mine is bilingualism. I have found that Dutch natives with English as a second language are very similar to native speakers of English in terms of converting written text into speech. In another experiment, Dutch natives read Dutch words aloud (e.g. KNOOP) very briefly preceded by an English word (e.g. 'knee') that was not consciously perceived. Even though they were in a monolingual Dutch environment, they still read the English word as an English word (pronouncing 'knee' with /n/) and not as a Dutch nonword (pronouncing 'knee' with /kn/). This suggests that not only can a person's first language (Dutch, in this case) influence their second language (English), but that the opposite is also possible.
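The 'knee' example hinges on language-specific grapheme-to-phoneme rules. The toy Python sketch below (hypothetical rules, not the project's model) shows how the same onset spelling can map to different pronunciations depending on which language's conversion rules are active.

    # Illustrative sketch: the onset 'kn' is read as /n/ in English
    # but as /kn/ in Dutch, so the same letter string receives different
    # pronunciations depending on the active language.

    G2P_ONSET_RULES = {
        "english": {"kn": "n"},   # 'knee' -> onset /n/
        "dutch":   {"kn": "kn"},  # 'knoop' -> onset /kn/
    }

    def convert_onset(word, language):
        rules = G2P_ONSET_RULES[language]
        for grapheme, phoneme in rules.items():
            if word.startswith(grapheme):
                return phoneme + word[len(grapheme):]
        return word

    print(convert_onset("knee", "english"))  # onset read as /n/
    print(convert_onset("knee", "dutch"))    # onset read as /kn/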

Integrating biofeedback games in speech therapy for children who stutter

PhD student: Daniil Umanski
Supervisor: Niels O. Schiller

The goal of this project is to investigate new possibilities of employing interactive technologies in the delivery of treatment for patients with Motor Speech Disorders (MSD). Specifically, the project explores how computer-based tools can refine current methodologies across the three main phases in the delivery of MSD treatment: (1) the preparation of treatment programs, (2) the practicing of speech motor skills, and (3) the autonomous management of communication outside the clinic. In addressing each of these steps, a concrete technological solution to an identified problem is proposed, developed and evaluated.

Advancements in clinical linguistics are applied, together with natural language processing, to the development of a new computer-based tool that allows clinicians to generate customized treatment materials for their patients. Principles of rhythm video games are combined with real-time acoustic analysis in order to examine the possibilities of using a computer game to support the training of speech timing skills in children with MSD. Speech signal processing is used to provide adaptive activation of auditory feedback, in an effort to refine current methods for enhancing the speech fluency of individuals who stutter. Finally, a course is developed to teach the principles of technological innovation to students of speech-language therapy, in order to promote a durable process of innovation in the field.
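As a rough illustration of the kind of real-time acoustic analysis a rhythm game could rely on, the Python sketch below detects a vocalization onset from short-time energy and scores whether it falls close enough to a target beat. The function names, sampling rate and thresholds are assumptions for the sketch, not taken from the project.

    # Illustrative sketch: detect a speech onset and compare it to a beat time.
    import numpy as np

    def vocal_onset_time(signal, sr=16000, frame_len=400, threshold=0.02):
        """Return the time (s) of the first frame whose RMS energy exceeds threshold."""
        for start in range(0, len(signal) - frame_len, frame_len):
            frame = signal[start:start + frame_len]
            if np.sqrt(np.mean(frame ** 2)) > threshold:
                return start / sr
        return None

    def on_beat(onset_time, beat_time, tolerance=0.15):
        """Score the attempt: was the onset within `tolerance` seconds of the beat?"""
        return onset_time is not None and abs(onset_time - beat_time) <= tolerance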

This research project is funded through a Mozaiek grant from The Netherlands Organization for Scientific Research (NWO).

Linguistic aspects of speech and hearing of children who wear a cochlear implant (CI)

PhD student: Daan van de Velde
Supervisors: Prof. Dr. Niels O. Schiller, Prof. Dr. Johan Frijns, Prof. Dr. Vincent van Heuven, Dr. Mieke Beers, Prof. Dr. Claartje Levelt, Dr. Jeroen Briaire

My research is about linguistic aspects of speech and hearing of children who wear a cochlear implant (CI).

A cochlear implant is a surgically implanted hearing device which partly restores the hearing of deaf people. It is intended for people who have only a small number of functioning hair cells in the inner ear; this includes children as well as adults, and people born deaf as well as people who became deaf at a later stage. Cochlear implantation has been performed for over thirty years; currently, over 300,000 individuals worldwide have received an implant, and it is still becoming more widespread. Technically, a CI converts the acoustic signal, captured by a microphone worn on the outer ear, into an electric signal, bypassing the impaired hair cells.

We know from experimental research, and also from the mechanics of the device, that CI users have difficulty discriminating different frequencies, whereas time- and intensity-related contrasts are less problematic. In speech, frequency is a very important parameter: it carries the melody and accentuation of sentences. This is called prosody. Prosody has several crucial functions in speech, including the signaling of emotion and attitude and the marking of important words.

Therefore, I investigate which prosodic contrasts users can or cannot hear and how this is reflected in their voice and in the way they produce prosody. Because different acoustic parameters, such as frequency, timing and intensity, can jointly cue a single prosodic phenomenon, implantees may be able to use these phenomena properly, but it is expected that the more they have to rely on frequency, the worse they perform. One way in which this can be tested is by using vocoders.

These are simulations of CIs and have the advantage that the researcher can manipulate the settings and is not dependent on finding patients. Altogether, this research combines linguistic, medical and engineering disciplines.
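A common way to build such a simulation is noise-vocoding: the speech signal is split into a small number of frequency bands, the temporal envelope of each band is extracted, and that envelope modulates band-limited noise, so fine spectral detail is discarded while timing and intensity cues are preserved. The Python sketch below illustrates the idea under assumed parameters (number of channels, band edges, filter order); it is not the implementation used in this project.

    # Minimal noise-vocoder sketch (standard CI-simulation technique).
    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    def noise_vocode(signal, sr, n_channels=4, f_lo=100.0, f_hi=6000.0):
        edges = np.geomspace(f_lo, f_hi, n_channels + 1)   # log-spaced band edges
        output = np.zeros_like(signal, dtype=float)
        noise = np.random.randn(len(signal))
        for lo, hi in zip(edges[:-1], edges[1:]):
            sos = butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
            band = sosfiltfilt(sos, signal)
            envelope = np.abs(hilbert(band))                # temporal envelope
            carrier = sosfiltfilt(sos, noise)               # band-limited noise carrier
            output += envelope * carrier
        return output / (np.max(np.abs(output)) + 1e-12)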

Processing of prosody

PhD student: Jurriaan Witteman
Supervisors: Vincent van Heuven and Niels O. Schiller

How we say something can be as important as what we say. Using these melodic and rhythmic aspects of speech (also known as 'prosody') we can communicate our emotions (whether we are happy or angry) but also the linguistic structure of an utterance (e.g. whether what we say is meant as a question or a statement). How does the human brain process such prosodic information?

Using meta-analyses, we have shown that both sides ('hemispheres') of the brain are necessary to perceive prosodic information, but that the right hemisphere is more important than the left for the perception of emotional prosody. Furthermore, we have shown that this right-hemispheric superiority in emotional prosody perception can be explained by the superiority of the right auditory processing areas in processing acoustic properties that are important for the perception of emotional speech, such as pitch. By measuring the electrical activity of the brain ('electroencephalography') we showed that the brain prioritizes the perception of emotional prosody over linguistic prosody, possibly reflecting the existence of a 'hard-wired' system dedicated to the detection of conspecifics' emotions. Lastly, using measurements of regional oxygen use by the brain ('functional magnetic resonance imaging'), we have shown that different regions in the brain are active when people actively analyze emotional prosody versus when they do not pay attention to it.

The acquired knowledge is not only interesting from a fundamental cognitive neuroscience point of view, but could also advance our understanding of neuropsychiatric disorders that are accompanied by disturbances of prosody perception, such as schizophrenia and autism.

Tonal bilingualism

PhD student: Junru Wu
Supervisors: Yiya Chen, Vincent van Heuven and Niels O. Schiller

My research focuses on how bilingual speakers of closely related Chinese tonal dialects handle two tonal systems in perception, comprehension, and production, with particular attention to the role of tones in lexical access.

The project covers the following topics: how cognitive and sociolinguistic backgrounds influence the systematic tonal correspondence between the two related dialects; the interlingual mapping of tonal categories and its impact on lexical and semantic access; the tonal similarity effect in bilingual lexical access; tonal variability in the bilingual mental lexicon and lexical access; and the role of tone in automatic bilingual visual word recognition.

The data in this project are all collected in the field, using phonetic and psycholinguistic experiments. The data are acoustic and behavioral in nature. I use Praat scripts to annotate the corpus and extract phonetic parameters in a semi-automatic way. The main modeling methods used in this project are linear mixed-effects modeling and generalized additive modeling, implemented in R.
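The project itself fits these models in R. Purely as an illustration of what a linear mixed-effects analysis of such behavioral data can look like, here is a minimal Python sketch using statsmodels; the data file and column names (rt, tone_condition, subject) are hypothetical.

    # Illustration only: a by-subject random-intercept model of reaction times.
    import pandas as pd
    import statsmodels.formula.api as smf

    data = pd.read_csv("tonal_lexical_decision.csv")  # hypothetical data file

    # Reaction time as a function of tone condition, with random intercepts
    # for subjects (the by-item structure is omitted to keep the sketch short).
    model = smf.mixedlm("rt ~ tone_condition", data, groups=data["subject"])
    result = model.fit()
    print(result.summary())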

http://www.hum.leiden.edu/lucl/organisation/phd-a-z/wuj.html