Integrating biofeedback games in speech therapy for children who stutter

PhD student: Daniil Umanski
Supervisor: Niels O. Schiller

The goal of this project is to investigate new possibilities for employing interactive technologies in the delivery of treatment for patients with Motor Speech Disorders (MSD). Specifically, the project explores how computer-based tools can refine current methodology across the three main phases of MSD treatment delivery: (1) the preparation of treatment programs, (2) the practicing of speech motor skills, and (3) the autonomous management of communication outside the clinic. For each of these phases, a concrete technological solution to an identified problem is proposed, developed and evaluated.

Advances in clinical linguistics are applied, together with natural language processing, to develop a new computer-based tool that lets clinicians generate customized treatment materials for their patients. Principles of rhythm video games are combined with real-time acoustic analysis to examine whether a computer game can support the training of speech timing skills in children with MSD. Speech signal processing is used to provide adaptive activation of auditory feedback, in an effort to refine current methods for enhancing the speech fluency of individuals who stutter. Finally, a course is developed to teach the principles of technological innovation to students of speech-language therapy, in order to promote a durable process of innovation in the field.
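
To make the idea of adaptive activation concrete, the following minimal Python sketch gates a delayed auditory feedback loop on signal energy, so the speaker hears their own delayed voice only while actually speaking. The RMS threshold, the 75 ms delay and the sounddevice-based audio loop are illustrative assumptions, not the project's actual implementation.

    # Minimal sketch of energy-gated delayed auditory feedback (DAF).
    # The RMS gate, delay value and audio backend are assumptions.
    import numpy as np
    import sounddevice as sd

    FS = 16_000        # sampling rate in Hz
    DELAY_MS = 75      # feedback delay; values in this range are common in DAF
    RMS_GATE = 0.01    # energy threshold that switches the feedback on

    delay_samples = int(FS * DELAY_MS / 1000)
    ring = np.zeros(delay_samples, dtype=np.float32)  # delay line
    pos = 0

    def callback(indata, outdata, frames, time, status):
        """Feed the microphone signal back with a delay, but only while speaking."""
        global pos
        mono = indata[:, 0]
        out = np.zeros(frames, dtype=np.float32)
        speaking = np.sqrt(np.mean(mono ** 2)) > RMS_GATE
        for i in range(frames):
            out[i] = ring[pos] if speaking else 0.0   # gated delayed sample
            ring[pos] = mono[i]                       # store current sample
            pos = (pos + 1) % delay_samples
        outdata[:, 0] = out

    with sd.Stream(samplerate=FS, channels=1, callback=callback):
        sd.sleep(10_000)  # run the feedback loop for ten seconds

In a real system the simple RMS gate would be replaced by a more robust speech detector, but the structure, analysis of the incoming signal deciding when feedback is active, stays the same.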

This research project is funded through a Mozaïek grant from the Netherlands Organisation for Scientific Research (NWO).

Linguistic aspects of speech and hearing of children wearing a cochlear implant (CI)

PhD student: Daan van de Velde
Supervisors: Prof. Dr. Niels O. Schiller, Prof. Dr. Johan Frijns, Prof. Dr. Vincent van Heuven, Dr. Mieke Beers, Prof. Dr. Claartje Levelt, Dr. Jeroen Briaire

My research concerns linguistic aspects of the speech and hearing of children who wear a cochlear implant (CI).

A cochlear implant is a surgically implanted hearing device that partly restores the hearing of deaf people. It is intended for people with only a small number of functioning hair cells in the inner ear; this can involve both children and adults, as well as people born deaf and people who acquired deafness at a later stage. Cochlear implantation has been performed for over thirty years; currently, over 300,000 individuals worldwide have received an implant, and it is still becoming more widespread. Technically, a CI converts the acoustic signal, captured by a microphone worn on the outer ear, into an electric signal, bypassing the impaired hair cells.

We know from experimental research, and also from the mechanics of the device, that CI users have difficulty discriminating different frequencies, whereas time- and intensity-related contrasts are less problematic. In speech, frequency is a very important parameter: it carries the melody and accentuation of sentences. This is called prosody. Prosody has several crucial functions in speech, including the signaling of emotion and attitude and the marking of important words.

Therefore, I investigate which prosodic contrasts users can and cannot hear, and how this is reflected in their voice and in the way they produce prosody. Because different acoustic parameters, such as frequency, timing and intensity, can correlate within one prosodic phenomenon, implantees may be able to use these phenomena properly, but the expectation is that the more they have to rely on frequency, the worse they perform. One way to test this is with vocoders.

These are simulations of CIs and have the advantage that the researcher can manipulate the settings and is not dependent on finding patients. Altogether, this research combines linguistic, medical and engineering disciplines.
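
As an illustration, a basic noise-band vocoder of the kind used in such simulations can be sketched in a few lines of Python; the channel count, band edges and envelope cutoff below are illustrative assumptions rather than settings from this project.

    # Minimal sketch of a noise-band vocoder used to simulate CI hearing:
    # the fine spectral detail of the input is replaced by band-limited
    # noise, keeping only each channel's slow amplitude envelope.
    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    def vocode(signal, fs, n_channels=8, f_lo=100.0, f_hi=7000.0, env_cutoff=50.0):
        # Logarithmically spaced band edges across the analysis range.
        edges = np.geomspace(f_lo, f_hi, n_channels + 1)
        env_sos = butter(4, env_cutoff, btype="low", fs=fs, output="sos")
        out = np.zeros_like(signal, dtype=float)
        for lo, hi in zip(edges[:-1], edges[1:]):
            band_sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
            band = sosfiltfilt(band_sos, signal)          # analysis filter
            env = sosfiltfilt(env_sos, np.abs(band))      # envelope extraction
            carrier = sosfiltfilt(band_sos, np.random.randn(len(signal)))
            out += np.clip(env, 0.0, None) * carrier      # modulated noise band
        return out

Lowering n_channels degrades spectral (frequency) resolution while preserving the temporal envelope, which is exactly the kind of manipulation used to probe how much listeners must rely on frequency cues.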

Processing of prosody

PhD student: Jurriaan Witteman
Supervisors: Vincent van Heuven and Niels O. Schiller

How we say something can be as important as what we say. Using these melodic and rhythmic aspects of speech (also known as 'prosody') we can communicate our emotions (whether we are happy or angry), but also the linguistic structure of an utterance (e.g. whether what we say is meant as a question or a statement). How does the human brain process such prosodic information?

Using meta-analyses, we have shown that both sides ('hemispheres') of the brain are necessary to perceive prosodic information, but that the right hemisphere is more important than the left for the perception of emotional prosody. Furthermore, we have shown that this right-hemispheric superiority in emotional prosody perception can be explained by the superiority of the right auditory processing areas in processing acoustic properties that are important for the perception of emotional speech, such as pitch.

Using measurements of the electrical activity of the brain ('electroencephalography'), we showed that the brain prioritizes the perception of emotional over linguistic prosody, possibly reflecting the existence of a 'hard-wired' system dedicated to the detection of conspecifics' emotions. Lastly, using measurements of regional oxygen use by the brain ('functional magnetic resonance imaging'), we have shown that different regions in the brain are active when people actively analyze emotional prosody versus when they do not pay attention to it.

The acquired knowledge is not only interesting from a fundamental cognitive neuroscience point of view, but could also advance our understanding of neuropsychiatric disorders that are accompanied by prosody perception disturbances, such as schizophrenia and autism.
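
For readers curious how a pitch contour, the acoustic property highlighted above, is typically extracted from speech recordings, here is a minimal sketch using parselmouth, a Python interface to Praat; the file name is a placeholder.

    # Minimal sketch of extracting a pitch (F0) contour with parselmouth.
    # "utterance.wav" is a placeholder file name.
    import parselmouth

    snd = parselmouth.Sound("utterance.wav")
    pitch = snd.to_pitch()                   # default Praat range, 75-600 Hz
    f0 = pitch.selected_array["frequency"]   # Hz, 0 where unvoiced
    times = pitch.xs()                       # matching time stamps in seconds
    for t, hz in zip(times, f0):
        if hz > 0:                           # skip unvoiced frames
            print(f"{t:.3f}s  {hz:.1f} Hz")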

Tonal bilingualism

PhD student: Junru Wu
Supervisors: Yiya Chen, Vincent van Heuven and Niels O. Schiller

My research examines how bilingual speakers of closely related Chinese tonal dialects handle two tonal systems in perception, comprehension, and production, with a focus on the role of tones in lexical access.

The project covers the following topics: how cognitive and sociolinguistic backgrounds influence the systematic tonal correspondence between the two related dialects; the interlingual mapping of tonal categories and its impact on lexical and semantic access; tonal similarity effects in bilingual lexical access; tonal variability in the bilingual mental lexicon and lexical access; and the role of tone in automatic bilingual visual word recognition.

The data in this project are all collected in the field, using phonetic and psycholinguistic experiments, and are acoustic and behavioral in nature. I use Praat scripts to annotate the corpus and extract phonetic parameters in a semi-automatic way. The main modeling methods used in this project are linear mixed-effects modeling and generalized additive modeling, implemented in R.
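
As a rough illustration of the statistical approach, the sketch below fits a linear mixed-effects model with a random intercept per subject, using Python's statsmodels rather than R; the variable names and toy data are illustrative assumptions, not the project's actual materials.

    # Minimal sketch of a linear mixed-effects model of the kind described
    # above, fitted with statsmodels instead of R; data and names are toys.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 400
    data = pd.DataFrame({
        "subject": np.repeat([f"s{i}" for i in range(20)], n // 20),
        "tone_match": rng.integers(0, 2, n),   # 1 = matching tonal category
    })
    # Toy reaction times: faster responses for matching tones, plus noise.
    data["rt"] = 650 - 40 * data["tone_match"] + rng.normal(0, 50, n)

    # Random intercept per subject; R equivalent: lmer(rt ~ tone_match + (1|subject))
    model = smf.mixedlm("rt ~ tone_match", data, groups=data["subject"]).fit()
    print(model.summary())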

http://www.hum.leiden.edu/lucl/organisation/phd-a-z/wuj.html