As in any communication system, the decoding of speech by the human auditory system relies on a code associating a physical input with linguistic representations. Identifying which auditory primitives (acoustic cues) human listeners rely on to decode speech sounds is an important step toward a better understanding of speech comprehension and acquisition.
In this talk I will describe two projects aiming to uncover perceptually relevant acoustic cues in speech. The first part will focus on identifying the acoustic cues underpinning phoneme comprehension, through the example of a ba/da categorization task, using the newly developed Auditory Classification Image method (Varnet et al., 2013, 2015, 2016). In the second part, we will turn to the encoding of higher-level linguistic properties in the speech signal, comparing different language groups (stress-timed vs. syllable-timed languages and head-complement vs. complement-head languages) on the basis of their temporal modulation content (Varnet et al., 2017).