As in any communication system, the decoding of speech by the human auditory system relies on a code associating a physical input with linguistic representations. Identifying the auditory primitives (acoustic cues) that human listeners rely on to decode speech sounds is an important step toward a better understanding of speech comprehension and acquisition.
In this talk I will describe two projects aiming to uncover perceptually relevant acoustic cues in speech. The first part will focus on identifying the acoustic cues underpinning phoneme comprehension, through the example of a ba/da categorization task, using the newly developed Auditory Classification Image method (Varnet et al., 2013, 2015, 2016). In the second part, we will turn to the encoding of higher-level linguistic properties in the speech signal, comparing different language groups (stress-timed vs. syllable-timed languages and head-complement vs. complement-head languages) on the basis of their temporal modulation content (Varnet et al., 2017).
Outi Tuomainen (University of Potsdam)
Michael A. Grosvald (Qatar University)
Martin Krämer (The Arctic University of Norway)
Gasper Begus (UC Berkeley)