Human listeners decode spoken language with unconscious ease, even in noisy environments where engineering approaches to speech recognition fall short. A crucial reason for this success is the ability to integrate sensory cues from multiple sources. This talk will provide an overview of how visual and somatosensory channels augment auditory processes in speech perception, with particular emphasis on recent work showing how appropriately timed somatosensory perturbations (facial skin deformation, aero-tactile stimulation) can lead to systematic shifts in perceptual boundaries.
Outi Tuomainen (University of Potsdam)
Perception and production of French nasal vowels by Spanish speakers from Spain and Colombia
Michael A. Grosvald (Qatar University)
Martin Krämer (The Arctic University of Norway)