Human listeners decode spoken language with unconscious ease, even in noisy environments where engineering approaches to speech recognition fall short. A crucial reason for this success is the ability to integrate sensory cues from multiple sources. This talk will provide an overview of how visual and somatosensory channels augment auditory processes in speech perception, with particular emphasis on recent work showing how appropriately timed somatosensory perturbations (facial skin deformation, aero-tactile stimulation) can lead to systematic shifts in perceptual boundaries.
Doris Mücke (IfL Phonetics, University of Cologne)
Serge Pinto (Laboratoire Parole et Langage, Aix-Marseille Université)
Claire Pillot-Loiseau (LPP)
Hannah King (CLILLAC-ARP, Université de Paris)