Human listeners decode spoken language with unconscious ease, even in noisy environments where engineering approaches to speech recognition fall short. A crucial reason for this success is the ability to integrate sensory cues from multiple sources. This talk will provide an overview of how visual and somatosensory channels augment auditory processes in speech perception, with particular emphasis on recent work showing how appropriately timed somatosensory perturbations (facial skin deformation, aero-tactile stimulation) can lead to systematic shifts in perceptual boundaries.
Shigeto Kawahara (Keio University, Tokyo)
Timo B. Roettger (University of Oslo)
Bob Ladd (University of Edinburgh)
Marcin Włodarczak (Stockholm University)