In my talk, I will present results from two lines of my research: speech development and speech disorders. Both lines aim to elucidate the role of acoustic, and in particular temporal, variability and stability in the perception and production of verbal and non-verbal expressions.
In the first part, I will focus on how acoustic variability informs the distinction between verbal and musical functions. Here, the effects of pitch and temporal variability/stability on perception will be discussed by examining phenomena at the boundary between speech and song. As an introduction, I will present acoustic correlates that have been found to underlie the “speech-to-song transformation”, a perceptual illusion that makes us perceive a spoken utterance as song after repeated presentations. The boundaries between speech and song are also fuzzy in infant-directed communication, a “musilanguage” infants are confronted with during their first months of life. New results will be presented shedding light on the acoustic (and in particular temporal) variability of infant-directed speech and singing, and on the perception of infant-directed expressions by infants and adults.
The second part addresses the role of temporal variability in speech production as a marker of speech disorders. I will focus on my recent research on stuttering, a developmental motor speech disorder characterized by severe disruptions of the flow of speech. Temporal aspects (variability and timing) were measured in verbal and musical auditory-motor tasks in children, adolescents, and adults who stutter. The results reveal that temporal variability and timing are altered in stuttering in both the verbal and the non-verbal domain. These findings will be discussed in light of the hypothesis that stuttering is linked to a deficit in predictive timing during speech production and, potentially, in auditory-motor coupling.
Yoon Mi Oh (Aoju University, Seoul)