Sound change refers to the slow and systematic change in spoken accent within individuals or communities over time, typically on a time scale of years or decades. Examples of sound change in recent history are the fronting of /u/ (e.g. in GOOSE, FOOT) in Southern British English and the merging of the diphthongs in SQUARE and NEAR in favour of /iə/ in New Zealand English. The forces giving rise to sound change are rooted in the cognitive mechanisms by which humans transmit and subtly imitate each other’s speech attributes, making sound change an emergent phenomenon. Identifying and understanding the roots of sound change, as well as predicting future emerging accents for a particular language community, requires the development of complex computational models. Agent-Based Models (ABMs) make it possible to simulate the evolution of spoken accent among a community of artificial agents, each endowed with fully specified (probabilistic) rules for perception, production, and mental representation of speech sounds, together with global (stochastic) rules governing interactions among agents.
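As a minimal sketch of the imitation dynamic described above (not the Munich ABM itself; all class names, parameter values, and the one-dimensional acoustic space are hypothetical), a community of agents can each store a single acoustic target, produce noisy tokens around it, and nudge the target toward each token they perceive:

```python
import random

class Agent:
    """Toy agent whose accent is one acoustic value (e.g. F2 of /u/ in Hz)."""

    def __init__(self, target):
        self.target = target  # mental representation of the vowel target

    def produce(self):
        # production adds articulatory noise around the stored target
        return random.gauss(self.target, 20.0)

    def perceive(self, token, rate=0.05):
        # perception nudges the stored target toward the incoming token
        self.target += rate * (token - self.target)

def simulate(n_agents=20, n_interactions=5000, seed=0):
    random.seed(seed)
    # initial community: targets scattered around a hypothetical 1600 Hz
    agents = [Agent(random.gauss(1600.0, 60.0)) for _ in range(n_agents)]
    for _ in range(n_interactions):
        # global stochastic rule: pick a random speaker-listener pair
        speaker, listener = random.sample(agents, 2)
        listener.perceive(speaker.produce())
    return [a.target for a in agents]
```

Under these (illustrative) update rules the population drifts toward a shared norm: repeated perception-production loops shrink between-agent differences, which is the kind of emergent, population-level outcome the full ABM tracks with far richer representations.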
This talk presents the latest version of the ABM of sound change developed at the Institute of Phonetics and Speech Processing at LMU Munich. Acoustic and (sub-)phonemic levels are implemented in the ABM by general-purpose machine learning algorithms, namely Gaussian Mixture Models (GMMs) and Non-negative Matrix Factorisation (NMF). Each agent organises and continuously adapts both levels of representation in full autonomy. Simulated acoustic and/or (sub-)phonemic changes, at the individual as well as at the population level, are tracked separately, directly compared to real (corpus) data, and their origin interpreted on the basis of the known mechanisms governing the ABM. In the talk, the case of /u/ fronting in Southern British English will serve as an example to showcase the architecture and workings of the ABM.
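A hedged sketch of how the two representational levels could be combined, assuming scikit-learn and synthetic F1/F2 data (the actual ABM's features, dimensionalities, and settings are not specified in this abstract): a GMM models the acoustic token distribution, and NMF factorises the resulting non-negative posterior matrix into additive (sub-)phonemic parts.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# synthetic acoustic tokens: F1/F2 pairs for a back vs. a fronted /u/
# (formant values are illustrative, not corpus data)
back_u  = rng.normal([300.0, 1200.0], [30.0, 80.0], size=(200, 2))
front_u = rng.normal([300.0, 1900.0], [30.0, 80.0], size=(200, 2))
tokens = np.vstack([back_u, front_u])

# acoustic level: a GMM clusters tokens into Gaussian "exemplar clouds"
gmm = GaussianMixture(n_components=2, random_state=0).fit(tokens)
posteriors = gmm.predict_proba(tokens)   # soft assignments, shape (400, 2)

# (sub-)phonemic level: NMF factorises the non-negative posterior matrix
# into token-by-part activations W and a part-by-cluster basis H
nmf = NMF(n_components=2, random_state=0, max_iter=500)
W = nmf.fit_transform(posteriors)
H = nmf.components_
```

In an agent-based setting, each agent would maintain and update its own GMM/NMF pair as new tokens arrive, so that acoustic drift (e.g. a rising F2 for /u/) can propagate upward into changed sub-phonemic structure; this sketch only shows one static fit.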