Distinctive features and articulatory gestures of Hadza and Iraqw non-pulmonic consonants

Due to COVID-19, this SRPP will take place online.

 
Hadza (a Khoesan language) and Iraqw (a Cushitic language) have non-pulmonic consonants, clicks and ejectives, in their phonetic/phonological inventories. These consonants require precise articulatory and acoustic description so that they can be interpreted and formalized in terms of articulatory gestures and distinctive features. A further goal is to understand the mechanisms by which these segments are produced, and how such data test, or extend, the limits of our knowledge of the diversity of speech production mechanisms across languages.

Sands, Maddieson and Ladefoged (1996) and Sands (2013) described Hadza as having 9 clicks, while Miller (2008) suggests there are 12. Recent data show that the language has 16: [ʘ̰, |, |ʔ, |h, ŋ|, ŋ|ʔ, !, !ʔ, !h, ŋ!, ŋ!ʔ, ‖, ‖ʔ, ‖h, ŋ‖, ŋ‖ʔ]. A fundamental question raised by clicks is how to describe and formalize them in articulatory and acoustic terms. The four Hadza click types, bilabial, dental, alveolar and lateral [ʘ, |, !, ‖], can be contrastively accompanied by aspirated, glottal and nasal features, and these features can even combine. A click can thus be a sequence and a superposition of gestures and features, for example nasal, lateral and aspirated [ŋ‖̰h]. Acoustically, the clicks are described with two features, [grave vs. acute] and [abrupt vs. noisy], following a proposal by Traill (1985). The Hadza dental [|] and alveolar [!] clicks are [grave], and noisy [|] or abrupt [!]. The lateral click [‖] is [grave and acute]. This description is derived from the acoustic spectra taken at the release of the clicks.
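The [grave]/[acute] classification rests on where energy is concentrated in the release spectrum. A minimal sketch of one common way to quantify this, the spectral centre of gravity computed over a short window at the release burst, is given below; this is an illustration of the general idea, not the authors' actual procedure, and the signals, window length and sampling rate are assumptions.

```python
import numpy as np

def spectral_centroid(burst, fs):
    """Spectral centre of gravity (Hz) of a short release burst."""
    spec = np.abs(np.fft.rfft(burst * np.hanning(len(burst))))
    freqs = np.fft.rfftfreq(len(burst), d=1.0 / fs)
    return np.sum(freqs * spec) / np.sum(spec)

# Synthetic bursts: energy concentrated low vs. high in the spectrum.
fs = 44100
t = np.arange(256) / fs
grave = np.sin(2 * np.pi * 1000 * t)   # energy concentrated low -> "grave"
acute = np.sin(2 * np.pi * 8000 * t)   # energy concentrated high -> "acute"
print(spectral_centroid(grave, fs) < spectral_centroid(acute, fs))  # True
```

A lower centroid would correspond to a [grave] release, a higher one to an [acute] release; the [abrupt]/[noisy] distinction would instead concern the temporal envelope of the burst.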
An important point concerns the biomechanics of click production, whose articulator movements resemble swallowing gestures, but without a food bolus. The opposite gesture is found in ejective consonants, where the larynx rises with the glottis closed. Hadza has 6 ejectives: [pʼ, tsʼ, tʃʼ, cʎ̥ʼ, kxʼ, kχʷʼ]. Acoustically, the high intensity of the release burst of the ejectives makes them comparable to clicks. Their acoustic features follow a gradation from [grave] to [acute], with [±noisy] variation at release. The labialized character of the ejective [kχʷʼ] is an intrinsic characteristic, not a secondary articulation. The gesture of this labialized ejective (like that of the other labialized consonants of Hadza and Iraqw) differs from the rounding and production found with the labio-velar approximant [w]: the labialized ejective [kχʷʼ] shows a gesture of lip approximation, whereas the labio-velar approximant [w] shows protrusion and rounding. Comparison of the Hadza and Iraqw ejectives reveals important details of the production mechanisms of these sounds. The articulatory gesture of the Iraqw ejectives involves an initial, almost horizontal movement of the larynx, caused by the activity of the inferior pharyngeal constrictor. This movement precedes the raising of the larynx, which is often very marked in this language. Palatographic data further suggest a reinforcement of the supralaryngeal constriction, as for the Iraqw alveolar lateral ejective [tɬʼ] and the Hadza palatal lateral ejective [cʎ̥ʼ]. The temporal organization of the gestures involved in producing the Hadza and Iraqw non-pulmonic consonants makes it difficult to describe these segments in terms of articulatory gestures alone.
Combining the gestures involved in their production with the acoustic description of their main features allows a more robust categorization of this type of segment.

Miller, K. (2008). Hadza grammar notes. Riezlern.
Sands, B. (2013). Hadza. In R. Vossen (ed.), The Khoesan Languages. London: Routledge.
Sands, B., Maddieson, I. & Ladefoged, P. (1996). The phonetic structures of Hadza. Studies in African Linguistics, 25(2), 171-204.
Traill, A. (1985). Phonetic and Phonological Studies of !Xóõ Bushman. Hamburg: Buske.

Prosodic structure as an interface between rhythmic and intonational patterns

In most studies on prosodic structure, two or three levels of constituency above the prosodic word are usually assumed: the accentual phrase (also called the minor phrase or clitic group), the intermediate phrase (also called the major phrase or phonological phrase) and the intonational phrase. The different names assigned to these units often reflect distinct perspectives on prosodic structure, among which we may distinguish an intonation-based approach and a grammatically driven approach. Because of these differences, there are ongoing debates about the validity of the various units.

In this communication, based on analysis of French prosody and on an examination of the intermediate phrase, we will argue for an approach that clearly distinguishes between metrically and intonationally-based prosodic units. First, we will clarify the extension and status of the intermediate phrase in such a way as to consider it essentially as a metrically-driven prosodic unit. Second, a distinction will be made between this metrically-driven phrase and two types of intonational phrases on the basis of the intonational contours occurring at their right edge.

This proposal is based on (a) the inventory and possible realisations of the contours at the right edge of these phrases, and (b) their relation to morpho-syntactic and semantic structures. Note that our proposal accounts for phrasing and the choice of intonation contour at the underlying phonological level; the way the contours are realized is seen as resulting from choices made in other parts of the grammar and from performance factors.

Relative vs. absolute orientation in sign language: The case of two-handed signs

The segmental phonology of sign language is currently modeled with feature geometry and dependency relations. These models typically assume three phonemic classes as primitives (handshape, place of articulation and movement), and derive a fourth, orientation, from the interaction between handshape and place of articulation. Current sign language models treat orientation as a relation between a hand part and a plane of articulation. Defining orientation in this relative way makes it possible to dispense with the body as a landmark.
The goals of this study are (i) to provide evidence that absolute orientation is needed in addition to relative orientation in order to capture the phonology of some signs, and (ii) to minimally enrich current models, which are based solely on relative orientation, so that the phonology of these “exceptional” signs is also accounted for.

We use French Sign Language symmetrical two-handed signs produced on the body, such as BELT, BONE, TABOO and UNEMPLOYMENT, as a case study. We show that relative orientation does not achieve descriptive adequacy when the two hands contact each other: relative orientation can capture either the contact between the hands or the contact with the body, but not both. We propose secondary planes as a formal device to model orientation for these signs. While implementing this solution requires only minimal changes to current theories, its impact on the overall theory of segmental phonology for sign is considerable: the core conceptualization of orientation as a purely relational phonemic class no longer holds (at least not for these signs), since secondary planes impose geometrical restrictions that force absolute orientation.

The phonetic basis of speech preparation

Silent phases before speech initiation are often seen as the time interval during which the utterance is planned. Most studies on pauses focus on cognitive and linguistic factors such as word frequency or utterance complexity. The aim of our study is to investigate how phonetic factors affect these silent phases. In particular, we are interested in the physiological aspects of speech initiation, such as breathing, articulatory posturing and the coordination of breathing and oral gestures. Pilot studies from three areas will be presented here: (1) the effect of breathing on reaction time, (2) the coordination of respiratory and oral activity during interspeech pauses, and (3) the effect of answer type on gap duration in dialogues.

Bangime as a language isolate

West Africa is potentially the birthplace of modern humans, yet, north of the Bantu-speaking area, it is among the least studied regions in the world. Language is a central part of humanity’s present and past: every modern human being communicates through language. Prehistoric, unrecorded languages cannot be studied in the same way as speech is today, but we can still gain insights into our ancient ancestors’ languages by looking at the ways in which people communicate with each other. Historical linguists search for sound-meaning patterns among present-day speakers to group languages into families, and then reconstruct what the family’s proto-language would have sounded like. A language isolate, a language with no known living relatives, presents one of the biggest obstacles for historical linguistic reconstruction. A language isolate spoken by a genetically isolated population represents a remnant of lost diversity and a key to unlocking the mysteries of our species’ early migration patterns. Bangime is one of Africa’s four confirmed language isolates. Its speakers, the Bangande, are equally unique genetically. The affiliations of the languages and peoples surrounding the Bangande, the Dogon, Mande and Songhai groups, are among the most debated in Africa. The INSIGHT2020 team will amass existing, and gather new, big data from under-studied languages and compare them with innovative genetics research to expand the search for the past pathways of West African populations. We will use ground-breaking computer-assisted technologies to test the hypothesis that the Bangande are the only population to have survived a yet undiscovered cataclysmic event that predated the Bantu Expansion. Findings will be made available to researchers in an accessible, multimodal, online repository. The Bangande community will also be informed in an ethically sensitive and culturally appropriate manner. Our interdisciplinary methodology can serve as a model for other areas with similar questions.

Articulatory variability and coordination: Speech errors from a dynamical perspective

Speaking properly is one of the factors that leads to effective communication between people. The seemingly invariant sequence of planned and produced speech units frequently results in extremely variable articulatory movements, sometimes to such an extent that the speaker produces what the listener perceives as a speech error. Interestingly, the speaker himself or herself frequently does not notice the error, suggesting that immediate auditory feedback is not the most important channel for monitoring speech production. For decades, the production and correction of speech errors have been a valuable source of information for linguists modeling speech and language production processes. In general, errors have been interpreted and modeled as originating at the phonological level, as the result of competing phonemes or features. More recent studies suggest that errors are more gradual and, in certain cases, originate at the articulatory level. I will present a series of studies, conducted at the Oral Dynamics Lab in Toronto, examining errors from an articulatory point of view and exploring whether speech errors are influenced by phonetic context and thus originate at a lower, phonetic or articulatory, level. In addition, I will present data on how speakers control for these articulatory speech errors.

Proposing and testing a new nasality index measured using a synchronous multi-sensory system

We have recently developed a non-invasive multi-sensor acquisition system, the hyper-helmet, for recording rare songs with a view to safeguarding intangible cultural heritage. In this presentation, we take advantage of this articulatory sensing system to propose and test a new nasality index. The helmet’s acoustic microphone and nasal piezoelectric accelerometer are used to compute an oral/nasal RMS ratio. An electroglottograph (EGG) is used to estimate the voicing parameter. In addition, a non-intrusive tongue imaging sensor (an ultrasound probe) and a lip-movement camera serve as backups for the qualitative interpretation of articulation and nasality. Software has been developed for the synchronous acquisition of all sensors, and it has been used to record an English corpus produced by a middle-aged male native speaker of Canadian English. Multiple tests have been carried out to examine various theories of nasality. Some results are shown in this presentation.
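As a rough illustration of how an oral/nasal RMS ratio of this kind might be computed from two synchronised channels, the sketch below frames both signals and takes the ratio of their frame-wise RMS values. The framing parameters, function names and synthetic signals are assumptions for the illustration; the authors' exact computation is not specified in the abstract.

```python
import numpy as np

def frame_rms(x, frame_len=512, hop=256):
    """Frame-wise root-mean-square energy of a 1-D signal."""
    n_frames = 1 + (len(x) - frame_len) // hop
    return np.array([
        np.sqrt(np.mean(x[i * hop : i * hop + frame_len] ** 2))
        for i in range(n_frames)
    ])

def oral_nasal_ratio(oral, nasal, eps=1e-12):
    """Oral/nasal RMS ratio per frame on two synchronised channels.
    High values suggest mostly oral radiation; low values suggest nasality."""
    return frame_rms(oral) / (frame_rms(nasal) + eps)

# Synthetic demo: the oral channel carries 4x the amplitude of the nasal one.
fs = 16000
t = np.arange(fs) / fs
oral = 0.8 * np.sin(2 * np.pi * 120 * t)
nasal = 0.2 * np.sin(2 * np.pi * 120 * t)
ratio = oral_nasal_ratio(oral, nasal)
print(round(float(ratio.mean()), 2))  # 4.0
```

In practice the two channels would need amplitude calibration before the ratio is interpretable across recordings, since a microphone and a piezoelectric accelerometer have very different sensitivities.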

Aerodynamic, articulatory and acoustic realization of French /R/

French uvular /ʁ/ is usually considered problematic because of its variability, especially in positions such as word-initial and word-final.
In this presentation, physiological and aerodynamic analyses allowed us to determine its major axes of variation and to validate the use of several acoustic measurements.
An acoustic study is then presented on large corpora of continuous speech, in order to test the variability of French /ʁ/ in light of the aforementioned results. Finally, a parallel with perception is drawn.