Figure 4: Isotonic lines (equal-loudness contours) and the audition threshold
Source publication
Conference Paper
The scientific study of the whistled speech of several languages has already provided an alternative point of view on many aspects of language. After a general overview of the phenomenon, this paper develops a comparative analysis of several whistled forms of non-tonal languages which are still in use. Meanwhile, the vocalic and consonantal reducti...

Context in source publication

Context 1
... the vowels do not have a fixed pitch: they occupy a frequency band of 150 Hz on average. The stressed vowels are very often whistled higher in pitch within each band. As these bands sometimes overlap, they are used by the whistler to adapt to the phonetic contexts of the words: when there is a potential ambiguity, the whistler distinguishes neighbouring vowels clearly by making the effort to place them at opposite extremes of their own bands (cf. examples below). Therefore, the overlap, in whistles, of two frequency bands corresponding to two different vowels does not mean that they are always interpreted in the same phonological way.

The five vowels of Modern Greek (i, e, a, o, u) are whistled in the village of Antia in five frequency bands, of which four may overlap with the band of another vowel. This creates statistically three major frequency bands: [i, (e, u), (a, o)]. Although accentuation in Modern Greek has an intermediate degree of freedom, we noticed that the whistlers reproduce the spoken accentuation nearly systematically (80%), with variation between whistlers.

The eight vowels of Turkish (i, y, ɯ, e, œ, a, o, u) are whistled in eight frequency bands decreasing in pitch, which may be grouped into three reduced bands as follows: [(i, y, ɯ), (e, œ), (a, o, u)]. Our results differ from those published by Leroy [5] only in the case of /u/, but we obtain the same results as Moles [11]. The vowel harmony rules of Turkish help the whistlers to eliminate some ambiguities. The ambiguities that are not resolved by the harmony system are sometimes overcome by using the extremes of the frequency bands. For example, in the common word "kolay" (/kolaj/), /o/ and /a/ are effectively distinguished because /a/ bears a higher pitch, despite the fact that these two vowels are usually whistled in the same way.

The Silbo vowel system is based on the Spanish dialect spoken on the island of La Gomera, in which /o/ and /a/ are very close. The spoken vowels (i, e, a, o, u) are therefore whistled in five bands which can be grouped into three groups: [i, e, (a, o, u)]. Similarly to Greek and Turkish, the three frequency bands of Silbo are around 2600 Hz for [i], 2100 Hz for [e] and 1600 Hz for [(a, o, u)] (see the sketch below). Some of the best whistlers disagree with this classification: their whistled productions enable them to distinguish the five vowels. They can even differentiate /u/ from /o/ because its spectral envelope carries much more energy. This difference of opinion might be due to the survival of different whistled dialects on the island [3] and is always relative to the level of practice. The accentuation in Silbo uses the full range of each frequency band: for example, in the word "abajo", the second /a/ is whistled higher than the first and the /o/ is whistled lower than the two former vowels.

The main categories of whistled consonants described for Silbo and Turkish are also found in Greek. The stop consonants can be grouped into three types of locus movements, [P, T, K], equivalent to those of the second formant (F2) of spoken speech. /s/ and /dz/ behave like /t/ but with higher loci. Voicing is reproduced by a slight continuation of the amplitude. The shapes of the liquid and fricative continuants are displayed in Fig. 2, together with [m]. The nasal [n] is usually whistled like [l], sometimes with a cut at the edge of the shape.
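To make the vowel-band idea concrete, here is a small illustrative Python sketch (not part of the original study): it takes the approximate Silbo band centres quoted above (2600 Hz, 2100 Hz and 1600 Hz) and assigns a measured whistled pitch to the nearest statistical group. Real whistlers of course exploit the extremes of each band, not just its centre, to resolve ambiguities.

# Illustrative sketch only: maps a measured whistled pitch (Hz) to the
# nearest of the three statistical vowel bands reported for Silbo.
# The band centres are the approximate values quoted in the text.
SILBO_BANDS = {
    "i": 2600.0,
    "e": 2100.0,
    "a/o/u": 1600.0,
}

def classify_whistled_vowel(pitch_hz: float) -> str:
    """Return the vowel group whose band centre is closest to pitch_hz."""
    return min(SILBO_BANDS, key=lambda group: abs(SILBO_BANDS[group] - pitch_hz))

if __name__ == "__main__":
    for f in (2550.0, 2050.0, 1700.0):
        print(f, "Hz ->", classify_whistled_vowel(f))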
From our study of Greek, we also concluded that, in a cluster of consonants, only the most projecting consonant remains in whistles.

The whistling techniques vary as a function of the distance of communication: insertion of a finger into the mouth (long distance), retroflexion of the tongue (middle distance) or bilabial whistling (short distance). Whistling does not require the vibration of the vocal cords: it is produced by a shock effect of the compressed air stream inside the cavity of the mouth. When the jaws are fixed by the finger and/or the tightened lips (point 1 in Fig. 3), the size of the hole is stable. The expelled air stream creates turbulence at the edge of the mouth. The faster the air stream is expelled, the greater the noise inside the cavities. If the hole (mouth) and the cavity (intra-oral volume) are well matched, the resonance is tuned and the whistle is projected more loudly. The frequency of this bio-acoustical phenomenon is modulated by varying the volume of the resonating cavity, which can be, to a certain extent, related to the articulation of the equivalent spoken form (a simple numerical illustration is sketched below). The movements of the tongue and of the epiglottis play the main roles in tuning the vowels and consonants (Fig. 3). The whistled signal is thus shaped by a kind of whistled articulation, aimed at reaching optimal intelligibility for the listener.

The pitches of the main frequency band of whistles are concentrated in a narrow bandwidth (1 kHz to 3 kHz) where human hearing is most sensitive and selective, as shown in Fig. 4. The amplitude of whistled speech has a limited dynamic range (less than 20 dB), whereas the range of spoken speech is more than 50 dB. Long-distance whistled speech is higher in frequency (by approximately 100 to 250 Hz) than short-distance whistling, which underlines that the frequency range is relative to the distance of communication. Whistles carry well in valleys, which form a natural guide (the signal remains understandable at 8 km in La Gomera), and they cover exactly the central domain of frequencies in which sounds resist reverberation in forests [10]. Moreover, in natural conditions the background noise is weak at high frequencies (except in windy weather), so the signal-to-noise ratio is better than 6 dB at 1 km, which is enough for the whistle to be heard clearly. All these remarks show that whistled speech is particularly well adapted to human communication in noisy natural environments and to long-distance communication in mountains or forests.

The comparative results obtained from several whistlers of various non-tonal languages show how the properties of whistles are exploited to encode linguistic information. As a whistled signal is limited to three features (pitch, loudness and duration), frequency and amplitude modulations are used in a complementary way. In this context, the analysis of the acoustic cues exploited by whistlers for vowels and consonants underlines that the intelligibility of these sounds is relative to the lexical environment and to the structure of the language concerned. Such a conclusion is supported by the results of the psychoacoustic tests carried out by Moles [11] in Turkey, which showed that the intelligibility of words improved when they contained the most frequent segmental features and that many more confusions were made for words extracted from their lexical context.
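The paper does not give a formula for the whistle's resonance, but a common first-order way to picture the "hole plus cavity" mechanism described above is a Helmholtz-resonator approximation, f = (c / 2π) · sqrt(A / (V · L)). The Python sketch below uses hypothetical (not measured) values for the aperture area A, effective neck length L and intra-oral volume V, simply to show that shrinking the cavity, as the tongue rises, pushes the whistle towards the upper end of the 1 kHz to 3 kHz range; it is an assumption for illustration, not a model taken from the source.

import math

C = 350.0  # assumed speed of sound in warm, humid breath (m/s)

def whistle_frequency(aperture_mm2: float, neck_mm: float, cavity_cm3: float) -> float:
    """Helmholtz-style estimate of the resonance frequency (Hz) of a mouth-like resonator.

    f = (c / 2*pi) * sqrt(A / (V * L)), with A the aperture area, L the effective
    neck length and V the intra-oral volume. Textbook approximation, not from the paper.
    """
    A = aperture_mm2 * 1e-6   # mm^2 -> m^2
    L = neck_mm * 1e-3        # mm   -> m
    V = cavity_cm3 * 1e-6     # cm^3 -> m^3
    return (C / (2.0 * math.pi)) * math.sqrt(A / (V * L))

if __name__ == "__main__":
    # Hypothetical values: a larger cavity (tongue low) vs. a smaller one (tongue high).
    print(round(whistle_frequency(20.0, 3.0, 10.0)))  # ~1440 Hz
    print(round(whistle_frequency(20.0, 3.0, 5.0)))   # ~2030 Hz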
These results, combined with our analysis, speak to the questions raised by studies of spoken languages that look for the phonetic basis of phonological representations [12] by analysing the degree of reduction in different contextual conditions [13]. Our results, which take into account the articulation, the perception, the acoustics and the use of the vowels in different cultural and lexical contexts, suggest a good match between the main whistled frequency band and the perceptual formant (called F2') defined by Carlson, Fant and Granström [14]. Their approach to the mechanisms of data reduction in vowel perception, together with other psychoacoustic studies tackling this subject, yields results similar to the natural evolution displayed in the phonetic description of articulated whistles. Whistles extend this approach to consonants. Whistled languages are the result of the adaptation of human perceptive and productive intelligence to a natural acoustic environment (topography, vegetation, noise) and to a linguistic environment. They represent a strong model for investigating the perception of languages.

I would like to thank the whistlers and the cultural leaders who took some time to work with me in the field, Laure Dentel who volunteered to record during my fieldwork, Prof. René-Guy Busnel for his strong scientific support during the last 3 years, Bernard Gautheron for his advice, Ross Caughley for lending me some material in Chepang, and the staff of the DDL Laboratory for providing technical material. This research was half financed by a BDI Ph.D. grant of the CNRS and half by personal financial ...

Similar publications

Article
The investigation of acoustic correlates of word stress is a prominent area of research. The literature is rife with studies of the acoustic exponents of what is often referred to as stress but the methodological diversity of this research has created an unclear picture of the properties robustly associated with it. The present paper explores the m...
Article
The aim of this paper is to provide a description of a number of phonetic aspects of Manange, a Tibeto-Burman language of Nepal. Specifically I describe acoustic properties of select units in the segmental and suprasegmental domains. In particular, the tone system of Manange is of special interest because the domain of the tone bearing unit is the...
Thesis
The Arapaho language (one of the Plains Algonquian languages of the Algic family) is traditionally claimed to be a pitch-accent language, meaning that prominence is marked exclusively or mainly by modulation of fundamental frequency (Goddard 2001, Cowell 2008). The main goal of the current research was to experimentally establish the phonetic diffe...
Article
The study of the acoustic correlates of word stress has been a fruitful area of phonetic research since the seminal research on American English by Dennis Fry over 50 years ago. This paper presents results of a cross-linguistic survey designed to distill a clearer picture of the relative robustness of different acoustic exponents of what has been r...
Article
Previous literature on the phonetics of stress in Persian has reported that fundamental frequency is the only reliable acoustic correlate of stress, and that stressed and unstressed syllables are not differentiated from each other in the absence of accentuation. In this study, the effects of lexical stress on duration, overall intensity and spectra...

Citations

... Speech changes induced by environmental factors are primarily characterised by modifications to prosodic cues including increases in intensity, fundamental frequency and word duration (e.g., Castellanos et al., 1996; Garnier, Bailly, Dohen, Welby, & Loevenbruck, 2006; Van Summers et al., 1988). Some languages have even developed a whistled form of language in response to the necessity to communicate across very large physical distances (Meyer, 2005). ...
Thesis
Articulatory variation is well-documented in post-alveolar approximant realisations of /r/ in rhotic Englishes, which present a diverse array of tongue configurations. However, the production of /r/ remains enigmatic, especially concerning non-rhotic Englishes and the accompanying labial gesture, both of which tend to be overlooked in the literature. This thesis attempts to account for both by considering the production and perception of /r/ in the non-rhotic variety of English spoken in England, ‘Anglo-English’. This variety is of particular interest because non-lingual labiodental articulations of /r/ are rapidly gaining currency, which may be due to the visual prominence of the lips, although a detailed phonetic description of this change in progress has yet to be undertaken. Three production and perception experiments were conducted to investigate the role of the lips in Anglo-English /r/. The results indicate that the presence of labiodental /r/ has caused auditory ambiguity with /w/ in Anglo-English. In order to maintain a perceptual contrast between /r/ and /w/, it is argued that Anglo-English speakers use their lips to enhance the perceptual saliency of /r/ in both the auditory and visual domains. The results indicate that visual cues of the speaker's lips are more prominent than the auditory ones and that these visual cues dominate the perception of the contrast when the auditory and visual cues are mismatched. The results have theoretical implications for the nature of speech perception in general, as well as for the role of visual speech cues in diachronic sound change.
... These sounds travel well over large distances [1] and are easy to discern from other biological sounds because pure-tone sine waves occur only rarely in nature. These features have made whistling a viable alternative sound source for human communication when signal fidelity may be more important than signal complexity [2,3]. ...
... Whistled languages encode less information from which to identify the intended speech sounds than voiced speech, but are more robust for long-distance communication. The narrow frequency band of the whistle gives it more power per unit of spectral bandwidth, increasing its signal-to-noise ratio and the effective range of communication [1][2][3]. ...
Article
Most human communication is carried by modulations of the voice. However, a wide range of cultures has developed alternative forms of communication that make use of a whistled sound source. For example, whistling is used as a highly salient signal for capturing attention, and can have iconic cultural meanings such as the catcall, enact a formal code as in boatswain’s calls or stand as a proxy for speech in whistled languages. We used real-time magnetic resonance imaging to examine the muscular control of whistling to describe a strong association between the shape of the tongue and the whistled frequency. This bioacoustic profile parallels the use of the tongue in vowel production. This is consistent with the role of whistled languages as proxies for spoken languages, in which one of the acoustical features of speech sounds is substituted with a frequency-modulated whistle. Furthermore, previous evidence that non-human apes may be capable of learning to whistle from humans suggests that these animals may have similar sensorimotor abilities to those that are used to support speech in humans.
... Whistlers should choose either pitch or timbre and adapt it to the phonetics of their language, whereas in normal speech these two frequency levels are independently controllable and recoverable through pitch and timbre (formants) (Meyer, 2004; Meyer, 2005; Meyer, 2007a). A. Rialland performed a perception test for consonant phonemes in Silbo Gomero and compared Silbo Gomero with the TWsL (Rialland, 2005). ...
... vowels and consonants in TWsL, based both on the data recorded by Busnel in Kuskoy in 1967 (Busnel, 1976) and on the data recorded by Meyer in 2003 (Meyer, 2007a). Besides, Gungorkun (Gungorkun et al., 2015) was and 1600 Hz for [(a, o, u)] (Meyer, 2005; Meyer, 2007a; Rialland, 2005). Eight types of vowels in the Turkish language are (i, ʏ, w, e, oe, u, a, o) in IPA form (or can be written as (i, ü, ı, e, ö, u, a, o) in Turkish letters, respectively). ...
... These four groups result from a phonetic reduction in the whistled sentences, while a phonological structure is preserved. The acoustic analysis of TWsL also confirms that there are some phonetic reductions in the whistled signal compared to the spoken signal, while the articulatory information is preserved as far as possible (Meyer, 2005; Meyer, 2007a) (this supports the theory that Turkish whistled vowels can be grouped into (i, oe, o), as in (Baskan, 1968)). Meyer examines whistled consonants in five groups according to the resulting frequency shapes (close articulatory loci) of their whistled articulation (Meyer, 2007a). ...