Fig 2

Source publication
Conference Paper
Full-text available
Speech motor learning is still an open topic in neural computational modeling. In this paper we focus on the relationship between vowel articulation and its muscle activation patterns, propose a neural account of speech motor learning, and elucidate the neural strategy infants use when learning speech. An existing physiological mod...

Context in source publication

Context 1
... motor learning (i.e. organization of the execution map), corresponding muscle forces and articulatory contour points were generated as training items. As a starting point of babbling, we defined three 'extreme proto-vocalic tongue states' (high-front, high-back and low-back) forming palatal, velar, and pharyngeal proto-vocalic constrictions (Fig. ...
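To make the babbling setup concrete, here is a minimal sketch of how such training items might be generated, assuming the three extreme tongue states are represented as articulatory parameter vectors; the vector values and the convex-combination sampling are illustrative assumptions, not the authors' actual procedure.

```python
import numpy as np

# Hypothetical articulatory parameter vectors for the three
# 'extreme proto-vocalic tongue states' (values are illustrative only).
EXTREME_STATES = {
    "high-front": np.array([1.0, 0.0, 0.0]),  # palatal constriction
    "high-back":  np.array([0.0, 1.0, 0.0]),  # velar constriction
    "low-back":   np.array([0.0, 0.0, 1.0]),  # pharyngeal constriction
}

def sample_babbling_items(n_items: int, rng=None) -> np.ndarray:
    """Sample proto-vocalic training items as random convex
    combinations of the three extreme tongue states."""
    rng = rng or np.random.default_rng(0)
    corners = np.stack(list(EXTREME_STATES.values()))        # (3, dim)
    weights = rng.dirichlet(alpha=np.ones(3), size=n_items)  # (n, 3)
    return weights @ corners                                 # (n, dim)

items = sample_babbling_items(1000)  # items spanning the proto-vowel space
```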

Similar publications

Article
Full-text available
Introduction: The notion of a single localized store of word representations has become increasingly less plausible as evidence has accumulated for the widely distributed neural representation of wordform grounded in motor, perceptual, and conceptual processes. Here, we attempt to combine machine learning methods and neurobiological frameworks to pr...

Citations

... Thus a syllable like [ta], for example, comprises an apical closing action in temporal coordination with a glottal opening action for the production of [t], and a vocalic tongue-lowering action in temporal coordination with a glottal phonatory closing action for the production of [a]. The neuromuscular programming and execution module calculates concrete muscle activation patterns for each motor plan on the basis of the speech action score (see execution map in Chen, Dang, Yan, Fang, & Kröger, 2013). The articulatory-acoustic model currently used is the Dang-Honda model (Dang & Honda, 2004). ...
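To make the notion of a speech action score concrete, here is a minimal sketch of [ta] as a set of temporally coordinated actions; the class layout, articulator names, and timing values are illustrative assumptions, not the representation actually used in Chen et al. (2013).

```python
from dataclasses import dataclass

@dataclass
class SpeechAction:
    """One articulatory or glottal action with its activation interval."""
    articulator: str
    goal: str
    t_on: float   # activation onset (s)
    t_off: float  # activation offset (s)

# Speech action score for [ta]: the apical closure is coordinated with a
# glottal opening, the vocalic tongue lowering with a glottal phonatory closing.
score_ta = [
    SpeechAction("tongue tip",  "apical closure (for [t])", 0.00, 0.08),
    SpeechAction("glottis",     "opening (voicelessness)",  0.00, 0.08),
    SpeechAction("tongue body", "lowering (for [a])",       0.06, 0.30),
    SpeechAction("glottis",     "phonatory closing",        0.06, 0.30),
]
```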
Article
Abstract: Because speech acquisition begins with sensorimotor activity (i.e. babbling and imitation of speech items in order to learn articulatory-acoustic relations) as well as with semantic cognitive processing (i.e. linking phonetic items with concepts), distinctiveness as well as phonetic-phonological features emerge early in speech acquisition. Based on a biologically inspired model of speech processing, using interconnected growing self-organizing maps (I-GSOMs), the phonetic-phonological interface is described here in terms of a numerical, computer-implemented model. By simulating early phases of speech acquisition, it can be shown that vocalic features like low-high as well as consonantal features describing the manner of articulation (like plosive, fricative, nasal, lateral, etc.) already arise at sensorimotor levels. This is reflected in our model by an ordering of syllables with respect to phonetic-phonological features at the level of an auditory-based neural map. Other features like consonantal place of articulation (labial, apical, dorsal) as well as voiced-voiceless emerge at higher levels within our model, i.e. at the level of neural associations between a phonetic and a semantic neural map. It can be hypothesized from these findings that the phonetic-phonological interface does not appear as a clean cut within the speech processing system but as a broader zone within that system, located between sensorimotor and semantic processing. Download: http://www.sciencedirect.com/science/article/pii/S0095447015000765
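The core mechanism behind such self-organizing maps is the Kohonen update, which produces exactly the kind of feature-based ordering described above. Below is a minimal sketch of a single training step; the learning rate and neighborhood width are assumed values, and the growing and interconnection machinery of the actual I-GSOM model is omitted.

```python
import numpy as np

def som_step(weights, x, lr=0.1, sigma=1.5):
    """One Kohonen update: move the best-matching unit and its
    grid neighbors toward the input vector x.

    weights: (rows, cols, dim) array of neuron weight vectors.
    """
    rows, cols, _ = weights.shape
    # Best-matching unit (BMU): neuron whose weights are closest to x.
    dists = np.linalg.norm(weights - x, axis=2)
    bmu = np.unravel_index(np.argmin(dists), (rows, cols))
    # Gaussian neighborhood on the map grid around the BMU.
    ii, jj = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    grid_d2 = (ii - bmu[0]) ** 2 + (jj - bmu[1]) ** 2
    h = np.exp(-grid_d2 / (2 * sigma ** 2))[..., None]
    # Inputs with similar (e.g. auditory) features come to occupy
    # neighboring map positions -- the ordering effect described above.
    weights += lr * h * (x - weights)
    return bmu
```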
... In order to improve the feedback function, Kröger introduced a syllabic sensorimotor skill repository, which is modeled by a self-organizing map (SOM) [11]. In previous work, we combined Kröger's model with a physiological articulatory model to develop a model that can replicate the speech production process from central control to peripheral movements [16]. ...
... The previous model included two motor states, the high-level motor state (motor plan) and the low-level motor state (motor control), which are connected to a core feature map modeled by a self-organizing map (SOM). The SOM consists of a number of "model neurons", each of which contains three different states: the articulation state, the acoustic state, and the perceptual state; those states are paired using a fixed "one-to-one" relation [16]. As is well known, however, the relations between different states should be "one-to-many", not "one-to-one" [15]. ...
... In this section, we propose a new framework for a neurocomputational model with four distinct SOMs, and briefly introduce the neural representation of each state and the physiological articulatory model. Figure 1 shows the new structure of the neurocomputational model, which is developed on the basis of our previous work [16]. The proposed neurocomputational model has four SOMs, including feedforward mapping and sensory-based feedback pathways. ...
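As a rough illustration of how associations between two such SOMs could support "one-to-many" projections, here is a minimal Hebbian-style sketch; the class name, learning rule, and thresholding are assumptions for illustration, not the model's actual implementation.

```python
import numpy as np

class SomAssociation:
    """Hebbian link matrix between two maps, allowing one-to-many
    projections (one source neuron may drive several target neurons)."""

    def __init__(self, n_src: int, n_tgt: int):
        self.W = np.zeros((n_src, n_tgt))

    def co_activate(self, src_act, tgt_act, lr=0.05):
        # Strengthen links between co-active neurons (outer product).
        self.W += lr * np.outer(src_act, tgt_act)

    def project(self, src_idx: int, threshold=0.5):
        # All target neurons whose link strength exceeds a fraction of the
        # strongest link: the 'one-to-many' projection of one source neuron.
        w = self.W[src_idx]
        if w.max() <= 0:
            return np.array([], dtype=int)
        return np.flatnonzero(w >= threshold * w.max())
```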
Article
Full-text available
Speech production is complex for the brain to control, since it involves many neural processes such as speech planning, motor control, and auditory and somatosensory feedback. These functions are thought to work both in cascade and in parallel, and the control signals are transformed from one brain area to others with 'one-to-many' relations. To describe this situation, we developed a new framework for a neuro-computational model of speech production based on our previous studies. The proposed model deals with dynamic properties of speech articulation for consonant-vowel (CV-) syllables. In our simulation, the neuronal groups (i.e., motor, auditory and somatosensory) were acquired by learning and stored in self-organizing maps (SOMs), and the relations between the SOMs were investigated. The results show that the time-varying properties were represented properly. In the control signal flow, the model demonstrated 'one-to-many' projections between the SOMs, where one neuron in an SOM was on average projected onto 1.64 neurons in another SOM.
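The reported average of 1.64 projected neurons could be computed from a learned link matrix along these lines; the binarization rule (counting links above a fraction of each row's maximum) is an assumption for illustration.

```python
import numpy as np

def average_fanout(W, threshold=0.5):
    """Average number of target neurons each source neuron projects to,
    counting links above a fraction `threshold` of that row's maximum."""
    row_max = W.max(axis=1, keepdims=True)
    active = (W >= threshold * row_max) & (row_max > 0)
    return active.sum(axis=1).mean()

# Example: a random link matrix between two 10x10 maps (100 neurons each).
rng = np.random.default_rng(1)
W = rng.random((100, 100)) ** 8   # sparse-ish link strengths
print(average_fanout(W))          # one-to-many if the result exceeds 1.0
```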
Article
Full-text available
The refractive index of binary water-poly(ethylene glycol) systems of different molecular weights was measured at several temperatures from 283.15 to 363.15 K. The refractive index depends strongly on molecular weight, on mole fraction, and on temperature. The experimental data were correlated by a four-parameter relationship for the dependence of refractive index on the mole fraction of poly(ethylene glycol), and by a three-parameter relationship for the dependence on the molecular weight of poly(ethylene glycol). The data obtained can serve as input for subsequent studies of these binary systems, and especially for the study of colloidal systems of metal nanoparticles in a dispersion medium consisting of water and poly(ethylene glycol). These refractive-index data are particularly important for the determination of nanoparticle size and zeta potential by dynamic light scattering, and they have been employed here for correct and accurate determination of both quantities. The size of Au nanoparticles was determined by dynamic light scattering; for comparison, transmission electron microscopy and UV-Vis spectroscopy were also employed, in very good agreement. The optical properties of colloidal solutions of Au nanoparticles were analyzed with UV-Vis spectroscopy and showed a significant absorption peak maximum at 530 nm.
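As a rough illustration of such a correlation, here is a minimal sketch that fits a generic four-parameter polynomial of refractive index versus PEG mole fraction with scipy; the functional form and the data points are assumptions, since the abstract gives neither the authors' actual relationship nor their values.

```python
import numpy as np
from scipy.optimize import curve_fit

def n_model(x, a0, a1, a2, a3):
    """Generic four-parameter cubic in PEG mole fraction x
    (assumed form; the paper's actual relationship may differ)."""
    return a0 + a1 * x + a2 * x**2 + a3 * x**3

# Illustrative data: refractive index vs PEG mole fraction
# (1.333 is pure water; the remaining values are made up).
x = np.array([0.0, 0.01, 0.02, 0.05, 0.10])
n = np.array([1.333, 1.342, 1.350, 1.369, 1.392])

params, _ = curve_fit(n_model, x, n)
print(params)  # fitted a0..a3, usable for interpolation in DLS analysis
```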
Article
The paper presents an innovative packaging approach that allows a highly sensitive in-line Mach–Zehnder interferometer (IMZI) sensor to be used for accurate measurement of curvature and flexural strain, specifically in civil engineering. The sensor, which consists of two tapers, is protected within a polypropylene package so that it can survive harsh in-the-field conditions. The package design employs cost-effective materials without compromising the curvature and flexural strain sensitivities, which are 85.2 dB m⁻¹ and 0.0148 dB/μϵ, respectively. The accuracy of the measurement results is further verified by obtaining the flexural modulus of the steel, which is in good agreement with the theoretical value.
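Given the quoted sensitivities, a measured power change converts directly to flexural strain; a minimal sketch, assuming a linear sensor response over the measured range:

```python
# Quoted sensitivities of the packaged IMZI sensor (from the abstract).
CURVATURE_SENS_DB_PER_M = 85.2   # dB per m^-1 of curvature
STRAIN_SENS_DB_PER_UE = 0.0148   # dB per microstrain

def strain_from_power_change(delta_db: float) -> float:
    """Flexural strain (in microstrain) implied by a power change in dB,
    assuming the sensor response stays linear."""
    return delta_db / STRAIN_SENS_DB_PER_UE

print(strain_from_power_change(1.48))  # 1.48 dB -> 100 microstrain
```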