Ocular dynamics reveal articulatory processing at single-phoneme level during silent reading

2019 ◽  
Author(s):  
Alan Taitz ◽  
Diego E Shalom ◽  
Marcos A Trevisan

Silent reading is a cognitive operation that produces verbal content without vocal output. A relevant question is the extent to which this verbal content is processed as overt speech in the brain. To address this, we investigated signatures of articulatory processing during reading. We acquired sound, eye trajectories and vocal gestures during the reading of consonant-consonant-vowel (CCV) pseudowords. We found that the durations of the first fixations on the CCVs during silent reading correlate with the durations of the transitions between consonants when the CCVs are actually uttered. An articulatory model of the vocal system was implemented to show that consonantal transitions measure the articulatory effort required to produce the CCVs. These results demonstrate that silent reading is modulated by subtle articulatory features, such as the laryngeal abduction needed to devoice a single consonant or the reshaping of the vocal tract between successive consonants.

2014 ◽  
Vol 10 (5) ◽  
pp. 20140095 ◽  
Author(s):  
Kathleen Wermke ◽  
Johannes Hain ◽  
Klaus Oehler ◽  
Peter Wermke ◽  
Volker Hesse

The specific impact of sex hormones on brain development and acoustic communication is known from animal models. Sex steroid hormones secreted during early development play an essential role in hemispheric organization and the functional lateralization of the brain, e.g. for language. In animals, these hormones are well-known regulators of vocal motor behaviour. Here, the association between melody properties of infants' sounds and serum concentrations of sex steroids was investigated. Spontaneous crying was sampled in 18 healthy infants, with two samples taken on average at four and eight weeks, respectively. Blood samples were taken within a day of the crying samples. The fundamental frequency contour (melody) was analysed quantitatively, and the infants' frequency modulation skills were expressed by a melody complexity index (MCI). These skills provide prosodic primitives for later language. A hierarchical multiple regression approach revealed a significant, robust relationship between the individual MCIs and the unbound, bioactive fraction of oestradiol at four weeks, as well as with the four-to-eight-week difference in androstenedione. No robust relationship was found between the MCI and testosterone. Our findings suggest that oestradiol may have effects on the development and function of the auditory–vocal system in human infants that are as powerful as those in vocal-learning animals.


2013 ◽  
Vol 23 (4) ◽  
pp. R155-R156 ◽  
Author(s):  
Christopher I. Petkov ◽  
Pascal Belin

2002 ◽  
Vol 14 (5) ◽  
pp. 453-461 ◽  
Author(s):  
Yoshio Higashimoto ◽  
Hideyuki Sawada

We are developing a mechanical model of the human vocal system based on mechatronics technology. Although various approaches to vocal sound production have been actively studied, a mechanical construction has the advantage of realizing natural vocalization through its actual fluid dynamics. Voice generation in a mechanical system requires analysis of the behavior of the vocal cords and the vocal tract. Moreover, the fluid dynamics involved are unstable, which makes control difficult. Several motors manipulate the mechanical vocal system. In a learning phase, a neural network establishes the relations between motor positions and the produced vocal sounds via auditory feedback. During speech performance, the mechanical system is able to vocalize while vocal pitches and phonemes are adaptively controlled by auditory feedback. This paper presents the construction of the mechanical vocal system and its adaptive acquisition of vocalization skills.
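The adaptive pitch control described above can be sketched as a simple feedback loop: listen to the produced pitch, compare it with the target, and nudge the motor. The pitch model and gain below are invented for illustration, not the paper's actual mechatronic system:

```python
# Toy sketch of auditory-feedback pitch control (all numbers invented).

def produced_pitch(motor_pos: float) -> float:
    """Hypothetical plant: maps a motor position to a fundamental frequency (Hz)."""
    return 80.0 + 30.0 * motor_pos

def adapt_pitch(target_hz: float, motor_pos: float = 0.0,
                gain: float = 0.02, steps: int = 200) -> float:
    """Proportional feedback: listen, compare with the target, correct the motor."""
    for _ in range(steps):
        error = target_hz - produced_pitch(motor_pos)  # auditory feedback signal
        motor_pos += gain * error                      # corrective motor command
    return motor_pos

pos = adapt_pitch(220.0)  # aim for 220 Hz
print(f"motor position {pos:.3f} -> {produced_pitch(pos):.1f} Hz")
```

The real system learns the plant itself with a neural network instead of assuming a known pitch-to-motor mapping, but the feedback structure is the same.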


2005 ◽  
Vol 6 (2) ◽  
pp. 253-286 ◽  
Author(s):  
Jihène Serkhane ◽  
Jean-Luc Schwartz ◽  
Pierre Bessière

Speech is a perceptuo-motor system. A natural computational modeling framework is provided by cognitive robotics, or more precisely speech robotics, which is also based on embodiment, multimodality, development, and interaction. This paper describes the bases of a virtual baby robot, which consists of an articulatory model that integrates the non-uniform growth of the vocal tract, a set of sensors, and a learning model. The articulatory model delivers the sagittal contour, lip shape and acoustic formants from seven input parameters that characterize the configurations of the jaw, the tongue, the lips and the larynx. To simulate the growth of the vocal tract from birth to adulthood, a process modifies the longitudinal dimension of the vocal tract shape as a function of age. The auditory system of the robot comprises a “phasic” system for event detection over time and a “tonic” system to track formants. The model of visual perception specifies the basic lip characteristics: height, width, area and protrusion. The orosensorial channel, which provides the tactile sensation on the lips, the tongue and the palate, is elaborated as a model for the prediction of tongue-palatal contacts from articulatory commands. Learning involves Bayesian programming, with two phases: (i) specification of the variables, decomposition of the joint distribution and identification of the free parameters through exploration of a learning set, and (ii) utilization, which relies on questions about the joint distribution. Two studies were performed with this system, each focusing on one of the two basic mechanisms that ought to be at work in the initial periods of speech acquisition, namely vocal exploration and vocal imitation. The first study attempted to assess infants' motor skills before and at the beginning of canonical babbling. It used the model to infer the acoustic regions, the articulatory degrees of freedom and the vocal tract shapes that actual infants most likely explore, according to their vocalizations. The second study aimed to simulate data reported in the literature on early vocal imitation, in order to test whether and how the robot was able to reproduce them and to gain some insight into the actual cognitive representations that might be involved in this behavior. Speech modeling in a robotics framework should contribute to a computational approach to sensori-motor interactions in speech communication, which seems crucial for future progress in the study of speech and language ontogeny and phylogeny.
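The two Bayesian-programming phases (specification and identification of a decomposed joint distribution, then utilization via questions about it) can be illustrated on a toy discrete problem; the variables and data below are invented and much simpler than the robot's actual model:

```python
# Toy Bayesian program: P(A, S) = P(A) * P(S | A), where A is an articulatory
# command and S a perceived sound (both invented for illustration).
from collections import Counter

# Learning set: (articulatory command, perceived sound) pairs.
data = [("open", "a"), ("open", "a"), ("open", "e"),
        ("close", "i"), ("close", "i"), ("close", "e")]

# Phase (i), identification: estimate P(A) and P(S | A) by counting.
p_a = Counter(a for a, _ in data)
p_s_given_a = {a: Counter(s for a2, s in data if a2 == a) for a in p_a}

def posterior(sound: str) -> dict:
    """Phase (ii), utilization: answer the question P(A | S = sound) via Bayes' rule."""
    joint = {a: p_a[a] * p_s_given_a[a][sound] for a in p_a}
    total = sum(joint.values())
    return {a: v / total for a, v in joint.items()}

print(posterior("a"))  # sound produced only by "open": posterior concentrates there
print(posterior("e"))  # ambiguous sound: posterior split between commands
```

The robot's variables are continuous articulatory and sensory parameters rather than two discrete symbols, but the specify-identify-question structure is the same.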


2015 ◽  
Vol 9 (1) ◽  
pp. 52-87 ◽  
Author(s):  
STEVE CHANDLER

In recent years proponents of usage-based linguistics have singled out ‘categorization’ as possibly the fundamental cognitive operation underlying the acquisition and use of language. Despite this increasing appeal to the importance of categorization, few researchers have yet offered explicit interpretations of how linguistic categories might be represented in the brain other than vague allusions to prototype theory, especially as implemented in connectionist-like frameworks. In this paper, I discuss in some detail the implications of superimposing the theoretical representations of linguistic structures onto domain-general models of categorization. I first review the evidence that instance-based, or exemplar-based, models of categorization provide empirically and theoretically better models of both domain-general categorization and of linguistic categorization than do the most commonly cited alternative models. I then argue that of the three exemplar-based models currently being applied to linguistic data, Skousen’s Analogical Model (AM) appears to provide the simplest, most straightforward account of the data and that it appears to be fully compatible with our current understanding of the psychological capabilities and operations that underlie categorization behavior.


1981 ◽  
Vol 46 (4) ◽  
pp. 348-352 ◽  
Author(s):  
Jeri A. Logemann ◽  
Hilda B. Fisher

Consonant articulation patterns of 200 Parkinson patients were defined by two expert listeners from high-fidelity tape recordings of the sentence version of the Fisher-Logemann Test of Articulation Competence (1971). Phonetic transcription and phonetic feature analysis were the methodologies used. Of the 200 patients, 90 (45%) exhibited some misarticulations. Phonetic data on these 90 dysarthric Parkinson patients revealed articulatory errors highly consistent in detailed production characteristics. Manner changes predominated. The phoneme classes most affected were the stop-plosives, affricates, and fricatives. In terms of perception features (Chomsky & Halle, 1968), the stop-plosives and affricates, which are normally [– continuant], were produced as [+ continuant] fricatives; fricatives that are [+ strident] were produced as [– strident]. There is no implication, however, that Parkinsonism involves a perception deficit. Analysis of the articulatory deficit reveals inadequate tongue elevation to achieve complete closure on stop-plosives and affricates, which can be expressed in production features as a change from [+ stop] to [+ fricative]. There was also inadequately close constriction of the airway in lingual fricatives, which in articulatory features can be expressed as a change from [+ fricative] to [– fricative]. Both the incomplete contact for stops and the partial constriction for fricatives represent an inadequate narrowing of the vocal tract at the point of articulation. These results are discussed in relation to recent EMG studies and other physiologic examinations of Parkinsonian dysarthria.
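Computationally, this kind of feature analysis amounts to tallying target-versus-produced feature changes over a set of transcribed errors. A sketch with hypothetical transcription pairs (not the study's data):

```python
# Sketch (hypothetical transcriptions): tally the manner changes the study
# reports, e.g. stops produced as [+continuant] fricatives, and stridents
# produced as [-strident].

# (target phoneme, produced phoneme) pairs from an imagined transcription.
errors = [("p", "ɸ"), ("t", "s"), ("k", "x"), ("tʃ", "ʃ"), ("s", "θ")]

# Minimal feature tables for just these segments.
continuant = {"p": False, "t": False, "k": False, "tʃ": False,
              "ɸ": True, "s": True, "x": True, "ʃ": True, "θ": True}
strident = {"p": False, "t": False, "k": False, "tʃ": True,
            "ɸ": False, "s": True, "x": False, "ʃ": True, "θ": False}

spirantized = sum(1 for tgt, prod in errors
                  if not continuant[tgt] and continuant[prod])
destridented = sum(1 for tgt, prod in errors
                   if strident[tgt] and not strident[prod])

print(f"{spirantized} of {len(errors)} errors: [-continuant] -> [+continuant]")
print(f"{destridented} of {len(errors)} errors: [+strident] -> [-strident]")
```

In practice the feature tables cover the full phoneme inventory and the tallies are broken down per phoneme class, but the counting logic is the same.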


1999 ◽  
Vol 3 (1) ◽  
pp. 49-77 ◽  
Author(s):  
Louis-Jean Boë ◽  
Shinji Maeda ◽  
Jean-Louis Heim

Since Lieberman and Crelin (1971) postulated that Neandertals were a speechless species, the speech capability of Neandertals has been hotly debated for over 30 years and remains a controversial question. These authors claimed that the acquisition of a low laryngeal position during evolution is a necessary condition for a vowel space large enough to realize the vocalic contrasts necessary for speech. Neandertals, they argued, did not possess this anatomical base and therefore could not speak, presumably contributing to their extinction. In this study, we refute Lieberman and Crelin's theory by showing, first, through the analysis of biometric data, that the estimated laryngeal position for two Neandertals is relatively high, but not as high as claimed by the two authors. In fact, the length ratio of the pharyngeal cavity to the oral cavity, an acoustically important parameter, of the Neandertals corresponds to that of a modern adult female or of a child. Second, using an anthropomorphic articulatory model, the potential maximal vowel space, estimated by varying the model morphology from a newborn to a child, an adult female and an adult male, did not show any relevant variation. We infer that a Neandertal could have had a vowel space no smaller than that of a modern human. Our study is strictly limited to the morphological aspects of the vocal tract. We therefore cannot offer any definitive answer to the question of whether Neandertals actually spoke. But we feel safe in saying that Neandertals were not morphologically handicapped for speech.
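Why overall vocal-tract morphology matters acoustically can be situated with a textbook approximation (not the authors' anthropomorphic model): a uniform tube closed at the glottis and open at the lips resonates at F_n = (2n − 1)c / 4L, so overall length L rescales all formants, while vowel contrasts depend on how the tract is partitioned between pharyngeal and oral cavities:

```python
# Textbook quarter-wave-tube sketch, not the authors' articulatory model:
# resonances of a uniform tube closed at one end (glottis) and open at the
# other (lips). Tract lengths below are rough illustrative values.
C = 35000.0  # speed of sound in warm, humid air, cm/s

def tube_formants(length_cm: float, n: int = 3) -> list:
    """First n resonances (Hz): F_k = (2k - 1) * C / (4 * L)."""
    return [round((2 * k - 1) * C / (4 * length_cm)) for k in range(1, n + 1)]

print(tube_formants(17.5))  # ~adult male tract: [500, 1500, 2500] Hz
print(tube_formants(12.0))  # shorter, child-like tract: all formants shift up
```

This is why the pharyngeal-to-oral length ratio, rather than absolute tract length, is the acoustically decisive parameter for the size of the vowel space.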


1998 ◽  
Vol 8 (1) ◽  
pp. 69-94 ◽  

Spoken language is one of the defining human characteristics — the crucial accomplishment which makes us human and separates us from other species. Naturally enough, the origins of this accomplishment — which must lie somewhere back in the Palaeolithic — have been the subject of lively and often heated debate, not least since speech leaves no direct material residues. Many people have sought to resolve the question by careful analysis of the material remains of early hominids. But do patterns of tool-making or evidence of sophisticated subsistence strategies really provide an adequate base from which to deduce the presence of linguistic ability? Furthermore, is language inextricably bound up with the ability to vocalize and to speak? Are studies of the vocal tract of Neanderthals or Homo erectus really relevant to the question of language origins?

The wide diversity of view on the antiquity of human spoken language is very clear from the brief contributions which make up this feature. On the one hand is the evidence for the presence of Broca's and Wernicke's areas in the brain of Homo habilis around 2 million years ago. Does this provide grounds for believing that Homo habilis could speak? Did the use of tools as icons by Homo erectus play a key role in the development of human spoken language? Or should we instead go along with the growing consensus — supported by many linguists — that spoken language is a late addition to the range of human abilities, originating along with fully modern humans only within the last 200,000 years? And dare we go even further, and nominate Africa as the locus of language origin?

The time may come when we are able to specify not only when human spoken language first developed, but also where. For the present, however, the debate shows no sign of imminent resolution.
In the pages which follow, we bring together the views of archaeologists from a number of different backgrounds; but we begin with a linguist's perspective, and seven propositions to set the scene for the archaeological enquiry.


2019 ◽  
pp. 030573561988368 ◽  
Author(s):  
Camila Bruder ◽  
Clemens Wöllner

Subvocalization has been described as a series of attenuated movements of the vocal tract during silent reading and imagination. This two-part study investigated covert laryngeal activation in singers during the perception and imagination of music and text. In the first part, 155 singers responded to an online survey investigating their self-perceived corporal activation when listening to live or recorded singing. Respondents reported frequent activation of the larynx and other body parts in response to live singing and, to a lesser extent, to recordings. In the second part, an exploratory experiment investigated physiological correlates of subvocalization in singers during the perception and imagery of melody and text stimuli, using simultaneous measurements of laryngeal activation both externally, with surface electromyography, and internally, with nasolaryngoscopy. The experimental results indicate that subvocalization occurred during imagination, but not during listening, for both kinds of stimuli, and suggest that laryngoscopy is the more sensitive method for detecting subvocalization in singers. The results may point to vocal resonance or empathy in the perception of singers.
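A crude version of the EMG comparison is a threshold test of trial amplitudes against a resting baseline; all numbers below are invented and the criterion is a deliberately simple stand-in for the study's actual analysis:

```python
# Sketch (invented numbers): compare surface-EMG amplitude during listening
# vs. imagery, relative to a resting baseline, as a crude subvocalization check.
from statistics import mean, stdev

baseline = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95]     # resting RMS amplitude (a.u.)
listening = [1.02, 0.98, 1.1, 1.0, 1.04, 0.97]  # listening trials
imagery = [1.6, 1.8, 1.7, 1.9, 1.65, 1.75]      # imagery trials

# Simple criterion: mean trial amplitude more than 2 SD above resting mean.
threshold = mean(baseline) + 2 * stdev(baseline)

def subvocalizing(trial_values) -> bool:
    return mean(trial_values) > threshold

print("listening:", subvocalizing(listening))  # stays near baseline
print("imagery:  ", subvocalizing(imagery))    # elevated, crosses threshold
```

This mirrors the reported pattern (activation during imagery but not listening) only in shape; the study's conclusions rest on the combined EMG and nasolaryngoscopy measurements.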

