Discourse coherence and gesture interpretation

Gesture
2009
Vol 9 (2)
pp. 147-180
Author(s):  
Alex Lascarides ◽  
Matthew Stone

In face-to-face conversation, communicators orchestrate multimodal contributions that meaningfully combine the linguistic resources of spoken language and the visuo-spatial affordances of gesture. In this paper, we characterise this meaningful combination in terms of the COHERENCE of gesture and speech. Descriptive analyses illustrate the diverse ways gesture interpretation can supplement and extend the interpretation of prior gestures and accompanying speech. We draw certain parallels with the inventory of COHERENCE RELATIONS found in discourse between successive sentences. In both domains, we suggest, interlocutors make sense of multiple communicative actions in combination by using these coherence relations to link the actions’ interpretations into an intelligible whole. Descriptive analyses also emphasise the improvisation of gesture; the abstraction and generality of meaning in gesture allows communicators to interpret gestures in open-ended ways in new utterances and contexts. We draw certain parallels with interlocutors’ reasoning about underspecified linguistic meanings in discourse. In both domains, we suggest, coherence relations facilitate meaning-making by RESOLVING the meaning of each communicative act through constrained inference over information made salient in the prior discourse. Our approach to gesture interpretation lays the groundwork for formal and computational models that go beyond previous approaches based on compositional syntax and semantics, in better accounting for the flexibility and the constraints found in the interpretation of speech and gesture in conversation. At the same time, it shows that gesture provides an important source of evidence to sharpen the general theory of coherence in communication.

2019
Vol 2 (3) ◽  
pp. 94-102
Author(s):  
Lance E Mason

The present sociopolitical environment in the United States is perpetually mediated and beset with information from innumerable sources. This paper argues that Dewey’s conception of communication as a mutual act of meaning-making holds insights for explaining the connections between pervasive mediation and political polarization, in addition to understanding why political discourse has become more degrading in recent years. It also points the way toward viable solutions by arguing for the reorientation of schools toward valuable living experiences that are becoming less pronounced in the broader culture, such as sustained face-to-face engagement on matters of social import.


2020
Author(s):  
Marlen Fröhlich ◽  
Natasha Bartolotta ◽  
Caroline Fryns ◽  
Colin Wagner ◽  
Laurene Momon ◽  
...  

Abstract From early infancy, human face-to-face communication is “multimodal”, comprising a plethora of interlinked articulators and sensory modalities. Although there is also growing evidence for this in nonhuman primates, the functions of integrating articulators (i.e. multiplex or multi-articulator acts) and channels (i.e. multimodal or multi-sensory acts) remain poorly understood. Here, we studied close-range social interactions within and beyond mother-infant pairs of Bornean and Sumatran orang-utans living in wild and captive settings, to examine to what extent species, setting and recipient-dependent factors affected the use of and responses to multi-sensory as well as multi-articulator communication. Results showed that both multi-sensory and multi-articulator acts were more effective at eliciting responses (i.e. “apparently satisfactory outcomes”) than their respective uni-component parts, and generally played a larger role in wild populations. However, only multi-articulator acts were used more when the presumed goal did not match the dominant outcome for a specific communicative act, and were more common among non-mother-infant dyads and Sumatrans across settings. We suggest that communication through multiple sensory channels primarily facilitates effectiveness, whereas a flexible combination of articulators is relevant when social tolerance and interaction outcomes are less predictable. These different functions underscore the importance of distinguishing between these forms of multi-component communication.


2020
Vol 31 (1) ◽  
pp. 233-247
Author(s):  
Hun S Choi ◽  
William D Marslen-Wilson ◽  
Bingjiang Lyu ◽  
Billi Randall ◽  
Lorraine K Tyler

Abstract Communication through spoken language is a central human capacity, involving a wide range of complex computations that incrementally interpret each word into meaningful sentences. However, surprisingly little is known about the spatiotemporal properties of the complex neurobiological systems that support these dynamic predictive and integrative computations. Here, we focus on prediction, a core incremental processing operation that guides the interpretation of each upcoming word with respect to its preceding context. To investigate the neurobiological basis of how semantic constraints change and evolve as each word in a sentence accumulates over time, we analyzed, in a spoken sentence comprehension study, the multivariate patterns of neural activity recorded by source-localized electro/magnetoencephalography (EMEG), using computational models that capture the semantic constraints the prior context places on each upcoming word. Our results provide insights into predictive operations subserved by different regions within a bi-hemispheric system, which over time generate, refine, and evaluate constraints on each word as it is heard.


2020
Author(s):  
Ludivine Crible ◽  
Sílvia Gabarró-López

Abstract This paper provides the first contrastive analysis of a coherence relation (viz. addition) and its connectives across a sign language (French Belgian Sign Language) and a spoken language (French), both used in the same geographical area. The analysis examines the frequency and types of connectives that can express an additive relation, in order to contrast its “markedness” in the two languages, that is, whether addition is marked by dedicated connectives or by ambiguous, polyfunctional ones. Furthermore, we investigate the functions of the most frequent additive connective in each language (namely et and the sign SAME), starting from the observation that most connectives are highly polyfunctional. This analysis intends to show which functions are compatible with the meaning of addition in spoken and signed discourse. Despite a common core of shared discourse functions, the equivalence between et and SAME is only partial and relates to a difference in their semantics.


Author(s):  
Lyn Robertson

This chapter explores the acquisition of spoken language and literacy in children with hearing loss whose auditory access through the use of hearing technology enables them to listen, and it examines the relationships among language, thought, and print that explain the role of spoken language as the foundation for literacy. It defines reading and writing as thinking processes that make use of symbol systems representative of spoken language, and gives attention to the numerous cueing systems and conventions comprising representations of meaning. Drawing from cognitive psychology, linguistics, psycholinguistics, sociolinguistics, literary criticism, and critical traditions developed over time through study of people with typical hearing, this chapter argues that meaning-making resides in the individual in the presence of symbols both heard and seen, and argues for maximizing spoken language acquisition in children with hearing loss so as to prepare them for lifelong literacy and language use.


2012
Vol 18 (5)
pp. 267-272
Author(s):  
Gabriella Constantinescu

Auditory-Verbal Therapy (AVT) is an effective early intervention for children with hearing loss. The Hear and Say Centre in Brisbane offers AVT sessions to families soon after diagnosis, and about 20% of the families in Queensland participate via PC-based videoconferencing (Skype). Parent and therapist satisfaction with the telemedicine sessions was examined by questionnaire. All families had been enrolled in the telemedicine AVT programme for at least six months. Their average distance from the Hear and Say Centre was 600 km. Questionnaires were completed by 13 of the 17 parents and all five therapists. Parents and therapists generally expressed high satisfaction in the majority of the sections of the questionnaire, e.g. most rated the audio and video quality as good or excellent. All parents felt comfortable or as comfortable as face-to-face when discussing matters with the therapist online, and were satisfied or as satisfied as face-to-face with their level and their child's level of interaction/rapport with the therapist. All therapists were satisfied or very satisfied with the telemedicine AVT programme. The results demonstrate the potential of telemedicine service delivery for teaching listening and spoken language to children with hearing loss in rural and remote areas of Australia.


English Today
2007
Vol 23 (2)
pp. 19-26
Author(s):  
John Damaso ◽  
Colleen Cotter

Abstract In traditional English lexicography, individual dictionary editors have had ultimate control over the selection, meaning, and illustration of words, and extensive collaboration with contributors has been limited. However, Internet technologies that easily permit exchanges between a user and a database have allowed a new type of dictionary online: one that is built by the collaboration of contributing end-users, allowing users who are not trained lexicographers to engage in the actual making of dictionaries. We discuss here a popular online slang dictionary, UrbanDictionary.com (UD), to illustrate how traditional lexicographic principles are joined with Web-only communication technologies to provide a context for collaborative engagement and meaning-making, and to note the many characteristics and functions shared with traditional print dictionaries. Significantly, UD captures what most traditional English dictionaries fall short of: both recording ephemeral everyday spoken language and representing popular views of meaning. By relying on the users of language to select and define words for a dictionary, UD – which defines more than one million words – has in effect influenced both access to and formulation of the lexis.


2018
Vol 60 ◽  
pp. 73-90
Author(s):  
Pranav Anand ◽  
Maziar Toosarvandani

Discourses in the historical (or narrative) use of the simple present in English prohibit backshifting, though they allow forward sequencing. Unlike both reference time theories and discourse coherence theories of these temporal inferences, we propose that backshifting has a different source from narrative progression. In particular, we argue that backshifting arises through anaphora to a salient event in the preceding discourse.
Keywords: tense, discourse coherence, coherence relations, perspective.


2020
Vol 9 ◽  
pp. 01-06
Author(s):  
Heliana Mello ◽  
Lúcia Ferrari ◽  
Bruno Rocha

Speech and gestures meet at their departure point, which is actionality. The same departing point keeps the two channels connected through their execution in the creation of meaning and interactivity. Both speech and gestures require segmentation in order to be studied and understood scientifically, as knowing what the units of analysis are is crucial to the scientific endeavor. Prominence is a characteristic carried by prosody (be it defined functionally, physically or cognitively), as well as by several gestural acts, such as widening of the eyes, increased speed in hand motion, and head tilting, among others. This link allows us to join multimodality, segmentation and prominence in speech as a topic for a scientific journal. As our knowledge about spoken language grows, thanks to empirically and experimentally based studies, the never-ending refinement of methodologies becomes necessary, as does the broadening of their boundaries. The understanding that gestuality actively interacts and partakes in communication is not a novel perception, as gesture forms a single system with speech and is an integral part of the communicative act (Kendon, 1980; McNeill, 1992). However, exactly how this interaction occurs is still not fully understood. Are gestures and speech additive, parallel, complementary? How are they linked in terms of the cognitive-neurological and motor routines involved?
