Visual and auditory information shed light on savings mechanisms

2019 ◽  
Author(s):  
Olivier White ◽  
Marie Barbiero ◽  
Quentin Maréchal ◽  
Jean-Jacques Orban de Xivry

Abstract: Successful completion of natural motor actions relies on feedback information delivered through different modalities, including vision and audition. The nervous system weights these sensory inflows according to context, and they contribute to the calibration and maintenance of internal models. Surprisingly, the influence of auditory feedback on the control of fine motor actions has scarcely been investigated, either alone or together with visual feedback. Here, we tested how 46 participants learned a reaching task when they were provided with visual, auditory, or both types of feedback about terminal error. In the VA condition, participants received visual (V) feedback during learning and auditory (A) feedback during relearning. The AV group received the opposite treatment. A third group received both visual and auditory feedback in both learning periods. Our experimental design allowed us to assess how learning with one modality transferred to relearning with another modality. We found that adaptation was high in the visual modality, both during learning and relearning. It was absent during learning under the auditory modality but present during relearning (when the preceding learning period had used visual feedback). An additional experiment suggests that transfer of adaptation between the visual and auditory modalities occurs through a memory of the learned reaching direction that acts as an attractor for the reaching direction, not via error-based mechanisms or an explicit strategy. This memory of the learned reaching direction allowed participants to learn a task that they otherwise could not, independently of any memory of errors or explicit strategy.
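The attractor account described in this abstract can be illustrated with a minimal trial-by-trial sketch. This is a hedged illustration, not the authors' model: the linear update rule, the attraction gain `k`, and all numeric values are assumptions introduced here for clarity.

```python
# Sketch of a memory-as-attractor update: the reach direction on each
# trial is pulled toward a stored memory of the previously learned
# direction, with no error signal involved. All parameters are
# illustrative assumptions.

def next_direction(current_deg: float, memory_deg: float, k: float = 0.2) -> float:
    """One trial update: drift toward the remembered learned direction."""
    return current_deg + k * (memory_deg - current_deg)

def relearn(start_deg: float, memory_deg: float, trials: int = 20, k: float = 0.2):
    """Simulate a relearning block driven only by the direction memory."""
    d = start_deg
    trajectory = [d]
    for _ in range(trials):
        d = next_direction(d, memory_deg, k)
        trajectory.append(d)
    return trajectory
```

With `k = 0.2` and a remembered direction of 30 degrees, the reach direction converges toward 30 degrees over a few dozen trials without any error feedback, which is the sense in which the memory acts as an attractor rather than an error-based mechanism.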

2017 ◽  
Vol 28 (03) ◽  
pp. 222-231 ◽  
Author(s):  
Riki Taitelbaum-Swead ◽  
Michal Icht ◽  
Yaniv Mama

Abstract: In recent years, the effect of cognitive abilities on the achievements of cochlear implant (CI) users has been evaluated. Some studies have suggested that gaps between CI users and normal-hearing (NH) peers in cognitive tasks are modality specific and occur only in auditory tasks. The present study focused on the effect of learning modality (auditory, visual) and auditory feedback on word memory in young adults who were prelingually deafened and received CIs before the age of 5 yr, and in their NH peers. A production effect (PE) paradigm was used, in which participants learned familiar study words by vocal production (saying aloud) or by no-production (silent reading or listening). Words were presented (1) in the visual modality (written) and (2) in the auditory modality (heard). CI users performed the visual condition twice, once with the implant ON and once with it OFF. All conditions were followed by free recall tests. Twelve young adults, long-term CI users, implanted between ages 1.7 and 4.5 yr, who scored ≥50% on a monosyllabic consonant-vowel-consonant open-set test with their implants, were enrolled. A group of 14 age-matched NH young adults served as the comparison group. For each condition, we calculated the proportion of study words recalled. Mixed-measures analyses of variance were carried out with group (NH, CI) as a between-subjects variable and learning condition (aloud or silent reading) as a within-subject variable. Following this, paired-sample t tests were used to evaluate the PE size (the difference between aloud and silent words) and overall recall ratios (aloud and silent words combined) in each of the learning conditions. With visual word presentation, young adults with CIs (regardless of implant status, CI-ON or CI-OFF) showed memory performance (and a similar PE) comparable to NH peers. However, with auditory presentation, young adults with CIs showed poorer memory for nonproduced words (hence a larger PE) relative to their NH peers. The results support the construct that young adults with CIs benefit more from learning via the visual modality (reading) than the auditory modality (listening). Importantly, vocal production can largely improve auditory word memory, especially for the CI group.


2014 ◽  
Vol 26 (12) ◽  
pp. 2827-2839 ◽  
Author(s):  
Maria J. S. Guerreiro ◽  
Joaquin A. Anguera ◽  
Jyoti Mishra ◽  
Pascal W. M. Van Gerven ◽  
Adam Gazzaley

Selective attention involves top–down modulation of sensory cortical areas, such that responses to relevant information are enhanced whereas responses to irrelevant information are suppressed. Suppression of irrelevant information, unlike enhancement of relevant information, has been shown to be deficient in aging. Although these attentional mechanisms have been well characterized within the visual modality, little is known about these mechanisms when attention is selectively allocated across sensory modalities. The present EEG study addressed this issue by testing younger and older participants in three different tasks: Participants attended to the visual modality and ignored the auditory modality, attended to the auditory modality and ignored the visual modality, or passively perceived information presented through either modality. We found overall modulation of visual and auditory processing during cross-modal selective attention in both age groups. Top–down modulation of visual processing was observed as a trend toward enhancement of visual information in the setting of auditory distraction, but no significant suppression of visual distraction when auditory information was relevant. Top–down modulation of auditory processing, on the other hand, was observed as suppression of auditory distraction when visual stimuli were relevant, but no significant enhancement of auditory information in the setting of visual distraction. In addition, greater visual enhancement was associated with better recognition of relevant visual information, and greater auditory distractor suppression was associated with a better ability to ignore auditory distraction. There were no age differences in these effects, suggesting that when relevant and irrelevant information are presented through different sensory modalities, selective attention remains intact in older age.


Author(s):  
Wakana Ishihara ◽  
Karen Moxon ◽  
Sheryl Ehrman ◽  
Mark Yarborough ◽  
Tina L. Panontin ◽  
...  

This systematic review addresses the plausibility of using novel feedback modalities for brain–computer interfaces (BCIs) and attempts to identify the best feedback modality on the basis of effectiveness or learning rate. Of the included studies, 100% tested visual feedback, 31.6% tested auditory feedback, 57.9% tested tactile feedback, and 21.1% tested proprioceptive feedback. Visual feedback was included in every study design because it was intrinsic to the response of the task (e.g., seeing a cursor move). However, when used alone, it was not very effective at improving accuracy or learning. Proprioceptive feedback was most successful at increasing the effectiveness of motor imagery BCI tasks involving neuroprosthetics. The use of auditory and tactile feedback yielded mixed results. The limitations of the current review and recommendations for further study are discussed.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Candice Frances ◽  
Eugenia Navarra-Barindelli ◽  
Clara D. Martin

Abstract: Language perception studies on bilinguals often show that words that share form and meaning across languages (cognates) are easier to process than words that share only meaning. This facilitatory phenomenon is known as the cognate effect. Most previous studies have shown this effect visually, whereas the auditory modality, as well as the interplay between type of similarity and modality, remains largely unexplored. In this study, highly proficient late Spanish–English bilinguals carried out a lexical decision task in their second language, both visually and auditorily. Words had high or low phonological and orthographic similarity, fully crossed. We also included orthographically identical words (perfect cognates). Our results suggest that similarity in the same modality (i.e., orthographic similarity in the visual modality and phonological similarity in the auditory modality) leads to improved signal detection, whereas similarity across modalities hinders it. We provide support for the idea that perfect cognates are a special category within cognates. The results suggest a need for a conceptual and practical separation between types of similarity in cognate studies. The theoretical implication is that the representations of items are active in both modalities of the non-target language during language processing, which needs to be incorporated into current processing models.


2000 ◽  
Vol 84 (4) ◽  
pp. 1708-1718 ◽  
Author(s):  
Andrew B. Slifkin ◽  
David E. Vaillancourt ◽  
Karl M. Newell

The purpose of the current investigation was to examine the influence of intermittency in visual information processes on intermittency in the control of continuous force production. Adult human participants were required to maintain force at, and minimize variability around, a force target over an extended duration (15 s), while the intermittency of on-line visual feedback presentation was varied across conditions. This was accomplished by varying the frequency of successive force-feedback deliveries presented on a video display. As a function of a 128-fold increase in feedback frequency (0.2 to 25.6 Hz), performance quality improved according to hyperbolic functions (e.g., force variability decayed), reaching asymptotic values near the 6.4-Hz feedback frequency level. Thus, the briefest interval over which visual information could be integrated and used to correct errors in motor output was approximately 150 ms. The observed reductions in force variability were correlated with parallel declines in spectral power at about 1 Hz in the frequency profile of force output. In contrast, power at higher frequencies in the force output spectrum was uncorrelated with increases in feedback frequency. Thus, there was a considerable lag between the generation of motor output corrections (1 Hz) and the processing of visual feedback information (6.4 Hz). To reconcile these differences in visual and motor processing times, we proposed a model in which error information is accumulated by visual information processes at a maximum frequency of 6.4 samples per second, and the motor system generates a correction on the basis of the accumulated information at the end of each 1-s interval.
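The proposed accumulator model lends itself to a small simulation. The sketch below is an illustration under stated assumptions: the 6.4-Hz sampling cap and 1-s correction interval come from the abstract, but the Gaussian motor noise, the correction gain, and the averaging of accumulated error samples are choices made here, not the authors' implementation.

```python
# Hedged sketch of the accumulator model: visual processes sample the
# force error at up to 6.4 Hz, and the motor system issues one
# correction per 1-s interval based on the mean of the accumulated
# samples. Noise level, gain, and duration are illustrative assumptions.
import random

VISUAL_RATE_HZ = 6.4        # max visual error-sampling frequency (from the abstract)
CORRECTION_PERIOD_S = 1.0   # motor correction interval (from the abstract)
DT = 1.0 / VISUAL_RATE_HZ   # simulation time step

def simulate(feedback_hz, target=10.0, duration_s=15.0, gain=0.8, noise=0.3, seed=0):
    """Return force samples for one trial at a given feedback frequency."""
    rng = random.Random(seed)
    force = 0.0
    samples, accumulated = [], []
    # Error can only be sampled at min(feedback_hz, VISUAL_RATE_HZ):
    sample_every = max(1, round(VISUAL_RATE_HZ / min(feedback_hz, VISUAL_RATE_HZ)))
    steps_per_correction = round(CORRECTION_PERIOD_S / DT)
    for step in range(int(duration_s / DT)):
        force += rng.gauss(0.0, noise)            # motor noise drifts the force
        if step % sample_every == 0:              # visual sampling of the error
            accumulated.append(target - force)
        if step and step % steps_per_correction == 0 and accumulated:
            # one correction per second, based on accumulated error info
            force += gain * (sum(accumulated) / len(accumulated))
            accumulated.clear()
        samples.append(force)
    return samples

def variability(samples):
    """Standard deviation of the force trace around its own mean."""
    m = sum(samples) / len(samples)
    return (sum((s - m) ** 2 for s in samples) / len(samples)) ** 0.5
```

Because visual sampling is capped at 6.4 Hz in this sketch, feedback frequencies above that level produce identical behavior, mirroring the asymptote observed near the 6.4-Hz feedback condition.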


Author(s):  
Aaron Crowson ◽  
Zachary H. Pugh ◽  
Michael Wilkinson ◽  
Christopher B. Mayhorn

The development of head-mounted display virtual reality systems (e.g., Oculus Rift, HTC Vive) has resulted in an increasing need to represent the physical world while immersed in the virtual one. Current research has focused on representing static objects in the physical room, but there has been little research into notifying VR users of changes in the environment. This study investigates how different sensory modalities affect the noticeability and comprehension of notifications designed to alert head-mounted display users when a person enters their area of use. In addition, this study investigates how an orientation-type notification aids perception of alerts that manifest outside a virtual reality user's visual field. Results of a survey indicated that participants perceived the auditory modality as more effective regardless of notification type. An experiment corroborated these findings for the person notifications; however, the visual modality was in practice more effective for orientation notifications.


2014 ◽  
Vol 18 (3) ◽  
pp. 490-501 ◽  
Author(s):  
ROBERTO FILIPPI ◽  
JOHN MORRIS ◽  
FIONA M. RICHARDSON ◽  
PETER BRIGHT ◽  
MICHAEL S.C. THOMAS ◽  
...  

Studies measuring inhibitory control in the visual modality have shown a bilingual advantage in both children and adults. However, there is a lack of developmental research on inhibitory control in the auditory modality. This study compared the comprehension of active and passive English sentences in 7- to 10-year-old bilingual and monolingual children. The task was to identify the agent of a sentence in the presence of verbal interference. The target sentence was cued by the gender of the speaker. Children were instructed to focus on the sentence in the target voice and ignore the distractor sentence. Results indicate that bilinguals are more accurate than monolinguals in comprehending syntactically complex sentences in the presence of linguistic noise. This supports previous findings with adult participants (Filippi, Leech, Thomas, Green & Dick, 2012). We therefore conclude that the bilingual advantage in interference control begins early in life and is maintained throughout development.


1981 ◽  
Vol 24 (3) ◽  
pp. 351-357 ◽  
Author(s):  
Paula Tallal ◽  
Rachel Stark ◽  
Clayton Kallman ◽  
David Mellits

A battery of nonverbal perceptual and memory tests was given to 35 language-impaired (LI) and 38 control subjects. Tests were given in three modalities: auditory, visual, and cross-modal (auditory and visual). The purpose was to reexamine some nonverbal perceptual and memory abilities of LI children as a function of age and modality of stimulation. Results failed to replicate previous findings of a temporal processing deficit specific to the auditory modality in LI children. The LI group made significantly more errors than controls, regardless of modality of stimulation, when two-item sequences were presented rapidly or when more than two stimuli were presented in series. However, further analyses resolved this apparent conflict between the present and earlier studies by demonstrating that age is an important variable underlying the modality specificity of perceptual performance in LI children. Whereas younger LI children were equally impaired when responding to rapidly presented stimuli in the auditory and visual modalities, older LI subjects made nearly twice as many errors responding to rapidly presented auditory stimuli than to visual stimuli. This developmental difference did not occur for the control group.


2018 ◽  
Vol 39 (14) ◽  
pp. 1075-1080 ◽  
Author(s):  
Eric Ching ◽  
Winko An ◽  
Ivan Au ◽  
Janet Zhang ◽  
Zoe Chan ◽  
...  

Abstract: Visual feedback gait retraining has been reported to successfully reduce impact loading in runners, even when the runners were distracted. However, auditory feedback is more feasible in real-life applications. Hence, this study compared the peak positive acceleration (PPA), vertical average loading rate (VALR), and vertical instantaneous loading rate (VILR) during distracted running before and after a course of auditory feedback gait retraining in 16 runners. The runners were asked to land with softer footfalls, with and without auditory feedback. A low or high sound pitch was generated according to the impact of each footfall, compared with a preset target. Runners then received a course of auditory gait retraining and completed a reassessment afterward. Before gait retraining, runners exhibited lower PPA, VALR, and VILR with augmented auditory feedback (p<0.049). We found a reduction in PPA, VALR, and VILR after gait retraining, regardless of the presence of feedback (p<0.018). However, after gait retraining, runners did not demonstrate further reduction in PPA and VALR with auditory feedback (p>0.104). A small effect of auditory feedback on VILR after gait retraining was observed (p=0.032). Real-time auditory feedback gait retraining is effective in reducing impact loading, even when runners are distracted.
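The footfall-to-pitch mapping described in this abstract can be sketched as a simple threshold rule. This is a hedged sketch: the tone frequencies and the use of peak positive acceleration (PPA) as the impact measure compared against the preset target are illustrative assumptions; the study's actual signal processing is not specified in the abstract.

```python
# Hedged sketch of the auditory feedback rule: after each footfall, play
# a low or high pitch depending on whether the measured impact is at or
# below, versus above, a preset target. Tone choices are assumptions.

LOW_PITCH_HZ = 220.0    # footfall at or below target (assumed tone)
HIGH_PITCH_HZ = 880.0   # impact above target (assumed tone)

def feedback_pitch(ppa_g: float, target_g: float) -> float:
    """Map one footfall's peak positive acceleration to a feedback tone."""
    return HIGH_PITCH_HZ if ppa_g > target_g else LOW_PITCH_HZ
```

In this sketch, the runner hears the high tone only when a footfall exceeds the target impact, so "landing softer" directly maps to hearing the low tone.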


2020 ◽  
Author(s):  
Douglas M. Shiller ◽  
Takashi Mitsuya ◽  
Ludo Max

Abstract: Perceiving the sensory consequences of our actions with a delay alters the interpretation of these afferent signals and impacts motor learning. For reaching movements, delayed visual feedback of hand position reduces the rate and extent of visuomotor adaptation, but substantial adaptation still occurs. Moreover, the detrimental effect of visual feedback delay on reach motor learning, which selectively affects its implicit component, can be mitigated by prior habituation to the delay. Auditory-motor learning for speech has been reported to be more sensitive to feedback delay, and it remains unknown whether habituation to auditory delay reduces its negative impact on learning. We investigated whether 30 minutes of exposure to auditory delay during speaking (a) affects the subjective perception of delay, and (b) mitigates its disruptive effect on speech auditory-motor learning. During a speech adaptation task with real-time perturbation of vowel spectral properties, participants heard this frequency-shifted feedback with no delay, 75 ms delay, or 115 ms delay. In the delay groups, 50% of participants had been exposed to the delay throughout a preceding 30-minute block of speaking, whereas the remaining participants completed this block without delay. Although habituation minimized awareness of the delay, no improvement in adaptation to the spectral perturbation was observed. Thus, short-term habituation to auditory feedback delays is not effective in reducing the negative impact of delay on speech auditory-motor adaptation. Combined with previous findings, the strong negative effect of delay and the absence of an influence of delay awareness suggest the involvement of predominantly implicit learning mechanisms in speech.

Highlights:
- Speech auditory-motor adaptation to a spectral perturbation was reduced by ~50% when feedback was delayed by 75 or 115 ms.
- Thirty minutes of prior delay exposure without perturbation effectively reduced participants' awareness of the delay.
- However, habituation was ineffective in remediating the detrimental effect of delay on speech auditory-motor adaptation.
- The dissociation of delay awareness and adaptation suggests that speech auditory-motor learning is mostly implicit.

