The Influence of Visual Cues on the Localisation of Circular Auditory Motion

Perception ◽  
1995 ◽  
Vol 24 (4) ◽  
pp. 457-465 ◽  
Author(s):  
Stephen Lakatos

This study investigated listeners' relative inability to refer auditory motion properly to the front hemifield when a rapid circular trajectory is traced around them in the horizontal plane. It was hypothesised that the representations of visual space and auditory frontal space are linked in such a way that auditory cues alone are insufficient to determine the veridical path in the front hemifield when both the front and the rear hemifield receive circular auditory input in rapid sequence. Listeners discriminated the direction of a rapid apparent trajectory created by the sequential activation of various combinations of loudspeakers and light-emitting diodes spaced evenly in a circular array. Adding visual stimuli to the front hemifield, even stimuli lacking motion cues, improved discrimination significantly. Conversely, when the portion of the trajectory presented to the front hemifield consisted only of visual stimuli, performance decreased markedly. Additional conditions, in which the trajectory was restricted to a 120 deg path in the frontal plane, confirmed these effects. The findings suggest that the presence of a visual cue enhances the perception of auditory directionality even when it provides no motion information itself.

2020 ◽  
Author(s):  
Adam Qureshi ◽  
Rebecca Monk ◽  
Charlotte Rebecca Pennington ◽  
Jennifer Rose Oulton

Introduction: To represent a more immersive testing environment, the current study exposed individuals to both alcohol-related visual and auditory cues to assess their respective impact on alcohol-related inhibitory control. It further examined whether individual variation in alcohol consumption and trait effortful control predicts inhibitory control performance.
Method: Twenty-five U.K. university students (Mage = 23.08, SD = 8.26) completed an anti-saccade eye-tracking task and were instructed to look towards (pro-saccade) or directly away from (anti-saccade) alcohol-related and neutral visual stimuli. Short alcohol-related sound cues (bar audio) were played on 50% of trials, and responses were compared with trials on which no sounds were played.
Results: Participants launched more incorrect saccades towards alcohol-related visual stimuli on anti-saccade trials and responded more quickly to alcohol on pro-saccade trials. Alcohol-related audio cues reduced latencies on both pro- and anti-saccade trials and reduced anti-saccade error rates to alcohol-related visual stimuli. Controlling for trait effortful control and problem alcohol consumption removed these effects.
Conclusion: These findings suggest that alcohol-related visual cues may be associated with reduced inhibitory control, evidenced by increased errors and faster response latencies. The presentation of alcohol-related auditory cues, however, appears to enhance performance accuracy. It is postulated that auditory cues may re-contextualise visual stimuli into a more familiar setting, reducing their saliency and lessening their attentional pull.


Author(s):  
Adam F. Werner ◽  
Jamie C. Gorman

Objective: This study examines visual, auditory, and the combination of both (bimodal) coupling modes in the performance of a two-person perceptual-motor task, in which one person provides the perceptual inputs and the other the motor inputs.
Background: Parking a plane or landing a helicopter on a mountain top requires one person to provide motor inputs while another person provides perceptual inputs. Perceptual inputs are communicated either visually, auditorily, or through both cues.
Methods: One participant drove a remote-controlled car around an obstacle and through a target, while another participant provided auditory, visual, or bimodal cues for steering and acceleration. Difficulty was manipulated using target size. Performance (trial time, path variability), cue rate, and spatial ability were measured.
Results: Visual coupling outperformed auditory coupling. Bimodal performance was best in the most difficult task condition but also high in the easiest condition. Cue rate predicted performance in all coupling modes. Drivers with lower spatial ability required a faster auditory cue rate, whereas drivers with higher ability performed best with a lower rate.
Conclusion: Visual cues result in better performance when only one coupling mode is available. As predicted by multiple resource theory, when both cues are available, performance depends more on auditory cueing. In particular, drivers must be able to transform auditory cues into spatial actions.
Application: Spotters should be trained to provide an appropriate cue rate to match the spatial ability of the driver or pilot. Auditory cues can enhance visual communication when the interpersonal task is visual with spatial outputs.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Stefano Rozzi ◽  
Marco Bimbi ◽  
Alfonso Gravante ◽  
Luciano Simone ◽  
Leonardo Fogassi

The ventral part of the lateral prefrontal cortex (VLPF) of the monkey receives strong visual input, mainly from inferotemporal cortex. VLPF neurons have been shown to exhibit visual responses during paradigms that require associating arbitrary visual cues with behavioral reactions. Further studies showed that some VLPF neurons also respond to the presentation of specific visual stimuli, such as objects and faces. However, it is largely unknown whether VLPF neurons respond to, and differentiate between, stimuli belonging to different categories in the absence of a specific requirement to actively categorize these stimuli or to exploit them for choosing a given behavior. The first aim of the present study is to evaluate and map the responses of neurons across a large sector of VLPF to a wide set of visual stimuli that the monkeys simply observe. Recent studies showed that visual responses to objects are also present in VLPF neurons coding action execution, when the objects are the target of the action. The second aim of the present study is therefore to compare the visual responses of VLPF neurons when the same objects are simply observed and when they become the target of a grasping action. Our results indicate that: (1) some visually responsive VLPF neurons respond specifically to one stimulus or to a small set of stimuli, but there is no indication of a "passive" categorical coding; (2) VLPF visual responses to objects are often modulated by the task conditions in which the object is observed, with the strongest response when the object is the target of an action. These data indicate that VLPF performs an early, passive description of several types of visual stimuli that can then be used for organizing and planning behavior. This could explain the modulation of visual responses both in associative learning and in natural behavior.


2021 ◽  
Author(s):  
Judith M. Varkevisser ◽  
Ralph Simon ◽  
Ezequiel Mendoza ◽  
Martin How ◽  
Idse van Hijlkema ◽  
...  

Bird song and human speech are both learned early in life, and in both cases engagement with live social tutors generally leads to better learning outcomes than passive audio-only exposure. Real-world tutor–tutee relations are normally not uni- but multimodal, and observations suggest that visual cues related to sound production might enhance vocal learning. We tested this hypothesis by pairing appropriate, colour-realistic, high-frame-rate videos of a singing adult male zebra finch tutor with song playbacks and presenting these stimuli to juvenile zebra finches (Taeniopygia guttata). Juveniles exposed to song playbacks combined with video presentation of a singing bird approached the stimulus more often and spent more time close to it than juveniles exposed to audio playback only or to audio playback combined with pixelated and time-reversed videos. However, higher engagement with the realistic audio–visual stimuli was not predictive of better song learning. Thus, although multimodality increased stimulus engagement, and biologically relevant video content was more salient than colour- and movement-equivalent videos, the higher engagement with the realistic audio–visual stimuli did not lead to enhanced vocal learning. Whether the lack of three-dimensionality of a video tutor and/or the lack of meaningful social interaction makes video tutors less suitable for facilitating song learning than audio–visual exposure to a live tutor remains to be tested.


1976 ◽  
Vol 28 (2) ◽  
pp. 193-202 ◽  
Author(s):  
Philip Merikle

Report of single letters from centrally fixated, seven-letter target rows was probed by either auditory or visual cues. The target rows were presented for 100 ms, and the report cues were single digits indicating the spatial location of a letter. In three separate experiments, report was always better with auditory cues. The auditory advantage was maintained both when the target rows were masked by a patterned stimulus and when the auditory cues were presented 500 ms later than comparable visual cues. The results indicate that visual cues produce modality-specific interference operating at a level of processing beyond iconic representation.


Author(s):  
Nada Zwayyid Almutairi ◽  
Eman Salah Ibrahim Rizk

This study explores the effectiveness of interactive e-book (Ie-book) cues and Information Processing Levels (IPL) on Learning Retention (LR) and External Cognitive Load (ECL). A total of 117 middle school pupils (MSP) were divided into six experimental groups according to their IPL and cue type during the second term of the 2019–2020 academic year. Scores on the LR test and the ECL scale differed statistically between the Visual Cues (VC)/Audiovisual Cues (AVC) conditions and between the Auditory Cues (AC)/AVC conditions, as did the average scores on the science LR test for MSP, owing to the difference between IPLs at the deep level (DL). There is a statistically significant interaction effect of cue type in the Ie-book with IPL on the ECL scale for MSP, which peaks for AVC with DL, followed by the interaction of VC with DL and then AC with the surface level (SL). The interaction of cues in the Ie-book with IPL also strongly affects the LR test for MSP, again peaking for AVC with DL. The interactions between (DL–SL) and (AC–VC) appear to influence the ECL equally.


2021 ◽  
Vol 11 ◽  
Author(s):  
Christopher R. Madan ◽  
Anthony Singhal

Learning to play a musical instrument involves mapping visual and auditory cues to motor movements and anticipating transitions. Inspired by the serial reaction time task and artificial grammar learning, we investigated explicit and implicit knowledge of statistical learning in a sensorimotor task. Using a between-subjects design with four groups, one group of participants was provided with visual cues and followed along by tapping the corresponding fingertip to their thumb while wearing a computer glove. Another group additionally received accompanying auditory tones; the final two groups received sensory (visual or visual + auditory) cues but did not provide a motor response, together forming a 2 × 2 design. Implicit knowledge was measured by response time, whereas explicit knowledge was assessed using probe tests. Findings indicate that explicit knowledge was best with only the single modality, but implicit knowledge was best when all three modalities were involved.
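The implicit-learning signature in paradigms of this kind is that response times shorten for statistically expected transitions in the cue sequence. A minimal simulation can illustrate the logic; all transition probabilities, timing values, and function names below are illustrative assumptions, not values from the study:

```python
import random

# Hypothetical transition probabilities for a 4-element cue sequence
# (one high-probability successor per cue, artificial-grammar style).
TRANSITIONS = {
    0: {1: 0.8, 2: 0.1, 3: 0.1},
    1: {2: 0.8, 3: 0.1, 0: 0.1},
    2: {3: 0.8, 0: 0.1, 1: 0.1},
    3: {0: 0.8, 1: 0.1, 2: 0.1},
}

def next_cue(current, rng):
    """Sample the next cue according to the transition probabilities."""
    targets, probs = zip(*TRANSITIONS[current].items())
    return rng.choices(targets, weights=probs, k=1)[0]

def simulated_rt(prev, cue, rng):
    """Assumed effect: faster responses to expected (high-probability) cues."""
    base = 450.0  # ms, arbitrary baseline
    speedup = 80.0 if TRANSITIONS[prev][cue] >= 0.5 else 0.0
    return base - speedup + rng.gauss(0, 10)

def run_block(n_trials, seed=0):
    """Return mean RT for expected vs. unexpected transitions in one block."""
    rng = random.Random(seed)
    cue, expected_rts, unexpected_rts = 0, [], []
    for _ in range(n_trials):
        nxt = next_cue(cue, rng)
        rt = simulated_rt(cue, nxt, rng)
        if TRANSITIONS[cue][nxt] >= 0.5:
            expected_rts.append(rt)
        else:
            unexpected_rts.append(rt)
        cue = nxt
    mean = lambda xs: sum(xs) / len(xs)
    return mean(expected_rts), mean(unexpected_rts)
```

Comparing the two means over a block recovers the expected-transition advantage that serves as the index of implicit knowledge; an explicit probe test would instead ask participants to report the likely successor of each cue directly.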


1996 ◽  
Vol 74 (12) ◽  
pp. 2248-2253 ◽  
Author(s):  
Lamar A. Windberg

Individual coyotes (Canis latrans) are infrequently captured within their familiar areas of activity. Current hypotheses are that the differential capture vulnerability may involve neophobia or inattentiveness. To assess the effect of familiarity, I measured coyote responsiveness to sensory cues encountered in familiar and novel settings. Seventy-four captive coyotes were presented with visual and olfactory stimuli in familiar and unfamiliar 1-ha enclosures. The visual stimuli were black or white wooden cubes of three sizes (4, 8, and 16 cm per side). The olfactory stimuli were fatty acid scent, W-U lure (trimethylammonium decanoate plus sulfide additives), and coyote urine and liquefied feces. Overall, coyotes were more responsive to stimuli during exploration in unfamiliar than in familiar enclosures. None of 38 coyotes that responded were neophobic toward the olfactory stimuli. The frequency of coyote response, and the resulting degrees of neophobia, did not differ between the black and white visual stimuli. Regardless of context, the largest visual stimuli were recognized at the greatest distance and evoked the strongest neophobic response. A greater proportion of coyotes were neophobic toward the small and medium-sized stimuli in familiar than in unfamiliar enclosures. This study demonstrated that when encountered in familiar environments, visual cues are more likely to elicit neophobic responses by coyotes than are olfactory stimuli.


2004 ◽  
Vol 91 (5) ◽  
pp. 2172-2184 ◽  
Author(s):  
Andrew H. Bell ◽  
Jillian H. Fecteau ◽  
Douglas P. Munoz

Reflexively orienting toward a peripheral cue can influence subsequent responses to a target, depending on when and where the cue and target appear relative to each other. At short delays between the cue and target [cue-target onset asynchrony (CTOA)], subjects are faster to respond when they appear at the same location, an effect referred to as reflexive attentional capture. At longer CTOAs, subjects are slower to respond when the two appear at the same location, an effect referred to as inhibition of return (IOR). Recent evidence suggests that these phenomena originate from sensory interactions between the cue- and target-related responses. The capture of attention originates from a strong target-related response, derived from the overlap of the cue- and target-related activities, whereas IOR corresponds to a weaker target-aligned response. If such interactions are responsible, then modifying their nature should impact the neuronal and behavioral outcome. Monkeys performed a cue-target saccade task featuring visual and auditory cues while neural activity was recorded from the superior colliculus (SC). Compared with visual stimuli, auditory responses are weaker and occur earlier, thereby decreasing the likelihood of interactions between these signals. Similar to previous studies, visual stimuli evoked reflexive attentional capture at a short CTOA (60 ms) and IOR at longer CTOAs (160 and 610 ms) with corresponding changes in the target-aligned activity in the SC. Auditory cues used in this study failed to elicit either a behavioral effect or modification of SC activity at any CTOA, supporting the hypothesis that reflexive orienting is mediated by sensory interactions between the cue and target stimuli.

