SSVEP captures predictive feature-based attentional tuning for point-light biological walker detection in unattended spatial locations

2016 ◽  
Vol 16 (12) ◽  
pp. 685
Author(s):  
Rakibul Hasan ◽  
Ramesh Srinivasan ◽  
Emily Grossman

2017 ◽  
Vol 17 (9) ◽  
pp. 22 ◽  
Author(s):  
Rakibul Hasan ◽  
Ramesh Srinivasan ◽  
Emily D. Grossman

2021 ◽  
Author(s):  
Daniel Birman ◽  
Justin L. Gardner

Abstract
Human observers use cues to guide visual attention to the most behaviorally relevant parts of the visual world. Cues are often separated into two forms: those that rely on spatial location and those that use features, such as motion or color. These forms of cueing are known to rely on different populations of neurons. Despite these differences in neural implementation, attention may rely on shared computational principles, enhancing and selecting sensory representations in a similar manner for all types of cues. Here we examine whether evidence for shared computational mechanisms can be obtained from how attentional cues enhance performance in estimation tasks. In our tasks, observers were cued either by spatial location or by feature to two of four dot patches. They then estimated the color or motion direction of one of the cued patches, or averaged them. In all cases we found that cueing improved performance. We decomposed the effects of the cues on behavior into model parameters that separated sensitivity enhancement from sensory selection and found that both were important to explain improved performance. We found that a model that shared parameters across forms of cueing was favored by our analysis, suggesting that observers have equal sensitivity and likelihood of making selection errors whether cued by location or feature. Our perceptual data support theories in which a shared computational mechanism is re-used by all forms of attention.
Significance Statement
Cues about important features or locations in visual space are similar from the perspective of visual cortex: both allow relevant sensory representations to be enhanced while irrelevant ones can be ignored. Here we studied these attentional cues in an estimation task designed to separate different computational mechanisms of attention. Despite cueing observers in three different ways, to spatial locations, colors, or motion directions, we found that all cues led to similar perceptual improvements.
Our results provide behavioral evidence supporting the idea that all forms of attention can be reconciled as a single repeated computational motif, re-implemented by the brain in different neural architectures for many different visual features.
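The decomposition described above (sensitivity enhancement versus selection errors) is typically carried out by fitting a mixture model to circular estimation errors. The paper's exact model is not given in this abstract, so the following is only a generic sketch with invented parameter values: responses are modeled as a von Mises distribution around the target, a "swap" component around the wrong item, and uniform guessing.

```python
import numpy as np

def vonmises_pdf(x, kappa):
    """Von Mises density centered at 0; np.i0 is the modified Bessel I0."""
    return np.exp(kappa * np.cos(x)) / (2 * np.pi * np.i0(kappa))

def mixture_loglik(err_target, err_distractor, p_target, p_swap, kappa):
    """Log-likelihood of estimation errors under a three-part mixture:
    report the target (p_target), report the wrong item (p_swap),
    or guess uniformly (remaining probability)."""
    p_guess = 1.0 - p_target - p_swap
    lik = (p_target * vonmises_pdf(err_target, kappa)
           + p_swap * vonmises_pdf(err_distractor, kappa)
           + p_guess / (2 * np.pi))
    return np.log(lik).sum()

# Simulated observer (all values invented): target at 0, distractor at +90 deg,
# 70% target reports with precision kappa=8, 10% swaps, 20% guesses.
rng = np.random.default_rng(1)
n = 2000
sep = np.pi / 2
comp = rng.choice(3, size=n, p=[0.7, 0.1, 0.2])
noise = rng.vonmises(0.0, 8.0, size=n)
resp = np.where(comp == 0, noise,
                np.where(comp == 1, sep + noise,
                         rng.uniform(-np.pi, np.pi, size=n)))
err_target = resp            # response - target
err_distractor = resp - sep  # response - distractor

print(mixture_loglik(err_target, err_distractor, 0.7, 0.1, 8.0))
```

In practice one would maximize this log-likelihood over (p_target, p_swap, kappa) separately for each cueing condition, then test whether a single shared parameter set fits all conditions as well as condition-specific ones.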


Author(s):  
P.M. Houpt ◽  
A. Draaijer

In confocal microscopy, the object is scanned by the coinciding focal points (confocal) of a point light source and a point detector, both focused on a certain plane in the object. Only light coming from the focal point is detected and, even more importantly, out-of-focus light is rejected. This makes it possible to optically slice up the ‘volume of interest’ in the object by moving it axially while scanning the focused point light source laterally (X–Y). The successive confocal sections can be stored in a computer and used to reconstruct the object in a 3D image display. The instrument described is able to scan the object laterally with an Ar ion laser (488 nm) at video rates. The image of one confocal section of an object can be displayed within 40 milliseconds (1000 × 1000 pixels). The time to record the total information within the ‘volume of interest’ normally depends on the number of slices needed to cover it, but rarely exceeds a few seconds.
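The timing figures quoted above support a simple back-of-envelope calculation: at 40 ms per 1000 × 1000 pixel section, a stack of N slices takes roughly N × 0.04 s, consistent with the claim that a full volume rarely exceeds a few seconds. A minimal sketch (the no-overhead assumption is ours, not the text's):

```python
# Back-of-envelope z-stack acquisition time for the confocal scanner
# described above: 40 ms per 1000 x 1000 pixel optical section.
FRAME_TIME_S = 0.040  # one confocal section at video rate

def stack_time_s(n_slices, frame_time_s=FRAME_TIME_S):
    """Total recording time for a volume of n_slices sections.
    Ignores stage settling between slices (an assumption, not a
    figure from the text)."""
    return n_slices * frame_time_s

print(stack_time_s(50))  # a 50-slice volume takes about 2 s
```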


2020 ◽  
Vol 63 (4) ◽  
pp. 931-947
Author(s):  
Teresa L. D. Hardy ◽  
Carol A. Boliek ◽  
Daniel Aalto ◽  
Justin Lewicke ◽  
Kristopher Wells ◽  
...  

Purpose
The purpose of this study was twofold: (a) to identify a set of communication-based predictors (including both acoustic and gestural variables) of masculinity–femininity ratings and (b) to explore differences in ratings between audio and audiovisual presentation modes for transgender and cisgender communicators.
Method
The voices and gestures of a group of cisgender men and women (n = 10 of each) and transgender women (n = 20) were recorded while these communicators recounted the story of a cartoon using acoustic and motion-capture recording systems. A total of 17 acoustic and gestural variables were measured from these recordings. A group of observers (n = 20) rated each communicator's masculinity–femininity based on 30- to 45-s samples of the cartoon description presented in three modes: audio, visual, and audiovisual. Visual and audiovisual stimuli contained point-light displays standardized for size. Ratings were made using a direct magnitude estimation scale without modulus. Communication-based predictors of masculinity–femininity ratings were identified using multiple regression, and analysis of variance was used to determine the effect of presentation mode on perceptual ratings.
Results
Fundamental frequency, average vowel formant, and sound pressure level were identified as significant predictors of masculinity–femininity ratings for these communicators. Communicators were rated significantly more feminine in the audio than the audiovisual mode and unreliably in the visual-only mode.
Conclusions
Both study purposes were met. Results support continued emphasis on fundamental frequency and vocal tract resonance in voice and communication modification training with transgender individuals and provide evidence for the potential benefit of modifying sound pressure level, especially when a masculine presentation is desired.
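The multiple-regression step can be illustrated with a minimal sketch. The data below are simulated, not the study's: predictor names follow the abstract (fundamental frequency, average vowel formant, sound pressure level), and all numeric values are invented. Ordinary least squares then yields one weight per acoustic predictor.

```python
import numpy as np

# Hypothetical illustration of predicting masculinity-femininity ratings
# from acoustic measures; every number here is made up for the sketch.
rng = np.random.default_rng(0)
n = 40                               # pooled observations for the sketch
f0 = rng.uniform(100, 250, n)        # fundamental frequency, Hz
avf = rng.uniform(1400, 1700, n)     # average vowel formant, Hz
spl = rng.uniform(55, 75, n)         # sound pressure level, dB
# Simulated ratings that depend on all three predictors plus noise.
rating = 0.02 * f0 + 0.004 * avf + 0.03 * spl + rng.normal(0, 0.2, n)

X = np.column_stack([np.ones(n), f0, avf, spl])  # add an intercept column
coef, *_ = np.linalg.lstsq(X, rating, rcond=None)
print(coef)  # intercept followed by one weight per acoustic predictor
```

With enough data relative to noise, the fitted weights recover the generating ones; in the study, significance testing on such weights is what identifies which variables predict the perceptual ratings.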


Author(s):  
Kevin Dent

In two experiments, participants retained a single color or a set of four spatial locations in memory. During a 5-s retention interval, participants viewed either flickering dynamic visual noise (DVN) or a static matrix pattern. In Experiment 1, memory was assessed using a recognition procedure, in which participants indicated whether a particular test stimulus matched the memorized stimulus or not. In Experiment 2, participants attempted either to reproduce the locations or to pick the color from a whole range of possibilities. Both experiments revealed effects of DVN on memory for colors but not for locations. The implications of the results for theories of working memory, and the methodological prospects for DVN as an experimental tool, are discussed.


2000 ◽  
Author(s):  
Frank E. Pollick ◽  
Helena Paterson ◽  
Andrew J. Calder ◽  
Armin Bruderlin ◽  
Anthony J. Sanford

2013 ◽  
Author(s):  
Matthew A. Bezdek ◽  
Richard J. Gerrig ◽  
William G. Wenzel ◽  
Jaemin Shin ◽  
Kathleen Pirog Revill ◽  
...  
