A decisional space account of saccadic reaction times towards personally familiar faces

2018 ◽  
Author(s):  
Meike Ramon ◽  
Nayla Sokhn ◽  
Roberto Caldara

Abstract Manual and saccadic reaction times (SRTs) have been used to determine the minimum time required for different types of visual categorizations. Such studies have demonstrated that faces can be detected within natural scenes in as little as 100ms (Crouzet, Kirchner & Thorpe, 2010), while increasingly complex decisions require longer processing times (Besson, Barragan-Jason, Thorpe, Fabre-Thorpe, Puma et al., 2017). Following the notion that facial representations stored in memory facilitate perceptual processing (Ramon & Gobbini, 2018), a recent study reported 180ms as the fastest speed at which “familiar face detection” can be achieved based on expressed choice saccades (Visconti di Oleggio Castello & Gobbini, 2015). At first glance, these findings seem incompatible with the earliest neural markers of familiarity reported in electrophysiological studies (Barragan-Jason, Cauchoix & Barbeau, 2015; Caharel, Ramon & Rossion, 2014; Huang, Wu, Hu, Wang, Ding, Qu et al., 2017), which should temporally precede any overtly observed behavioral (oculomotor or manual) categorization. Here, we reason that this apparent discrepancy can be accounted for in terms of decisional space constraints, which modulate both the manual RTs observed for different levels of visual processing (Besson et al., 2017) and SRTs in both healthy observers and neurological patients (Ramon, in press; Ramon, Sokhn, Lao & Caldara, in press). In the present study, over 70 observers completed three different SRT experiments in which decisional space was manipulated through task demands and stimulus probability. Subjects performed either a gender categorization task or one of two familiar face “recognition” tasks, which differed with respect to the number of personally familiar identities presented (3 vs. 7). We observed an inverse relationship between visual categorization proficiency and decisional space. Observers were most accurate for categorization of gender, which could be achieved in as little as 140ms. Categorization of highly predictable targets was more error-prone and required an additional ~100ms of processing time. Our findings add to increasing evidence that pre-activation of identity information can modulate early visual processing in a top-down manner. They also emphasize the importance of considering procedural aspects as well as terminology when aiming to characterize cognitive processes.

Perception ◽  
10.1068/p3164 ◽  
2001 ◽  
Vol 30 (7) ◽  
pp. 833-853 ◽  
Author(s):  
William H A Beaudot ◽  
Kathy T Mullen

We investigated the temporal properties of the red-green, blue-yellow, and luminance mechanisms in a contour-integration task which required the linking of orientation across space to detect a ‘path’. Reaction times were obtained for simple detection of the stimulus regardless of the presence of a path, and for path detection measured by a yes/no procedure with path and no-path stimuli randomly presented. Additional processing times for contour integration were calculated as the difference between reaction times for simple stimulus detection and path detection, and were measured as a function of stimulus contrast for straight and curved paths. We found that processing time shows effects not apparent in choice reaction-time measurements. (i) Processing time for curved paths is longer than for straight paths. (ii) For straight paths, the achromatic mechanism is faster than the two chromatic ones, with no difference between the red-green and blue-yellow mechanisms. For curved paths there is no difference in processing time between mechanisms. (iii) The extra processing time required to detect curved compared to straight paths is longest for the achromatic mechanism, and similar for the red-green and blue-yellow mechanisms. (iv) Detection of the absence of a path requires at least 50 ms of additional time independently of chromaticity, contrast, and path curvature. The significance of these differences and similarities between postreceptoral mechanisms is discussed.
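The subtraction logic described above can be sketched in a few lines. This is a minimal illustration, not the authors' analysis pipeline; the RT samples below are invented for demonstration.

```python
import statistics

# Hypothetical RT samples in ms; values are illustrative only,
# not data from the study.
simple_detection_rts = [310, 295, 330, 305, 320]
path_detection_rts = [420, 405, 450, 430, 415]

# Additional processing time for contour integration is the difference
# between mean RT for path detection and mean RT for simple detection.
extra_processing_ms = (statistics.mean(path_detection_rts)
                       - statistics.mean(simple_detection_rts))
print(extra_processing_ms)  # 112 ms with these illustrative samples
```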


2001 ◽  
Vol 15 (4) ◽  
pp. 256-274 ◽  
Author(s):  
Caterina Pesce ◽  
Rainer Bösel

Abstract In the present study we explored the focusing of visuospatial attention in subjects practicing and not practicing activities with high attentional demands. Similar to the studies of Castiello and Umiltà (e.g., 1990), our experimental procedure was a variation of Posner's (1980) basic paradigm for exploring covert orienting of visuospatial attention. In a simple RT-task, a peripheral cue of varying size was presented unilaterally or bilaterally from a central fixation point and followed by a target at different stimulus-onset-asynchronies (SOAs). The target could occur validly inside the cue or invalidly outside the cue with varying spatial relation to its boundary. Event-related brain potentials (ERPs) and reaction times (RTs) were recorded to target stimuli under the different task conditions. RT and ERP findings showed converging aspects as well as dissociations. Electrophysiological results revealed an amplitude modulation of the ERPs in the early and late Nd time interval at both anterior and posterior scalp sites, which seems to be related to the effects of peripheral informative cues as well as to the attentional expertise. Results were: (1) shorter latency effects confirm the positive-going amplitude enhancement elicited by unilateral peripheral cues and strengthen the criticism against the neutrality of spatially nonpredictive peripheral cueing of all possible target locations which is often presumed in behavioral studies. (2) Longer latency effects show that subjects with attentional expertise modulate the distribution of the attentional resources in the visual space differently than nonexperienced subjects. Skilled practice may lead to minimizing attentional costs by automatizing the use of a span of attention that is adapted to the most frequent task demands and endogenously increases the allocation of resources to cope with less usual attending conditions.


1999 ◽  
Vol 127 (3) ◽  
pp. 291-297 ◽  
Author(s):  
A. Spantekow ◽  
Paul Krappmann ◽  
Stefan Everling ◽  
Hans Flohr

Perception ◽  
10.1068/p7085 ◽  
2012 ◽  
Vol 41 (2) ◽  
pp. 131-147 ◽  
Author(s):  
Nicola J Gregory ◽  
Timothy L Hodgson

Pointing with the eyes or the finger occurs frequently in social interaction to indicate direction of attention and one's intentions. Research with a voluntary saccade task (where saccade direction is instructed by the colour of a fixation point) suggested that gaze cues automatically activate the oculomotor system, but non-biological cues, like arrows, do not. However, other work has failed to support the claim that gaze cues are special. In the current research we introduced biological and non-biological cues into the anti-saccade task, using a range of stimulus onset asynchronies (SOAs). The anti-saccade task recruits both top–down and bottom–up attentional mechanisms, as occurs in naturalistic saccadic behaviour. In experiment 1 gaze, but not arrows, facilitated saccadic reaction times (SRTs) in the opposite direction to the cues over all SOAs, whereas in experiment 2 directional word cues had no effect on saccades. In experiment 3 finger pointing cues caused reduced SRTs in the opposite direction to the cues at short SOAs. These findings suggest that biological cues automatically recruit the oculomotor system whereas non-biological cues do not. Furthermore, the anti-saccade task set appears to facilitate saccadic responses in the opposite direction to the cues.


2008 ◽  
Vol 275 (1649) ◽  
pp. 2299-2308 ◽  
Author(s):  
M To ◽  
P.G Lovell ◽  
T Troscianko ◽  
D.J Tolhurst

Natural visual scenes are rich in information, and any neural system analysing them must piece together the many messages from large arrays of diverse feature detectors. It is known how threshold detection of compound visual stimuli (sinusoidal gratings) is determined by their components' thresholds. We investigate whether similar combination rules apply to the perception of the complex and suprathreshold visual elements in naturalistic visual images. Observers gave magnitude estimations (ratings) of the perceived differences between pairs of images made from photographs of natural scenes. Images in some pairs differed along one stimulus dimension such as object colour, location, size or blur. But, for other image pairs, there were composite differences along two dimensions (e.g. both colour and object-location might change). We examined whether the ratings for such composite pairs could be predicted from the two ratings for the respective pairs in which only one stimulus dimension had changed. We found a pooling relationship similar to that proposed for simple stimuli: Minkowski summation with exponent 2.84 yielded the best predictive power ( r =0.96), an exponent similar to that generally reported for compound grating detection. This suggests that theories based on detecting simple stimuli can encompass visual processing of complex, suprathreshold stimuli.
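The Minkowski pooling rule reported above is simple to state in code. The sketch below uses the paper's best-fitting exponent of 2.84; the two input ratings are invented for illustration.

```python
def minkowski_pool(rating_a, rating_b, m=2.84):
    """Predict the perceived difference for a composite (two-dimension)
    change from the two single-dimension ratings, via Minkowski
    summation with exponent m (2.84 per the abstract above)."""
    return (rating_a ** m + rating_b ** m) ** (1.0 / m)

# With m = 1 the rule reduces to plain linear summation; as m grows it
# approaches a winner-take-all (max) rule. Ratings here are illustrative:
predicted = minkowski_pool(3.0, 4.0)
```

The predicted composite rating lies between the larger single-dimension rating and the linear sum of the two, which is the qualitative signature of Minkowski pooling with an exponent above 1.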


2018 ◽  
Vol 1 ◽  
pp. 205920431877823 ◽  
Author(s):  
Linda Becker

Musical expertise can lead to neural plasticity in specific cognitive domains (e.g., in auditory music perception). However, not much is known about whether the visual perception of simple musical symbols (e.g., notes) already differs between musicians and non-musicians. This was the aim of the present study. Therefore, the Familiarity Effect (FE) – an effect which occurs quite early during visual processing and which is based on prior knowledge or expertise – was investigated. The FE describes the phenomenon that it is easier to find an unfamiliar element (e.g., a mirrored eighth note) in familiar elements (e.g., normally oriented eighth notes) than to find a familiar element in a background of unfamiliar elements. It was examined whether the strength of the FE for eighth notes differs between note readers and non-note readers. Furthermore, it was investigated at which component of the event-related brain potential (ERP) the FE occurs. Stimuli that consisted of either eighth notes or vertically mirrored eighth notes were presented to the participants (28 note readers, 19 non-note readers). A target element was embedded in half of the trials. Reaction times, sensitivity, and three ERP components (the N1, N2p, and P3) were recorded. For both the note readers and the non-note readers, strong FEs were found in the behavioral data. However, no differences in the strength of the FE between groups were found. Furthermore, for both groups, the FE was found for the same ERP components (target-absent trials – N1 latency; target-present trials – N2p latency, N2p amplitude, P3 amplitude). It is concluded that the early visual perception of eighth note symbols does not differ between note readers and non-note readers. However, future research is needed to verify this for more complex musical stimuli and for professional musicians.


1981 ◽  
Vol 11 (1) ◽  
pp. 99-104 ◽  
Author(s):  
C. H. Meng

The purpose of this study is to develop analytical formulae for special queuing situations which occur during the operations of the felling and processing devices of a tree harvester, and the pickup and processing devices of a tree processor. Analytical formulae are used to estimate mean waiting time and mean idle time; in case 1 both "input" times and processing times are normally distributed; in case 2 "input" times are normally distributed and processing times are Poisson distributed. "Input" time is a term used for convenience to denote time required to fell a tree by a harvester or time required to pick up a tree by a processor. Methods of choosing distributions for representing "input" times and processing times are provided. In addition, there are two examples, using historical data, which demonstrate the applications of the analytical formulae.
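Since the analytical formulae themselves are not reproduced in the abstract, the setup can instead be illustrated by simulation. The sketch below is a Monte Carlo version of case 1 (both "input" and processing times normally distributed); all parameter values are invented for illustration and are not taken from the paper.

```python
import random

def mean_wait_and_idle(n=20_000, in_mean=30.0, in_sd=5.0,
                       proc_mean=28.0, proc_sd=4.0, seed=42):
    """Monte Carlo sketch of case 1: normally distributed 'input'
    (felling or pickup) times feeding a processing device with normally
    distributed processing times. Returns estimated mean waiting time
    per tree and mean idle time per tree. Parameters are illustrative."""
    rng = random.Random(seed)
    arrival = 0.0    # time the next tree leaves the felling/pickup device
    proc_free = 0.0  # time the processing device next becomes free
    wait = idle = 0.0
    for _ in range(n):
        arrival += max(0.0, rng.gauss(in_mean, in_sd))
        if arrival >= proc_free:
            idle += arrival - proc_free   # processor sat idle for a tree
            start = arrival
        else:
            wait += proc_free - arrival   # tree waited for the processor
            start = proc_free
        proc_free = start + max(0.0, rng.gauss(proc_mean, proc_sd))
    return wait / n, idle / n

mean_wait, mean_idle = mean_wait_and_idle()
```

Because the illustrative mean processing time is shorter than the mean "input" time, idle time dominates waiting time in this configuration, which is the trade-off the analytical formulae are designed to quantify.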


2009 ◽  
Vol 26 (1) ◽  
pp. 35-49 ◽  
Author(s):  
THORSTEN HANSEN ◽  
KARL R. GEGENFURTNER

Abstract Form vision is traditionally regarded as processing primarily achromatic information. Previous investigations into the statistics of color and luminance in natural scenes have claimed that luminance and chromatic edges are not independent of each other and that any chromatic edge most likely occurs together with a luminance edge of similar strength. Here we computed the joint statistics of luminance and chromatic edges in over 700 calibrated color images from natural scenes. We found that isoluminant edges exist in natural scenes and were not rarer than pure luminance edges. Most edges combined luminance and chromatic information but to varying degrees such that luminance and chromatic edges were statistically independent of each other. Independence increased along successive stages of visual processing from cones via postreceptoral color-opponent channels to edges. The results show that chromatic edge contrast is an independent source of information that can be linearly combined with other cues for the proper segmentation of objects in natural and artificial vision systems. Color vision may have evolved in response to the natural scene statistics to gain access to this independent information.
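The statistical independence claimed above can be illustrated with a toy correlation check: if luminance edge strength and chromatic edge strength are drawn independently, their Pearson correlation across edge locations is near zero. The generative model below is purely synthetic, not the paper's image data.

```python
import math
import random

random.seed(0)

# Synthetic 'edge strengths' drawn independently (exponential, since
# edge-contrast distributions are heavy-tailed and non-negative).
n = 10_000
luminance_edges = [random.expovariate(1.0) for _ in range(n)]
chromatic_edges = [random.expovariate(1.0) for _ in range(n)]

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson(luminance_edges, chromatic_edges)  # near 0 for independent draws
```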


2014 ◽  
Vol 14 (10) ◽  
pp. 332-332 ◽  
Author(s):  
M. Mahadevan ◽  
H. Bedell ◽  
S. Stevenson
