From Following Edges to Pursuing Objects

2002 ◽  
Vol 88 (5) ◽  
pp. 2869-2873 ◽  
Author(s):  
Guillaume S. Masson ◽  
Leland S. Stone

Primates can generate accurate, smooth eye-movement responses to moving target objects of arbitrary shape and size, even in the presence of complex backgrounds and/or the extraneous motion of non-target objects. Most previous studies of pursuit have simply used a spot moving over a featureless background as the target and have thus neglected critical issues associated with the general problem of recovering object motion. Visual psychophysicists and theoreticians have shown that, for arbitrary objects with multiple features at multiple orientations, object-motion estimation for perception is a complex, multi-staged, time-consuming process. To examine the temporal evolution of the motion signal driving pursuit, we recorded the tracking eye movements of human observers to moving line-figure diamonds. We found that pursuit is initially biased in the direction of the vector average of the motions of the diamond's line segments and gradually converges to the true object-motion direction with a time constant of approximately 90 ms. Furthermore, transient blanking of the target during steady-state pursuit induces a decrease in tracking speed, which, unlike pursuit initiation, is subsequently corrected without an initial direction bias. These results are inconsistent with current models in which pursuit is driven by retinal-slip error correction. They demonstrate that pursuit models must be revised to include a more complete visual afferent pathway, which computes, and to some extent latches on to, an accurate estimate of object direction over the first hundred milliseconds or so of motion.
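The dynamics reported here lend themselves to a compact sketch: each visible edge contributes a 1D motion signal orthogonal to its orientation, pursuit starts out toward the vector average of those signals, and the direction error then decays toward the true object direction with a time constant of about 90 ms. The Python below is an illustrative first-order model under those assumptions, not the authors' fitted model; the edge orientations are arbitrary.

```python
import math

def normal_component(v, edge_orientation_deg):
    """1D motion signal of one edge: projection of object velocity v
    onto the edge's unit normal."""
    phi = math.radians(edge_orientation_deg + 90.0)
    n = (math.cos(phi), math.sin(phi))
    mag = v[0] * n[0] + v[1] * n[1]
    return (mag * n[0], mag * n[1])

def vector_average_deg(v, edge_orientations_deg):
    """Direction (deg) of the vector average of the edges' 1D signals."""
    sx = sy = 0.0
    for theta in edge_orientations_deg:
        cx, cy = normal_component(v, theta)
        sx += cx
        sy += cy
    return math.degrees(math.atan2(sy, sx))

def pursuit_direction_deg(t_ms, va_deg, object_deg, tau_ms=90.0):
    """Pursuit direction at time t: the initial bias toward the vector
    average decays exponentially toward the object direction."""
    return object_deg + (va_deg - object_deg) * math.exp(-t_ms / tau_ms)
```

For edges at ±45° the vector average of the 1D signals already points in the object direction; breaking that symmetry (e.g., edges at 45° and 115°) biases the average, and the modeled pursuit direction relaxes back to the object direction within a few time constants.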

Perception ◽  
1996 ◽  
Vol 25 (1_suppl) ◽  
pp. 10-10 ◽
Author(s):  
B R Beutter ◽  
J Lorenceau ◽  
L S Stone

For four subjects (one naive), we measured pursuit of a line-figure diamond moving along an elliptical path behind an invisible X-shaped aperture under two conditions. The diamond's corners were occluded and only four moving line segments were visible over the background (38 cd m−2). At low segment luminance (44 cd m−2), the percept is largely a coherently moving diamond. At high luminance (108 cd m−2), the percept is largely four independently moving segments. Along with this perceptual effect, there were parallel changes in pursuit. In the low-contrast condition, pursuit was more related to object motion. A χ² analysis (p > 0.05) showed that for 98% of trials subjects were more likely tracking the object than the segments, for 29% of trials one could not reject the hypothesis that subjects were tracking the object and not the segments, and for 100% of trials one could reject the hypothesis that subjects were tracking the segments and not the object. Conversely, in the high-contrast condition, pursuit appeared more related to segment motion. For 66% of trials subjects were more likely tracking the segments than the object; for 94% of trials one could reject the hypothesis that subjects were tracking the object and not the segments; and for 13% of trials one could not reject the hypothesis that subjects were tracking the segments and not the object. These results suggest that pursuit is driven by the same object-motion signal as perception, rather than by simple retinal image motion.
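The per-trial χ² classification described above can be sketched as follows: for each trial, compute the χ² of the eye trace against an object-motion prediction and against a segment-motion prediction, and reject either hypothesis when its χ² exceeds the criterion value. The noise model (independent Gaussian samples with known σ) and all numbers below are illustrative assumptions, not the authors' exact procedure.

```python
def chi2_stat(eye, pred, sigma):
    """Chi-square of an eye-position trace against a model prediction,
    assuming independent Gaussian noise with standard deviation sigma."""
    return sum(((e - p) / sigma) ** 2 for e, p in zip(eye, pred))

def classify_trial(eye, obj_pred, seg_pred, sigma, crit):
    """Compare object- vs segment-tracking hypotheses for one trial.

    `crit` is the criterion chi-square value (e.g., the p < 0.05 cutoff
    for the appropriate degrees of freedom)."""
    c_obj = chi2_stat(eye, obj_pred, sigma)
    c_seg = chi2_stat(eye, seg_pred, sigma)
    return {
        "better": "object" if c_obj < c_seg else "segments",
        "reject_object": c_obj > crit,
        "reject_segments": c_seg > crit,
    }
```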


Perception ◽  
2018 ◽  
Vol 47 (7) ◽  
pp. 735-750 ◽  
Author(s):  
Lindsey M. Shain ◽  
J. Farley Norman

An experiment required younger and older adults to estimate coherent visual motion direction from multiple motion signals, where each motion signal was locally ambiguous with respect to the true direction of pattern motion. Thus, accurate performance required the successful integration of motion signals across space (i.e., accurate performance required solution of the aperture problem). The observers viewed arrays of either 64 or 9 moving line segments; because these lines moved behind apertures, their individual local motions were ambiguous with respect to direction (i.e., were subject to the aperture problem). Following 2.4 seconds of pattern motion on each trial (true motion directions ranged over the entire range of 360° in the fronto-parallel plane), the observers estimated the coherent direction of motion. There was an effect of direction, such that cardinal directions of pattern motion were judged with less error than oblique directions. In addition, a large effect of aging occurred: the average absolute errors of the older observers were 46% and 30.4% higher in magnitude than those exhibited by the younger observers for the 64 and 9 aperture conditions, respectively. Finally, the observers' precision markedly deteriorated as the number of apertures was reduced from 64 to 9.
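Integrating locally ambiguous signals across apertures, as this task requires, amounts to solving the aperture problem: each aperture constrains only the velocity component along its edge normal (v · nᵢ = cᵢ), and the pattern velocity is the least-squares intersection of those constraints. A minimal sketch, with illustrative normals:

```python
def solve_aperture(normals, normal_speeds):
    """Least-squares 'intersection of constraints': recover the 2D pattern
    velocity v from locally ambiguous 1D measurements v . n_i = c_i."""
    # Build and solve the 2x2 normal equations of the least-squares system.
    a11 = sum(nx * nx for nx, _ in normals)
    a12 = sum(nx * ny for nx, ny in normals)
    a22 = sum(ny * ny for _, ny in normals)
    b1 = sum(nx * c for (nx, _), c in zip(normals, normal_speeds))
    b2 = sum(ny * c for (_, ny), c in zip(normals, normal_speeds))
    det = a11 * a22 - a12 * a12  # zero only if all normals are parallel
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)
```

With normals at two or more distinct orientations the system is well posed; with a single orientation (one aperture) the determinant vanishes, which is exactly the ambiguity the observers had to overcome.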


2020 ◽  
Vol 117 (50) ◽  
pp. 32165-32168 ◽
Author(s):  
Arvid Guterstam ◽  
Michael S. A. Graziano

Recent evidence suggests a link between visual motion processing and social cognition. When person A watches person B, the brain of A apparently generates a fictitious, subthreshold motion signal streaming from B to the object of B’s attention. These previous studies, being correlative, were unable to establish any functional role for the false motion signals. Here, we directly tested whether subthreshold motion processing plays a role in judging the attention of others. We asked, if we contaminate people’s visual input with a subthreshold motion signal streaming from an agent to an object, can we manipulate people’s judgments about that agent’s attention? Participants viewed a display including faces, objects, and a subthreshold motion signal hidden in the background. Participants’ judgments of the attentional state of the faces were significantly altered by the hidden motion signal. Faces from which subthreshold motion was streaming toward an object were judged as paying more attention to the object. Control experiments showed the effect was specific to the agent-to-object motion direction and to judging attention, not action or spatial orientation. These results suggest that when the brain models other minds, it uses a subthreshold motion signal, streaming from an individual to an object, to help represent attentional state. This type of social-cognitive model, tapping perceptual mechanisms that evolved to process physical events in the real world, may help to explain the extraordinary cultural persistence of beliefs in mind processes having physical manifestation. These findings, therefore, may have larger implications for human psychology and cultural belief.


2006 ◽  
Vol 96 (6) ◽  
pp. 3545-3550 ◽  
Author(s):  
Anna Montagnini ◽  
Miriam Spering ◽  
Guillaume S. Masson

Smooth pursuit eye movements reflect the temporal dynamics of bidimensional (2D) visual motion integration. When tracking a single, tilted line, initial pursuit direction is biased toward unidimensional (1D) edge motion signals, which are orthogonal to the line orientation. Over 200 ms, tracking direction is slowly corrected to finally match the 2D object motion during steady-state pursuit. We now show that repetition of line orientation and/or motion direction neither eliminates the transient tracking-direction error nor changes the time course of pursuit correction. Nonetheless, multiple successive presentations of a single orientation/direction condition elicit robust anticipatory pursuit eye movements that always go in the 2D object-motion direction, not the 1D edge-motion direction. These results demonstrate that predictive signals about target motion cannot be used for an efficient integration of ambiguous velocity signals at pursuit initiation.
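The 1D bias described above is geometric: of the two directions orthogonal to a tilted line, the visual system initially signals the one consistent with the object's motion. A small helper sketching that geometry, with angle conventions chosen purely for illustration:

```python
import math

def edge_motion_direction_deg(object_dir_deg, line_orientation_deg):
    """1D edge motion is orthogonal to the line; of the two normals,
    return the one consistent with (within 90 deg of) the true object
    direction. Angles in degrees, counterclockwise from rightward."""
    phi = line_orientation_deg + 90.0
    # Signed angular difference between object direction and this normal.
    diff = (object_dir_deg - phi + 180.0) % 360.0 - 180.0
    if abs(diff) > 90.0:
        phi += 180.0  # flip to the opposite normal
    return phi % 360.0
```

For a line tilted at 45° moving rightward (0°), the initial 1D signal points at 315° (down-right); the difference between that and the true 0° direction is the transient bias that pursuit must correct.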


Perception ◽  
1996 ◽  
Vol 25 (1_suppl) ◽  
pp. 150-150 ◽  
Author(s):  
L S Stone ◽  
J Lorenceau ◽  
B R Beutter

There has long been qualitative evidence that humans can pursue an object defined only by the motion of its parts (e.g., Steinbach, 1976, Vision Research 16, 1371–1375). We explored this quantitatively using an occluded diamond stimulus (Lorenceau and Shiffrar, 1992, Vision Research 32, 263–275). Four subjects (one naive) tracked a line-figure diamond moving along an elliptical path (0.9 Hz) either clockwise (CW) or counterclockwise (CCW) behind either an X-shaped aperture (CROSS) or two vertical rectangular apertures (BARS), which obscured the corners. Although the stimulus consisted of only four line segments (108 cd m−2) moving within a visible aperture (0.2 cd m−2) behind a foreground (38 cd m−2), it is largely perceived as a coherently moving diamond. The intersaccadic portions of eye-position traces were fitted with sinusoids. All subjects tracked object motion with considerable temporal accuracy. The mean phase lag was 5°/6° (CROSS/BARS) and the mean relative phase between the horizontal and vertical components was +95°/+92° (CW) and −85°/−75° (CCW), which is close to perfect. Furthermore, a χ² analysis showed that 56% of BARS trials were consistent with tracking the correct elliptical shape (p < 0.05), although segment motion was purely vertical. These data disprove the main tenet of most models of pursuit: that it is a system that seeks to minimise retinal image motion through negative feedback. Rather, the main drive must be a visual signal which has already integrated spatiotemporal retinal information into an object-motion signal.
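The phase analysis can be sketched by demodulating each eye-position component at the known target frequency; the phase difference between the horizontal and vertical fits (near +90° vs −90°) then distinguishes CW from CCW elliptical tracking. A minimal sketch, assuming a clean sinusoid sampled over whole periods rather than real intersaccadic traces:

```python
import math

def fit_phase_deg(trace, freq_hz, dt_s):
    """Phase (deg) of a known-frequency sinusoid, trace ~ cos(2*pi*f*t + phase),
    recovered by demodulation (equivalent to least squares over whole periods)."""
    c = s = 0.0
    for i, x in enumerate(trace):
        w = 2.0 * math.pi * freq_hz * i * dt_s
        c += x * math.cos(w)
        s += x * math.sin(w)
    return math.degrees(math.atan2(-s, c))

def relative_phase_deg(phase_h_deg, phase_v_deg):
    """Horizontal-minus-vertical phase, wrapped to [-180, 180); values near
    +90 or -90 correspond to the two senses of elliptical tracking."""
    return (phase_h_deg - phase_v_deg + 180.0) % 360.0 - 180.0
```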


2000 ◽  
Vol 17 (2) ◽  
pp. 263-271 ◽  
Author(s):  
HIROYUKI UCHIYAMA ◽  
TAKAHIDE KANAYA ◽  
SHOICHI SONOHATA

One type of retinal ganglion cells prefers object motion in a particular direction. Neuronal mechanisms for the computation of motion direction are still unknown. We quantitatively mapped excitatory and inhibitory regions of receptive fields for directionally selective retinal ganglion cells in the Japanese quail, and found that the inhibitory regions are displaced about 1–3 deg toward the side where the null sweep starts, relative to the excitatory regions. Directional selectivity thus results from delayed transient suppression exerted by the nonconcentrically arranged inhibitory regions, and not by local directional inhibition as hypothesized by Barlow and Levick (1965).


2019 ◽  
Author(s):  
Tatjana Seizova-Cajic ◽  
Sandra Ludvigsson ◽  
Birger Sourander ◽  
Melinda Popov ◽  
Janet L Taylor

An age-old hypothesis proposes that object motion across the receptor surface organizes sensory maps (Lotze, 19th century): skin patches learn their relative positions from the order in which they are stimulated during motion events. We tested this idea by reversing the local motion within a 6-point apparent motion sequence along the forearm. In the ‘Scrambled’ sequence, the two middle locations were touched in reversed order (1-2-4-3-5-6, followed by 6-5-3-4-2-1, in a continuous loop). This created a local acceleration, a double U-turn, within an otherwise constant-velocity motion, as if the physical locations of skin patches 3 and 4 had been surgically swapped. The control condition, ‘Orderly’, proceeded at constant velocity with an inter-stimulus onset interval (ISOI) of 120 ms. In the test, our twenty participants reported motion direction between the two middle tactors, presented on their own at 75-, 120- or 190-ms ISOI. Results show degraded motion discrimination following exposure to the Scrambled pattern: for the 120-ms test stimulus, it was 0.31 d′ weaker than following Orderly conditioning (p = .007). This is the aftereffect we expected; its maximal expression would be a complete reversal in perceived motion direction between locations 3 and 4 for either motion direction. We propose that the somatosensory system was beginning to ‘correct’ the reversed local motion, to uncurl and remove the U-turns that always occurred on the same part of the receptor surface. Such de-correlation between accelerations and their location on the sensory surface is one possible mechanism for the organization of sensory maps.
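The 0.31 d′ difference quoted above is a difference in signal-detection sensitivity. As a sketch of how d′ is obtained from direction-discrimination responses (treating one motion direction as the 'signal'), under the standard equal-variance Gaussian model:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Sensitivity d' from hit and false-alarm rates under the
    equal-variance Gaussian model: d' = z(H) - z(FA)."""
    nd = NormalDist()
    return nd.inv_cdf(hit_rate) - nd.inv_cdf(fa_rate)
```

A per-condition d′ computed this way for the Scrambled and Orderly blocks would differ by the reported 0.31 when discrimination is correspondingly degraded.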


2016 ◽  
Vol 115 (3) ◽  
pp. 1703-1712 ◽  
Author(s):  
S. McIntyre ◽  
I. Birznieks ◽  
R. M. Vickery ◽  
A. O. Holcombe ◽  
T. Seizova-Cajic

Neurophysiological studies in primates have found that direction-sensitive neurons in the primary somatosensory cortex (SI) generally increase their response rate with increasing speed of object motion across the skin and show little evidence of speed tuning. We employed psychophysics to determine whether human perception of motion direction could be explained by features of such neurons and whether evidence can be found for a speed-tuned process. After adaptation to motion across the skin, a subsequently presented dynamic test stimulus yields an impression of motion in the opposite direction. We measured the strength of this tactile motion aftereffect (tMAE) induced with different combinations of adapting and test speeds. Distal-to-proximal or proximal-to-distal adapting motion was applied to participants' index fingers using a tactile array, after which participants reported the perceived direction of a bidirectional test stimulus. An intensive code for speed, like that observed in SI neurons, predicts greater adaptation (and a stronger tMAE) the faster the adapting speed, regardless of the test speed. In contrast, speed tuning of direction-sensitive neurons predicts the greatest tMAE when the adapting and test stimuli have matching speeds. We found that the strength of the tMAE increased monotonically with adapting speed, regardless of the test speed, showing no evidence of speed tuning. Our data are consistent with neurophysiological findings that suggest an intensive code for speed along the motion processing pathways comprising neurons sensitive both to speed and direction of motion.


2005 ◽  
Vol 58 (3) ◽  
pp. 467-506 ◽  
Author(s):  
Simone Bosbach ◽  
Wolfgang Prinz ◽  
Dirk Kerzel

Five experiments were carried out to test whether (task-irrelevant) motion information provided by a stimulus changing its position over time would affect manual left–right responses. So far, some studies reported direction-based Simon effects whereas others did not. In Experiment 1a, a reliable direction-based effect occurred, which was not modulated by the response mode—that is, by whether participants responded by pressing one of two keys or more dynamically by moving a stylus in a certain direction. Experiments 1a, 1b, and 2 lend support to the idea that observers use the starting position of target motion as a reference for spatial coding. That is, observers might process object motion as a shift of position relative to the starting position and not as directional information. The dominance of relative position coding could also be shown in Experiment 3, in which relative position was pitted against motion direction by presenting a static and dynamic stimulus at the same time. Additionally, we explored the role of eye movements in stimulus–response compatibility and showed in Experiments 1b and 3a that the execution or preparation of saccadic eye movements—as proposed by an attention-shifting account—is not necessary for a Simon effect to occur.


Vision ◽  
2019 ◽  
Vol 3 (2) ◽  
pp. 13 ◽
Author(s):  
Pearl Guterman ◽  
Robert Allison

When the head is tilted, an objectively vertical line viewed in isolation is typically perceived as tilted. We explored whether this shift also occurs when viewing global motion displays perceived as either object-motion or self-motion. Observers stood and lay left side down while viewing (1) a static line, (2) a random-dot display of 2-D (planar) motion or (3) a random-dot display of 3-D (volumetric) global motion. On each trial, the line orientation or motion direction was tilted from the gravitational vertical and observers indicated whether the tilt was clockwise or counter-clockwise from the perceived vertical. Psychometric functions were fit to the data and shifts in the point of subjective verticality (PSV) were measured. When the whole body was tilted, the perceived tilt of both a static line and the direction of optic flow were biased in the direction of the body tilt, demonstrating the so-called A-effect. However, we found significantly larger shifts for the static line than volumetric global motion as well as larger shifts for volumetric displays than planar displays. The A-effect was larger when the motion was experienced as self-motion compared to when it was experienced as object-motion. Discrimination thresholds were also more precise in the self-motion compared to object-motion conditions. Different magnitude A-effects for the line and motion conditions, and for object- and self-motion, may be due to differences in how idiotropic (body) and vestibular signals are combined, particularly in the case of vection, which occurs despite visual–vestibular conflict.
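Shifts in the PSV are read off a psychometric function fitted to the clockwise/counter-clockwise judgments. A minimal sketch using a probit-linearized cumulative-Gaussian fit (a simplification of the maximum-likelihood fits typically used; the clamping of extreme proportions is an illustrative choice):

```python
from statistics import NormalDist

def fit_psv(tilts_deg, p_cw):
    """Fit a cumulative-Gaussian psychometric function by linear regression
    in probit space: z(p) = (tilt - PSV) / sigma. Returns (PSV, sigma),
    where PSV is the 50% ('perceived vertical') point."""
    nd = NormalDist()
    # Clamp proportions away from 0 and 1 so the probit transform is finite.
    zs = [nd.inv_cdf(min(max(p, 0.01), 0.99)) for p in p_cw]
    n = len(tilts_deg)
    mx = sum(tilts_deg) / n
    mz = sum(zs) / n
    sxx = sum((x - mx) ** 2 for x in tilts_deg)
    sxz = sum((x - mx) * (z - mz) for x, z in zip(tilts_deg, zs))
    slope = sxz / sxx
    intercept = mz - slope * mx
    return -intercept / slope, 1.0 / slope
```

The A-effect reported above would appear as a systematic displacement of the fitted PSV in the direction of body tilt, and the sharper self-motion thresholds as a smaller fitted sigma.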

