Correspondence matching in apparent motion: evidence for three-dimensional spatial representation

Science ◽  
1986 ◽  
Vol 233 (4771) ◽  
pp. 1427-1429 ◽  
Author(s):  
M Green ◽  
J. Odom


Perception ◽  
1993 ◽  
Vol 22 (12) ◽  
pp. 1441-1465 ◽  
Author(s):  
Jeffrey C Liter ◽  
Myron L Braunstein ◽  
Donald D Hoffman

Five experiments were conducted to examine constraints used to interpret structure-from-motion displays. Theoretically, two orthographic views of four or more points in rigid motion yield a one-parameter family of rigid three-dimensional (3-D) interpretations. Additional views yield a unique rigid interpretation. Subjects viewed two-view and thirty-view displays of five-point objects in apparent motion. The subjects selected the best 3-D interpretation from a set of 89 compatible alternatives (experiments 1–3) or judged depth directly (experiment 4). In both cases the judged depth increased when relative image motion increased, even when the increased motion was due to increased simulated rotation. Subjects also judged rotation to be greater when either simulated depth or simulated rotation increased (experiment 4). The results are consistent with a heuristic analysis in which perceived depth is determined by relative motion.


2003 ◽  
Vol 26 (4) ◽  
pp. 417-418
Author(s):  
Dan Lloyd

The “Gestalt Bubble” model of Lehar is not supported by the evidence offered. The author invalidly concludes that spatial properties in experience entail an explicit volumetric spatial representation in the brain. The article also exaggerates the extent to which phenomenology reveals a completely three-dimensional scene in perception.


Perception ◽  
1997 ◽  
Vol 26 (1_suppl) ◽  
pp. 55-55
Author(s):  
S Yukumatsu ◽  
K Bingushi

To study the effect of binocular disparity on apparent motion, we measured the cumulative time of its breakdown during a 30 s fixation viewing period. Two light spots, both on the left side of the fixation point, were alternately presented one by one on a CRT display (unilateral condition). These spots were binocularly disparate and viewed through a stereoscope. While one spot near the fixation point was presented on a zero disparity plane, the other spot (more peripheral) was either on a zero, uncrossed, or crossed disparity plane, so that three-dimensional motion could be seen depending on disparity values. We found that the duration of the breakdown of apparent motion was longer when uncrossed and zero-disparity spots were paired to produce apparent motion, and it was shorter when crossed and zero-disparity spots were paired. However, such disparity-specific tendencies were not obtained when the two spots were presented on both sides of the fixation point (bilateral condition). The disparity-specific tendencies in the unilateral condition can be explained by assuming that three-dimensional apparent motion that is consistent with the motion perspective may be stable because we experience it more frequently. Thus, we assume that perception of motion, both apparent and real, may develop through everyday experiences of moving to and fro in the environment rather than seeing objects move.


2013 ◽  
Vol 36 (5) ◽  
pp. 553-554 ◽  
Author(s):  
David M. Kaplan

Jeffery et al. characterize the egocentric/allocentric distinction as discrete. But paradoxically, much of the neural and behavioral evidence they adduce undermines a discrete distinction. More strikingly, their positive proposal – the bicoded map hypothesis – reflects a more complex interplay between egocentric and allocentric coding than they acknowledge. Properly interpreted, their proposal about three-dimensional spatial representation contributes to recent work on embodied cognition.


Perception ◽  
1983 ◽  
Vol 12 (3) ◽  
pp. 305-312 ◽  
Author(s):  
Kathleen Mutch ◽  
Isabel M Smith ◽  
Albert Yonas

The problem of how the visual system matches corresponding inputs from one instant to the next to produce the perception of motion has been experimentally examined. The specific concern was whether this correspondence problem is solved prior to the interpretation of three-dimensional distance. Observers judged the degree of apparent motion between pairs of lights in a conflicting motion display. Spatial separation of the lights was varied in two and three dimensions in order to assess whether retinal distance, actual depth, or some combination of these provided critical information for correspondence. The results support Ullman's contention that only two-dimensional (retinal) distances are used in establishing correspondence in motion perception.


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Naived George Eapen ◽  
Debabrata Samanta ◽  
Manjit Kaur ◽  
Jehad F. Al-Amri ◽  
Mehedi Masud

The increase in computational power in recent years has opened a new door for image processing techniques. Three-dimensional object recognition, identification, pose estimation, and mapping are becoming popular. The need to map real-world objects into three-dimensional spatial representations is growing rapidly, especially given the great strides made in virtual reality and augmented reality over the past decade. This paper discusses an algorithm to convert an array of captured images into estimated 3D coordinates of their external mappings. Elementary methods for generating three-dimensional models are also discussed. This framework will help the community estimate the three-dimensional coordinates of a convex-shaped object from a series of two-dimensional images. The resulting model could be further processed to increase its resemblance to the input object in terms of shape, contour, and texture.


Perception ◽  
1986 ◽  
Vol 15 (5) ◽  
pp. 619-625 ◽  
Author(s):  
Kvetoslav Prazdny

Experiments are reported which show that three-dimensional structure can be perceived from two-dimensional image motions carried by objects defined solely by the differences in binocular and/or temporal correlation (ie disparity or motion discontinuities). This demonstrates that the kinetic depth effect is independent of motion detection in the luminance domain and that its relevant input comes from detectors based on some form of identity preservation of objects or features over time, ie the long-range processes of apparent motion.


Perception ◽  
2019 ◽  
Vol 49 (1) ◽  
pp. 61-80 ◽  
Author(s):  
Harry H. Haladjian ◽  
Stuart Anstis ◽  
Mark Wexler ◽  
Patrick Cavanagh

In the visual quartet, alternating diagonal pairs of dots produce apparent motion horizontally or vertically, depending on proximity. Here, we studied a tactile quartet where vibrating tactors were attached to the thumbs and index fingers of both hands. Apparent motion was felt either within hands (from index finger to thumb) or between hands. Participants adjusted the distance between their hands to find the point where motion changed directions. Surprisingly, switchovers occurred when between-hand distances were as much as twice that of within-hand distances—a general bias that was also found for tactile judgments of static distances. This expansion of within-hand felt distances was again seen when lights were placed on the hands rather than vibrating tactors. Importantly, switchover points were similar when the hands were placed at different depths, indicating that representations governing tactile motion were in perceptual three-dimensional space, not retinal two-dimensional space. This was true whether the quartets were visual stimuli on the hands or were purely visual on a monitor, suggesting that proximity is generally determined in three-dimensional coordinates for motion perception. Finally, the similarity of visual and tactile results suggests a common computation for apparent motion, albeit with different built-in distance biases for separate modalities.


2013 ◽  
Vol 36 (5) ◽  
pp. 523-543 ◽  
Author(s):  
Kathryn J. Jeffery ◽  
Aleksandar Jovalekic ◽  
Madeleine Verriotis ◽  
Robin Hayman

The study of spatial cognition has provided considerable insight into how animals (including humans) navigate on the horizontal plane. However, the real world is three-dimensional, having a complex topography including both horizontal and vertical features, which presents additional challenges for representation and navigation. The present article reviews the emerging behavioral and neurobiological literature on spatial cognition in non-horizontal environments. We suggest that three-dimensional spaces are represented in a quasi-planar fashion, with space in the plane of locomotion being computed separately and represented differently from space in the orthogonal axis – a representational structure we have termed “bicoded.” We argue that the mammalian spatial representation in surface-travelling animals comprises a mosaic of these locally planar fragments, rather than a fully integrated volumetric map. More generally, this may be true even for species that can move freely in all three dimensions, such as birds and fish. We outline the evidence supporting this view, together with the adaptive advantages of such a scheme.


2013 ◽  
Vol 36 (5) ◽  
pp. 559-560 ◽  
Author(s):  
Achille Pasqualotto ◽  
Michael J. Proulx

Jeffery et al. suggest that three-dimensional environments are not represented according to their volumetric properties, but in a quasi-planar fashion. Here we take into consideration the role of visual experience and the use of technology for spatial learning to better understand the nature of the preference for horizontal over vertical spatial representation.

