Ambiguous Pictorial Depth Cues and Perceptions of Nonrigid Motion in the Three-Loop Figure

Perception ◽  
1994 ◽  
Vol 23 (9) ◽  
pp. 1049-1062
Author(s):  
Jack Broerse ◽  
Rongxin Li ◽  
Roderick Ashton

The three-loop figure is a two-dimensional (2-D) pattern that generates (mis)perceptions of nonrigid three-dimensional (3-D) structure when rotated about its centre. Such observations have been described as counterexamples to the principle whereby a moving object is presumed to be rigid, provided that a rigid interpretation is possible (ie the ‘rigidity constraint’). In the present investigation we demonstrated that stationary three-loop figures exhibit many of the classic properties of multistable/ambiguous figures, with any one of several possible 3-D configurations being reported at any one instant. Further investigation revealed that perceived nonrigidity during rotation was markedly reduced (and rigidity enhanced) when the figure was modified with static pictorial depth cues (eg shading, interposition). These cues had no effect on the overall proportion of time that observers reported 3-D organisations in stationary versions of the figure, but significantly reduced the frequency of perceptual reorganisation, and increased the duration for reporting a particular organisation. Since each of the perceived 3-D structures in a stationary ambiguous 2-D figure has a unique kinetic counterpart (ie rigid transformation), we attribute the nonrigid structure perceived when the figure rotates to the integration of these otherwise inconsistent kinetic components; and have further illustrated this with modified versions of a Penrose impossible triangle. Under kinetic versions of the classical size/distance invariance hypothesis, the rigidity constraint may be considered to represent a special instance of size/shape constancy, in which case counterexamples involving (mis)perceptions of nonrigid structure are comparable to other well-known exceptions to such principles of minimum object change (eg classical illusions).

2013 ◽  
Vol 1 (1-2) ◽  
pp. 49-64 ◽  
Author(s):  
Robert Pepperell ◽  
Anja Ruschkowski

‘Double images’ are a little-noticed feature of human binocular vision, caused by the non-convergence of the eyes outside the point of fixation. Double vision, or physiological diplopia, is closely linked to the perception of depth in natural vision, as its perceived properties vary with the proximity of the stimulus to the viewer. Very little attention, however, has been paid to double images in art or in scientific studies of pictorial depth. Double images have rarely been depicted and do not appear among the commonly cited monocular depth cues. In this study we discuss some attempts by artists to capture the doubled appearance of objects in pictures, along with some of the relevant scientific work on double vision. We then present the results of a study designed to test whether the inclusion of double images in two-dimensional pictures can enhance the illusion of three-dimensional space. Our results suggest that double images can significantly enhance depth perception in pictures when combined with other depth cues such as blur. We conclude that double images could be added to the list of depth cues available to those wanting to create a greater sense of depth in pictures.


2020 ◽  
Vol 3 (1) ◽  
pp. 10501-1-10501-9
Author(s):  
Christopher W. Tyler

For the visual world in which we operate, the core issue is to conceptualize how its three-dimensional structure is encoded through the neural computation of multiple depth cues and their integration into a unitary depth structure. One approach to this issue is the full Bayesian model of scene understanding, but this is shown to require selection from an implausibly large number of possible scenes. An alternative approach is to propagate the implied depth-structure solution for the scene through the “belief propagation” algorithm on general probability distributions. However, a more efficient model of local slant propagation is developed as an alternative.

The overall depth percept must be derived from the combination of all available depth cues, but a simple linear summation rule across, say, a dozen different depth cues would massively overestimate the perceived depth in the scene in cases where each cue alone provides a close-to-veridical depth estimate. On the other hand, a Bayesian averaging or “modified weak fusion” model for depth-cue combination does not provide for the observed enhancement of perceived depth from weak depth cues. Thus, current models do not account for the empirical properties of perceived depth from multiple depth cues.

The present analysis shows that these problems can be addressed by an asymptotic, or hyperbolic Minkowski, approach to cue combination. With appropriate parameters, this first-order rule gives strong summation for a few depth cues, but the effect of an increasing number of cues beyond that remains too weak to account for the available degree of perceived depth magnitude. Finally, an accelerated asymptotic rule is proposed to match the empirical strength of perceived depth as measured, with appropriate behavior for any number of depth cues.
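To make the contrast between the combination rules concrete, here is a minimal Python sketch comparing naive linear summation with a generic Minkowski-norm combination of per-cue depth estimates. The exponent `p` and the unit cue values are illustrative assumptions, not the fitted parameters of the asymptotic or accelerated rules proposed in the paper.

```python
import numpy as np

def linear_sum(cues):
    """Naive linear summation: overshoots badly when each of many
    cues already gives a near-veridical estimate on its own."""
    return float(np.sum(cues))

def minkowski_combine(cues, p=2.0):
    """Minkowski-norm combination: strong summation for a few cues,
    with growth that flattens as more cues are added."""
    cues = np.asarray(cues, dtype=float)
    return float(np.sum(cues ** p) ** (1.0 / p))

# A dozen cues, each near-veridical (1.0) in isolation.
cues = np.ones(12)
print(linear_sum(cues))               # 12.0 -- massive overestimate
print(minkowski_combine(cues, p=2))   # ~3.46
print(minkowski_combine(cues, p=6))   # ~1.51 -- asymptotes toward 1.0
```

At p = 1 this rule reduces to linear summation, while large p approaches a winner-take-all maximum; the paper's argument is that even a well-chosen fixed p grows too slowly with additional cues, which is what motivates the accelerated asymptotic rule.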


Author(s):  
Jida Huang ◽  
Tsz-Ho Kwok ◽  
Chi Zhou

With advances in hardware and process development, additive manufacturing is realizing a new paradigm: mass customization. Mass customization involves massive amounts of human-related data, but mass-customized products also share many similarities. Reusing information can therefore facilitate mass customization and create unprecedented opportunities for advancing the theory, method, and practice of design for mass-customized products. To enable information reuse, different models have to be aligned so that their similarity can be identified. This alignment is commonly known as global registration: finding an optimal rigid transformation that aligns two three-dimensional shapes (scene and model) without any assumptions about their initial positions. The Super 4-Points Congruent Sets (S4PCS) algorithm is popular for this shape registration. While S4PCS performs the registration using sets of 4 coplanar points, we find that incorporating the volumetric information of the models can improve the robustness and efficiency of the algorithm, both of which are particularly important for mass customization. In this paper, we propose a novel algorithm, Volumetric 4PCS (V4PCS), which extends the 4 coplanar points to non-coplanar ones for global registration, and we theoretically demonstrate that the computational complexity is significantly reduced. Several typical human-centered applications, such as tooth aligners and hearing aids, are investigated, and the proposed method is compared with S4PCS. The experimental results show that V4PCS can achieve up to a 20-times speedup and can successfully compute the valid transformation with a very limited number of sample points.
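The “optimal rigid transformation” at the heart of this family of methods can be illustrated with the standard primitive that 4PCS-style pipelines rely on once candidate correspondences are hypothesized: the SVD-based (Kabsch) least-squares rigid alignment. The NumPy sketch below shows that primitive only; it is not an implementation of S4PCS or V4PCS themselves, whose congruent-set search and verification stages are beyond a few lines.

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) mapping points P onto Q,
    via the SVD-based Kabsch method. P, Q: (N, 3) arrays of
    corresponding points."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)      # centroids
    H = (P - cP).T @ (Q - cQ)                    # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so that det(R) = +1.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

# Sanity check: recover a known rotation/translation from 4 points.
rng = np.random.default_rng(0)
P = rng.normal(size=(4, 3))                      # non-coplanar sample base
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -1.0, 2.0])
Q = P @ R_true.T + t_true
R, t = best_rigid_transform(P, Q)
assert np.allclose(R, R_true) and np.allclose(t, t_true)
```

In a 4PCS-style pipeline, transforms like this are hypothesized from small point bases (coplanar quadruples for S4PCS, non-coplanar ones for V4PCS) and then verified against the full shapes.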


Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 3949 ◽  
Author(s):  
Wei Li ◽  
Mingli Dong ◽  
Naiguang Lu ◽  
Xiaoping Lou ◽  
Peng Sun

An extended robot–world and hand–eye calibration method is proposed in this paper to evaluate the transformation relationship between the camera and the robot device. This approach is suitable for mobile or medical robotics applications, where precise, expensive, or unsterile calibration objects, or sufficient movement space, cannot be made available at the work site. Firstly, a mathematical model is established to formulate the robot-gripper-to-camera rigid transformation and the robot-base-to-world rigid transformation using the Kronecker product. Subsequently, a sparse bundle adjustment is introduced to optimize the robot–world and hand–eye calibration, as well as the reconstruction results. Finally, a validation experiment with two kinds of real data sets is designed to demonstrate the effectiveness and accuracy of the proposed approach. The relative translation error of the rigid transformation is less than 8/10,000 for a Denso robot moving within a range of 1.3 m × 1.3 m × 1.2 m, and the mean distance-measurement error after three-dimensional reconstruction is 0.13 mm.
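For readers unfamiliar with the Kronecker-product formulation, the sketch below shows a standard linear solution of the robot–world/hand–eye equation A_i X = Z B_i, where X is the robot-gripper-to-camera transform and Z the robot-base-to-world transform, matching the abstract's terms. It uses the identity vec(AXB) = (Bᵀ ⊗ A) vec(X) to stack the rotation constraints into one homogeneous system. The variable names and the closed-form-then-least-squares structure are illustrative assumptions, and the paper's sparse-bundle-adjustment refinement is not reproduced here.

```python
import numpy as np

def solve_ax_zb(As, Bs):
    """Linear robot-world/hand-eye solution for A_i X = Z B_i.
    As, Bs: lists of 4x4 homogeneous transforms (camera and robot poses).
    Returns (X, Z) as 4x4 transforms. Illustrative sketch only; a real
    pipeline would follow this with nonlinear refinement."""
    I3 = np.eye(3)
    M = []
    for A, B in zip(As, Bs):
        RA, RB = A[:3, :3], B[:3, :3]
        # Rotation constraint RA @ RX = RZ @ RB, vectorized column-major
        # via vec(AXB) = kron(B.T, A) vec(X):
        M.append(np.hstack([np.kron(I3, RA), -np.kron(RB.T, I3)]))
    M = np.vstack(M)
    _, _, Vt = np.linalg.svd(M)
    v = Vt[-1]                                   # null-space direction
    RX = v[:9].reshape(3, 3, order='F')
    RZ = v[9:].reshape(3, 3, order='F')

    def to_rotation(R):
        # Project a scaled/noisy 3x3 matrix onto the nearest rotation.
        U, _, Vt2 = np.linalg.svd(R)
        S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt2))])
        return U @ S @ Vt2

    s = np.sign(np.linalg.det(RX))               # fix the overall sign
    RX, RZ = to_rotation(s * RX), to_rotation(s * RZ)

    # Translations: RA @ tX - tZ = RZ @ tB - tA (linear least squares).
    C = np.vstack([np.hstack([A[:3, :3], -I3]) for A in As])
    d = np.concatenate([RZ @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    t, *_ = np.linalg.lstsq(C, d, rcond=None)
    X, Z = np.eye(4), np.eye(4)
    X[:3, :3], X[:3, 3] = RX, t[:3]
    Z[:3, :3], Z[:3, 3] = RZ, t[3:]
    return X, Z
```

In practice a linear estimate like this serves as the initialization that a bundle-adjustment stage, such as the one described in the abstract, then refines.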

