Picture Perception Reveals Rules of 3D Scene Inference

2018 ◽  
Vol 18 (10) ◽  
pp. 387
Author(s):  
Erin Koch ◽  
Famya Baig ◽  
Qasim Zaidi
2018 ◽  
Vol 115 (30) ◽  
pp. 7807-7812

Pose estimation of objects in real scenes is critically important for biological and machine visual systems, but little is known about how humans infer 3D poses from 2D retinal images. We show unexpectedly strong agreement between the 3D poses that different observers estimate from the same pictures. We further show that all observers apply the same inferential rule from all viewpoints, utilizing the geometrically derived back-transform from retinal images to actual 3D scenes. Pose estimates are altered by a fronto-parallel bias, and by image distortions that appear to tilt the ground plane. We used pictures of single sticks or pairs of joined sticks taken from different camera angles. Observers viewed these from five directions and matched the perceived pose of each stick by rotating an arrow on a horizontal touchscreen. The projection of each 3D stick to the 2D picture, and then onto the retina, is described by an invertible trigonometric expression. The inverted expression yields the back-projection for each object pose, camera elevation, and observer viewpoint. We show that a model that uses the back-projection, modulated by just two free parameters, explains 560 pose estimates per observer. By considering changes in retinal image orientations due to position and elevation of limbs, the model also explains perceived limb poses in a complex scene of two bodies lying on the ground. These inferential rules simply explain both perceptual invariance and dramatic distortions in poses of real and pictured objects, and show the benefits of incorporating the projective geometry of light into mental inferences about 3D scenes.
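A minimal sketch of the projection geometry may help. Assuming an orthographic camera at elevation φ above the ground plane, a stick lying on the ground at pose Ω (measured from the depth axis) projects to an image orientation θ with tan θ = tan Ω / sin φ; inverting this relation gives a back-transform of the kind the model is built on. The function names and the orthographic simplification below are illustrative assumptions, not the paper's full perspective treatment:

```python
import math

def image_orientation(pose_deg, elevation_deg):
    """Image orientation (degrees from the picture's vertical axis) of a
    ground-plane stick at pose omega, seen by an orthographic camera at
    the given elevation: tan(theta) = tan(omega) / sin(phi)."""
    omega = math.radians(pose_deg)
    phi = math.radians(elevation_deg)
    return math.degrees(math.atan2(math.sin(omega),
                                   math.cos(omega) * math.sin(phi)))

def back_projected_pose(theta_deg, elevation_deg):
    """Geometric back-transform: recover the 3D pose from the retinal
    orientation by inverting the projection above
    (tan(omega) = tan(theta) * sin(phi))."""
    theta = math.radians(theta_deg)
    phi = math.radians(elevation_deg)
    return math.degrees(math.atan2(math.sin(theta) * math.sin(phi),
                                   math.cos(theta)))
```

For a 30° camera elevation, a stick at a 45° pose projects to roughly 63° in the image, and the back-transform recovers exactly 45°; the paper's model then modulates such a back-transform with its two free parameters (e.g., the fronto-parallel bias).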



Photonics ◽  
2021 ◽  
Vol 8 (8) ◽  
pp. 298
Author(s):  
Juan Martinez-Carranza ◽  
Tomasz Kozacki ◽  
Rafał Kukołowicz ◽  
Maksymilian Chlipala ◽  
Moncy Sajeev Idicula

A computer-generated hologram (CGH) allows synthesizing views of a 3D scene of real or virtual objects. Additionally, a CGH with a wide-angle view offers the possibility of a 3D experience of large objects. An important feature to consider when calculating CGHs is occlusion between surfaces, because it provides correct perception of the encoded 3D scene. Although there is a vast family of occlusion culling algorithms, none of these, to the best of our knowledge, considers occlusion when calculating wide-angle CGHs. For that reason, in this work we propose an occlusion culling algorithm for wide-angle CGHs that uses the Fourier-type phase-added stereogram (PAS). We show that the segmentation properties of the PAS can be used to set efficient conditions for culling hidden areas. The method is efficient because it enables processing of dense point clouds; the investigated case has 24 million point sources. Moreover, the quality of the occluded wide-angle CGHs is tested by two propagation methods: the first quantifies the quality of point reproduction in the calculated CGH, while the second enables quality assessment of the occlusion culling operation on an object of complex shape. Finally, the applicability of the proposed occlusion PAS algorithm is tested by synthesizing wide-angle CGHs that are reconstructed both numerically and optically.
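As a toy illustration of the two ingredients involved (not the authors' PAS algorithm, whose segment-wise culling conditions are the paper's contribution), one can first cull hidden points by keeping only the nearest point per angular bin as seen from the hologram, then superpose spherical-wave contributions from the surviving points. The names `cull_hidden` and `cgh_field`, the binning scheme, and the point format are all illustrative assumptions:

```python
import cmath
import math

def cull_hidden(points, center=(0.0, 0.0, 0.0), bin_size=0.01):
    """Crude occlusion culling: as seen from `center`, keep only the
    nearest point in each angular bin; farther points in the same bin
    are treated as hidden.  Points are (x, y, z, amplitude) tuples."""
    cx, cy, cz = center
    nearest = {}
    for p in points:
        dx, dy, dz = p[0] - cx, p[1] - cy, p[2] - cz
        dist = math.sqrt(dx * dx + dy * dy + dz * dz)
        key = (round(math.atan2(dx, dz) / bin_size),
               round(math.atan2(dy, dz) / bin_size))
        if key not in nearest or dist < nearest[key][0]:
            nearest[key] = (dist, p)
    return [p for _, p in nearest.values()]

def cgh_field(points, plane_z, xs, ys, wavelength=633e-9):
    """Brute-force point-source CGH: superpose spherical waves
    exp(i*k*r)/r from each visible point on a hologram plane
    sampled at the grid xs x ys."""
    k = 2.0 * math.pi / wavelength
    field = [[0j for _ in xs] for _ in ys]
    for px, py, pz, amp in points:
        for iy, y in enumerate(ys):
            for ix, x in enumerate(xs):
                r = math.sqrt((x - px) ** 2 + (y - py) ** 2
                              + (plane_z - pz) ** 2)
                field[iy][ix] += amp * cmath.exp(1j * k * r) / r
    return field
```

A PAS-based method replaces this brute-force summation with segment-wise Fourier-domain synthesis, which is what makes clouds on the order of 24 million points tractable.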

