Self-calibration three-dimensional light field display based on scalable multi-LCDs

2012 ◽  
Vol 20 (12) ◽  
pp. 653-660 ◽  
Author(s):  
Yifan Peng ◽  
Haifeng Li ◽  
Rui Wang ◽  
Qing Zhong ◽  
Xiang Han ◽  
...  

Author(s):  
Ying Yuan ◽  
Xiaorui Wang ◽  
Yang Yang ◽  
Hang Yuan ◽  
Chao Zhang ◽  
...  

Abstract The full-chain system performance characterization is very important for the optimization design of an integral imaging three-dimensional (3D) display system. In this paper, the acquisition and display processes of a 3D scene are treated as a complete light field information transmission process. A full-chain performance characterization model of an integral imaging 3D display system is established, which uses the 3D voxel, the image depth, and the field of view of the reconstructed images as the 3D display quality evaluation indicators. Unlike most previous work based on the ideal integral imaging model, the proposed full-chain performance characterization model accounts for the diffraction effect and optical aberration of the microlens array, the sampling effect of the detector, 3D image data scaling, and the human visual system, and can therefore accurately describe the actual 3D light field transmission and convergence characteristics. The relationships between the key parameters of an integral imaging 3D display system and the 3D display quality evaluation indicators are analyzed and discussed through simulation experiments. The results will be helpful for the optimization design of a high-quality integral imaging 3D display system.
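For orientation, the scale of these evaluation indicators in the *ideal* integral-imaging model (the simplified model that the full-chain characterization deliberately goes beyond) follows from pinhole geometry alone. A minimal sketch, with illustrative parameter values that are assumptions rather than figures from the paper:

```python
# Ideal integral-imaging geometry, ignoring the diffraction, aberration,
# detector sampling, and data-scaling effects the full-chain model adds.
import math

def lateral_voxel_size(pixel_pitch, gap, depth):
    """Projected size of one display pixel at the reconstruction depth.
    pixel_pitch: pitch of the display pixels behind each microlens
    gap:         microlens-to-display gap
    depth:       distance from the lens array to the reconstructed voxel
    (all in the same length unit)."""
    return pixel_pitch * depth / gap

def field_of_view(lens_pitch, gap):
    """Full viewing angle of one elemental image, in degrees."""
    return 2 * math.degrees(math.atan(lens_pitch / (2 * gap)))

# Hypothetical values: 0.1 mm pixels, 1 mm lens pitch, 3 mm gap, 30 mm depth.
voxel = lateral_voxel_size(0.1, 3.0, 30.0)  # ~1 mm lateral voxel size
fov = field_of_view(1.0, 3.0)               # ~18.9 degrees
```

The voxel size growing linearly with depth is exactly why image depth and voxel size trade off against each other as evaluation indicators.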


Nanomaterials ◽  
2021 ◽  
Vol 11 (8) ◽  
pp. 1920
Author(s):  
Chang Wang ◽  
Zeqing Yu ◽  
Qiangbo Zhang ◽  
Yan Sun ◽  
Chenning Tao ◽  
...  

Near-eye display (NED) systems for virtual reality (VR) and augmented reality (AR) have been developing rapidly; however, the widespread use of VR/AR devices is hindered by the bulky refractive and diffractive elements in the complicated optical system, as well as by the visual discomfort caused by excessive binocular parallax and the accommodation-convergence conflict. To address these problems, an NED system combining a 5 mm diameter metalens eyepiece with three-dimensional (3D) computer-generated holography (CGH) based on Fresnel diffraction is proposed in this paper. Metalenses have been extensively studied for their extraordinary wavefront-shaping capabilities at a subwavelength scale, their ultrathin compactness, and their significant advantages over conventional lenses; the introduction of a metalens eyepiece is therefore likely to reduce the bulkiness of NED systems. Furthermore, CGH has typically been regarded as the optimum solution for 3D displays because it can restore the whole light field of the target 3D scene, overcoming the limitations of binocular systems. Experiments are carried out for this design: a 5 mm diameter metalens eyepiece composed of silicon nitride anisotropic nanofins is fabricated, with a diffraction efficiency of 15.7% and a field of view of 31° at 532 nm incidence. Furthermore, a novel partitioned Fresnel diffraction and resampling method is applied to simulate the wave propagation needed to produce the hologram, with the metalens transforming the reconstructed 3D image into a virtual image for the NED. Our work combining a metalens and CGH may pave the way for portable optical display devices in the future.
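The Fresnel-diffraction propagation underlying such hologram computation can be sketched with the standard single-step transfer-function method. This is a generic illustration, not the paper's partitioned-and-resampled algorithm, and the wavelength, distance, and sampling pitch below are assumptions:

```python
import numpy as np

def fresnel_propagate(field, wavelength, z, dx):
    """Propagate a square complex field over distance z (Fresnel regime)
    using the transfer-function (convolution) form: multiply the angular
    spectrum by H(fx, fy) = exp(ikz) * exp(-i*pi*lambda*z*(fx^2 + fy^2)).
    field: n x n complex array sampled at pitch dx (meters)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)          # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    k = 2 * np.pi / wavelength
    H = np.exp(1j * k * z) * np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Hypothetical settings: 532 nm light, 1 cm propagation, 8 um sampling pitch.
aperture = np.zeros((256, 256), dtype=complex)
aperture[96:160, 96:160] = 1.0            # square aperture
diffracted = fresnel_propagate(aperture, 532e-9, 0.01, 8e-6)
```

Because |H| = 1 everywhere, the propagation is unitary and conserves the total field energy, which is a quick sanity check on any implementation.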


Agronomy ◽  
2019 ◽  
Vol 9 (11) ◽  
pp. 741 ◽  
Author(s):  
Haihui Yang ◽  
Xiaochan Wang ◽  
Guoxiang Sun

Perception of the fruit tree canopy is a vital technology for the intelligent control of a modern standardized orchard. Due to the complex three-dimensional (3D) structure of the fruit tree canopy, morphological parameters extracted from two-dimensional (2D) or single-perspective 3D images are not comprehensive enough. Three-dimensional information from different perspectives must be combined in order to perceive the canopy information efficiently and accurately in a complex orchard field environment. The algorithms used for the registration and fusion of data from different perspectives, and for the subsequent extraction of canopy-related parameters, are the keys to the problem. This study proposed a 3D morphological measurement method for a fruit tree canopy based on Kinect sensor self-calibration, comprising 3D point cloud generation, point cloud registration, and canopy information extraction for an apple tree canopy. Using 32 apple trees (Yanfu 3 variety), the morphological parameters of tree height (H), maximum canopy width (W), and canopy thickness (D) were calculated. The accuracy and applicability of this method for the extraction of morphological parameters were statistically analyzed. The results showed that, on both sides of the fruit trees, the average relative error (ARE) values of tree height (H), maximum tree width (W), and canopy thickness (D) between the calculated and measured values were 3.8%, 12.7%, and 5.0%, respectively, under the V1 mode; 3.3%, 9.5%, and 4.9% under the V2 mode; and 2.5%, 3.6%, and 3.2% under the merged V1 and V2 mode. The measurement accuracy of the tree width (W) under the double visual angle mode had a significant advantage over that under the single visual angle mode.
The 3D point cloud reconstruction method based on Kinect self-calibration proposed in this study has high precision and stable performance, and the auxiliary calibration objects are readily portable and easy to install. It can be applied to different experimental scenes to extract the 3D information of fruit tree canopies and has important implications for achieving the intelligent control of standardized orchards.
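The registration step at the heart of such a pipeline, finding the rigid transform that merges the point clouds from the two viewpoints, can be sketched with the classic Kabsch/SVD least-squares solution, followed by a bounding-box extraction of H, W, and D. This is a generic illustration under the assumption of known point correspondences, not the study's Kinect self-calibration procedure:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (Kabsch/SVD) with dst ~ src @ R.T + t.
    src, dst: (N, 3) arrays of corresponding points, N >= 3, non-collinear."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - mu_s).T @ (dst - mu_d))  # cross-covariance
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

def canopy_dims(points):
    """Width (W), thickness (D), height (H) from the axis-aligned extent,
    assuming x = row direction, y = inter-row direction, z = vertical."""
    span = points.max(axis=0) - points.min(axis=0)
    return {"W": span[0], "D": span[1], "H": span[2]}
```

After alignment, the two single-view clouds are concatenated and `canopy_dims` is applied to the merged cloud, which is why the merged mode can only tighten, never worsen, the extent estimates.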


i-Perception ◽  
2017 ◽  
Vol 8 (1) ◽  
pp. 204166951668608 ◽  
Author(s):  
Ling Xia ◽  
Sylvia C. Pont ◽  
Ingrid Heynderickx

Humans are able to estimate light field properties in a scene, in that they have expectations of how objects should appear inside it. Previously, we probed such expectations in a real scene by asking whether a “probe object” fitted the scene with regard to its lighting. But how well can observers interactively adjust the light properties on a “probe object” to match its surrounding real scene? Image ambiguities can result in perceptual interactions between light properties. Such interactions formed a major problem for the “readability” of the illumination direction and diffuseness on a matte, smooth, spherical probe. We found that light direction and diffuseness judgments using a rough sphere as the probe were slightly more accurate than those using a smooth sphere, owing to its three-dimensional (3D) texture. Here we extended the previous work by testing independent and simultaneous adjustments (i.e., the light field properties separated one by one or blended together) of light intensity, direction, and diffuseness using a rough probe. Independently inferred light intensities were close to the veridical values, while simultaneously inferred light intensities interacted somewhat with the light direction and diffuseness. The independently inferred light directions showed no statistical difference from the simultaneously inferred directions. The light diffuseness inferences correlated with the veridical values but contracted around medium values. In summary, observers were able to adjust the basic light properties through both independent and simultaneous adjustments. Light intensity, direction, and diffuseness are well “readable” from our rough probe. Our method allows “tuning the light” (adjusting its spatial distribution) in interfaces for lighting design or perception research.
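The three adjustable properties map naturally onto a simple matte-shading model. The following toy formula is an assumption for illustration, not the rendering model used in the study: it blends a directional Lambertian term with an ambient term, with diffuseness acting as the blend weight:

```python
import numpy as np

def probe_shading(normal, light_dir, intensity, diffuseness):
    """Brightness of a matte probe point under a mixed light field.
    diffuseness = 0 -> purely directional light; 1 -> fully ambient.
    This is a toy illustration of how the three adjusted properties
    (intensity, direction, diffuseness) jointly determine appearance."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    directional = max(float(n @ l), 0.0)          # Lambert's cosine law
    return intensity * ((1 - diffuseness) * directional + diffuseness)
```

In this model a fully diffuse light shades every surface point identically, which illustrates why diffuseness and intensity can perceptually interact: raising either one brightens the shadowed side of the probe.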


Author(s):  
Wei Gao ◽  
Linjie Zhou ◽  
Lvfang Tao

View synthesis (VS) for light field images is a very time-consuming task due to the large number of pixels involved and the intensive computations required, which may prevent its use in practical real-time three-dimensional systems. In this article, we propose an acceleration approach for deep learning-based light field view synthesis, which significantly reduces computation by using a compact-resolution (CR) representation and super-resolution (SR) techniques, as well as light-weight neural networks. The proposed architecture has three cascaded neural networks: a CR network to generate the compact representation of the original input views, a VS network to synthesize new views from the down-scaled compact views, and an SR network to reconstruct high-quality views at full resolution. All three networks are jointly trained with the integrated losses of the CR, VS, and SR networks. Moreover, exploiting the redundancy of deep neural networks, we use an efficient light-weight strategy to prune filters for simplification and inference acceleration. Experimental results demonstrate that the proposed method greatly reduces processing time and is much more computationally efficient while maintaining competitive image quality.
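The three-stage data flow (CR → VS → SR) can be sketched with fixed stand-in operators in place of the trained networks; the names and operators below are illustrative assumptions. The point of the sketch is only that the synthesis stage runs on a down-scaled grid, which is where the computational saving comes from:

```python
import numpy as np

def downscale(view, s=2):
    """Stand-in for the CR network: s x s average pooling."""
    h, w = view.shape
    return view[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def synthesize(left, right, alpha=0.5):
    """Stand-in for the VS network: blend two views toward a new viewpoint.
    Runs on the compact grid, so it touches s*s-fold fewer pixels."""
    return (1 - alpha) * left + alpha * right

def upscale(view, s=2):
    """Stand-in for the SR network: nearest-neighbor upsampling."""
    return np.kron(view, np.ones((s, s)))

# End-to-end pipeline on two hypothetical 64 x 64 input views.
left = np.random.default_rng(1).random((64, 64))
right = np.random.default_rng(2).random((64, 64))
novel = upscale(synthesize(downscale(left), downscale(right)))
print(novel.shape)  # prints (64, 64)
```

In the actual method all three stages are learned and trained jointly, so the CR network can keep exactly the information the VS and SR stages need, rather than pooling blindly as this sketch does.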


2016 ◽  
Vol 371 ◽  
pp. 166-172 ◽  
Author(s):  
Songlin Xie ◽  
Peng Wang ◽  
Xinzhu Sang ◽  
Chenyu Li ◽  
Wenhua Dou ◽  
...  

2019 ◽  
Vol 27 (17) ◽  
pp. 24624
Author(s):  
Duo Chen ◽  
Xinzhu Sang ◽  
Peng Wang ◽  
Xunbo Yu ◽  
Binbin Yan ◽  
...  
