Microlens array based three-dimensional light field projection and possible applications in photolithography

Optifab 2019 ◽  
2019 ◽  
Author(s):  
Hongjie Zhang ◽  
Sy-Bor Wen

2021 ◽  
Author(s):  
Luca Palmieri

Microlens-array based plenoptic cameras capture the light field in a single shot, enabling new applications but also introducing additional challenges. A plenoptic image consists of thousands of microlens images. Estimating the disparity for each microlens makes it possible to render conventional images, to change the perspective and focal settings, and to reconstruct the three-dimensional geometry of the scene. The work includes a blur-aware calibration method for modeling plenoptic cameras, an optimization method to accurately select the best combination of microlenses for disparity estimation, an overview of the different types of plenoptic cameras, an analysis of disparity estimation algorithms, and a robust depth estimation approach for light field microscopy. The research led to a complete framework for plenoptic cameras, which contains implementations of the algorithms discussed in the work, along with datasets of both real and synthetic images for comparison, benchmarking, and future research.
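Per-microlens disparity estimation of the kind described above is, at its core, a matching problem between neighbouring microlens images. A minimal sketch, using simple sum-of-absolute-differences block matching on synthetic data (the function name, shift range, and toy images are illustrative, not the dissertation's method):

```python
import numpy as np

def microlens_disparity(left, right, max_shift=5):
    """Estimate the integer horizontal disparity between two neighbouring
    microlens images by minimising the sum of absolute differences (SAD)."""
    h, w = left.shape
    best_shift, best_cost = 0, np.inf
    for d in range(max_shift + 1):
        # Compare the overlapping region after shifting by d pixels.
        cost = np.abs(left[:, d:] - right[:, :w - d]).mean()
        if cost < best_cost:
            best_cost, best_shift = cost, d
    return best_shift

# Synthetic pair: the right microlens image is the left one shifted by 2 px.
rng = np.random.default_rng(0)
left = rng.random((16, 16))
right = np.roll(left, -2, axis=1)
print(microlens_disparity(left, right))  # → 2
```

Real pipelines refine this with sub-pixel interpolation and regularisation across microlenses, but the cost-minimisation structure is the same.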


2016 ◽  
Vol 10 (2) ◽  
pp. 172-178 ◽  
Author(s):  
Shin Usuki ◽  
Masaru Uno ◽  
Kenjiro T. Miura ◽  
...  

In this paper, we propose a digital shape reconstruction method for micro-sized 3D (three-dimensional) objects based on the shape from silhouette (SFS) method, which reconstructs a 3D model from silhouette images taken from multiple viewpoints. In the proposed method, the images used in the SFS method are depth images acquired with a light-field microscope by digital refocusing (DR) of a stacked image along the axial direction. DR generates refocused images from a single acquired image by an inverse ray-tracing technique using the microlens array, and therefore provides fast image stacking across different focal planes. Our proposed method can reconstruct micro-sized object models including edges, convex shapes, and concave shapes on the surface of an object, such as micro-sized defects, so that damaged structures in the objects can be visualized. Firstly, we introduce the SFS method and the light-field microscope for 3D shape reconstruction, which are required in the field of micro-sized manufacturing. Secondly, we show the developed experimental equipment for microscopic image acquisition. Depth calibration using a USAF 1951 test target is carried out to convert relative values into actual lengths. Then 3D modeling techniques, including image processing, are implemented for digital shape reconstruction. Finally, 3D shape reconstruction results for micro-sized machining tools are shown and discussed.
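The core SFS idea is voxel carving: a voxel survives only if it projects inside every silhouette, yielding the visual hull. A toy sketch with three orthographic views along the coordinate axes (the real method uses calibrated microscope viewpoints and DR-derived depth images; grid size and masks here are illustrative):

```python
import numpy as np

def carve(silhouettes):
    """Toy shape-from-silhouette: carve a cubic voxel grid using
    orthographic silhouettes taken along the three coordinate axes."""
    sx, sy, sz = silhouettes          # 2D boolean masks
    n = sx.shape[0]
    vol = np.ones((n, n, n), dtype=bool)
    vol &= sx[np.newaxis, :, :]       # view along x: mask indexed by (y, z)
    vol &= sy[:, np.newaxis, :]       # view along y: mask indexed by (x, z)
    vol &= sz[:, :, np.newaxis]       # view along z: mask indexed by (x, y)
    return vol

# A sphere's silhouette from any axis is a disc; carving three discs
# yields the visual hull, a superset of the sphere.
n = 32
yy, zz = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
disc = (yy - n / 2) ** 2 + (zz - n / 2) ** 2 <= (n / 3) ** 2
hull = carve((disc, disc, disc))
print(hull.shape, int(hull.sum()))
```

More viewpoints tighten the hull around the true surface, which is why concave defects require the additional depth information that DR provides.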


2018 ◽  
Vol 26 (4) ◽  
pp. 4035 ◽  
Author(s):  
Zhaowei Xin ◽  
Dong Wei ◽  
Xingwang Xie ◽  
Mingce Chen ◽  
Xinyu Zhang ◽  
...  

Author(s):  
Ying Yuan ◽  
Xiaorui Wang ◽  
Yang Yang ◽  
Hang Yuan ◽  
Chao Zhang ◽  
...  

Abstract The full-chain characterization of system performance is very important for the optimal design of an integral imaging three-dimensional (3D) display system. In this paper, the acquisition and display processes of a 3D scene are treated as a complete light field information transmission process. A full-chain performance characterization model of an integral imaging 3D display system is established, which uses the 3D voxel, the image depth, and the field of view of the reconstructed images as 3D display quality evaluation indicators. Unlike most previous studies, which use the ideal integral imaging model, the proposed full-chain model accounts for the diffraction effects and optical aberrations of the microlens array, the sampling effect of the detector, 3D image data scaling, and the human visual system, and can therefore accurately describe the actual 3D light field transmission and convergence characteristics. The relationships between the key parameters of an integral imaging 3D display system and the 3D display quality evaluation indicators are analyzed and discussed through simulation experiments. The results will be helpful for the design of high-quality integral imaging 3D display systems.
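Two of the non-ideal effects the model includes, microlens diffraction and detector sampling, can be compared with a back-of-envelope calculation. All numbers below are illustrative assumptions, not parameters from the paper; the Airy-disc formula is standard diffraction theory:

```python
# Compare the diffraction-limited spot of one microlens with the
# detector pixel pitch to see which effect limits elemental-image detail.
wavelength = 532e-9      # m, green light (assumed)
focal_len = 2.4e-3       # m, microlens focal length (assumed)
aperture = 1.0e-3        # m, microlens aperture / pitch (assumed)
pixel_pitch = 7.4e-6     # m, detector pixel pitch (assumed)

# Diameter of the Airy disc: 2.44 * lambda * f / D.
airy = 2.44 * wavelength * focal_len / aperture
print(f"Airy disc {airy * 1e6:.2f} um vs pixel {pixel_pitch * 1e6:.2f} um")
# Here the Airy disc (~3.1 um) is smaller than one pixel, so detector
# sampling, not diffraction, would dominate under these assumptions.
```

This is exactly the kind of trade-off an ideal integral imaging model hides and the full-chain model makes explicit.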


Nanomaterials ◽  
2021 ◽  
Vol 11 (8) ◽  
pp. 1920
Author(s):  
Chang Wang ◽  
Zeqing Yu ◽  
Qiangbo Zhang ◽  
Yan Sun ◽  
Chenning Tao ◽  
...  

Near-eye display (NED) systems for virtual reality (VR) and augmented reality (AR) have been developing rapidly; however, the widespread use of VR/AR devices is hindered by the bulky refractive and diffractive elements of their complicated optical systems, as well as by the visual discomfort caused by excessive binocular parallax and the accommodation-convergence conflict. To address these problems, this paper proposes an NED system combining a 5 mm diameter metalens eyepiece with three-dimensional (3D) computer-generated holography (CGH) based on Fresnel diffraction. Metalenses have been extensively studied for their extraordinary wavefront-shaping capabilities at a subwavelength scale, their ultrathin compactness, and their significant advantages over conventional lenses; introducing a metalens eyepiece is therefore likely to reduce the bulkiness of NED systems. Furthermore, CGH is typically regarded as the optimum solution for 3D displays, since it restores the whole light field of the target 3D scene and thereby overcomes the limitations of binocular systems. Experiments are carried out on this design: a 5 mm diameter metalens eyepiece composed of silicon nitride anisotropic nanofins is fabricated, achieving a diffraction efficiency of 15.7% and a field of view of 31° at 532 nm incidence. Furthermore, a novel partitioned Fresnel diffraction and resampling method is applied to simulate the wave propagation needed to produce the hologram, with the metalens transforming the reconstructed 3D image into a virtual image for the NED. Our work combining metalenses and CGH may pave the way for future portable optical display devices.
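Fresnel-diffraction hologram computation rests on numerically propagating a sampled complex field between planes. A minimal sketch using the plain transfer-function form of the Fresnel approximation (not the paper's partitioned-and-resampled variant; aperture size and sampling pitch are illustrative):

```python
import numpy as np

def fresnel_propagate(field, wavelength, pitch, z):
    """Propagate a sampled complex field a distance z using the
    Fresnel transfer function H(fx, fy) = exp(-i*pi*lambda*z*(fx^2+fy^2))."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    fy = fx[:, np.newaxis]
    # Unit-modulus transfer function: propagation conserves energy.
    h = np.exp(-1j * np.pi * wavelength * z * (fx**2 + fy**2))
    return np.fft.ifft2(np.fft.fft2(field) * h)

# Propagate a small square aperture 10 mm at 532 nm.
n, pitch = 256, 8e-6
field = np.zeros((n, n), dtype=complex)
field[n//2 - 16:n//2 + 16, n//2 - 16:n//2 + 16] = 1.0
out = fresnel_propagate(field, 532e-9, pitch, 10e-3)
print(out.shape)
```

Partitioning and resampling, as in the paper, extend this idea to fields whose extent or sampling would otherwise violate the single-FFT sampling constraints.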


i-Perception ◽  
2017 ◽  
Vol 8 (1) ◽  
pp. 204166951668608 ◽  
Author(s):  
Ling Xia ◽  
Sylvia C. Pont ◽  
Ingrid Heynderickx

Humans are able to estimate light field properties in a scene, in the sense that they have expectations of how objects inside it should appear. Previously, we probed such expectations in a real scene by asking whether a "probe object" fitted the scene with regard to its lighting. But how well can observers interactively adjust the light properties on a "probe object" to match its surrounding real scene? Image ambiguities can result in perceptual interactions between light properties. Such interactions formed a major problem for the "readability" of the illumination direction and diffuseness on a matte, smooth, spherical probe. We found that light direction and diffuseness judgments using a rough sphere as the probe were slightly more accurate than those using a smooth sphere, owing to its three-dimensional (3D) texture. Here we extended the previous work by testing independent and simultaneous (i.e., the light field properties separated one by one or blended together) adjustments of light intensity, direction, and diffuseness using a rough probe. Independently inferred light intensities were close to the veridical values, while simultaneously inferred intensities interacted somewhat with light direction and diffuseness. The independently inferred light directions showed no statistical difference from the simultaneously inferred ones. The light diffuseness inferences correlated with the veridical values but contracted around medium values. In summary, observers were able to adjust the basic light properties through both independent and simultaneous adjustments. Light intensity, direction, and diffuseness are all well "readable" from our rough probe. Our method allows "tuning the light" (adjusting its spatial distribution) in interfaces for lighting design or perception research.


Author(s):  
Wei Gao ◽  
Linjie Zhou ◽  
Lvfang Tao

View synthesis (VS) for light field images is a very time-consuming task due to the large number of pixels involved and the intensive computations required, which may keep it out of practical real-time three-dimensional systems. In this article, we propose an acceleration approach for deep learning-based light field view synthesis that significantly reduces computation by using a compact-resolution (CR) representation and super-resolution (SR) techniques, together with light-weight neural networks. The proposed architecture has three cascaded neural networks: a CR network that generates the compact representation of the original input views, a VS network that synthesizes new views from the down-scaled compact views, and an SR network that reconstructs high-quality, full-resolution views. All three networks are trained jointly with the integrated losses of the CR, VS, and SR networks. Moreover, exploiting the redundancy of deep neural networks, we use an efficient light-weight strategy to prune filters for simplification and inference acceleration. Experimental results demonstrate that the proposed method greatly reduces processing time and is much more computationally efficient while maintaining competitive image quality.
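The resolution flow through the three-stage cascade can be sketched with simple numerical stand-ins for the learned networks: pooling for CR, blending for VS, and upsampling for SR. These stand-ins are purely illustrative of the data flow, not the paper's architecture; the point is that the expensive middle stage runs on a fraction of the pixels:

```python
import numpy as np

def downscale(view, factor=2):
    """CR stage stand-in: average pooling to a compact resolution."""
    h, w = view.shape
    return view.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def synthesize(view_a, view_b):
    """VS stage stand-in: blend two compact views into a middle view.
    (The paper uses a neural network; averaging just shows the flow.)"""
    return 0.5 * (view_a + view_b)

def upscale(view, factor=2):
    """SR stage stand-in: nearest-neighbour upsampling to full resolution."""
    return view.repeat(factor, axis=0).repeat(factor, axis=1)

# With factor 2, the VS step processes only 1/4 of the full-resolution pixels.
a = np.random.default_rng(1).random((64, 64))
b = np.random.default_rng(2).random((64, 64))
mid = upscale(synthesize(downscale(a), downscale(b)))
print(mid.shape)  # → (64, 64)
```

Joint training with combined CR, VS, and SR losses lets the compact representation be learned for the downstream synthesis task rather than fixed, as plain pooling is here.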

