3D Modeling of Wide Area Outdoor Environments by Integrating Omnidirectional Range and Color Images

Author(s):  
T. Asai ◽  
M. Kanbara ◽  
N. Yokoya


Sensors ◽
2020 ◽  
Vol 20 (23) ◽  
pp. 6726
Author(s):  
Hang Luo ◽  
Christian Pape ◽  
Eduard Reithmeier

This paper presents an active wide-baseline triple-camera measurement system designed for 3D modeling in general outdoor environments, together with a novel parallel surface refinement algorithm within the multi-view stereo (MVS) framework. First, a pre-processing module converts the synchronized raw triple images from a single-shot acquisition of our setup into aligned RGB-Depth frames, which are then used for camera pose estimation via iterative closest point (ICP) and RANSAC perspective-n-point (PnP) approaches. An efficient dense reconstruction method, implemented mostly on the GPU in a grid manner, then takes the raw depth data as input and optimizes the per-pixel depth values based on multi-view photometric evidence, surface curvature, and depth priors. Through a basic fusion scheme, an accurate and complete 3D model is obtained from these enhanced depth maps. For a comprehensive test, the proposed MVS implementation is evaluated on benchmark and synthetic datasets, and a real-world reconstruction experiment is also conducted with our measurement system in an outdoor scenario. The results demonstrate that (1) given an input coarse geometry, our MVS method achieves very competitive performance in terms of modeling accuracy, surface completeness, and noise reduction; and (2) despite some limitations, our triple-camera setup, in combination with the proposed reconstruction routine, can be applied to practical 3D modeling tasks in outdoor environments where conventional stereo or depth sensors would normally suffer.
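The final fusion step — merging the enhanced per-view depth maps into one model — can be sketched per pixel. The function below is a minimal illustration only, not the paper's actual scheme: the confidence-weighted averaging, the zero-valued invalid-depth sentinel, and the function name are all assumptions.

```python
import numpy as np

def fuse_depth_maps(depth_maps, weights=None, invalid=0.0):
    """Fuse aligned per-view depth maps into one map by a confidence-weighted
    per-pixel average, ignoring invalid (sentinel-valued) pixels.
    A simplified stand-in for a basic depth-map fusion scheme."""
    stack = np.stack(depth_maps, axis=0).astype(float)   # shape (views, H, W)
    if weights is None:
        w = np.ones_like(stack)                          # equal confidence
    else:
        w = np.stack(weights, axis=0).astype(float)
    w = w * (stack != invalid)                           # zero-weight invalid pixels
    denom = w.sum(axis=0)
    num = (w * stack).sum(axis=0)
    # Pixels with no valid observation in any view stay at the invalid value.
    return np.where(denom > 0, num / np.maximum(denom, 1e-12), invalid)
```

In practice a fusion stage would also check cross-view consistency before averaging; this sketch only shows the masking-and-weighting core.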


Author(s):  
D. E. Becker

An efficient, robust, and widely applicable technique is presented for computational synthesis of high-resolution, wide-area images of a specimen from a series of overlapping partial views. This technique can also be used to combine the results of various forms of image analysis, such as segmentation, automated cell counting, deblurring, and neuron tracing, to generate representations that are equivalent to processing the large wide-area image rather than the individual partial views. This can be a first step towards quantitation of the higher-level tissue architecture. The computational approach overcomes mechanical limitations of microscope stages, such as hysteresis and backlash, and automates a procedure that is currently done manually. One application is the high-resolution visualization and/or quantitation of large batches of specimens that are much wider than the field of view of the microscope. The automated montage synthesis begins by computing a concise set of landmark points for each partial view. The type of landmarks used can vary greatly depending on the images of interest. In many cases, image analysis performed on each data set can provide useful landmarks. Even when no such “natural” landmarks are available, image processing can often provide useful landmarks.
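As a toy illustration of the landmark-matching idea, two overlapping partial views can be registered from matched landmark coordinates. The translation-only model, the least-squares (mean-displacement) estimator, and the function name below are assumptions for illustration; the paper's landmark types and transform model may differ, and a real montage would also reject outlier matches and chain offsets across all tiles.

```python
def estimate_tile_offset(landmarks_a, landmarks_b):
    """Least-squares translation (dx, dy) that maps landmarks in view B onto
    their matched landmarks in view A. Translation-only toy model: the
    estimate is simply the mean displacement over all matched pairs."""
    if len(landmarks_a) != len(landmarks_b) or not landmarks_a:
        raise ValueError("need equal-length, non-empty lists of matched landmarks")
    n = len(landmarks_a)
    dx = sum(ax - bx for (ax, _), (bx, _) in zip(landmarks_a, landmarks_b)) / n
    dy = sum(ay - by for (_, ay), (_, by) in zip(landmarks_a, landmarks_b)) / n
    return dx, dy
```

Applying the returned offset to every pixel of view B places it in view A's coordinate frame, which is the core step of stitching the montage.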


Author(s):  
K. K. Christenson ◽  
J. A. Eades

One of the strengths of the Philips EM-400 series of TEMs is their ability to operate under two distinct optical configurations: “microprobe”, the normal TEM operating condition, which allows wide-area illumination, and “nanoprobe”, which gives very small probes with high angular convergence for STEM imaging and microchemical and microstructural analyses. This change is accomplished by effectively turning off the twin lens located in the upper pole piece, which changes the illumination from a telefocus system to a condenser-objective system. The deflection and tilt controls and alignments are designed for microprobe use and do not function properly in nanoprobe. For instance, in nanoprobe the deflection control gives a mix of deflection and tilt, as does the tilt control.
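In the small-signal limit, the cross-coupling of the two knobs described above can be modeled as a 2×2 linear mixing, and a pure deflection (or pure tilt) recovered by inverting that matrix. The sketch below is purely illustrative: the coupling values, function name, and linearity assumption are all hypothetical, not taken from the paper.

```python
def knob_settings(target_deflection, target_tilt, coupling):
    """Solve coupling @ (k1, k2) = (deflection, tilt) for knob settings.
    `coupling` is a hypothetical small-signal 2x2 mixing matrix
    ((d1, d2), (t1, t2)): row 1 maps knobs to deflection, row 2 to tilt."""
    (a, b), (c, d) = coupling
    det = a * d - b * c
    if abs(det) < 1e-12:
        raise ValueError("coupling matrix is singular; controls are degenerate")
    k1 = (d * target_deflection - b * target_tilt) / det
    k2 = (-c * target_deflection + a * target_tilt) / det
    return k1, k2

# Example (made-up numbers): 50% tilt leaks into the deflection knob and
# 20% deflection into the tilt knob; ask for a unit deflection with no tilt.
C = ((1.0, 0.5), (0.2, 1.0))
k1, k2 = knob_settings(1.0, 0.0, C)
```

Feeding the returned knob values back through the coupling matrix reproduces the requested pure deflection, which is the decoupling a microprobe-style alignment would normally provide.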


2020 ◽  
Author(s):  
Christopher S. Graffeo ◽  
Avital Perry ◽  
Lucas P. Carlstrom ◽  
Michael J. Link ◽  
Jonathan Morris
