Robust scene reconstruction from an omnidirectional vision system

2003 ◽  
Vol 19 (2) ◽  
pp. 351-357 ◽  
Author(s):  
R. Bunschoten ◽  
Ben Kröse

Author(s):  
Dimitrios Chrysostomou ◽  
Antonios Gasteratos

The production of 3D models has been a popular research topic for a long time, and important progress has been made since the early days. During the last decades, vision systems have become established as the standard and one of the most efficient sensing assets in industrial and everyday applications. Because vision provides several vital attributes, many applications integrate novel vision systems into domestic, working, industrial, and other environments. To achieve such goals, a vision system should robustly and effectively reconstruct the 3D surface of the working space. This chapter discusses different methods for capturing the three-dimensional surface of a scene. Geometric approaches to three-dimensional scene reconstruction generally infer the scene structure from the camera’s internal and external parameters. Another class of methods encompasses the photometric approaches, which evaluate the pixels’ intensity to infer the three-dimensional scene structure. The third and final category, the so-called real-aperture approaches, includes methods that exploit the physical properties of the visual sensors used for image acquisition in order to recover the depth information of a scene.
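As a minimal illustration of the geometric class of approaches described above, the sketch below recovers depth from a calibrated stereo pair via the standard triangulation relation Z = f·B/d. This is a generic textbook example, not the chapter's specific method; the focal length, baseline, and disparity values are hypothetical.

```python
import numpy as np

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Geometric depth recovery for a calibrated stereo pair:
    Z = f * B / d, with f the focal length in pixels, B the baseline
    between the two camera centres in metres, and d the disparity in
    pixels. Points with zero disparity are at infinity."""
    disparity_px = np.asarray(disparity_px, dtype=float)
    depth = np.full_like(disparity_px, np.inf)
    valid = disparity_px > 0
    depth[valid] = focal_px * baseline_m / disparity_px[valid]
    return depth

# A point seen with 20 px disparity by a rig with f = 500 px and
# B = 0.1 m lies at 500 * 0.1 / 20 = 2.5 m from the cameras.
print(depth_from_disparity(500.0, 0.1, [20.0, 10.0]))  # [2.5 5. ]
```

The relation shows why geometric methods depend on knowing the camera's internal (f) and external (B) parameters, as the abstract notes.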


2020 ◽  
Vol 10 (18) ◽  
pp. 6480
Author(s):  
Vicente Román ◽  
Luis Payá ◽  
Sergio Cebollada ◽  
Óscar Reinoso

In this work, an incremental clustering approach to obtain compact hierarchical models of an environment is developed and evaluated. This process is performed using an omnidirectional vision sensor as the only source of information. The method is structured in two loop-closure levels. First, the Node Level Loop Closure process selects the candidate nodes with which the new image can close the loop. Second, the Image Level Loop Closure process detects the most similar image and the node with which the current image closes the loop. The algorithm is based on an incremental clustering framework and leads to a topological model in which the images of each zone tend to be clustered in different nodes. In addition, the method evaluates when two nodes are similar enough to be merged into a single node, and when a group of connected images is sufficiently different from the others to constitute a new node. To perform this process, omnidirectional images are described with global-appearance techniques in order to obtain robust descriptors. The use of such techniques in mapping and localization algorithms is less widespread than local feature description, so this work also evaluates their efficiency in clustering and mapping tasks. The proposed framework is tested with three public datasets, captured by an omnidirectional vision system mounted on a robot while it traversed three different buildings. The framework builds the model incrementally while the robot explores an unknown environment. Some relevant parameters of the algorithm adapt their values as the robot captures new visual information, in order to fully exploit the feature space, and the model is updated and/or modified as a consequence. The experimental section shows the robustness and efficiency of the method, comparing it with a batch spectral clustering algorithm.
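The two loop-closure levels described in the abstract can be sketched as follows. This is an assumed simplification, not the authors' implementation: the node representative (mean descriptor), the cosine similarity measure, and the fixed `node_thresh` are all placeholders for the paper's global-appearance descriptors and adaptive thresholds.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two descriptor vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def loop_closure(new_desc, nodes, node_thresh=0.8):
    """Two-level loop closure over a topological map.

    `nodes` maps a node id to the list of global-appearance descriptors
    of the images clustered in that node.
    Level 1 (node level): compare the new image's descriptor with each
    node's representative (here, the mean descriptor) to select
    candidate nodes that may close the loop.
    Level 2 (image level): among the candidates, find the single most
    similar stored image; its node is the one the new image closes the
    loop with. Returns (node_id, similarity), or (None, -1.0) if no
    candidate node qualifies.
    """
    # Node level: keep nodes whose representative is similar enough.
    candidates = [
        nid for nid, descs in nodes.items()
        if cosine_sim(new_desc, np.mean(descs, axis=0)) >= node_thresh
    ]
    # Image level: most similar individual image among the candidates.
    best = (None, -1.0)
    for nid in candidates:
        for desc in nodes[nid]:
            s = cosine_sim(new_desc, desc)
            if s > best[1]:
                best = (nid, s)
    return best

# Hypothetical two-node map with 2-D descriptors for illustration.
nodes = {0: [np.array([1.0, 0.0])], 1: [np.array([0.0, 1.0])]}
print(loop_closure(np.array([0.9, 0.1]), nodes))  # closes with node 0
```

Filtering at the node level first keeps the per-image comparison cheap, which is what makes the hierarchical model compact to query as the map grows.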


Mechatronics ◽  
2011 ◽  
Vol 21 (2) ◽  
pp. 399-410 ◽  
Author(s):  
António J.R. Neves ◽  
Armando J. Pinho ◽  
Daniel A. Martins ◽  
Bernardo Cunha
