catadioptric systems
Recently Published Documents

TOTAL DOCUMENTS: 42 (five years: 0)
H-INDEX: 6 (five years: 0)

Sensors, 2020, Vol. 20 (7), pp. 2066
Author(s): Bruno Berenguel-Baeta, Jesus Bermudez-Cameo, Jose J. Guerrero

Omnidirectional and 360° images are becoming widespread in industry and in consumer devices, bringing omnidirectional computer vision increasing attention. Their wide field of view allows a great amount of information about the environment to be gathered from a single image. However, the distortion of these images requires specific algorithms for their processing and interpretation. Moreover, a large number of images is essential for correctly training learning-based computer vision algorithms. In this paper, we present a tool for generating datasets of omnidirectional images with semantic and depth information. These images are synthesized from a set of captures acquired in a realistic virtual environment for Unreal Engine 4 through an interface plugin. We cover a variety of well-known projection models, such as equirectangular and cylindrical panoramas, different fish-eye lenses, catadioptric systems, and empiric models. Furthermore, our tool includes photorealistic non-central-projection systems, such as non-central panoramas and non-central catadioptric systems. To the best of our knowledge, this is the first reported tool for generating photorealistic non-central images. Moreover, since the omnidirectional images are rendered virtually, we provide pixel-wise semantic and depth information as well as exact knowledge of the calibration parameters of the cameras. This allows the creation of pixel-accurate ground truth for training learning algorithms and testing 3D vision approaches. To validate the proposed tool, several computer vision algorithms are tested: line extraction from dioptric and catadioptric central images, 3D layout recovery and SLAM using equirectangular panoramas, and 3D reconstruction from non-central panoramas.
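As a rough illustration of the central projection models the tool gathers, an equirectangular panorama maps a viewing direction's azimuth and elevation linearly onto pixel coordinates. The sketch below shows this mapping under assumed conventions (azimuth measured from the +z axis, elevation from the horizontal plane); the function name and axis orientation are illustrative assumptions, not the tool's actual API.

```python
import numpy as np

def equirectangular_project(points, width, height):
    """Project 3D points (N, 3) in camera coordinates onto an
    equirectangular panorama of size (height, width).

    Azimuth maps linearly to the horizontal axis, elevation
    to the vertical axis (conventions assumed for illustration).
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    radius = np.linalg.norm(points, axis=1)
    lon = np.arctan2(x, z)           # azimuth in [-pi, pi]
    lat = np.arcsin(y / radius)      # elevation in [-pi/2, pi/2]
    u = (lon / (2 * np.pi) + 0.5) * width
    v = (lat / np.pi + 0.5) * height
    return np.stack([u, v], axis=1)
```

With this convention, a point straight ahead of the camera at (0, 0, 1) lands at the panorama's center, and the full 360° x 180° field of view fills the image, which is what makes a single equirectangular frame so information-dense for training.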


Sensors, 2019, Vol. 19 (20), pp. 4494
Author(s): Liu, Guo, Feng, Yang

Simultaneous localization and mapping (SLAM) is a fundamental element of many emerging technologies, such as autonomous driving and augmented reality. In this paper, to exploit more of the image information, we develop an improved monocular visual SLAM system using omnidirectional cameras. Our method extends the ORB-SLAM framework with the enhanced unified camera model as the projection function, which applies to catadioptric systems and to wide-angle fisheye cameras with a 195-degree field of view. The proposed system can use the full image area even under strong distortion. We also propose a map initialization method for omnidirectional cameras, and we analytically derive the Jacobian matrices of the reprojection errors with respect to the camera pose and the 3D positions of points. The proposed SLAM system has been extensively tested on real-world datasets. The results show a positioning error of less than 0.1% in a small indoor environment and less than 1.5% in a large environment, and demonstrate that our method runs in real time and improves accuracy and robustness over standard systems based on the pinhole model. The source code is available at https://github.com/lsyads/fisheye-ORB-SLAM.
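The enhanced unified camera model mentioned above describes both catadioptric and wide fisheye optics with two distortion parameters, α and β, in addition to the usual pinhole intrinsics. A minimal sketch of the published EUCM projection equations follows; it is an illustration of the general model, not this paper's exact implementation, and the function name and parameter values are assumptions.

```python
import numpy as np

def eucm_project(point, alpha, beta, fx, fy, cx, cy):
    """Project a 3D point (x, y, z) to pixel coordinates with the
    Enhanced Unified Camera Model.

    alpha in [0, 1] and beta > 0 control the distortion;
    (fx, fy, cx, cy) are pinhole-style intrinsics.
    With alpha = 0 the model reduces to a plain pinhole camera.
    """
    x, y, z = point
    d = np.sqrt(beta * (x * x + y * y) + z * z)
    denom = alpha * d + (1.0 - alpha) * z
    u = fx * x / denom + cx
    v = fy * y / denom + cy
    return u, v
```

Because the denominator stays positive even when z is small or slightly negative (for suitable α and β), points well beyond a 180° field of view still project to finite pixel coordinates, which is what lets the system use the full area of a 195° fisheye image.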


Author(s): Fatima Aziz, Ouiddad Labbani-Igbida, Amina Radgui, Ahmed Tamtaoui

2015, Vol. 26 (8), pp. 085402
Author(s): Zhiyu Xiang, Yanbing Zhou, Xiaojin Gong
