Omnidirectional Camera Calibration and 3D Reconstruction by Contour Matching

Author(s): Yongho Hwang, Jaeman Lee, Hyunki Hong

Electronics, 2018, Vol. 7 (12), p. 421
Author(s): Gwon An, Siyeong Lee, Min-Woo Seo, Kugjin Yun, Won-Sik Cheong, ...

In this paper, we propose a Charuco board-based omnidirectional camera calibration method to solve the problem of conventional methods requiring overly complicated calibration procedures. Specifically, the proposed method can easily and precisely provide two-dimensional and three-dimensional coordinates of patterned feature points by arranging the omnidirectional camera in the Charuco board-based cube structure. Then, using the coordinate information of the feature points, an intrinsic calibration of each camera constituting the omnidirectional camera can be performed by estimating the perspective projection matrix. Furthermore, without an additional calibration structure, an extrinsic calibration of each camera can be performed, even though only part of the calibration structure is included in the captured image. Compared to conventional methods, the proposed method exhibits increased reliability, because it does not require additional adjustments to the mirror angle or the positions of several pattern boards. Moreover, the proposed method calibrates independently, regardless of the number of cameras comprising the omnidirectional camera or the camera rig structure. In the experimental results, for the intrinsic parameters, the proposed method yielded an average reprojection error of 0.37 pixels, which was better than that of conventional methods. For the extrinsic parameters, the proposed method had a mean absolute error of 0.90° for rotation displacement and a mean absolute error of 1.32 mm for translation displacement.
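The abstract's intrinsic-calibration step rests on estimating the perspective projection matrix from the 2D-3D correspondences that the Charuco cube provides. As a rough sketch of that step (not the authors' implementation), the classic Direct Linear Transform (DLT) recovers the 3x4 matrix P from at least six non-coplanar correspondences; the intrinsics, pose, and point coordinates below are made-up values used only to verify the estimator on synthetic data:

```python
import numpy as np

def estimate_projection_matrix(points_3d, points_2d):
    """DLT: estimate the 3x4 perspective projection matrix P (up to scale)
    from >= 6 non-coplanar 2D-3D correspondences via SVD."""
    A = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    # The solution is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)

# Synthetic check: project known 3D points with a known P, then recover P.
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])   # hypothetical intrinsics
Rt = np.hstack([np.eye(3), np.array([[0.1], [-0.2], [2.0]])])  # hypothetical pose
P_true = K @ Rt
pts3d = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1],
                  [1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]], dtype=float)
proj = (P_true @ np.hstack([pts3d, np.ones((8, 1))]).T).T
pts2d = proj[:, :2] / proj[:, 2:3]

P_est = estimate_projection_matrix(pts3d, pts2d)
# P is defined up to scale, so normalize both before comparing.
P_est = P_est / P_est[-1, -1]
P_ref = P_true / P_true[-1, -1]
print(np.allclose(P_est, P_ref, atol=1e-6))  # → True
```

On noise-free data the DLT is exact; in practice the estimate is refined by minimizing reprojection error, which is presumably where the reported 0.37-pixel average comes from.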


2014, Vol. 536-537, pp. 213-217
Author(s): Meng Qiang Zhu, Jie Yang

This paper addresses the 3D reconstruction problem with the following pipeline. Camera calibration is based on a chessboard pattern captured in several different poses; the corner coordinates obtained by corner detection are used to perform the calibration. The calibration result is then used to correct lens distortion in the images. Next, the left and right images are matched to locate the imaging positions of points on the object surface, so that object depth can be computed by triangulation. Finally, by inverting the projection mapping, the depth and disparity information is projected into 3D space, yielding a dense point cloud that is ready for 3D reconstruction.
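The final step, inverting the projection to turn disparity into 3D points, can be sketched for a rectified stereo pair as Z = f·B/d followed by back-projection through the pinhole model; the focal length, baseline, principal point, and the tiny disparity map below are made-up values for illustration, not the paper's data:

```python
import numpy as np

def disparity_to_point_cloud(disparity, f, baseline, cx, cy):
    """Inverse projection for a rectified stereo pair:
    depth Z = f * B / d, then X = (u - cx) * Z / f, Y = (v - cy) * Z / f.
    Pixels with non-positive disparity are treated as invalid and dropped."""
    h, w = disparity.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = disparity > 0
    Z = np.zeros_like(disparity, dtype=float)
    Z[valid] = f * baseline / disparity[valid]
    X = (u - cx) * Z / f
    Y = (v - cy) * Z / f
    # Stack into (h, w, 3) and keep only valid pixels -> (N, 3) point cloud.
    return np.dstack([X, Y, Z])[valid]

# Toy 2x2 disparity map (hypothetical): larger disparity means a nearer point;
# the zero entry stands for a pixel where matching failed.
disp = np.array([[8.0, 4.0],
                 [0.0, 2.0]])
cloud = disparity_to_point_cloud(disp, f=500.0, baseline=0.1, cx=1.0, cy=1.0)
print(cloud.shape)        # → (3, 3): three valid pixels, (X, Y, Z) each
print(cloud[0, 2])        # → 6.25, i.e. Z = 500 * 0.1 / 8 for the first pixel
```

In practice this is what dense functions such as OpenCV's `cv2.reprojectImageTo3D` do with the Q matrix from stereo rectification; the sketch above just makes the per-pixel arithmetic explicit.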

