Measurement of the three-dimensional mirror parameters by polarization imaging applied to catadioptric camera calibration

Author(s):  
Olivier Morel ◽  
Ralph Seulin ◽  
David Fofi
2020 ◽  
pp. 1-10
Author(s):  
Linlin Wang

With the continuous development of computer science and technology, symbol recognition systems may move from two-dimensional to three-dimensional space. This article therefore introduces a symbol recognition system based on 3D stereo vision. Two images of the object are captured by a visual coordinate measuring machine from left and right viewpoints, and binocular stereo matching is performed on the edges of the feature points in the two images. A corner detection algorithm combining SUSAN and Harris is used to detect the calibration templates of the left and right cameras. The two-dimensional coordinates of the object are determined by the image stereo matching module, and the three-dimensional discrete coordinates of the object in space are obtained from the transformation between the image coordinates and the actual object coordinates. A three-dimensional model of the object is then drawn with three-dimensional drawing software. Experimental data show that image preprocessing occupies 30.4% of the logic resources and 27.4% of the memory resources of the entire system. The results show that the system can calibrate the internal and external parameters of the camera, making the calibration more accurate and the working range wider. At the same time, it effectively compensates for the shortcomings of traditional modeling techniques and ensures the measurement accuracy of the detection system.
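The step from matched 2D image coordinates to 3D discrete coordinate points can be sketched with standard linear triangulation. This is a generic NumPy illustration of the underlying geometry, not the article's implementation; the projection matrices and the 0.1 m baseline below are hypothetical values, and the matrices would come from the camera calibration described above.

```python
import numpy as np

def triangulate_point(P_left, P_right, x_left, x_right):
    """Recover one 3D point from a matched stereo pair by linear triangulation.

    P_left, P_right : 3x4 projection matrices from camera calibration.
    x_left, x_right : matched pixel coordinates (u, v) in each image.
    """
    u1, v1 = x_left
    u2, v2 = x_right
    # Each view gives two linear constraints on the homogeneous point X:
    # u * (P[2] @ X) - (P[0] @ X) = 0  and  v * (P[2] @ X) - (P[1] @ X) = 0.
    A = np.vstack([
        u1 * P_left[2] - P_left[0],
        v1 * P_left[2] - P_left[1],
        u2 * P_right[2] - P_right[0],
        v2 * P_right[2] - P_right[1],
    ])
    # The null vector of A (last right-singular vector) solves A @ X = 0.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Hypothetical stereo rig: identical intrinsics, 0.1 m baseline along x.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_right = K @ np.hstack([np.eye(3), np.array([[-0.1], [0], [0]])])
```

With exact correspondences the least-squares solution recovers the point exactly; with noisy matches it returns the algebraic best fit.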


Sensors ◽  
2019 ◽  
Vol 19 (13) ◽  
pp. 3008 ◽  
Author(s):  
Zhe Liu ◽  
Zhaozong Meng ◽  
Nan Gao ◽  
Zonghua Zhang

Depth cameras play a vital role in three-dimensional (3D) shape reconstruction, machine vision, augmented/virtual reality and other visual information-related fields. However, a single depth camera cannot obtain complete information about an object by itself due to the limitation of the camera’s field of view. Multiple depth cameras can solve this problem by acquiring depth information from different viewpoints. To do so, they must be calibrated so that the complete 3D information can be accurately obtained. However, traditional chessboard-based planar targets are not well suited for calibrating the relative orientations between multiple depth cameras, because the coordinates of the different depth cameras need to be unified into a single coordinate system, and multi-camera systems arranged at specific angles often share only a very small overlapping field of view. In this paper, we propose a 3D target-based multiple depth camera calibration method. Each plane of the 3D target is used to calibrate an independent depth camera. All planes of the 3D target are unified into a single coordinate system, which means the feature points on the calibration planes are also in one unified coordinate system. Using this 3D target, multiple depth cameras can be calibrated simultaneously. In this paper, a method of precisely calibrating the target itself using lidar is also proposed. This method is not only applicable to the 3D target designed for the purposes of this paper, but can also be applied to any 3D calibration object consisting of planar chessboards, and it significantly reduces the calibration error compared with traditional camera calibration methods. In addition, in order to reduce the influence of the infrared transmitter of the depth camera and improve its calibration accuracy, the calibration process of the depth camera is optimized. A series of calibration experiments were carried out, and the experimental results demonstrated the reliability and effectiveness of the proposed method.
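The key step above, unifying all planes of the 3D target (and hence all cameras) into a single coordinate system, amounts to composing rigid transforms. A minimal NumPy sketch, assuming the plane-to-target transforms are known from the target's geometry and each camera's pose relative to its own plane comes from standard chessboard calibration; all numeric poses here are made-up placeholders, not values from the paper:

```python
import numpy as np

def make_T(R, t):
    """4x4 homogeneous rigid transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_z(angle):
    """Rotation about the z-axis by `angle` radians."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Known from the 3D target's geometry: each plane's frame expressed so that
# the transform maps target-frame coordinates to that plane's coordinates.
T_plane1_target = make_T(rot_z(0.0), [0.0, 0.0, 0.0])
T_plane2_target = make_T(rot_z(np.pi / 2), [0.3, 0.0, 0.0])

# From per-camera chessboard calibration: plane coordinates -> camera coordinates.
T_cam1_plane1 = make_T(rot_z(0.1), [0.0, 0.0, 1.0])
T_cam2_plane2 = make_T(rot_z(-0.2), [0.05, 0.0, 1.2])

# Composition expresses both cameras in the single unified target frame.
T_cam1_target = T_cam1_plane1 @ T_plane1_target
T_cam2_target = T_cam2_plane2 @ T_plane2_target

# Relative orientation between the two depth cameras (camera 2 -> camera 1),
# which is exactly what planar targets struggle to provide without overlap.
T_cam1_cam2 = T_cam1_target @ np.linalg.inv(T_cam2_target)
```

Because every camera is chained to the same target frame, the relative pose between any pair of cameras follows without the two cameras ever sharing a view.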


Electronics ◽  
2018 ◽  
Vol 7 (12) ◽  
pp. 421 ◽  
Author(s):  
Gwon An ◽  
Siyeong Lee ◽  
Min-Woo Seo ◽  
Kugjin Yun ◽  
Won-Sik Cheong ◽  
...  

In this paper, we propose a Charuco board-based omnidirectional camera calibration method to solve the problem of conventional methods requiring overly complicated calibration procedures. Specifically, the proposed method can easily and precisely provide two-dimensional and three-dimensional coordinates of patterned feature points by placing the omnidirectional camera inside a Charuco board-based cube structure. Then, using the coordinate information of the feature points, an intrinsic calibration of each camera constituting the omnidirectional camera can be performed by estimating the perspective projection matrix. Furthermore, without an additional calibration structure, an extrinsic calibration of each camera can be performed, even when only part of the calibration structure appears in the captured image. Compared to conventional methods, the proposed method exhibits increased reliability, because it does not require additional adjustments to the mirror angle or the positions of several pattern boards. Moreover, the proposed method performs calibration independently of the number of cameras comprising the omnidirectional camera and of the camera rig structure. In experiments, the proposed method yielded an average reprojection error of 0.37 pixels for the intrinsic parameters, better than that of conventional methods. For the extrinsic parameters, the proposed method had a mean absolute error of 0.90° for rotation displacement and a mean absolute error of 1.32 mm for translation displacement.
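The intrinsic step described above, estimating a perspective projection matrix from the 2D and 3D coordinates of feature points and then scoring it by reprojection error, can be sketched with a direct linear transform (DLT). This is a generic illustration rather than the authors' code; the function names and any test geometry are assumptions:

```python
import numpy as np

def estimate_projection_matrix(pts3d, pts2d):
    """Estimate a 3x4 perspective projection matrix P from 2D-3D pairs (DLT).

    pts3d : (N, 3) feature-point coordinates (N >= 6, not all coplanar).
    pts2d : (N, 2) corresponding pixel coordinates. P is defined up to scale.
    """
    A = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        # Each correspondence contributes two rows of the constraint matrix.
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # The null vector of the stacked system gives the 12 entries of P.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)

def mean_reprojection_error(P, pts3d, pts2d):
    """Average pixel distance between projected 3D points and measured 2D points."""
    Xh = np.hstack([pts3d, np.ones((len(pts3d), 1))])
    proj = (P @ Xh.T).T
    proj = proj[:, :2] / proj[:, 2:3]
    return float(np.linalg.norm(proj - pts2d, axis=1).mean())
```

A cube-shaped Charuco structure conveniently supplies the non-coplanar 3D points that the DLT needs to determine a full projection matrix.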
