A Novel Calibration Board and Experiments for 3D LiDAR and Camera Calibration

Sensors ◽  
2020 ◽  
Vol 20 (4) ◽  
pp. 1130 ◽  
Author(s):  
Huaiyu Cai ◽  
Weisong Pang ◽  
Xiaodong Chen ◽  
Yi Wang ◽  
Haolin Liang

Aiming at the problems of feature-point calibration methods for 3D light detection and ranging (LiDAR) and camera systems, namely calibration boards in widely varying forms, incomplete information-extraction methods, and large calibration errors, a novel calibration board with local gradient depth information and main-plane square corner information (BWDC) was designed. In addition, a "three-step fitting interpolation method" was proposed to select feature points and obtain their corresponding coordinates in the LiDAR coordinate system and the camera pixel coordinate system based on BWDC. Finally, calibration experiments were carried out, and the results were verified by methods such as incremental verification and reprojection error comparison. The results show that using BWDC and the "three-step fitting interpolation method" yields an accurate coordinate transformation matrix and accurate intrinsic and extrinsic sensor parameters, which vary within 0.2% across repeated experiments. The difference between the experimental and actual values in the incremental verification experiment is about 0.5%. The average reprojection error is 1.8312 pixels, and it varies by no more than 0.1 pixels across different distances, which also shows that the calibration method is accurate and stable.
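The reprojection-error check used above to validate a calibration can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's code; the intrinsic matrix `K`, the pose `R`, `t`, and the point sets are made-up example values.

```python
import numpy as np

def reprojection_error(K, R, t, pts_3d, pts_2d):
    """Mean pixel distance between projected 3D points and detections."""
    cam = R @ pts_3d.T + t.reshape(3, 1)   # world -> camera frame
    uvw = K @ cam                          # camera -> homogeneous pixels
    proj = (uvw[:2] / uvw[2]).T            # perspective divide
    return np.mean(np.linalg.norm(proj - pts_2d, axis=1))

# Example values (assumed, not the paper's): identity pose, simple intrinsics.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
pts_3d = np.array([[0.1, 0.0, 2.0], [-0.1, 0.1, 3.0], [0.2, -0.1, 2.5]])
pts_2d = (K @ pts_3d.T / pts_3d.T[2]).T[:, :2]   # exact projections
print(reprojection_error(K, R, t, pts_3d, pts_2d))  # ~0 for perfect data
```

With real detections the error is nonzero; the paper's reported 1.8312 pixels is the mean of exactly this kind of per-point distance.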

Electronics ◽  
2018 ◽  
Vol 7 (12) ◽  
pp. 421 ◽  
Author(s):  
Gwon An ◽  
Siyeong Lee ◽  
Min-Woo Seo ◽  
Kugjin Yun ◽  
Won-Sik Cheong ◽  
...  

In this paper, we propose a Charuco board-based omnidirectional camera calibration method to solve the problem of conventional methods requiring overly complicated calibration procedures. Specifically, the proposed method can easily and precisely provide two-dimensional and three-dimensional coordinates of patterned feature points by arranging the omnidirectional camera in the Charuco board-based cube structure. Then, using the coordinate information of the feature points, an intrinsic calibration of each camera constituting the omnidirectional camera can be performed by estimating the perspective projection matrix. Furthermore, without an additional calibration structure, an extrinsic calibration of each camera can be performed, even though only part of the calibration structure is included in the captured image. Compared to conventional methods, the proposed method exhibits increased reliability, because it does not require additional adjustments to the mirror angle or the positions of several pattern boards. Moreover, the proposed method calibrates independently, regardless of the number of cameras comprising the omnidirectional camera or the camera rig structure. In the experimental results, for the intrinsic parameters, the proposed method yielded an average reprojection error of 0.37 pixels, which was better than that of conventional methods. For the extrinsic parameters, the proposed method had a mean absolute error of 0.90° for rotation displacement and a mean absolute error of 1.32 mm for translation displacement.
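The intrinsic step the abstract describes, estimating the perspective projection matrix from 2D and 3D feature-point coordinates, is commonly done with the Direct Linear Transform (DLT). A minimal sketch under that assumption follows; the matrix `P_true` and the point set are illustrative values, not from the paper.

```python
import numpy as np

def estimate_projection_matrix(pts_3d, pts_2d):
    """DLT: recover the 3x4 projection matrix (up to scale) from >= 6
    world-to-pixel correspondences via the SVD null vector."""
    rows = []
    for (X, Y, Z), (u, v) in zip(pts_3d, pts_2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(3, 4)

# Illustrative ground truth and non-coplanar 3D points (not from the paper).
P_true = np.array([[800.0, 0, 320, 10], [0, 800, 240, 20], [0, 0, 1, 2]])
pts_3d = np.array([[0, 0, 1], [1, 0, 2], [0, 1, 3], [1, 1, 2.5],
                   [0.5, -0.4, 1.5], [-0.3, 0.9, 2.2],
                   [0.8, 0.4, 1.2], [-0.6, -0.2, 2.8]], dtype=float)
uvw = P_true @ np.c_[pts_3d, np.ones(len(pts_3d))].T
pts_2d = (uvw[:2] / uvw[2]).T
P_est = estimate_projection_matrix(pts_3d, pts_2d)
P_est *= P_true[2, 3] / P_est[2, 3]    # fix the arbitrary scale (and sign)
```

Once the projection matrix is known, the intrinsic and extrinsic parameters can be separated from it by a standard RQ decomposition.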


2014 ◽  
Vol 568-570 ◽  
pp. 320-325 ◽  
Author(s):  
Feng Shan Huang ◽  
Li Chen

A new CCD camera calibration method based on the translation of a Coordinate Measuring Machine (CMM) is proposed. The CMM translates the CCD camera relative to the center of a white ceramic standard sphere along the X, Y, and Z axes, generating the coordinates of the calibration feature point at different positions in the probe coordinate system. Meanwhile, the camera captures an image of the standard sphere at every position, so the coordinates of the calibration feature point in the computer frame coordinate system can be registered. The calibration mathematical model was established, the calibration steps were given, and the calibration system was set up. A comparative calibration shows that the precision of this method is equivalent to that of the dedicated calibration method, with the difference between the calibration data of the two methods within ±1 μm.
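Relating the sphere-center positions recorded in the probe coordinate system to the same points expressed in another frame amounts to estimating a rigid transform from point correspondences. A minimal Kabsch-style sketch, assuming noiseless correspondences (the rotation, translation, and points below are illustrative, not the paper's data):

```python
import numpy as np

def rigid_transform(src, dst):
    """Kabsch: least-squares R, t such that dst_i ~= R @ src_i + t."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)           # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

# Synthetic check: rotate about Z and translate, then recover the motion.
a = 0.3
R_true = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0],
                   [0,          0,         1]])
t_true = np.array([5.0, -2.0, 10.0])
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 2, 3]], float)
dst = src @ R_true.T + t_true
R_est, t_est = rigid_transform(src, dst)
```

The CMM's advantage in this scheme is that the translations it produces are known to micrometre accuracy, so the correspondences are nearly noise-free.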


Sensors ◽  
2019 ◽  
Vol 19 (13) ◽  
pp. 3008 ◽  
Author(s):  
Zhe Liu ◽  
Zhaozong Meng ◽  
Nan Gao ◽  
Zonghua Zhang

Depth cameras play a vital role in three-dimensional (3D) shape reconstruction, machine vision, augmented/virtual reality and other visual information-related fields. However, a single depth camera cannot obtain complete information about an object by itself due to the limitation of the camera’s field of view. Multiple depth cameras can solve this problem by acquiring depth information from different viewpoints. In order to do so, they need to be calibrated to be able to accurately obtain the complete 3D information. However, traditional chessboard-based planar targets are not well suited for calibrating the relative orientations between multiple depth cameras, because the coordinates of different depth cameras need to be unified into a single coordinate system, and the multiple camera systems with a specific angle have a very small overlapping field of view. In this paper, we propose a 3D target-based multiple depth camera calibration method. Each plane of the 3D target is used to calibrate an independent depth camera. All planes of the 3D target are unified into a single coordinate system, which means the feature points on the calibration plane are also in one unified coordinate system. Using this 3D target, multiple depth cameras can be calibrated simultaneously. In this paper, a method of precise calibration using lidar is proposed. This method is not only applicable to the 3D target designed for the purposes of this paper, but it can also be applied to all 3D calibration objects consisting of planar chessboards. This method can significantly reduce the calibration error compared with traditional camera calibration methods. In addition, in order to reduce the influence of the infrared transmitter of the depth camera and improve its calibration accuracy, the calibration process of the depth camera is optimized. A series of calibration experiments were carried out, and the experimental results demonstrated the reliability and effectiveness of the proposed method.
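Since each face of the 3D target is calibrated as an independent plane, a basic building block of such a pipeline is least-squares plane fitting to measured 3D points. A minimal SVD-based sketch (the plane coefficients and sample points are example values, not the paper's target geometry):

```python
import numpy as np

def fit_plane(pts):
    """Least-squares plane through 3D points: returns a unit normal n and
    the centroid c, with n . (p - c) ~= 0 for points p on the plane."""
    c = pts.mean(0)
    _, _, Vt = np.linalg.svd(pts - c)
    return Vt[-1], c                 # smallest singular vector is the normal

# Synthetic points on the plane z = 0.5x - 0.2y + 3 (example coefficients).
xy = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [2, 1], [0.5, 1.5]], float)
pts = np.column_stack([xy, 0.5 * xy[:, 0] - 0.2 * xy[:, 1] + 3])
n, c = fit_plane(pts)
n_true = np.array([-0.5, 0.2, 1.0])
n_true /= np.linalg.norm(n_true)     # expected normal, up to sign
```

Expressing every fitted plane in one shared coordinate system is what lets feature points from all faces, and hence all depth cameras, be unified as the abstract describes.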


2015 ◽  
Vol 741 ◽  
pp. 697-700 ◽  
Author(s):  
Li Lun Huang ◽  
Wen Guo Li ◽  
Qi Le Yang ◽  
Ying Chun Chen

The basic principles of camera calibration are first analyzed, and a camera calibration method based on a 2D planar circular array is presented. First, the Canny edge detection operator is used to obtain the edge coordinates of each ellipse. Each ellipse is then fitted to obtain its center point, and the center-point coordinates are used as the feature points for camera calibration. Finally, Zhang Zhengyou's method is used to obtain the internal and external parameters of the camera. This calibration method can be applied to the calibration of robot systems.
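The ellipse-fitting step can be illustrated by fitting a general conic to the detected edge points and reading off its centre. This is a hedged sketch of a standard least-squares conic fit, not the paper's implementation; the sampled ellipse is synthetic.

```python
import numpy as np

def ellipse_center(xs, ys):
    """Fit the general conic a x^2 + b xy + c y^2 + d x + e y + f = 0 by
    least squares (SVD null vector), then return the conic's centre."""
    D = np.column_stack([xs**2, xs * ys, ys**2, xs, ys, np.ones_like(xs)])
    _, _, Vt = np.linalg.svd(D)
    a, b, c, d, e, f = Vt[-1]
    # Centre = point where both partial derivatives of the conic vanish.
    return np.linalg.solve([[2 * a, b], [b, 2 * c]], [-d, -e])

# Synthetic edge points on an ellipse centred at (3, 2), semi-axes 2 and 1.
theta = np.linspace(0, 2 * np.pi, 50, endpoint=False)
xs, ys = 3 + 2 * np.cos(theta), 2 + np.sin(theta)
center = ellipse_center(xs, ys)
```

Note that for a circle viewed under strong perspective, the centre of the imaged ellipse is slightly offset from the image of the circle's centre, which is one reason circle-array targets need care at high accuracy.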


2012 ◽  
Vol 472-475 ◽  
pp. 968-973
Author(s):  
Hong Ru Wang ◽  
Wen Ding

To improve the accuracy of computer visual inspection in a keyboard automatic assembly line, a new two-stage camera calibration method was presented. A 2D circle array was used as the calibration plate, with the centers of the circles taken as feature points, and the feature point coordinates were extracted without human intervention. The proposed camera calibration method is divided into two stages. First, lens distortion was neglected, and the internal and external parameters of the camera were obtained with a modified camera calibration toolbox for MATLAB. Then, lens distortion was taken into account, and an improved genetic algorithm (GA) was adopted to optimize the camera parameters obtained in the first stage. Experimental results indicate that the proposed method is feasible and can meet the requirements of the given application.
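The second-stage refinement optimizes a lens distortion model. The abstract does not spell out which model is used; a common assumption, sketched here, is the two-coefficient radial model, whose forward mapping on normalized image coordinates is x_d = x (1 + k1 r^2 + k2 r^4).

```python
import numpy as np

def apply_radial_distortion(xy, k1, k2):
    """Forward two-coefficient radial distortion on normalized image
    coordinates xy (N x 2): x_d = x * (1 + k1 r^2 + k2 r^4)."""
    r2 = (xy ** 2).sum(axis=1, keepdims=True)   # squared radius per point
    return xy * (1.0 + k1 * r2 + k2 * r2 ** 2)

xy = np.array([[0.0, 0.0], [0.5, 0.0]])
out = apply_radial_distortion(xy, 0.1, 0.0)     # the centre point is unmoved
```

A GA-based second stage would search over (k1, k2) together with the first-stage parameters, scoring each candidate by the reprojection error of the circle-centre feature points.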


2014 ◽  
Vol 513-517 ◽  
pp. 3719-3722
Author(s):  
Wen Guo Li ◽  
Shao Jun Duan

We present a camera calibration method based on vanishing points; that is, the vanishing points of two groups of parallel lines on the target plane are used to achieve camera calibration. A series of points at known positions on the target plane serve as feature points. Target images are recorded, the image coordinates of the feature points are used to calculate the coordinates of the vanishing points, and the mapping between the feature points and the camera is then used to obtain the internal parameters of the camera. Experimental results show that the proposed calibration algorithm is correct, simple, and convenient.
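In homogeneous image coordinates, the vanishing point of a group of parallel lines is simply the intersection of their images, computable with two cross products. A minimal sketch (the point coordinates are synthetic, not from the paper's target):

```python
import numpy as np

def vanishing_point(p1, p2, p3, p4):
    """Intersect the image lines p1-p2 and p3-p4 in homogeneous coordinates:
    a line is the cross product of two points on it, and the meet of two
    lines is the cross product of the lines."""
    h = lambda p: np.array([p[0], p[1], 1.0])
    l1 = np.cross(h(p1), h(p2))
    l2 = np.cross(h(p3), h(p4))
    v = np.cross(l1, l2)
    return v[:2] / v[2]

# Two synthetic line segments whose extensions meet at (100, 50).
v = vanishing_point((0, 0), (50, 25), (0, 100), (50, 75))
```

In practice each vanishing point is estimated from more than two lines for robustness, and the orthogonality of the two line groups provides the constraints on the intrinsic matrix.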


2012 ◽  
Vol 591-593 ◽  
pp. 1281-1284
Author(s):  
Tian Xia ◽  
Chao Jin ◽  
Xiao Yang Jiang ◽  
Yi Zhong Li

Camera calibration is an essential part of machine vision. Its purpose is to establish the relationship between camera image pixel locations and scene positions: based on the camera model, the model parameters are solved from the image coordinates and world coordinates of known feature points. Using a camera calibration method based on quadratic curves, this article selects a template consisting of two concentric circles and two concentric ellipses. We shot four images from different directions and used different combinations of three of the four images to calibrate the camera, from which the various parameters of the camera can be calculated.


2021 ◽  
Vol 11 (4) ◽  
pp. 1373
Author(s):  
Jingyu Zhang ◽  
Zhen Liu ◽  
Guangjun Zhang

Pose measurement is a necessary technology for UAV navigation, and accurate pose measurement is the most important guarantee of stable UAV flight. UAV pose measurement methods mostly use image matching against aircraft models or correspondences between 2D and 3D points. These methods lead to pose measurement errors due to inaccurate contour and key feature point extraction. To solve these problems, a pose measurement method based on the structural characteristics of the aircraft's rigid skeleton is proposed in this paper. Depth information is introduced to guide and label the 2D feature points, eliminating feature mismatches and segmenting the region. The spatial points obtained from the labeled feature points are fitted to the spatial line equations of the rigid skeleton, and the UAV attitude is calculated in combination with the geometric model. This method does not require cooperative identification of the aircraft model, and it can stably measure the position and attitude of a short-range UAV in various environments. The effectiveness and reliability of the proposed method are verified by experiments on a visual simulation platform. The proposed method can prevent aircraft collisions and help ensure the safety of UAV navigation in autonomous refueling or formation flight.
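Fitting the spatial line equation of a skeleton segment from its labeled 3D points can be done by least squares via SVD; the segment's direction vector then feeds the attitude computation. A minimal sketch (the sample points are synthetic, not from the simulation platform):

```python
import numpy as np

def fit_line_3d(pts):
    """Least-squares 3D line through points: centroid plus the principal
    direction (first right singular vector of the centered points)."""
    c = pts.mean(0)
    _, _, Vt = np.linalg.svd(pts - c)
    return c, Vt[0]

# Synthetic skeleton points along a known direction (illustrative values).
t = np.linspace(0.0, 4.0, 9)
d_true = np.array([1.0, 2.0, 2.0]) / 3.0
pts = np.array([1.0, -2.0, 0.5]) + t[:, None] * d_true
c, d = fit_line_3d(pts)              # d is parallel (up to sign) to d_true
```

With two such fitted axes, say the fuselage and wing lines, the yaw, pitch, and roll of the aircraft can be recovered relative to the camera frame.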

