Robust Calibration of Cameras with Telephoto Lens Using Regularized Least Squares

2014 ◽  
Vol 2014 ◽  
pp. 1-9 ◽  
Author(s):  
Mingpei Liang ◽  
Xinyu Huang ◽  
Chung-Hao Chen ◽  
Gaolin Zheng ◽  
Alade Tokuta

Cameras with telephoto lenses are usually used to recover details of an object that is either small or located far away from the camera. However, the calibration of such cameras is not as accurate as that of cameras with short focal lengths, which are commonly used in many vision applications. This paper makes two contributions. First, we present a first-order error analysis that shows the relation between focal length and the estimation uncertainties of the camera parameters. To our knowledge, this error analysis with respect to focal length has not been studied in the area of camera calibration. Second, we propose a robust algorithm to calibrate a camera with a long focal length without using additional devices. By adding a regularization term, our algorithm makes the estimation of the image of the absolute conic well posed. As a consequence, the covariance of the camera parameters can be greatly reduced. We further verify the proposed algorithm on simulations and real data and obtain very stable results.
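The abstract's key idea is that a regularization term stabilizes an otherwise ill-posed least-squares estimate. A minimal sketch of that mechanism, as generic Tikhonov-regularized least squares (the paper applies it specifically to the image of the absolute conic; the function name, `lam`, and the prior `x0` here are illustrative assumptions, not the authors' formulation):

```python
import numpy as np

def regularized_lstsq(A, b, lam=1e-3, x0=None):
    """Solve min ||A x - b||^2 + lam * ||x - x0||^2 via the normal equations.

    The regularization term lam * I keeps the normal matrix well conditioned
    when A is nearly rank deficient, which is the failure mode the abstract
    attributes to long focal lengths.
    """
    n = A.shape[1]
    if x0 is None:
        x0 = np.zeros(n)
    lhs = A.T @ A + lam * np.eye(n)   # always invertible for lam > 0
    rhs = A.T @ b + lam * x0          # pulls the solution toward the prior x0
    return np.linalg.solve(lhs, rhs)
```

For small `lam` the solution stays close to the unregularized one on well-conditioned data, while remaining finite and stable on degenerate data.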

2003 ◽  
Vol 3 (1) ◽  
pp. 189-201 ◽  
Author(s):  
Ilya D. Mishev

A new mixed finite volume method for elliptic equations with tensor coefficients on rectangular meshes (2-D and 3-D) is presented. The implementation of the discretization as a finite volume method for the scalar variable (“pressure”) is derived. The scheme is well suited for heterogeneous and anisotropic media because of the generalized harmonic averaging. It is shown that the method is stable and well posed. First-order error estimates are derived. The theoretical results are confirmed by the presented numerical experiments.
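The abstract credits the scheme's robustness on heterogeneous media to harmonic averaging of the coefficient at cell faces. A sketch of the plain scalar version of that idea (the paper uses a generalized tensor form; this scalar one-face helper is an illustrative assumption):

```python
def face_transmissibility(k_left, k_right, h=1.0):
    """Harmonic average of the two adjacent cell coefficients at a face,
    scaled by the cell spacing h.

    Unlike the arithmetic mean, the harmonic mean respects strong jumps:
    a near-zero coefficient on either side of the face correctly drives
    the flux through that face toward zero.
    """
    return 2.0 * k_left * k_right / ((k_left + k_right) * h)
```

For homogeneous media the harmonic and arithmetic means coincide; the difference only matters across coefficient jumps, which is exactly the heterogeneous case the abstract targets.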


Author(s):  
P. Agrafiotis ◽  
A. Georgopoulos

Refraction is the main cause of geometric distortions in two-media photogrammetry. However, this effect cannot be compensated and corrected by a suitable camera calibration procedure (Georgopoulos and Agrafiotis, 2012). In addition, according to the literature (Lavest et al., 2000), when the camera is underwater the effective focal length is approximately equal to the in-air focal length multiplied by the refractive index of water. This index depends on the composition of the water (salinity, temperature, etc.) and usually ranges from 1.10 to 1.34. It seems that in two-media photogrammetry the factor of 1.33 used for clean water in fully underwater cases does not apply, and the most probable relation of the effective camera constant to the in-air one depends on the percentages of air and water within the total camera-to-object distance. This paper examines this relation in detail, verifies it, and develops it through the application of calibration methods using different test fields. In addition, the current methodologies for underwater and two-media calibration are reviewed, and the problem of two-media calibration is described and analysed.
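The abstract's hypothesis is that the effective camera constant lies between the in-air value and the fully submerged value, weighted by how much of the camera-to-object distance is water. A simple linear blend illustrating that idea (this interpolation and the function's names are assumptions for illustration, not the calibrated relation the paper derives):

```python
def effective_camera_constant(c_air, water_fraction, n_water=1.33):
    """Blend between the in-air camera constant and the fully submerged
    value c_air * n_water, weighted by the fraction of the camera-to-object
    distance that lies in water.

    water_fraction = 0 gives the in-air constant; water_fraction = 1 gives
    the classic underwater value c_air * n_water.
    """
    if not 0.0 <= water_fraction <= 1.0:
        raise ValueError("water_fraction must be in [0, 1]")
    return c_air * (1.0 + (n_water - 1.0) * water_fraction)
```

With `n_water=1.33` (clean water) and a 50 mm in-air constant, a fully submerged path gives 66.5 mm, while a half-water path gives an intermediate value, matching the abstract's claim that the pure 1.33 factor does not apply in mixed two-media geometry.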


Sensors ◽  
2020 ◽  
Vol 20 (21) ◽  
pp. 6319
Author(s):  
Zixuan Bai ◽  
Guang Jiang ◽  
Ailing Xu

In this paper, we introduce a novel approach to estimate the extrinsic parameters between a LiDAR and a camera. Our method is based on line correspondences between the LiDAR point clouds and camera images. We solve the rotation matrix with 3D–2D infinity point pairs extracted from parallel lines. Then, the translation vector can be solved based on the point-on-line constraint. Different from other target-based methods, this method can be performed simply without preparing specific calibration objects because parallel lines are commonly presented in the environment. We validate our algorithm on both simulated and real data. Error analysis shows that our method can perform well in terms of robustness and accuracy.
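The rotation step described above aligns 3D directions of parallel lines in the LiDAR frame with directions recovered from their vanishing (infinity) points in the image. Assuming the 2D infinity points have already been back-projected into unit direction vectors in the camera frame, the alignment can be sketched with the standard SVD-based orthogonal Procrustes (Kabsch) solution; this generic solver is an assumption for illustration, not necessarily the authors' exact estimator:

```python
import numpy as np

def rotation_from_directions(dirs_lidar, dirs_cam):
    """Estimate the rotation R such that dirs_cam ≈ R @ dirs_lidar.

    Both arguments are 3 x N arrays whose columns are corresponding unit
    direction vectors (e.g. line directions in the LiDAR frame and the
    back-projected vanishing-point directions in the camera frame).
    """
    H = dirs_cam @ dirs_lidar.T           # 3 x 3 cross-covariance of directions
    U, _, Vt = np.linalg.svd(H)
    # Flip the last axis if needed so the result is a proper rotation (det = +1).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    return U @ D @ Vt
```

At least two non-parallel direction pairs are needed to fix the rotation; with the rotation known, the translation can then be solved linearly from the point-on-line constraints mentioned in the abstract.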


Author(s):  
W Warren-Hicks ◽  
S Qian ◽  
J Toll ◽  
D Fischer ◽  
E Fite ◽  
...  

2020 ◽  
Vol 44 (3) ◽  
pp. 385-392
Author(s):  
E.A. Shalimova ◽  
E.V. Shalnov ◽  
A.S. Konushin

Some computer vision tasks become easier with a known camera calibration. We propose a method for estimating camera focal length, location, and orientation by observing human poses in the scene. Weak requirements on the observed scene make the method applicable to a wide range of scenarios. Our evaluation shows that, even when trained only on a synthetic dataset, the proposed method outperforms a known solution. Our experiments also show that using only human poses as input allows the proposed method to calibrate dynamic visual sensors.

