The Influence of Sub-Block Position on Performing Integrated Sensor Orientation Using In Situ Camera Calibration and Lidar Control Points

2018 ◽  
Vol 10 (2) ◽  
pp. 260 ◽  
Author(s):  
Felipe Costa ◽  
Edson Mitishita ◽  
Marlo Martins

Photogrammetric and Lidar datasets should be in the same mapping or geodetic frame to be used simultaneously in an engineering project. Direct sensor orientation is now a common procedure in simultaneous photogrammetric and Lidar surveys. Although direct sensor orientation technologies provide a high degree of automation thanks to GNSS/INS, the accuracies of the results obtained from the photogrammetric and Lidar surveys depend on the quality of a group of parameters that accurately models the operating conditions of the system at the moment the job is performed. This paper presents a study performed to verify the importance of in situ camera calibration and Integrated Sensor Orientation without control points for increasing the accuracy of the integration of photogrammetric and Lidar datasets. The horizontal and vertical accuracies of the integration of photogrammetric and Lidar datasets by the photogrammetric procedure improved significantly when the Integrated Sensor Orientation (ISO) approach was performed using Interior Orientation Parameter (IOP) values estimated from the in situ camera calibration. The horizontal and vertical accuracies, estimated by the Root Mean Square Error (RMSE) of the 3D discrepancies from the Lidar check points, improved by around 37% and 198%, respectively.
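As a minimal illustration of the accuracy metric used above, the horizontal (planimetric) and vertical (altimetric) RMSE of 3D check-point discrepancies can be computed as follows. This is a generic sketch, not the authors' code; the E/N/H column layout is an assumption:

```python
import numpy as np

def rmse_horizontal_vertical(measured, reference):
    """Horizontal and vertical RMSE of 3D check-point discrepancies.

    measured, reference: (N, 3) arrays of E, N, H coordinates (metres).
    Returns (rmse_horizontal, rmse_vertical).
    """
    d = np.asarray(measured, float) - np.asarray(reference, float)
    # Planimetric RMSE combines the East and North discrepancy components.
    rmse_h = np.sqrt(np.mean(d[:, 0] ** 2 + d[:, 1] ** 2))
    # Altimetric RMSE uses the height component only.
    rmse_v = np.sqrt(np.mean(d[:, 2] ** 2))
    return rmse_h, rmse_v
```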


Author(s):  
E. Mitishita ◽  
L. Ercolin Filho ◽  
N. Graça ◽  
J. Centeno

The direct determination of exterior orientation parameters (EOP) of aerial images via integration of an Inertial Measurement Unit (IMU) and GPS is widely used in photogrammetric mapping nowadays. The accuracy of the EOP depends on accurate mounting parameters at the time the job is performed (the offsets of the IMU relative to the projection centre and the boresight misalignment angles between the IMU and the photogrammetric coordinate system). In principle, when the EOP values do not achieve the accuracy required for the photogrammetric application, an approach known as Integrated Sensor Orientation (ISO) is used to refine the direct EOP. The ISO approach requires accurate Interior Orientation Parameters (IOP) and standard deviations of the EOP under flight conditions. This paper investigates the feasibility of using <i>in situ</i> camera calibration to obtain these requirements. The camera calibration uses a small sub-block of images extracted from the entire block. A digital Vexcel UltraCam XP camera connected to an APPLANIX POS AV<sup>TM</sup> system was used to acquire the two small image blocks used in this study. The blocks have different flight heights and opposite flight directions. The proposed methodology significantly improved the vertical and horizontal accuracies of the 3D point intersection. Using a minimum set of control points, the horizontal and vertical accuracies reached nearly one pixel of resolution on the ground (GSD). The experimental results are shown and discussed.
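The mounting parameters mentioned above (lever-arm offsets and boresight misalignment) relate the GNSS/INS solution to the camera's exterior orientation. A minimal sketch of that relation, with hypothetical variable names and rotation conventions assumed here rather than taken from the paper:

```python
import numpy as np

def camera_eop(imu_position, R_imu_to_map, lever_arm, R_boresight):
    """Derive camera exterior orientation from a GNSS/INS solution.

    imu_position : (3,) IMU centre in the mapping frame.
    R_imu_to_map : (3, 3) rotation from IMU body frame to mapping frame.
    lever_arm    : (3,) camera projection-centre offset in the IMU body frame.
    R_boresight  : (3, 3) boresight rotation from camera frame to IMU body frame.
    Returns (camera_position, R_camera_to_map).
    """
    # Shift the IMU centre by the lever arm, expressed in the mapping frame.
    camera_position = imu_position + R_imu_to_map @ lever_arm
    # Chain the boresight rotation onto the IMU attitude.
    R_camera_to_map = R_imu_to_map @ R_boresight
    return camera_position, R_camera_to_map
```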


Author(s):  
E. Mitishita ◽  
R. Barrios ◽  
J. Centeno

The direct determination of exterior orientation parameters (EOPs) of aerial images via GNSS/INS technologies is an essential prerequisite in photogrammetric mapping nowadays. Although direct sensor orientation technologies provide a high degree of automation thanks to GNSS/INS, the accuracy of the results depends on the quality of a group of parameters that accurately models the conditions of the system at the moment the job is performed. One sub-group of parameters (lever-arm offsets and boresight misalignments) models the position and orientation of the sensors with respect to the IMU body frame, since it is impossible for all sensors to occupy the same position and orientation on the airborne platform. Another sub-group models the internal characteristics of the sensor (IOP). Worldwide studies have recommended a system calibration procedure to obtain accurate parameters (mounting and sensor characteristics) for direct sensor orientation applications. Commonly, mounting and sensor characteristics are not stable; they can vary under different flight conditions. The system calibration requires a geometric arrangement of the flight and/or control points to decouple correlated parameters, which is not available in a conventional photogrammetric flight. Considering this difficulty, this study investigates the feasibility of in situ camera calibration to improve the accuracy of the direct georeferencing of aerial images. The camera calibration uses a minimum image block, extracted from the conventional photogrammetric flight, and a control point arrangement. A digital Vexcel UltraCam XP camera connected to a POS AV<sup>TM</sup> system was used to acquire two photogrammetric image blocks. The blocks have different flight directions and opposite flight lines. In situ calibration procedures computing different sets of IOPs are performed, and their results are analyzed and used in photogrammetric experiments. The IOPs from the in situ camera calibration significantly improve the accuracy of the direct georeferencing. The results obtained from the experiments are shown and discussed.
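IOPs estimated in an in situ calibration are typically applied by reducing measured image coordinates to the principal point and removing lens distortion before the collinearity equations are used. The sketch below assumes a common Brown-model subset (principal point offsets and two radial coefficients); the parameter names and sign convention are assumptions, not taken from the paper:

```python
def correct_image_point(x, y, iop):
    """Reduce a measured image point using interior orientation parameters.

    iop: dict with principal point offsets "x0", "y0" (same units as x, y)
    and radial distortion coefficients "k1", "k2" (Brown-model subset).
    Returns the corrected (x, y) relative to the principal point.
    """
    # Shift the measurement to the principal point.
    xb = x - iop["x0"]
    yb = y - iop["y0"]
    # Radial distortion correction: dr = k1*r^2 + k2*r^4.
    r2 = xb * xb + yb * yb
    dr = iop["k1"] * r2 + iop["k2"] * r2 * r2
    return xb * (1 - dr), yb * (1 - dr)
```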


Author(s):  
P. Molina ◽  
M. Blázquez ◽  
J. Sastre ◽  
I. Colomina

We introduce a new mobile, simultaneous terrestrial and aerial, geodata collection and post-processing method: mapKITE. By combining two mapping technologies, terrestrial mobile mapping and unmanned aircraft aerial mapping, geodata are simultaneously acquired from air and ground. In more detail, a mapKITE geodata acquisition system consists of an unmanned aircraft and a terrestrial vehicle, which hosts the ground control station. By means of a real-time navigation system on the terrestrial vehicle, real-time waypoints are sent to the aircraft from the ground. In this way, the aircraft is linked to the terrestrial vehicle through a “virtual tether,” acting as a “mapping kite.” <br><br> In the article, we detail the concept of mapKITE as well as the various technologies and techniques involved, from aircraft guidance and navigation based on IMU and GNSS, to optical cameras for mapping and tracking, to sensor orientation and calibration. Moreover, we report on a new measurement introduced in mapKITE: point-and-scale photogrammetric measurements (of image coordinates and scale) of optical targets of known size installed on the roof of the ground vehicle. By means of accurate a posteriori trajectory determination of the terrestrial vehicle, mapKITE then benefits from kinematic ground control points that are photogrammetrically observed through point-and-scale measurements. <br><br> Initial results for simulated configurations show that these measurements, added to the usual Integrated Sensor Orientation ones, reduce or even eliminate the need for conventional ground control points (thereby lowering mission costs) and enable self-calibration of the unmanned aircraft's interior orientation parameters in corridor configurations, in contrast to the situation in traditional corridor configurations. <br><br> Finally, we report on current developments of the first mapKITE prototype, developed under the European Union Research and Innovation programme Horizon 2020. The first mapKITE mission will be held at the BCN Drone Center (Collsuspina, Moià, Spain) in mid-2016.
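The point-and-scale idea rests on simple pinhole geometry: a target of known physical size observed at a measured image scale constrains the depth along the viewing ray. A minimal sketch under that assumption (not the authors' formulation; function and parameter names are hypothetical):

```python
def target_range(focal_length_mm, target_size_m, image_size_mm):
    """Camera-to-target range from a point-and-scale measurement.

    Pinhole model: an object of physical size D imaged at size d with
    focal length f lies at range Z = f * D / d. Units: f and d in mm,
    D in metres, so Z is returned in metres.
    """
    return focal_length_mm * target_size_m / image_size_mm
```

Because the target's size is known a priori, each observation constrains depth as well as direction, which is what lets the kinematic ground control points replace static ones.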


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1091
Author(s):  
Izaak Van Crombrugge ◽  
Rudi Penne ◽  
Steve Vanlanduit

Knowledge of precise camera poses is vital for multi-camera setups. Camera intrinsics can be obtained for each camera separately in lab conditions. For fixed multi-camera setups, the extrinsic calibration can only be done in situ. Usually, markers such as checkerboards are used, which requires some level of overlap between cameras. In this work, we propose a method for cases with little or no overlap. Laser lines are projected on a plane (e.g., floor or wall) using a laser line projector. The poses of the plane and cameras are then optimized using bundle adjustment to match the lines seen by the cameras. To find the extrinsic calibration, only a partial overlap between the laser lines and the field of view of the cameras is needed. Real-world experiments were conducted both with and without overlapping fields of view, resulting in rotation errors below 0.5°. We show that the accuracy is comparable to other state-of-the-art methods while offering a more practical procedure. The method can also be used in large-scale applications and can be fully automated.
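Rotation errors such as those reported above are commonly evaluated as the angle of the relative rotation between the estimated and reference extrinsics. A generic sketch of that metric (not the authors' code):

```python
import numpy as np

def rotation_error_deg(R_est, R_true):
    """Angular difference between two 3x3 rotation matrices, in degrees.

    The relative rotation R_est @ R_true.T has rotation angle theta with
    trace(R) = 1 + 2*cos(theta); clip guards against round-off outside [-1, 1].
    """
    R = R_est @ R_true.T
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_theta)))
```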


Author(s):  
C. Cortes ◽  
M. Shahbazi ◽  
P. Ménard

<p><strong>Abstract.</strong> In the last decade, applications of unmanned aerial vehicles (UAVs) as remote-sensing platforms have been extensively investigated for fine-scale mapping, modeling and monitoring of the environment. In recent years, the integration of 3D laser scanners and cameras onboard UAVs has also received considerable attention, as these two sensors provide complementary spatial and spectral information about the environment. Since lidar performs range and bearing measurements in its body frame, precise GNSS/INS data are required to directly georeference the lidar measurements in an object-fixed coordinate system. However, such data come at the price of tactical-grade inertial navigation sensors with dual-frequency RTK-GNSS receivers, which also necessitates access to a base station and proper post-processing software. Therefore, such UAV systems equipped with lidar and camera (UAV-LiCam systems) are too expensive to be accessible to a wide range of users. Hence, new solutions must be developed to eliminate the need for costly navigation sensors. In this paper, a two-fold solution is proposed based on an in-house developed, low-cost system: 1) a multi-sensor self-calibration approach for calibrating the LiCam system based on planar and cylindrical multi-directional features; 2) an integrated sensor orientation method for georeferencing based on unscented particle filtering, which compensates for time-variant IMU errors and eliminates the need for GNSS measurements.</p>
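Plane-based self-calibration of the kind described in point 1) typically minimises point-to-plane residuals of lidar returns with respect to the mounting parameters. A minimal sketch of such a residual, assuming a unit plane normal (a generic formulation, not taken from the paper):

```python
import numpy as np

def point_to_plane_residual(point, unit_normal, d):
    """Signed distance of a 3D point from the plane n·x + d = 0, |n| = 1.

    In plane-based self-calibration, these residuals, accumulated over many
    lidar returns on known planar features, are minimised with respect to
    the sensor mounting parameters.
    """
    return float(np.dot(unit_normal, point) + d)
```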

