LiDAR-Aided Interior Orientation Parameters Refinement Strategy for Consumer-Grade Cameras Onboard UAV Remote Sensing Systems

2020 ◽  
Vol 12 (14) ◽  
pp. 2268
Author(s):  
Tian Zhou ◽  
Seyyed Meghdad Hasheminasab ◽  
Radhika Ravi ◽  
Ayman Habib

Unmanned aerial vehicles (UAVs) are quickly emerging as a popular platform for 3D reconstruction/modeling in various applications such as precision agriculture, coastal monitoring, and emergency management. For such applications, LiDAR and frame cameras are the two most commonly used sensors for 3D mapping of the object space. For example, point clouds for the area of interest can be directly derived from LiDAR sensors onboard UAVs equipped with integrated global navigation satellite systems and inertial navigation systems (GNSS/INS). Imagery-based mapping, on the other hand, is considered to be a cost-effective and practical option and is often conducted by generating point clouds and orthophotos using structure from motion (SfM) techniques. Mapping with photogrammetric approaches requires accurate camera interior orientation parameters (IOPs), especially when direct georeferencing is utilized. Most state-of-the-art approaches for determining/refining camera IOPs depend on ground control points (GCPs). However, establishing GCPs is expensive and labor-intensive, and more importantly, the distribution and number of GCPs are usually less than optimal to provide adequate control for determining and/or refining camera IOPs. Moreover, consumer-grade cameras with unstable IOPs have been widely used for mapping applications. Therefore, in such scenarios, where frequent camera calibration or IOP refinement is required, GCP-based approaches are impractical. To eliminate the need for GCPs, this study uses LiDAR data as a reference surface to perform in situ refinement of camera IOPs. The proposed refinement strategy is conducted in three main steps. An image-based sparse point cloud is first generated via a GNSS/INS-assisted SfM strategy. Then, LiDAR points corresponding to the resultant image-based sparse point cloud are identified through an iterative plane fitting approach and are referred to as LiDAR control points (LCPs). 
Finally, IOPs of the utilized camera are refined through a GNSS/INS-assisted bundle adjustment procedure using LCPs. Seven datasets over two study sites with a variety of geomorphic features are used to evaluate the performance of the developed strategy. The results illustrate the ability of the proposed approach to achieve an object space absolute accuracy of 3–5 cm (i.e., 5–10 times the ground sampling distance) at a 41 m flying height.
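The LiDAR control point (LCP) extraction step can be illustrated with a minimal sketch: fit a local plane to the LiDAR neighbours of each image-based sparse point, iteratively reject off-plane outliers, and project the image point onto the final plane. This is an illustrative reconstruction, not the authors' implementation; the function names, the fixed search radius, and the 3-sigma rejection rule are assumptions.

```python
import numpy as np

def fit_plane(pts):
    """Least-squares plane through pts: returns centroid and unit normal."""
    c = pts.mean(axis=0)
    # the smallest right singular vector of the centred points is the normal
    _, _, vt = np.linalg.svd(pts - c)
    return c, vt[-1]

def lidar_control_point(img_pt, lidar_pts, radius=1.0, tol=0.05, max_iter=10):
    """Project an image-based sparse point onto a locally fitted LiDAR plane
    (hypothetical sketch of the iterative plane-fitting step)."""
    nbrs = lidar_pts[np.linalg.norm(lidar_pts - img_pt, axis=1) < radius]
    for _ in range(max_iter):
        c, n = fit_plane(nbrs)
        d = np.abs((nbrs - c) @ n)          # point-to-plane residuals
        keep = d < max(tol, 3.0 * d.std())  # reject gross outliers
        if keep.all():
            break
        nbrs = nbrs[keep]
    return img_pt - ((img_pt - c) @ n) * n  # LCP: projection onto the plane
```

The resulting LCPs then act as the ground truth in the GNSS/INS-assisted bundle adjustment.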

Author(s):  
M. Dahaghin ◽  
F. Samadzadegan ◽  
F. Dadras Javan

Abstract. Thermography is a robust method for detecting thermal irregularities on building roofs, which are among the main sources of energy dissipation. Recently, UAVs have proven useful for gathering 3D thermal data of building roofs. In this context, the low spatial resolution of thermal imagery is a challenge that leads to sparse point clouds. This paper suggests the fusion of visible and thermal point clouds to generate a high-resolution thermal point cloud of the building roofs. For this purpose, camera calibration is performed to obtain the interior orientation parameters, and then the thermal and visible point clouds are generated. In the next step, both point clouds are geo-referenced using control points. To extract building roofs from the visible point cloud, CSF ground filtering is applied, and the vegetation layer is removed using the RGBVI index. Afterward, a predefined threshold is applied to the z-component of the normal vectors in order to separate roof facets from walls. Finally, the visible point cloud of the building roofs and the registered thermal point cloud are combined to generate a fused dense point cloud. Results show a mean re-projection error of 0.31 pixels for the thermal camera calibration and a mean absolute distance of 0.2 m for the point cloud registration. The final product is a fused point cloud whose density improves to up to twice that of the initial thermal point cloud and which has the spatial accuracy of the visible point cloud along with the thermal information of the building roofs.
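The roof-extraction steps (vegetation removal with the RGBVI index, then thresholding the z-component of the point normals) can be sketched as follows. The RGBVI formulation (G² − B·R)/(G² + B·R) is the commonly used one; the threshold values and function names are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def rgbvi(r, g, b):
    """Red-Green-Blue Vegetation Index: high for green vegetation."""
    r, g, b = (np.asarray(c, dtype=float) for c in (r, g, b))
    return (g**2 - b * r) / (g**2 + b * r + 1e-12)

def roof_mask(normals, colors, nz_thresh=0.9, veg_thresh=0.1):
    """Keep near-horizontal, non-vegetation points (roof facets).
    normals: Nx3 unit normals; colors: Nx3 RGB values."""
    horizontal = np.abs(normals[:, 2]) > nz_thresh   # |n_z| close to 1
    veg = rgbvi(colors[:, 0], colors[:, 1], colors[:, 2]) > veg_thresh
    return horizontal & ~veg
```

Points passing the mask would then be fused with the registered thermal cloud.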


Author(s):  
A. Al-Rawabdeh ◽  
H. Al-Gurrani ◽  
K. Al-Durgham ◽  
I. Detchev ◽  
F. He ◽  
...  

Landslides are among the major threats to urban landscapes and manmade infrastructure. They often cause economic losses, property damage, and loss of lives. Temporal monitoring data of landslides from different epochs enables the evaluation of landslide progression. Alignment of overlapping surfaces from two or more epochs is crucial for the proper analysis of landslide dynamics. Traditional methods for point-cloud-based landslide monitoring rely on a variation of the Iterative Closest Point (ICP) registration procedure to align the reconstructed surfaces from different epochs to a common reference frame. However, ICP-based registration can sometimes fail or may not provide sufficient accuracy. For example, point clouds from different epochs might converge to local minima due to a lack of geometrical variability within the data. Also, manual interaction is required to exclude any non-stable areas from the registration process. In this paper, a robust image-based registration method is introduced for the simultaneous estimation of all registration parameters. This includes the Interior Orientation Parameters (IOPs) of the camera and the Exterior Orientation Parameters (EOPs) of the involved images from all available observation epochs via a bundle block adjustment with self-calibration. Next, a semi-global dense matching technique is implemented to generate dense 3D point clouds for each epoch using the images captured in that epoch separately. The normal distances between any two consecutive point clouds can then be readily computed, because the point clouds are already effectively co-registered. A low-cost DJI Phantom II Unmanned Aerial Vehicle (UAV) was customised and used in this research for temporal data collection over an active soil creep area in Lethbridge, Alberta, Canada.
The customisation included adding a GPS logger and a Large-Field-Of-View (LFOV) action camera, which facilitated capturing high-resolution geo-tagged images in two epochs over the period of one year (i.e., May 2014 and May 2015). Note that, due to the coarse accuracy of the on-board GPS receiver (approximately +/- 5–10 m), the geo-tagged positions of the images were only used as initial values in the bundle block adjustment. Normal distances, signifying detected changes, ranging from 20 cm to 4 m were identified between the two epochs. The accuracy of the co-registered surfaces was estimated by comparing non-active patches within the monitored area of interest. Since these non-active sub-areas are stationary, the computed normal distances should theoretically be close to zero. The quality control of the registration results showed that the average normal distance was approximately 4 cm, which is within the noise level of the reconstructed surfaces.
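The normal-distance computation between two co-registered epochs can be sketched as follows: estimate a local normal from the reference epoch's neighbourhood, then project the vector to the closest point of the second epoch onto that normal. This is a brute-force illustrative sketch (the authors' actual implementation is not specified); for real clouds a k-d tree should replace the exhaustive searches.

```python
import numpy as np

def normal_distances(ref, other, k=8):
    """Signed distance from each point of the reference epoch to the other
    epoch's cloud, measured along the local surface normal of `ref`."""
    out = np.empty(len(ref))
    for i, p in enumerate(ref):
        # k nearest reference neighbours define the local plane normal
        nn = ref[np.argsort(np.linalg.norm(ref - p, axis=1))[:k]]
        _, _, vt = np.linalg.svd(nn - nn.mean(axis=0))
        n = vt[-1]
        # closest point of the other epoch, projected onto that normal
        q = other[np.argmin(np.linalg.norm(other - p, axis=1))]
        out[i] = (q - p) @ n
    return out
```

Over stationary sub-areas these distances should scatter around zero, which is exactly the quality check used in the study.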


Author(s):  
S. Rhee ◽  
T. Kim

3D spatial information from unmanned aerial vehicle (UAV) images is usually provided in the form of 3D point clouds. For various UAV applications, it is important to generate dense 3D point clouds automatically over the entire extent of UAV images. In this paper, we apply image matching to generate local point clouds over a pair or group of images and global optimization to combine the local point clouds over the whole region of interest. We applied two types of image matching, an object-space-based matching technique and an image-space-based matching technique, and compared the performance of the two. The object-space-based matching used here sets a list of candidate height values for a fixed horizontal position in the object space. For each height, its corresponding image point is calculated and similarity is measured by grey-level correlation. The image-space-based matching used here is a modified relaxation matching. We devised a global optimization scheme for finding optimal pairs (or groups) for image matching, defining local match regions in image or object space, and merging the local point clouds into a global one. For optimal pair selection, tiepoints among images were extracted and a stereo coverage network was defined by forming a maximum spanning tree using the tiepoints. From the experiments, we confirmed that 3D point clouds were generated successfully through image matching and global optimization. However, the results also revealed some limitations. In the case of the image-space-based matching results, we observed some blanks in the 3D point clouds. In the case of the object-space-based matching results, we observed more blunders than in the image-space-based ones, as well as noisy local height variations. We suspect these might be due to inaccurate orientation parameters. The work in this paper is still ongoing. We will further test our approach with more precise orientation parameters.
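The optimal-pair selection described above, forming a maximum spanning tree over image pairs weighted by shared tiepoints, can be sketched with a greedy Kruskal-style construction. This is an illustrative sketch; the paper does not specify the tree algorithm used.

```python
def max_spanning_tree(n_images, pair_ties):
    """Greedy (Kruskal) maximum spanning tree over images, where the
    edge weight is the number of tiepoints shared by an image pair.
    pair_ties: dict mapping (i, j) image-index pairs to tiepoint counts."""
    parent = list(range(n_images))

    def find(x):                      # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    # visit pairs in order of decreasing tiepoint count
    for (i, j), w in sorted(pair_ties.items(), key=lambda e: -e[1]):
        ri, rj = find(i), find(j)
        if ri != rj:                  # adding this pair creates no cycle
            parent[ri] = rj
            tree.append((i, j))
    return tree
```

The retained edges define the stereo coverage network over which local matching is run.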


2020 ◽  
Vol 12 (18) ◽  
pp. 2923
Author(s):  
Tengfei Zhou ◽  
Xiaojun Cheng ◽  
Peng Lin ◽  
Zhenlun Wu ◽  
Ensheng Liu

Due to environmental and human factors, as well as the instrument itself, there are many uncertainties in point clouds, which directly affect the data quality and the accuracy of subsequent processing, such as point cloud segmentation and 3D modeling. In this paper, to address this problem, the stochastic information of the point cloud coordinates is taken into account and, on the basis of the scanner observation principle within the Gauss–Helmert model, a novel general point-based self-calibration method is developed for terrestrial laser scanners, incorporating both five additional parameters and six exterior orientation parameters. For cases where the instrument accuracy differs from the nominal one, a variance component estimation algorithm is implemented to reweight the outliers once the residual errors of the observations are obtained. Considering that the proposed method is essentially a nonlinear model, the Gauss–Newton iteration method is applied to derive the solutions for the additional parameters and the exterior orientation parameters. We conducted experiments using simulated and real data and compared the results with those of two existing methods. The experimental results showed that the proposed method could improve the point accuracy from 10^−4 to 10^−8 (a priori known) and 10^−7 (a priori unknown), and reduced the correlation among the parameters (by approximately 60% in volume). However, it is undeniable that some correlations increased instead, which is a limitation of the general method.
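The Gauss–Newton step used to solve the nonlinear calibration model can be illustrated generically: linearise the residuals, solve the normal equations for a parameter update, and iterate to convergence. This is a textbook sketch, not the paper's implementation; in the actual adjustment the residual and Jacobian would come from the Gauss–Helmert scanner observation model with its additional and exterior orientation parameters.

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, tol=1e-10, max_iter=50):
    """Generic Gauss-Newton: minimise ||residual(x)||^2 by repeatedly
    solving the linearised normal equations J^T J dx = -J^T r."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r, J = residual(x), jacobian(x)
        dx = np.linalg.solve(J.T @ J, -J.T @ r)
        x = x + dx
        if np.linalg.norm(dx) < tol:   # parameter update has converged
            break
    return x
```

In the self-calibration context, `x` would stack the five additional parameters and six exterior orientation parameters, with the variance component estimation reweighting the observations between iterations.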


Author(s):  
Z. Xiong ◽  
D. Stanley ◽  
Y. Xin

Approximate values of the exterior orientation parameters are needed for air photo bundle adjustment. Usually, airborne GPS/IMU can provide initial values for the camera position and attitude angles. However, in some cases the camera's attitude angles are not available due to the lack of an IMU or for other reasons. In that case, the kappa angle needs to be estimated for each photo before bundle adjustment. The kappa angle can be obtained from Ground Control Points (GCPs) in the photo. Unfortunately, enough GCPs are not always available. To overcome this problem, an algorithm was developed to automatically estimate the kappa angle for air photos based on the phase-only correlation technique. This function has been embedded in PCI software. Extensive experiments show that this algorithm is fast, reliable, and stable.
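Phase-only correlation recovers translation directly; rotation (such as kappa) is typically recovered by applying the same machinery to log-polar resampled magnitude spectra. A minimal translation-only sketch of the core technique, with assumed function names:

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the integer pixel shift between two images via phase-only
    correlation: the cross-power spectrum is normalised to unit magnitude,
    so its inverse FFT is a sharp peak at the displacement. Returns the
    (row, col) shift s such that np.roll(b, s, axis=(0, 1)) aligns b with a."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12            # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the image size to negative values
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx
```

Because only the spectral phase is kept, the peak is robust to illumination differences between overlapping photos.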


Author(s):  
Ismail Elkhrachy

This paper analyses and evaluates the precision and accuracy of low-cost terrestrial photogrammetry using multiple digital cameras to construct a 3D model of an object. To this end, a building façade was imaged by two inexpensive digital cameras, a Canon and a Pentax. Bundle adjustment and image processing were carried out using Agisoft PhotoScan software. Several factors were examined in this study, including different cameras and control points. Several photogrammetric point clouds were generated, and their accuracy was compared against natural control points on the same building collected with a laser total station. Cloud-to-cloud distances were computed between the different 3D models to investigate the different variables. The field experiment showed that the spatial positioning accuracy achieved by the investigated technique was between 2 and 4 cm in the 3D coordinates of the façade. This accuracy is encouraging, since the captured images were processed without any control points.
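The cloud-to-cloud comparison mentioned above is typically the unsigned nearest-neighbour distance from each point of one cloud to the other. A minimal brute-force sketch (illustrative only; production tools accelerate the search with spatial indexing):

```python
import numpy as np

def cloud_to_cloud(a, b):
    """Unsigned nearest-neighbour (C2C) distance from each point of cloud
    `a` (Nx3) to cloud `b` (Mx3). Brute force: for large clouds a k-d tree
    should replace the full pairwise distance matrix."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return d.min(axis=1)
```

Summary statistics of these distances (mean, RMS) are what allow the 2–4 cm accuracy figure to be quoted.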


Author(s):  
M. Eslami ◽  
M. Saadatseresht

Abstract. Laser-scanner-generated point clouds and photogrammetric imagery are complementary data for many applications and services. Misalignment between imagery and point cloud data is a common problem that leads to inaccurate products and procedures. In this paper, a novel strategy is proposed for coarse-to-fine registration between close-range imagery and terrestrial laser scanner point cloud data. First, tie points are extracted and matched across the photogrammetric imagery, and preprocessing is applied to eliminate non-robust ones. Then, for every tie point, two neighboring pixels are selected and matched in all overlapping images. Next, coarse interior orientation parameters (IOPs) and exterior orientation parameters (EOPs) of the images are employed to reconstruct the object-space points of each tie point and its two neighboring pixels. The nearest points in the point cloud data to these object-space photogrammetric points are then estimated. The three estimated point cloud points are used to calculate a plane and its normal vector. Theoretically, every object-space tie point should lie on this plane, which is used as a conditional equation alongside the collinearity equations to finely register the photogrammetric image network. The attained root mean square error (RMSE) on check points was less than 2.3 pixels, which demonstrates the accuracy, completeness, and robustness of the proposed method.
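The conditional equation described above, that each object-space tie point should lie on the plane through its three nearest laser-scanner points, can be written as n·x − d ≈ 0. A minimal sketch with assumed function names:

```python
import numpy as np

def plane_from_points(p1, p2, p3):
    """Plane through three points: unit normal n and offset d with n.x = d."""
    n = np.cross(p2 - p1, p3 - p1)
    n = n / np.linalg.norm(n)
    return n, n @ p1

def coplanarity_residual(tie_pt, n, d):
    """Conditional equation: a correctly registered object-space tie point
    lies on the plane fitted to its nearest scanner points, so n.x - d = 0."""
    return n @ tie_pt - d
```

In the fine-registration adjustment, residuals of this form would be minimised jointly with the collinearity equations over the image network's IOPs and EOPs.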


2022 ◽  
Author(s):  
Lukas Winiwarter ◽  
Katharina Anders ◽  
Daniel Schröder ◽  
Bernhard Höfle

Abstract. 4D topographic point cloud data contain information on surface change processes and their spatial and temporal characteristics, such as the duration, location, and extent of mass movements, e.g., rockfalls or debris flows. To automatically extract and analyse change and activity patterns from these data, methods considering the spatial and temporal properties are required. The commonly used M3C2 point cloud distance reduces uncertainty through spatial averaging for bitemporal analysis. To extend this concept into the full 4D domain, we use a Kalman filter for point cloud change analysis. The filter incorporates M3C2 distances together with uncertainties obtained through error propagation as Bayesian priors in a dynamic model. The Kalman filter yields a smoothed estimate of the change time series for each spatial location, again associated with an uncertainty. Through the temporal smoothing, the Kalman filter uncertainty is, in general, lower than the individual bitemporal uncertainties, which therefore allows more change to be detected as significant. In our example, a time series of bi-hourly terrestrial laser scanning point clouds spanning around 6 days (71 epochs) of a rockfall-affected high-mountain slope in Tyrol, Austria, we are able to almost double the number of points where change is deemed significant (from 14.9 % to 28.6 % of the area of interest). Since the Kalman filter allows interpolation and, under certain constraints, also extrapolation of the time series, the estimated change values can be temporally resampled. This can be critical for subsequent analyses that are unable to deal with missing data, as may be caused by, e.g., foggy or rainy weather conditions. We demonstrate two different clustering approaches, transforming the 4D data into 2D map visualisations that can be easily interpreted by analysts.
By comparison to two state-of-the-art 4D point cloud change methods, we highlight that the main advantage of our method is the extraction of a smoothed best-estimate time series of change at each location. A main disadvantage remains: the method cannot detect spatially overlapping change objects in a single pass. In conclusion, the consideration of combined temporal and spatial data enables a notable reduction in the associated uncertainty of the quantified change value for each point in space and time, in turn allowing the extraction of more information from the 4D point cloud dataset.
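The filtering idea can be illustrated for a single location with a one-dimensional random-walk state model: each M3C2 distance and its propagated variance enter as an observation, and the posterior variance after each update is generally smaller than the per-epoch measurement variance. A minimal forward-pass sketch; the published method additionally smooths over the whole series, and the process-noise value `q` here is an assumed placeholder:

```python
import numpy as np

def kalman_1d(z, var_z, q=1e-4, x0=0.0, p0=1.0):
    """Forward Kalman pass over one location's change time series.
    State: change value under a random-walk dynamic model (process noise q).
    Observations: M3C2 distances z with per-epoch measurement variances var_z."""
    x, p = x0, p0
    xs, ps = [], []
    for zi, ri in zip(z, var_z):
        p = p + q                      # predict: random walk adds q
        k = p / (p + ri)               # Kalman gain
        x = x + k * (zi - x)           # update with the M3C2 observation
        p = (1.0 - k) * p              # posterior variance shrinks
        xs.append(x)
        ps.append(p)
    return np.array(xs), np.array(ps)
```

The shrinking posterior variance is what lets more points cross the significance threshold than in a purely bitemporal comparison.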


Author(s):  
G. Stavropoulou ◽  
G. Tzovla ◽  
A. Georgopoulos

Over the past decade, large-scale photogrammetric products have been extensively used for the geometric documentation of cultural heritage monuments, as they combine metric information with the qualities of an image document. Additionally, the rising technology of terrestrial laser scanning has enabled the easier and faster production of accurate digital surface models (DSMs), which have in turn contributed to the documentation of heavily textured monuments. However, due to the required accuracy of control points, photogrammetric methods are always applied in combination with surveying measurements and are hence dependent on them. Along this line of thought, this paper explores the possibility of limiting the surveying measurements and field work necessary for the production of large-scale photogrammetric products, and proposes an alternative method in which the necessary control points, instead of being measured with surveying procedures, are chosen from a dense and accurate point cloud. Using this point cloud also as a surface model, the only field work necessary is the scanning of the object and the image acquisition, which need not be subject to strict planning. To evaluate the proposed method, an algorithm and a complementary interface were produced that allow the parallel manipulation of 3D point clouds and images and through which single-image procedures take place. The paper concludes by presenting the results of a case study on the ancient temple of Hephaestus in Athens and by providing a set of guidelines for implementing the method effectively.

