Bundle adjustment with object space geometric constraints for site modeling

Author(s):  
J. Chris McGlone

2020 ◽
Vol 12 (14) ◽  
pp. 2268
Author(s):  
Tian Zhou ◽  
Seyyed Meghdad Hasheminasab ◽  
Radhika Ravi ◽  
Ayman Habib

Unmanned aerial vehicles (UAVs) are quickly emerging as a popular platform for 3D reconstruction/modeling in various applications such as precision agriculture, coastal monitoring, and emergency management. For such applications, LiDAR and frame cameras are the two most commonly used sensors for 3D mapping of the object space. For example, point clouds for the area of interest can be directly derived from LiDAR sensors onboard UAVs equipped with integrated global navigation satellite systems and inertial navigation systems (GNSS/INS). Imagery-based mapping, on the other hand, is considered to be a cost-effective and practical option and is often conducted by generating point clouds and orthophotos using structure from motion (SfM) techniques. Mapping with photogrammetric approaches requires accurate camera interior orientation parameters (IOPs), especially when direct georeferencing is utilized. Most state-of-the-art approaches for determining/refining camera IOPs depend on ground control points (GCPs). However, establishing GCPs is expensive and labor-intensive, and more importantly, the distribution and number of GCPs are usually less than optimal to provide adequate control for determining and/or refining camera IOPs. Moreover, consumer-grade cameras with unstable IOPs have been widely used for mapping applications. Therefore, in such scenarios, where frequent camera calibration or IOP refinement is required, GCP-based approaches are impractical. To eliminate the need for GCPs, this study uses LiDAR data as a reference surface to perform in situ refinement of camera IOPs. The proposed refinement strategy is conducted in three main steps. An image-based sparse point cloud is first generated via a GNSS/INS-assisted SfM strategy. Then, LiDAR points corresponding to the resultant image-based sparse point cloud are identified through an iterative plane fitting approach and are referred to as LiDAR control points (LCPs). 
Finally, IOPs of the utilized camera are refined through a GNSS/INS-assisted bundle adjustment procedure using LCPs. Seven datasets over two study sites with a variety of geomorphic features are used to evaluate the performance of the developed strategy. The results illustrate the ability of the proposed approach to achieve an object space absolute accuracy of 3–5 cm (i.e., 5–10 times the ground sampling distance) at a 41 m flying height.
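The iterative plane fitting behind the LiDAR control points can be sketched as follows; the function names, neighbourhood radius, and thresholds are illustrative assumptions for this sketch, not the authors' implementation:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through points: returns unit normal n and centroid c
    such that the plane is n . (x - c) = 0."""
    c = points.mean(axis=0)
    # Smallest right singular vector of the centred points is the plane normal.
    _, _, vt = np.linalg.svd(points - c)
    return vt[-1], c

def lidar_control_point(sparse_pt, lidar_pts, radius=1.0, tol=0.05, max_iter=10):
    """Illustrative LCP extraction: iteratively fit a plane to LiDAR points
    near an image-based sparse point, discard outliers, then project the
    sparse point onto the final plane."""
    nbrs = lidar_pts[np.linalg.norm(lidar_pts - sparse_pt, axis=1) < radius]
    for _ in range(max_iter):
        n, c = fit_plane(nbrs)
        d = np.abs((nbrs - c) @ n)          # point-to-plane distances
        keep = d < tol
        if keep.all():
            break
        nbrs = nbrs[keep]
    # Project the sparse point onto the fitted plane -> LiDAR control point.
    return sparse_pt - ((sparse_pt - c) @ n) * n

# Toy example: a noisy horizontal plane z = 0 with one gross outlier.
rng = np.random.default_rng(0)
lidar = np.column_stack([rng.uniform(-1, 1, 200),
                         rng.uniform(-1, 1, 200),
                         rng.normal(0, 0.01, 200)])
lidar[0, 2] = 0.5                            # outlier well off the plane
lcp = lidar_control_point(np.array([0.0, 0.0, 0.08]), lidar)
print(lcp)  # z component pulled close to the plane z = 0
```

The resulting LCPs then enter the GNSS/INS-assisted bundle adjustment as ground-like control.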


Author(s):  
H. Hastedt ◽  
T. Luhmann ◽  
H.-J. Przybilla ◽  
R. Rofallski

Abstract. For optical 3D measurements in close-range and UAV applications, the modelling of interior orientation is of paramount importance in order to subsequently allow for high precision and accuracy in geometric 3D reconstruction. Nowadays, modern camera systems are often used for optical 3D measurements due to UAV payload limits and economic reasons. They are constructed of aspheric and spherical lens combinations and include image pre-processing such as low-pass filtering or internal distortion corrections that may lead to effects in image space which are not covered by the standard interior orientation models. With a variety of structure-from-motion (SfM) data sets, four typical systematic patterns of residuals could be observed. These investigations focus on the evaluation of interior orientation modelling with respect to minimising the systematics present in image space after bundle adjustment. The influences are evaluated with respect to changes in and correlations of the interior and exterior orientation parameters, as well as the impact in object space. With the variety of data sets, camera/lens/platform configurations and pre-processing influences, these investigations indicate a number of different behaviours. Specific advice on the use of extended interior orientation models, such as Fourier series, could be derived for a selection of the data sets. Significant reductions of image space systematics are achieved. Even though increased standard deviations and correlations for the interior orientation parameters are a consequence, improvements in object space precision and image space reliability could be reached.
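As a rough illustration of what such an extended interior orientation model can look like, the sketch below adds a hypothetical Fourier-series term in the polar angle on top of a standard Brown radial distortion correction; all coefficient values are invented for the example and would in practice be estimated in the bundle adjustment:

```python
import numpy as np

def brown_radial(x, y, k1, k2):
    """Standard radial distortion correction (Brown model, terms k1, k2)."""
    r2 = x**2 + y**2
    f = k1 * r2 + k2 * r2**2
    return x * f, y * f

def fourier_extension(x, y, a, b):
    """Illustrative azimuth-dependent extension: residual image-space
    systematics modelled as a Fourier series in the polar angle phi.
    Coefficients a[k], b[k] are hypothetical."""
    phi = np.arctan2(y, x)
    dr = sum(a[k] * np.cos((k + 1) * phi) + b[k] * np.sin((k + 1) * phi)
             for k in range(len(a)))
    # Apply the radial correction dr along the direction of the image point.
    return dr * np.cos(phi), dr * np.sin(phi)

# Combined correction at one image point (all coefficients hypothetical).
x, y = 2.0, 1.0
dx1, dy1 = brown_radial(x, y, k1=-1e-4, k2=1e-8)
dx2, dy2 = fourier_extension(x, y, a=[1e-3, 5e-4], b=[2e-4, 0.0])
print(x + dx1 + dx2, y + dy1 + dy2)
```

The Fourier term captures azimuth-dependent residual patterns that a purely radial model cannot, at the cost of additional, potentially correlated parameters, which matches the trade-off the abstract reports.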


Author(s):  
S. Verykokou ◽  
C. Ioannidis

Abstract. The purpose of this paper is the presentation of a novel algorithm for automatic estimation of the exterior orientation parameters of image datasets, which can be applied when the scene depicted in the images contains a planar surface (e.g., the roof of a building). The algorithm requires the measurement of four coplanar ground control points (GCPs) in only one image. It uses a template matching method combined with a homography-based technique to transfer the GCPs into another image, along with an incremental photogrammetry-based Structure from Motion (SfM) workflow, coupled with robust iterative bundle adjustment methods that reject any remaining outliers which have passed the checks and geometric constraints imposed during the image matching procedure. Its main steps consist of (i) determination of overlapping images without the need for GPS/INS data; (ii) image matching and feature tracking; (iii) estimation of the exterior orientation parameters of a starting image pair; and (iv) photogrammetry-based SfM combined with iterative bundle adjustment methods. A software solution implementing the proposed algorithm was tested using a set of UAV oblique images. Several tests were performed to assess the errors, and comparisons with well-established commercial software were made in terms of automation and correctness of the computed exterior orientation parameters. The results show that the orientation parameters estimated via the proposed solution have accuracy comparable to those computed by the commercial software using its highest-accuracy settings; in addition, the commercial software required twice the manual work of the proposed solution.
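The homography-based transfer of the four coplanar GCPs between two images can be illustrated with a minimal direct linear transform (DLT); this is a generic sketch, not the authors' template-matching pipeline, and the coordinates are invented:

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct linear transform: homography H with dst ~ H @ src from
    >= 4 point correspondences (illustrative, no coordinate normalisation)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # Null vector of the design matrix = stacked entries of H.
    _, _, vt = np.linalg.svd(np.asarray(rows, float))
    return vt[-1].reshape(3, 3)

def transfer(H, pts):
    """Map image points through H (homogeneous divide)."""
    p = np.column_stack([pts, np.ones(len(pts))]) @ H.T
    return p[:, :2] / p[:, 2:3]

# Four coplanar GCPs measured in image A; transfer them to image B.
gcp_a = np.array([[0., 0], [100, 0], [100, 100], [0, 100]])
gcp_b = np.array([[10., 5], [110, 8], [105, 108], [8, 103]])
H = homography_dlt(gcp_a, gcp_b)
print(transfer(H, gcp_a))   # reproduces gcp_b up to numerical noise
```

Because a homography exactly models the mapping of a plane between two views, four coplanar correspondences suffice, which is why the algorithm needs GCP measurements in only one image.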


2018 ◽  
Vol 10 (10) ◽  
pp. 92 ◽  
Author(s):  
Qianru Teng ◽  
Yimin Chen ◽  
Chen Huang

We present an occlusion-aware unsupervised neural network for jointly learning three low-level vision tasks from monocular videos: depth, optical flow, and camera motion. The system consists of three predicting sub-networks coupled by combined loss terms and is capable of computing each task independently on test samples. Geometric constraints extracted from scene geometry, which have traditionally been used in bundle adjustment or pose-graph optimization, serve as various self-supervisory signals during our end-to-end learning approach. Different from prior works, our image reconstruction loss also takes optical flow into account. Moreover, we impose novel 3D flow consistency constraints over the predictions of all three tasks. By explicitly modeling occlusion and exploiting both 2D and 3D geometric relationships, abundant geometric constraints are formed over the estimated outputs, enabling the system to capture both low-level representations and high-level cues to infer thinner scene structures. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: (1) monocular depth estimation outperforms state-of-the-art unsupervised methods and is comparable to stereo-supervised ones; (2) optical flow prediction ranks top among prior works and even beats supervised and traditional methods, especially in non-occluded regions; (3) pose estimation outperforms established SLAM systems under comparable input settings by a reasonable margin.
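The occlusion-aware reconstruction loss idea can be sketched in one dimension: warp the source view by the predicted flow and average the photometric error only over pixels marked visible. This toy numpy version (nearest-neighbour warp, hand-made mask) illustrates only the principle, not the paper's differentiable implementation:

```python
import numpy as np

def warp_1d(src, flow):
    """Backward-warp a 1-D 'image' by per-pixel flow (nearest neighbour)."""
    idx = np.clip(np.round(np.arange(len(src)) + flow).astype(int),
                  0, len(src) - 1)
    return src[idx]

def masked_photometric_loss(target, src, flow, occ_mask):
    """Illustrative occlusion-aware reconstruction loss: the L1 photometric
    error is averaged only over pixels visible in both views."""
    recon = warp_1d(src, flow)
    vis = occ_mask.astype(float)
    return np.sum(vis * np.abs(target - recon)) / np.maximum(vis.sum(), 1.0)

target = np.array([1., 2, 3, 4, 5])
src    = np.array([2., 3, 4, 5, 6])        # target shifted by one pixel
occ    = np.array([0, 1, 1, 1, 1])         # leftmost pixel is occluded
loss = masked_photometric_loss(target, src, np.full(5, -1.0), occ)
print(loss)  # 0.0: perfect reconstruction on the visible pixels
```

Without the mask, the occluded boundary pixel would contribute a spurious error and push the networks toward wrong depth and flow there, which is precisely what explicit occlusion modeling avoids.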


Author(s):  
A. Cefalu ◽  
N. Haala ◽  
D. Fritsch

Bundle adjustment based on collinearity is the most widely used optimization method within image-based scene reconstruction. It incorporates observed image coordinates, exterior and intrinsic camera parameters, as well as object space coordinates of the observed points. The latter dominate the resulting nonlinear system in terms of the number of unknowns to be estimated. In order to reduce the size of the problem regarding memory footprint and computational effort, several approaches have been developed to make the process more efficient, e.g. by exploiting sparsity or hierarchical subdivision. Some recent developments express the bundle problem through epipolar geometry and scale consistency constraints which are free of object space coordinates. These approaches are usually referred to as structureless bundle adjustment. The number of unknowns in the resulting system is drastically reduced. However, most work in this field focuses on optimization for speed and considers only calibrated cameras. We present our work on structureless bundle adjustment, focusing on precision issues such as camera calibration and residual weighting. We further investigate accumulation of constraint residuals as an approach to decreasing the number of rows of the Jacobian matrix.
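For reference, the collinearity observation that classic bundle adjustment builds on can be written compactly; the sketch below uses a simplified pinhole convention (no principal-point offset or distortion), which is an assumption of this example:

```python
import numpy as np

def collinearity_project(X, Xc, R, f):
    """Collinearity equations: project object point X into an image with
    projection centre Xc, rotation R (world -> camera) and focal length f.
    The reprojection residual x_obs - x_proj is the basic bundle observation."""
    d = R @ (X - Xc)                       # point in camera coordinates
    return np.array([-f * d[0] / d[2], -f * d[1] / d[2]])

# One projected observation: nadir-looking camera 100 units above the origin.
Xc = np.array([0., 0., 100.])
R  = np.eye(3)
x_obs = collinearity_project(np.array([10., -5., 0.]), Xc, R, f=50.0)
print(x_obs)
```

Each such observation ties image coordinates to an object point X, so every tracked point adds three unknowns; structureless formulations replace these equations with constraints between images only, eliminating X from the system.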


Author(s):  
J. Unger ◽  
F. Rottensteiner ◽  
C. Heipke

This paper addresses the integration of a building model into the pose estimation of image sequences. Images are captured by an Unmanned Aerial System (UAS) equipped with a camera flying between buildings. Two approaches to assigning tie points to a generalised building model in object space are presented. A direct approach is based on the distances between the object coordinates of tie points and the planes of the building model. An indirect approach first finds planes within the tie point cloud that are subsequently matched to model planes; based on these matches, tie points are then assigned to model planes. In both cases, the assignments are used in a hybrid bundle adjustment to refine the poses (image orientations). Experimental results for an image sequence demonstrate improvements in comparison to an adjustment without the building model. Differences and limitations of the two approaches for point-plane assignment are discussed; in the experiments they perform similarly with respect to the estimated standard deviations of tie points.
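The direct assignment approach can be sketched as a nearest-plane search with a distance threshold; the plane parameters, points, and tolerance below are invented for illustration and do not reproduce the authors' generalised building model:

```python
import numpy as np

def assign_points_to_planes(points, planes, tol=0.2):
    """Direct assignment sketch: a tie point x is assigned to the model
    plane (n, d) with n . x + d = 0 whose distance is smallest and below
    tol; otherwise it stays unassigned (-1)."""
    out = []
    for x in points:
        dists = [abs(n @ x + d) for n, d in planes]
        k = int(np.argmin(dists))
        out.append(k if dists[k] < tol else -1)
    return out

# Two model planes: a facade x = 0 and a roof z = 10 (unit normals).
planes = [(np.array([1., 0, 0]), 0.0), (np.array([0., 0, 1.]), -10.0)]
pts = np.array([[0.05, 3, 4], [2, 1, 10.1], [5, 5, 5]])
print(assign_points_to_planes(pts, planes))  # [0, 1, -1]
```

Each assignment then contributes a point-to-plane distance as an additional observation in the hybrid bundle adjustment, constraining the tie points, and through them the poses, to the building model.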


Author(s):  
O. Kahmen ◽  
R. Rofallski ◽  
N. Conen ◽  
T. Luhmann

Abstract. In multimedia photogrammetry, multi-camera systems often provide scale by a calibrated relative orientation. Camera calibration via bundle adjustment is a well-established standard procedure in single-medium photogrammetry. When using standard software and applying the collinearity equations in multimedia photogrammetry, the refractive interfaces are modelled in an implicit form. This contribution analyses different calibration strategies for bundle-invariant interfaces. To evaluate the effects of implicitly modelling the refractive effects within a bundle adjustment, synthetic datasets are simulated. Contrary to many publications, systematic effects on the exterior orientations can be verified with simulated data. The behaviour of interior, exterior and relative orientation parameters is analysed using error-free synthetic datasets. The relative orientation of a stereo camera shows systematic effects when the angle of convergence varies and when the synthetic interface is set up at different distances from the camera. It becomes clear that in most cases the implicit modelling is not suitable for multimedia photogrammetry. An explicit modelling of the refractive interfaces is implemented in a bundle adjustment. This strict model is analysed and compared with the implicit form regarding systematic effects in orientation parameters as well as errors in object space. In a real experiment, the discrepancies between the implicit form using standard software and the explicit modelling using our own implementation are quantified. It is highly advisable to model the interfaces strictly, since implicit modelling may lead to relevant errors in object space.
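Explicit treatment of a refractive interface ultimately means applying Snell's law along each imaging ray instead of absorbing refraction into the camera parameters. The vector form below is a generic sketch (not the authors' implementation), shown for an air-water flat port:

```python
import numpy as np

def refract(d, n, n1, n2):
    """Snell's law in vector form: refract unit direction d at an interface
    with unit normal n (pointing towards the incoming ray), going from
    refractive index n1 into n2. Returns None on total internal reflection."""
    d, n = d / np.linalg.norm(d), n / np.linalg.norm(n)
    cos_i = -d @ n
    eta = n1 / n2
    sin2_t = eta**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        return None
    return eta * d + (eta * cos_i - np.sqrt(1.0 - sin2_t)) * n

# A ray leaving the camera housing (air, n1 = 1.0) and entering water
# (n2 = 1.33) through a flat port with normal [0, 0, 1].
d_out = refract(np.array([0.3, 0.0, -1.0]), np.array([0., 0., 1.]), 1.0, 1.33)
print(d_out)  # bent towards the normal, still unit length
```

Because the bending depends on the ray's incidence angle and on where it crosses the interface, it cannot in general be absorbed into distance-invariant interior orientation parameters, which is the core argument for the strict model.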



