Point Cloud Stacking: A Workflow to Enhance 3D Monitoring Capabilities Using Time-Lapse Cameras

2020 ◽  
Vol 12 (8) ◽  
pp. 1240 ◽  
Author(s):  
Xabier Blanch ◽  
Antonio Abellan ◽  
Marta Guinau

The emerging use of photogrammetric point clouds in three-dimensional (3D) monitoring processes has revealed some constraints with respect to the use of LiDAR point clouds. Oftentimes, point clouds (PCs) obtained by time-lapse photogrammetry have lower density and precision, especially when Ground Control Points (GCPs) are not available or the camera system cannot be properly calibrated. This paper presents a new workflow called Point Cloud Stacking (PCStacking) that overcomes these restrictions by making the most of the iterative solutions for both camera position estimation and internal calibration parameters that are obtained during bundle adjustment. The basic principle of the stacking algorithm is straightforward: it computes the median of the Z coordinates of each point across multiple photogrammetric models to give a resulting PC with greater precision than any of the individual PCs. The different models are reconstructed from images taken simultaneously from at least five points of view, reducing the systematic errors associated with the photogrammetric reconstruction workflow. The algorithm was tested using both a synthetic point cloud and a real 3D dataset from a rock cliff. The synthetic data were created using mathematical functions that emulate photogrammetric models. Real data were obtained by very low-cost photogrammetric systems developed specifically for this experiment. Point clouds improved when the algorithm was applied in both synthetic and real experiments: e.g., the 25th and 75th error percentiles were reduced from 3.2 cm to 1.4 cm in synthetic tests and from 1.5 cm to 0.5 cm under real conditions.
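The median-stacking principle can be sketched in a few lines of Python (an illustrative sketch, assuming the individual models are already co-registered and their rows correspond point-for-point; the paper's full workflow also exploits the variability of the bundle-adjustment solutions):

```python
import numpy as np

def pc_stacking(models):
    """PCStacking sketch: given M co-registered photogrammetric models of
    the same N points, replace each point's Z with the per-point median
    over the M models, following the paper's basic principle."""
    stack = np.stack(models)                        # shape (M, N, 3)
    out = stack[0].copy()                           # keep X, Y from one model
    out[:, 2] = np.median(stack[:, :, 2], axis=0)   # median Z per point
    return out

# Toy example: five noisy reconstructions of a flat surface at Z = 0
rng = np.random.default_rng(0)
base = np.zeros((100, 3))
base[:, :2] = rng.uniform(0, 1, (100, 2))
models = [base + np.c_[np.zeros((100, 2)), rng.normal(0, 0.03, (100, 1))]
          for _ in range(5)]
stacked = pc_stacking(models)
# The median-stacked cloud lies closer to the true surface than any single model
```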

Author(s):  
Ismail Elkhrachy

This paper analyses and evaluates the precision and accuracy of low-cost terrestrial photogrammetry using multiple digital cameras to construct a 3D model of an object. To this end, a building façade was imaged with two inexpensive digital cameras, a Canon and a Pentax. Bundle adjustment and image processing were performed using Agisoft PhotoScan software. Several factors were considered in this study, including the different cameras and the control points. Several photogrammetric point clouds were generated, and their accuracy was compared against natural control points on the same building surveyed with a laser total station. Cloud-to-cloud distances were computed between the different 3D models to investigate the different variables. The field experiment showed that the spatial positioning achieved by the investigated technique was between 2 and 4 cm in the 3D coordinates of the façade. This accuracy is promising given that the captured images were processed without any control points.
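The cloud-to-cloud comparison described above is, at its core, a nearest-neighbour distance computation. A brute-force sketch (names and data are illustrative; dedicated tools such as CloudCompare are normally used at realistic cloud sizes):

```python
import numpy as np

def cloud_to_cloud(source, reference):
    """Unsigned nearest-neighbour (C2C) distance from each source point to
    a reference cloud, the quantity compared here between photogrammetric
    and total-station models. Brute force: fine for small clouds only."""
    # (Ns, Nr) matrix of pairwise distances, then min over the reference
    d = np.linalg.norm(source[:, None, :] - reference[None, :, :], axis=2)
    return d.min(axis=1)

# A facade-like grid vs. the same grid shifted 3 cm along its normal
ref = np.array([[x, y, 0.0] for x in range(5) for y in range(5)])
src = ref + np.array([0.0, 0.0, 0.03])
dist = cloud_to_cloud(src, ref)
# Every source point lies 3 cm from its nearest reference point
```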


2018 ◽  
Vol 42 (3) ◽  
pp. 457-467 ◽  
Author(s):  
A. N. Kamaev ◽  
D. A. Karmanov

The task of autonomous underwater vehicle (AUV) navigation is considered in this paper. The images obtained from an onboard stereo camera are used to build point clouds attached to particular AUV positions. Quantized SIFT descriptors of points are stored in a metric tree to organize an effective search procedure using a best-bin-first approach. Correspondences for a new point cloud are sought within a compact group of point clouds that have the largest number of similar descriptors stored in the tree. The new point cloud can thus be positioned relative to the other clouds without any prior information about the AUV position or the uncertainty of this position. This approach increases the reliability of the AUV navigation system and makes it insensitive to data losses, textureless seafloor regions, and long passes without trajectory intersections. Several algorithms are described in the paper: an algorithm for point cloud computation, an algorithm for establishing point cloud correspondences, and an algorithm for building groups of potentially linked point clouds to speed up the global search for correspondences. The general navigation algorithm, consisting of three parallel subroutines (image adding, search tree updating, and global optimization), is also presented. The proposed navigation system is tested on real and synthetic data. Tests on real data showed that the trajectory can be built even for an image sequence with 60% data losses, in which successive images have small or zero overlap. Tests on synthetic data showed that the constructed trajectory is close to the true one even for long missions. The average speed of image processing by the proposed navigation system is about 3 frames per second on a mid-range desktop CPU.
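The retrieval step, finding the group of stored clouds that share the most similar descriptors with a new cloud, can be illustrated with a simple voting scheme (a sketch only; the paper uses quantized SIFT descriptors in a metric tree with best-bin-first search rather than the brute-force matching shown here):

```python
import numpy as np
from collections import Counter

def match_cloud(new_desc, stored_desc, stored_owner):
    """Each descriptor of a new point cloud votes for the stored cloud
    owning its nearest stored descriptor; the cloud with the most votes
    is the best candidate for establishing correspondences."""
    votes = Counter()
    for d in new_desc:
        nearest = np.argmin(np.linalg.norm(stored_desc - d, axis=1))
        votes[stored_owner[nearest]] += 1
    return votes.most_common(1)[0][0]

rng = np.random.default_rng(1)
desc_a = rng.normal(0, 1, (20, 8))    # descriptors from stored cloud "A"
desc_b = rng.normal(5, 1, (20, 8))    # descriptors from stored cloud "B"
stored = np.vstack([desc_a, desc_b])
owner = ["A"] * 20 + ["B"] * 20
query = desc_b + rng.normal(0, 0.1, desc_b.shape)   # a revisit of "B"
best = match_cloud(query, stored, owner)            # → "B"
```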


Author(s):  
K. Nagara ◽  
T. Fuse

With the increasingly widespread use of three-dimensional data, the demand for simplified data acquisition is also increasing. The range camera, a simplified sensor, can acquire a dense range image in a single shot; however, its measuring coverage is narrow and its measuring accuracy is limited. The former drawback can be overcome by registering sequential range images, but this approach assumes that the point cloud is error-free. In this paper, we develop an integration method for sequential range images with error adjustment of the point cloud. The proposed method combines the ICP (Iterative Closest Point) algorithm with self-calibration bundle adjustment. The ICP result provides the initial values for the bundle adjustment; by applying the bundle adjustment, the coordinates of the point cloud are corrected and the camera poses are updated. Through experiments on real data, the efficiency of the proposed method has been confirmed.
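The transform-update step at the heart of each ICP iteration, and hence of the coarse alignment used here to seed the bundle adjustment, is a closed-form least-squares fit. A sketch under the assumption that point correspondences are already known:

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares rotation R and translation t with Q ≈ R @ P + t,
    via SVD of the cross-covariance (the Kabsch solution used inside
    each ICP iteration once correspondences are fixed)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                            # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])   # avoid reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Recover a known rotation about Z plus a translation
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
P = np.random.default_rng(2).uniform(-1, 1, (30, 3))
Q = P @ R_true.T + np.array([0.2, -0.1, 0.5])
R, t = best_rigid_transform(P, Q)
```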


Sensors ◽  
2019 ◽  
Vol 19 (3) ◽  
pp. 700 ◽  
Author(s):  
Anna Fryskowska

Three-dimensional (3D) mapping of power lines is very important for power line inspection. Many remotely sensed data products, such as light detection and ranging (LiDAR), have already been studied for power line surveys. More and more data are being obtained via photogrammetric measurements, which increases the need for advanced processing techniques. In recent years, there have been several developments in visualisation techniques using UAV (unmanned aerial vehicle) platform photography. The most modern of such imaging systems can generate dense point clouds. However, image-based point cloud accuracy is often variable (unstable) and depends on the radiometric quality of the images and the efficiency of the image processing algorithms. The main factor degrading point cloud quality is noise. Such problems usually arise with data obtained from low-cost UAV platforms; the generated point clouds representing power lines are therefore usually incomplete and noisy. To obtain a complete and accurate 3D model of power lines and towers, it is necessary to develop improved data processing algorithms. This paper presents a wavelet-based method for processing data acquired with a low-cost UAV camera; the algorithms were tested on power lines of different voltages. The proposed, original method applies algorithms for coarse filtration followed by precise filtering. In addition, a new way of calculating the recommended flight height is proposed. Finally, the accuracy of this two-stage filtration process is assessed using proposed point quality indices. The experimental results show that the proposed algorithm improves the quality of low-cost point clouds: it improves the accuracy of determining the line parameters by a factor of more than two, and the wavelet-based approach removes about 10% of the noise.
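The idea of suppressing noise in wavelet detail coefficients can be illustrated with a one-level Haar transform applied to a sag-like height profile (a minimal stand-in with illustrative parameters; the paper's two-stage coarse-plus-precise filtering of real UAV data is more elaborate):

```python
import numpy as np

def haar_denoise(z, thresh):
    """One-level Haar wavelet soft-thresholding of a 1D signal z (even
    length), e.g. heights sampled along a power line: the high-pass
    (detail) band, dominated by noise, is shrunk toward zero."""
    approx = (z[0::2] + z[1::2]) / np.sqrt(2)    # low-pass coefficients
    detail = (z[0::2] - z[1::2]) / np.sqrt(2)    # high-pass, noise-dominated
    detail = np.sign(detail) * np.maximum(np.abs(detail) - thresh, 0.0)
    out = np.empty_like(z)
    out[0::2] = (approx + detail) / np.sqrt(2)   # inverse Haar transform
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 256)
clean = 0.5 * np.cosh((x - 0.5) * 4) / np.cosh(2)   # catenary-like sag profile
noisy = clean + rng.normal(0, 0.02, x.size)
denoised = haar_denoise(noisy, thresh=0.03)
```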


2021 ◽  
Vol 13 (22) ◽  
pp. 4713
Author(s):  
Jean-Emmanuel Deschaud ◽  
David Duque ◽  
Jean Pierre Richa ◽  
Santiago Velasco-Forero ◽  
Beatriz Marcotegui ◽  
...  

Paris-CARLA-3D is a dataset of several dense colored point clouds of outdoor environments built by a mobile LiDAR and camera system. The data are composed of two sets: synthetic data from the open source CARLA simulator (700 million points) and real data acquired in the city of Paris (60 million points), hence the name Paris-CARLA-3D. One advantage of this dataset is that the same LiDAR and camera platform used to produce the real data was also simulated in the open source CARLA simulator. In addition, the real data were manually annotated with the semantic tags of CARLA, allowing the testing of transfer methods from the synthetic to the real data. The objective of this dataset is to provide a challenging benchmark for evaluating and improving methods on difficult vision tasks for the 3D mapping of outdoor environments: semantic segmentation, instance segmentation, and scene completion. For each task, we describe the evaluation protocol as well as the experiments carried out to establish a baseline.
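For the semantic segmentation task, the usual evaluation metric on such benchmarks is mean intersection-over-union. A generic sketch (the dataset's exact protocol, e.g. which classes are ignored, may differ):

```python
import numpy as np

def miou(pred, gt, num_classes):
    """Per-class intersection-over-union averaged over the classes that
    appear in prediction or ground truth; pred and gt are per-point
    integer label arrays of equal length."""
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (gt == c))
        union = np.sum((pred == c) | (gt == c))
        if union > 0:                    # skip classes absent from both
            ious.append(inter / union)
    return np.mean(ious)

gt   = np.array([0, 0, 1, 1, 2, 2])
pred = np.array([0, 0, 1, 2, 2, 2])
score = miou(pred, gt, num_classes=3)    # per-class IoUs: 1.0, 0.5, 2/3
```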


Author(s):  
A. Hanel ◽  
U. Stilla

Vehicle environment cameras observing traffic participants in the area around a car and interior cameras observing the car driver are important data sources for driver intention recognition algorithms. To combine information from both camera groups, a camera system calibration can be performed. Typically, there is no overlapping field of view between environment and interior cameras, and often no marked reference points are available in environments large enough to cover a car for the system calibration. In this contribution, a calibration method for a vehicle camera system with non-overlapping camera groups in an urban environment is described. A-priori images of an urban calibration environment taken with an external camera are processed with the structure-from-motion method to obtain an environment point cloud. Images of the vehicle interior, also taken with an external camera, are processed to obtain an interior point cloud. The two point clouds are tied to each other using images from both image sets that show the same real-world objects. The point clouds are transformed into a self-defined vehicle coordinate system describing the vehicle movement. On demand, videos can be recorded with the vehicle cameras during a calibration drive. Poses of vehicle environment cameras and interior cameras are estimated separately using ground control points from the respective point cloud. All poses of a vehicle camera estimated for different video frames are optimized in a bundle adjustment. In an experiment, a point cloud is created from images of an underground car park, as well as a point cloud of the interior of a Volkswagen test car. Videos from two environment cameras and one interior camera are recorded. Results show that the vehicle camera poses are estimated successfully, especially when the car is not moving. Position standard deviations in the centimeter range can be achieved for all vehicle cameras.
Relative distances between the vehicle cameras deviate between one and ten centimeters from tachymeter reference measurements.
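Expressing both point clouds in a self-defined vehicle coordinate system amounts to building an orthonormal frame and applying a rigid transform. A sketch assuming the frame is defined from three reference points (origin, a forward point, and a lateral point; the paper derives its frame from the vehicle movement):

```python
import numpy as np

def to_vehicle_frame(cloud, origin, fwd_pt, left_pt):
    """Express a point cloud in a vehicle coordinate system with
    X = forward, Y = left, Z = up, built from three reference points
    via Gram-Schmidt orthogonalization."""
    x = fwd_pt - origin
    x = x / np.linalg.norm(x)
    y = left_pt - origin
    y = y - x * (x @ y)                  # make Y orthogonal to X
    y = y / np.linalg.norm(y)
    z = np.cross(x, y)                   # right-handed up axis
    R = np.vstack([x, y, z])             # world -> vehicle rotation
    return (cloud - origin) @ R.T

cloud = np.array([[2.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
local = to_vehicle_frame(cloud,
                         origin=np.array([1.0, 0.0, 0.0]),
                         fwd_pt=np.array([2.0, 0.0, 0.0]),
                         left_pt=np.array([1.0, 1.0, 0.0]))
# First point lies 1 m directly ahead of the vehicle origin
```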


2020 ◽  
Author(s):  
Eleanor Bash ◽  
Christine Dow ◽  
Greg McDermid

Recent advances in camera sensors, data storage, and structure-from-motion (SfM) processing are opening new possibilities for monitoring glacier processes through time series imagery. With SfM processing, internal and external camera parameters can be estimated in a bundle adjustment, alleviating problems associated with camera stability in the field. Orienting points in real-world coordinates, however, still requires manual intervention in the form of ground control identification in imagery when dealing with two-camera systems. We introduce a new automated method of orienting point clouds from two-camera time-lapse setups to allow fast processing of large numbers of frames. We accomplish this by leveraging several algorithms developed for computer vision and applying them to an analysis of glacier surface elevation change. Two time-lapse systems were installed overlooking Nùłàdäy (Lowell Glacier), Yukon, Canada, on July 13, 2019. Each system consisted of a Nikon D5600 and a DigiSnap Pro, recording images at 2-hour intervals. On July 1, 2019, a manned aircraft flight collected imagery of the glacier using a Nikon D850, with a differential GPS collecting high-precision locations for each image. The July 1 imagery was processed using Agisoft PhotoScan Professional through the Python API to produce a target point cloud for orientation of the unregistered time-lapse imagery. Using PhotoScan Professional's 4D capability, a time series of images from each time-lapse camera was aligned in a one-step bundle adjustment to produce a series of dense point clouds, one per time step. Point clouds from time-lapse imagery were coregistered to the target point cloud using Fast Point Feature Histograms and a color-enhanced point cloud alignment based on Rusu et al. (2009) and Park et al. (2017).
The M3C2 algorithm (Lague et al., 2013) was used to difference point clouds in the time series and derive a time series of elevation change at Nùłàdäy with an uncertainty of 1.5 m². All steps in the workflow are executed through Python, allowing easy, automated execution of data processing. With streamlined processing it is possible to integrate more time steps into SfM analysis of glacier surface elevation change and to integrate the data into modelling efforts of glacier evolution.
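The differencing step can be illustrated with a simplified, purely vertical variant of M3C2 (the real algorithm of Lague et al. (2013) measures change along locally estimated normals at two scales and reports per-point confidence intervals):

```python
import numpy as np

def vertical_change(core_xy, epoch1, epoch2, radius):
    """Simplified surface-change estimate in the spirit of M3C2: for each
    core point, average the Z of each epoch's points within a horizontal
    search radius, then difference. Vertical change is assumed here."""
    def local_mean_z(cloud, xy):
        d = np.linalg.norm(cloud[:, :2] - xy, axis=1)
        return cloud[d <= radius, 2].mean()
    return np.array([local_mean_z(epoch2, xy) - local_mean_z(epoch1, xy)
                     for xy in core_xy])

rng = np.random.default_rng(4)
xy = rng.uniform(0, 10, (500, 2))
epoch1 = np.c_[xy, rng.normal(100.0, 0.05, 500)]   # glacier surface, epoch 1
epoch2 = np.c_[xy, rng.normal(99.0, 0.05, 500)]    # 1 m of surface lowering
core = np.array([[5.0, 5.0], [2.0, 8.0]])
dz = vertical_change(core, epoch1, epoch2, radius=1.5)
# Both core points report roughly -1 m of elevation change
```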


2021 ◽  
Vol 13 (15) ◽  
pp. 2868
Author(s):  
Yonglin Tian ◽  
Xiao Wang ◽  
Yu Shen ◽  
Zhongzheng Guo ◽  
Zilei Wang ◽  
...  

Three-dimensional information perception from point clouds is of vital importance for improving the ability of machines to understand the world, especially for autonomous driving and unmanned aerial vehicles. Data annotation for point clouds is one of the most challenging and costly tasks. In this paper, we propose a closed-loop, virtual-real interactive point cloud generation and model-upgrading framework called Parallel Point Clouds (PPCs). To the best of our knowledge, this is the first time that the training pipeline has been changed from an open-loop to a closed-loop mechanism. The feedback from the evaluation results is used to update the training dataset, benefiting from the flexibility of artificial scenes. Under this framework, a point-based LiDAR simulation model is proposed, which greatly simplifies the scanning operation. In addition, a group-based placing method is put forward to integrate hybrid point clouds by locating candidate positions for virtual objects in real scenes. Taking advantage of CAD models and mobile LiDAR devices, two hybrid point cloud datasets, i.e., ShapeKITTI and MobilePointClouds, are built for 3D detection tasks. With almost zero labor cost for annotating newly added objects, the models (PointPillars) trained with ShapeKITTI and MobilePointClouds achieved 78.6% and 60.0%, respectively, of the average precision on 3D detection of the model trained with real data.
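The hybrid-cloud idea, dropping a virtual CAD object's points into a real scan, can be sketched as follows (illustrative only; the paper's group-based placing additionally searches for valid candidate positions and accounts for scene geometry):

```python
import numpy as np

def place_virtual_object(scene, obj, position):
    """Insert a virtual object's point cloud into a real scan at a chosen
    XY position, producing a hybrid cloud. The local ground height is
    naively taken as the scene's minimum Z (a simplifying assumption)."""
    ground_z = scene[:, 2].min()
    placed = obj + np.array([position[0], position[1], ground_z])
    return np.vstack([scene, placed])

rng = np.random.default_rng(5)
scene = np.c_[rng.uniform(0, 20, (1000, 2)), np.zeros(1000)]  # flat ground
car = rng.uniform([-1, -2, 0], [1, 2, 1.5], (200, 3))         # CAD-like box
hybrid = place_virtual_object(scene, car, position=(10.0, 10.0))
```

Because the virtual object's points are generated rather than measured, their labels come for free, which is the source of the near-zero annotation cost reported above.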


Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 884
Author(s):  
Chia-Ming Tsai ◽  
Yi-Horng Lai ◽  
Yung-Da Sun ◽  
Yu-Jen Chung ◽  
Jau-Woei Perng

Numerous sensors can obtain images or point cloud data on land; underwater, however, the rapid attenuation of electromagnetic signals and the lack of light restrict sensing functions. This study extends two- and three-dimensional detection technologies to the underwater task of detecting abandoned tires. A three-dimensional acoustic sensor, the BV5000, is used to collect underwater point cloud data. Pre-processing steps are proposed to remove noise and the seabed from the raw data. The point clouds are then processed into two data types, a 2D image and a 3D point cloud, and deep learning methods of matching dimensionality are used to train the models. In the two-dimensional method, the point cloud is transformed into a bird's-eye-view image, and the Faster R-CNN and YOLOv3 network architectures are used to detect tires. In the three-dimensional method, the point cloud associated with a tire is cut out from the raw data and used as training data, and the PointNet and PointConv network architectures are used for tire classification. The results show that both approaches provide good accuracy.
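The two-dimensional branch hinges on projecting the point cloud into a bird's-eye-view image before detection. A minimal occupancy-grid sketch (the paper's BEV encoding, e.g. any height or intensity channels, may differ):

```python
import numpy as np

def bev_image(points, x_range, y_range, res):
    """Rasterize an (N, 3) point cloud into a bird's-eye-view occupancy
    image: each cell of size `res` that contains at least one point is
    marked 255; everything else stays 0."""
    w = int((x_range[1] - x_range[0]) / res)
    h = int((y_range[1] - y_range[0]) / res)
    img = np.zeros((h, w), dtype=np.uint8)
    ix = ((points[:, 0] - x_range[0]) / res).astype(int)
    iy = ((points[:, 1] - y_range[0]) / res).astype(int)
    keep = (ix >= 0) & (ix < w) & (iy >= 0) & (iy < h)
    img[iy[keep], ix[keep]] = 255        # mark occupied cells
    return img

# A ring of points, roughly the footprint of a tire on the seabed
t = np.linspace(0, 2 * np.pi, 200)
tire = np.c_[2 + 0.3 * np.cos(t), 3 + 0.3 * np.sin(t), np.zeros_like(t)]
img = bev_image(tire, x_range=(0, 5), y_range=(0, 5), res=0.05)
```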


Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 201
Author(s):  
Michael Bekele Maru ◽  
Donghwan Lee ◽  
Kassahun Demissie Tola ◽  
Seunghee Park

Modeling a structure in the virtual world using three-dimensional (3D) information enhances our understanding, and aids the visualization, of how a structure reacts to a disturbance. Generally, 3D point clouds are used for determining structural behavioral changes. Light detection and ranging (LiDAR) is one of the principal ways in which a 3D point cloud dataset can be generated; additionally, 3D cameras are commonly used to develop a point cloud containing many points on the external surface of an object. The main objective of this study was to compare the performance of two optical sensors, a depth camera (DC) and a terrestrial laser scanner (TLS), in estimating structural deflection. We also applied bilateral filtering, a technique commonly used in image processing, to the point cloud data to enhance its accuracy and increase the application prospects of these sensors in structural health monitoring. The results from these sensors were validated by comparison with the outputs of a linear variable differential transformer sensor mounted on the beam during an indoor experiment. The results showed that the datasets obtained from both sensors were acceptable for nominal deflections of 3 mm and above because the error range was less than ±10%. However, the results obtained from the TLS were better than those obtained from the DC.
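The bilateral filtering mentioned above smooths noise while preserving sharp discontinuities, which is what makes it attractive for deflection measurement. A 1D sketch on a synthetic deflection profile (all parameters are illustrative):

```python
import numpy as np

def bilateral_1d(z, sigma_s, sigma_r, half_window):
    """Bilateral filter on a 1D profile of depth values: each sample is
    replaced by a weighted mean of its neighbours, where weights combine
    spatial closeness (sigma_s) and value similarity (sigma_r), so steps
    much larger than sigma_r are preserved."""
    out = np.empty_like(z)
    idx = np.arange(z.size)
    for i in idx:
        lo, hi = max(0, i - half_window), min(z.size, i + half_window + 1)
        spatial = np.exp(-((idx[lo:hi] - i) ** 2) / (2 * sigma_s ** 2))
        range_w = np.exp(-((z[lo:hi] - z[i]) ** 2) / (2 * sigma_r ** 2))
        w = spatial * range_w
        out[i] = np.sum(w * z[lo:hi]) / np.sum(w)
    return out

rng = np.random.default_rng(6)
step = np.where(np.arange(200) < 100, 0.0, 3.0)    # 3 mm deflection step
noisy = step + rng.normal(0, 0.2, 200)
smooth = bilateral_1d(noisy, sigma_s=3.0, sigma_r=0.5, half_window=8)
```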

