On-Ground Vineyard Reconstruction Using a LiDAR-Based Automated System

Sensors ◽  
2020 ◽  
Vol 20 (4) ◽  
pp. 1102 ◽  
Author(s):  
Hugo Moreno ◽  
Constantino Valero ◽  
José María Bengochea-Guevara ◽  
Ángela Ribeiro ◽  
Miguel Garrido-Izard ◽  
...  

Crop 3D modeling allows site-specific management at different crop stages. In recent years, light detection and ranging (LiDAR) sensors have been widely used to gather information about plant architecture and to extract biophysical parameters for decision-making programs. In this study, vineyard crops were reconstructed using LiDAR technology, and its accuracy and performance were assessed for vineyard crop characterization using distance measurements, with the aim of obtaining a 3D reconstruction. A LiDAR sensor was installed on board a mobile platform equipped with an RTK-GNSS receiver for crop scanning. The LiDAR system consisted of a 2D time-of-flight sensor, a gimbal connecting the device to the structure, and an RTK-GPS to record the position of the sensor data. The LiDAR sensor was installed facing downwards on board an electric platform and scanned in planes perpendicular to the travel direction. The distance measurements between the LiDAR and the vines had a high spatial resolution, providing high-density 3D point clouds. The resulting 3D point cloud contained all the points where the laser beam impacted, and fusing the LiDAR impacts with their associated RTK-GPS positions produced the 3D structure. Although the point clouds were filtered to discard points outside the study area, the branch volume could not be calculated directly, since the cloud forms a 3D solid cluster that encloses a volume. To obtain the 3D object surface, and thereby calculate the volume enclosed by it, a suitable alpha shape was generated as an outline enveloping the outer points of the point cloud. The 3D scenes were obtained during the winter season, when the vines were defoliated and only branches were present. The models were used to extract information on height and branch volume. These models might be used for automatic pruning, or these parameters might be related to the expected yield at each location.
The 3D map was correlated with ground truth, which was manually determined as the pruning weight. The number of LiDAR scans influenced the relationship with the actual biomass measurements and had a significant effect on the treatments, whereas the influence of individual treatments was of low significance. A positive linear fit was obtained between actual dry biomass and LiDAR-derived volume. The results showed strong correlations between the LiDAR volume and actual biomass, with R² = 0.75, rising to R² = 0.85 when comparing LiDAR scans with pruning weight. These values show that this LiDAR technique is also valid for branch reconstruction, with clear advantages over other types of non-contact ranging sensors in terms of spatial resolution and sampling rate. Even narrow branches were properly detected, which demonstrates the accuracy of the system in difficult scenarios such as defoliated crops.
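The volume step described above can be sketched briefly. The paper wraps the branch points in an alpha shape; as a simplified, hedged stand-in, this example uses the convex hull from SciPy, which gives an upper bound on the alpha-shape volume (a true alpha shape shrink-wraps concavities that the hull ignores). The function name and the synthetic point cloud are illustrative, not the authors' code.

```python
import numpy as np
from scipy.spatial import ConvexHull

def branch_volume(points: np.ndarray) -> float:
    """Estimate the volume enclosed by a branch point cloud.

    Simplified stand-in for the paper's alpha-shape step: the convex
    hull volume upper-bounds the alpha-shape volume.
    """
    return ConvexHull(points).volume

# Synthetic "branch": surface and interior points of a 1 m unit cube.
rng = np.random.default_rng(0)
corners = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                   dtype=float)
cloud = np.vstack([corners, rng.random((200, 3))])
vol = branch_volume(cloud)  # ~1.0 m^3 for the unit cube
```

For concave shapes such as real branch clusters, a dedicated alpha-shape implementation (as used in the paper) would replace the hull call.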

2020 ◽  
Vol 9 (7) ◽  
pp. 450
Author(s):  
Zhen Ye ◽  
Yusheng Xu ◽  
Rong Huang ◽  
Xiaohua Tong ◽  
Xin Li ◽  
...  

The semantic labeling of urban areas is an essential but challenging task for a wide variety of applications such as mapping, navigation, and monitoring. The rapid advance of Light Detection and Ranging (LiDAR) systems provides this task with a possible solution using 3D point clouds, which are accessible, affordable, accurate, and applicable. Among all types of platforms, the airborne LiDAR platform can serve as an efficient and effective tool for large-scale 3D mapping of urban areas. Against this background, a large number of algorithms and methods have been developed to fully explore the potential of 3D point clouds. However, the creation of publicly accessible large-scale annotated datasets, which are critical for assessing the performance of the developed algorithms and methods, is still in its early stages. In this work, we present a large-scale aerial LiDAR point cloud dataset acquired in a highly dense and complex urban area for the evaluation of semantic labeling methods. This dataset covers an urban area with highly dense buildings of approximately 1 km² and includes more than three million points labeled with five classes of objects. Moreover, experiments are carried out with the results from several baseline methods, demonstrating the feasibility and capability of the dataset to serve as a benchmark for assessing semantic labeling methods.
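Since the dataset is intended as a benchmark, a standard way to score a labeling method against such annotations is per-class intersection-over-union (IoU). The sketch below is a generic implementation of that metric; the dataset's own evaluation protocol is not specified in the abstract, and all names and the toy labels are illustrative.

```python
import numpy as np

def per_class_iou(gt: np.ndarray, pred: np.ndarray, num_classes: int) -> np.ndarray:
    """Per-class intersection-over-union between ground-truth and
    predicted point labels (generic benchmark metric, not the
    dataset's official protocol)."""
    ious = []
    for c in range(num_classes):
        inter = np.sum((gt == c) & (pred == c))
        union = np.sum((gt == c) | (pred == c))
        ious.append(inter / union if union else float("nan"))
    return np.array(ious)

# Toy example with 3 classes over 5 points.
gt = np.array([0, 0, 1, 1, 2])
pred = np.array([0, 1, 1, 1, 2])
ious = per_class_iou(gt, pred, 3)  # [0.5, 0.666..., 1.0]
```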


2021 ◽  
pp. 002029402199280
Author(s):  
Yang Miao ◽  
Changan Li ◽  
Zhan Li ◽  
Yipeng Yang ◽  
Xinghu Yu

Achieving port automation of machinery at bulk terminals is a challenging problem due to the volatile operating environments and the complexity of bulk loading compared to the situation at container terminals. In order to facilitate port automation, we present a method of hull modeling (reconstruction of the hull's structure) and operation-target (cargo holds under loading) identification based on 3D point clouds collected by a Laser Measurement System mounted on the ship loader. In the hull modeling algorithm, we incrementally register pairs of point clouds and reconstruct the 3D structure of the bulk ship's hull blocks in detail by processing the loader's encoder data, FPFH feature matching, and the ICP algorithm. In the identification algorithm, we project real-time point clouds of the operation zone into spherical coordinates and transform the 3D point clouds into 2D images for fast and reliable identification of the operation target. Our method detects and completes the four edges of the operation target by processing the 2D images, and estimates both the posture and size of the operation target in the bulk terminal based on the completed edges. Experimental trials show that our algorithm achieves reconstruction of hull blocks and real-time identification of the operation target with high accuracy and reliability.
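The projection step described above can be sketched as follows: each 3D point is mapped by its azimuth and elevation to a pixel of a 2D range image, keeping the nearest return per pixel. The image resolution (64×360) is an illustrative assumption, not a value from the paper, and the function name is hypothetical.

```python
import numpy as np

def spherical_projection(points: np.ndarray, h: int = 64, w: int = 360) -> np.ndarray:
    """Project 3D points (N, 3) into a 2D range image indexed by
    elevation (rows) and azimuth (columns). Resolution is illustrative."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    azimuth = np.arctan2(y, x)                    # (-pi, pi]
    elevation = np.arcsin(np.clip(z / r, -1, 1))  # [-pi/2, pi/2]
    u = ((azimuth + np.pi) / (2 * np.pi) * (w - 1)).astype(int)
    v = ((elevation + np.pi / 2) / np.pi * (h - 1)).astype(int)
    img = np.full((h, w), np.inf)
    np.minimum.at(img, (v, u), r)  # keep the nearest return per pixel
    return img

# One point 1 m ahead on the x-axis lands near the image center row.
img = spherical_projection(np.array([[1.0, 0.0, 0.0]]))
```

Edge detection and completion on the resulting 2D image (the paper's next step) can then use standard image-processing operators instead of raw 3D geometry.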


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1228
Author(s):  
Ting On Chan ◽  
Linyuan Xia ◽  
Yimin Chen ◽  
Wei Lang ◽  
Tingting Chen ◽  
...  

Ancient pagodas are often popular tourist attractions in many oriental countries due to their unique historical backgrounds. They are usually polygonal structures composed of multiple floors separated by eaves. In this paper, we propose a new method to investigate both the rotational and reflectional symmetry of such polygonal pagodas by developing novel geometric models fitted to 3D point clouds obtained from photogrammetric reconstruction. The geometric model consists of multiple polygonal pyramid/prism models sharing a common central axis. The method was verified on four datasets collected by an unmanned aerial vehicle (UAV) and a hand-held digital camera. The results indicate that the models fit the pagodas' point clouds accurately. The symmetry was assessed by rotating and reflecting the pagodas' point clouds after completely leveling them using the estimated central axes. The results show RMSEs of 5.04 cm and 5.20 cm from perfect (theoretical) rotational and reflectional symmetry, respectively. We therefore conclude that the examined pagodas are highly symmetric, both rotationally and reflectionally. The concept presented in this paper not only works for polygonal pagodas but can also be readily adapted to other pagoda-like objects such as transmission towers.
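A minimal sketch of the rotational-symmetry check, under simplifying assumptions: the cloud is already leveled so that the estimated central axis coincides with the z-axis through the origin, and the deviation from perfect n-fold symmetry is measured as the nearest-neighbor RMSE after rotating the cloud by 2π/n. The function name and the toy square are illustrative; the paper's exact formulation may differ.

```python
import numpy as np
from scipy.spatial import cKDTree

def rotational_symmetry_rmse(points: np.ndarray, n_fold: int) -> float:
    """RMSE between a point cloud and its copy rotated by 2*pi/n about
    the vertical axis (assumed to be the leveled central axis)."""
    theta = 2 * np.pi / n_fold
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    rotated = points @ rot.T
    dists, _ = cKDTree(points).query(rotated)  # nearest-neighbor residuals
    return float(np.sqrt(np.mean(dists ** 2)))

# A perfect square cross-section (4-fold symmetric) gives RMSE ~ 0.
square = np.array([[1, 1, 0], [-1, 1, 0], [-1, -1, 0], [1, -1, 0]], dtype=float)
rmse = rotational_symmetry_rmse(square, 4)
```

Reflectional symmetry can be checked the same way by substituting a reflection matrix across the candidate mirror plane for the rotation.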


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2144
Author(s):  
Stefan Reitmann ◽  
Lorenzo Neumann ◽  
Bernhard Jung

Common Machine-Learning (ML) approaches for scene classification require a large amount of training data. However, for the classification of depth sensor data, in contrast to image data, relatively few databases are publicly available, and the manual generation of semantically labeled 3D point clouds is an even more time-consuming task. To simplify the training data generation process for a wide range of domains, we have developed the BLAINDER add-on package for the open-source 3D modeling software Blender, which enables a largely automated generation of semantically annotated point-cloud data in virtual 3D environments. In this paper, we focus on the classical depth-sensing techniques Light Detection and Ranging (LiDAR) and Sound Navigation and Ranging (Sonar). Within the BLAINDER add-on, different depth sensors can be loaded from presets, customized sensors can be implemented, and different environmental conditions (e.g., the influence of rain or dust) can be simulated. The semantically labeled data can be exported to various 2D and 3D formats and are thus optimized for different ML applications and visualizations. In addition, semantically labeled images can be exported using the rendering functionalities of Blender.


Geosciences ◽  
2021 ◽  
Vol 11 (2) ◽  
pp. 75
Author(s):  
Dario Carrea ◽  
Antonio Abellan ◽  
Marc-Henri Derron ◽  
Neal Gauvin ◽  
Michel Jaboyedoff

The use of 3D point clouds to improve the understanding of natural phenomena is currently applied in natural hazard investigations, including the quantification of rockfall activity. However, 3D point cloud treatment is typically accomplished using nondedicated (and not optimal) software. To fill this gap, we present an open-source, specific rockfall package in an object-oriented toolbox developed in the MATLAB® environment. The proposed package offers a complete and semiautomatic 3D solution that spans from extraction to identification and volume estimations of rockfall sources using state-of-the-art methods and newly implemented algorithms. To illustrate the capabilities of this package, we acquired a series of high-quality point clouds in a pilot study area referred to as the La Cornalle cliff (West Switzerland), obtained robust volume estimations at different volumetric scales, and derived rockfall magnitude–frequency distributions, which assisted in the assessment of rockfall activity and long-term erosion rates. An outcome of the case study shows the influence of the volume computation on the magnitude–frequency distribution and ensuing erosion process interpretation.
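The magnitude–frequency step mentioned above is commonly modeled as a power law, N(V ≥ v) ∝ v^(−b), fitted on log-log axes. The sketch below is a generic illustration of that fit in Python (the toolbox itself is MATLAB-based, and its estimation method may differ); the unweighted least-squares fit over the empirical complementary cumulative distribution is an illustrative choice.

```python
import numpy as np

def powerlaw_exponent(volumes: np.ndarray) -> float:
    """Estimate the exponent b of N(V >= v) ~ v^(-b) from a sample of
    rockfall volumes via a log-log least-squares fit (illustrative;
    not the toolbox's own estimator)."""
    v = np.sort(np.asarray(volumes, dtype=float))
    # Complementary cumulative count: number of events >= each volume.
    ccdf = np.arange(len(v), 0, -1)
    slope, _ = np.polyfit(np.log10(v), np.log10(ccdf), 1)
    return -slope

# Synthetic Pareto-distributed volumes with true exponent b = 1.5
# (inverse-transform sampling: V = U^(-1/b) for U ~ Uniform(0, 1)).
rng = np.random.default_rng(1)
b_true = 1.5
samples = (1.0 - rng.random(5000)) ** (-1.0 / b_true)
b_est = powerlaw_exponent(samples)
```

In practice, the noisy tail of the empirical distribution biases a plain log-log fit, which is one reason the choice of volume-computation method influences the resulting magnitude–frequency distribution, as the case study notes.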


Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 201
Author(s):  
Michael Bekele Maru ◽  
Donghwan Lee ◽  
Kassahun Demissie Tola ◽  
Seunghee Park

Modeling a structure in the virtual world using three-dimensional (3D) information enhances our understanding of how the structure reacts to any disturbance, while also aiding in its visualization. Generally, 3D point clouds are used for determining structural behavioral changes. Light detection and ranging (LiDAR) is one of the principal ways in which a 3D point cloud dataset can be generated. Additionally, 3D cameras are commonly used to develop a point cloud containing many points on the external surface of an object. The main objective of this study was to compare the performance of two optical sensors, namely a depth camera (DC) and a terrestrial laser scanner (TLS), in estimating structural deflection. We also applied bilateral filtering techniques, which are commonly used in image processing, to the point cloud data to enhance their accuracy and increase the application prospects of these sensors in structural health monitoring. The results from these sensors were validated by comparing them with the outputs from a linear variable differential transformer sensor, which was mounted on the beam during an indoor experiment. The results showed that the datasets obtained from both sensors were acceptable for nominal deflections of 3 mm and above, because the error range was less than ±10%. However, the results obtained from the TLS were better than those obtained from the DC.
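The bilateral filter used above weights each neighbor by both spatial distance and depth similarity, so noise is smoothed while sharp discontinuities (such as a beam edge) are preserved. The sketch below applies it to a 1D depth profile for clarity; the study filters 3D point clouds, and all parameter values here are illustrative assumptions.

```python
import numpy as np

def bilateral_filter_1d(depth: np.ndarray, sigma_s: float = 2.0,
                        sigma_r: float = 0.5, radius: int = 4) -> np.ndarray:
    """Bilateral filter on a 1D depth profile: each sample is replaced
    by a Gaussian-weighted average of neighbors that are both spatially
    close (sigma_s) and similar in depth (sigma_r), which preserves
    depth discontinuities. Parameter values are illustrative."""
    n = len(depth)
    out = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        idx = np.arange(lo, hi)
        spatial = np.exp(-((idx - i) ** 2) / (2 * sigma_s ** 2))
        similarity = np.exp(-((depth[idx] - depth[i]) ** 2) / (2 * sigma_r ** 2))
        w = spatial * similarity
        out[i] = np.sum(w * depth[idx]) / np.sum(w)
    return out

# Noisy step edge: the filter reduces noise without blurring the step.
rng = np.random.default_rng(2)
step = np.where(np.arange(100) < 50, 0.0, 5.0)
noisy = step + rng.normal(0.0, 0.05, 100)
filtered = bilateral_filter_1d(noisy)
```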


Aerospace ◽  
2018 ◽  
Vol 5 (3) ◽  
pp. 94 ◽  
Author(s):  
Hriday Bavle ◽  
Jose Sanchez-Lopez ◽  
Paloma Puente ◽  
Alejandro Rodriguez-Ramos ◽  
Carlos Sampedro ◽  
...  

This paper presents a fast and robust approach for estimating the flight altitude of multirotor Unmanned Aerial Vehicles (UAVs) using 3D point cloud sensors in cluttered, unstructured, and dynamic indoor environments. The objective is to present a flight altitude estimation algorithm, replacing conventional sensors such as laser altimeters, barometers, or accelerometers, which have several limitations when used individually. Our proposed algorithm includes two stages: in the first stage, a fast clustering of the measured 3D point cloud data is performed, along with the segmentation of the clustered data into horizontal planes. In the second stage, these segmented horizontal planes are mapped based on their vertical distance with respect to the point cloud sensor frame of reference, in order to provide a robust flight altitude estimate even in the presence of several static as well as dynamic ground obstacles. We validate our approach using the IROS 2011 Kinect dataset available in the literature, estimating the altitude of the RGB-D camera using the provided 3D point clouds. We further validate our approach using a point cloud sensor on board a UAV, by means of several autonomous real flights, closing its altitude control loop using the flight altitude estimated by our proposed method, in the presence of several different static as well as dynamic ground obstacles. In addition, the implementation of our approach has been integrated in our open-source software framework for aerial robotics called Aerostack.
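The two-stage idea above can be illustrated with a much-simplified sketch: a histogram over the vertical coordinate stands in for the paper's clustering-and-segmentation stages, and the most populated horizontal slab below the sensor is taken as the ground plane. The sensor-frame convention (ground returns at negative z), the bin size, and the synthetic scene are all assumptions for illustration.

```python
import numpy as np

def estimate_altitude(points: np.ndarray, bin_size: float = 0.1) -> float:
    """Estimate flight altitude as the vertical distance to the dominant
    horizontal plane below the sensor. Assumes a sensor frame in which
    ground returns have negative z; the histogram is a simplified
    stand-in for the paper's clustering/segmentation stages."""
    z = points[:, 2]
    below = z[z < 0]
    edges = np.arange(below.min(), below.max() + bin_size, bin_size)
    counts, _ = np.histogram(below, bins=edges)
    k = np.argmax(counts)  # most populated horizontal slab = ground plane
    return -0.5 * (edges[k] + edges[k + 1])

# Synthetic scene: ground plane 2 m below the sensor plus a box obstacle
# 1 m below it; the obstacle must not corrupt the altitude estimate.
rng = np.random.default_rng(3)
ground = np.column_stack([rng.uniform(-5, 5, 1000), rng.uniform(-5, 5, 1000),
                          np.full(1000, -2.0) + rng.normal(0.0, 0.01, 1000)])
obstacle = np.column_stack([rng.uniform(-1, 1, 100), rng.uniform(-1, 1, 100),
                            np.full(100, -1.0)])
altitude = estimate_altitude(np.vstack([ground, obstacle]))  # ~2.0 m
```

Tracking all detected planes by vertical distance, as the paper does, is what keeps the estimate robust when an obstacle (not the ground) momentarily dominates the field of view.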


Author(s):  
Uzair Nadeem ◽  
Mohammad A. A. K. Jalwana ◽  
Mohammed Bennamoun ◽  
Roberto Togneri ◽  
Ferdous Sohel

Author(s):  
T. Fiolka ◽  
F. Rouatbi ◽  
D. Bender

3D terrain models are an important instrument in areas like geology, agriculture, and reconnaissance. An automated UAS with a line-based LiDAR can create terrain models quickly and easily, even of large areas. However, the resulting point cloud may contain holes and therefore be incomplete. This might happen due to occlusions, deviations from the planned flight route caused by wind, or simply as a result of changes in the ground height, which would alter the swath of the LiDAR system. This paper proposes a method to detect holes in 3D point clouds generated during the flight and to adjust the course in order to close them. First, a grid-based search for holes in the horizontal ground plane is performed. Then, a check for vertical holes, mainly created by building walls, is done. Due to occlusions and steep LiDAR angles, closing the vertical gaps may be difficult or even impossible. Therefore, the current approach deals with holes in the ground plane and only marks the vertical holes in such a way that the operator can decide on further actions regarding them. The aim is to efficiently create point clouds which can be used for the generation of complete 3D terrain models.
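The grid-based ground-plane search can be sketched as follows: rasterize the survey area into cells, mark each cell containing at least one return as occupied, and report the empty cells as holes to revisit. Cell size, the bounds format, and the synthetic occluded patch are illustrative assumptions, not values from the paper.

```python
import numpy as np

def find_holes(points: np.ndarray, bounds: tuple, cell: float = 1.0) -> np.ndarray:
    """Return indices of empty cells in a horizontal occupancy grid over
    the survey area (sketch of a grid-based ground-plane hole search).
    bounds = (xmin, xmax, ymin, ymax) of the planned survey area."""
    xmin, xmax, ymin, ymax = bounds
    nx = int(np.ceil((xmax - xmin) / cell))
    ny = int(np.ceil((ymax - ymin) / cell))
    occupied = np.zeros((nx, ny), dtype=bool)
    ix = ((points[:, 0] - xmin) / cell).astype(int).clip(0, nx - 1)
    iy = ((points[:, 1] - ymin) / cell).astype(int).clip(0, ny - 1)
    occupied[ix, iy] = True
    return np.argwhere(~occupied)  # cells with no returns -> revisit

# Survey area with returns everywhere except a 2x2-cell occluded patch.
rng = np.random.default_rng(4)
pts = rng.uniform(0, 10, (5000, 3))
keep = ~((pts[:, 0] >= 4) & (pts[:, 0] < 6) & (pts[:, 1] >= 4) & (pts[:, 1] < 6))
holes = find_holes(pts[keep], (0, 10, 0, 10), cell=1.0)
```

The detected hole cells would then feed the course-adjustment step, e.g., as waypoints over the empty cells' centers.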

