Three-Dimensional Inundation Mapping Using UAV Image Segmentation and Digital Surface Model

2021 ◽  
Vol 10 (3) ◽  
pp. 144
Author(s):  
Asmamaw A Gebrehiwot ◽  
Leila Hashemi-Beni

Flood occurrence is increasing due to the expansion of urbanization and extreme weather such as hurricanes; hence, research on methods of inundation monitoring and mapping has increased to reduce the severe impacts of flood disasters. This research studies and compares two methods for inundation depth estimation using UAV images and topographic data. The methods consist of three main stages: (1) extracting flooded areas and creating 2D inundation polygons using deep learning; (2) reconstructing the 3D water surface using the polygons and topographic data; and (3) deriving a water depth map using the 3D reconstructed water surface and a pre-flood DEM. The two methods differ in how they reconstruct the 3D water surface (stage 2). The first method uses structure from motion (SfM) to create a point cloud of the area from overlapping UAV images, and the water polygons resulting from stage 1 are applied to classify the water points in the cloud. The second method reconstructs the water surface by intersecting the water polygons with a pre-flood DEM created from pre-flood LiDAR data. We evaluate the proposed methods for inundation depth mapping over the Town of Princeville during a flooding event caused by Hurricane Matthew. The methods are compared and validated using USGS gauge water level data acquired during the flood event. The RMSEs of the water depth for the SfM method and for the integrated method based on deep learning and the DEM were 0.34 m and 0.26 m, respectively.
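Stage 3 of the pipeline described above reduces to a grid subtraction. A minimal sketch with hypothetical arrays (the elevations, mask, and gauge values below are illustrative, not from the paper):

```python
import numpy as np

# Reconstructed water-surface elevation differenced against a pre-flood DEM,
# masked to the rasterized 2D inundation polygons (all values illustrative).
water_surface = np.array([[3.2, 3.2], [3.1, 3.1]])       # water surface (m)
pre_flood_dem = np.array([[2.9, 3.0], [2.8, 3.3]])       # pre-flood terrain (m)
flood_mask    = np.array([[True, True], [True, False]])  # inundation polygons

depth = np.where(flood_mask, water_surface - pre_flood_dem, np.nan)
depth = np.clip(depth, 0.0, None)  # negative differences mean no inundation

# Validation in the spirit of the paper's gauge comparison (illustrative values).
gauge = np.array([0.30, 0.20, 0.30])
rmse = float(np.sqrt(np.nanmean((depth[flood_mask] - gauge) ** 2)))
```

The same differencing applies regardless of whether the water surface came from the SfM point cloud or from the polygon–DEM intersection.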

Author(s):  
L. Madhuanand ◽  
F. Nex ◽  
M. Y. Yang

Abstract. Depth is an essential component for various scene understanding tasks and for reconstructing the 3D geometry of a scene. Estimating depth from stereo images requires multiple views of the same scene to be captured, which is often not possible when exploring new environments with a UAV. To overcome this, monocular depth estimation has become a topic of interest with the recent advancements in computer vision and deep learning techniques. This research has largely focused on indoor scenes or outdoor scenes captured at ground level. Single-image depth estimation from aerial images has been limited due to additional complexities arising from increased camera distance and wide area coverage with many occlusions. A new aerial image dataset is prepared specifically for this purpose, combining Unmanned Aerial Vehicle (UAV) images covering different regions, features and points of view. The single-image depth estimation is based on image reconstruction techniques that use stereo images to learn to estimate depth from single images. Among the various available models for ground-level single-image depth estimation, two models, (1) a Convolutional Neural Network (CNN) and (2) a Generative Adversarial Network (GAN), are used to learn depth from aerial UAV images. These models generate pixel-wise disparity images which can be converted into depth information. The generated disparity maps are evaluated for internal quality using various error metrics. The results show higher disparity ranges with smoother images generated by the CNN model, and sharper images with a smaller disparity range generated by the GAN model. The produced disparity images are converted to depth information and compared with point clouds obtained using Pix4D. The CNN model is found to perform better than the GAN and to produce depth similar to that of Pix4D. This comparison helps in streamlining the efforts to produce depth from a single aerial image.
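The disparity-to-depth conversion step the abstract mentions follows the standard stereo relation depth = f·B/d. A brief sketch, with illustrative focal length and baseline values (not taken from the paper):

```python
import numpy as np

# Assumed rectified-stereo intrinsics (placeholder values).
focal_px, baseline_m = 1200.0, 0.5

# Pixel-wise disparity as a network might predict it (illustrative).
disparity = np.array([[60.0, 30.0],
                      [15.0, 120.0]])

# Larger disparity -> nearer surface; guard against division by zero.
depth = focal_px * baseline_m / np.maximum(disparity, 1e-6)
```

This is why a smoother disparity map (CNN) and a sharper, lower-range one (GAN) translate into visibly different depth maps after conversion.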


2015 ◽  
Vol 764-765 ◽  
pp. 1375-1379 ◽  
Author(s):  
Cheng Tiao Hsieh

This paper aims at presenting a simple approach utilizing a Kinect-based scanner to create models available for 3D printing or other digital manufacturing machines. The output of a Kinect-based scanner is a depth map, and it usually needs complicated computational processing before it is ready for digital fabrication. The necessary processes include noise filtering, point cloud alignment and surface reconstruction. Each process may require several functions and algorithms to accomplish its specific tasks. For instance, Iterative Closest Point (ICP) is frequently used for 3D registration, and the bilateral filter is often used for noise point filtering. This paper attempts to develop a simple Kinect-based scanner and a modeling approach that avoids these complicated processes. The developed scanner consists of an ASUS Xtion Pro and a rotation table. A set of organized point clouds can be generated by the scanner, and these can be aligned precisely by a simple transformation matrix instead of ICP. The surface quality of raw point clouds captured by Kinect is usually rough; to address this drawback, this paper introduces a solution for obtaining a smooth surface model. In addition, these processes have been efficiently implemented using free open-source libraries: VTK, Point Cloud Library and OpenNI.
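The key simplification above is that, because the turntable angle between scans is known, alignment is a fixed rotation rather than an iterative registration. A small sketch of that idea (hypothetical points, not the paper's data):

```python
import numpy as np

def turntable_transform(angle_deg):
    """Rotation about the turntable's vertical (z) axis. Since the table
    angle is known, this matrix aligns scans without running ICP."""
    a = np.radians(angle_deg)
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

# The same physical point seen in two scans taken 90 degrees apart (illustrative).
scan_0  = np.array([[1.0,  0.0, 0.5]])
scan_90 = np.array([[0.0, -1.0, 0.5]])

# Rotating the second scan by the known table angle brings it into the
# first scan's frame exactly, with no iterative fitting.
aligned = scan_90 @ turntable_transform(90.0).T
```

ICP would converge to the same transform here, but only at the cost of a nearest-neighbor search per iteration; the known angle makes that unnecessary.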


Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1649
Author(s):  
Muhammad Hamid Chaudhry ◽  
Anuar Ahmad ◽  
Qudsia Gulzar ◽  
Muhammad Shahid Farid ◽  
Himan Shahabi ◽  
...  

Unmanned Aerial Vehicle (UAV) is one of the latest technologies for high-spatial-resolution 3D modeling of the Earth. The objectives of this study are to assess low-cost UAV data using image radiometric transformation techniques and to investigate their effects on the global and local accuracy of the Digital Surface Model (DSM). This research uses UAV Light Detection and Ranging (LIDAR) data from an 80-meter flying height and UAV drone data from 300- and 500-meter flying heights. RAW UAV images acquired from the 500-meter flying height are radiometrically transformed in Matrix Laboratory (MATLAB). UAV images from the 300-meter flying height are processed for the generation of a 3D point cloud and DSM in Pix4D Mapper. The UAV LIDAR data are used for the acquisition of Ground Control Points (GCP) and for the accuracy assessment of the UAV image data products. The accuracy of the enhanced DSM and of the DSM generated from the 300-meter flight height was analyzed in terms of point cloud number, density and distribution. The Root Mean Square Error (RMSE) of Z improved from ±2.15 m to 0.11 m. For local accuracy assessment of the DSM, four different land cover types were statistically compared with the UAV LIDAR data, showing that the enhancement technique is compatible with UAV LIDAR accuracy.
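The vertical accuracy check described above is an RMSE of DSM heights against LiDAR-derived reference heights at the GCPs. A sketch with made-up heights (the paper's actual values are only the summary RMSEs):

```python
import numpy as np

# Illustrative Z values at four GCP locations (not the study's data).
z_lidar        = np.array([101.2, 98.7, 105.4, 99.9])   # UAV LIDAR reference
z_dsm_raw      = np.array([103.1, 96.4, 107.9, 102.2])  # before enhancement
z_dsm_enhanced = np.array([101.3, 98.6, 105.5, 99.8])   # after enhancement

def rmse(a, b):
    """Root Mean Square Error between two height samples."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

rmse_raw = rmse(z_dsm_raw, z_lidar)
rmse_enh = rmse(z_dsm_enhanced, z_lidar)
```

The same statistic, computed per land cover class, gives the local accuracy comparison the abstract refers to.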


Author(s):  
Nolan Lunscher ◽  
John Zelek

Fit is extremely important in footwear, as fit largely determines performance and comfort. Current footwear fit estimation mainly uses only shoe size, which is extremely limited in characterizing the shape of a foot or the shape of a shoe. 3D scanning presents a solution to this, where a foot shape can be captured and virtually fit with shoe models. Traditional 3D scanning techniques have their own complications, however, stemming from their need to collect views covering all aspects of an object. In this work we explore a deep learning technique to complete a foot scan point cloud from information contained in a single depth map view. We examine the benefits of implementing residual blocks in architectures for this application, and find that they can improve accuracy while reducing model size and training time.
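The residual blocks examined above add a learned residual F(x) to the block's input, so identity mappings are easy to represent and gradients flow through the skip connection. A toy numpy illustration (a dense layer stands in for the paper's convolutions; weights are random placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.01, size=(8, 8))  # placeholder weights

def residual_block(x, W):
    """y = x + F(x): the skip connection adds the input back, so the block
    only has to learn a correction on top of the identity mapping."""
    fx = np.maximum(0.0, x @ W)  # F(x): linear layer + ReLU stand-in
    return x + fx

x = rng.normal(size=(1, 8))
y = residual_block(x, W)
```

With zero weights the block reduces exactly to the identity, which is the property that makes deeper completion networks easier to train.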


2020 ◽  
Vol 12 (20) ◽  
pp. 3305
Author(s):  
Mohamed Kerkech ◽  
Adel Hafiane ◽  
Raphael Canals

Vine pathologies generate several economic and environmental problems, causing serious difficulties for viticultural activity. Early detection of vine disease can significantly improve the control of vine diseases and avoid the spread of viruses or fungi. Currently, remote sensing and artificial intelligence technologies are emerging in the field of precision agriculture. They offer interesting potential for crop disease management. However, despite the advances in these technologies, particularly deep learning, many problems still present considerable challenges, such as the semantic segmentation of images for disease mapping. In this paper, we present a new deep learning architecture called Vine Disease Detection Network (VddNet). It is based on three parallel auto-encoders integrating different information (i.e., visible, infrared and depth). The decoder then reconstructs the features and assigns a class to each output pixel. An orthophoto registration method is also proposed to align the three types of images and enable processing by VddNet. The proposed architecture is assessed by comparing it with the best-known architectures: SegNet, U-Net, DeepLabv3+ and PSPNet. The deep learning architectures were trained on multispectral data from an unmanned aerial vehicle (UAV) and depth map information extracted from 3D processing. The results show that VddNet achieves higher scores than the baseline methods. Moreover, this study demonstrates that the proposed method has many advantages compared to methods that directly use the UAV images.
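The three-branch idea can be caricatured in a few lines: each modality is encoded separately, the latent features are concatenated, and a shared head assigns a class to every pixel. This is a schematic toy, not VddNet itself; shapes, channel counts, and class names are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
H = W = 4
visible  = rng.random((H, W, 3))  # RGB orthophoto patch (illustrative)
infrared = rng.random((H, W, 1))  # registered infrared band
depthmap = rng.random((H, W, 1))  # depth from 3D processing

def encode(img, out_ch, rng):
    """Per-branch encoder stand-in: a 1x1 projection + ReLU."""
    k = rng.normal(scale=0.1, size=(img.shape[-1], out_ch))
    return np.maximum(0.0, img @ k)

# Parallel branches, fused by channel concatenation.
feats = np.concatenate([encode(visible, 8, rng),
                        encode(infrared, 8, rng),
                        encode(depthmap, 8, rng)], axis=-1)

n_classes = 4  # e.g. ground / healthy vine / diseased vine / shadow (assumed)
head = rng.normal(scale=0.1, size=(feats.shape[-1], n_classes))
class_map = np.argmax(feats @ head, axis=-1)  # per-pixel class decision
```

The fusion-by-concatenation step is where the registered visible, infrared and depth channels meet, which is why the orthophoto registration method matters.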


Sensors ◽  
2021 ◽  
Vol 21 (1) ◽  
pp. 307
Author(s):  
Minseok Kim ◽  
Sung Ho Choi ◽  
Kyeong-Beom Park ◽  
Jae Yeol Lee

Typical AR methods have generic problems such as visual mismatches, incorrect occlusions, and limited augmentation, caused by the inability to estimate depth from AR images and the need to attach AR markers to physical objects, which prevents industrial workers from conducting manufacturing tasks effectively. This paper proposes a hybrid approach to industrial AR that complements existing AR methods using deep learning-based facility segmentation and depth prediction, without AR markers or a depth camera. First, the outlines of physical objects are extracted by applying a deep learning-based instance segmentation method to the RGB image acquired from the AR camera. Simultaneously, a depth prediction method is applied to the AR image to estimate the depth map as a 3D point cloud for the detected object. Based on the segmented 3D point cloud data, 3D spatial relationships among the physical objects are calculated, which helps solve the visual mismatch and occlusion problems properly. In addition, the approach can deal with a dynamically operating or moving facility, such as a robot, which conventional AR cannot. For these reasons, the proposed approach can be used as a hybrid or complementary function to existing AR methods, activated whenever the industrial worker requires the handling of visual mismatches or occlusions. Quantitative and qualitative analyses verify the advantage of the proposed approach compared with existing AR methods. Case studies also show that the proposed method can be applied not only to manufacturing but also to other fields. These studies confirm the scalability, effectiveness, and originality of the proposed approach.
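The occlusion part of the approach comes down to a per-pixel depth comparison: a virtual object is drawn only where its depth is nearer than the predicted depth of the physical scene. A minimal sketch with invented depth values:

```python
import numpy as np

# Depth predicted from the AR camera image (m) and the depth buffer of a
# rendered virtual object (m) -- both illustrative.
scene_depth   = np.array([[2.0, 2.0],
                          [1.0, 1.0]])
virtual_depth = np.array([[1.5, 1.5],
                          [1.5, 1.5]])

# The augmentation is visible only where it sits in front of the facility;
# elsewhere the physical object correctly occludes it.
visible_mask = virtual_depth < scene_depth
```

Because the scene depth is re-predicted each frame, the same test keeps working when the facility (e.g. a robot arm) moves, which marker-based AR cannot handle.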


2020 ◽  
Vol 1 (2) ◽  
Author(s):  
Bui NGOC QUY ◽  
Le DINH HIEN ◽  
Nguyen QUOC LONG ◽  
Tong SI SON ◽  
Duong ANH QUAN ◽  
...  

Image data from drones/Unmanned Aerial Vehicles (UAVs) have been studied and used extensively for establishing maps. The processing of UAV data provides three main products: Digital Surface Models (DSM), point clouds and orthophotos, of which the point cloud is a valuable data source for building 3D models and topographic surfaces. However, processing the point cloud separately to derive secondary products has not received much attention from researchers. This study determines parameters to develop a method for classifying point cloud data constructed from UAV images. Consequently, a 3D surface of the ground is built by applying the developed algorithm to the point cloud data of an open-pit mine. Temporary or non-ground objects such as trees, houses and vehicles are automatically subtracted from the point cloud by the algorithm. On this basis, it is possible to calculate and analyze the reserves and the exploited volume to evaluate the efficiency of each mine during operation with the support of a UAV-integrated camera.
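One common ground-filtering idea consistent with the abstract (though not necessarily the authors' exact algorithm) is to bin points into a horizontal grid and keep only points near the lowest elevation in each cell; higher points (trees, houses, vehicles) become non-ground. A rough sketch with invented points and parameters:

```python
import numpy as np

# Illustrative (x, y, z) points; cell size and height threshold are assumed.
points = np.array([[0.2, 0.3, 10.0],    # ground
                   [0.4, 0.1, 10.1],    # ground
                   [0.3, 0.2, 14.0],    # tree canopy above the same cell
                   [5.1, 5.2, 12.0]])   # ground in another cell
cell, thresh = 1.0, 0.5

# Bin points into grid cells and flag those within `thresh` of the cell minimum.
keys = np.floor(points[:, :2] / cell).astype(int)
ground = np.zeros(len(points), dtype=bool)
for key in {tuple(k) for k in keys}:
    idx = np.all(keys == key, axis=1)
    zmin = points[idx, 2].min()
    ground[idx] = points[idx, 2] <= zmin + thresh
```

The surviving ground points can then be triangulated into the 3D terrain surface used for reserve and volume calculations.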


2020 ◽  
Vol 12 (22) ◽  
pp. 3698
Author(s):  
Elżbieta Pastucha ◽  
Edyta Puniach ◽  
Agnieszka Ścisłowicz ◽  
Paweł Ćwiąkała ◽  
Witold Niewiem ◽  
...  

Regular power line inspections are essential to ensure the reliability of the electricity supply. The inspection of overhead power transmission lines includes corridor clearance monitoring and fault identification. The power line corridor is a three-dimensional space around the power cables defined by a set distance. Any obstacles breaching this space should be detected, as they potentially threaten the safety of the infrastructure. Corridor clearance monitoring is usually performed either by a labor-intensive total station (TS) survey, terrestrial laser scanning (TLS), or expensive airborne laser scanning (ALS) from a plane or a helicopter. This paper proposes a method that uses unmanned aerial vehicle (UAV) images to monitor corridor clearance. To maintain adequate accuracy of the relative position of the wires with regard to surrounding obstacles, the same data were used both to reconstruct a point cloud representation of a digital surface model (DSM) and the 3D power line. The proposed algorithm detects power lines in a series of images using decorrelation stretch for initial image processing, a modified Prewitt filter for edge enhancement, random sample consensus (RANSAC) with additional parameters for line fitting, and epipolar geometry for 3D reconstruction. DSM points intruding into the corridor are then detected by calculating the spatial distance between the reconstructed power line and the DSM point cloud representation. Problematic objects are localized by segmenting points into voxels and subsequent clusterization. The processing results were compared to the results of two verification methods, TS and TLS. The comparison shows that the proposed method can be used to survey power lines with an accuracy consistent with that of classical measurements.
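The clearance check itself reduces to point-to-line distances: each DSM point's distance to the reconstructed cable (modelled here locally as an infinite line through a point with a direction vector, which ignores sag and segment ends) is compared to the corridor radius. A sketch with invented geometry:

```python
import numpy as np

p = np.array([0.0, 0.0, 10.0])   # a point on the reconstructed cable (illustrative)
d = np.array([1.0, 0.0, 0.0])    # cable direction (unit vector)
corridor = 5.0                   # clearance distance (m, assumed)

dsm = np.array([[3.0, 0.0, 7.0],    # 3 m below the cable -> intrusion
                [2.0, 8.0, 10.0],   # 8 m to the side -> clear
                [1.0, 3.0, 6.0]])   # exactly 5 m away -> on the boundary

# Perpendicular distance from each point to the line: |(q - p) x d| / |d|.
dist = np.linalg.norm(np.cross(dsm - p, d), axis=1) / np.linalg.norm(d)
intrusion = dist < corridor
```

Flagged points would then be voxelized and clustered, as in the paper, so that each intruding object (e.g. a tree) is reported once rather than point by point.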

