UAV-Based Structural Damage Mapping: A Review

2019 ◽  
Vol 9 (1) ◽  
pp. 14 ◽  
Author(s):  
Norman Kerle ◽  
Francesco Nex ◽  
Markus Gerke ◽  
Diogo Duarte ◽  
Anand Vetrivel

Structural disaster damage detection and characterization is one of the oldest remote sensing challenges, and the utility of virtually every type of active and passive sensor deployed on various air- and spaceborne platforms has been assessed. The proliferation and growing sophistication of unmanned aerial vehicles (UAVs) in recent years has opened up many new opportunities for damage mapping, due to the high spatial resolution, the resulting stereo images and derivatives, and the flexibility of the platform. This study provides a comprehensive review of how UAV-based damage mapping has evolved from providing simple descriptive overviews of a disaster scene, to more sophisticated texture and segmentation-based approaches, and finally to studies using advanced deep learning approaches, as well as multi-temporal and multi-perspective imagery to provide comprehensive damage descriptions. The paper further reviews studies on the utility of the developed mapping strategies and image processing pipelines for first responders, focusing especially on outcomes of two recent European research projects, RECONASS (Reconstruction and Recovery Planning: Rapid and Continuously Updated Construction Damage, and Related Needs Assessment) and INACHUS (Technological and Methodological Solutions for Integrated Wide Area Situation Awareness and Survivor Localization to Support Search and Rescue Teams). Finally, recent and emerging developments are reviewed, such as recent improvements in machine learning, increasing mapping autonomy, damage mapping in interior, GPS-denied environments, the utility of UAVs for infrastructure mapping and maintenance, as well as the emergence of UAVs with robotic abilities.

Author(s):  
John A. Quinn ◽  
Marguerite M. Nyhan ◽  
Celia Navarro ◽  
Davide Coluccia ◽  
Lars Bromley ◽  
...  

The coordination of humanitarian relief, e.g. in a natural disaster or a conflict situation, is often complicated by a scarcity of data to inform planning. Remote sensing imagery, from satellites or drones, can give important insights into conditions on the ground, including in areas which are difficult to access. Applications include situation awareness after natural disasters, structural damage assessment in conflict, monitoring human rights violations or population estimation in settlements. We review machine learning approaches for automating these problems, and discuss their potential and limitations. We also provide a case study of experiments using deep learning methods to count the numbers of structures in multiple refugee settlements in Africa and the Middle East. We find that while high levels of accuracy are possible, there is considerable variation in the characteristics of imagery collected from different sensors and regions. In this, as in the other applications discussed in the paper, critical inferences must be made from a relatively small amount of pixel data. We, therefore, consider that using machine learning systems as an augmentation of human analysts is a reasonable strategy to transition from current fully manual operational pipelines to ones which are both more efficient and have the necessary levels of quality control. This article is part of a discussion meeting issue ‘The growing ubiquity of algorithms in society: implications, impacts and innovations’.


Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4447
Author(s):  
Jisun Shin ◽  
Young-Heon Jo ◽  
Joo-Hyung Ryu ◽  
Boo-Keun Khim ◽  
Soo Mee Kim

Red tides caused by Margalefidinium polykrikoides occur continuously along the southern coast of Korea, where there are many aquaculture cages; prompt monitoring of bloom water is therefore required to prevent considerable damage. Satellite-based ocean-color sensors are widely used for detecting red tide blooms, but their low spatial resolution restricts coastal observations. In contrast, terrestrial sensors with a high spatial resolution are good candidate sensors, despite lacking the spectral resolution and bands tailored to red tide detection. In this study, we developed a U-Net deep learning model for detecting M. polykrikoides blooms along the southern coast of Korea from PlanetScope imagery with a high spatial resolution of 3 m. The U-Net model was trained with four different datasets constructed from randomly or non-randomly chosen patches consisting of different ratios of red tide and non-red tide pixels. Qualitative and quantitative assessments of the conventional red tide index (RTI) and the four U-Net models suggest that the U-Net model trained with a dataset of non-randomly chosen patches, including non-red tide patches, outperformed RTI in terms of sensitivity, precision, and F-measure, with increases of 19.84%, 44.84%, and 28.52%, respectively. The M. polykrikoides map derived from U-Net provides the most reasonable red tide patterns in all water areas. Combining high spatial resolution imagery with deep learning approaches is thus a good solution for monitoring red tides over coastal regions.
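The non-random patch selection described in the abstract can be sketched as follows. This is a generic illustration, not the authors' code: the patch size, the minimum red-tide pixel ratio, and the 1:1 balancing of non-red-tide patches are assumptions made here for the example.

```python
import numpy as np

def select_patches(mask, patch=16, min_ratio=0.05):
    """Tile a binary red-tide mask into non-overlapping patches and
    keep a non-random mix: every patch whose fraction of red-tide
    pixels reaches `min_ratio`, plus an equal number of non-red-tide
    patches so the training set stays balanced. `patch` and
    `min_ratio` are illustrative values, not the paper's settings."""
    h, w = mask.shape
    tide, clear = [], []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            tile = mask[i:i + patch, j:j + patch]
            # fraction of red-tide pixels in this tile
            (tide if tile.mean() >= min_ratio else clear).append((i, j))
    return tide, clear[:len(tide)]
```

Selecting patches this way, rather than sampling uniformly at random, controls the class balance the U-Net sees during training, which is the distinction the abstract draws between its four datasets.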


2021 ◽  
Vol 13 (13) ◽  
pp. 2564
Author(s):  
Mauro Martini ◽  
Vittorio Mazzia ◽  
Aleem Khaliq ◽  
Marcello Chiaberge

The increasing availability of large-scale labeled remote sensing data has prompted researchers to develop increasingly precise and accurate data-driven models for land cover and crop classification (LC&CC). Moreover, with the introduction of self-attention and introspection mechanisms, deep learning approaches have shown promising results in processing long temporal sequences in the multi-spectral domain with a contained computational cost. Nevertheless, most practical applications cannot rely on labeled data, and field surveys are a time-consuming solution that strictly limits the number of samples that can be collected. Moreover, atmospheric conditions and the characteristics of specific geographical regions constitute a relevant domain gap that prevents the direct application of a model trained on the available dataset to the area of interest. In this paper, we investigate adversarial training of deep neural networks to bridge the domain discrepancy between distinct geographical zones. In particular, we perform a thorough analysis of domain adaptation applied to challenging multi-spectral, multi-temporal data, highlighting the advantages of adapting state-of-the-art self-attention-based models for LC&CC to target zones where labeled data are not available. Extensive experimentation demonstrated significant performance and generalization gains when applying domain-adversarial training to source and target regions with marked dissimilarities between the distributions of extracted features.
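The core trick in domain-adversarial training is a gradient reversal layer between the feature extractor and a domain classifier: features pass through unchanged in the forward pass, but the gradient is negated on the way back, so the extractor learns features the domain classifier cannot distinguish. A minimal sketch of that layer, independent of any deep learning framework and not the paper's actual implementation:

```python
import numpy as np

class GradientReversal:
    """Identity in the forward pass; scales and negates gradients in
    the backward pass, so the feature extractor upstream is pushed to
    *fool* the domain classifier downstream. `lam` (often annealed
    during training) controls the strength of the adaptation signal."""
    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x  # features are passed through unchanged

    def backward(self, grad_output):
        # negate and scale the domain-classifier gradient
        return -self.lam * np.asarray(grad_output)
```

In a full pipeline this layer sits only on the domain-classification branch; the label-classification branch (trained on source-domain labels) receives ordinary gradients.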


Author(s):  
N. Kerle ◽  
F. Nex ◽  
D. Duarte ◽  
A. Vetrivel

Abstract. Structural disaster damage detection and characterisation is one of the oldest remote sensing challenges, and the utility of virtually every type of active and passive sensor deployed on various air- and spaceborne platforms has been assessed. The proliferation and growing sophistication of UAVs in recent years has opened up many new opportunities for damage mapping, due to the high spatial resolution, the resulting stereo images and derivatives, and the flexibility of the platform. We have addressed the problem in the context of two European research projects, RECONASS and INACHUS. In this paper we synthesize and evaluate the progress of 6 years of research focused on advanced image analysis that was driven by progress in computer vision, photogrammetry and machine learning, but also by constraints imposed by the needs of first responders and other civil protection end users. The projects focused on damage to individual buildings caused by seismic activity as well as explosions, and our work centred on the processing of 3D point cloud information acquired from stereo imagery. Initially focusing on the development of both supervised and unsupervised damage detection methods built on advanced texture features and basic classifiers such as Support Vector Machine and Random Forest, the work moved on to the use of deep learning. In particular, the coupling of image-derived features and 3D point cloud information in a Convolutional Neural Network (CNN) proved successful in detecting even subtle damage features. In addition to the detection of standard rubble and debris, CNN-based methods were developed to detect typical façade damage indicators, such as cracks and spalling, including with a focus on multi-temporal and multi-scale feature fusion. We further developed a processing pipeline and mobile app to facilitate near-real-time damage mapping. The solutions were tested in a number of pilot experiments and evaluated by a variety of stakeholders.
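One kind of 3D cue that point clouds contribute alongside image features is local geometry: intact roof and façade patches are near-planar, while rubble is not. A generic eigenvalue-based planarity measure, shown only to illustrate the type of point-cloud feature the text refers to (the measure and its use here are not taken from the projects' actual pipeline):

```python
import numpy as np

def planarity(points):
    """Planarity of a local point neighbourhood, computed from the
    eigenvalues of its 3x3 covariance matrix (sorted ascending as
    l3 <= l2 <= l1). A flat patch gives a value near 1; scattered
    rubble gives a much lower value. This is a standard geometric
    descriptor, used here as an illustrative example only."""
    pts = np.asarray(points, dtype=float)
    cov = np.cov(pts.T)                      # 3x3 covariance of x, y, z
    l3, l2, l1 = np.sort(np.linalg.eigvalsh(cov))
    return (l2 - l3) / l1
```

Per-patch descriptors like this can be concatenated with image-derived features before the CNN's classification layers, which is one simple way to couple the two modalities.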


2018 ◽  
Vol 156 (1) ◽  
pp. 24-36 ◽  
Author(s):  
Y. Palchowdhuri ◽  
R. Valcarce-Diñeiro ◽  
P. King ◽  
M. Sanabria-Soto

Abstract. Remote sensing (RS) offers an efficient and reliable means to map features on Earth. Crop type mapping using RS at various temporal and spatial resolutions plays an important role in applications ranging from environmental to economic. The main objective of the current study was to evaluate the significance of optical data in a multi-temporal crop type classification based on very high spatial resolution and high spatial resolution imagery. With this aim, three images from WorldView-3 and Sentinel-2 were acquired over Coalville (UK) between April and July 2016. Three vegetation indices (VIs): the normalized difference vegetation index, the green normalized difference vegetation index and the soil adjusted vegetation index, were generated using red, green and near-infrared spectral bands; a supervised classification was then performed using ground reference data collected from field surveys together with the random forest (RF) and decision tree (DT) classification algorithms. Accuracy assessment was undertaken by comparing the classified output with the reference data. An overall accuracy of 91% and a κ coefficient of 0.90 were estimated using the combination of the RF and DT classification algorithms. Therefore, it can be concluded that integrating very high- and high-resolution imagery with different VIs can be implemented effectively to produce large-scale crop maps even with a limited temporal dataset.
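The three indices named in the abstract have standard band-ratio formulas, sketched below from the red, green and near-infrared bands. The SAVI soil-brightness factor L = 0.5 is the commonly used default, assumed here rather than taken from the paper.

```python
import numpy as np

def vegetation_indices(nir, red, green, L=0.5):
    """NDVI, GNDVI and SAVI from reflectance bands.
    NDVI  = (NIR - R) / (NIR + R)
    GNDVI = (NIR - G) / (NIR + G)
    SAVI  = (1 + L) * (NIR - R) / (NIR + R + L)
    L = 0.5 is the usual soil-adjustment default (an assumption here)."""
    nir, red, green = (np.asarray(b, dtype=float) for b in (nir, red, green))
    ndvi = (nir - red) / (nir + red)
    gndvi = (nir - green) / (nir + green)
    savi = (1 + L) * (nir - red) / (nir + red + L)
    return ndvi, gndvi, savi
```

Each index can be computed per pixel over whole band arrays (the functions broadcast), and the resulting VI layers stacked with the spectral bands as input features to the RF and DT classifiers.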


2008 ◽  
Vol 112 (6) ◽  
pp. 2729-2740 ◽  
Author(s):  
Michael A. Wulder ◽  
Joanne C. White ◽  
Nicholas C. Coops ◽  
Christopher R. Butson

2018 ◽  
Vol 10 (2) ◽  
pp. 287 ◽  
Author(s):  
Pietro Milillo ◽  
Giorgia Giardina ◽  
Matthew DeJong ◽  
Daniele Perissin ◽  
Giovanni Milillo
