Building Damage Assessment Based on the Fusion of Multiple Texture Features Using a Single Post-Earthquake PolSAR Image

2019 ◽  
Vol 11 (8) ◽  
pp. 897 ◽  
Author(s):  
Wei Zhai ◽  
Chunlin Huang ◽  
Wansheng Pei

After a destructive earthquake, most casualties are caused by building collapse. Our work focuses on using a single post-event PolSAR (fully polarimetric synthetic aperture radar) image to extract building damage information for effective emergency decision-making. As an active sensor, PolSAR is not affected by sunlight conditions, and its data contain rich backscatter information. Undamaged buildings whose orientation is not parallel to the SAR flight path and collapsed buildings share a similar dominant scattering mechanism, i.e., volume scattering, so they are easily confused. However, the two kinds of buildings have different textures. For a more accurate classification of damaged and undamaged buildings, the OPCE (optimization of polarimetric contrast enhancement) algorithm is employed to enhance the contrast ratio of the textures of the two kinds of buildings, and a precision-weighted multifeature fusion (PWMF) method is proposed to merge the multiple texture features. The experimental results show that the accuracy of the proposed method is improved by 8.34% compared to the traditional method. In general, the proposed PWMF method can effectively merge multiple features, and the overestimation of the building collapse rate can be reduced using the method proposed in this study.
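The abstract does not give the fusion rule itself; as a rough illustration of what precision-weighted fusion of feature-wise class scores could look like (the score and precision values below are invented, not taken from the paper), consider:

```python
# Illustrative sketch of precision-weighted multi-feature fusion:
# each texture feature k yields a per-class score s_k(c) for a pixel,
# and a per-class precision p_k(c) estimated on training data; the
# fused score is the precision-weighted sum, and the pixel is assigned
# the class with the highest fused score.

def fuse_scores(scores, precisions):
    """scores[k][c] and precisions[k][c] -> fused score per class c."""
    classes = scores[0].keys()
    return {c: sum(scores[k][c] * precisions[k][c]
                   for k in range(len(scores)))
            for c in classes}

# Hypothetical example: two texture features voting on two classes.
scores = [{"collapsed": 0.6, "standing": 0.4},
          {"collapsed": 0.3, "standing": 0.7}]
precisions = [{"collapsed": 0.9, "standing": 0.5},
              {"collapsed": 0.6, "standing": 0.8}]
fused = fuse_scores(scores, precisions)
label = max(fused, key=fused.get)  # class with the highest fused score
```

The weighting means a feature that is reliable for one class (high precision) dominates that class's fused score, while an unreliable feature contributes little; the actual weighting used in the paper may differ.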

2021 ◽  
Vol 13 (6) ◽  
pp. 1146
Author(s):  
Yuliang Nie ◽  
Qiming Zeng ◽  
Haizhen Zhang ◽  
Qing Wang

Synthetic aperture radar (SAR) is an effective tool for detecting building damage. At present, more and more studies detect building damage using a single post-event fully polarimetric SAR (PolSAR) image, because it permits faster and more convenient damage detection. However, the presence of non-buildings and obliquely oriented buildings in disaster areas makes it challenging to obtain accurate detection results using only post-event PolSAR data. To solve these problems, a new method is proposed in this work to detect completely collapsed buildings using a single post-event fully polarimetric SAR image. The proposed method makes two improvements to building damage detection. First, it provides a more effective solution for removing non-building areas from post-event PolSAR images. By selecting and combining three competitive polarization features, the proposed solution can remove most non-building areas effectively, including mountain vegetation and farmland, which are easily confused with collapsed buildings. Second, it significantly improves the classification of collapsed and standing buildings. A new polarization feature was created specifically for the classification of obliquely oriented and collapsed buildings by developing the optimization of polarimetric contrast enhancement (OPCE) matching algorithm. Using this feature combined with texture features, the proposed method effectively distinguishes collapsed from obliquely oriented buildings, while also identifying the affected collapsed buildings in error-prone areas.
Experiments were implemented on three fully polarimetric PolSAR datasets: Radarsat-2 data from the 2010 Yushu earthquake in China (resolution: 12 m), ALOS PALSAR data from the 2011 Tohoku tsunami in Japan (resolution: 23.14 m), and ALOS-2 data from the 2016 Kumamoto earthquake in Japan (resolution: 5.1 m). Through these experiments, the proposed method was shown to achieve more than 90% accuracy for built-up-area extraction in post-event PolSAR data. The detection accuracies of building damage were 82.3%, 97.4%, and 78.5% at the Yushu, Ishinomaki, and Mashiki town study sites, respectively.
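The core of OPCE, independent of the authors' developed matching algorithm, is a search for the transmit and receive polarization states that maximize the received-power ratio between two target classes. A brute-force grid-search sketch of that idea, assuming the two classes are described by 4×4 Kennaugh matrices (here built from toy values, not real PolSAR data), might look like:

```python
import math

def stokes(psi, chi):
    """Unit Stokes vector of a fully polarized wave with orientation
    angle psi and ellipticity angle chi."""
    return [1.0,
            math.cos(2 * psi) * math.cos(2 * chi),
            math.sin(2 * psi) * math.cos(2 * chi),
            math.sin(2 * chi)]

def received_power(g_r, K, g_t):
    """P = g_r^T K g_t for a 4x4 Kennaugh matrix K."""
    return sum(g_r[i] * sum(K[i][j] * g_t[j] for j in range(4))
               for i in range(4))

def opce_grid(K_a, K_b, n=12):
    """Grid-search transmit/receive polarization states maximizing the
    contrast ratio P_A / P_B between target classes A and B."""
    psis = [math.pi * i / n for i in range(n)]                     # [0, pi)
    chis = [-math.pi / 4 + math.pi / 2 * j / n for j in range(n + 1)]
    best_ratio, best_states = -1.0, None
    for pt in psis:
        for ct in chis:
            g_t = stokes(pt, ct)
            for pr in psis:
                for cr in chis:
                    g_r = stokes(pr, cr)
                    p_b = received_power(g_r, K_b, g_t)
                    if p_b <= 1e-9:          # avoid division by ~zero
                        continue
                    ratio = received_power(g_r, K_a, g_t) / p_b
                    if ratio > best_ratio:
                        best_ratio, best_states = ratio, (g_t, g_r)
    return best_ratio, best_states
```

In practice OPCE is solved with iterative or Lagrange-multiplier schemes rather than a grid, and the paper's contribution is a matching variant built on top of this optimization; the sketch only shows the contrast criterion being maximized.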


2020 ◽  
Vol 2020 ◽  
pp. 1-15
Author(s):  
Shaodan Li ◽  
Hong Tang

A field survey is a labour-intensive way to objectively evaluate the grade of building damage caused by earthquakes. In this paper, we present a decision-tree-based approach to classify the type of building damage using multiple-source remote sensing data acquired both before and after an earthquake. Specifically, building boundaries are delineated from pre-earthquake multiple-source satellite images using an unsupervised learning method. Then, building damage is classified into four types from post-earthquake UAV images using a decision-tree method: basically intact buildings, slightly damaged buildings, partially collapsed buildings, and completely collapsed buildings. Furthermore, slightly damaged buildings are identified by detecting roof holes using joint color and height features. Two experimental areas from the Wenchuan and Ya'an earthquakes are used to verify the proposed method.
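The paper's actual tree is built from image-derived evidence; purely as an illustration of how a small decision tree can map per-building features to the four damage types, with invented feature names and thresholds (not the authors' rules):

```python
def classify_damage(roof_ratio, height_ratio, has_roof_holes):
    """Toy decision tree over hypothetical per-building features.

    roof_ratio:     intact-roof area / building footprint area (0..1),
                    e.g. from post-event UAV imagery
    height_ratio:   post-event height / pre-event height (0..1)
    has_roof_holes: whether hole detection (color + height) fired
    """
    if roof_ratio > 0.9:
        # roof essentially complete: intact unless holes were detected
        return "slightly damaged" if has_roof_holes else "basically intact"
    if height_ratio > 0.5:
        # roof partly gone but structure mostly standing
        return "partially collapsed"
    return "completely collapsed"

# Hypothetical buildings run through the toy tree:
print(classify_damage(0.95, 1.0, False))  # basically intact
print(classify_damage(0.95, 1.0, True))   # slightly damaged
print(classify_damage(0.50, 0.7, False))  # partially collapsed
print(classify_damage(0.20, 0.2, False))  # completely collapsed
```

A real tree would be induced from labelled samples and would use the paper's joint color/height hole evidence at the "slightly damaged" split; the sketch only shows the four-way decision structure.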


2019 ◽  
Vol 11 (10) ◽  
pp. 1202 ◽  
Author(s):  
Min Ji ◽  
Lanfa Liu ◽  
Runlin Du ◽  
Manfred F. Buchroithner

The accurate and quick derivation of the distribution of damaged buildings is essential for emergency response. With the success of deep learning, there is increasing interest in applying it to earthquake-induced building damage mapping, yet its performance has not been compared with that of conventional methods for detecting building damage after an earthquake. In the present study, the performance of grey-level co-occurrence matrix (GLCM) texture features and convolutional neural network (CNN) features was comparatively evaluated with a random forest classifier. Pre- and post-event very high-resolution (VHR) remote sensing imagery was used to identify collapsed buildings after the 2010 Haiti earthquake. Overall accuracy (OA), allocation disagreement (AD), quantity disagreement (QD), Kappa, user accuracy (UA), and producer accuracy (PA) were used as evaluation metrics. The results showed that CNN features with the random forest method performed best, achieving an OA of 87.6% and a total disagreement of 12.4%. Compared to texture features with random forest, CNNs can extract deep features for identifying collapsed buildings, increasing Kappa from 61.7% to 69.5% and reducing the total disagreement from 16.6% to 14.1%. The accuracy for identifying collapsed buildings was further improved by combining CNN features with random forest compared with the CNN approach alone: OA increased from 85.9% to 87.6%, and the total disagreement was reduced from 14.1% to 12.4%. The results indicate that learnt CNN features can outperform texture features for identifying collapsed buildings in VHR remotely sensed space imagery.
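The grey-level co-occurrence matrix texture features compared above are straightforward to compute; a minimal sketch for a single pixel offset, with two of the standard statistics (contrast and homogeneity) derived from the matrix:

```python
def glcm(image, dx=1, dy=0, levels=8):
    """Grey-level co-occurrence matrix for one offset (dx, dy), plus
    the contrast and homogeneity texture statistics.

    image: 2D list of integer grey levels in [0, levels).
    """
    h, w = len(image), len(image[0])
    counts = [[0] * levels for _ in range(levels)]
    total = 0
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                counts[image[y][x]][image[ny][nx]] += 1
                total += 1
    # Normalize counts to co-occurrence probabilities P[i][j].
    P = [[c / total for c in row] for row in counts]
    contrast = sum(P[i][j] * (i - j) ** 2
                   for i in range(levels) for j in range(levels))
    homogeneity = sum(P[i][j] / (1 + abs(i - j))
                      for i in range(levels) for j in range(levels))
    return P, contrast, homogeneity

# Tiny 2-level example: horizontal pairs (0,0), (0,1), (0,1), (1,1).
P, contrast, homogeneity = glcm([[0, 0, 1], [0, 1, 1]], levels=2)
# contrast = 0.5, homogeneity = 0.75 for this image
```

In practice several offsets and additional Haralick statistics (energy, entropy, correlation) are computed over a sliding window and stacked into the texture feature vector fed to the classifier.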


Author(s):  
Yashpal Jitarwal ◽  
Tabrej Ahamad Khan ◽  
Pawan Mangal

In earlier times, fruits were sorted manually, which was a very time-consuming and laborious task. Humans sorted fruits on the basis of shape, size, and color. Because manual sorting takes so long, automatic fruit classification was developed to reduce sorting time and increase accuracy. To improve on human inspection and reduce the time required for fruit sorting, an advanced technique has been developed that extracts information about fruits from their images; it is known as the image processing technique.


2021 ◽  
Author(s):  
Guang-Jun Jiang ◽  
Hong-Xia Chen ◽  
Hong-Hua Sun ◽  
Mohammad Yazdi ◽  
Arman Nedjati ◽  
...  

2015 ◽  
Vol 2015 ◽  
pp. 1-14 ◽  
Author(s):  
Rajesh Kumar ◽  
Rajeev Srivastava ◽  
Subodh Srivastava

A framework for the automated detection and classification of cancer from microscopic biopsy images using clinically significant and biologically interpretable features is proposed and examined. The stages involved in the proposed methodology include enhancement of the microscopic images, segmentation of background cells, feature extraction, and finally classification. An appropriate and efficient method is employed in each design step of the proposed framework after a comparative analysis of the commonly used methods in each category. To highlight the details of the tissue and its structures, the contrast-limited adaptive histogram equalization approach is used. For the segmentation of background cells, the k-means segmentation algorithm is used because it performs better than other commonly used segmentation methods. In the feature extraction phase, various biologically interpretable and clinically significant shape- and morphology-based features are extracted from the segmented images. These include gray level texture features, color-based features, color gray level texture features, Law's Texture Energy based features, Tamura's features, and wavelet features. Finally, the K-nearest neighbor method is used to classify images into normal and cancerous categories because it performs better than other commonly used methods for this application. The performance of the proposed framework is evaluated using well-known parameters on 1000 randomly selected microscopic biopsy images of the four fundamental tissues (connective, epithelial, muscular, and nervous).
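The K-nearest neighbor classifier used in the final stage can be sketched in a few lines; the two-dimensional feature vectors and labels below are invented placeholders for the much richer texture/shape features the framework actually extracts:

```python
import math

def knn_predict(train, query, k=3):
    """k-nearest-neighbour majority vote.

    train: list of (feature_vector, label) pairs
    query: feature vector to classify
    Returns the majority label among the k closest training samples
    (Euclidean distance).
    """
    nearest = sorted(train, key=lambda s: math.dist(s[0], query))[:k]
    votes = {}
    for _, label in nearest:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

# Hypothetical 2-D feature space standing in for biopsy features:
train = [((0, 0), "normal"), ((0, 1), "normal"), ((1, 0), "normal"),
         ((5, 5), "cancerous"), ((5, 6), "cancerous")]
pred = knn_predict(train, (0.5, 0.5))  # falls in the "normal" cluster
```

Because KNN relies on raw distances, the real pipeline would normalize each feature dimension before classification; that step is omitted here for brevity.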


1996 ◽  
Vol 17 (6) ◽  
pp. 1267-1273 ◽  
Author(s):  
S. L. DURDEN ◽  
Z. S. HADDAD ◽  
L. A. MORRISSEY ◽  
G. P. LIVINGSTON
