A Comparative Study of Texture and Convolutional Neural Network Features for Detecting Collapsed Buildings After Earthquakes Using Pre- and Post-Event Satellite Imagery

2019 ◽  
Vol 11 (10) ◽  
pp. 1202 ◽  
Author(s):  
Min Ji ◽  
Lanfa Liu ◽  
Runlin Du ◽  
Manfred F. Buchroithner

The accurate and rapid derivation of the distribution of damaged buildings is essential for emergency response. With the success of deep learning, there is increasing interest in applying it to earthquake-induced building damage mapping, yet its performance has seldom been compared with that of conventional methods for detecting building damage after an earthquake. In the present study, the performance of grey-level co-occurrence matrix texture features and convolutional neural network (CNN) features was comparatively evaluated with the random forest classifier. Pre- and post-event very high-resolution (VHR) remote sensing imagery was used to identify collapsed buildings after the 2010 Haiti earthquake. Overall accuracy (OA), allocation disagreement (AD), quantity disagreement (QD), Kappa, user accuracy (UA), and producer accuracy (PA) were used as evaluation metrics. The results showed that CNN features with the random forest method had the best performance, achieving an OA of 87.6% and a total disagreement of 12.4%. Compared with texture features with random forest, CNNs extracted deeper features for identifying collapsed buildings, increasing Kappa from 61.7% to 69.5% and reducing the total disagreement from 16.6% to 14.1%. Combining CNN features with random forest also improved accuracy compared with the CNN approach alone: OA increased from 85.9% to 87.6%, and the total disagreement fell from 14.1% to 12.4%. The results indicate that learnt CNN features can outperform texture features for identifying collapsed buildings in VHR remotely sensed space imagery.
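The texture baseline above can be illustrated with a minimal sketch: a grey-level co-occurrence matrix computed from scratch, three standard Haralick-style statistics (contrast, energy, homogeneity), and a random forest classifier. The smooth/rubble patch data below is synthetic and hypothetical, not the Haiti imagery, and this is not the authors' exact pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def glcm_features(patch, levels=8, offset=(0, 1)):
    """Grey-level co-occurrence matrix (GLCM) contrast, energy and
    homogeneity for one image patch, computed from scratch."""
    q = np.floor(patch.astype(float) / 256 * levels).astype(int)  # quantize grey levels
    dy, dx = offset
    glcm = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[q[y, x], q[y + dy, x + dx]] += 1  # count co-occurring level pairs
    glcm /= glcm.sum()
    i, j = np.indices((levels, levels))
    contrast = np.sum(glcm * (i - j) ** 2)
    energy = np.sum(glcm ** 2)
    homogeneity = np.sum(glcm / (1 + np.abs(i - j)))
    return [contrast, energy, homogeneity]

# Hypothetical data: smooth patches (intact roofs) vs noisy patches (rubble).
rng = np.random.default_rng(0)
smooth = [np.full((16, 16), 120) + rng.integers(0, 10, (16, 16)) for _ in range(40)]
rough = [rng.integers(0, 256, (16, 16)) for _ in range(40)]
X = np.array([glcm_features(p) for p in smooth + rough])
y = np.array([0] * 40 + [1] * 40)  # 0 = intact, 1 = collapsed

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.score(X, y))
```

Rubble-like patches show high GLCM contrast and low energy; intact roofs show the opposite, which is why these statistics separate the two classes.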

2020 ◽  
Vol 12 (12) ◽  
pp. 1924 ◽  
Author(s):  
Hiroyuki Miura ◽  
Tomohiro Aridome ◽  
Masashi Matsuoka

A methodology for the automated identification of building damage from post-disaster aerial images was developed based on a convolutional neural network (CNN) and building damage inventories. The aerial images and building damage data obtained in the 2016 Kumamoto and 1995 Kobe, Japan, earthquakes were analyzed. Since the roofs of many moderately damaged houses are covered with blue tarps immediately after disasters, the proposed method identifies not only collapsed and non-collapsed buildings but also buildings covered with blue tarps. The CNN architecture developed in this study correctly classifies building damage with an accuracy of approximately 95% on both earthquake datasets. We applied the developed CNN model to aerial images of Chiba, Japan, damaged by the typhoon of September 2019; more than 90% of the building damage was correctly classified by the CNN model.
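The feature extraction at the core of such a CNN is a stack of convolution, ReLU, and pooling operations. A minimal numpy sketch of one such stage on a toy image (illustrative only; not the authors' architecture or data):

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D cross-correlation of a single-channel image (the 'conv' in CNNs)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(img[y:y + kh, x:x + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling, halving each spatial dimension."""
    h, w = fmap.shape
    return fmap[:h - h % size, :w - w % size].reshape(
        h // size, size, w // size, size).max(axis=(1, 3))

# Toy 8x8 image with a vertical edge; a Sobel-like kernel responds to it.
img = np.zeros((8, 8)); img[:, 4:] = 1.0
edge_kernel = np.array([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
fmap = np.maximum(conv2d(img, edge_kernel), 0)  # convolution + ReLU
pooled = max_pool(fmap)                         # 2x2 max pooling
print(pooled.shape)  # (3, 3)
```

In a trained network the kernels are learned rather than hand-set, and many such stages are stacked before a classifier head assigns labels such as collapsed / blue-tarp / non-collapsed.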


2019 ◽  
Vol 4 (1) ◽  
pp. 1
Author(s):  
Candra Dewi ◽  
Suci Sundari ◽  
Mardji Mardji

Patchouli (Pogostemon cablin Benth.) yields more patchouli alcohol (PA) and oil when grown in soil containing 75% organic matter. One way to detect the organic matter content is to use soil images. The problem with soil images is that their colors are very similar, a gradation from dark brown to black, so color features alone are not sufficient input for the recognition process. For this purpose, texture features were added in this study alongside color features. The color features are extracted using color moments and the texture features using the Gray Level Co-occurrence Matrix (GLCM). The best combination of these features was then selected as input to the identification process using a Backpropagation Neural Network (BPNN). The system classifies the quantity of soil organic matter into five classes: very low, low, medium, high, and very high. The highest accuracy obtained was 73% with an MSE of 0.5122, using five GLCM features (angular second moment, contrast, correlation, inverse difference moment, and entropy). This result was obtained with the following BPNN parameters: a learning rate of 0.5, a maximum of 1000 iterations, 210 training samples, and 12 test samples.
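The color-moment features mentioned above are typically the first three statistical moments (mean, standard deviation, skewness) of each color channel. A small sketch on a hypothetical soil patch (synthetic values, not the study's data):

```python
import numpy as np

def color_moments(img):
    """First three color moments (mean, standard deviation, skewness)
    per channel of an H x W x 3 image."""
    feats = []
    for c in range(img.shape[2]):
        ch = img[..., c].astype(float).ravel()
        mean = ch.mean()
        std = ch.std()
        skew = np.cbrt(((ch - mean) ** 3).mean())  # cube root keeps the sign
        feats.extend([mean, std, skew])
    return np.array(feats)

# Hypothetical soil patch: dark brown tones with slight variation.
rng = np.random.default_rng(1)
soil = rng.integers(40, 90, (32, 32, 3))
fm = color_moments(soil)
print(fm.shape)  # 3 moments x 3 channels = (9,)
```

The resulting 9-dimensional vector would be concatenated with the GLCM texture features before being fed to the BPNN.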


IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 85421-85430 ◽  
Author(s):  
Yongyi Sun ◽  
Hongquan Zhang ◽  
Tingting Zhao ◽  
Zhihui Zou ◽  
Bin Shen ◽  
...  

Author(s):  
Endang Anggiratih ◽  
Agfianto Eko Putra

Ship identification in satellite imagery can be used for fisheries management, monitoring of smuggling activities, ship traffic services, and naval warfare. However, segmenting ships from the background in high-resolution satellite imagery is difficult, so reliable features are required to adequately distinguish large vessels, small vessels, and non-ships. The convolutional neural network (CNN) has the advantage of extracting features automatically, producing reliable features that facilitate ship identification. This study combines the CNN ZFNet architecture with the Random Forest method. Training was conducted to determine which ZFNet layer produces the best features, characterized by high accuracy when combined with the Random Forest method. The combined method was tested with two parameters, batch size and number of trees. The test results identify large vessels with an accuracy of 87.5% and small vessels with an accuracy below 50%.
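The number-of-trees sweep described above can be sketched as follows. The 256-dimensional vectors stand in for features taken from a CNN layer and the three classes are hypothetical; this is not the ZFNet pipeline itself.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical stand-in: 256-D feature vectors as a CNN layer might produce,
# for three classes (large vessel, small vessel, non-ship).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(60, 256)) for c in range(3)])
y = np.repeat([0, 1, 2], 60)

# Sweep the number-of-trees parameter, as the study does.
for n_trees in (10, 50, 100):
    clf = RandomForestClassifier(n_estimators=n_trees, random_state=0)
    score = cross_val_score(clf, X, y, cv=3).mean()
    print(n_trees, round(score, 3))
```

In practice the same loop would be nested with the batch-size sweep used when extracting the CNN features, and evaluated per class to expose weaknesses such as the poor small-vessel accuracy reported above.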


2021 ◽  
pp. 004051752110460
Author(s):  
Yaolin Zhu ◽  
Jiameng Duan ◽  
Yunhong Li ◽  
Tong Wu

Cashmere and wool play an important role in the wool and textile industries, and suitable features are the key to identifying them. To obtain effective features and improve the accuracy of cashmere and wool classification, this article uses multi-feature selection with the random forest method. First, the gray-gradient co-occurrence matrix model is used for texture feature extraction to construct the original high-dimensional feature set. Second, since the original feature set contains many invalid and redundant features, a feature selection algorithm combining correlation analysis and principal component analysis with weight coefficient evaluation is used to obtain important features, independent features, and principal-component-sensitive features that complement each other. Finally, an optimized random forest model analyzes the results. The results show that the combination of multi-feature selection subsets and random forest makes the classification of cashmere and wool more reliable, with accuracy fluctuating around 90%.
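The redundancy-removal step can be sketched as a two-stage filter: drop one of each highly correlated feature pair, then apply PCA to the survivors. The feature matrix below is synthetic with deliberately duplicated columns; the 0.95 correlation threshold and 95% variance target are illustrative choices, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical texture feature matrix (200 samples x 30 features),
# with ten near-duplicate columns to mimic redundancy.
rng = np.random.default_rng(0)
base = rng.normal(size=(200, 20))
X = np.hstack([base, base[:, :10] + rng.normal(scale=0.01, size=(200, 10))])

# Step 1: drop one of each highly correlated feature pair (|r| > 0.95).
corr = np.corrcoef(X, rowvar=False)
drop = set()
for i in range(corr.shape[0]):
    for j in range(i + 1, corr.shape[1]):
        if abs(corr[i, j]) > 0.95:
            drop.add(j)
X_indep = X[:, [k for k in range(X.shape[1]) if k not in drop]]

# Step 2: PCA keeps components explaining 95% of the remaining variance.
X_pca = PCA(n_components=0.95).fit_transform(X_indep)
print(X.shape[1], X_indep.shape[1], X_pca.shape[1])
```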


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Guofang Qin ◽  
Guoliang Qin

As one of the most widely used methods in deep learning, the convolutional neural network has powerful feature extraction and nonlinear data fitting capabilities. However, it still has disadvantages such as complex network models, long training times, excessive consumption of computing resources, slow convergence, overfitting, and classification accuracy that needs improvement. Therefore, this article proposes a dense convolutional neural network classification algorithm based on texture features for images in virtual reality videos. First, the texture features of the image are introduced as prior information to reflect the spatial relationships between pixels and the distinctive characteristics of different types of ground features. Second, the grey-level co-occurrence matrix (GLCM) is used to extract the grey-level correlation features of the image in space. Then, a Gauss Markov random field (GMRF) is used to establish the statistical correlation between neighbouring pixels, and the extracted GLCM-GMRF texture features are combined with the image intensity vector. Finally, based on DenseNet, an improved shallow dense convolutional neural network (L-DenseNet) is proposed, which compresses network parameters and improves the feature extraction ability of the network. The experimental results show that, compared with current classification methods, this method effectively suppresses the influence of coherent speckle noise and obtains better classification results.
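One common way to obtain GMRF texture parameters, sketched below, is to regress each interior pixel on its horizontal and vertical neighbour pairs by least squares; the fitted interaction coefficients then serve as texture features. This is an illustrative first-order estimator on synthetic data, not necessarily the paper's exact GMRF formulation.

```python
import numpy as np

def gmrf_params(img):
    """Least-squares estimate of first-order GMRF interaction parameters:
    each interior pixel is regressed on its horizontal and vertical
    neighbour sums. Returns (theta_horizontal, theta_vertical)."""
    z = img.astype(float) - img.mean()          # zero-mean field
    center = z[1:-1, 1:-1].ravel()
    h_pair = (z[1:-1, :-2] + z[1:-1, 2:]).ravel()  # left + right neighbours
    v_pair = (z[:-2, 1:-1] + z[2:, 1:-1]).ravel()  # up + down neighbours
    A = np.column_stack([h_pair, v_pair])
    theta, *_ = np.linalg.lstsq(A, center, rcond=None)
    return theta

# A texture with strong horizontal correlation should give theta_h > theta_v.
rng = np.random.default_rng(0)
rows = rng.normal(size=(64, 1))
img = np.repeat(rows, 64, axis=1) + rng.normal(scale=0.1, size=(64, 64))
th, tv = gmrf_params(img)
print(round(th, 2), round(tv, 2))
```

The two coefficients summarize directional correlation; concatenating them with GLCM statistics and the intensity vector gives a combined feature of the kind described above.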


Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2852
Author(s):  
Parvathaneni Naga Srinivasu ◽  
Jalluri Gnana SivaSai ◽  
Muhammad Fazal Ijaz ◽  
Akash Kumar Bhoi ◽  
Wonjoon Kim ◽  
...  

Deep learning models are efficient at learning features that help capture complex patterns precisely. This study proposes a computerized process for classifying skin disease using deep-learning-based MobileNet V2 and Long Short-Term Memory (LSTM). The MobileNet V2 model proved efficient, with better accuracy, and can run on lightweight computational devices. The proposed model efficiently maintains stateful information for precise predictions. A grey-level co-occurrence matrix is used to assess the progression of diseased growth. The performance was compared against other state-of-the-art models, including Fine-Tuned Neural Networks (FTNN), a Convolutional Neural Network (CNN), the Very Deep Convolutional Networks for Large-Scale Image Recognition developed by the Visual Geometry Group (VGG), and a CNN architecture extended with minor changes. On the HAM10000 dataset, the proposed method outperformed the other methods with more than 85% accuracy. Its robustness in recognizing the affected region much faster, with almost 2× fewer computations than the conventional MobileNet model, results in minimal computational effort. Furthermore, a mobile application was designed for instant and appropriate action; it helps patients and dermatologists identify the type of disease from an image of the affected region at the initial stage of the skin disease. These findings suggest that the proposed system can help general practitioners diagnose skin conditions efficiently and effectively, thereby reducing further complications and morbidity.
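The MobileNet family's efficiency comes largely from replacing standard convolutions with depthwise-separable ones. A back-of-envelope multiply count for one layer shows why (the layer dimensions below are illustrative, not the paper's measurements):

```python
# Multiply counts for one convolutional layer on an H x W feature map,
# C_in input channels, C_out output channels, K x K kernel.
def standard_conv_mults(H, W, C_in, C_out, K):
    return H * W * C_in * C_out * K * K

def depthwise_separable_mults(H, W, C_in, C_out, K):
    depthwise = H * W * C_in * K * K   # one K x K filter per input channel
    pointwise = H * W * C_in * C_out   # 1 x 1 projection across channels
    return depthwise + pointwise

H, W, C_in, C_out, K = 56, 56, 64, 128, 3
std = standard_conv_mults(H, W, C_in, C_out, K)
sep = depthwise_separable_mults(H, W, C_in, C_out, K)
print(round(std / sep, 1))  # roughly K*K-fold fewer multiplies as C_out grows
```

For this layer the separable form needs about 8× fewer multiplies, which is the kind of saving behind the "almost 2× fewer computations" reported for the overall model.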


2021 ◽  
Vol 13 (6) ◽  
pp. 1146
Author(s):  
Yuliang Nie ◽  
Qiming Zeng ◽  
Haizhen Zhang ◽  
Qing Wang

Synthetic aperture radar (SAR) is an effective tool for detecting building damage. At present, more and more studies detect building damage using a single post-event fully polarimetric SAR (PolSAR) image, because it permits faster and more convenient damage detection. However, the presence of non-buildings and obliquely-oriented buildings in disaster areas makes it challenging to obtain accurate detection results from post-event PolSAR data alone. To solve these problems, a new method is proposed in this work to detect completely collapsed buildings using a single post-event fully polarimetric SAR image. The proposed method makes two improvements to building damage detection. First, it provides a more effective solution for removing non-building areas from post-event PolSAR images. By selecting and combining three competitive polarization features, the proposed solution can remove most non-building areas effectively, including mountain vegetation and farmland, which are easily confused with collapsed buildings. Second, it significantly improves the classification of collapsed and standing buildings. A new polarization feature was created specifically for classifying obliquely-oriented and collapsed buildings by developing the optimization of polarimetric contrast enhancement (OPCE) matching algorithm. Using this feature combined with texture features, the proposed method effectively distinguished collapsed from obliquely-oriented buildings, while also identifying collapsed buildings in error-prone areas.
Experiments were implemented on three PolSAR datasets obtained in fully polarimetric mode: Radarsat-2 PolSAR data from the 2010 Yushu earthquake in China (resolution: 12 m); ALOS PALSAR PolSAR data from the 2011 Tohoku tsunami in Japan (resolution: 23.14 m); and ALOS-2 PolSAR data from the 2016 Kumamoto earthquake in Japan (resolution: 5.1 m). Through these experiments, the proposed method was shown to achieve more than 90% accuracy for built-up area extraction from post-event PolSAR data. The detection accuracies of building damage were 82.3%, 97.4%, and 78.5% at the Yushu, Ishinomaki, and Mashiki town study sites, respectively.
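Polarization features of the kind combined here can be illustrated with the classic Pauli decomposition of a quad-pol scattering matrix, which separates surface, double-bounce, and volume scattering power. This is a generic textbook decomposition for illustration, not the paper's three selected features or its OPCE-derived feature.

```python
import numpy as np

def pauli_features(S_hh, S_hv, S_vv):
    """Pauli decomposition powers of a quad-pol scattering matrix:
    surface (odd-bounce), double-bounce, and volume scattering."""
    p_surface = np.abs(S_hh + S_vv) ** 2 / 2
    p_double = np.abs(S_hh - S_vv) ** 2 / 2
    p_volume = 2 * np.abs(S_hv) ** 2
    return p_surface, p_double, p_volume

# Idealized double-bounce scatterer (e.g. an intact wall-ground corner):
# S_hh and S_vv out of phase, negligible cross-pol.
ps, pd, pv = pauli_features(1 + 0j, 0 + 0j, -1 + 0j)
print(ps, pd, pv)
```

Standing buildings facing the radar produce strong double-bounce returns, while collapsed buildings scatter more diffusely; obliquely-oriented buildings break this pattern, which is why the paper needs a dedicated feature for them.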


2020 ◽  
Vol 230 ◽  
pp. 117451 ◽  
Author(s):  
Tongshu Zheng ◽  
Michael H. Bergin ◽  
Shijia Hu ◽  
Joshua Miller ◽  
David E. Carlson

2020 ◽  
Vol 3 (4) ◽  
pp. 240-251
Author(s):  
Dmitro Yuriiovych Hrishko ◽  
Ievgen Arnoldovich Nastenko ◽  
Maksym Oleksandrovych Honcharuk ◽  
Volodymyr Anatoliyovich Pavlov

This article discusses the use of texture analysis methods to obtain informative features that describe the texture of liver ultrasound images. In total, 317 liver ultrasound images provided by the Institute of Nuclear Medicine and Radiation Diagnostics of NAMS of Ukraine were analyzed. The images were taken by three different sensors (convex, linear, and linear in increased-signal-level mode). The database contained images both of patients with a normal liver and of patients with specific liver diseases (autoimmune hepatitis, Wilson's disease, hepatitis B and C, steatosis, and cirrhosis). Texture analysis was used for feature construction, which yielded more than a hundred informative features making up a common stack. Among them are: three features patented by the authors, derived from the grey-level co-occurrence matrix; features obtained with the spatial sweep method (working on the principle of the group method of data handling) applied to the ultrasound images; statistical features calculated on images brought to one scale using the differential horizontal and vertical matrices proposed by the authors; and greyscale pair ensembles (found using a genetic algorithm) that best identify liver pathology on images transformed by horizontal and vertical differentiation. The resulting feature stack was used to solve the binary classification problem ("norm vs. pathology") for ultrasound liver images. A machine learning method, Random Forest, was used for this purpose. Before classification, to obtain objective results, the samples were divided into training (70%), testing (20%), and examining (10%) sets.
The best three Random Forest models, one per sensor, gave the following recognition rates: 93.4% for the convex sensor, 92.9% for the linear sensor, and 92% for the reinforced linear sensor.
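The 70/20/10 partition described above can be obtained with two successive splits: hold out 30%, then divide that remainder one-third / two-thirds. The feature matrix below is a random stand-in for the 317-image stack.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical feature stack: 317 images x 100 texture features.
rng = np.random.default_rng(0)
X = rng.normal(size=(317, 100))
y = rng.integers(0, 2, 317)  # 0 = norm, 1 = pathology

# 70% training, then split the remaining 30% into 20% testing / 10% examining.
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.3, random_state=0)
X_test, X_exam, y_test, y_exam = train_test_split(X_rest, y_rest, test_size=1/3, random_state=0)
print(len(X_train), len(X_test), len(X_exam))
```

A Random Forest would then be fit on the training set, tuned against the testing set, and reported on the held-out examining set, once per sensor.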

