Automatic Extraction of Seismic Landslides in Large Areas with Complex Environments Based on Deep Learning: An Example of the 2018 Iburi Earthquake, Japan

2020 ◽  
Vol 12 (23) ◽  
pp. 3992
Author(s):  
Pengfei Zhang ◽  
Chong Xu ◽  
Siyuan Ma ◽  
Xiaoyi Shao ◽  
Yingying Tian ◽  
...  

After a major earthquake, rapid identification and mapping of co-seismic landslides across the whole affected area is of great significance for emergency rescue and loss assessment of seismic hazards. In recent years, researchers have achieved good results on small-scale areas with uniform environmental characteristics. However, for whole earthquake-affected areas with large scale and complex environments, the accuracy of co-seismic landslide extraction remains low, and no ideal solution to this problem has yet been found. In this paper, Planet satellite images with a spatial resolution of 3 m are used to train a seismic landslide recognition model based on deep learning to carry out rapid, automatic extraction of landslides triggered by the 2018 Iburi earthquake, Japan. The study area covers about 671.87 km2, of which 60% is used to train the model and the remaining 40% to verify its accuracy. The results show that most of the co-seismic landslides can be identified by this method: the verification precision of the model is 0.7965 and the F1 score is 0.8288. The method can intelligently identify and map earthquake-triggered landslides from Planet images; it is highly practical and accurate, and can assist earthquake emergency rescue and rapid disaster assessment.
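For reference, the precision and F1 score reported above are standard counts-based measures. A minimal sketch follows; the counts are illustrative only, not taken from the paper:

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall, and F1 from true-positive,
    false-positive, and false-negative counts (e.g. landslide pixels)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Illustrative counts only (not from the paper):
p, r, f1 = precision_recall_f1(tp=800, fp=200, fn=120)
```

F1 is the harmonic mean of precision and recall, so a model must balance both to score well, which matters when landslide pixels are a small fraction of the scene.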

2020 ◽  
Author(s):  
Yuan Yuan ◽  
Lei Lin

Satellite image time series (SITS) classification is a major research topic in remote sensing and is relevant for a wide range of applications. Deep learning approaches have been commonly employed for SITS classification and have provided state-of-the-art performance. However, deep learning methods suffer from overfitting when labeled data is scarce. To address this problem, we propose a novel self-supervised pre-training scheme to initialize a Transformer-based network by utilizing large-scale unlabeled data. Specifically, the model is asked to predict randomly contaminated observations given the entire time series of a pixel. The main idea of our proposal is to leverage the inherent temporal structure of satellite time series to learn general-purpose spectral-temporal representations related to land cover semantics. Once pre-training is completed, the pre-trained network can be further adapted to various SITS classification tasks by fine-tuning all the model parameters on small-scale task-related labeled data. In this way, the general knowledge and representations about SITS can be transferred to a label-scarce task, thereby improving the generalization performance of the model as well as reducing the risk of overfitting. Comprehensive experiments have been carried out on three benchmark datasets over large study areas. Experimental results demonstrate the effectiveness of the proposed method, leading to classification accuracy improvements of 1.91% to 6.69%. <div><b>This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.</b></div>
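A minimal sketch of the masked-prediction pre-training idea described above, assuming a pixel's time series is a (timesteps, bands) array; the contamination scheme and noise scale here are illustrative assumptions, not the authors' exact recipe:

```python
import numpy as np

def make_pretraining_sample(series, mask_ratio=0.15, rng=None):
    """Randomly contaminate observations in a pixel time series.

    Returns the corrupted input, the original target, and a boolean
    mask marking which timesteps the model must reconstruct.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    timesteps = series.shape[0]
    mask = rng.random(timesteps) < mask_ratio
    corrupted = series.copy()
    # "Contaminate" the selected observations with additive noise.
    corrupted[mask] += rng.normal(0.0, 0.5, size=corrupted[mask].shape)
    return corrupted, series, mask
```

During pre-training, the reconstruction loss would be computed only at the masked positions, so the network has to infer them from the surrounding temporal context, which is what encourages general-purpose spectral-temporal representations.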


Electronics ◽  
2019 ◽  
Vol 8 (6) ◽  
pp. 595 ◽  
Author(s):  
Peixin Liu ◽  
Xiaofeng Li ◽  
Han Liu ◽  
Zhizhong Fu

Multi-object tracking aims to estimate the complete trajectories of objects in a scene. Distinguishing among objects efficiently and correctly in complex environments is a challenging problem. In this paper, a Siamese network with an auto-encoding constraint is proposed to extract discriminative features from detection responses in a tracking-by-detection framework. Different from recent deep learning methods, the simple two-layer stacked auto-encoder structure enables the Siamese network to operate efficiently with only small-scale online sample data. The auto-encoding constraint reduces the possibility of overfitting during small-scale sample training. The proposed Siamese network is then improved to extract a previous-appearance-next vector from each tracklet for better association. The new feature integrates the appearance and the previous- and next-stage motions of an element in a tracklet. With the new features, an online incrementally learned tracking framework is established. It contains reliable tracklet generation, data association to generate complete object trajectories, and tracklet growth to deal with missing detections and to enhance the new feature for tracklets. Benefiting from discriminative features, the final trajectories of objects can be obtained by an efficient iterative greedy algorithm. Feature experiments show that the proposed Siamese network has advantages in terms of both discrimination and correctness. The system experiments show the improved tracking performance of the proposed method.
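One way to read the "Siamese network with an auto-encoding constraint" is as a contrastive loss regularized by a reconstruction term. In the sketch below, a tied-weight linear encoder is a hypothetical stand-in for the two-layer stacked auto-encoder, and all names are illustrative:

```python
import numpy as np

def siamese_autoencoder_loss(x1, x2, same, W, margin=1.0):
    """Contrastive loss plus an auto-encoding (reconstruction) term.

    A shared encoder W embeds both detection features; the reconstruction
    term penalizes embeddings that discard input information, which helps
    against overfitting when only small-scale online samples are available.
    """
    z1, z2 = x1 @ W, x2 @ W                       # shared (Siamese) encoder
    d = np.linalg.norm(z1 - z2)                   # embedding distance
    # Pull same-object pairs together, push different ones past the margin.
    contrastive = d**2 if same else max(0.0, margin - d)**2
    # Auto-encoding constraint: tied-weight decoder reconstructs the input.
    recon = np.mean((z1 @ W.T - x1)**2) + np.mean((z2 @ W.T - x2)**2)
    return contrastive + recon
```

With an identity encoder and identical inputs from the same object, both terms vanish, which is the intended optimum for a matched pair.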


2016 ◽  
Author(s):  
Fangping Wan ◽  
Jianyang (Michael) Zeng

Accurately identifying compound-protein interactions in silico can deepen our understanding of the mechanisms of drug action and significantly facilitate the drug discovery and development process. Traditional similarity-based computational models for compound-protein interaction prediction rarely exploit the latent features in currently available large-scale unlabelled compound and protein data, and are often limited to relatively small-scale datasets. We propose a new scheme that combines feature embedding (a technique of representation learning) with deep learning for predicting compound-protein interactions. Our method automatically learns low-dimensional, implicit but expressive features for compounds and proteins from massive amounts of unlabelled data. Combining effective feature embedding with powerful deep learning techniques, our method provides a general computational pipeline for accurate compound-protein interaction prediction, even when the interaction knowledge of compounds and proteins is entirely unknown. Evaluations on current large-scale databases of measured compound-protein affinities, such as ChEMBL and BindingDB, as well as known drug-target interactions from DrugBank, have demonstrated the superior prediction performance of our method and suggested that it can offer a useful tool for drug development and drug repositioning.
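The pipeline described above can be caricatured as: learn embeddings from unlabelled data, then score a compound-protein pair with a supervised model. In this sketch a single logistic unit stands in for the deep network, and all names are illustrative assumptions:

```python
import numpy as np

def predict_interaction(compound_emb, protein_emb, w, b=0.0):
    """Score a compound-protein pair from pre-learned embeddings.

    The embeddings would come from representation learning on large
    unlabelled corpora; here the pair is concatenated and passed through
    a logistic unit standing in for the deep prediction network.
    """
    x = np.concatenate([compound_emb, protein_emb])
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))  # interaction probability
```

The key design point the abstract emphasizes is that the embedding step needs no interaction labels at all; only the final scoring model is trained on labelled pairs.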


2020 ◽  
Author(s):  
Yuan Yuan ◽  
Lei Lin

<div>Satellite image time series (SITS) classification is a major research topic in remote sensing and is relevant for a wide range of applications. Deep learning approaches have been commonly employed for SITS classification and have provided state-of-the-art performance. However, deep learning methods suffer from overfitting when labeled data is scarce. To address this problem, we propose a novel self-supervised pre-training scheme to initialize a Transformer-based network by utilizing large-scale unlabeled data. Specifically, the model is asked to predict randomly contaminated observations given the entire time series of a pixel. The main idea of our proposal is to leverage the inherent temporal structure of satellite time series to learn general-purpose spectral-temporal representations related to land cover semantics. Once pre-training is completed, the pre-trained network can be further adapted to various SITS classification tasks by fine-tuning all the model parameters on small-scale task-related labeled data. In this way, the general knowledge and representations about SITS can be transferred to a label-scarce task, thereby improving the generalization performance of the model as well as reducing the risk of overfitting. Comprehensive experiments have been carried out on three benchmark datasets over large study areas. Experimental results demonstrate the effectiveness of the proposed method, leading to classification accuracy improvements of 2.38% to 5.27%. The code and the pre-trained model will be available at https://github.com/linlei1214/SITS-BERT upon publication.</div><div><b>This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.</b></div>


2021 ◽  
Vol 13 (12) ◽  
pp. 2310
Author(s):  
Xuying Yang ◽  
Peng Sun ◽  
Feng Zhang ◽  
Zhenhong Du ◽  
Renyi Liu

Infrared observation is an all-weather, real-time, large-scale precipitation observation method with high spatio-temporal resolution. A high-precision deep learning algorithm for infrared precipitation estimation can provide powerful data support for precipitation nowcasting and other hydrological studies with high timeliness requirements. The "classification-estimation" two-stage framework is widely used for balancing the data distribution in precipitation estimation algorithms, but it still suffers from error accumulation due to its simple series-wound (cascaded) combination mode. In this paper, we propose a multi-task collaboration framework (MTCF), i.e., a novel combination mode for the classification and estimation models, which alleviates error accumulation while retaining the ability to improve data balance. Specifically, we design a novel positive information feedback loop composed of a consistency constraint mechanism, which largely improves the information abundance and prediction accuracy of the classification branch, and a cross-branch interaction module (CBIM), which realizes soft feature transformation between branches via a soft spatial attention mechanism. In addition, we model and analyze the importance of the input infrared bands, which lays a foundation for further optimizing the input and improving the generalization of the model on other infrared data. Extensive experiments based on Himawari-8 demonstrate that, compared with the baseline model, our MTCF achieves significant F1-score improvements of 3.2%, 3.71%, 5.13%, and 4.04% at precipitation intensities of 0.5, 2, 5, and 10 mm/h, respectively. Moreover, it performs well in identifying the spatial distribution details of precipitation and small-scale precipitation, and shows strong stability under the extreme precipitation of typhoons.
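The error-accumulation issue of the plain two-stage combination can be seen in a short sketch: the estimation output is gated by the classifier, so a classifier miss zeroes out real rain. This is the baseline behavior the proposed MTCF is designed to soften, not the MTCF itself:

```python
import numpy as np

def two_stage_estimate(rain_prob, rain_amount, threshold=0.5):
    """Series-wound 'classification-estimation' combination.

    The estimated rain rate is kept only where the classification stage
    predicts rain, so any classifier error propagates directly into the
    final precipitation field (the error-accumulation problem).
    """
    rain_mask = rain_prob >= threshold
    return np.where(rain_mask, rain_amount, 0.0)
```

In this hard-gated mode, a pixel with true rain but classifier probability just below the threshold is forced to zero regardless of how good the estimation branch is, which motivates the softer cross-branch interaction described above.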



2021 ◽  
Author(s):  
Hang Lu ◽  
Kathleen Bates ◽  
Kim Le

Robust and accurate behavioral tracking is essential for ethological studies. Common methods for tracking and extracting behavior rely on user-adjusted heuristics that can vary significantly across different individuals, environments, and experimental conditions. As a result, they are difficult to implement in large-scale behavioral studies with complex, heterogeneous environmental conditions. Recently developed deep-learning methods for object recognition, such as Faster R-CNN, have advantages in speed, accuracy, and robustness. Here, we show that Faster R-CNN can be employed for identification and detection of Caenorhabditis elegans at a variety of life stages in complex environments. We applied the algorithm to track animal speeds during development, fecundity rates and spatial distribution in reproductive adults, and behavioral decline in aging populations. In doing so, we demonstrate the flexibility, speed, and scalability of Faster R-CNN across a variety of experimental conditions, illustrating its generalized use for future large-scale behavioral studies.
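Downstream of detection, a quantity such as animal speed follows directly from per-frame centroids. A minimal sketch (the detector itself, e.g. Faster R-CNN, is assumed to have already produced one centroid per frame for a tracked worm):

```python
def track_speeds(centroids, fps):
    """Estimate per-frame speed from detector centroids.

    `centroids` is a list of (x, y) positions of one tracked animal in
    consecutive frames; speed is the centroid displacement between
    frames multiplied by the frame rate (pixels per second).
    """
    speeds = []
    for (x0, y0), (x1, y1) in zip(centroids, centroids[1:]):
        dist = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        speeds.append(dist * fps)
    return speeds
```

A pixels-to-micrometers calibration factor would normally be applied as well; it is omitted here for brevity.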


2019 ◽  
Vol 8 (4) ◽  
pp. 4887-4893

Financial Crisis Prediction (FCP) is one of the most complicated and pressing problems faced by corporate organizations, small- to large-scale industries, investors, banks, and government agencies, so it is important to design a framework and methodology for early prediction of financial crises. Earlier approaches, reviewed here through various works, applied statistical techniques to the problem; however, these are not sufficient to produce predictions in an intelligent and automated manner. The major objective of this paper is to enhance early prediction of financial crises in an organization using machine learning models, namely the Multilayer Perceptron (MLP), Radial Basis Function (RBF) network, Logistic Regression (LR), and Deep Learning (DL), and to conduct a comparative analysis to determine the best method for FCP. Testing is conducted on globally used benchmark datasets, namely the German, Weislaw, and Polish datasets, in both the WEKA and RapidMiner frameworks, reporting accuracy and other performance measures, including False Positive Rate (FPR), False Negative Rate (FNR), precision, recall, F-score, and kappa, to determine which algorithm can best identify a financial crisis before it actually occurs in an organization. The DL, MLP, LR, and RBF network algorithms achieved accuracies of 96%, 72.10%, 75.20%, and 74% on the German dataset; 91.25%, 85.83%, 83.75%, and 73.75% on the Weislaw dataset; and 99.70%, 96.30%, 96.21%, and 96.14% on the Polish dataset, respectively. It is evident from these predictive results and the analytics in RapidMiner that Deep Learning (DL) is the best-performing classifier among the methods compared. This approach will enhance future predictions and provide efficient solutions for financial crisis prediction.
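For reference, all of the reported measures (accuracy, FPR, FNR, precision, recall, F-score, kappa) derive from a single binary confusion matrix. A sketch with illustrative counts, not figures from the paper:

```python
def binary_metrics(tp, fp, fn, tn):
    """Confusion-matrix metrics of the kind used to compare classifiers."""
    n = tp + fp + fn + tn
    accuracy = (tp + tn) / n
    fpr = fp / (fp + tn)                 # false positive rate
    fnr = fn / (fn + tp)                 # false negative rate
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    # Cohen's kappa: agreement beyond what chance would produce.
    p_yes = ((tp + fp) / n) * ((tp + fn) / n)
    p_no = ((fn + tn) / n) * ((fp + tn) / n)
    pe = p_yes + p_no
    kappa = (accuracy - pe) / (1 - pe)
    return accuracy, fpr, fnr, precision, recall, f1, kappa
```

Reporting kappa alongside raw accuracy matters here because financial-crisis datasets are typically imbalanced, and kappa discounts the agreement a trivial majority-class predictor would achieve.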



2021 ◽  
Vol 2021 ◽  
pp. 1-14
Author(s):  
Xiaoqing Liu ◽  
Kunlun Gao ◽  
Bo Liu ◽  
Chengwei Pan ◽  
Kongming Liang ◽  
...  

Importance. With the booming growth of artificial intelligence (AI), especially the recent advancements in deep learning, utilizing advanced deep learning-based methods for medical image analysis has become an active research area in both the medical industry and academia. This paper reviews the recent progress of deep learning research in medical image analysis and clinical applications. It also discusses the existing problems in the field and provides possible solutions and future directions. Highlights. This paper reviews the advancement of convolutional neural network-based techniques in clinical applications. More specifically, state-of-the-art clinical applications span four major human body systems: the nervous system, the cardiovascular system, the digestive system, and the skeletal system. Overall, according to the best available evidence, deep learning models perform well in medical image analysis, but it cannot be ignored that algorithms derived from small-scale medical datasets impede clinical applicability. Future directions could include federated learning, benchmark dataset collection, and utilizing domain subject knowledge as priors. Conclusion. Recent advanced deep learning technologies have achieved great success in medical image analysis with high accuracy, efficiency, stability, and scalability. Technological advancements that can alleviate the high demand for high-quality large-scale datasets could be one of the future developments in this area.

