Detecting Pipeline Pathways in Landsat 5 Satellite Images with Deep Learning

Energies ◽  
2021 ◽  
Vol 14 (18) ◽  
pp. 5642
Author(s):  
Jan Dasenbrock ◽  
Adam Pluta ◽  
Matthias Zech ◽  
Wided Medjroubi

Energy system modeling is essential in analyzing present and future system configurations motivated by the energy transition. Energy models need various input data sets at different scales, including detailed information about energy generation and transport infrastructure. However, accessing such data sets is not straightforward and often restricted, especially for energy infrastructure data. We present a detection model for the automatic recognition of pipeline pathways using a Convolutional Neural Network (CNN) to address this lack of energy infrastructure data sets. The model was trained with historical low-resolution satellite images of the construction phase of British gas transport pipelines, made with the Landsat 5 Thematic Mapper instrument. The satellite images have been automatically labeled with the help of high-resolution pipeline route data provided by the respective Transmission System Operator (TSO). We have used data augmentation on the training data and trained our model with four different initial learning rates. The models trained with the different learning rates have been validated with 5-fold cross-validation using the Intersection over Union (IoU) metric. We show that our model can reliably identify pipeline pathways despite the comparably low resolution of the used satellite images. Further, we have successfully tested the model's capability in other geographic regions by applying it to satellite images of the NEL pipeline in Northern Germany.
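The IoU validation metric used above is simple to state in code. A minimal NumPy sketch for binary segmentation masks (names are illustrative, not from the paper's implementation):

```python
import numpy as np

def iou(pred, target):
    """Intersection over Union between two binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0  # both masks empty: perfect match

pred = np.array([[1, 1], [0, 0]])
target = np.array([[1, 0], [0, 0]])
score = iou(pred, target)  # intersection 1 pixel, union 2 pixels
```

In k-fold cross-validation, the per-fold IoU scores would simply be averaged.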

This research aims to achieve high accuracy for a face recognition system. The Convolutional Neural Network is one of the Deep Learning approaches and has demonstrated excellent performance in many fields, including image recognition with large amounts of training data (such as ImageNet). In practice, hardware limitations and insufficient training data sets are the main obstacles to high performance. Therefore, in this work a Deep Transfer Learning method using the AlexNet pre-trained CNN is proposed to improve the performance of the face recognition system even with a small number of images. The transfer learning method fine-tunes the last layer of the AlexNet CNN model for the new classification task. A data augmentation (DA) technique is also proposed to minimize over-fitting during Deep Transfer Learning training and to improve accuracy. The results show an improvement in both over-fitting and performance after using the data augmentation technique. All experiments were tested on the UTeMFD, GTFD, and CASIA-Face V5 small data sets. As a result, the proposed system achieved a high accuracy of 100% on UTeMFD, 96.67% on GTFD, and 95.60% on CASIA-Face V5, with less than 0.05 seconds of recognition time.
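Fine-tuning only the last layer while keeping pretrained weights frozen, as described above, can be illustrated with a toy stand-in: a fixed random projection plays the role of AlexNet's frozen layers, and only a new softmax classification layer is trained. All names and data here are synthetic, chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "pretrained" feature extractor: a fixed random projection standing in
# for AlexNet's convolutional stack (purely illustrative, not real weights).
W_frozen = rng.normal(size=(16, 4))

def extract(x):
    """Map raw inputs (n, 4) to frozen features (n, 16)."""
    return np.tanh(x @ W_frozen.T)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy two-class data: two well-separated Gaussian blobs.
X = np.vstack([rng.normal(-1.0, 0.3, size=(20, 4)),
               rng.normal(1.0, 0.3, size=(20, 4))])
y = np.array([0] * 20 + [1] * 20)

# New task-specific last layer, trained from scratch; W_frozen is never updated.
W_new = np.zeros((2, 16))
F = extract(X)
for _ in range(200):                       # plain gradient descent, lr = 0.5
    P = softmax(F @ W_new.T)
    G = np.zeros_like(P)
    G[np.arange(len(y)), y] = 1.0          # one-hot targets
    W_new -= 0.5 * (P - G).T @ F / len(y)  # cross-entropy gradient

acc = (softmax(F @ W_new.T).argmax(axis=1) == y).mean()
```

The point of the construction is that only `W_new` receives gradients, which is what makes last-layer fine-tuning cheap on small data sets.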


2021 ◽  
Author(s):  
Tim Scherr ◽  
Katharina Loeffler ◽  
Oliver Neumann ◽  
Ralf Mikut

The virtually error-free segmentation and tracking of densely packed cells and cell nuclei is still a challenging task. Especially in low-resolution and low signal-to-noise-ratio microscopy images, erroneously merged and missing cells are common segmentation errors, making the subsequent cell tracking even more difficult. In 2020, we successfully participated as team KIT-Sch-GE (1) in the 5th edition of the ISBI Cell Tracking Challenge. With our deep learning-based distance map regression segmentation and our graph-based cell tracking, we achieved multiple top-3 rankings on the diverse data sets. In this manuscript, we show how our approach can be further improved by using another optimizer and by fine-tuning the training data augmentation parameters, learning rate schedules, and training data representation. The fine-tuned segmentation in combination with an improved tracking enabled us to further improve our performance in the 6th edition of the Cell Tracking Challenge 2021 as team KIT-Sch-GE (2).
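The distance-map regression targets mentioned above can be sketched for a labeled instance mask: each foreground pixel is assigned its distance to the nearest background pixel, normalized per cell. This brute-force version is for clarity only and is not the team's implementation:

```python
import numpy as np

def cell_distance_map(mask):
    """Per-pixel Euclidean distance to the nearest background pixel,
    normalized to [0, 1] within each labeled cell.
    Brute force for clarity; real pipelines use a fast distance transform."""
    out = np.zeros(mask.shape, dtype=float)
    bg = np.argwhere(mask == 0)  # assumes at least one background pixel
    for ij in np.argwhere(mask > 0):
        out[tuple(ij)] = np.sqrt(((bg - ij) ** 2).sum(axis=1)).min()
    for cell_id in np.unique(mask):
        if cell_id == 0:
            continue
        sel = mask == cell_id
        m = out[sel].max()
        if m > 0:
            out[sel] /= m
    return out

mask = np.array([[0, 1, 1, 1, 0]])  # one cell, background on both sides
dist = cell_distance_map(mask)      # peaks at the cell center
```

A network regressing such maps can separate touching cells by thresholding near the per-cell maxima instead of predicting hard instance labels directly.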


2020 ◽  
Vol 2020 ◽  
pp. 1-13
Author(s):  
Zeming Fan ◽  
Mudasir Jamil ◽  
Muhammad Tariq Sadiq ◽  
Xiwei Huang ◽  
Xiaojun Yu

Due to the rapid spread of COVID-19 and its induced deaths worldwide, it is imperative to develop a reliable tool for the early detection of this disease. Chest X-ray is currently accepted as one of the reliable means for such detection. However, most of the available methods require large training data, and detection accuracy needs improvement given the limited boundary segments of the acquired images available for symptom identification. In this study, a robust and efficient method based on transfer learning techniques is proposed to identify normal and COVID-19 patients using small training data. Transfer learning builds accurate models in a time-saving way. First, data augmentation was performed to help the network memorize image details. Next, five state-of-the-art transfer learning models, AlexNet, MobileNetv2, ShuffleNet, SqueezeNet, and Xception, with three optimizers, Adam, SGDM, and RMSProp, were implemented at various learning rates, 1e-4, 2e-4, 3e-4, and 4e-4, to reduce the probability of overfitting. All experiments were performed on publicly available datasets, with several analytical measurements obtained using 10-fold cross-validation. The results suggest that MobileNetv2 with the Adam optimizer at a learning rate of 3e-4 provides an average accuracy, recall, precision, and F-score of 97%, 96.5%, 97.5%, and 97%, respectively, which are higher than those of all other combinations. The proposed method is competitive with the available literature, demonstrating that it could be used for the early detection of COVID-19 patients.
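The reported accuracy, recall, precision, and F-score can all be computed from a binary confusion matrix. A generic NumPy sketch (not the paper's evaluation code):

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Accuracy, recall, precision, and F1 from binary labels (1 = COVID-19)."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    acc = (tp + tn) / len(y_true)
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return acc, recall, precision, f1

y_true = np.array([1, 1, 1, 0, 0])
y_pred = np.array([1, 1, 0, 0, 1])
acc, recall, precision, f1 = binary_metrics(y_true, y_pred)
```

In the 10-fold setup described above, these four numbers would be averaged over the folds for each model/optimizer/learning-rate combination.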


Author(s):  
Marija Habijan ◽  
Hrvoje Leventić ◽  
Irena Galić ◽  
Danilo Babin

The most recent research shows the importance and suitability of neural networks for medical image processing tasks. Nonetheless, their efficiency in segmentation tasks depends greatly on the amount of available training data. To overcome the issues of using small datasets, various data augmentation techniques have been developed. In this paper, an approach for whole heart segmentation based on a convolutional neural network, specifically the 3D U-Net architecture, is presented. We also propose the incorporation of principal component analysis as an additional data augmentation technique. The network is trained end-to-end, i.e., no pre-trained network is required. Evaluation of the proposed approach is performed on CT images from the MICCAI 2017 Multi-Modality Whole Heart Segmentation Challenge dataset, delivering an average Dice coefficient overlap of 88.2% for the whole heart, i.e., all heart substructures, in a three-fold cross-validation. The final segmentation results show high agreement with the ground truth, indicating that the proposed approach is competitive with the state-of-the-art. Additionally, experiments on the influence of different learning rates are provided, showing that a learning rate of 0.005 gives the best segmentation results.
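One plausible reading of PCA as a data augmentation technique is to perturb samples along the principal components of the training data, with perturbation size tied to each component's variance. The exact formulation in the paper is not given here, so this NumPy sketch is an assumption:

```python
import numpy as np

def pca_augment(X, scale, rng):
    """Shift all samples along the data's principal components, with random
    coefficients scaled by the square root of each component's variance.
    An assumed, illustrative form of PCA-based augmentation."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(X) - 1)
    eigval, eigvec = np.linalg.eigh(cov)           # ascending eigenvalues
    alphas = rng.normal(0.0, scale, size=eigval.shape)
    shift = eigvec @ (alphas * np.sqrt(np.clip(eigval, 0.0, None)))
    return X + shift                               # broadcasts over samples

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 3))
aug = pca_augment(X, scale=0.1, rng=rng)
```

Because the shift follows directions of real data variation, the augmented samples stay closer to the data manifold than isotropic noise would.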


2010 ◽  
Vol 139-141 ◽  
pp. 1847-1851 ◽  
Author(s):  
Qian Qian Shen ◽  
Zong Hai Sun

Gaussian Process (GP) regression is a recent learning method for nonlinear system modeling. The most common way of training the model is the conjugate gradient method, but this method requires computing the Hessian matrix, which demands considerable computing resources, and is therefore not a suitable training method for online learning. An existing online learning algorithm for GPs, the sparse online GP, places constraints on the training data sets. In order to satisfy real-time modeling requirements without limits on the training data sets, an online GP algorithm based on the adaptive natural gradient (ANG) is proposed in this paper. The algorithm is applied to Continuous Stirred Tank Reactor (CSTR) modeling, with the sparse online GP also applied to CSTR modeling for comparison. The simulation results show that the algorithm is effective and achieves higher accuracy than the sparse online GP algorithm.
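For orientation, the batch (offline) GP regression that such online variants approximate can be written in a few lines. This is the standard posterior mean with an RBF kernel, not the proposed ANG algorithm:

```python
import numpy as np

def rbf(a, b, ell=1.0):
    """Squared-exponential kernel between 1-D input arrays."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-2):
    """Posterior mean of a zero-mean GP: k* (K + noise*I)^-1 y."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    return rbf(x_test, x_train) @ np.linalg.solve(K, y_train)

x = np.linspace(0.0, 3.0, 10)
y = np.sin(x)
mean = gp_predict(x, y, np.array([1.5]))  # interpolates sin near 1.5
```

The O(n^3) solve over the full kernel matrix is exactly what online and sparse schemes avoid when data arrive sequentially.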


2020 ◽  
Author(s):  
Hitoshi Miyamoto ◽  
Takuya Sato ◽  
Akito Momose ◽  
Shuji Iwami

This presentation examined a new method for classifying riverine land covers by applying a machine learning technique to both satellite and UAV (Unmanned Aerial Vehicle) images of a Kurobe River channel. The method used Random Forests (RF) for the classification, with the RGBs and NDVIs (Normalized Difference Vegetation Index) of the images in combination. In the process, the high-resolution UAV images made it possible to create accurate training data for the land cover classification of the low-resolution satellite images. The results indicated that combining the high- and low-resolution images in the machine learning could effectively detect waters, gravel/sand beds, trees, and grasses from the satellite images with a certain degree of accuracy. In contrast, using only low-resolution satellite images failed to detect the vegetation difference between trees and grasses. These results support the effectiveness of the present machine learning method, combining satellite and UAV images, for grasping the most critical areas in riparian vegetation management.
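The NDVI used alongside RGB as classifier input is a standard band ratio. A minimal sketch of building per-pixel feature vectors (the band values here are made up for illustration):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index from near-infrared and red bands."""
    return (nir - red) / (nir + red + eps)

# Toy 2x2 reflectance rasters for red, green, blue, and near-infrared bands.
r, g, b, nir = (np.full((2, 2), v) for v in (0.2, 0.3, 0.1, 0.6))

# Per-pixel feature vectors for a Random Forest: RGB plus NDVI.
features = np.stack([r, g, b, ndvi(nir, r)], axis=-1)  # shape (2, 2, 4)
```

Flattening `features` to (pixels, 4) yields the tabular input a Random Forest expects, with UAV-derived labels as the training targets.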


2019 ◽  
Vol 9 (6) ◽  
pp. 1128 ◽  
Author(s):  
Yundong Li ◽  
Wei Hu ◽  
Han Dong ◽  
Xueyan Zhang

Using aerial cameras, satellite remote sensing, or unmanned aerial vehicles (UAV) equipped with cameras can facilitate search and rescue tasks after disasters. The traditional manual interpretation of huge aerial images is inefficient and could be replaced by machine learning-based methods combined with image processing techniques. Given the development of machine learning, researchers find that convolutional neural networks can effectively extract features from images. Some target detection methods based on deep learning, such as the single-shot multibox detector (SSD) algorithm, can achieve better results than traditional methods. However, the impressive performance of machine learning-based methods relies on numerous labeled samples. Given the complexity of post-disaster scenarios, obtaining many samples in the aftermath of disasters is difficult. To address this issue, a damaged building assessment method using SSD with pretraining and data augmentation is proposed in the current study, highlighting the following aspects. (1) Objects can be detected and classified into undamaged buildings, damaged buildings, and ruins. (2) A convolutional auto-encoder (CAE) based on VGG16 is constructed and trained using unlabeled post-disaster images. As a transfer learning strategy, the weights of the SSD model are initialized using the weights of the CAE counterpart. (3) Data augmentation strategies, such as image mirroring, rotation, Gaussian blur, and Gaussian noise processing, are utilized to augment the training data set. As a case study, aerial images of Hurricane Sandy in 2012 were used to validate the proposed method's effectiveness. Experiments show that the pretraining strategy can improve overall accuracy by 10% compared with the SSD trained from scratch. They also demonstrate that the data augmentation strategies can improve mAP and mF1 by 72% and 20%, respectively. Finally, the method was further verified on another dataset, from Hurricane Irma, confirming that the proposed method is feasible.
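The listed augmentation strategies map directly to NumPy array operations. A sketch (Gaussian blur omitted for brevity; the noise scale is arbitrary):

```python
import numpy as np

def augment(img, rng):
    """Return the image plus mirrored, 90-degree-rotated, and noisy variants,
    echoing the augmentation strategies listed above."""
    variants = [img, np.fliplr(img)]                   # original + mirror
    variants += [np.rot90(img, k) for k in (1, 2, 3)]  # rotations
    variants.append(img + rng.normal(0.0, 0.01, img.shape))  # Gaussian noise
    return variants

img = np.arange(9.0).reshape(3, 3)
batch = augment(img, np.random.default_rng(0))  # 6 variants per input image
```

For object detection, the same flips and rotations must of course also be applied to the bounding-box annotations, which the sketch leaves out.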


2019 ◽  
Vol 2019 ◽  
pp. 1-9 ◽  
Author(s):  
Michał Klimont ◽  
Mateusz Flieger ◽  
Jacek Rzeszutek ◽  
Joanna Stachera ◽  
Aleksandra Zakrzewska ◽  
...  

Hydrocephalus is a common neurological condition that can have traumatic ramifications and can be lethal without treatment. Nowadays, during therapy, radiologists have to spend a vast amount of time assessing the volume of cerebrospinal fluid (CSF) by manual segmentation on Computed Tomography (CT) images. Further, some of the segmentations are prone to radiologist bias and high intraobserver variability. To improve this, researchers are exploring methods to automate the process, which would enable faster and more unbiased results. In this study, we propose the application of a U-Net convolutional neural network to automatically segment CT brain scans to locate CSF. U-Net is a neural network that has proven successful in various interdisciplinary segmentation tasks. We optimised training using state-of-the-art methods, including the “1cycle” learning rate policy, transfer learning, a generalized dice loss function, mixed float precision, self-attention, and data augmentation. Even though the study was performed on a limited amount of data (80 CT images), our experiment has shown near human-level performance. We achieved a 0.917 mean dice score with 0.0352 standard deviation on cross-validation across the training data and a 0.9506 mean dice score on a separate test set. To our knowledge, these results are better than any known method for CSF segmentation in hydrocephalic patients and are thus promising for practical applications.
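The generalized dice loss mentioned above weights each class by the inverse square of its volume, which keeps small structures such as CSF from being swamped by background. A minimal NumPy sketch, assuming a flattened (classes, pixels) layout:

```python
import numpy as np

def generalized_dice_loss(probs, onehot, eps=1e-9):
    """Generalized Dice loss with inverse-square class-volume weights.
    probs:  predicted class probabilities, shape (classes, pixels)
    onehot: ground-truth one-hot labels, same shape"""
    w = 1.0 / (onehot.sum(axis=1) ** 2 + eps)          # per-class weights
    inter = (w * (probs * onehot).sum(axis=1)).sum()
    denom = (w * (probs + onehot).sum(axis=1)).sum()
    return 1.0 - 2.0 * inter / (denom + eps)

# Perfect prediction: the loss should be (numerically) zero.
onehot = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 1.0]])  # 2 classes, 3 pixels
loss_perfect = generalized_dice_loss(onehot, onehot)
```

In a training loop the same expression would be written with a tensor library's differentiable operations so gradients can flow to the network.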


Diagnostics ◽  
2021 ◽  
Vol 11 (6) ◽  
pp. 1052
Author(s):  
Leang Sim Nguon ◽  
Kangwon Seo ◽  
Jung-Hyun Lim ◽  
Tae-Jun Song ◽  
Sung-Hyun Cho ◽  
...  

Mucinous cystic neoplasms (MCN) and serous cystic neoplasms (SCN) account for a large portion of solitary pancreatic cystic neoplasms (PCN). In this study we implemented a convolutional neural network (CNN) model using ResNet50 to differentiate between MCN and SCN. The training data were collected retrospectively from 59 MCN and 49 SCN patients from two different hospitals. Data augmentation was used to enhance the size and quality of the training datasets. A fine-tuning training approach was utilized by adopting the pre-trained model from transfer learning while training selected layers. Testing of the network was conducted by varying the endoscopic ultrasonography (EUS) image sizes and positions to evaluate the network performance for differentiation. The proposed network model achieved up to 82.75% accuracy and an area under the curve (AUC) of 0.88 (95% CI: 0.817–0.930). The performance of the implemented deep learning networks in decision-making using only EUS images is comparable to that of traditional manual decision-making using EUS images along with supporting clinical information. Gradient-weighted class activation mapping (Grad-CAM) confirmed that the network model learned the features from the cyst region accurately. This study proves the feasibility of diagnosing MCN and SCN using a deep learning network model. Further improvement using more datasets is needed.
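The reported AUC can be computed from raw classifier scores with the rank-sum (Mann-Whitney) identity. A generic sketch that assumes no tied scores, and is not the study's evaluation code:

```python
import numpy as np

def auc(y_true, scores):
    """Area under the ROC curve via the rank-sum identity (no tied scores)."""
    ranks = np.empty(len(scores))
    ranks[np.argsort(scores)] = np.arange(1, len(scores) + 1)
    pos = y_true == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

y = np.array([0, 0, 1, 1])
s = np.array([0.1, 0.4, 0.35, 0.8])
value = auc(y, s)
```

Equivalently, the AUC is the probability that a randomly chosen positive case is scored above a randomly chosen negative one; here one of the four positive/negative pairs is misordered, giving 0.75.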


2021 ◽  
Vol 16 (1) ◽  
pp. 1-24
Author(s):  
Yaojin Lin ◽  
Qinghua Hu ◽  
Jinghua Liu ◽  
Xingquan Zhu ◽  
Xindong Wu

In multi-label learning, label correlations commonly exist in the data. Such correlations not only provide useful information but also impose significant challenges for multi-label learning. Recently, label-specific feature embedding has been proposed to explore label-specific features from the training data, and it uses features highly customized to the multi-label set for learning. While such feature embedding methods have demonstrated good performance, the creation of the feature embedding space is based only on a single label, without considering label correlations in the data. In this article, we propose to combine multiple label-specific feature spaces, using label correlation, for multi-label learning. The proposed algorithm, multi-label-specific feature space ensemble (MULFE), takes into consideration label-specific features, label correlation, and a weighted ensemble principle to form a learning framework. By conducting clustering analysis on each label's negative and positive instances, MULFE first creates features customized to each label. After that, MULFE utilizes the label correlation to optimize the margin distribution of the base classifiers induced by the related label-specific feature spaces. By combining multiple label-specific features, label-correlation-based weighting, and ensemble learning, MULFE achieves the maximum-margin multi-label classification goal through the underlying optimization framework. Empirical studies on 10 public data sets demonstrate the effectiveness of MULFE.
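The per-label clustering step MULFE starts from can be sketched as follows: k-means centers are fit separately on a label's positive and on its negative instances, and each instance is then re-represented by its distances to all centers, forming that label's customized feature space. The details (k, distance metric, initialization) are assumptions for illustration, not taken from the paper:

```python
import numpy as np

def label_specific_features(X, y, k=2, iters=20, seed=0):
    """Build one label's customized feature space: distances from every
    instance to k-means centers of the label's positive and negative sets."""
    rng = np.random.default_rng(seed)
    dists = []
    for part in (X[y == 1], X[y == 0]):
        # Naive k-means on this subset (random init, fixed iteration count).
        centers = part[rng.choice(len(part), size=k, replace=False)].copy()
        for _ in range(iters):
            assign = np.linalg.norm(part[:, None] - centers[None],
                                    axis=2).argmin(axis=1)
            for c in range(k):
                if np.any(assign == c):
                    centers[c] = part[assign == c].mean(axis=0)
        # Distance of ALL instances to this subset's centers.
        dists.append(np.linalg.norm(X[:, None] - centers[None], axis=2))
    return np.hstack(dists)  # shape (n, 2*k)

rng = np.random.default_rng(1)
X = rng.normal(size=(12, 3))
y = np.array([1] * 6 + [0] * 6)  # one label's positive/negative split
F = label_specific_features(X, y)
```

Repeating this per label yields one feature space per label; MULFE's contribution is then to ensemble the classifiers trained on these spaces using label correlations.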

