Mining and Tailings Dam Detection in Satellite Imagery Using Deep Learning

Sensors ◽  
2020 ◽  
Vol 20 (23) ◽  
pp. 6936
Author(s):  
Remis Balaniuk ◽  
Olga Isupova ◽  
Steven Reece

This work explores the combination of free cloud computing, free open-source software, and deep learning methods to analyze a real, large-scale problem: the automatic country-wide identification and classification of surface mines and mining tailings dams in Brazil. Locations of officially registered mines and dams were obtained from the Brazilian government's open data resource. Multispectral Sentinel-2 satellite imagery, obtained and processed on the Google Earth Engine platform, was used to train and test deep neural networks using the TensorFlow 2 application programming interface (API) and the Google Colaboratory (Colab) platform. Fully convolutional neural networks were used in an innovative way to search for unregistered ore mines and tailings dams across large areas of Brazilian territory. The efficacy of the approach is demonstrated by the discovery of 263 mines that do not have an official mining concession. This exploratory work highlights the potential of a set of freely available new technologies for the construction of low-cost data science tools with high social impact. At the same time, it discusses and suggests practical solutions to the complex and serious problem of illegal mining and the proliferation of tailings dams, which pose high risks to the population and the environment, especially in developing countries.
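The country-wide search described above boils down to tiling very large rasters and scoring each tile with a trained network. A toy sketch of that tile-and-classify loop follows; the helper names, the tile size, and the plug-in classifier are hypothetical (the paper's actual pipeline runs fully convolutional networks over Sentinel-2 mosaics inside Google Earth Engine):

```python
# Hypothetical tile-and-classify sketch; `classify` stands in for a trained
# network that flags candidate mine/tailings-dam tiles.

def iter_tiles(height, width, tile=256, stride=256):
    """Yield (row, col) origins of tiles covering a height x width raster."""
    for r in range(0, height - tile + 1, stride):
        for c in range(0, width - tile + 1, stride):
            yield r, c

def search_scene(scene, classify, tile=256):
    """Return origins of tiles the classifier flags as candidate sites."""
    h, w = len(scene), len(scene[0])
    hits = []
    for r, c in iter_tiles(h, w, tile):
        patch = [row[c:c + tile] for row in scene[r:r + tile]]
        if classify(patch):
            hits.append((r, c))
    return hits
```

In the real system each flagged tile would then be cross-checked against the registry of official mining concessions to surface unregistered sites.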

2021 ◽  
Author(s):  
Jun Liu ◽  
Feng Deng ◽  
Geng Yuan ◽  
Xue Lin ◽  
Houbing Song ◽  
...  

Recently, model interpretability has become a hot topic in deep learning research, especially in medical imaging, where safety requirements are extremely high and it is very important for a model to be able to explain its predictions. However, existing convolutional neural network solutions for left ventricular segmentation are black boxes: building explainable CNNs remains a challenge, and explainable deep learning models are a task often overlooked in the data science lifecycle by data scientists and deep learning engineers. Because medical imaging data are very limited, most current solutions use transfer learning to fine-tune models pretrained on large-scale benchmark datasets (such as ImageNet). Consequently, a large number of useless parameters are generated, creating a further barrier to a convincing explanation. This paper presents a novel method to automatically segment the left ventricle in cardiac MRI using explainable convolutional neural networks whose size and parameters are optimized by our enhanced Deep Learning GPU Training System (DIGITS), making the models well suited for deployment on mobile devices. We simplify deep learning tasks on the DIGITS system, monitor performance, and display the heat map of each network layer with advanced visualizations in real time. Our experimental results demonstrate that the proposed method is feasible and efficient.
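One simple, model-agnostic way to produce the kind of per-region explanation discussed above is occlusion sensitivity: mask each image region in turn and record how much the model's score drops. This is only an illustrative sketch of an explanation heat map (the paper's heat maps come from the network's layer activations via DIGITS, not from occlusion), and the `score_fn`, patch size, and fill value are assumptions:

```python
# Occlusion-sensitivity sketch: heat[r][c] = score drop when the patch
# covering (r, c) is masked out. `score_fn` is any scalar-scoring model.

def occlusion_map(image, score_fn, patch=2, fill=0.0):
    h, w = len(image), len(image[0])
    base = score_fn(image)
    heat = [[0.0] * w for _ in range(h)]
    for r in range(0, h, patch):
        for c in range(0, w, patch):
            masked = [row[:] for row in image]          # copy the image
            for rr in range(r, min(r + patch, h)):      # zero one patch
                for cc in range(c, min(c + patch, w)):
                    masked[rr][cc] = fill
            drop = base - score_fn(masked)
            for rr in range(r, min(r + patch, h)):      # record the drop
                for cc in range(c, min(c + patch, w)):
                    heat[rr][cc] = drop
    return heat
```

Regions whose occlusion causes a large score drop are the ones the model relies on, which is the intuition a segmentation heat map is meant to convey.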


2012 ◽  
Vol 37 (4) ◽  
pp. 168-171 ◽  
Author(s):  
Birutė Ruzgienė ◽  
Qian Yi Xiang ◽  
Silvija Gečytė

The rectification of high-resolution digital aerial images or satellite imagery for large-scale city mapping is a modern technology that requires well-distributed and accurately defined control points. Digital satellite imagery, obtained using the widely known Google Earth software, can be applied to accurate city map construction. The five-control-point method is suggested for imagery rectification, introducing the algorithm proposed by Prof. Ruan Wei (Tongji University, Shanghai). Image rectification software created on the basis of this algorithm can correct image deformation with the required accuracy, is reliable, and retains the advantage of flexibility. Experimental research testing the technology was carried out using GeoEye imagery from Google Earth over the city of Vilnius. Orthophoto maps at scales of 1:1000 and 1:500 were generated following the five-control-point methodology. Reference data and rectification results were checked against those obtained by processing digital aerial images with a digital photogrammetry approach. The image rectification process using the investigated method takes a short time (about 4-5 minutes) and uses only five control points. The accuracy of the created models satisfies the requirements for large-scale mapping.
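To illustrate why redundant control points help, here is a plain least-squares affine fit (x' = ax + by + c, y' = dx + ey + f) from control-point pairs. This is only a generic sketch of the underlying idea: Prof. Ruan Wei's five-point algorithm is more elaborate, and the solver below is a minimal stand-in, not the paper's software:

```python
# Least-squares affine rectification from >= 3 control points; with 5
# points the redundancy lets the fit absorb measurement noise.

def solve3(m, v):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    a = [row[:] + [v[i]] for i, row in enumerate(m)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(a[r][i]))
        a[i], a[p] = a[p], a[i]
        for r in range(3):
            if r != i:
                f = a[r][i] / a[i][i]
                a[r] = [x - f * y for x, y in zip(a[r], a[i])]
    return [a[i][3] / a[i][i] for i in range(3)]

def fit_affine(src, dst):
    """Fit x' and y' independently as [a, b, c] with x' = a*x + b*y + c."""
    rows = [[x, y, 1.0] for x, y in src]
    def lsq(target):  # normal equations: (A^T A) p = A^T b
        ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)]
               for i in range(3)]
        atb = [sum(r[i] * t for r, t in zip(rows, target)) for i in range(3)]
        return solve3(ata, atb)
    return lsq([d[0] for d in dst]), lsq([d[1] for d in dst])
```

With exact control points the fit recovers the transform exactly; with noisy points the residuals at the check points give the accuracy figure reported for the orthophoto maps.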


Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2852
Author(s):  
Parvathaneni Naga Srinivasu ◽  
Jalluri Gnana SivaSai ◽  
Muhammad Fazal Ijaz ◽  
Akash Kumar Bhoi ◽  
Wonjoon Kim ◽  
...  

Deep learning models are efficient at learning the features that assist in understanding complex patterns precisely. This study proposes a computerized process for classifying skin disease using a deep-learning-based MobileNet V2 and Long Short-Term Memory (LSTM). The MobileNet V2 model proved to be efficient, achieving better accuracy while remaining able to run on lightweight computational devices. The proposed model is efficient at maintaining stateful information for precise predictions. A grey-level co-occurrence matrix is used for assessing the progress of diseased growth. The performance has been compared against other state-of-the-art models such as Fine-Tuned Neural Networks (FTNN), Convolutional Neural Networks (CNN), Very Deep Convolutional Networks for Large-Scale Image Recognition developed by the Visual Geometry Group (VGG), and a convolutional neural network architecture expanded with a few changes. The HAM10000 dataset is used, and the proposed method outperformed the other methods with more than 85% accuracy. It recognizes the affected region much faster, with almost half the computation of the conventional MobileNet model, resulting in minimal computational effort. Furthermore, a mobile application is designed for instant and proper action. It helps patients and dermatologists identify the type of disease from an image of the affected region at the initial stage of the skin disease. These findings suggest that the proposed system can help general practitioners efficiently and effectively diagnose skin conditions, thereby reducing further complications and morbidity.
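The grey-level co-occurrence matrix mentioned above can be computed in a few lines. This minimal version handles a single pixel offset (here the horizontal neighbour); production implementations such as scikit-image's `graycomatrix` support multiple distances, angles, and symmetric accumulation:

```python
# Minimal GLCM: counts (then normalizes) how often grey level i is
# followed by grey level j at offset (dr, dc).

def glcm(image, levels, dr=0, dc=1, normed=True):
    m = [[0.0] * levels for _ in range(levels)]
    h, w = len(image), len(image[0])
    total = 0
    for r in range(h):
        for c in range(w):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < h and 0 <= c2 < w:
                m[image[r][c]][image[r2][c2]] += 1
                total += 1
    if normed and total:
        m = [[v / total for v in row] for row in m]
    return m
```

Texture statistics derived from this matrix (contrast, homogeneity, energy) are what make the GLCM useful for tracking how a diseased region's texture changes over time.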


2021 ◽  
Author(s):  
Ramez Saeed ◽  
Saad Abdelrahman ◽  
Andrea Scozari ◽  
Abdelazim Negm

<p><strong>ABSTRACT</strong></p><p>With the fast-growing demand for all possible forms of remote work as a result of the COVID-19 pandemic, new technologies using satellite data have been strongly encouraged for multidisciplinary applications in fields such as agriculture, climate change, environment, coastal management, maritime, security, and the Blue Economy.</p><p>This work supports applying Satellite Derived Bathymetry (SDB) with available low-cost multispectral satellite imagery applications, instruments, and readily accessible data for different areas, given only their benthic parameters, water characteristics, and atmospheric conditions. The main goal of this work is to derive the bathymetric data needed for different hydrographic applications, such as nautical charting, coastal engineering, water quality monitoring, sediment movement monitoring, and supporting both green carbon and marine data science. This work also proposes and assesses an SDB procedure that makes use of publicly available multispectral satellite images (Sentinel-2 MSI) and applies algorithms available in the SNAP software package to extract bathymetry and supporting bathymetric layers, as an alternative to highly expensive traditional in-situ hydrographic surveys. The procedure was applied to the SAFAGA harbor area, located south of Hurghada at (26°44′N, 33°56′E) on the Egyptian Red Sea coast. SAFAGA controls important maritime traffic lines in the Red Sea, such as the Safaga-Deba (Saudi Arabia) maritime cruises. Depths at SAFAGA range from 6 m to 22 m, surrounded by many shoal patches and confined waters that largely affect the safety of maritime navigation. Therefore, there is always high demand for the updated nautical charts that this work supports. The outcome of this work fulfils those demands with bathymetric layer data, at reasonable accuracies, for the approach channel and harbour usage bands of the SAFAGA electronic nautical chart.
The coefficient of determination (R<sup>2</sup>) ranges from 0.42 to 0.71 after applying water column correction with the Lyzenga algorithm and deriving bathymetric data from the relationship, modelled by the Stumpf equation, between in-situ depth values and the reflectance/radiance of optical imagery collected by the Sentinel-2 missions. The adopted approach proved to give reasonable results that could be used in nautical chart compilation, and similar methodologies could be applied to inland water bodies. This study is part of the MSc thesis of the first author and falls within the framework of an ongoing bilateral project between the ASRT of Egypt and the CNR of Italy.</p><p><strong>Keywords: Algorithm, Bathymetry, Sentinel 2, nautical charting, Safaga port, satellite imagery, water depth, Egypt.</strong></p>
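The Stumpf band-ratio step referenced above models depth as a linear function of ln(n·R_blue)/ln(n·R_green), with the two coefficients tuned against in-situ soundings. A hedged sketch follows; the constant n, the band values, and the fitting approach are illustrative, not the paper's actual calibration:

```python
import math

# Stumpf (2003)-style ratio transform plus a least-squares line
# depth = m1 * ratio - m0 fitted to in-situ soundings.

def stumpf_ratio(r_blue, r_green, n=1000.0):
    """Log-ratio of two water-leaving reflectance bands."""
    return math.log(n * r_blue) / math.log(n * r_green)

def fit_depth(ratios, depths):
    """Ordinary least squares for depth = m1 * ratio - m0."""
    k = len(ratios)
    mx, my = sum(ratios) / k, sum(depths) / k
    m1 = (sum((x - mx) * (y - my) for x, y in zip(ratios, depths))
          / sum((x - mx) ** 2 for x in ratios))
    m0 = m1 * mx - my
    return m1, m0
```

The R² values quoted in the abstract would then come from comparing the fitted depths against held-out sounding data after the Lyzenga water-column correction.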


BMC Genomics ◽  
2019 ◽  
Vol 20 (S9) ◽  
Author(s):  
Yang-Ming Lin ◽  
Ching-Tai Chen ◽  
Jia-Ming Chang

Abstract. Background: Tandem mass spectrometry allows biologists to identify and quantify protein samples in the form of digested peptide sequences. When performing peptide identification, spectral library search is more sensitive than traditional database search but is limited to peptides that have been previously identified. An accurate tandem mass spectrum prediction tool is thus crucial for expanding the peptide space and increasing the coverage of spectral library search. Results: We propose MS2CNN, a non-linear regression model based on deep convolutional neural networks. The features for our model are amino acid composition, predicted secondary structure, and physical-chemical features such as isoelectric point, aromaticity, helicity, hydrophobicity, and basicity. MS2CNN was trained with five-fold cross-validation on a three-way data split of the large-scale human HCD MS2 dataset of Orbitrap LC-MS/MS runs downloaded from the National Institute of Standards and Technology. It was then evaluated on a publicly available independent test dataset of human HeLa cell lysate from LC-MS experiments. On average, our model shows better cosine similarity and Pearson correlation coefficient (0.690 and 0.632) than MS2PIP (0.647 and 0.601) and is comparable with pDeep (0.692 and 0.642). Notably, for the more complex MS2 spectra of 3+ peptides, MS2CNN is significantly better than both MS2PIP and pDeep. Conclusions: We showed that MS2CNN outperforms MS2PIP for 2+ and 3+ peptides and pDeep for 3+ peptides. This implies that MS2CNN, the proposed convolutional neural network model, generates highly accurate MS2 spectra for LC-MS/MS experiments using Orbitrap machines, which can be of great help in protein and peptide identification. The results suggest that incorporating more data into the deep learning model may further improve performance.
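The two spectrum-similarity metrics reported above, cosine similarity and the Pearson correlation coefficient, reduce to a few lines when comparing a predicted MS2 intensity vector against an observed one:

```python
import math

# Similarity metrics for two equal-length intensity vectors.

def cosine(a, b):
    """Cosine of the angle between vectors a and b."""
    num = sum(x * y for x, y in zip(a, b))
    return num / (math.sqrt(sum(x * x for x in a))
                  * math.sqrt(sum(y * y for y in b)))

def pearson(a, b):
    """Pearson correlation: cosine similarity of mean-centered vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / math.sqrt(sum((x - ma) ** 2 for x in a)
                           * sum((y - mb) ** 2 for y in b))
```

Cosine similarity is scale-invariant but not shift-invariant, while Pearson correlation is both, which is why the two numbers reported for each model differ slightly.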


2020 ◽  
Vol 12 (16) ◽  
pp. 2626 ◽  
Author(s):  
Qingting Li ◽  
Zhengchao Chen ◽  
Bing Zhang ◽  
Baipeng Li ◽  
Kaixuan Lu ◽  
...  

The timely and accurate mapping and monitoring of mine tailings dams is crucial to the improvement of management practices by decision makers and to the prevention of disasters caused by failures of these dams. Due to the complex topography, varying geomorphological characteristics, and the diversity of ore types and mining activities, as well as the range of scales and production processes involved, as they appear in remote sensing imagery, tailings dams vary in terms of their scale, color, shape, and surrounding background. The application of high-resolution satellite imagery to the automatic detection of tailings dams at large spatial scales has been barely reported. In this study, a target detection method based on deep learning was developed for automatically identifying the locations of tailings ponds and obtaining their geographical distribution from high-resolution satellite imagery. Training samples were produced based on the characteristics of tailings ponds in satellite images. According to the sample characteristics, the Single Shot Multibox Detector (SSD) model was fine-tuned during model training. The results showed that a detection accuracy of 90.2% and a recall rate of 88.7% could be obtained. Based on the optimized SSD model, 2221 tailings ponds were extracted from Gaofen-1 high-resolution imagery in the Jing-Jin-Ji region in northern China. In this region, the majority of tailings ponds are located at high altitudes in remote mountainous areas. At the city level, the tailings ponds were found to be located mainly in Chengde, Tangshan, and Zhangjiakou. The results prove that the deep learning method is very effective at detecting complex land-cover features from remote sensing images.
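Detection accuracy and recall figures like those above are typically computed by matching predicted boxes to ground-truth boxes at an IoU threshold. The sketch below uses greedy matching at IoU ≥ 0.5; the threshold and matching rule are assumptions, since the abstract does not spell out its evaluation protocol:

```python
# Precision/recall for box detection via greedy IoU matching.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter) if inter else 0.0

def precision_recall(preds, truths, thr=0.5):
    """Each truth box may be matched by at most one prediction."""
    unmatched = list(truths)
    tp = 0
    for p in preds:
        best = max(unmatched, key=lambda t: iou(p, t), default=None)
        if best is not None and iou(p, best) >= thr:
            unmatched.remove(best)
            tp += 1
    precision = tp / len(preds) if preds else 0.0
    recall = tp / len(truths) if truths else 0.0
    return precision, recall
```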


2021 ◽  
Author(s):  
Lahiru N. Wimalasena ◽  
Jonas F. Braun ◽  
Mohammad Reza Keshtkaran ◽  
David Hofmann ◽  
Juan Álvaro Gallego ◽  
...  

Abstract. Objective: To study the neural control of movement, it is often necessary to estimate how muscles are activated across a variety of behavioral conditions. However, estimating the latent command signal that underlies muscle activation is challenging due to its complex relation with recorded electromyographic (EMG) signals. Common approaches estimate muscle activation independently for each channel or require manual tuning of model hyperparameters to optimally preserve behaviorally-relevant features. Approach: Here, we adapted AutoLFADS, a large-scale, unsupervised deep learning approach originally designed to de-noise cortical spiking data, to estimate muscle activation from multi-muscle EMG signals. AutoLFADS uses recurrent neural networks (RNNs) to model the spatial and temporal regularities that underlie multi-muscle activation. Main Results: We first tested AutoLFADS on muscle activity from the rat hindlimb during locomotion and found that it dynamically adjusts its frequency response characteristics across different phases of behavior. The model produced single-trial estimates of muscle activation that improved prediction of joint kinematics as compared to low-pass or Bayesian filtering. We also tested the generality of the approach by applying AutoLFADS to monkey forearm muscle activity from an isometric task. AutoLFADS uncovered previously uncharacterized high-frequency oscillations in the EMG that enhanced the correlation with measured force compared to low-pass or Bayesian filtering. The AutoLFADS-inferred estimates of muscle activation were also more closely correlated with simultaneously-recorded motor cortical activity than other tested approaches. Significance: Ultimately, this method leverages both dynamical systems modeling and artificial neural networks to provide estimates of muscle activation for multiple muscles that can be used for further studies of multi-muscle coordination and its control by upstream brain areas.
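The classical low-pass baseline that AutoLFADS is compared against above is, in its simplest form, full-wave rectification followed by smoothing. A minimal sketch with a single-pole IIR smoother; the smoothing constant is chosen purely for illustration, not taken from the paper:

```python
# Classical EMG envelope: rectify, then first-order low-pass filter.

def rectify(emg):
    """Full-wave rectification of a raw EMG trace."""
    return [abs(x) for x in emg]

def lowpass(signal, alpha=0.1):
    """Single-pole IIR smoother: y[t] = alpha*x[t] + (1-alpha)*y[t-1]."""
    out, y = [], 0.0
    for x in signal:
        y = alpha * x + (1.0 - alpha) * y
        out.append(y)
    return out
```

The limitation this baseline illustrates is exactly the one the abstract describes: a fixed alpha imposes one frequency response on the whole recording, whereas AutoLFADS adapts its effective filtering across phases of behavior.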


Author(s):  
Alex Dexter ◽  
Spencer A. Thomas ◽  
Rory T. Steven ◽  
Kenneth N. Robinson ◽  
Adam J. Taylor ◽  
...  

Abstract. High-dimensional omics and hyperspectral imaging datasets present difficult challenges for feature extraction and data mining due to the huge numbers of features that cannot be examined simultaneously. The sample numbers and variables of these methods are constantly growing as new technologies are developed, and computational analysis needs to evolve to keep up with the growing demand. Current state-of-the-art algorithms can handle some routine datasets but struggle when datasets grow above a certain size. We present an approach that trains deep neural networks to perform non-linear dimensionality reduction, in particular t-distributed stochastic neighbour embedding (t-SNE), to overcome the prior limitations of these methods. One Sentence Summary: Analysis of prohibitively large datasets by combining deep neural networks with non-linear dimensionality reduction.
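At the heart of t-SNE is the conversion of pairwise distances into conditional probabilities p(j|i) via a Gaussian kernel; a parametric network can then be trained to reproduce the resulting embedding. The sketch below fixes sigma instead of tuning it per point to a target perplexity, so it is a simplification of the real algorithm:

```python
import math

# Row-stochastic similarity matrix p[i][j] ~ exp(-d2/(2*sigma^2)),
# the input-side quantity t-SNE tries to preserve in the embedding.

def conditional_p(points, sigma=1.0):
    n = len(points)
    d2 = [[sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
           for j in range(n)] for i in range(n)]
    p = []
    for i in range(n):
        w = [0.0 if j == i else math.exp(-d2[i][j] / (2 * sigma ** 2))
             for j in range(n)]
        s = sum(w)
        p.append([x / s for x in w])
    return p
```

The scalability problem the abstract targets is visible here: the matrix is O(n²) in memory, which is what a trained neural network mapping new samples directly into the embedding avoids.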


2020 ◽  
Vol 10 (14) ◽  
pp. 4913
Author(s):  
Tin Kramberger ◽  
Božidar Potočnik

Currently there is no publicly available adequate dataset that could be used for training Generative Adversarial Networks (GANs) on car images. All available car datasets differ in noise, pose, and zoom levels. Thus, the objective of this work was to create an improved car image dataset better suited to GAN training. To improve the performance of the GAN, we coupled the LSUN and Stanford car datasets. The merged dataset was then pruned to adjust zoom levels and reduce image noise. This process resulted in fewer images that could be used for training, though with increased quality. The pruned dataset was evaluated by training StyleGAN with its original settings. Pruning the combined LSUN and Stanford datasets resulted in 2,067,710 images of cars with less noise and better-adjusted zoom levels. Training StyleGAN on the LSUN-Stanford car dataset proved superior to training on the LSUN dataset alone by 3.7% in terms of the Fréchet Inception Distance (FID). The results indicate that the proposed LSUN-Stanford car dataset is more consistent and better suited for training GAN neural networks than other currently available large car datasets.
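The FID metric used above measures the Fréchet distance between two Gaussians fitted to Inception-v3 features of real and generated images. Reduced to one dimension the closed form is short enough to show directly; this univariate version is only an illustration of the formula, not the multivariate feature-space computation real FID performs:

```python
import math

# Frechet distance between two univariate Gaussians
# N(mu1, var1) and N(mu2, var2).

def fid_1d(mu1, var1, mu2, var2):
    """(mu1 - mu2)^2 + var1 + var2 - 2*sqrt(var1*var2)."""
    return (mu1 - mu2) ** 2 + var1 + var2 - 2.0 * math.sqrt(var1 * var2)
```

Lower is better: identical distributions score 0, and either a mean shift or a variance mismatch between the real and generated feature distributions increases the score.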


Symmetry ◽  
2020 ◽  
Vol 12 (5) ◽  
pp. 705
Author(s):  
Po-Chou Shih ◽  
Chun-Chin Hsu ◽  
Fang-Chih Tien

Silicon wafers are the most crucial material in the semiconductor manufacturing industry. Owing to limited resources, the reclamation of monitor and dummy wafers for reuse can dramatically lower costs and become a competitive edge in this industry. However, defects such as voids, scratches, particles, and contamination are found on the surfaces of reclaimed wafers. Most reclaimed wafers with an asymmetric distribution of defects, known as the "good (G)" reclaimed wafers, can be re-polished if their defects are not irreversible and their thicknesses are sufficient for re-polishing. Currently, the "no good (NG)" reclaimed wafers must first be screened by experienced human inspectors to determine their re-usability through defect mapping. This screening task is tedious, time-consuming, and unreliable. This study presents a deep-learning-based defect classification approach for reclaimed wafers. Three neural networks, multilayer perceptron (MLP), convolutional neural network (CNN), and Residual Network (ResNet), are adopted and compared for classification. These networks analyze the pattern of the defect map and determine not only whether a reclaimed wafer is suitable for re-polishing but also to which defect category it belongs. The open-source TensorFlow library was used to train the MLP, CNN, and ResNet networks using collected wafer images as input data. Based on the experimental results, we found that the system applying CNNs with a proper design of kernels and structures gave fast and superior performance in identifying defective wafers owing to its deep learning capability, the ResNet exhibited excellent accuracy on average, and large-scale MLP networks also achieved good results with proper network structures.
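Of the three classifiers compared above, the MLP is the simplest: flattened defect-map pixels pass through one or more fully connected layers into a softmax over defect categories. A minimal single-hidden-layer forward pass follows; the weights are arbitrary placeholders, not trained values, and the real models were built and trained with TensorFlow:

```python
import math

# Single-hidden-layer MLP forward pass: ReLU activation, softmax output.

def relu(v):
    return [max(0.0, x) for x in v]

def softmax(v):
    m = max(v)                       # subtract max for numerical stability
    e = [math.exp(x - m) for x in v]
    s = sum(e)
    return [x / s for x in e]

def matvec(w, v, b):
    """Affine layer: w @ v + b for weight rows w and bias b."""
    return [sum(wi * vi for wi, vi in zip(row, v)) + bi
            for row, bi in zip(w, b)]

def mlp_forward(x, w1, b1, w2, b2):
    """Class probabilities for one flattened defect-map input x."""
    return softmax(matvec(w2, relu(matvec(w1, x, b1)), b2))
```

CNN and ResNet replace the first affine layer with convolutions (and, for ResNet, skip connections), which is what lets them exploit the spatial structure of the defect map that a flattened MLP input discards.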

