Automated Identification of Mineral Types and Grain Size Using Hyperspectral Imaging and Deep Learning for Mineral Processing

Minerals ◽  
2020 ◽  
Vol 10 (9) ◽  
pp. 809 ◽  
Author(s):  
Natsuo Okada ◽  
Yohei Maekawa ◽  
Narihiro Owada ◽  
Kazutoshi Haga ◽  
Atsushi Shibayama ◽  
...  

In mining operations, ore is separated into its constituents through mineral processing methods such as flotation. Identifying the types of minerals contained in the ore in advance greatly aids faster and more efficient mineral processing. The human eye can recognize visual information in only three wavelength regions: red, green, and blue. With hyperspectral imaging, high-resolution spectral data containing information from the visible-light region to the near-infrared region can be obtained. Using deep learning, the features of the hyperspectral data can be extracted and learned, and the spectral pattern unique to each mineral can be identified and analyzed. In this paper, we propose an automatic mineral identification system that combines hyperspectral imaging and deep learning to identify mineral types before the mineral processing stage. This technique makes it possible to quickly identify the types of minerals contained in rocks by a non-destructive method. In our experiments, deep learning on red, green, and blue (RGB) images of the minerals achieved an identification accuracy of approximately 30%, while deep learning on the hyperspectral data identified the mineral species with a high accuracy of over 90%.
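The abstract does not reproduce the network architecture. As a rough illustration of the kind of per-pixel spectral classifier involved, here is a minimal NumPy sketch of a 1D convolutional pipeline over a single spectrum; the filter count, kernel size, band count, and five-class output are all hypothetical, and the weights are random rather than trained:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels):
    """Valid 1D convolution of a spectrum x (bands,) with kernels (n_k, k), then ReLU."""
    out = np.stack([np.convolve(x, w[::-1], mode="valid") for w in kernels])
    return np.maximum(out, 0.0)

def classify_spectrum(x, kernels, W):
    """Tiny 1D-CNN forward pass: conv -> ReLU -> global average pooling -> softmax."""
    feats = conv1d(x, kernels).mean(axis=1)   # one feature per filter, shape (n_k,)
    logits = feats @ W                        # linear head, shape (n_classes,)
    e = np.exp(logits - logits.max())         # numerically stable softmax
    return e / e.sum()

# Hypothetical setup: a 200-band spectrum, 8 filters of width 7, 5 mineral classes.
spectrum = rng.random(200)
kernels = rng.standard_normal((8, 7))
W = rng.standard_normal((8, 5))
probs = classify_spectrum(spectrum, kernels, W)
```

A real system would learn `kernels` and `W` from labeled spectra; the point here is only the input shape: the classifier consumes a full spectral vector per pixel rather than three RGB values.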

Author(s):  
Shiyang Yin ◽  
Xiaoqing Bi ◽  
Yong Niu ◽  
Xiaomin Gu ◽  
Yong Xiao

Fast and nondestructive detection of early decay caused by fungal infection in citrus fruit is a challenging task for the citrus industry during postharvest fruit processing. In general, workers in fruit packing houses rely on ultraviolet-induced fluorescence to detect and remove decayed citrus fruits; however, this operation is harmful to human health and is also very inefficient. In this study, navel oranges were used as the research object, and a novel method based on hyperspectral imaging in the 400–1100 nm wavelength region was proposed to solve this problem. First, normalization approaches were applied to decrease the variation in spectral reflectance intensity caused by the natural curvature of the navel orange surface. Then, the spectral data of regions of interest (ROIs) from normal and decayed tissues were analyzed by principal component analysis (PCA) to investigate the ability of visible and near-infrared (Vis-NIR) hyperspectral data to discriminate these two kinds of tissue. Next, six characteristic wavelength images were obtained by analyzing the loadings of the first principal component (PC1), and a multispectral image was constructed from the corrected six characteristic wavelength images. On the basis of the multispectral image, pseudo-color image processing with intensity slicing was used to produce a two-dimensional color image with clear contrast between decayed and normal tissues. Finally, an image segmentation algorithm combining the pseudo-color processing method and a global threshold method was proposed for fast identification of decayed navel oranges. For 240 independent samples, the success rates were 100% and 97.5% for decayed navel oranges infected by Penicillium digitatum and for normal navel oranges, respectively.
In particular, the proposed algorithm was also applied to detect decayed navel oranges infected by Penicillium italicum (samples not used in the development of the algorithm) and achieved a 91.7% identification accuracy, indicating good generalization ability and practical application value.
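The characteristic-wavelength step (PC1 loadings, local extrema) can be sketched in NumPy as follows; the spectra, band count, and selection of all extrema (rather than exactly six) are illustrative assumptions:

```python
import numpy as np

def pc1_loadings(spectra):
    """PCA via eigendecomposition of the covariance of ROI spectra (n, bands);
    returns the unit-norm loading vector of the first principal component."""
    X = spectra - spectra.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(X, rowvar=False))
    return vecs[:, np.argmax(vals)]

def characteristic_wavelengths(loadings, wavelengths):
    """Wavelengths at strict local extrema of the PC1 loading curve
    (the paper keeps six of these; here we return all of them)."""
    idx = [i for i in range(1, len(loadings) - 1)
           if (loadings[i] - loadings[i - 1]) * (loadings[i + 1] - loadings[i]) < 0]
    return [wavelengths[i] for i in idx]

# Hypothetical data: 50 ROI spectra over 20 bands between 400 and 1100 nm,
# sharing one spectral shape with per-sample amplitude variation plus noise.
rng = np.random.default_rng(1)
wl = np.linspace(400, 1100, 20)
amps = 1.0 + 0.5 * rng.random((50, 1))
spectra = amps * np.sin(wl / 100.0) + 0.05 * rng.standard_normal((50, 20))
ld = pc1_loadings(spectra)
selected = characteristic_wavelengths(ld, wl)
```

Building the reduced multispectral image then amounts to keeping only the image planes at the selected wavelengths.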


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1288
Author(s):  
Cinmayii A. Garillos-Manliguez ◽  
John Y. Chiang

Fruit maturity is a critical factor in the supply chain, consumer preference, and the agriculture industry. Most classification methods for fruit maturity identify only two classes, ripe and unripe, but this paper estimates six maturity stages of papaya fruit. Deep learning architectures have gained respect and brought breakthroughs in unimodal processing. This paper proposes a novel non-destructive and multimodal classification approach using deep convolutional neural networks that estimates fruit maturity by feature concatenation of data acquired from two imaging modes: visible-light and hyperspectral imaging systems. Morphological changes in the sample fruits can be easily measured with RGB images, while spectral signatures that provide high sensitivity and high correlation with the internal properties of the fruit can be extracted from hyperspectral images with a wavelength range between 400 nm and 900 nm—factors that must be considered when building a model. This study further modified the architectures AlexNet, VGG16, VGG19, ResNet50, ResNeXt50, MobileNet, and MobileNetV2 to utilize multimodal data cubes composed of RGB and hyperspectral data for sensitivity analyses. These multimodal variants achieve F1 scores of up to 0.90 and a top-2 error rate of 1.45% for the classification of six stages. Overall, taking advantage of multimodal input coupled with powerful deep convolutional neural network models can classify fruit maturity even at the refined level of six stages. This indicates that multimodal deep learning architectures and multimodal imaging have great potential for real-time, in-field fruit maturity estimation, which can help determine optimal harvest time and serve other in-field industrial applications.
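The multimodal data cube the paper feeds to the modified networks amounts to channel-wise concatenation of co-registered images; a minimal sketch, with image size and band count chosen purely for illustration:

```python
import numpy as np

def fuse_modalities(rgb, hsi):
    """Stack an RGB image (H, W, 3) and a co-registered hyperspectral
    cube (H, W, B) into one multimodal input cube (H, W, 3 + B)."""
    if rgb.shape[:2] != hsi.shape[:2]:
        raise ValueError("modalities must be spatially co-registered")
    return np.concatenate([rgb.astype(np.float32), hsi.astype(np.float32)], axis=-1)

# Hypothetical sizes: a 64x64 crop with a 100-band cube covering 400-900 nm.
rgb = np.zeros((64, 64, 3), dtype=np.uint8)
hsi = np.zeros((64, 64, 100), dtype=np.float32)
cube = fuse_modalities(rgb, hsi)
```

The architectural change in the listed networks is then mainly widening the first convolutional layer to accept 3 + B input channels instead of 3.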


Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 742
Author(s):  
Canh Nguyen ◽  
Vasit Sagan ◽  
Matthew Maimaitiyiming ◽  
Maitiniyazi Maimaitijiang ◽  
Sourav Bhadra ◽  
...  

Early detection of grapevine viral diseases is critical for early interventions in order to prevent the disease from spreading to the entire vineyard. Hyperspectral remote sensing can potentially detect and quantify viral diseases in a nondestructive manner. This study utilized hyperspectral imagery at the plant level to identify and classify grapevines inoculated with the newly discovered DNA virus grapevine vein-clearing virus (GVCV) at the early asymptomatic stages. An experiment was set up at a test site at South Farm Research Center, Columbia, MO, USA (38.92° N, 92.28° W), with two grapevine groups, namely healthy and GVCV-infected, while other conditions were controlled. Images of each vine were captured by a SPECIM IQ 400–1000 nm hyperspectral sensor (Oulu, Finland). Hyperspectral images were calibrated and preprocessed to retain only grapevine pixels. A statistical approach was employed to discriminate the two reflectance spectra patterns of healthy and GVCV-infected vines. Disease-centric vegetation indices (VIs) were established and explored in terms of their importance to the classification power. Pixel-wise (spectral features) classification was performed in parallel with image-wise (joint spatial–spectral features) classification within a framework involving deep learning architectures and traditional machine learning.
The results showed that: (1) the discriminative wavelength regions included the 900–940 nm range in the near-infrared (NIR) region in vines 30 days after sowing (DAS) and the entire visual (VIS) region of 400–700 nm in vines 90 DAS; (2) the normalized pheophytization index (NPQI), fluorescence ratio index 1 (FRI1), plant senescence reflectance index (PSRI), anthocyanin index (AntGitelson), and water stress and canopy temperature (WSCT) measures were the most discriminative indices; (3) the support vector machine (SVM) was effective in VI-wise classification with smaller feature spaces, while the random forest (RF) classifier performed better in pixel-wise and image-wise classification with larger feature spaces; and (4) the automated 3D convolutional neural network (3D-CNN) feature extractor provided promising results over the 2D convolutional neural network (2D-CNN) in learning features from hyperspectral data cubes with a limited number of samples.
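Of the indices listed, the plant senescence reflectance index (PSRI) has a widely cited closed form, PSRI = (R678 - R500) / R750. A small NumPy sketch of computing it from a reflectance spectrum follows; the band sampling is hypothetical and the paper's exact band choices may differ:

```python
import numpy as np

def band_index(wavelengths, target):
    """Index of the sensor band closest to a target wavelength (nm)."""
    return int(np.argmin(np.abs(np.asarray(wavelengths) - target)))

def psri(reflectance, wavelengths):
    """PSRI = (R678 - R500) / R750, one commonly cited formulation."""
    r = np.asarray(reflectance, dtype=float)
    r678 = r[..., band_index(wavelengths, 678)]
    r500 = r[..., band_index(wavelengths, 500)]
    r750 = r[..., band_index(wavelengths, 750)]
    return (r678 - r500) / r750

# Hypothetical 2 nm sampling over 400-1000 nm, with a toy monotone spectrum.
wl = np.linspace(400, 1000, 301)
pixel = np.linspace(0.05, 0.5, 301)
value = psri(pixel, wl)
```

Because the index works elementwise over the last axis, the same function applies to a single pixel or, via broadcasting, to a whole calibrated reflectance cube.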


The Analyst ◽  
2019 ◽  
Vol 144 (21) ◽  
pp. 6438-6446
Author(s):  
Hideaki Kanayama ◽  
Te Ma ◽  
Satoru Tsuchikawa ◽  
Tetsuya Inagaki

From the viewpoints of combating illegal logging and examining wood properties, there is a growing demand for a wood species identification system.


2018 ◽  
Vol 2018 ◽  
pp. 1-10 ◽  
Author(s):  
Tao Zhang ◽  
Biyao Wang ◽  
Pengtao Yan ◽  
Kunlun Wang ◽  
Xu Zhang ◽  
...  

For the identification of salmon adulterated by water injection, a nondestructive identification method based on hyperspectral images was proposed. Hyperspectral images of salmon fillets in the visible and near-infrared range (390–1050 nm) were acquired with a hyperspectral imaging system. The original hyperspectral data were processed by principal component analysis (PCA). According to the image quality and PCA parameters, the second principal component (PC2) image was selected as the feature image, and the wavelengths corresponding to the local extrema of the feature image's weighting coefficients were extracted as feature wavelengths: 454.9, 512.3, and 569.1 nm. On this basis, color combined with spectra at the feature wavelengths, texture combined with spectra at the feature wavelengths, and color–texture combined with spectra at the feature wavelengths were independently used as inputs to model salmon adulteration identification based on the self-organizing feature map (SOM) network. The distances between neighboring neurons and the feature weights of the models were analyzed to visualize the identification results. The results showed that the SOM-based model with texture–color fusion features combined with spectra at the feature wavelengths as the input had the best performance, with an identification accuracy as high as 96.7%.
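A self-organizing map is small enough to sketch in full. The following minimal NumPy implementation illustrates the training loop only; the grid size, feature dimensionality, and schedule are hypothetical, and the paper's visualization of neighbor distances and feature weights is not reproduced:

```python
import numpy as np

def train_som(data, grid=(5, 5), epochs=20, lr=0.5, sigma=1.5, seed=0):
    """Minimal SOM: for each sample, find the best-matching unit (BMU) and
    pull the BMU and its grid neighbours towards the sample, with the
    learning rate decaying linearly over epochs."""
    rng = np.random.default_rng(seed)
    h, w = grid
    n, d = data.shape
    weights = rng.random((h, w, d))
    yy, xx = np.mgrid[0:h, 0:w]
    for epoch in range(epochs):
        decay = 1.0 - epoch / epochs
        for x in data[rng.permutation(n)]:
            dist = np.linalg.norm(weights - x, axis=-1)          # (h, w)
            bi, bj = np.unravel_index(np.argmin(dist), (h, w))   # BMU coords
            g = np.exp(-((yy - bi) ** 2 + (xx - bj) ** 2) / (2 * sigma ** 2))
            weights += (lr * decay) * g[..., None] * (x - weights)
    return weights

# Hypothetical feature vectors: spectra at 3 feature wavelengths plus
# a few color/texture statistics, 6 features in total.
rng = np.random.default_rng(2)
features = rng.random((40, 6))
som = train_som(features)
```

After training, samples mapping to distinct regions of the 5x5 grid can be read off as distinct classes, which is what the distance-between-neurons visualization exploits.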


Electronics ◽  
2018 ◽  
Vol 7 (12) ◽  
pp. 411 ◽  
Author(s):  
Emanuele Torti ◽  
Alessandro Fontanella ◽  
Antonio Plaza ◽  
Javier Plaza ◽  
Francesco Leporati

One of the most important tasks in hyperspectral imaging is the classification of the pixels in the scene in order to produce thematic maps. This problem can be typically solved through machine learning techniques. In particular, deep learning algorithms have emerged in recent years as a suitable methodology to classify hyperspectral data. Moreover, the high dimensionality of hyperspectral data, together with the increasing availability of unlabeled samples, makes deep learning an appealing approach to process and interpret those data. However, the limited number of labeled samples often complicates the exploitation of supervised techniques. Indeed, in order to guarantee a suitable precision, a large number of labeled samples is normally required. This hurdle can be overcome by resorting to unsupervised classification algorithms. In particular, autoencoders can be used to analyze a hyperspectral image using only unlabeled data. However, the high data dimensionality leads to prohibitive training times. In this regard, it is important to realize that the operations involved in autoencoder training are intrinsically parallel. Therefore, in this paper we present an approach that exploits multi-core and many-core devices in order to achieve efficient autoencoder training in hyperspectral imaging applications. Specifically, in this paper, we present new OpenMP and CUDA frameworks for autoencoder training. The obtained results show that the CUDA framework provides a speed-up of about two orders of magnitude as compared to an optimized serial processing chain.
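To make the "intrinsically parallel" point concrete, here is a minimal NumPy sketch of a tied-weights linear autoencoder trained by gradient descent. Every step reduces to dense matrix products, exactly the operations that OpenMP and CUDA accelerate; the sizes and learning rate are hypothetical, and the paper's actual autoencoders are deeper and nonlinear:

```python
import numpy as np

def train_autoencoder(X, k=3, lr=0.01, steps=200, seed=0):
    """Tied-weights linear autoencoder x -> xW -> xWW^T, trained by plain
    gradient descent on the mean squared reconstruction error."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = 0.1 * rng.standard_normal((d, k))
    losses = []
    for _ in range(steps):
        E = X - X @ W @ W.T                                  # reconstruction error
        losses.append((E ** 2).mean())
        grad = -2.0 * (X.T @ E @ W + E.T @ X @ W) / (n * d)  # dL/dW for tied weights
        W -= lr * grad
    return W, losses

# Hypothetical input: 100 pixels with 30 spectral bands each.
rng = np.random.default_rng(3)
X = rng.standard_normal((100, 30))
W, losses = train_autoencoder(X)
```

The matrix products dominate the cost per step and have no sequential dependencies within a step, which is why the same training loop maps well onto multi-core CPUs and GPUs.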


2019 ◽  
Vol 296 ◽  
pp. 126630 ◽  
Author(s):  
Pengcheng Nie ◽  
Jinnuo Zhang ◽  
Xuping Feng ◽  
Chenliang Yu ◽  
Yong He

Sensors ◽  
2020 ◽  
Vol 20 (22) ◽  
pp. 6666
Author(s):  
Kamil Książek ◽  
Michał Romaszewski ◽  
Przemysław Głomb ◽  
Bartosz Grabowski ◽  
Michał Cholewa

In recent years, growing interest in deep learning neural networks has raised the question of how they can be used for effective processing of the high-dimensional datasets produced by hyperspectral imaging (HSI). HSI, traditionally viewed as being within the scope of remote sensing, is also used in non-invasive substance classification. One of the areas of potential application is forensic science, where substance classification at crime scenes is important. An example problem from that area—blood stain classification—is a case study for the evaluation of methods that process hyperspectral data. To investigate the deep learning classification performance for this problem we have performed experiments on a dataset which has not been previously tested using this kind of model. This dataset consists of several images with blood and blood-like substances like ketchup, tomato concentrate, artificial blood, etc. To test both the classic approach to hyperspectral classification and a more realistic application-oriented scenario, we have prepared two different sets of experiments. In the first one, Hyperspectral Transductive Classification (HTC), both the training and the test set come from the same image. In the second one, Hyperspectral Inductive Classification (HIC), the test set is derived from a different image, which is more challenging for classifiers but more useful from the point of view of forensic investigators. We conducted the study using several architectures like 1D, 2D and 3D convolutional neural networks (CNN), a recurrent neural network (RNN) and a multilayer perceptron (MLP). The performance of the models was compared with baseline results of a Support Vector Machine (SVM). We have also presented a model evaluation method based on t-SNE and confusion matrix analysis that allows us to detect and eliminate some cases of model undertraining.
Our results show that in the transductive case, all models, including the MLP and the SVM, have comparable performance, with no clear advantage for the deep learning models. The Overall Accuracy range across all models is 98–100% for the easier image set, and 74–94% for the more difficult one. However, in the more challenging inductive case, selected deep learning architectures offer a significant advantage; their best Overall Accuracy is in the range of 57–71%, improving on the baseline set by the non-deep models by up to 9 percentage points. We have presented a detailed analysis of results and a discussion, including a summary of conclusions for each tested architecture. An analysis of per-class errors shows that the score for each class is highly model-dependent. Considering this and the fact that the best performing models come from two different architecture families (3D CNN and RNN), our results suggest that tailoring the deep neural network architecture to hyperspectral data is still an open problem.
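The HTC/HIC distinction is purely about where the test pixels come from; it can be sketched with two tiny split functions under assumed array shapes (pixel counts, band count, and class count are hypothetical):

```python
import numpy as np

def htc_split(pixels, labels, test_fraction=0.3, seed=0):
    """Transductive protocol: train and test pixels drawn from the SAME image."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(pixels))
    cut = int(len(pixels) * (1 - test_fraction))
    tr, te = idx[:cut], idx[cut:]
    return (pixels[tr], labels[tr]), (pixels[te], labels[te])

def hic_split(image_a, image_b):
    """Inductive protocol: train on one image, test on a DIFFERENT image."""
    return image_a, image_b

# Hypothetical data: two images flattened to (n_pixels, n_bands) with labels.
rng = np.random.default_rng(4)
img1 = (rng.random((500, 113)), rng.integers(0, 7, 500))
img2 = (rng.random((400, 113)), rng.integers(0, 7, 400))

train_set, test_set = htc_split(*img1)   # HTC: same-image split
train_b, test_b = hic_split(img1, img2)  # HIC: cross-image evaluation
```

HIC is harder because the test image carries its own acquisition conditions (illumination, substrate), so a classifier cannot benefit from having seen pixels of the same scene during training.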


2021 ◽  
Author(s):  
Dario Spiller ◽  
Luigi Ansalone ◽  
Nicolas Longépé ◽  
James Wheeler ◽  
Pierre Philippe Mathieu

<p>Over the last few years, wildfires have become more severe and destructive, with extreme consequences for local and global ecosystems. Fire detection and accurate monitoring of risk areas are becoming increasingly important. Satellite remote sensing offers unique opportunities for mapping, monitoring, and analysing the evolution of wildfires, providing helpful contributions to counteract dangerous situations.</p><p>Among the different remote sensing technologies, hyperspectral (HS) imagery presents nonpareil features in support of fire detection. In this study, HS images from the Italian satellite PRISMA (PRecursore IperSpettrale della Missione Applicativa) will be used. The PRISMA satellite, launched on 22 March 2019, carries a hyperspectral and panchromatic payload able to acquire images with worldwide coverage. The hyperspectral camera works in the spectral range of 0.4–2.5 µm, with 66 and 173 channels in the VNIR (Visible and Near InfraRed) and SWIR (Short-Wave InfraRed) regions, respectively. The average spectral resolution is less than 10 nm over the entire range, with an accuracy of ±0.1 nm, while the ground sampling distance of PRISMA images is about 5 m and 30 m for the panchromatic and hyperspectral cameras, respectively.</p><p>This work will investigate how PRISMA HS images can be used to support fire detection and related crisis management. To this aim, deep learning methodologies will be investigated, such as 1D convolutional neural networks to perform spectral analysis of the data and 3D convolutional neural networks to perform spatial and spectral analyses at the same time. Semantic segmentation of the input HS data will be discussed, in which an output label will be associated with each pixel of the input image.
The overall goal of this work is to highlight how PRISMA hyperspectral data can contribute to remote sensing and Earth-observation data analysis for natural hazard and risk studies, focusing especially on wildfires, also considering the benefits with respect to standard multi-spectral imagery or previous hyperspectral sensors such as Hyperion.</p><p>The contributions of this work to the state of the art are the following:</p><ul><li>Demonstrating the advantages of using PRISMA HS data over multi-spectral data.</li> <li>Discussing the potential of deep learning methodologies based on 1D and 3D convolutional neural networks to capture spectral (and, in the 3D case, spatial) dependencies, which is crucial when dealing with HS images.</li> <li>Discussing the possibility and benefits of integrating an HS-based approach into future monitoring systems for wildfire alerts and disasters.</li> <li>Discussing the opportunity to design and develop future HS remote sensing missions specifically dedicated to fire detection with on-board analysis.</li> </ul><p>To conclude, this work will raise awareness of the potential of PRISMA HS data for disaster monitoring, with a special focus on wildfires.</p>
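The 1D-versus-3D CNN choice discussed above corresponds to two different input shapes extracted from the same hyperspectral cube. A small NumPy sketch, using a hypothetical PRISMA-like band count (66 VNIR + 173 SWIR = 239 channels) and an arbitrary window size:

```python
import numpy as np

def pixel_spectrum(cube, i, j):
    """1D-CNN input: the spectral vector of one pixel, shape (bands,)."""
    return cube[i, j, :]

def spatial_spectral_patch(cube, i, j, k=5):
    """3D-CNN input: a k x k spatial window with all bands, shape (k, k, bands).
    Pixels near the image edge are handled by reflecting the border."""
    r = k // 2
    padded = np.pad(cube, ((r, r), (r, r), (0, 0)), mode="reflect")
    return padded[i:i + k, j:j + k, :]

# Hypothetical 100x100 scene with 239 spectral channels.
cube = np.zeros((100, 100, 239), dtype=np.float32)
spec = pixel_spectrum(cube, 10, 10)
patch = spatial_spectral_patch(cube, 0, 0)
```

Per-pixel semantic segmentation then classifies each such input independently, producing one label per pixel of the scene.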


Cancers ◽  
2021 ◽  
Vol 13 (18) ◽  
pp. 4593
Author(s):  
Cho-Lun Tsai ◽  
Arvind Mukundan ◽  
Chen-Shuan Chung ◽  
Yi-Hsun Chen ◽  
Yao-Kuang Wang ◽  
...  

This study uses hyperspectral imaging (HSI) and a deep learning diagnosis model that can identify the stage of esophageal cancer and mark the lesion locations. The model simulates spectral data from the image using an algorithm developed in this study, which is combined with deep learning for the classification and diagnosis of esophageal cancer using a single-shot multibox detector (SSD)-based identification system. A total of 155 white-light imaging (WLI) and 153 narrow-band imaging (NBI) endoscopic images of esophageal cancer were used to evaluate the prediction model. The algorithm took 19 s to predict the results of the 308 test images, and the accuracy on the WLI and NBI esophageal cancer images was 88% and 91%, respectively, when using the spectral data. With RGB images, by comparison, the accuracy was 83% for WLI and 86% for NBI. The accuracy for both WLI and NBI was thus increased by 5%, confirming that the prediction accuracy of the HSI detection method is significantly improved.

