Distributed Parallel Endmember Extraction of Hyperspectral Data Based on Spark

2016 ◽  
Vol 2016 ◽  
pp. 1-9
Author(s):  
Zebin Wu ◽  
Jinping Gu ◽  
Yonglong Li ◽  
Fu Xiao ◽  
Jin Sun ◽  
...  

Due to the increasing dimensionality and volume of remotely sensed hyperspectral data, developing acceleration techniques for massive hyperspectral image analysis is an important challenge. Cloud computing offers many possibilities for the distributed processing of hyperspectral datasets. This paper proposes a novel distributed parallel endmember extraction method based on iterative error analysis that applies cloud computing principles to efficiently process massive hyperspectral data. The proposed method takes advantage of the MapReduce programming model, the Hadoop Distributed File System (HDFS), and Apache Spark to realize a distributed parallel implementation of hyperspectral endmember extraction, which significantly accelerates hyperspectral processing and provides high-throughput access to large hyperspectral data. The experimental results, obtained by extracting endmembers from hyperspectral datasets on a cluster-based cloud computing platform, demonstrate the effectiveness and computational efficiency of the proposed method.
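The core of iterative error analysis maps naturally onto the map/reduce pattern the abstract describes: score every pixel's reconstruction error in parallel, then reduce to the worst-reconstructed pixel and promote it to endmember status. A minimal sketch in plain Python (standing in for Spark's distributed `map`/`reduce`, with hypothetical 3-band pixels and a simplified nearest-endmember residual in place of the full least-squares unmixing error) might look like:

```python
from functools import reduce

def residual(pixel, endmembers):
    # Simplified error: distance to the closest current endmember
    # (the real method uses a least-squares unmixing residual).
    return min(
        sum((p - e) ** 2 for p, e in zip(pixel, em)) ** 0.5
        for em in endmembers
    )

def iea_endmembers(pixels, n_endmembers):
    # Seed with the mean spectrum, then repeatedly promote the
    # worst-reconstructed pixel to endmember status.
    mean = [sum(band) / len(pixels) for band in zip(*pixels)]
    endmembers = [mean]
    while len(endmembers) < n_endmembers + 1:
        # "map": score every pixel; "reduce": keep the max-error one.
        scored = map(lambda px: (residual(px, endmembers), px), pixels)
        _, worst = reduce(lambda a, b: a if a[0] >= b[0] else b, scored)
        endmembers.append(list(worst))
    return endmembers[1:]  # drop the mean seed

# Hypothetical scene: two pure materials plus mixed pixels
pixels = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0),
          (0.5, 0.5, 0.0), (0.9, 0.1, 0.0)]
ems = iea_endmembers(pixels, 2)
```

In a Spark implementation the `map`/`reduce` pair would run as RDD transformations over partitions of the image stored on HDFS, so each iteration is a single distributed pass over the data.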

2021 ◽  
Vol 13 (2) ◽  
pp. 176
Author(s):  
Peng Zheng ◽  
Zebin Wu ◽  
Jin Sun ◽  
Yi Zhang ◽  
Yaoqin Zhu ◽  
...  

As the volume of remotely sensed data grows significantly, content-based image retrieval (CBIR) becomes increasingly important, especially on cloud computing platforms that facilitate processing and storing big data in a parallel and distributed way. This paper proposes a novel parallel CBIR system for a hyperspectral image (HSI) repository on cloud computing platforms, guided by unmixed spectral information, i.e., endmembers and their associated fractional abundances, to retrieve hyperspectral scenes. However, existing unmixing methods suffer an extremely high computational burden when extracting metadata from large-scale HSI data. To address this limitation, we implement a distributed and parallel unmixing method that operates on cloud computing platforms to accelerate the unmixing processing flow. In addition, we implement a globally standardized distributed HSI repository equipped with a large spectral library in a software-as-a-service mode, providing users with HSI storage, management, and retrieval services through web interfaces. Furthermore, the parallel implementation of unmixing processing is incorporated into the CBIR system to establish the parallel unmixing-based content retrieval system. The performance of the proposed parallel CBIR system was verified in terms of both unmixing efficiency and accuracy.
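The retrieval step described above compares scenes by their unmixed spectral content rather than raw pixels. A minimal sketch of that idea (with a hypothetical repository mapping scene IDs to fractional-abundance vectors over a shared endmember library, and cosine similarity as an assumed ranking measure) could be:

```python
import math

def cosine(a, b):
    # Cosine similarity between two abundance vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_abundances, repository, top_k=2):
    # Rank stored scenes by how similar their fractional-abundance
    # signatures are to the query scene's signature.
    ranked = sorted(repository.items(),
                    key=lambda kv: cosine(query_abundances, kv[1]),
                    reverse=True)
    return [scene_id for scene_id, _ in ranked[:top_k]]

# Hypothetical repository: scene id -> abundances over a shared
# endmember library (e.g., vegetation, soil, water)
repo = {"scene_a": [0.7, 0.2, 0.1],
        "scene_b": [0.1, 0.1, 0.8],
        "scene_c": [0.6, 0.3, 0.1]}
print(retrieve([0.65, 0.25, 0.10], repo))  # vegetation-dominated scenes rank first
```

In the full system, the abundance vectors would be produced by the distributed parallel unmixing stage and stored as metadata alongside each scene, so retrieval only touches these compact signatures instead of the raw hyperspectral cubes.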


NIR news ◽  
2014 ◽  
Vol 25 (7) ◽  
pp. 15-17 ◽  
Author(s):  
Y. Dixit ◽  
R. Cama ◽  
C. Sullivan ◽  
L. Alvarez Jubete ◽  
A. Ktenioudaki

Author(s):  
J. González Santiago ◽  
F. Schenkel ◽  
W. Gross ◽  
W. Middelmann

Abstract. The application of hyperspectral image analysis to land cover classification is mainly carried out in the presence of manually labeled data. The ground truth represents the distribution of the actual classes and is mostly derived from field-recorded information. Its manual generation is inefficient, tedious, and very time-consuming. The continuously increasing amount of proprietary and publicly available datasets makes it imperative to reduce these related costs. In addition, adequately equipped computer systems are more capable of identifying patterns and neighbourhood relationships than a human operator. Based on these facts, an unsupervised labeling approach is presented to automatically generate labeled images used during the training of a convolutional neural network (CNN) classifier. The proposed method begins with a segmentation stage that uses a version of the simple linear iterative clustering (SLIC) algorithm adapted to hyperspectral data. Subsequently, the Hierarchical Agglomerative Clustering (HAC) and Fuzzy C-Means (FCM) algorithms are employed to efficiently group similar superpixels based on their mutual distances. The complementary use of these two clustering techniques helps overcome class overlapping during image generation. Ultimately, a CNN classifier is trained on the computed image to predict classes pixel-wise on unseen datasets. The labeling results, obtained on two hyperspectral benchmark datasets, indicate that the approach is able to detect object boundaries, automatically assign class labels to the entire dataset, and classify new data with a prediction certainty of 90%. Additionally, the method achieves better classification accuracy and visual correspondence with reality than the ground truth images.
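The HAC stage described above merges superpixels whose mean spectra are close until a target number of label groups remains. A minimal single-linkage sketch (with hypothetical 3-band mean spectra for five superpixels; the paper's pipeline additionally runs SLIC beforehand and FCM as a complementary stage) might be:

```python
def hac(spectra, n_clusters):
    # Single-linkage agglomerative clustering of superpixel mean
    # spectra: start with singletons, repeatedly merge the closest
    # pair of clusters until n_clusters remain.
    clusters = [[i] for i in range(len(spectra))]

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    def linkage(ca, cb):
        # Single linkage: distance between closest members
        return min(dist(spectra[i], spectra[j]) for i in ca for j in cb)

    while len(clusters) > n_clusters:
        pairs = [(linkage(clusters[i], clusters[j]), i, j)
                 for i in range(len(clusters))
                 for j in range(i + 1, len(clusters))]
        _, i, j = min(pairs)
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

# Hypothetical mean spectra of five superpixels (3 bands each)
spectra = [(0.10, 0.90, 0.10), (0.12, 0.88, 0.10),  # vegetation-like
           (0.80, 0.40, 0.20), (0.82, 0.41, 0.20),  # soil-like
           (0.05, 0.10, 0.90)]                      # water-like
groups = hac(spectra, 3)
```

Each resulting group of superpixels would then receive one class label, producing the automatically labeled image used to train the CNN.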


2020 ◽  
Vol 12 (5) ◽  
pp. 843 ◽  
Author(s):  
Wenzhi Zhao ◽  
Xi Chen ◽  
Jiage Chen ◽  
Yang Qu

Hyperspectral image analysis plays an important role in agriculture, the mineral industry, and military applications. However, classifying high-dimensional hyperspectral data with few labeled samples is quite challenging. Generative adversarial networks (GANs) have been widely used for sample generation, but their generated samples often contain unwanted noise and suffer from uncontrolled divergence, making it difficult to obtain high-quality samples. To generate high-quality hyperspectral samples, a self-attention generative adversarial adaptation network (SaGAAN) is proposed in this work. It aims to increase the number and quality of training samples to avoid the impact of over-fitting. Compared to traditional GANs, the proposed method makes two contributions: (1) it includes a domain adaptation term that constrains generated samples to be more realistic with respect to the original ones; and (2) it uses the self-attention mechanism to capture long-range dependencies across the spectral bands and further improve the quality of generated samples. To demonstrate the effectiveness of the proposed SaGAAN, we tested it on two well-known hyperspectral datasets: Pavia University and Indian Pines. The experimental results illustrate that the proposed method can greatly improve classification accuracy, even with a small number of initial labeled samples.
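The self-attention mechanism in contribution (2) lets every spectral band's representation be reweighted by its similarity to all other bands, capturing long-range dependencies along the spectral axis. A bare-bones sketch of scaled dot-product self-attention (with hypothetical 2-dimensional per-band features and Q = K = V = input for simplicity; SaGAAN itself would learn separate projection weights) might look like:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(features):
    # Scaled dot-product self-attention over a sequence of band
    # features: each output row is a similarity-weighted average
    # of all rows, so distant but similar bands influence each other.
    d = len(features[0])
    out = []
    for q in features:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in features]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, features))
                    for j in range(d)])
    return out

# Hypothetical per-band feature vectors for four spectral bands:
# bands 0-1 are mutually similar, as are bands 2-3
bands = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
attended = self_attention(bands)
```

After attention, each band's feature is pulled toward the bands it most resembles, which is what lets the generator model spectral structure that spans widely separated wavelengths.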


2019 ◽  
Vol 5 (5) ◽  
pp. 52 ◽  
Author(s):  
Alberto Signoroni ◽  
Mattia Savardi ◽  
Annalisa Baronio ◽  
Sergio Benini

Modern hyperspectral imaging systems produce huge datasets that potentially convey a great abundance of information; such a resource, however, poses many challenges for the analysis and interpretation of these data. Deep learning approaches certainly offer a great variety of opportunities for solving classical imaging tasks and for approaching new, stimulating problems in the spatial–spectral domain. This is fundamental in the driving sector of Remote Sensing, where hyperspectral technology was born and has largely developed, but it is perhaps even more true in the multitude of current and evolving application sectors that involve these imaging technologies. The present review develops on two fronts: on the one hand, it is aimed at domain professionals who want an updated overview of how hyperspectral acquisition techniques can be combined with deep learning architectures to solve specific tasks in different application fields. On the other hand, we target machine learning and computer vision experts by giving them a picture of how deep learning technologies are applied to hyperspectral data from a multidisciplinary perspective. The presence of these two viewpoints and the inclusion of application fields other than Remote Sensing are the original contributions of this review, which also highlights some potential benefits and critical issues related to the observed development trends.

