A Multiscale Hierarchical Model for Sparse Hyperspectral Unmixing

2019 ◽  
Vol 11 (5) ◽  
pp. 500 ◽  
Author(s):  
Jinlin Zou ◽  
Jinhui Lan

Due to the complex background and low spatial resolution of hyperspectral sensors, the observed ground reflectance is often mixed at the pixel level. Hyperspectral unmixing (HU) is an active research topic in remote sensing because it can decompose the observed mixed pixel reflectance. Traditional sparse hyperspectral unmixing often leads to an ill-posed inverse problem, which can be circumvented by spatial regularization approaches. However, their adoption has come at the expense of a massive increase in computational cost. In this paper, a novel multiscale hierarchical model for sparse hyperspectral unmixing is proposed. The paper decomposes HU into two domain problems: one in an approximation-scale representation obtained by spatial resampling, and the other in the original domain. The use of multiscale spatial resampling for HU leads to an effective strategy for dealing with spectral variability and computational cost. Furthermore, the hierarchical strategy, with an abundance sparsity representation in each layer, aims to obtain the globally optimal solution. Both simulations and experiments on real hyperspectral data show that the proposed method outperforms previous methods in endmember extraction and abundance fraction estimation, and promotes piecewise homogeneity in the estimated abundance without compromising sharp discontinuities among neighboring pixels. Additionally, compared with total variation regularization, the proposed method effectively reduces the computational time.
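The coarse-to-fine idea can be sketched as follows. This is a minimal illustration, not the authors' algorithm: plain nonnegativity-constrained least squares (`scipy.optimize.nnls`) stands in for the sparse solver, and only the approximation-scale solve plus upsampling back to the original domain is shown.

```python
import numpy as np
from scipy.optimize import nnls

def unmix_multiscale(Y, E, factor=2):
    """Two-scale unmixing sketch. Y: (rows, cols, bands) cube,
    E: (bands, m) endmember library."""
    r, c, b = Y.shape
    # Approximation scale: average-pool the image to cut the pixel count.
    Yc = Y[:r // factor * factor, :c // factor * factor].reshape(
        r // factor, factor, c // factor, factor, b).mean(axis=(1, 3))
    # Solve a nonnegativity-constrained least-squares problem per coarse pixel.
    A_coarse = np.stack([[nnls(E, Yc[i, j])[0] for j in range(Yc.shape[1])]
                         for i in range(Yc.shape[0])])
    # Original domain: upsample the coarse abundances to initialize the
    # full-resolution problem (the refinement step itself is omitted).
    A0 = np.repeat(np.repeat(A_coarse, factor, axis=0), factor, axis=1)
    return A_coarse, A0
```

The pooling step is where the computational savings come from: the coarse problem has `factor**2` times fewer pixels than the original one.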

2020 ◽  
Vol 12 (17) ◽  
pp. 2834
Author(s):  
Simon Rebeyrol ◽  
Yannick Deville ◽  
Véronique Achard ◽  
Xavier Briottet ◽  
Stephane May

Hyperspectral unmixing is a widely studied field of research aiming at estimating the pure material signatures and their abundance fractions from hyperspectral images. Most spectral unmixing methods are based on prior knowledge and assumptions that induce limitations, such as the existence of at least one pure pixel for each material. This work presents a new approach aiming to overcome some of these limitations by introducing a co-registered panchromatic image into the unmixing process. Our method, called Heterogeneity-Based Endmember Extraction coupled with Local Constrained Non-negative Matrix Factorization (HBEE-LCNMF), has several steps: a first set of endmembers is estimated based on a heterogeneity criterion applied to the panchromatic image, followed by spectral clustering. Then, in order to complete this first endmember set, a local approach using a constrained non-negative matrix factorization strategy is proposed. The performance of our method, with regard to several criteria, is compared to that of state-of-the-art methods on synthetic and satellite data describing urban and periurban scenes, considering the French HYPXIM/HYPEX2 mission characteristics. The synthetic images are built with real spectral reflectances and do not contain a pure pixel for each endmember. The satellite images are simulated from airborne acquisitions with the spatial and spectral features of the mission. Our method demonstrates the benefit of a panchromatic image in reducing some well-known limitations in unmixing hyperspectral data. On synthetic data, our method reduces the spectral angle between the endmembers and the real material spectra by 46% compared to the Vertex Component Analysis (VCA) and N-finder (N-FINDR) methods. On real data, HBEE-LCNMF and the other methods yield equivalent performance, but the proposed method shows more robustness across the data sets than the tested state-of-the-art methods. Moreover, HBEE-LCNMF does not require knowing the number of endmembers in advance.
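A hedged sketch of the constrained factorization step: the endmembers already found from the panchromatic heterogeneity analysis stay frozen, while standard Lee–Seung multiplicative updates (a plausible stand-in, not the authors' exact update rules) estimate the remaining endmembers and all abundances, with a sum-to-one normalization on the abundances.

```python
import numpy as np

def local_constrained_nmf(Y, E_fixed, k_new, iters=200, eps=1e-9):
    """Constrained-NMF sketch: columns of E_fixed are frozen; k_new
    extra endmembers and the abundances A are estimated.
    Y: (bands, pixels) local region, E_fixed: (bands, k0)."""
    rng = np.random.default_rng(0)
    b, n = Y.shape
    k0 = E_fixed.shape[1]
    E_new = rng.random((b, k_new))
    A = rng.random((k0 + k_new, n))
    for _ in range(iters):
        E = np.hstack([E_fixed, E_new])
        # Multiplicative updates keep both factors nonnegative.
        A *= (E.T @ Y) / (E.T @ E @ A + eps)
        num = Y @ A.T
        den = E @ (A @ A.T) + eps
        E_new *= (num / den)[:, k0:]      # only the new columns move
        # Abundance sum-to-one constraint, enforced by normalization.
        A /= A.sum(axis=0, keepdims=True) + eps
    return np.hstack([E_fixed, E_new]), A
```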


Symmetry ◽  
2021 ◽  
Vol 13 (9) ◽  
pp. 1673
Author(s):  
Aili Wang ◽  
Chengyang Liu ◽  
Dong Xue ◽  
Haibin Wu ◽  
Yuxiao Zhang ◽  
...  

Although hyperspectral data provide rich feature information and are widely used in many fields, labeled samples remain scarce. Classification with small training samples is still a major challenge for deep-learning-based HSI classification. Recently, mining the relationships between samples has proven to be an effective strategy for training with small samples. However, this strategy requires high computational power, which increases the difficulty of training the network model. This paper proposes a modified depthwise separable relational network to deeply capture the similarity between samples. In addition, in order to effectively mine the similarity between samples, the feature vectors of support samples and query samples are symmetrically spliced; according to the metric distance between the symmetrical structures, the dependence of the model on samples can be effectively reduced. Firstly, in order to improve the training efficiency of the model, depthwise separable convolution is introduced to reduce its computational cost. Secondly, the Leaky-ReLU function effectively activates all neurons in each layer of the network, improving training efficiency. Finally, a cosine annealing learning-rate schedule is introduced to prevent the model from falling into a local optimum and to enhance its robustness. The experimental results on two widely used hyperspectral remote sensing image data sets (Pavia University and Kennedy Space Center) show that, compared with seven other advanced classification methods, the proposed method achieves better classification accuracy under the condition of limited training samples.
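Two of the ingredients above are easy to illustrate. The parameter counts below show why depthwise separable convolution is cheaper than a standard convolution, and the schedule function is the standard cosine annealing rule; the default rates are illustrative, not taken from the paper.

```python
import math

def standard_conv_params(c_in, c_out, k):
    """Weights in a standard k×k convolution layer (bias ignored)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """One depthwise k×k filter per input channel, then a 1×1 pointwise
    convolution to mix channels: far fewer weights than standard conv."""
    return c_in * k * k + c_in * c_out

def cosine_annealing_lr(step, total_steps, lr_max=1e-3, lr_min=1e-5):
    """Cosine annealing: the learning rate decays smoothly from lr_max
    to lr_min, which helps avoid poor local minima late in training."""
    cos = 0.5 * (1.0 + math.cos(math.pi * step / total_steps))
    return lr_min + (lr_max - lr_min) * cos
```

For a typical 3×3 layer with 64 input and 128 output channels, the depthwise separable variant uses roughly an eighth of the weights of the standard one.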


2021 ◽  
Vol 13 (2) ◽  
pp. 190
Author(s):  
Bouthayna Msellmi ◽  
Daniele Picone ◽  
Zouhaier Ben Rabah ◽  
Mauro Dalla Mura ◽  
Imed Riadh Farah

In this research study, we deal with remote sensing data analysis over the high-dimensional space formed by hyperspectral images. This task is generally complex due to the rich spectral and spatial content and the presence of mixed pixels. Thus, several spectral unmixing methods have been proposed to discriminate the mixed spectra by estimating the classes and their presence rates. However, while information related to mixed pixel composition is valuable for some applications, it is insufficient for many others: much more information about the spatial localization of the classes detected during the spectral unmixing process is needed. To solve this problem and specify the spatial location of the different land cover classes within a mixed pixel, sub-pixel mapping techniques were introduced. This manuscript presents a novel sub-pixel mapping process relying on K-SVD (K-singular value decomposition) dictionary learning and total variation as a spatial regularization parameter (SMKSVD-TV: Sub-pixel Mapping based on K-SVD dictionary learning and Total Variation). The proposed approach adopts total variation as a spatial regularization parameter, to keep edges smooth, and a pre-constructed spatial dictionary trained with the K-SVD algorithm to capture more spatial configurations at the sub-pixel level. It was tested and validated on three real hyperspectral data sets. The experimental results reveal that the attributes obtained by utilizing a learned spatial dictionary with isotropic total variation improve the sub-pixel spatial localization of the classes while taking pre-learned spatial patterns into account. It is also clear that the K-SVD dictionary learning algorithm can be applied to construct a spatial dictionary, particularly one tailored to each data set.
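The isotropic total variation used as the spatial regularizer can be sketched in a few lines; this is the generic forward-difference definition, not necessarily the paper's exact discretization.

```python
import numpy as np

def isotropic_tv(img):
    """Isotropic total variation of a 2-D map: the sum over pixels of the
    gradient magnitude sqrt(dx^2 + dy^2). Penalizing it favors piecewise
    smooth maps while still allowing sharp edges."""
    # Forward differences, replicating the last row/column at the border.
    dx = np.diff(img, axis=1, append=img[:, -1:])
    dy = np.diff(img, axis=0, append=img[-1:, :])
    return float(np.sqrt(dx**2 + dy**2).sum())
```

A constant map has zero TV, while a single vertical step across a 4×4 map contributes one unit of variation per row.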


2021 ◽  
Vol 13 (12) ◽  
pp. 2348
Author(s):  
Jingyan Zhang ◽  
Xiangrong Zhang ◽  
Licheng Jiao

Hyperspectral image unmixing is an important task in remote sensing image processing. It aims at decomposing the mixed pixels of an image to identify a set of constituent materials, called endmembers, and to obtain their proportions, named abundances. Recently, a number of algorithms based on sparse nonnegative matrix factorization (NMF) have been widely used in hyperspectral unmixing with good performance. However, these sparse NMF algorithms only consider the correlation characteristics of the abundances and usually take only the Euclidean structure of the data into account, which can make the extracted endmembers inaccurate. Therefore, to address this problem, we present a sparse NMF algorithm based on endmember independence and spatially weighted abundance. Firstly, it is assumed that the extracted endmembers should be independent of each other; thus, by utilizing the autocorrelation matrix of the endmembers, a constraint based on endmember independence is constructed in the model. In addition, two spatial weights for the abundance, based on neighborhood pixels and correlation coefficients, are proposed to make the estimated abundance smoother, so as to further explore the underlying structure of the hyperspectral data. The proposed algorithm not only considers the relevant characteristics of endmembers and abundances simultaneously, but also makes full use of the spatial-spectral information in the image, achieving better unmixing performance. The experimental results on several data sets further verify the effectiveness of the proposed algorithm.
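The two ingredients can be illustrated as follows. This is one plausible reading of the abstract rather than the authors' exact formulation: the independence term drives the normalized autocorrelation matrix of the endmembers toward the identity, and the spatial weight is a plain neighborhood mean.

```python
import numpy as np

def independence_penalty(E):
    """Penalize correlation between endmembers (columns of E) by driving
    the normalized autocorrelation matrix E^T E toward the identity."""
    En = E / (np.linalg.norm(E, axis=0, keepdims=True) + 1e-12)
    G = En.T @ En                      # autocorrelation of unit columns
    return float(np.linalg.norm(G - np.eye(G.shape[0])) ** 2)

def spatial_weight(abund, i, j, win=1):
    """Mean abundance over the (2*win+1)-pixel neighborhood of (i, j);
    using it as a target encourages spatially smooth abundance maps."""
    r0, r1 = max(i - win, 0), i + win + 1
    c0, c1 = max(j - win, 0), j + win + 1
    return float(abund[r0:r1, c0:c1].mean())
```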


Author(s):  
Tung T. Vu ◽  
Ha Hoang Kha

In this research work, we investigate precoder designs that maximize the energy efficiency (EE) of secure multiple-input multiple-output (MIMO) systems in the presence of an eavesdropper. In general, the secure energy efficiency maximization (SEEM) problem is highly nonlinear and nonconvex and hard to solve directly. To overcome this difficulty, we employ a branch-and-reduce-and-bound (BRB) approach to obtain the globally optimal solution. Since the BRB algorithm suffers from a high computational cost, its globally optimal solution mainly serves as a benchmark for the performance evaluation of suboptimal algorithms. Additionally, we develop a low-complexity approach using the well-known zero-forcing (ZF) technique to cancel the wiretapped signal, making the design problem more amenable. Using the ZF-based method, we transform the SEEM problem into a concave-convex fractional one, which can be solved by combining the Dinkelbach and bisection search algorithms. Simulation results show that the ZF-based method converges quickly and obtains a suboptimal EE performance close to the optimal EE performance of the BRB method. The ZF-based scheme also shows its advantages in terms of energy efficiency in comparison with the conventional secrecy rate maximization precoder design.
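The Dinkelbach step mentioned above is a generic routine for concave-convex fractional programs of the form max f(x)/g(x). The sketch below shows it on a scalar toy ratio; the inner solver `argmax_aux` is problem-specific and supplied by the caller (in the paper's setting it would be the bisection-based precoder subproblem).

```python
def dinkelbach(f, g, argmax_aux, tol=1e-8, iters=100):
    """Dinkelbach's method for max f(x)/g(x) with f concave, g convex > 0:
    repeatedly solve the auxiliary problem max_x f(x) - lam*g(x) and update
    lam = f(x)/g(x) until the auxiliary optimum F(lam) reaches zero."""
    lam = 0.0
    for _ in range(iters):
        x = argmax_aux(lam)            # inner (parametric) subproblem
        F = f(x) - lam * g(x)
        lam = f(x) / g(x)
        if abs(F) < tol:               # F = 0 at the optimal ratio
            break
    return x, lam
```

Toy check: maximizing log(1+x)/(1+x) over a grid converges to x = e - 1 with ratio 1/e, matching the known optimum of ln(t)/t at t = e.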


Author(s):  
Tu Huynh-Kha ◽  
Thuong Le-Tien ◽  
Synh Ha ◽  
Khoa Huynh-Van

This research work develops a new method to detect forgery in images by combining the wavelet transform and modified Zernike moments (MZMs), in which the features are defined from more pixels than in traditional Zernike moments. The tested image is first converted to grayscale, and a one-level Discrete Wavelet Transform (DWT) is applied to halve the image size in both dimensions. The approximation sub-band (LL), which is used for processing, is then divided into overlapping blocks, and modified Zernike moments are calculated in each block as feature vectors; the more pixels are considered, the richer the extracted features. Lexicographic sorting and correlation-coefficient computation on the feature vectors are the next steps used to find similar blocks. The purpose of applying the DWT to reduce the dimension of the image before using Zernike moments with updated coefficients is to improve the computational time and increase detection accuracy. Copied or duplicated parts are detected as traces of copy-move forgery based on a threshold on the correlation coefficients and confirmed by a Euclidean-distance constraint. Comparisons between the proposed method and related ones demonstrate the feasibility and efficiency of the proposed algorithm.
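The sorting-and-matching stage can be sketched as follows. Raw block pixel values stand in here for the modified Zernike moment features (and a squared-distance threshold for the correlation test), since the moment computation itself is beyond a short example; the pipeline shape — overlapping blocks, lexicographic sort, compare neighbors in sorted order — is the same.

```python
import numpy as np

def find_duplicate_blocks(gray, bs=8, thresh=1e-6):
    """Describe every overlapping bs×bs block by a feature vector, sort
    the vectors lexicographically so near-identical blocks become
    neighbors, then report neighbor pairs closer than thresh."""
    h, w = gray.shape
    feats, coords = [], []
    for i in range(h - bs + 1):
        for j in range(w - bs + 1):
            feats.append(gray[i:i + bs, j:j + bs].ravel())
            coords.append((i, j))
    feats = np.array(feats)
    order = np.lexsort(feats.T[::-1])   # row-wise lexicographic sort
    matches = []
    for a, b in zip(order[:-1], order[1:]):
        if np.sum((feats[a] - feats[b]) ** 2) < thresh:
            matches.append((coords[a], coords[b]))
    return matches
```

In the full method the matched pair would additionally be checked against a minimum Euclidean distance between block positions, to discard trivially overlapping blocks.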


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Israel F. Araujo ◽  
Daniel K. Park ◽  
Francesco Petruccione ◽  
Adenilton J. da Silva

Advantages in several fields of research and industry are expected with the rise of quantum computers. However, the computational cost of loading classical data into quantum computers can impose restrictions on possible quantum speedups. Known algorithms for creating arbitrary quantum states require quantum circuits with depth O(N) to load an N-dimensional vector. Here, we show that it is possible to load an N-dimensional vector with an exponential time advantage using a quantum circuit with polylogarithmic depth and entangled information in ancillary qubits. Our results show that we can efficiently load data into quantum devices using a divide-and-conquer strategy that exchanges computational time for space. We demonstrate a proof of concept on a real quantum device and present two applications for quantum machine learning. We expect that this new loading strategy will enable quantum speedups for tasks that require loading a significant volume of information into quantum devices.
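Amplitude encoding rests on a binary tree of rotation angles: each node splits the probability mass of its left and right halves, and the tree has log2(N) levels. The sketch below computes that tree classically (a standard construction used by state-preparation schemes generally, not the paper's circuit itself); the divide-and-conquer circuit evaluates the levels with the help of ancillary qubits, trading space for depth.

```python
import math

def angle_tree(amplitudes):
    """Return the Ry angles, root level first, for preparing a real
    nonnegative state whose length is a power of two. Each angle is
    2*acos(sqrt(p_left / p_node)) for the node's probability split."""
    probs = [a * a for a in amplitudes]
    levels = []
    while len(probs) > 1:
        angles, merged = [], []
        for k in range(0, len(probs), 2):
            total = probs[k] + probs[k + 1]
            left = probs[k] / total if total > 0 else 0.0
            angles.append(2.0 * math.acos(math.sqrt(left)))
            merged.append(total)
        levels.append(angles)
        probs = merged
    return levels[::-1]                 # root level first
```

A sequential circuit walks this tree level by level, one controlled rotation at a time, giving the O(N) depth quoted above; evaluating each level's rotations in parallel across ancillas is what brings the depth down to polylogarithmic.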


Symmetry ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 645
Author(s):  
Muhammad Farooq ◽  
Sehrish Sarfraz ◽  
Christophe Chesneau ◽  
Mahmood Ul Hassan ◽  
Muhammad Ali Raza ◽  
...  

Expectiles have gained considerable attention in recent years due to wide applications in many areas. In this study, the k-nearest neighbours approach, together with the asymmetric least squares loss function, called ex-kNN, is proposed for computing expectiles. Firstly, the effect of various distance measures on ex-kNN in terms of test error and computational time is evaluated. It is found that the Canberra, Lorentzian, and Soergel distance measures lead to the minimum test error, whereas the Euclidean, Canberra, and Average of (L1,L∞) measures lead to a low computational cost. Secondly, the performance of ex-kNN is compared with the existing packages er-boost and ex-svm for computing expectiles on nine real-life examples. Depending on the nature of the data, ex-kNN showed two to ten times better performance than er-boost and comparable performance with ex-svm regarding test error. Computationally, ex-kNN is found to be two to five times faster than ex-svm and much faster than er-boost, particularly in the case of high-dimensional data.
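A minimal sketch of the ex-kNN idea, assuming the straightforward combination of a kNN query with the asymmetric least-squares expectile (Euclidean distance only here; the paper also evaluates Canberra, Lorentzian, Soergel, and other measures, and the function names are illustrative):

```python
import numpy as np

def expectile(values, tau, iters=100):
    """tau-expectile of a sample: the minimizer of the asymmetric least
    squares loss sum_i w_i (v_i - m)^2 with w_i = tau if v_i > m,
    else 1 - tau; found here by iterated weighted averaging."""
    m = float(np.mean(values))
    for _ in range(iters):
        w = np.where(values > m, tau, 1.0 - tau)
        m = float(np.sum(w * values) / np.sum(w))
    return m

def ex_knn(X, y, x0, k, tau):
    """Predict the tau-expectile at x0 from the responses of the
    k nearest neighbours of x0 in X."""
    d = np.linalg.norm(X - x0, axis=1)
    idx = np.argsort(d)[:k]
    return expectile(y[idx], tau)
```

With tau = 0.5 the expectile reduces to the plain mean, so ex-kNN with tau = 0.5 is ordinary kNN regression.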


2021 ◽  
Vol 13 (13) ◽  
pp. 2559
Author(s):  
Daniele Cerra ◽  
Miguel Pato ◽  
Kevin Alonso ◽  
Claas Köhler ◽  
Mathias Schneider ◽  
...  

Spectral unmixing represents both an application per se and a pre-processing step for several applications involving data acquired by imaging spectrometers. However, there is still a lack of publicly available reference data sets suitable for the validation and comparison of different spectral unmixing methods. In this paper, we introduce the DLR HyperSpectral Unmixing (DLR HySU) benchmark dataset, acquired over German Aerospace Center (DLR) premises in Oberpfaffenhofen. The dataset includes airborne hyperspectral and RGB imagery of targets of different materials and sizes, complemented by simultaneous ground-based reflectance measurements. The DLR HySU benchmark allows a separate assessment of all spectral unmixing main steps: dimensionality estimation, endmember extraction (with and without pure pixel assumption), and abundance estimation. Results obtained with traditional algorithms for each of these steps are reported. To the best of our knowledge, this is the first time that real imaging spectrometer data with accurately measured targets are made available for hyperspectral unmixing experiments. The DLR HySU benchmark dataset is openly available online and the community is welcome to use it for spectral unmixing and other applications.


2021 ◽  
Vol 11 (2) ◽  
pp. 813
Author(s):  
Shuai Teng ◽  
Zongchao Liu ◽  
Gongfa Chen ◽  
Li Cheng

This paper compares the crack detection performance (in terms of precision and computational cost) of YOLO_v2 using 11 feature extractors, which provides a basis for realizing fast and accurate crack detection on concrete structures. Cracks on concrete structures are an important indicator for assessing their durability and safety, and real-time crack detection is an essential task in structural maintenance. Object detection algorithms, especially the YOLO series of networks, have significant potential in crack detection, and the feature extractor is the most important component of YOLO_v2. Hence, this paper employs 11 well-known CNN models as the feature extractor of YOLO_v2 for crack detection. The results confirm that different feature extractor models of the YOLO_v2 network lead to different detection results: the AP value is 0.89, 0, and 0 for ‘resnet18’, ‘alexnet’, and ‘vgg16’, respectively, while ‘googlenet’ (AP = 0.84) and ‘mobilenetv2’ (AP = 0.87) demonstrate comparable AP values. In terms of computing speed, ‘alexnet’ takes the least computational time, with ‘squeezenet’ and ‘resnet18’ ranked second and third, respectively; therefore, ‘resnet18’ is the best feature extractor model in terms of precision and computational cost. Additionally, a parametric study (on the influence of the training epoch, feature extraction layer, and testing image size) shows that these parameters indeed have an impact on the detection results. It is demonstrated that excellent crack detection results can be achieved by the YOLO_v2 detector when an appropriate feature extractor model, training epoch, feature extraction layer, and testing image size are chosen.

