OCTAVVS: A Graphical Toolbox for High-Throughput Preprocessing and Analysis of Vibrational Spectroscopy Imaging Data

2020 ◽  
Vol 3 (2) ◽  
pp. 34
Author(s):  
Carl Troein ◽  
Syahril Siregar ◽  
Michiel Op De Beeck ◽  
Carsten Peterson ◽  
Anders Tunlid ◽  
...  

Modern vibrational spectroscopy techniques enable the rapid collection of thousands of spectra in a single hyperspectral image, allowing researchers to study spatially heterogeneous samples at micrometer resolution. A number of algorithms have been developed to correct for effects such as atmospheric absorption, light scattering by cellular structures and varying baseline levels. After preprocessing, spectra are commonly decomposed and clustered to reveal informative patterns and subtle spectral changes. Several of these steps are slow, labor-intensive and require programming skills to make use of published algorithms and code. We here present a free and platform-independent graphical toolbox that allows rapid preprocessing of large sets of spectroscopic images, including atmospheric correction and a new, faster algorithm for correcting resonant Mie scattering. The software also includes modules for decomposition into constituent spectra using the popular Multivariate Curve Resolution–Alternating Least Squares (MCR-ALS) algorithm, augmented by region-of-interest selection, as well as clustering and cluster annotation.
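The MCR-ALS decomposition alternates least-squares solves for concentration profiles and constituent spectra under non-negativity constraints. A minimal Python sketch of the idea (this is not the OCTAVVS implementation; enforcing non-negativity by clipping an unconstrained pseudoinverse solve is a crude stand-in for true non-negative least squares):

```python
import numpy as np

def mcr_als(D, S0, n_iter=50, tol=1e-8):
    """Minimal MCR-ALS sketch: factor D (pixels x wavenumbers) into
    concentrations C (pixels x k) and spectra S (k x wavenumbers),
    alternating least squares with non-negativity enforced by clipping."""
    S = S0.copy()
    prev = np.inf
    for _ in range(n_iter):
        # Solve D ~ C @ S for C, then for S, clipping negatives to zero.
        C = np.clip(D @ np.linalg.pinv(S), 0, None)
        S = np.clip(np.linalg.pinv(C) @ D, 0, None)
        resid = np.linalg.norm(D - C @ S)
        if abs(prev - resid) < tol:
            break
        prev = resid
    return C, S

# Toy example: two Gaussian "pure" spectra mixed across 100 pixels.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 80)
pure = np.vstack([np.exp(-(x - 0.3) ** 2 / 0.005),
                  np.exp(-(x - 0.7) ** 2 / 0.005)])
conc = rng.random((100, 2))
D = conc @ pure
C, S = mcr_als(D, S0=pure + 0.05 * rng.random(pure.shape))
print(np.linalg.norm(D - C @ S))  # small reconstruction residual
```

In practice the initial estimate S0 would come from purest-pixel selection or a region of interest, and dedicated NNLS solvers replace the clipping step.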


2011 ◽  
Vol 108 ◽  
pp. 224-229 ◽  
Author(s):  
Ping Wang ◽  
Zhu Rong Xing ◽  
You Gui Feng

The HJ-1 hyperspectral imaging radiometer is carried on the HJ-1A satellite. It provides images in about 115 spectral bands between 0.45 and 0.95 μm, with a spatial resolution of 100 meters and a revisit time of 96 hours. The 6S and FLAASH atmospheric correction models were applied to the imagery. The results showed that both models could correct the radiometric distortion caused by atmospheric effects, but their correction results differed. 6S agreed better with the in-situ vegetation spectrum and produced higher image quality, making it better suited for further applications.
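The agreement between corrected imagery and the in-situ vegetation spectrum can be quantified with a simple band-wise error metric. A hedged sketch with invented reflectance values (the numbers and the `spectral_rmse` helper are illustrative assumptions, not data or code from the study):

```python
import numpy as np

def spectral_rmse(corrected, in_situ):
    """Root-mean-square difference between an atmospherically corrected
    reflectance spectrum and a field-measured reference spectrum."""
    corrected = np.asarray(corrected, dtype=float)
    in_situ = np.asarray(in_situ, dtype=float)
    return float(np.sqrt(np.mean((corrected - in_situ) ** 2)))

# Hypothetical vegetation reflectances over a few of the 115 bands.
reference = np.array([0.04, 0.05, 0.06, 0.30, 0.45])   # in-situ
model_a   = np.array([0.05, 0.05, 0.07, 0.28, 0.44])   # e.g. one model's output
model_b   = np.array([0.08, 0.09, 0.10, 0.22, 0.38])   # e.g. the other's
print(spectral_rmse(model_a, reference) < spectral_rmse(model_b, reference))  # True
```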


Plants ◽  
2021 ◽  
Vol 10 (11) ◽  
pp. 2291
Author(s):  
Jae Gyeong Jung ◽  
Ki Eun Song ◽  
Sun Hee Hong ◽  
Sang In Shim

Since hyperspectral technology was first applied to agriculture, many scientists have studied its use in crop diagnosis. However, owing to the properties of optical devices, the reflectance values obtained vary with the image acquisition conditions, and there is as yet no optimized method for minimizing such technical errors in hyperspectral imaging. This study was therefore conducted to find appropriate image acquisition conditions that reflect the growth status of wheat grown under different nitrogen fertilization regimes. The experiment comprised six plots with N application levels of 145.6 kg N ha−1 (N1), 109.2 kg N ha−1 (N2), 91.0 kg N ha−1 (N3), 72.8 kg N ha−1 (N4), 54.6 kg N ha−1 (N5) and 36.4 kg N ha−1 (N6). Hyperspectral images were acquired at shooting angles of 105° and 125° from the surface, and the spike, flag leaf and second uppermost leaf were each divided into five parts from apex to base for image analysis. Growth analysis at heading showed that plant height, LAI and SPAD in N6 were 85.6%, 44.1% and 64.9% of those in N1, respectively. Leaf nitrogen content in N6 decreased by 55.2% compared to N1, and yield in N6 was 44.9% of that in N1. Based on the vegetation indices obtained from hyperspectral reflectances at the heading stage, the spike was not suitable for analysis. For the flag leaf and the second uppermost leaf, vegetation indices from spectral data taken at 105° were more appropriate for acquiring imaging data, clearly separating the effects of fertilization level. Analysis of regional variation within a leaf showed that a region of interest (ROI) close to the apex of the flag leaf or the base of the second uppermost leaf gave a high coefficient of determination between fertilization level and the vegetation indices, effectively reflecting the status of the wheat.
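Vegetation indices such as NDVI are simple band ratios of the hyperspectral reflectances, and the coefficient of determination against fertilization level can then be computed per ROI. An illustrative sketch with invented reflectance values (the band choice and all numbers below are assumptions, not the study's data):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from reflectance bands."""
    return (nir - red) / (nir + red)

# Hypothetical mean ROI reflectances for the six N levels (N1..N6);
# real values would come from the hyperspectral images themselves.
n_rate = np.array([145.6, 109.2, 91.0, 72.8, 54.6, 36.4])  # kg N/ha
red    = np.array([0.040, 0.045, 0.050, 0.060, 0.070, 0.085])
nir    = np.array([0.520, 0.500, 0.480, 0.450, 0.420, 0.380])
vi = ndvi(nir, red)

# Coefficient of determination (R^2) between N rate and the index.
r = np.corrcoef(n_rate, vi)[0, 1]
print(round(r ** 2, 3))
```

A higher R² for one ROI or shooting angle than another is the kind of evidence the study uses to select acquisition conditions.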


2008 ◽  
Vol 22 (9) ◽  
pp. 482-490 ◽  
Author(s):  
Howland D. T. Jones ◽  
David M. Haaland ◽  
Michael B. Sinclair ◽  
David K. Melgaard ◽  
Mark H. Van Benthem ◽  
...  

Author(s):  
Aoife Gowen ◽  
Jun-Li Xu ◽  
Ana Herrero-Langreo

Applications of hyperspectral imaging (HSI) to the quantitative and qualitative measurement of samples have grown widely in recent years, due mainly to the improved performance and lower cost of imaging spectroscopy instrumentation. Data sampling is a crucial yet often overlooked step in hyperspectral image analysis, which impacts the subsequent results and their interpretation. In the selection of pixel spectra for the calibration of classification models, the spatial information in HSI data can be exploited. In this paper, a variety of sampling strategies for selection of pixel spectra are presented, exemplified through five case studies. The strategies are compared in terms of the proportion of global variability captured, practicality and predictive model performance. The use of variographic analysis as a guide to the spatial segmentation prior to sampling leads to the selection of representative subsets while reducing the variation in model performance parameters over repeated random selection.
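The contrast between simple random selection and spatially informed selection of pixel spectra can be sketched in a few lines. Below, equal draws from fixed spatial tiles serve as a crude stand-in for variogram-guided spatial segmentation (the tile size and cube dimensions are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy hyperspectral cube: 60 x 60 pixels, 10 bands.
cube = rng.random((60, 60, 10))
H, W, B = cube.shape
pixels = cube.reshape(-1, B)

def random_sample(n):
    """Simple random selection of n pixel spectra."""
    idx = rng.choice(H * W, size=n, replace=False)
    return pixels[idx]

def stratified_sample(n, block=15):
    """Spatially stratified selection: draw equally from each
    block x block tile so every image region is represented."""
    tiles = [(r, c) for r in range(0, H, block) for c in range(0, W, block)]
    per_tile = max(1, n // len(tiles))
    chosen = []
    for r, c in tiles:
        rr = rng.integers(r, r + block, size=per_tile)
        cc = rng.integers(c, c + block, size=per_tile)
        chosen.append(cube[rr, cc])
    return np.vstack(chosen)

print(random_sample(64).shape, stratified_sample(64).shape)
```

With real data, the tile boundaries would instead follow the spatial segmentation suggested by variographic analysis.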


2020 ◽  
Vol 2 (1) ◽  
Author(s):  
Marta M Correia ◽  
Timothy Rittman ◽  
Christopher L Barnes ◽  
Ian T Coyle-Gilchrist ◽  
Boyd Ghosh ◽  
...  

Abstract The early and accurate differential diagnosis of parkinsonian disorders remains a significant challenge for clinicians. In recent years, a number of studies have used magnetic resonance imaging data combined with machine learning and statistical classifiers to successfully differentiate between different forms of Parkinsonism. However, several questions and methodological issues remain to be addressed in order to minimize bias and artefact-driven classification. In this study, we compared different approaches for feature selection, as well as different magnetic resonance imaging modalities, with well-matched patient groups and tight control of data quality issues related to patient motion. Our sample was drawn from a cohort of 69 healthy controls and patients with idiopathic Parkinson’s disease (n = 35), progressive supranuclear palsy Richardson’s syndrome (n = 52) and corticobasal syndrome (n = 36). Participants underwent standardized T1-weighted and diffusion-weighted magnetic resonance imaging. Strict data quality control and group matching reduced the control and patient numbers to 43, 32, 33 and 26, respectively. We compared two methods for feature selection and dimensionality reduction: whole-brain principal components analysis, and an anatomical region-of-interest based approach. In both cases, support vector machines were used to construct a statistical model for pairwise classification of healthy controls and patients. The accuracy of each model was estimated using leave-two-out cross-validation, as well as an independent validation using a different set of subjects. Our cross-validation results suggest that principal components analysis for feature extraction provides higher classification accuracies than a region-of-interest based approach.
However, the differences between the two feature extraction methods were significantly reduced when an independent sample was used for validation, suggesting that the principal components analysis approach may be more vulnerable to overfitting with cross-validation. Both T1-weighted and diffusion magnetic resonance imaging data could be used to successfully differentiate between subject groups, with neither modality outperforming the other across all pairwise comparisons in the cross-validation analysis. However, features obtained from diffusion magnetic resonance imaging data resulted in significantly higher classification accuracies when an independent validation cohort was used. Overall, our results support the use of statistical classification approaches for differential diagnosis of parkinsonian disorders. However, classification accuracy can be affected by group size, age, sex and movement artefacts. With appropriate controls and out-of-sample cross validation, diagnostic biomarker evaluation including magnetic resonance imaging based classifiers may be an important adjunct to clinical evaluation.
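The PCA-plus-SVM pipeline with leave-two-out cross-validation can be sketched with scikit-learn. Here synthetic features stand in for the imaging data, and `LeavePOut(2)`, which holds out every possible pair, is a generic stand-in for the study's leave-two-out scheme:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import LeavePOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for voxel-wise imaging features from two groups;
# the real study used T1-weighted and diffusion MRI features.
X, y = make_classification(n_samples=20, n_features=100, n_informative=10,
                           random_state=0)

# PCA for dimensionality reduction, then a linear SVM, evaluated with
# leave-two-out cross-validation (one pair of subjects held out per fold).
model = make_pipeline(StandardScaler(), PCA(n_components=5),
                      SVC(kernel="linear"))
scores = cross_val_score(model, X, y, cv=LeavePOut(2))
print(round(scores.mean(), 2))
```

Fitting PCA inside the pipeline ensures the components are re-estimated on each training fold, which is exactly the discipline needed to avoid the optimistic bias the abstract warns about.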


Microscopy ◽  
2019 ◽  
Vol 69 (2) ◽  
pp. 110-122 ◽  
Author(s):  
Shunsuke Muto ◽  
Motoki Shiga

Abstract The combination of scanning transmission electron microscopy (STEM) with analytical instruments has become one of the most indispensable analytical tools in materials science. A set of microscopic image/spectral intensities collected from many sampling points in a region of interest, in which multiple physical/chemical components may be spatially and spectrally entangled, can be a rich source of information about a material. Unfolding such entangled image and spectral information into its individual pure components necessitates statistical treatment based on informatics. These computer-aided schemes are referred to as multivariate curve resolution, blind source separation or hyperspectral image analysis, depending on the application field, and are classified as a subset of machine learning. In this review, we introduce non-negative matrix factorization, one of these unfolding techniques, to solve a wide variety of problems associated with the analysis of materials, particularly those related to STEM, electron energy-loss spectroscopy and energy-dispersive X-ray spectroscopy. The review commences with a description of the basic concept and of the advantages and drawbacks of the technique, then presents several additional strategies to overcome existing problems, together with extensions to more general tensor decomposition schemes for more flexible applications.
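Non-negative matrix factorization of a spectrum-image is directly available in scikit-learn. A minimal sketch on synthetic data (the Gaussian "pure" components and all parameters below are illustrative choices, not from the review):

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)

# Toy spectrum-image: 200 sampling points x 64 energy channels,
# mixed from two non-negative "pure" spectral components.
channels = np.linspace(0, 1, 64)
components = np.vstack([np.exp(-(channels - 0.25) ** 2 / 0.01),
                        np.exp(-(channels - 0.65) ** 2 / 0.01)])
weights = rng.random((200, 2))
X = weights @ components + 0.01 * rng.random((200, 64))

# NMF factors X ~ W @ H with W, H >= 0: W holds per-point abundances,
# H the unmixed spectral components.
model = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X)
H = model.components_
print(W.shape, H.shape)
```

The non-negativity constraint is what makes the recovered rows of H physically interpretable as spectra, in contrast to sign-indefinite factorizations such as PCA.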

