A compact hyperspectral camera for measurement of perfusion parameters in medicine

2018 ◽  
Vol 63 (5) ◽  
pp. 519-527 ◽  
Author(s):  
Axel Kulcke ◽  
Amadeus Holmer ◽  
Philip Wahl ◽  
Frank Siemers ◽  
Thomas Wild ◽  
...  

Abstract Worldwide, chronic wounds remain a major and growing problem in medicine, causing protracted suffering for patients and enormous costs. Besides conventional wound treatment, methods such as oxygen therapy and cold plasma technology have been tested and shown to improve wound perfusion and healing potential, but to date these methods are not sufficiently validated and accepted for clinical practice. Using hyperspectral imaging technology in the visible (VIS) and near-infrared (NIR) regions with high spectral and spatial resolution, perfusion parameters of tissue and wounds can be determined. We present a new compact hyperspectral camera that can be used in clinical practice. From hyperspectral data, the hemoglobin oxygenation (StO2), the relative concentration of hemoglobin [tissue hemoglobin index (THI)] and the so-called NIR perfusion index can be determined. The first two parameters are calculated from the VIS part of the spectrum and represent the perfusion of superficial tissue layers, whereas the NIR perfusion index is calculated from the NIR part and represents the perfusion of deeper layers. First clinical measurements of transplanted flaps and chronic ulcer wounds show that the perfusion level can be determined quantitatively, allowing sensitive evaluation and monitoring for the optimization of wound treatment planning and for the validation of new treatment methods.
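The StO2 parameter is recoverable from the VIS spectrum because oxy- and deoxyhemoglobin absorb differently at different wavelengths. As a minimal illustration of that idea only (not the camera's actual algorithm), a two-wavelength linear unmixing with made-up, illustrative extinction coefficients looks like this:

```python
# Toy two-wavelength estimate of hemoglobin oxygenation from absorbance.
# Didactic sketch, NOT the device's algorithm: extinction coefficients are
# illustrative round numbers, not tabulated values.

def sto2_two_wavelength(a1, a2, e_hbo2=(1.0, 3.0), e_hb=(3.0, 1.0)):
    """Solve the 2x2 system a = c_hbo2 * e_hbo2 + c_hb * e_hb for the two
    chromophore concentrations; return the saturation c_hbo2/(c_hbo2+c_hb)."""
    (e11, e12), (e21, e22) = e_hbo2, e_hb
    det = e11 * e22 - e12 * e21
    c_hbo2 = (a1 * e22 - a2 * e21) / det
    c_hb = (e11 * a2 - e12 * a1) / det
    total = c_hbo2 + c_hb
    return c_hbo2 / total if total else 0.0

# A fully oxygenated toy spectrum (absorbance proportional to e_hbo2 alone).
print(sto2_two_wavelength(1.0, 3.0))  # prints 1.0
```

Real devices fit many wavelengths against tabulated extinction spectra and correct for scattering; the 2x2 inversion only shows why two spectrally distinct absorbers make the saturation recoverable.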

Plants ◽  
2021 ◽  
Vol 10 (2) ◽  
pp. 341
Author(s):  
Pauliina Salmi ◽  
Matti A. Eskelinen ◽  
Matti T. Leppänen ◽  
Ilkka Pölönen

Spectral cameras are traditionally used in remote sensing of microalgae, and increasingly also in laboratory-scale applications, to study and monitor algal biomass in cultures. Practical and cost-efficient protocols for collecting and analyzing hyperspectral data are currently needed. The purpose of this study was to test a commercial, easy-to-use hyperspectral camera for monitoring the growth of different algae strains in liquid samples. Indices calculated from wavebands of transmission imaging were compared against algae abundance and wet biomass obtained from an electronic cell counter, chlorophyll a concentration, and chlorophyll fluorescence. A ratio of selected near-infrared and red wavebands turned out to be a powerful index because it was simple to calculate and interpret, yet it yielded strong strain-specific correlations with abundance (0.85 < r < 0.96, p < 0.001). When all the indices of the forms A/B, A/(A + B) or (A − B)/(A + B), where A and B were wavebands of the spectral camera, were scrutinized, good correlations with the biomass of each strain were found among them (0.66 < r < 0.98, p < 0.001). Comparison of the near-infrared/red index to chlorophyll a concentration demonstrated that small-celled strains had higher chlorophyll absorbance than strains with larger cells. The comparison of spectral imaging to chlorophyll fluorescence was done for one strain of green algae and yielded strong correlations (near-infrared/red, r = 0.97, p < 0.001). Consequently, we describe a simple imaging setup and information extraction based on vegetation indices that could be used to monitor algae cultures.
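The waveband indices tested above (A/B, A/(A + B), (A − B)/(A + B)) and their correlation with abundance can be sketched in a few lines of plain Python; the transmission values and abundances below are hypothetical, not the study's data:

```python
import math

def band_index(a, b, kind="ratio"):
    """Waveband indices of the forms tested: A/B, A/(A+B), (A-B)/(A+B)."""
    if kind == "ratio":
        return a / b
    if kind == "fraction":
        return a / (a + b)
    if kind == "normalized":
        return (a - b) / (a + b)
    raise ValueError(kind)

def pearson_r(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

# Hypothetical NIR and red transmissions for five culture samples.
nir = [0.80, 0.72, 0.65, 0.55, 0.40]
red = [0.78, 0.60, 0.45, 0.30, 0.15]
index = [band_index(a, b) for a, b in zip(nir, red)]
abundance = [1.0, 1.3, 1.6, 2.0, 2.9]  # hypothetical cells/mL (x1e6)
r = pearson_r(index, abundance)
```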


2021 ◽  
Author(s):  
Dario Spiller ◽  
Luigi Ansalone ◽  
Nicolas Longépé ◽  
James Wheeler ◽  
Pierre Philippe Mathieu

Over the last few years, wildfires have become more severe and destructive, with extreme consequences for local and global ecosystems. Fire detection and accurate monitoring of risk areas are becoming increasingly important. Satellite remote sensing offers unique opportunities for mapping, monitoring, and analysing the evolution of wildfires, providing helpful contributions to counteract dangerous situations.

Among the different remote sensing technologies, hyperspectral (HS) imagery presents unequalled features in support of fire detection. In this study, HS images from the Italian satellite PRISMA (PRecursore IperSpettrale della Missione Applicativa) will be used. The PRISMA satellite, launched on 22 March 2019, carries a hyperspectral and panchromatic payload able to acquire images with worldwide coverage. The hyperspectral camera works in the spectral range of 0.4–2.5 µm, with 66 and 173 channels in the VNIR (Visible and Near InfraRed) and SWIR (Short-Wave InfraRed) regions, respectively. The average spectral resolution is less than 10 nm over the entire range with an accuracy of ±0.1 nm, while the ground sampling distance of PRISMA images is about 5 m and 30 m for the panchromatic and hyperspectral cameras, respectively.

This work will investigate how PRISMA HS images can be used to support fire detection and related crisis management. To this aim, deep learning methodologies will be investigated, such as 1D convolutional neural networks to perform spectral analysis of the data, or 3D convolutional neural networks to perform spatial and spectral analyses at the same time. Semantic segmentation of the input HS data will be discussed, where an output image with metadata is associated with each pixel of the input image.
The overall goal of this work is to highlight how PRISMA hyperspectral data can contribute to remote sensing and Earth observation data analysis with regard to natural hazard and risk studies, focusing especially on wildfires, also considering the benefits with respect to standard multispectral imagery or previous hyperspectral sensors such as Hyperion.

The contributions of this work to the state of the art are the following:

- Demonstrating the advantages of using PRISMA HS data over multispectral data.
- Discussing the potential of deep learning methodologies based on 1D and 3D convolutional neural networks to capture spectral (and, in the 3D case, spatial) dependencies, which is crucial when dealing with HS images.
- Discussing the possibility and benefit of integrating HS-based approaches into future monitoring systems for wildfire alerts and disasters.
- Discussing the opportunity to design and develop future missions for HS remote sensing specifically dedicated to fire detection with on-board analysis.

To conclude, this work will raise awareness of the potential of using PRISMA HS data for disaster monitoring with a specialized focus on wildfires.
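A 1D convolutional layer of the kind mentioned for spectral analysis slides a small kernel along a pixel's band axis. A framework-free sketch with a single hand-set filter (toy spectrum and kernel, not a trained network):

```python
# Minimal 1D convolution of the kind a spectral CNN stacks over a pixel's
# band vector. Pure-Python sketch: one filter, 'valid' padding, no training.

def conv1d(spectrum, kernel, bias=0.0):
    """Slide the kernel along the band axis (valid padding)."""
    k = len(kernel)
    return [sum(spectrum[i + j] * kernel[j] for j in range(k)) + bias
            for i in range(len(spectrum) - k + 1)]

def relu(xs):
    return [max(0.0, x) for x in xs]

def global_max_pool(xs):
    return max(xs)

# Toy 8-band pixel spectrum; the edge-detecting kernel responds most where
# the spectrum jumps (e.g. a red-edge-like transition).
pixel = [0.1, 0.1, 0.1, 0.2, 0.6, 0.9, 0.9, 0.9]
feature_map = relu(conv1d(pixel, [-1.0, 0.0, 1.0]))
activation = global_max_pool(feature_map)
```

A 3D CNN extends the same sliding-window idea to two spatial axes plus the spectral axis, which is how joint spatial-spectral features are learned.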


2018 ◽  
Vol 58 (8) ◽  
pp. 1488 ◽  
Author(s):  
S. Rahman ◽  
P. Quin ◽  
T. Walsh ◽  
T. Vidal-Calleja ◽  
M. J. McPhee ◽  
...  

The objectives of the present study were to describe the approach used for classifying surface tissue and for estimating fat depth in lamb short loins, and to validate the approach. Fat versus non-fat pixels were classified and then used to estimate the fat depth for each pixel in the hyperspectral image. Estimated reflectance, instead of image intensity or radiance, was used as the input feature for classification. The relationship between reflectance and the fat/non-fat classification label was learnt using support vector machines. Gaussian processes were used to learn a regression for fat depth as a function of reflectance. Data for training and testing the machine learning algorithms were collected by scanning 16 short loins. The near-infrared hyperspectral camera captured lines of data of the side of the short loin (i.e. with the subcutaneous fat facing the camera). An advanced single-lens reflex camera took photos of the same cuts from above, such that a ground truth of fat depth could be semi-automatically extracted and associated with the hyperspectral data. A subset of the data was used to train the machine learning models and the remainder to test them. Classification of pixels as either fat or non-fat achieved 96% accuracy. Fat depths of up to 12 mm were estimated, with an R2 of 0.59, a mean absolute bias of 1.72 mm and a root mean square error of 2.34 mm. The techniques developed and validated in the present study will be used to estimate fat coverage to predict total fat and, subsequently, lean meat yield in the carcass.
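The reported figures (R2, mean absolute bias, RMSE) follow from standard definitions; a short sketch with hypothetical fat-depth values shows how they are computed:

```python
import math

def regression_metrics(y_true, y_pred):
    """R^2, mean absolute error and RMSE, as reported for the fat-depth model."""
    n = len(y_true)
    mean = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    r2 = 1.0 - ss_res / ss_tot
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    rmse = math.sqrt(ss_res / n)
    return r2, mae, rmse

# Hypothetical fat depths (mm): ground truth vs. model estimates.
truth = [2.0, 4.0, 6.0, 8.0, 10.0, 12.0]
pred = [2.5, 3.5, 6.5, 7.0, 11.0, 11.5]
r2, mae, rmse = regression_metrics(truth, pred)
```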


2021 ◽  
Vol 13 (15) ◽  
pp. 2967
Author(s):  
Nicola Acito ◽  
Marco Diani ◽  
Gregorio Procissi ◽  
Giovanni Corsini

Atmospheric compensation (AC) allows the retrieval of the reflectance from the measured at-sensor radiance and is a fundamental and critical task for the quantitative exploitation of hyperspectral data. Recently, a learning-based (LB) approach, named LBAC, has been proposed for the AC of airborne hyperspectral data in the visible and near-infrared (VNIR) spectral range. LBAC makes use of a parametric regression function whose parameters are learned by a strategy based on synthetic data that accounts for (1) a physics-based model for the radiative transfer, (2) the variability of the surface reflectance spectra, and (3) the effects of random noise and spectral miscalibration errors. In this work we extend LBAC with respect to two different aspects: (1) the platform for data acquisition and (2) the spectral range covered by the sensor. In particular, we propose the extension of LBAC to spaceborne hyperspectral sensors operating in the VNIR and short-wave infrared (SWIR) portions of the electromagnetic spectrum. We specifically refer to the sensor of the PRISMA (PRecursore IperSpettrale della Missione Applicativa) mission, the recent Earth observation mission of the Italian Space Agency that offers a great opportunity to improve knowledge of the scientific and commercial applications of spaceborne hyperspectral data. In addition, we introduce a curve-fitting-based procedure for the estimation of the column water vapor content of the atmosphere that directly exploits the reflectance data provided by LBAC. Results obtained on four different PRISMA hyperspectral images are presented and discussed.
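At its core, AC inverts a radiative-transfer model for reflectance. A deliberately simplified per-band model, L = L_path + tau * E * rho / pi, can be inverted as below; the coefficient values are illustrative placeholders, not radiative-transfer-code outputs, and real schemes such as LBAC use far richer models and learned regressions:

```python
import math

def retrieve_reflectance(radiance, l_path, tau, irradiance):
    """Invert the simplified at-sensor radiance model per band:
    L = L_path + tau * E * rho / pi  =>  rho = pi * (L - L_path) / (tau * E).
    Ignores adjacency effects and multiple scattering."""
    return [math.pi * (l - lp) / (t * e)
            for l, lp, t, e in zip(radiance, l_path, tau, irradiance)]

# Forward-simulate one 3-band pixel with placeholder atmospheric terms,
# then recover its reflectance exactly.
rho_true = [0.10, 0.25, 0.40]
l_path = [12.0, 8.0, 5.0]
tau = [0.70, 0.80, 0.85]
irradiance = [1800.0, 1500.0, 1000.0]
radiance = [lp + t * e * r / math.pi
            for lp, t, e, r in zip(l_path, tau, irradiance, rho_true)]
rho_hat = retrieve_reflectance(radiance, l_path, tau, irradiance)
```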


Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 742
Author(s):  
Canh Nguyen ◽  
Vasit Sagan ◽  
Matthew Maimaitiyiming ◽  
Maitiniyazi Maimaitijiang ◽  
Sourav Bhadra ◽  
...  

Early detection of grapevine viral diseases is critical for early interventions in order to prevent the disease from spreading to the entire vineyard. Hyperspectral remote sensing can potentially detect and quantify viral diseases in a nondestructive manner. This study utilized hyperspectral imagery at the plant level to identify and classify grapevines inoculated with the newly discovered DNA virus grapevine vein-clearing virus (GVCV) at the early asymptomatic stages. An experiment was set up at a test site at South Farm Research Center, Columbia, MO, USA (38.92 N, −92.28 W), with two grapevine groups, namely healthy and GVCV-infected, while other conditions were controlled. Images of each vine were captured by a SPECIM IQ 400–1000 nm hyperspectral sensor (Oulu, Finland). Hyperspectral images were calibrated and preprocessed to retain only grapevine pixels. A statistical approach was employed to discriminate two reflectance spectra patterns between healthy and GVCV vines. Disease-centric vegetation indices (VIs) were established and explored in terms of their importance to the classification power. Pixel-wise (spectral features) classification was performed in parallel with image-wise (joint spatial–spectral features) classification within a framework involving deep learning architectures and traditional machine learning. 
The results showed that: (1) the discriminative wavelength regions included the 900–940 nm range in the near-infrared (NIR) region in vines 30 days after sowing (DAS) and the entire visible (VIS) region of 400–700 nm in vines 90 DAS; (2) the normalized pheophytization index (NPQI), fluorescence ratio index 1 (FRI1), plant senescence reflectance index (PSRI), anthocyanin index (AntGitelson), and water stress and canopy temperature (WSCT) measures were the most discriminative indices; (3) the support vector machine (SVM) was effective in VI-wise classification with smaller feature spaces, while the random forest (RF) classifier performed better in pixel-wise and image-wise classification with larger feature spaces; and (4) the automated 3D convolutional neural network (3D-CNN) feature extractor provided promising results over the 2D convolutional neural network (2D-CNN) in learning features from hyperspectral data cubes with a limited number of samples.
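Two of the discriminative indices named above can be written out with their commonly cited band definitions (the study's exact band choices may differ slightly):

```python
# Commonly cited definitions of two indices named in the results; the
# reflectance values below are hypothetical, not the study's data.

def npqi(r415, r435):
    """Normalized pheophytization index: (R415 - R435) / (R415 + R435)."""
    return (r415 - r435) / (r415 + r435)

def psri(r500, r680, r750):
    """Plant senescence reflectance index: (R680 - R500) / R750."""
    return (r680 - r500) / r750

# Hypothetical leaf reflectances for a stressed vine.
stress_npqi = npqi(0.06, 0.04)
stress_psri = psri(0.05, 0.09, 0.45)
```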


2021 ◽  
Author(s):  
Jianjian Yang ◽  
Boshen Chang ◽  
Yuchen Zhang ◽  
Wenjie Luo ◽  
Miao Wu

Abstract Aiming at the problem of coal-gangue identification at the fully mechanized mining face and in coal-washing operations, this article proposes a CNN-based coal and rock identification method using hyperspectral data. First, coal and rock spectral data were collected with a near-infrared spectrometer. The 120 sets of collected data were then filtered using four preprocessing methods: first-order differential (FD), second-order differential (SD), standard normal variate transformation (SNV), and multi-style smoothing. This preprocessing of the coal and rock reflectance spectra enhances the spectral reflectance and absorption characteristics and effectively removes the spectral curve noise generated by instrument performance and environmental factors. A CNN model was then constructed; the pros and cons of the model were judged by comparing the accuracy of three parameter combinations, selecting the most appropriate learning rate, number of feature-extraction layers, and dropout rate to generate the best CNN classifier for hyperspectral coal and rock recognition. Experiments show that the recognition accuracy of the one-dimensional convolutional neural network model proposed in this paper reaches 94.6%, higher than BP (57%), SVM (72%), and DBN (86%), verifying the advantages and effectiveness of the proposed method.
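The FD, SD, and SNV preprocessing steps are standard chemometric transforms; a stdlib-only sketch, assuming spectra sampled on a uniform wavelength grid (so the grid step is absorbed into the difference):

```python
# Sketches of the spectral preprocessing steps named above, applied to a
# toy reflectance spectrum (not the paper's data).

def first_diff(spectrum):
    """First-order difference, a discrete stand-in for the FD derivative."""
    return [b - a for a, b in zip(spectrum, spectrum[1:])]

def second_diff(spectrum):
    """Second-order difference (SD): difference of the first difference."""
    return first_diff(first_diff(spectrum))

def snv(spectrum):
    """Standard normal variate: centre and scale each spectrum by its own
    mean and (population) standard deviation."""
    n = len(spectrum)
    mu = sum(spectrum) / n
    sd = (sum((x - mu) ** 2 for x in spectrum) / n) ** 0.5
    return [(x - mu) / sd for x in spectrum]

s = [0.20, 0.22, 0.27, 0.35, 0.46]  # toy spectrum
```

SNV removes per-spectrum offset and scale differences (e.g. from instrument drift), while the differentials sharpen absorption features at the cost of amplifying noise, which is why smoothing is applied alongside them.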


2019 ◽  
Vol 12 (1) ◽  
pp. 93
Author(s):  
Nichole Gosselin ◽  
Vasit Sagan ◽  
Matthew Maimaitiyiming ◽  
Jack Fishman ◽  
Kelley Belina ◽  
...  

Remotely sensed identification of ozone stress in crops can allow for the selection of ozone-resistant genotypes, improving yields. This is critical as population, food demand, and background tropospheric ozone are projected to increase over the next several decades. Visual scores of common ozone damage have been used to identify ozone stress in bio-indicator plants. This paper evaluates the use of a visual scoring metric of ozone damage applied to soybeans. The scoring of the leaves is then combined with hyperspectral data to identify spectral indices specific to ozone damage. Two genotypes of soybean, Dwight and Pana, which have shown different sensitivities to ozone, were grown and visually scored for ozone-specific damage on multiple dates throughout the growing season. Leaf reflectance, foliar biophysical properties, and yield data were collected. Additionally, the ozone bio-indicator plants snap bean and common milkweed were investigated with visual scores and hyperspectral leaf data for comparison. The normalized difference spectral index (NDSI) was used to identify the significant bands in the visible (VIS), near-infrared (NIR), and shortwave infrared (SWIR) that best correlated with the visual damage score when used in the index. Results were then compared to multiple well-established indices. Indices were also evaluated for correlation with seed and pod weight. The ozone damage scoring metric for soybeans evaluated in August had a coefficient of determination of 0.60 with end-of-season pod weight and a Pearson correlation coefficient greater than 0.6 for photosynthetic rate, stomatal conductance, and transpiration. NDSI [R558, R563] correlated best with visual scores of ozone damage in soybeans when evaluating data from all observation dates. These wavelengths were similar to those identified as most sensitive to visual damage in August when used in NDSI (560 nm, 563 nm).
NDSI [R560, R563] in August had the highest coefficient of determination for individual pod weight (R2 = 0.64) and seed weight (R2 = 0.54) when compared against 21 well-established indices used for the identification of pigment or photosynthetic stress in plants. When evaluating the use of spectral bands in NDSI, longer wavelengths in the SWIR were identified as more sensitive to visual ozone damage. Trends in the bands and biophysical properties of the soybeans, combined with evaluation of ozone data, indicate that significant ozone damage likely occurred after late July in this season. This work has implications for better spectral detection of ozone stress in crops and could help with efforts to identify ozone-tolerant varieties to increase future yield.
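The NDSI band search described above is an exhaustive scan of (A − B)/(A + B) over all band pairs, keeping the pair that correlates best with the damage scores. A sketch on a hypothetical 3-band, 4-leaf dataset:

```python
import math

def pearson_r(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def ndsi(a, b):
    """Normalized difference spectral index for one band pair."""
    return (a - b) / (a + b)

def best_ndsi_pair(spectra, scores, wavelengths):
    """Score NDSI[Ri, Rj] for every band pair against the visual damage
    scores; return (r, wavelength_i, wavelength_j) for the best pair."""
    best = None
    for i in range(len(wavelengths)):
        for j in range(i + 1, len(wavelengths)):
            idx = [ndsi(s[i], s[j]) for s in spectra]
            r = pearson_r(idx, scores)
            if best is None or abs(r) > abs(best[0]):
                best = (r, wavelengths[i], wavelengths[j])
    return best

# Hypothetical data: four leaves, three bands, damage scores 0..3.
wl = [558, 563, 700]
spectra = [[0.10, 0.12, 0.40],
           [0.12, 0.12, 0.38],
           [0.15, 0.13, 0.30],
           [0.20, 0.14, 0.22]]
scores = [0.0, 1.0, 2.0, 3.0]
best = best_ndsi_pair(spectra, scores, wl)
```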


2018 ◽  
Vol 2018 ◽  
pp. 1-10 ◽  
Author(s):  
Tao Zhang ◽  
Biyao Wang ◽  
Pengtao Yan ◽  
Kunlun Wang ◽  
Xu Zhang ◽  
...  

For the identification of salmon adulteration by water injection, a nondestructive identification method based on hyperspectral images was proposed. Hyperspectral images of salmon fillets in the visible and near-infrared range (390–1050 nm) were obtained with a hyperspectral imaging system. The original hyperspectral data were processed through principal-component analysis (PCA). According to the image quality and PCA parameters, the second principal-component (PC2) image was selected as the feature image, and the wavelengths corresponding to the local extremum values of the feature-image weighting coefficients were extracted as feature wavelengths: 454.9, 512.3, and 569.1 nm. On this basis, color combined with spectra at the feature wavelengths, texture combined with spectra at the feature wavelengths, and color-texture combined with spectra at the feature wavelengths were independently set as inputs for modeling salmon adulteration identification based on the self-organizing feature map (SOM) network. The distances between neighboring neurons and the feature weights of the models were analyzed to visualize the identification results. The results showed that the SOM-based model with texture-color fused with spectra at the feature wavelengths as input had the best performance, with an identification accuracy as high as 96.7%.
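The feature-wavelength step keeps the wavelengths where the PC2 weighting-coefficient (loading) curve has local extrema. A sketch of that selection, with toy loadings chosen so three wavelengths survive (the PCA itself is not reproduced here):

```python
# Select feature wavelengths at local extrema of a principal-component
# loading curve. Toy loadings, not the paper's PCA output.

def local_extrema(values):
    """Indices where a sequence has a strict local maximum or minimum."""
    idx = []
    for i in range(1, len(values) - 1):
        left, mid, right = values[i - 1], values[i], values[i + 1]
        if (mid > left and mid > right) or (mid < left and mid < right):
            idx.append(i)
    return idx

wavelengths = [400, 455, 512, 569, 620, 680]            # nm, toy grid
pc2_loading = [0.05, 0.40, -0.30, 0.35, 0.20, 0.10]     # toy PC2 weights
features = [wavelengths[i] for i in local_extrema(pc2_loading)]
```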


NIR news ◽  
2014 ◽  
Vol 25 (7) ◽  
pp. 15-17 ◽  
Author(s):  
Y. Dixit ◽  
R. Cama ◽  
C. Sullivan ◽  
L. Alvarez Jubete ◽  
A. Ktenioudaki

Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 3855 ◽  
Author(s):  
Lin Bai ◽  
Cuizhen Wang ◽  
Shuying Zang ◽  
Changshan Wu ◽  
Jinming Luo ◽  
...  

In arid and semi-arid regions, identification and monitoring of soil alkalinity and salinity are urgently needed to prevent land degradation and maintain ecological balance. In this study, physicochemical, statistical, and spectral analyses revealed that potential of hydrogen (pH) and electrical conductivity (EC) characterized the saline-alkali soils and were sensitive to the visible and near-infrared (VIS-NIR) wavelengths. On the basis of soil pH, EC, and spectral data, partial least squares regression (PLSR) models for estimating soil alkalinity and salinity were constructed. The R2 values for the soil pH and EC models were 0.77 and 0.48, and the root mean square errors (RMSEs) were 0.95 and 17.92 dS/m, respectively. The ratios of performance to inter-quartile distance (RPIQ) for the soil pH and EC models were 3.84 and 0.14, respectively, indicating that the soil pH model performed well but the soil EC model was not sufficiently reliable. With the validation dataset, the RMSEs of the two models were 1.06 and 18.92 dS/m. With the PLSR models applied to hyperspectral data acquired from the hyperspectral imager (HSI) onboard the HJ-1A satellite (launched in 2008 by China), soil alkalinity and salinity distributions were mapped in the study area and validated with RMSEs of 1.09 and 17.30 dS/m, respectively. These findings revealed that hyperspectral images in the VIS-NIR wavelengths have the potential to map soil alkalinity and salinity in the Songnen Plain, China.
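RPIQ, used above to judge the pH and EC models, is the inter-quartile range of the reference values divided by the model RMSE; a short sketch with hypothetical values (note that quartile conventions vary slightly between implementations):

```python
import math
import statistics

def rpiq(y_true, y_pred):
    """Ratio of performance to inter-quartile distance: IQR(y_true) / RMSE."""
    q1, _, q3 = statistics.quantiles(y_true, n=4)  # default 'exclusive' method
    n = len(y_true)
    rmse = math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)
    return (q3 - q1) / rmse

# Hypothetical soil pH values: reference vs. model predictions.
y_true = [5.0, 6.0, 7.0, 8.0, 9.0]
y_pred = [5.5, 6.5, 7.5, 8.5, 9.5]
score = rpiq(y_true, y_pred)
```

A higher RPIQ means the model error is small relative to the spread of the reference data, which is why 3.84 indicates a usable pH model while 0.14 flags the EC model as unreliable.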

