Chlorophyll Concentration Retrieval by Training Convolutional Neural Network for Stochastic Model of Leaf Optical Properties (SLOP) Inversion

2020, Vol 12 (2), pp. 283
Author(s): Leevi Annala, Eija Honkavaara, Sakari Tuominen, Ilkka Pölönen

Miniaturized hyperspectral imaging techniques have developed rapidly in recent years and have become widely available for different applications. Combining calibrated hyperspectral imagery with inverted physically based reflectance models is an interesting approach for estimating chlorophyll concentrations, which are good indicators of vegetation health. The objective of this study was to develop a novel approach for retrieving chlorophyll a and b values from remotely sensed data by inverting the stochastic model of leaf optical properties (SLOP) using a one-dimensional convolutional neural network. The inversion results and retrieved values are validated in two ways: with a classical machine-learning validation dataset, and by calculating chlorophyll maps from empirical remotely sensed hyperspectral data and comparing them to TCARI/OSAVI, an index that has a strong negative correlation with chlorophyll concentration. With the validation dataset, coefficients of determination (R²) of 0.97 for chlorophyll a and 0.95 for chlorophyll b were obtained. The chlorophyll maps correlate with the TCARI/OSAVI map: the correlation coefficient (R) is −0.87 for chlorophyll a and −0.68 for chlorophyll b in the selected plots. These results indicate that the approach is highly promising for estimating vegetation chlorophyll content.
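The TCARI/OSAVI reference index used for validation can be sketched directly from its published band formulas (Haboudane et al.); the reflectance values in the example below are illustrative green-vegetation numbers, not data from this study.

```python
# TCARI/OSAVI: chlorophyll-sensitive index normalized by a soil-adjusted
# vegetation index, computed from reflectances at 550, 670, 700 and 800 nm.
def tcari(r550, r670, r700):
    return 3.0 * ((r700 - r670) - 0.2 * (r700 - r550) * (r700 / r670))

def osavi(r670, r800):
    return (1.0 + 0.16) * (r800 - r670) / (r800 + r670 + 0.16)

def tcari_osavi(r550, r670, r700, r800):
    return tcari(r550, r670, r700) / osavi(r670, r800)

# Illustrative reflectances for a healthy leaf/canopy pixel
print(tcari_osavi(r550=0.08, r670=0.04, r700=0.10, r800=0.45))  # ≈ 0.205
```

Because OSAVI normalizes for canopy background effects, the ratio tracks chlorophyll more robustly than TCARI alone, which is why it serves as a chlorophyll reference here.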

Water, 2021, Vol 13 (5), pp. 664
Author(s): Yun Xue, Lei Zhu, Bin Zou, Yi-min Wen, Yue-hong Long, ...

For Case-II water bodies with relatively complex water quality, it is challenging to establish a chlorophyll-a (Chl-a) concentration inversion model with strong applicability and high accuracy. Convolutional neural networks (CNNs) show excellent performance in image target recognition and natural language processing; however, little research exists on CNN-based inversion of Chl-a concentration in water. Taking China’s Dongting Lake as an example, 90 water samples and their spectra were collected in this study. Using eight combinations as independent variables and Chl-a concentration as the dependent variable, a CNN model was constructed to invert Chl-a concentration. The results showed that: (1) the CNN model built on the original spectrum performs worse than the CNN model built on the preprocessed spectrum; the coefficient of determination of the predicted samples (RP2) increased from 0.79 to 0.88 and the root mean square error of prediction (RMSEP) decreased from 0.61 to 0.49, indicating that preprocessing can significantly improve the inversion performance of the model; (2) among the combined models, the CNN model with Baseline1_SC (strong-correlation factors of the 500–750 nm baseline) performs best, with RP2 reaching 0.90 and RMSEP only 0.45; the eight CNN models also perform well on average, with a mean RP2 of 0.86 and an RMSEP of only 0.52, indicating the feasibility of applying CNNs to Chl-a concentration inversion modeling; (3) the performance of the Baseline1_SC CNN model (RP2 = 0.90, RMSEP = 0.45) was far better than that of traditional models built on the same combination, namely the linear regression model (RP2 = 0.61, RMSEP = 0.72) and the partial least squares regression model (RP2 = 0.58, RMSEP = 0.95), indicating the superiority of convolutional neural networks for inversion modeling of water-body Chl-a concentration.
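One common form of spectral preprocessing consistent with the abstract's 500–750 nm baseline window is straight-line baseline removal before the spectrum is fed to the CNN. The paper's exact "Baseline1" definition may differ; this is only an illustrative sketch, and the synthetic spectrum below is invented.

```python
# Remove a straight-line baseline anchored at the endpoints of the
# 500-750 nm window (a hedged sketch, not the paper's exact procedure).
def remove_linear_baseline(wavelengths, reflectance, lo=500.0, hi=750.0):
    pairs = [(w, r) for w, r in zip(wavelengths, reflectance) if lo <= w <= hi]
    (w0, r0), (w1, r1) = pairs[0], pairs[-1]
    slope = (r1 - r0) / (w1 - w0)
    # subtract the line through the window's first and last points
    return [(w, r - (r0 + slope * (w - w0))) for w, r in pairs]

wavelengths = list(range(400, 901, 10))
reflectance = [0.1 + 0.0002 * (w - 400) for w in wavelengths]  # purely linear
corrected = remove_linear_baseline(wavelengths, reflectance)
print(max(abs(r) for _, r in corrected))  # ~0: a linear spectrum is removed entirely
```

Baseline removal of this kind suppresses additive offsets and broad slopes so the CNN sees absorption-feature shape rather than illumination-dependent level.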


2017, Vol 7 (1)
Author(s): Yao Zhang, Jingfeng Huang, Fumin Wang, George Alan Blackburn, Hankui K. Zhang, ...

1995, Vol 25 (3), pp. 407-412
Author(s): Gregory A. Carter, Joanne Rebbeck, Kevin E. Percy

Seedlings of Liriodendron tulipifera L. and Pinus strobus L. were grown in open-top chambers in the field to determine leaf optical responses to increased ozone (O3) or O3 and carbon dioxide (CO2). In both species, seedlings were exposed to charcoal-filtered air, air with 1.3 times ambient O3 concentrations (1.3×), or air with 1.3 times ambient O3 and 700 μL•L−1 CO2 (1.3× + CO2). Exposure to 1.3× increased reflectance in the 633–697 nm range in L. tulipifera. Also, 1.3× decreased transmittance within the 400–420 nm range, increased transmittance at 686–691 nm, and decreased absorptance at 655–695 nm. With 700 μL•L−1 CO2, O3 did not affect reflectance in L. tulipifera, but decreased transmittance and increased absorptance within the 400–421 nm range and increased transmittance and decreased absorptance in the 694–697 nm range. Under 1.3×, reflectance in P. strobus was not affected. However, 1.3× + CO2 increased pine reflectance in the 538–647, 650, and 691–716 nm ranges. Transmittances and absorptances were not determined for P. strobus. Reflectance in both species, and transmittance and absorptance in L. tulipifera, were most sensitive to O3 near 695 nm. Reflectance at 695 nm, but particularly the ratio of reflectance at 695 nm to reflectance at 760 nm, was related closely to ozone-induced decreases in leaf chlorophyll contents, particularly chlorophyll a (r = 0.82).
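The stress-sensitive ratio reported at the end of the abstract is simple to compute: red-edge reflectance at 695 nm normalized by near-infrared reflectance at 760 nm. The reflectance values below are illustrative, not measurements from the study.

```python
# R695/R760: chlorophyll loss raises reflectance near 695 nm while the
# near-infrared plateau at 760 nm changes little, so the ratio rises
# with ozone stress (illustrative values only).
def red_edge_ratio(r695, r760):
    return r695 / r760

healthy = red_edge_ratio(r695=0.08, r760=0.50)   # low ratio: high chlorophyll
stressed = red_edge_ratio(r695=0.18, r760=0.48)  # chlorosis raises R695
print(healthy < stressed)  # True
```

Normalizing by R760 largely cancels leaf-structure and illumination effects shared by both bands, which is why the ratio tracks chlorophyll more closely than R695 alone.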


2020
Author(s): Yuwei Sun, Hideya Ochiai, Hiroshi Esaki

Abstract This article presents a method for visualizing LAN traffic based on the Hilbert curve structure together with array exchange and projection, using the communication frequencies of nine protocol types as discriminators; we call the resulting images feature maps of network events. Several known scan cases are simulated in LANs, and network traffic is collected to generate feature maps for each case. To solve this multi-label classification task, we train a deep convolutional neural network (DCNN) in two different network environments, with the feature maps as input data and the scan cases as labels. We split the datasets 4:1 into training and validation sets. Based on the micro and macro scores on the validation sets, we evaluate the performance of the scheme, achieving macro-F-measure scores of 0.982 and 0.975 and micro-F-measure scores of 0.976 and 0.965 in the two LANs, respectively.
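The Hilbert-curve mapping at the heart of such feature maps can be sketched with the standard distance-to-coordinate algorithm; the /24-host indexing and the frequency counts below are assumptions for illustration, not the paper's exact layout.

```python
def hilbert_d2xy(order, d):
    """Map a 1-D index d to (x, y) on a 2**order x 2**order Hilbert curve."""
    x = y = 0
    s = 1
    while s < (1 << order):
        rx = 1 & (d // 2)
        ry = 1 & (d ^ rx)
        if ry == 0:  # rotate/reflect the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        d //= 4
        s *= 2
    return x, y

# Hypothetical example: place per-host packet counts for one protocol onto a
# 16x16 grid, indexing hosts of a /24 subnet by their last IP octet, so that
# numerically adjacent hosts land in spatially adjacent cells.
grid = [[0] * 16 for _ in range(16)]
for last_octet, freq in {5: 12, 37: 3, 200: 7}.items():
    x, y = hilbert_d2xy(4, last_octet)
    grid[y][x] = freq
```

The locality-preserving property of the Hilbert curve is what makes the resulting image meaningful to a CNN: nearby addresses stay nearby in the 2-D feature map.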


2019, Vol 55 (7), pp. 5631-5649
Author(s): Feng Ling, Doreen Boyd, Yong Ge, Giles M. Foody, Xiaodong Li, ...

Endoscopy, 2019, Vol 51 (12), pp. 1121-1129
Author(s): Bum-Joo Cho, Chang Seok Bang, Se Woo Park, Young Joo Yang, Seung In Seo, ...

Abstract Background Visual inspection, lesion detection, and differentiation between malignant and benign features are key aspects of an endoscopist’s role. The use of machine learning for the recognition and differentiation of images has been increasingly adopted in clinical practice. This study aimed to establish convolutional neural network (CNN) models to automatically classify gastric neoplasms based on endoscopic images. Methods Endoscopic white-light images of pathologically confirmed gastric lesions were collected and classified into five categories: advanced gastric cancer, early gastric cancer, high grade dysplasia, low grade dysplasia, and non-neoplasm. Three pretrained CNN models were fine-tuned using a training dataset. The classifying performance of the models was evaluated using a test dataset and a prospective validation dataset. Results A total of 5017 images were collected from 1269 patients, among which 812 images from 212 patients were used as the test dataset. An additional 200 images from 200 patients were collected and used for prospective validation. For the five-category classification, the weighted average accuracy of the Inception-Resnet-v2 model reached 84.6 %. The mean area under the curve (AUC) of the model for differentiating gastric cancer and neoplasm was 0.877 and 0.927, respectively. In prospective validation, the Inception-Resnet-v2 model showed lower performance compared with the endoscopist with the best performance (five-category accuracy 76.4 % vs. 87.6 %; cancer 76.0 % vs. 97.5 %; neoplasm 73.5 % vs. 96.5 %; P  < 0.001). However, there was no statistical difference between the Inception-Resnet-v2 model and the endoscopist with the worst performance in the differentiation of gastric cancer (accuracy 76.0 % vs. 82.0 %) and neoplasm (AUC 0.776 vs. 0.865). Conclusion The evaluated deep-learning models have the potential for clinical application in classifying gastric cancer or neoplasm on endoscopic white-light images.
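The "weighted average accuracy" reported for the five-category task can be read as per-class accuracy weighted by class support, which reduces to overall accuracy on the confusion matrix. The matrix below is invented for illustration and is not data from the study.

```python
# Weighted average accuracy over a five-class confusion matrix:
# per-class recall weighted by class support equals diag-sum / total.
def weighted_accuracy(cm):
    correct = sum(cm[i][i] for i in range(len(cm)))
    total = sum(sum(row) for row in cm)
    return correct / total

# rows: true class, columns: predicted class (invented counts);
# class order: advanced cancer, early cancer, HGD, LGD, non-neoplasm
cm = [[48, 4, 2, 1, 0],
      [5, 38, 6, 1, 0],
      [2, 5, 28, 5, 0],
      [0, 1, 6, 33, 5],
      [0, 0, 2, 4, 58]]
print(round(weighted_accuracy(cm), 3))  # 0.807 for this invented matrix
```

Weighting by support means prevalent categories such as non-neoplasm dominate the score, which is worth keeping in mind when comparing the model's 84.6 % to per-category endoscopist accuracies.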

