Early Detection of Change by Applying Scale-Space Methodology to Hyperspectral Images

2020 ◽  
Vol 10 (7) ◽  
pp. 2298
Author(s):  
Stig Uteng ◽  
Thomas Haugland Johansen ◽  
Jose Ignacio Zaballos ◽  
Samuel Ortega ◽  
Lasse Holmström ◽  
...  

Given an object of interest that evolves in time, one often wants to detect changes in its properties. The first changes may be small, may occur at different scales, and it may be crucial to detect them as early as possible. Examples include the identification of potentially malignant changes in skin moles and the gradual onset of food quality deterioration. Statistical scale-space methodologies can be very useful in such situations, since exploring the measurements at multiple resolutions can help identify even subtle changes. We extend a recently proposed scale-space methodology into a technique that successfully detects such small changes while keeping false alarms at a very low level. The potential of the novel methodology is first demonstrated on hyperspectral skin mole data artificially distorted to include a very small change. Our real-data application considers hyperspectral images used for food quality detection. In these experiments, the performance of the proposed method is either superior to or on par with a standard approach such as principal component analysis.
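The general scale-space principle behind this abstract can be illustrated with a small sketch (this is not the authors' algorithm; all signals, sizes, and parameters below are invented for illustration): a localized shift that is hard to distinguish from noise at single-sample resolution becomes clearly significant once the difference between the "before" and "after" measurements is smoothed at a suitable scale.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(7)
n = 600
before = rng.normal(0.0, 1.0, n)
after = rng.normal(0.0, 1.0, n)
after[300:360] += 2.0            # small localized change, buried in noise

def scale_space_z(diff, sigma):
    """Smoothed difference divided by the sd that pure noise would have
    after the same smoothing, giving a z-score at this scale."""
    smooth = gaussian_filter1d(diff, sigma=sigma)
    # Gaussian smoothing shrinks i.i.d.-noise sd by sqrt(sum of squared
    # kernel weights) ~ (2 * sqrt(pi) * sigma) ** -0.5.
    noise_sd = np.sqrt(2.0) * (2.0 * np.sqrt(np.pi) * sigma) ** -0.5
    return np.abs(smooth) / noise_sd

diff = after - before
raw_z = np.abs(diff) / np.sqrt(2.0)   # per-sample z-scores, no smoothing
z8 = scale_space_z(diff, sigma=8)     # z-scores at scale sigma = 8
```

At raw resolution the shifted region never stands far above the noise maxima, while at sigma = 8 the normalized response peaks well above any plausible noise level, and it peaks inside the changed region.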

2017 ◽  
Vol 2 (1) ◽  
pp. 40-50
Author(s):  
M. AMMICHE ◽  
A. KOUADRI

False alarms are a major problem in fault detection with multivariate statistical process monitoring methods such as principal component analysis (PCA); they reduce detection accuracy and lead to wrong decisions about the process operating status. In this work, filtering the monitoring indices is proposed to enhance detection by reducing the number of false alarms. The filters used are the Standard Median Filter (SMF), an Improved Median Filter (IMF), and a fuzzy-logic-based filter. Signal-to-Noise Ratio (SNR), False Alarm Rate (FAR), and fault detection time were used as criteria to compare their performance and the influence of their filtering action on monitoring. The algorithms were applied to real cement rotary kiln data to remove spikes and outliers from the PCA monitoring indices, and the filtered signals were then used to supervise the system. The results, in which the fuzzy-logic-based filter showed a satisfactory performance, are presented and discussed.
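A minimal sketch of the filtering idea, using only the Standard Median Filter (the IMF and fuzzy-logic filters of the paper are not reproduced, and the monitoring index here is synthetic): isolated spikes push a healthy index over a fixed control limit and trigger false alarms, while a short median filter removes them without masking a sustained fault.

```python
import numpy as np
from scipy.signal import medfilt

rng = np.random.default_rng(1)
n = 400
index = rng.normal(1.0, 0.2, n)          # healthy monitoring index
spike_at = rng.random(n) < 0.05          # ~5% isolated measurement spikes
index[spike_at] += 4.0
index[300:] += 3.0                       # sustained fault from sample 300
limit = 2.0                              # fixed control limit

filtered = medfilt(index, kernel_size=5) # Standard Median Filter (SMF)

def false_alarm_rate(sig):
    """Fraction of healthy (pre-fault) samples above the control limit."""
    return float(np.mean(sig[:300] > limit))

far_raw = false_alarm_rate(index)
far_filtered = false_alarm_rate(filtered)
```

The filtered index keeps the fault region above the limit (so the fault is still detected) while the false alarm rate drops, which is the trade-off the SNR/FAR/detection-time criteria in the paper quantify.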


Metabolites ◽  
2021 ◽  
Vol 11 (4) ◽  
pp. 214
Author(s):  
Aneta Sawikowska ◽  
Anna Piasecka ◽  
Piotr Kachlicki ◽  
Paweł Krajewski

Peak overlapping is a common problem in chromatography, especially for complex biological mixtures such as metabolite extracts. Because different compounds with similar chromatographic properties can co-elute, peak separation becomes challenging. In this paper, two computational methods of separating peaks, applied for the first time to large chromatographic datasets, are described, compared, and experimentally validated. The methods lead from raw observations to data that can form inputs for statistical analysis. First, in both methods, data are normalized by sample mass, the baseline is removed, retention time alignment is conducted, and peaks are detected. Then, in the first method, clustering is used to separate overlapping peaks, whereas in the second method, functional principal component analysis (FPCA) is applied for the same purpose. Simulated data and experimental results are used as examples to present both methods and to compare them. Real data were obtained in a study of metabolomic changes in barley (Hordeum vulgare) leaves under drought stress. The results suggest that both methods are suitable for separation of overlapping peaks, but an additional advantage of FPCA is the possibility to assess the variability of individual compounds present within the same peaks of different chromatograms.
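The shared preprocessing steps (normalization by sample mass, baseline removal, peak detection) can be sketched on a toy chromatogram; the clustering and FPCA separation stages of the paper are not reproduced here, and all signal shapes and parameters are invented.

```python
import numpy as np
from scipy.signal import find_peaks

t = np.linspace(0.0, 10.0, 1000)          # retention-time axis

def peak(mu, sd, height):
    """Gaussian peak model used for the synthetic chromatogram."""
    return height * np.exp(-0.5 * ((t - mu) / sd) ** 2)

sample_mass = 2.0
baseline = 0.05 * t                       # slow linear drift
raw = sample_mass * (peak(3.0, 0.25, 1.0) + peak(3.8, 0.25, 0.7) + baseline)

signal = raw / sample_mass                # 1) normalize by sample mass
# 2) remove the baseline, here estimated linearly from the endpoints
corrected = signal - np.interp(t, [t[0], t[-1]], [signal[0], signal[-1]])
# 3) detect peak apexes; the two peaks partially overlap
peaks, _ = find_peaks(corrected, prominence=0.1)
```

After these steps both apexes of the partially overlapping pair are recovered near their true retention times, which is the point where the two separation methods of the paper would take over.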


Processes ◽  
2021 ◽  
Vol 9 (1) ◽  
pp. 166
Author(s):  
Majed Aljunaid ◽  
Yang Tao ◽  
Hongbo Shi

Partial least squares (PLS) and linear regression methods are widely used for quality-related fault detection in industrial processes. Standard PLS decomposes the process variables into principal and residual parts. However, the principal part still contains many components unrelated to quality; if these are not removed, they can cause many false alarms. Moreover, although these components do not affect product quality, they carry important information about process safety and other faults, so removing and discarding them reduces the detection rate of faults unrelated to quality. To overcome these drawbacks of standard PLS, a novel method, MI-PLS (mutual information PLS), is proposed in this paper. The proposed MI-PLS algorithm uses mutual information to divide the process variables into selected and residual components, then uses singular value decomposition (SVD) to further decompose the selected part into quality-related and quality-unrelated components, and subsequently constructs quality-related monitoring statistics. To ensure that no information is lost and that MI-PLS can be used for both quality-related and quality-unrelated fault detection, a principal component analysis (PCA) model is built on the residual component to obtain its score matrix, which is combined with the quality-unrelated part to form the total quality-unrelated monitoring statistics. Finally, the proposed method is applied to a numerical example and the Tennessee Eastman process. MI-PLS has a lower computational load and more robust performance compared with T-PLS and PCR.


2018 ◽  
Vol 14 (s1) ◽  
pp. 79-88
Author(s):  
Katalin Badak-Kerti ◽  
Szabina Németh ◽  
Andreas Zitek ◽  
Ferenc Firtha

In our research, marzipan samples with different sugar to almond paste ratios (1:1, 2:1, 3:1) were stored at 17 °C. Reducing sugar content was measured by an analytical method, texture analysis was done by penetrometry, electric characteristics were measured by conductometry, and hyperspectral images were taken 6–8 times during the 16 days of storage. The SPSS program was used for the statistical analyses (discriminant analysis, principal component analysis). According to our findings, the hyperspectral analysis technique makes it possible to identify how long the samples were stored (after production) and to which class (ratio of sugar to almond) a sample belongs. The wavelengths giving the best discrimination among the days of storage were between 960 and 1100 nm. The type of marzipan was easy to distinguish from the hyperspectral data; the biggest differences were observed at 1200 and 1400 nm, which are connected to the first overtone of the C–H bond and therefore correlate with the oil content. The spatial distributions of the penetrometric, electric, and spectral properties were also characteristic of the fructose content. The fructose content of marzipan is difficult to measure by the usual optical methods (polarimetry, spectroscopy), but since fructose is hygroscopic, the spatial distribution of spectral properties can be characteristic of it.


2021 ◽  
Vol 2 (1) ◽  
pp. 1-3
Author(s):  
Bin Zhao ◽  
Jinming Cao
With the arrival of COVID-19, some areas have been placed under closed management, changing the way people consume. This has also led to excessive consumption by some people, especially college students. In order to give early warning of unreasonable consumption behavior, this study designs the KPAG algorithm for consumption-risk early warning. Particle swarm optimization (PSO) is used to optimize the parameters of kernel principal component analysis (KPCA), an optimal polynomial kernel is used to reduce the data, and an ant colony genetic algorithm is applied for clustering analysis of the dimensionality-reduced data; the consumption behavior of college students is thereby divided into three categories, and an early-warning model of college students' consumption behavior is built. Classification and verification experiments on real data show that, compared with the traditional PCA data-fitting method, the accuracy of the proposed model can reach 90%, more reliable than the traditional algorithm and an improvement of nearly 20%, so it can be used for effective early warning.
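Only the dimensionality-reduction-then-cluster stage lends itself to a short sketch; the PSO parameter search and the ant colony genetic clustering of the KPAG pipeline are not reproduced (plain k-means stands in as the clustering step), and the "spending profiles" below are entirely synthetic.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
# Toy "spending profiles": three behavior groups over six spending features.
groups = [rng.normal(loc=m, scale=0.3, size=(50, 6)) for m in (0.0, 2.0, 4.0)]
X = np.vstack(groups)

# KPCA with a polynomial kernel compresses the feature space; in the paper
# the kernel parameters would be tuned by PSO rather than fixed like this.
kpca = KernelPCA(n_components=2, kernel="poly", degree=3)
Z = kpca.fit_transform(X)

# Cluster the reduced representation into three consumption categories.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Z)
```

The reduced two-dimensional scores are what the early-warning model would consume, with each cluster mapped to a consumption-risk category.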


Author(s):  
A. K. Singh ◽  
H. V. Kumar ◽  
G. R. Kadambi ◽  
J. K. Kishore ◽  
J. Shuttleworth ◽  
...  

In this paper, a quality-metrics evaluation of hyperspectral images is presented using k-means clustering and segmentation. After classification, the similarity between the original image and the classified image is assessed by measuring image quality parameters. Experiments were carried out on four different types of hyperspectral images: aerial and spaceborne images with different spectral and geometric resolutions were considered for the quality-metrics evaluation. Principal Component Analysis (PCA) was applied to reduce the dimensionality of the hyperspectral data, reducing the number of effective variables and hence the processing complexity. In the case of ordinary images, a human viewer plays an important role in quality evaluation; hyperspectral data, however, are generally processed by automatic algorithms and cannot be viewed directly, so evaluating the quality of the classified image becomes even more significant. An elaborate comparison is made between k-means clustering and segmentation for all the images using the Peak Signal-to-Noise Ratio (PSNR), Mean Square Error (MSE), Maximum Squared Error, the ratio of squared norms (L2RAT), and entropy. The first four parameters are calculated by comparing the quality of the original hyperspectral image and the classified image; entropy, a measure of uncertainty or randomness, is calculated for the classified image. The proposed methodology can be used to assess the performance of any hyperspectral image classification technique.
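A toy version of the PCA → k-means → quality-metrics chain (not the paper's data or exact pipeline; the cube, class layout, and cluster count are invented): pixels of a synthetic hyperspectral cube are clustered after PCA, a "classified" image is rebuilt from cluster mean spectra, and it is scored against the original with MSE and PSNR.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
h, w, bands = 20, 20, 30
cube = rng.normal(0.2, 0.02, (h, w, bands))   # background material
cube[10:, :, :] += 0.5                        # second spectral class
pixels = cube.reshape(-1, bands)

# PCA reduces the spectral dimensionality before clustering.
scores = PCA(n_components=3).fit_transform(pixels)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(scores)

# Classified image: every pixel replaced by its cluster's mean spectrum.
centroids = np.array([pixels[km.labels_ == k].mean(axis=0) for k in range(2)])
classified = centroids[km.labels_].reshape(h, w, bands)

mse = float(np.mean((cube - classified) ** 2))
psnr = 10.0 * np.log10(cube.max() ** 2 / mse)
```

Comparing the original cube against its classified reconstruction is exactly how the first four metrics in the abstract are computed; entropy would instead be taken on the label image alone.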


Author(s):  
C J S Webber ◽  
B S Payne ◽  
F Gu ◽  
A D Ball

This paper (Part 1) describes the principles of a novel unsupervised adaptive neural network anomaly detection technique, called componential coding, in the context of condition monitoring of electrical machines. Numerical examples are given to illustrate the technique's capabilities. The companion paper (Part 2), which follows, assesses componential coding in its application to real data recorded from a known machine and an entirely unseen machine (a conventional induction motor and a novel transverse flux motor respectively). Componential coding is particularly suited to applications in which no machine-specific tailored techniques have been developed or in which no previous monitoring experience is available. This is because componential coding is an unsupervised technique that derives the features of the data during training, and so requires neither labelling of known faults nor pre-processing to enhance known fault characteristics. Componential coding offers advantages over more familiar unsupervised data processing techniques such as principal component analysis. In addition, componential coding may be implemented in a computationally efficient manner by exploiting the periodic convolution theorem. Periodic convolution also gives the algorithm the advantage of time invariance; i.e. it will work equally well even if the input data signal is offset by arbitrary displacements in time. This means that there is no need to synchronize the input data signal with respect to reference points or to determine the absolute angular position of a rotating part.
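The computational point about periodic convolution can be demonstrated directly (a generic sketch of the convolution theorem, not of componential coding itself; the signals are random placeholders): circular convolution computed via the FFT is cheap, and the response to a circularly shifted input is just a shifted copy of the original response, so offset-invariant scores need no synchronization.

```python
import numpy as np

def circ_conv(x, h):
    """Periodic convolution via the convolution theorem: FFT, multiply, IFFT."""
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))

rng = np.random.default_rng(2)
x = rng.normal(size=128)        # stand-in for a vibration signal
template = rng.normal(size=128) # stand-in for a learned component

y = circ_conv(x, template)
shifted = np.roll(x, 17)        # arbitrary, unknown time offset
y_shifted = circ_conv(shifted, template)
```

Because `y_shifted` equals `y` rolled by the same 17 samples, any shift-invariant summary of the response (e.g. its peak magnitude) is unchanged by the offset.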


Author(s):  
Gonzalo Vergara ◽  
Juan J. Carrasco ◽  
Jesus Martínez-Gómez ◽  
Manuel Domínguez ◽  
José A. Gámez ◽  
...  

The study of energy efficiency in buildings is an active field of research. Modeling and predicting energy-related magnitudes makes it possible to analyze electric power consumption and can yield economic benefits. In this study, classical time-series analysis and machine learning techniques, with clustering introduced in some models, are applied to predict active power in buildings. The real data acquired correspond to time, environmental, and electrical data from 30 buildings belonging to the University of León (Spain). First, we segmented the buildings in terms of their energy consumption using principal component analysis. We then applied state-of-the-art machine learning methods and compared them. Finally, we predicted daily electric power consumption profiles and compared them with actual data for different buildings. Our analysis shows that multilayer perceptrons have the lowest error, followed by support vector regression and clustered extreme learning machines. We also analyze daily load profiles on weekdays and weekends for different buildings.
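The model-comparison step can be sketched on synthetic data (this is not the León dataset, and the hedged toy below says nothing about which model wins in general): an MLP and an SVR predict a daily load profile from hour-of-day features, each scored by mean absolute error on a held-out day against a naive constant-load baseline.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(4)
hours = np.tile(np.arange(24), 30)        # 30 days of hourly samples
# Synthetic profile: base load plus a daytime bump plus noise (kW, invented).
load = 50 + 30 * np.sin(np.pi * hours / 24) ** 2 + rng.normal(0, 1.0, hours.size)
# Cyclic hour-of-day encoding as the only features.
X = np.column_stack([np.sin(2 * np.pi * hours / 24),
                     np.cos(2 * np.pi * hours / 24)])

X_train, y_train = X[:-24], load[:-24]    # hold out the last day
X_test, y_test = X[-24:], load[-24:]

mlp = MLPRegressor(hidden_layer_sizes=(32,), solver="lbfgs",
                   max_iter=2000, random_state=0).fit(X_train, y_train)
svr = SVR(C=100.0).fit(X_train, y_train)

mae_mlp = float(np.mean(np.abs(mlp.predict(X_test) - y_test)))
mae_svr = float(np.mean(np.abs(svr.predict(X_test) - y_test)))
mae_naive = float(np.mean(np.abs(y_train.mean() - y_test)))
```

Both learned models should comfortably beat the constant-mean baseline on this smooth profile; on the real buildings the abstract reports the MLP coming out ahead of SVR and clustered extreme learning machines.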

