Predictable Components and Singular Vectors

2008 ◽  
Vol 65 (5) ◽  
pp. 1666-1678 ◽  
Author(s):  
Timothy DelSole ◽  
Michael K. Tippett

Abstract This paper shows that if a measure of predictability is invariant to affine transformations and monotonically related to forecast uncertainty, then the component that maximizes this measure for normally distributed variables is independent of the detailed form of the measure. This result explains why different measures of predictability, such as anomaly correlation, signal-to-noise ratio, predictive information, and the Mahalanobis error, are each maximized by the same components. These components can be determined by applying principal component analysis to a transformed forecast ensemble, a procedure called predictable component analysis (PrCA). The resulting vectors define a complete set of components that can be ordered such that the first maximizes predictability, the second maximizes predictability subject to being uncorrelated with the first, and so on. The transformation in question, called the whitening transformation, can be interpreted as changing the norm in principal component analysis. The resulting norm renders noise variance analysis equivalent to signal variance analysis, whereas these two analyses lead to inconsistent results if other norms are chosen to define variance. Predictable components can also be determined by applying singular value decomposition to a whitened propagator in linear models. The whitening transformation is tantamount to changing the initial and final norms in the singular vector calculation. Although the norm for measuring forecast uncertainty has not appeared in prior predictability studies, the norms that emerge from this framework have several attractive properties that make their use compelling. This framework generalizes singular vector methods to models with both stochastic forcing and initial condition error. These and other components of interest to predictability are illustrated with an empirical model for sea surface temperature.
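The PrCA procedure described in the abstract can be sketched in a few lines of linear algebra: whiten with respect to the total (climatological) covariance, then apply PCA to the signal covariance in the whitened space, where the eigenvalues become signal-to-total variance ratios. The following is a minimal numpy sketch on a synthetic ensemble; the ensemble dimensions and variance structure are invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical forecast ensemble: n_ens members x n_cases initial conditions x n_var variables
n_ens, n_cases, n_var = 20, 500, 5
signal = rng.standard_normal((n_cases, n_var)) * np.array([2.0, 1.0, 0.5, 0.2, 0.1])
ensemble = signal[None, :, :] + rng.standard_normal((n_ens, n_cases, n_var))

# Signal covariance (of the ensemble mean) and total covariance (signal + noise)
signal_cov = np.cov(ensemble.mean(axis=0), rowvar=False)
total_cov = np.cov(ensemble.reshape(-1, n_var), rowvar=False)

# Whitening transformation with respect to the total covariance
evals, evecs = np.linalg.eigh(total_cov)
whiten = evecs @ np.diag(evals ** -0.5) @ evecs.T

# PCA of the whitened signal covariance: eigenvalues are signal-to-total
# variance ratios, so the leading eigenvector maximizes predictability
ratios, vecs = np.linalg.eigh(whiten @ signal_cov @ whiten)
order = np.argsort(ratios)[::-1]
predictable_components = np.linalg.solve(whiten, vecs[:, order])
```

Because the eigenvalues in the whitened space are variance ratios rather than raw variances, maximizing signal variance and minimizing noise variance pick out the same components, which is the norm-equivalence property the abstract emphasizes.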

Author(s):  
Maryam Abedini ◽  
Horriyeh Haddad ◽  
Marzieh Faridi Masouleh ◽  
Asadollah Shahbahrami

This study proposes an image denoising algorithm based on sparse representation and Principal Component Analysis (PCA). The proposed algorithm includes the following steps. First, the noisy image is divided into overlapped [Formula: see text] blocks. Second, the discrete cosine transform is applied as a dictionary for the sparse representation of the vectors created by the overlapped blocks. To calculate the sparse vector, the orthogonal matching pursuit algorithm is used. Then, the dictionary is updated by means of the PCA algorithm to achieve the sparsest representation of the vectors. Since, after transformation into the PCA domain, the signal energy, unlike the noise energy, is concentrated in a small number of coefficients, the signal and noise can be well distinguished. The proposed algorithm was implemented in a MATLAB environment and its performance was evaluated on standard grayscale images under different levels of standard deviation of white Gaussian noise by means of peak signal-to-noise ratio, structural similarity indexes, and visual inspection. The experimental results demonstrate that the proposed denoising algorithm achieves significant improvement compared to the dual-tree complex discrete wavelet transform and K-singular value decomposition image denoising methods. It also obtains results competitive with the block-matching and 3D filtering method, which is the current state of the art for image denoising.


2005 ◽  
Vol 3 (4) ◽  
pp. 731-741 ◽  
Author(s):  
Petr Praus

Abstract Principal Component Analysis (PCA) was used for the mapping of geochemical data. A testing data matrix was prepared from the chemical and physical analyses of coals altered by thermal and oxidation effects. PCA based on Singular Value Decomposition (SVD) of the standardized (centered and scaled by the standard deviation) data matrix revealed three principal components explaining 85.2% of the variance. Combining the scatter and component-weights plots with knowledge of the composition of the tested samples, the coal samples were divided into seven groups depending on the degree of their oxidation and thermal alteration. The PCA findings were verified by other multivariate methods. The relationships among geochemical variables were successfully confirmed by Factor Analysis (FA). The data structure was also described by the Average Group dendrogram using Euclidean distance. The sample clusters found this way were not as clearly defined as in the case of PCA, which can be explained by the PCA filtration of the data noise.
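The SVD-based PCA of a standardized data matrix described above follows a standard recipe: autoscale each variable, decompose, and read off scores, loadings, and explained variance. A minimal numpy sketch, with a small synthetic data matrix standing in for the coal analyses:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical geochemical table: 30 samples x 6 variables with correlated structure
latent = rng.standard_normal((30, 2))
X = latent @ rng.standard_normal((2, 6)) + 0.3 * rng.standard_normal((30, 6))

# Standardize: center each variable and scale by its standard deviation
Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

# PCA by SVD: Z = U S V^T; scores = U*S, loadings = columns of V
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2) * 100  # percent of variance per component

scores = U * s      # sample coordinates for the scatter plot
loadings = Vt.T     # variable weights for the component-weights plot
```

The scatter plot of `scores` and the plot of `loadings` are the two views the abstract combines to group samples and relate them to the measured variables.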


2020 ◽  
Vol 16 (4) ◽  
pp. 155014772091640 ◽  
Author(s):  
Lanmei Wang ◽  
Yao Wang ◽  
Guibao Wang ◽  
Jianke Jia

In this article, the principal component analysis method, widely applied in image compression and feature extraction, is introduced to reduce the dimension of the input characteristic variables of support vector regression, and a method for joint estimation of near-field angle and range based on principal component analysis dimension reduction is proposed. Signal-to-noise ratio and computational cost are the decisive factors affecting the performance of the algorithm. Principal component analysis fuses the main characteristics of the training data and discards redundant information, so the signal-to-noise ratio is improved and the amount of computation is reduced accordingly. Support vector regression is used to model the signal, with the upper-triangular elements of the signal covariance matrix usually serving as input features. Since the covariance matrix has many upper-triangular elements, using them directly as input features slows training to some extent. Principal component analysis is therefore used to reduce the dimensionality of the upper-triangular elements of the covariance matrix of the known signal, and the reduced features serve as input to a multi-output support vector regression machine that constructs the near-field parameter estimation model, from which parameter estimates for unknown signals are obtained. Simulation results show that this method achieves high estimation accuracy and training speed, remains robust at low signal-to-noise ratio, and outperforms the back-propagation neural network algorithm and the two-step multiple signal classification algorithm.
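The feature pipeline described above, upper-triangular covariance entries compressed by PCA and fed to a regressor, can be sketched with numpy. This is a toy illustration only: ordinary least squares stands in for the multi-output support vector regression, the array/signal model is invented, and only the angle (not the range) is estimated.

```python
import numpy as np

rng = np.random.default_rng(3)
n_sensors, n_snap = 8, 100

def cov_features(angle):
    """Upper-triangular covariance entries for one toy narrowband source (hypothetical model)."""
    a = np.exp(1j * np.pi * np.arange(n_sensors) * np.sin(angle))
    sig = rng.standard_normal(n_snap)
    noise = 0.3 * (rng.standard_normal((n_sensors, n_snap))
                   + 1j * rng.standard_normal((n_sensors, n_snap)))
    X = a[:, None] * sig + noise
    R = X @ X.conj().T / n_snap
    iu = np.triu_indices(n_sensors)
    return np.concatenate([R[iu].real, R[iu].imag])

# Training set: features for a grid of known angles (radians)
angles = np.linspace(-1.0, 1.0, 80)
F = np.array([cov_features(th) for th in angles])   # 80 samples x 72 raw features

# PCA: keep the leading components of the centered feature matrix
mu = F.mean(axis=0)
U, s, Vt = np.linalg.svd(F - mu, full_matrices=False)
k = 10
P = Vt[:k].T                  # 72 x k projection, discards redundant directions
F_red = (F - mu) @ P          # reduced training features

# Least-squares regressor stands in for the multi-output SVR of the article
design = np.column_stack([F_red, np.ones(len(F_red))])
W, *_ = np.linalg.lstsq(design, angles, rcond=None)
train_pred = design @ W

# Estimate the angle of an unseen signal from its reduced features
est = np.append((cov_features(0.37) - mu) @ P, 1.0) @ W
```

Replacing the 72 raw covariance features with 10 principal components is what shrinks the regressor's input, the speedup the abstract attributes to PCA dimension reduction.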


2005 ◽  
Vol 77 (20) ◽  
pp. 6563-6570 ◽  
Author(s):  
Zeng Ping Chen ◽  
Julian Morris ◽  
Elaine Martin ◽  
Robert B. Hammond ◽  
Xiaojun Lai ◽  
...  
