Properties of basis functions generated by shift invariant sparse representations of natural images

2000 ◽  
Vol 83 (2) ◽  
pp. 111-118 ◽  
Author(s):  
Wakako Hashimoto ◽  
Koji Kurata

2003 ◽  
Author(s):  
John A. Black, Jr. ◽  
Kanav Kahol ◽  
Prem Kuchi ◽  
Gamal F. Fahmy ◽  
Sethuraman Panchanathan

2010 ◽  
Vol 22 (7) ◽  
pp. 1812-1836 ◽  
Author(s):  
Laurent U. Perrinet

Neurons in the input layer of primary visual cortex in primates develop edge-like receptive fields. One approach to understanding the emergence of this response is to state that neural activity has to efficiently represent sensory data with respect to the statistics of natural scenes. Furthermore, it is believed that such an efficient coding is achieved using a competition across neurons so as to generate a sparse representation, that is, where a relatively small number of neurons are simultaneously active. Indeed, different models of sparse coding, coupled with Hebbian learning and homeostasis, have been proposed that successfully match the observed emergent response. However, the specific role of homeostasis in learning such sparse representations is still largely unknown. By quantitatively assessing the efficiency of the neural representation during learning, we derive a cooperative homeostasis mechanism that optimally tunes the competition between neurons within the sparse coding algorithm. We apply this homeostasis while learning small patches taken from natural images and compare its efficiency with state-of-the-art algorithms. Results show that while different sparse coding algorithms give similar coding results, the homeostasis provides an optimal balance for the representation of natural images within the population of neurons. Competition in sparse coding is optimized when it is fair. By contributing to optimizing statistical competition across neurons, homeostasis is crucial in providing a more efficient solution to the emergence of independent components.
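To make the kind of competition described above concrete, here is a minimal sketch, assuming a matching-pursuit sparse coder whose atom selection is biased by a multiplicative homeostatic gain that equalizes how often each neuron fires. The gain-update rule, learning rates, and all array sizes are illustrative assumptions, not the paper's exact mechanism.

```python
# Illustrative sketch: matching pursuit with a homeostatic gain that biases
# the competition so every atom tends toward the same firing rate.
# Constants and the gain rule are assumptions, not the paper's algorithm.
import numpy as np

rng = np.random.default_rng(0)

def matching_pursuit(x, D, gain, n_active=10):
    """Greedy sparse code of x over dictionary D (unit-norm atoms in columns).

    The homeostatic gain rescales each atom's correlation, so atoms that
    fire too often become less likely to win the competition.
    """
    residual = x.copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_active):
        corr = D.T @ residual
        k = np.argmax(np.abs(corr) * gain)   # competition biased by homeostasis
        coeffs[k] += corr[k]
        residual -= corr[k] * D[:, k]
    return coeffs, residual

# Toy data: random "patches" and a random normalized dictionary.
n_dim, n_atoms, n_patches = 64, 128, 500
D = rng.standard_normal((n_dim, n_atoms))
D /= np.linalg.norm(D, axis=0)
gain = np.ones(n_atoms)
usage = np.zeros(n_atoms)
eta = 5.0                                     # homeostatic gain strength

for _ in range(n_patches):
    x = rng.standard_normal(n_dim)
    coeffs, _ = matching_pursuit(x, D, gain)
    usage = 0.99 * usage + 0.01 * (coeffs != 0)    # running firing rate
    gain = np.exp(-eta * (usage - usage.mean()))   # fair-competition target
```

The key design point mirrors the abstract: the sparse coding step itself is unchanged, and homeostasis only reshapes the competition so that, over time, no atom dominates the representation.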


2003 ◽  
Vol 15 (2) ◽  
pp. 349-396 ◽  
Author(s):  
Kenneth Kreutz-Delgado ◽  
Joseph F. Murray ◽  
Bhaskar D. Rao ◽  
Kjersti Engan ◽  
Te-Won Lee ◽  
...  

Algorithms for data-driven learning of domain-specific overcomplete dictionaries are developed to obtain maximum likelihood and maximum a posteriori dictionary estimates based on the use of Bayesian models with concave/Schur-concave (CSC) negative log priors. Such priors are appropriate for obtaining sparse representations of environmental signals within an appropriately chosen (environmentally matched) dictionary. The elements of the dictionary can be interpreted as concepts, features, or words capable of succinct expression of events encountered in the environment (the source of the measured signals). This is a generalization of vector quantization in that one is interested in a description involving a few dictionary entries (the proverbial “25 words or less”), but not necessarily as succinct as one entry. To learn an environmentally adapted dictionary capable of concise expression of signals generated by the environment, we develop algorithms that iterate between a representative set of sparse representations found by variants of FOCUSS and an update of the dictionary using these sparse representations. Experiments were performed using synthetic data and natural images. For complete dictionaries, we demonstrate that our algorithms have improved performance over other independent component analysis (ICA) methods, measured in terms of signal-to-noise ratios of separated sources. In the overcomplete case, we show that the true underlying dictionary and sparse sources can be accurately recovered. In tests with natural images, learned overcomplete dictionaries are shown to have higher coding efficiency than complete dictionaries; that is, images encoded with an overcomplete dictionary have both higher compression (fewer bits per pixel) and higher accuracy (lower mean square error).
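The alternation described above can be sketched compactly: a few regularized FOCUSS reweighting iterations produce sparse codes, and a least-squares (MOD-style) refit updates the dictionary from those codes. The update rules and constants below are simplifications for illustration, not the paper's exact algorithms.

```python
# Sketch of alternate minimization: FOCUSS sparse coding + least-squares
# dictionary update. Constants and the MOD-style update are assumptions.
import numpy as np

rng = np.random.default_rng(1)

def focuss(x, D, n_iter=10, p=1.0, lam=1e-3):
    """Sparse solution of x ~ D s via iteratively reweighted least squares."""
    s = D.T @ x                                  # initial dense estimate
    for _ in range(n_iter):
        w2 = np.abs(s) ** (2.0 - p)              # diag of W^2; drives sparsity
        G = (D * w2) @ D.T + lam * np.eye(D.shape[0])
        s = w2 * (D.T @ np.linalg.solve(G, x))
    return s

def learn_dictionary(X, n_atoms, n_epochs=20):
    """Alternate FOCUSS sparse coding with a least-squares dictionary update."""
    m, n = X.shape
    D = rng.standard_normal((m, n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_epochs):
        S = np.column_stack([focuss(X[:, i], D) for i in range(n)])
        D = X @ S.T @ np.linalg.inv(S @ S.T + 1e-6 * np.eye(n_atoms))
        D /= np.linalg.norm(D, axis=0) + 1e-12   # keep atoms unit norm
    return D

# Toy run: 64-dim "signals", a 2x-overcomplete dictionary of 128 atoms.
X = rng.standard_normal((64, 200))
D = learn_dictionary(X, n_atoms=128)
```

Normalizing the atoms after each update prevents the trivial solution in which dictionary norms grow while coefficients shrink, a standard precaution in this family of methods.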


2021 ◽  
Vol 38 (6) ◽  
pp. 1599-1611
Author(s):  
Hong Yang ◽  
Yanming Zhao ◽  
Guoan Su ◽  
Xiuyun Liu ◽  
Songwen Jin ◽  
...  

The conventional slow feature analysis (SFA) algorithm is not grounded in a computational theory of primate vision, cannot learn global features with visual selection consistency continuity, and is computationally complex. To address these shortcomings, a slow feature extraction algorithm based on visual selection consistency continuity is proposed. Inspired by the visual selection consistency continuity theory for primates, the method replaces the principal component analysis (PCA) step of the conventional SFA algorithm with the myTICA method, which extracts Gabor basis functions from natural images and initializes the basis function family. A feature basis expansion algorithm based on visual selection consistency continuity (the VSCC_FBEA algorithm) replaces the polynomial expansion step of the original SFA algorithm, generating Gabor basis functions with long- and short-term visual selectivity within the basis function family and thereby avoiding the drawbacks of polynomial prediction. In addition, a Lipschitz consistency constraint is designed, and a Lipschitz orthogonal pruning method (the LOPM algorithm) is proposed to optimize the basis function family into an overcomplete family of basis functions. A feature expression method based on visual invariance theory (FEM) is then used to build a set of natural-image features with visual selection consistency continuity. The proposed algorithm is evaluated with three error measures and the mySFA classification method. Experiments on the LSVRC2012 data set show good prediction performance, and comparisons with the SFA, GSFA, TICA, myICA, and mySFA algorithms confirm that the proposed algorithm is correct and feasible. With the classification threshold set to 8.0, the recognition rate reaches 99.66%, and neither the false recognition rate nor the false rejection rate exceeds 0.33%. The proposed algorithm performs well in both prediction and classification and shows good noise robustness under limited noise conditions.
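For reference, the conventional linear SFA baseline that this method modifies can be written in a few lines: whiten the signal, then keep the directions in which the temporal derivative has the least variance. This sketch covers only the standard algorithm; myTICA, VSCC_FBEA, and LOPM are not reproduced here, and the toy data are assumptions.

```python
# Minimal linear SFA: whiten, then take the eigenvectors of the covariance
# of the temporal derivative with the smallest eigenvalues (the slowest
# directions). Standard baseline only, not the proposed method.
import numpy as np

def linear_sfa(X, n_slow=2):
    """X: (T, d) multivariate time series. Returns the n_slow slowest features."""
    Xc = X - X.mean(axis=0)
    # Whiten the input.
    cov = np.cov(Xc, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    W_white = evecs / np.sqrt(evals + 1e-12)
    Z = Xc @ W_white
    # Minimize the variance of the temporal derivative in whitened space.
    dZ = np.diff(Z, axis=0)
    d_evals, d_evecs = np.linalg.eigh(np.cov(dZ, rowvar=False))
    W_slow = d_evecs[:, :n_slow]              # smallest eigenvalues = slowest
    return Z @ W_slow

# Toy usage: recover a slow sine mixed in with faster noise channels.
rng = np.random.default_rng(0)
t = np.linspace(0, 20 * np.pi, 4000)
X = np.column_stack([np.sin(0.05 * t) + 0.1 * rng.standard_normal(t.size),
                     rng.standard_normal(t.size),
                     rng.standard_normal(t.size)])
slow = linear_sfa(X, n_slow=1)
```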


2005 ◽  
Vol 25 (1_suppl) ◽  
pp. S634-S634 ◽  
Author(s):  
Yun Zhou ◽  
Weiguo Ye ◽  
James R Brasic ◽  
Mohab Alexander ◽  
John Hilton ◽  
...  

Author(s):  
Yuki HAYAMI ◽  
Daiki TAKASU ◽  
Hisakazu AOYANAGI ◽  
Hiroaki TAKAMATSU ◽  
Yoshifumi SHIMODAIRA ◽  
...  

2020 ◽  
Vol 2020 (14) ◽  
pp. 294-1-294-8
Author(s):  
Sandamali Devadithya ◽  
David Castañón

Dual-energy imaging has emerged as a superior way to recognize materials in X-ray computed tomography. To estimate material properties such as effective atomic number and density, one often generates images in terms of basis functions. This requires decomposing the dual-energy sinograms into basis sinograms and subsequently reconstructing the basis images. However, the presence of metal can distort the reconstructed images. In this paper, we investigate how photoelectric and Compton basis functions, as well as synthesized monochromatic basis (SMB) functions, behave in the presence of metal, and how this affects the estimation of effective atomic number and density. Our results indicate that SMB functions, along with edge-preserving total variation regularization, show promise for improved material estimation in the presence of metal. The results are demonstrated using both simulated data and data collected from a dual-energy medical CT scanner.
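To make the decomposition step concrete, here is a minimal sketch under a strong simplifying assumption: each spectrum is modeled at a single effective energy, so the map from (photoelectric, Compton) line integrals to the two measurements is linear and can be inverted per ray. Real decompositions solve a nonlinear polychromatic model; the matrix entries and array shapes below are illustrative assumptions.

```python
# Simplified sinogram-domain basis decomposition: map each dual-energy
# measurement pair to photoelectric and Compton line integrals, assuming
# effective monochromatic energies so the per-ray model is a 2x2 linear system.
import numpy as np

# Effective basis attenuation at the two effective energies (assumed values):
# rows = [low-kVp, high-kVp], columns = [photoelectric, Compton].
A = np.array([[0.60, 0.30],
              [0.15, 0.25]])

def decompose(sino_low, sino_high):
    """Map dual-energy sinograms to photoelectric/Compton basis sinograms."""
    m = np.stack([sino_low.ravel(), sino_high.ravel()])   # (2, n_rays)
    basis = np.linalg.solve(A, m)                         # per-ray 2x2 solve
    photo = basis[0].reshape(sino_low.shape)
    compton = basis[1].reshape(sino_low.shape)
    return photo, compton

# Toy usage: synthesize measurements from known basis sinograms, then recover.
rng = np.random.default_rng(2)
photo_true = rng.random((180, 256))
compton_true = rng.random((180, 256))
m_low = A[0, 0] * photo_true + A[0, 1] * compton_true
m_high = A[1, 0] * photo_true + A[1, 1] * compton_true
photo_est, compton_est = decompose(m_low, m_high)
assert np.allclose(photo_est, photo_true)
```

An SMB image at a chosen energy would then be formed as a linear combination of the reconstructed photoelectric and Compton images; metal artifacts enter through inconsistencies in the measured sinograms, which this idealized toy model does not exhibit.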

