Spectral Embedding Methods for Manifold Learning

2011, pp. 1-36. Author(s): Alan Izenman
2011, Vol 32 (10), pp. 1447-1455. Author(s): Housen Li, Hao Jiang, Roberto Barrio, Xiangke Liao, Lizhi Cheng, ...

2019, Vol 116 (13), pp. 5995-6000. Author(s): Carey E. Priebe, Youngser Park, Joshua T. Vogelstein, John M. Conroy, Vince Lyzinski, ...

Clustering is concerned with coherently grouping observations without any explicit concept of true groupings. Spectral graph clustering—clustering the vertices of a graph based on their spectral embedding—is commonly approached via K-means (or, more generally, Gaussian mixture model) clustering composed with either Laplacian spectral embedding (LSE) or adjacency spectral embedding (ASE). Recent theoretical results provide a deeper understanding of the problem and its solutions, and lead to a "two-truths" LSE vs. ASE spectral graph clustering phenomenon, convincingly illustrated here via a diffusion MRI connectome dataset: the different embedding methods yield different clustering results, with LSE capturing left hemisphere/right hemisphere affinity structure and ASE capturing gray matter/white matter core–periphery structure.
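The LSE/ASE pipeline described above can be sketched in a few lines. This is an illustrative sketch only: a synthetic two-block stochastic block model stands in for the connectome data, and the embeddings follow common conventions (scaled leading eigenvectors of the adjacency matrix for ASE, of the normalized Laplacian for LSE); it is not the paper's implementation.

```python
import numpy as np

# Synthetic two-block stochastic block model (illustrative stand-in only).
rng = np.random.default_rng(0)
n = 40
labels = np.repeat([0, 1], n // 2)
B = np.array([[0.5, 0.1],
              [0.1, 0.5]])              # block connection probabilities
P = B[labels][:, labels]
A = np.triu(rng.binomial(1, P), 1)
A = A + A.T                             # symmetric, hollow adjacency matrix

def ase(A, d):
    """Adjacency spectral embedding: top-d eigenvectors of A, scaled."""
    vals, vecs = np.linalg.eigh(A)
    idx = np.argsort(-np.abs(vals))[:d]
    return vecs[:, idx] * np.sqrt(np.abs(vals[idx]))

def lse(A, d):
    """Laplacian spectral embedding via the normalized Laplacian."""
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(A.sum(axis=1), 1))
    L = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(L)
    idx = np.argsort(-vals)[:d]
    return vecs[:, idx] * np.sqrt(np.abs(vals[idx]))

X_ase = ase(A, 2)   # each embedding would then be clustered with
X_lse = lse(A, 2)   # K-means or a GMM; the two can reveal different truths
```

In practice the two (n x d) embeddings are each passed to K-means or a Gaussian mixture model; the paper's point is that the resulting partitions can legitimately differ, with LSE favoring affinity structure and ASE favoring core–periphery structure.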


2004, Vol 16 (10), pp. 2197-2219. Author(s): Yoshua Bengio, Olivier Delalleau, Nicolas Le Roux, Jean-François Paiement, Pascal Vincent, ...

In this letter, we show a direct relation between spectral embedding methods and kernel principal components analysis and how both are special cases of a more general learning problem: learning the principal eigenfunctions of an operator defined from a kernel and the unknown data-generating density. Whereas spectral embedding methods provided only coordinates for the training points, the analysis justifies a simple extension to out-of-sample examples (the Nyström formula) for multidimensional scaling (MDS), spectral clustering, Laplacian eigenmaps, locally linear embedding (LLE), and Isomap. The analysis provides, for all such spectral embedding methods, the definition of a loss function, whose empirical average is minimized by the traditional algorithms. The asymptotic expected value of that loss defines a generalization performance and clarifies what these algorithms are trying to learn. Experiments with LLE, Isomap, spectral clustering, and MDS show that this out-of-sample embedding formula generalizes well, with a level of error comparable to the effect of small perturbations of the training set on the embedding.
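The Nyström out-of-sample idea described above can be sketched as follows. This is a hedged illustration, not the letter's exact formulation: it uses an uncentered RBF kernel on synthetic data, embeds training points as eigenvectors of the kernel matrix scaled by the square roots of their eigenvalues, and extends to a new point by projecting its kernel column onto those eigenvectors.

```python
import numpy as np

# Synthetic data; the kernel and scaling conventions are illustrative.
rng = np.random.default_rng(1)
X_train = rng.normal(size=(50, 3))
x_new = rng.normal(size=(3,))

def rbf(a, b, sigma=1.0):
    """RBF (Gaussian) kernel between two points."""
    return np.exp(-np.sum((a - b) ** 2) / (2 * sigma ** 2))

# Kernel matrix on the training set and its top eigenpairs.
K = np.array([[rbf(a, b) for b in X_train] for a in X_train])
vals, vecs = np.linalg.eigh(K)
idx = np.argsort(-vals)[:2]             # keep the top-2 eigenpairs
lam, V = vals[idx], vecs[:, idx]

# Training embedding: f_k(x_i) = sqrt(lambda_k) * v_{k,i}.
train_embed = V * np.sqrt(lam)

# Nystrom extension: project K(x_new, .) onto the eigenvectors and
# rescale, so a training point maps back to its own embedding.
k_new = np.array([rbf(x_new, xi) for xi in X_train])
new_embed = (k_new @ V) / np.sqrt(lam)
```

A useful consistency check, and the heart of the Nyström formula, is that applying the extension to a training point reproduces that point's training embedding, since K v_k = lambda_k v_k.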


2014, Vol 39 (12), pp. 2077-2089. Author(s): Min YUAN, Lei CHENG, Ran-Gang ZHU, Ying-Ke LEI

2013, Vol 32 (6), pp. 1670-1673. Author(s): Xue-yan ZHOU, Jian-min HAN, Yu-bin ZHAN
