Vandermonde Matrices
Recently Published Documents


TOTAL DOCUMENTS: 128 (five years: 13)
H-INDEX: 19 (five years: 1)

Author(s): Dongwei Li

Full spark frames have been widely applied in sparse signal processing, signal reconstruction with erasures, and phase retrieval. Since testing whether a given frame is full spark is NP-hard under randomized polynomial-time reductions, deterministic full spark (DFS) frames are particularly significant. However, the known DFS frames, namely Vandermonde frames and harmonic frames, do not offer enough degrees of freedom for practical applications. In this paper, we focus on deterministic constructions of full spark frames. We present a new and effective method to construct DFS frames using Cauchy matrices, and we also construct DFS frames using Cauchy-Vandermonde matrices. Finally, we show that full spark tight frames can be constructed from generalized Cauchy matrices.
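A minimal sketch of why Cauchy matrices are a natural source of full spark frames: by the Cauchy determinant formula, every square submatrix of a Cauchy matrix is nonsingular, so an m × N Cauchy matrix has every set of m columns linearly independent. The node values below are illustrative choices, not taken from the paper, and the `is_full_spark` check brute-forces all m-column minors, which is only feasible for small frames.

```python
import itertools
import numpy as np

def cauchy_matrix(x, y):
    # C[i, j] = 1 / (x[i] + y[j]); requires x[i] + y[j] != 0 for all i, j.
    return 1.0 / (np.asarray(x)[:, None] + np.asarray(y)[None, :])

def is_full_spark(F, tol=1e-10):
    # An m x N frame is full spark iff every m x m submatrix of its
    # synthesis matrix is nonsingular (spark = m + 1).
    m, N = F.shape
    return all(abs(np.linalg.det(F[:, cols])) > tol
               for cols in itertools.combinations(range(N), m))

# Distinct x's and y's with x[i] + y[j] != 0 guarantee nonzero Cauchy minors.
x = np.array([0.1, 0.5, 1.3])
y = np.array([0.2, 0.7, 1.9, 2.4, 3.1, 4.0])
F = cauchy_matrix(x, y)          # a 3 x 6 Cauchy frame
print(is_full_spark(F))          # → True
```

By contrast, appending a repeated column to `F` immediately destroys the full spark property, since any column subset containing the duplicate is singular.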


2021, Vol 76 (2)
Author(s): Gerlind Plonka, Therese von Wulffen

Abstract: In this paper we extend the deterministic sublinear FFT algorithm in Plonka et al. (Numer Algorithms 78:133–159, 2018, doi:10.1007/s11075-017-0370-5) for fast reconstruction of $M$-sparse vectors $\mathbf{x}$ of length $N = 2^J$, where we assume that all components of the discrete Fourier transform $\hat{\mathbf{x}} = \mathbf{F}_N \mathbf{x}$ are available. The sparsity of $\mathbf{x}$ need not be known a priori, but is determined by the algorithm. If the sparsity $M$ is larger than $2^{J/2}$, then the algorithm turns into a usual FFT algorithm with runtime $\mathcal{O}(N \log N)$. For $M^2 < N$, the runtime of the algorithm is $\mathcal{O}(M^2 \log N)$. The proposed modifications of the approach in Plonka et al. (2018) lead to a significant improvement of the condition numbers of the Vandermonde matrices which are employed in the iterative reconstruction. Our numerical experiments show that the modification has a huge impact on the stability of the algorithm: while the algorithm in Plonka et al. (2018) starts to be unreliable for $M > 20$ because of numerical instabilities, the modified algorithm is still numerically stable for $M = 200$.
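The role of Vandermonde conditioning in such reconstructions can be illustrated with a small experiment (this is not the paper's algorithm, and the node choices are hypothetical): when the Vandermonde nodes are spread evenly over the unit circle the matrix is a scaled unitary DFT block with condition number 1, whereas nodes clustered in a short arc make the condition number explode, which is exactly the kind of instability that improved node selection is meant to avoid.

```python
import numpy as np

def vandermonde(nodes, m):
    # m x len(nodes) Vandermonde matrix with V[k, j] = nodes[j] ** k.
    return np.vander(np.asarray(nodes), m, increasing=True).T

M = 8
# Nodes spread over the whole unit circle: the M-th roots of unity,
# giving a (scaled) unitary DFT matrix.
spread = np.exp(2j * np.pi * np.arange(M) / M)
# Nodes squeezed into a short arc of the circle.
clustered = np.exp(2j * np.pi * np.arange(M) / (16 * M))

V_spread = vandermonde(spread, M)
V_clustered = vandermonde(clustered, M)
print(np.linalg.cond(V_spread))      # ~1: perfectly conditioned
print(np.linalg.cond(V_clustered))   # many orders of magnitude larger
```

The gap grows rapidly with $M$, which matches the qualitative picture in the abstract: once the Vandermonde systems in the iteration become ill-conditioned, the recovered coefficients are dominated by rounding error.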

