Interplay of Sensor Quantity, Placement and System Dimension in POD-Based Sparse Reconstruction of Fluid Flows

Fluids ◽  
2019 ◽  
Vol 4 (2) ◽  
pp. 109 ◽  
Author(s):  
Balaji Jayaraman ◽  
S M Abdullah Al Mamun ◽  
Chen Lu

Sparse linear estimation of fluid flows using a data-driven proper orthogonal decomposition (POD) basis is systematically explored in this work. Fluid flows are manifestations of nonlinear multiscale partial differential equation (PDE) dynamical systems with inherent scale separation that impacts the system dimensionality. Given that sparse reconstruction is inherently an ill-posed problem, the most successful approaches require knowledge of the underlying low-dimensional space spanning the manifold in which the system resides. In this paper, we adopt an approach that learns a basis from the singular value decomposition (SVD) of training data to recover sparse information. This results in a set of four design parameters for sparse recovery, namely, the choice of basis, the system dimension required for sufficiently accurate reconstruction, the sensor budget and the sensor placement. The choice of design parameters implicitly determines the choice of algorithm as either l2-minimization reconstruction or sparsity-promoting l1-minimization reconstruction. In this work, we systematically explore the implications of these design parameters on reconstruction accuracy so that practical recommendations can be identified. We observe that greedy-smart sensor placement, particularly using interpolation points from the discrete empirical interpolation method (DEIM), provides the best balance of computational complexity and reconstruction accuracy.
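The pipeline described in the abstract (learn a POD basis by SVD of training snapshots, then recover a full field from a few point sensors by l2 least squares) can be sketched as follows. This is a minimal illustration on synthetic data; all dimensions, variable names and the random sensor placement are assumptions for the sketch, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k, p = 200, 50, 5, 12     # grid points, snapshots, POD modes, sensors

# Synthetic training data living on a k-dimensional subspace plus small noise
modes = np.linalg.qr(rng.standard_normal((n, k)))[0]
X = modes @ rng.standard_normal((k, m)) + 1e-3 * rng.standard_normal((n, m))

# POD basis = left singular vectors of the snapshot matrix, truncated to k
Phi, _, _ = np.linalg.svd(X, full_matrices=False)
Phi_k = Phi[:, :k]

# Sparse measurement: p random point sensors (rows of the identity matrix)
sensors = rng.choice(n, size=p, replace=False)
x_true = modes @ rng.standard_normal(k)   # a new field in the same subspace
y = x_true[sensors]                        # sensor readings

# l2 reconstruction: solve min_a ||Phi_k[sensors] a - y||_2, then x ~ Phi_k a
a, *_ = np.linalg.lstsq(Phi_k[sensors, :], y, rcond=None)
x_rec = Phi_k @ a
err = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
print(round(err, 6))
```

With more sensors than modes (p > k), as here, the problem is overdetermined and plain least squares suffices; the sparsity-promoting l1 route becomes relevant when the sensor budget drops below the basis dimension.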



2008 ◽  
Vol 18 (03) ◽  
pp. 195-205 ◽  
Author(s):  
WEIBAO ZOU ◽  
ZHERU CHI ◽  
KING CHUEN LO

Image classification is a challenging problem in organizing a large image database, and an effective method for this task is still under investigation. This paper presents a method based on wavelet analysis to extract features for image classification. After an image is decomposed by the wavelet transform, the statistics of its features are obtained from the distributions of histograms of wavelet coefficients projected onto two orthogonal axes, i.e., the x and y directions. The nodes of the tree representation of an image can therefore be characterized by these distributions. The high-level features are described in a low-dimensional space of 16 attributes, so the computational complexity is significantly decreased. In the experiments, 2800 images drawn from seven categories were used: half of the images for training a neural network and the other half for testing. Both the features extracted by wavelet analysis and conventional features were used in the experiments to demonstrate the efficacy of the proposed method. The classification rate with wavelet analysis reaches 91% on the training set and 89% on the testing set. The experimental results show that the proposed approach to image classification is more effective than the conventional one.
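The feature-extraction step can be sketched roughly as below: one level of a 2-D Haar wavelet decomposition, with the coefficient magnitudes projected onto the x and y axes and histogrammed into a 16-attribute vector. The wavelet choice, bin count and which subbands are used are assumptions for illustration, not the authors' exact recipe.

```python
import numpy as np

def haar2d(img):
    """One level of the 2-D Haar transform; image sides must be even."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # vertical average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # vertical detail
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0      # approximation
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail band
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return ll, lh, hl, hh

def axis_features(band, bins=4):
    """Project |coefficients| onto the x and y axes, histogram each profile."""
    px = np.abs(band).sum(axis=0)             # projection onto x direction
    py = np.abs(band).sum(axis=1)             # projection onto y direction
    hx, _ = np.histogram(px, bins=bins)
    hy, _ = np.histogram(py, bins=bins)
    return np.concatenate([hx, hy]).astype(float)

rng = np.random.default_rng(1)
img = rng.random((32, 32))                    # stand-in for a database image
ll, lh, hl, hh = haar2d(img)
feat = np.concatenate([axis_features(b) for b in (lh, hl)])
print(feat.shape)                             # 16 attributes per image
```

A feature vector this small is what keeps the downstream neural-network classifier cheap to train.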


2020 ◽  
Vol 2020 ◽  
pp. 1-12
Author(s):  
Norapon Sukuntee ◽  
Saifon Chaturantabut

This work considers a model order reduction approach for parametrized viscous fingering in a horizontal flow through a 2D porous media domain. A technique for constructing an optimal low-dimensional basis for a multidimensional parameter domain is introduced by combining K-means clustering with proper orthogonal decomposition (POD). In particular, we first randomly generate parameter vectors in the multidimensional parameter domain of interest. Next, we perform the K-means clustering algorithm on these parameter vectors to find the centroids. A POD basis is then generated from the solutions of the parametrized systems corresponding to these parameter centroids. The resulting POD basis is used with Galerkin projection to construct reduced-order systems for various parameter vectors in the given domain, and the discrete empirical interpolation method (DEIM) is applied to further reduce the computational complexity of the nonlinear terms of the miscible flow model. Numerical results for various parameter values demonstrate that the approach is efficient in decreasing simulation time while maintaining accuracy relative to the full-order model.
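The sampling-and-basis-construction steps can be sketched as follows: K-means picks representative parameter centroids, a (here synthetic) solver supplies one snapshot per centroid, and SVD of the snapshot matrix yields the POD basis. The stand-in solver and all sizes are illustrative assumptions, not the viscous fingering model.

```python
import numpy as np

rng = np.random.default_rng(2)

def kmeans(pts, k, iters=50):
    """Plain Lloyd's algorithm; centers seeded from the data points."""
    centers = pts[rng.choice(len(pts), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((pts[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([pts[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return centers

params = rng.random((200, 3))            # random multidimensional parameters
centroids = kmeans(params, k=4)          # representative parameter vectors

# Stand-in for the full-order solver: a smooth snapshot per parameter vector
grid = np.linspace(0, 1, 100)
snapshots = np.stack([np.sin(2 * np.pi * (grid + mu @ np.array([1, 2, 3])))
                      for mu in centroids], axis=1)

# POD basis from the centroid snapshots, truncated by singular value decay
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = int(np.sum(s / s[0] > 1e-8))
print(U[:, :r].shape)                    # basis used for Galerkin projection
```

In the full method this basis is combined with DEIM interpolation so the nonlinear terms are also evaluated at only a few grid points.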


Author(s):  
Nerea González-García ◽  
Ana Belén Nieto-Librero ◽  
Purificación Galindo-Villardón

In this work, a new mathematical algorithm for sparse and orthogonal constrained biplots, called CenetBiplots, is proposed. Biplots provide a joint representation of the observations and variables of a multidimensional matrix in the same reference system, a subspace in which the relationships between them can be interpreted in terms of geometric elements. CenetBiplots projects a matrix onto a low-dimensional space generated simultaneously by sparse and orthogonal principal components. Sparsity is desired to select variables automatically, and orthogonality is necessary to keep the geometrical properties that ensure the graphical interpretation of biplots. To this end, the present study focuses on two objectives: 1) the extension of constrained singular value decomposition to incorporate an elastic net sparse constraint (CenetSVD), and 2) the implementation of CenetBiplots using CenetSVD. The usefulness of the proposed methodologies for analysing high-dimensional and low-dimensional matrices is shown. Our method is implemented in the R software and is available for download from https://github.com/ananieto/SparseCenetMA.
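The basic mechanism that penalized SVDs of this family build on can be illustrated with a rank-1 sparse SVD: alternating power iteration with soft-thresholding of the loading vector. This sketch is not the CenetSVD algorithm (which uses an elastic net constraint and enforces orthogonality across components); it only shows how thresholding inside the SVD iteration zeroes out loadings of irrelevant variables.

```python
import numpy as np

def soft(v, lam):
    """Soft-thresholding operator, the proximal map of the l1 penalty."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def sparse_rank1_svd(X, lam=0.3, iters=100):
    """Rank-1 SVD with an l1-sparse right singular vector."""
    u = X[:, 0] / np.linalg.norm(X[:, 0])
    for _ in range(iters):
        v = soft(X.T @ u, lam)              # threshold the loadings
        nv = np.linalg.norm(v)
        if nv == 0:
            break
        v /= nv
        u = X @ v
        u /= np.linalg.norm(u)
    return u, v

rng = np.random.default_rng(3)
# Data whose leading direction involves only the first 3 of 10 variables
v_true = np.zeros(10); v_true[:3] = 1 / np.sqrt(3)
X = rng.standard_normal((50, 1)) * 3 @ v_true[None] \
    + 0.1 * rng.standard_normal((50, 10))
u, v = sparse_rank1_svd(X)
print(np.count_nonzero(np.abs(v) > 1e-8))   # loadings on noise variables vanish
```

The recovered loading vector is exactly zero on the uninformative variables, which is the automatic variable selection the abstract refers to.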


2020 ◽  
Vol 26 (4) ◽  
pp. 434-453
Author(s):  
Milan Sečujski ◽  
Darko Pekar ◽  
Siniša Suzić ◽  
Anton Smirnov ◽  
Tijana Nosek

The paper presents a novel architecture and method for training neural networks to produce synthesized speech in a particular voice and speaking style, based on a small quantity of target speaker/style training data. The method is based on neural network embedding, i.e., the mapping of discrete variables into continuous vectors in a low-dimensional space, which has been shown to be a very successful universal deep learning technique. In this particular case, different speaker/style combinations are mapped into different points in a low-dimensional space, which enables the network to capture the similarities and differences between speakers and speaking styles more efficiently. The initial model from which speaker/style adaptation was carried out was a multi-speaker/multi-style model based on 8.5 hours of American English speech data corresponding to 16 different speaker/style combinations. The results of the experiments show that both versions of the obtained system, one using 10 minutes and the other as little as 30 seconds of target data, outperform the state of the art in parametric speaker/style-dependent speech synthesis. This opens a wide range of applications of speaker/style-dependent speech synthesis based on small quantities of training data, in domains ranging from customer interaction in call centers to robot-assisted medical therapy.
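The embedding idea can be sketched in a few lines: each discrete (speaker, style) combination indexes a row of a trainable table of continuous vectors, and that vector conditions the synthesis network. The table size, embedding dimension and the concatenation scheme below are assumptions for illustration, not the paper's architecture.

```python
import numpy as np

n_speakers, n_styles, dim = 8, 2, 4   # 16 speaker/style combinations, as in the paper
rng = np.random.default_rng(4)

# Trainable lookup table: one low-dimensional code per speaker/style combination
table = rng.standard_normal((n_speakers * n_styles, dim)) * 0.01

def embed(speaker_id, style_id):
    """Map a discrete speaker/style pair to its continuous embedding vector."""
    return table[speaker_id * n_styles + style_id]

# The embedding is concatenated to the per-frame input of the synthesis network
frame = rng.standard_normal(10)       # stand-in linguistic/acoustic features
net_input = np.concatenate([frame, embed(speaker_id=3, style_id=1)])
print(net_input.shape)
```

During adaptation only a new row of the table (and possibly a few upper layers) needs to be learned, which is why minutes or even seconds of target data can suffice.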


2014 ◽  
Vol 602-605 ◽  
pp. 2170-2173
Author(s):  
Xiao Fei Li

Popular approaches to face recognition include the PCA and LDA methods. However, PCA cannot capture even the simplest invariance unless that information is explicitly provided in the training data, and LDA suffers from the small sample size problem. 2DPCA reduces high-dimensional data to a low-dimensional space, and 2DLDA extracts the proper features from image matrices based on LDA. In this paper, an ensemble incomplete wavelet analysis method for face recognition, based on improved fuzzy C-means, is proposed. The results show that the proposed method improves accuracy and reduces running time.
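The 2DPCA step mentioned above works directly on image matrices rather than flattened vectors, which keeps the covariance matrix small. A minimal sketch on synthetic data (not the paper's ensemble method; sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
images = rng.random((100, 28, 28))            # synthetic training face images

# 2DPCA: build the image scatter matrix from centered image matrices A,
# G = (1/N) sum A^T A, which is only 28x28 instead of 784x784 as in PCA
mean = images.mean(axis=0)
centered = images - mean
G = np.einsum('nij,nik->jk', centered, centered) / len(images)

# Projection axes = leading eigenvectors of the scatter matrix
eigvals, eigvecs = np.linalg.eigh(G)          # ascending order
W = eigvecs[:, ::-1][:, :5]                   # top 5 axes

features = images @ W                          # each face -> 28 x 5 feature matrix
print(features.shape)
```

Because the eigenproblem is solved on a matrix the size of one image dimension, 2DPCA avoids both the huge covariance matrix of classical PCA and, unlike LDA, does not need more samples than features.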


NeuroImage ◽  
2021 ◽  
pp. 118200
Author(s):  
Sayan Ghosal ◽  
Qiang Chen ◽  
Giulio Pergola ◽  
Aaron L. Goldman ◽  
William Ulrich ◽  
...  

Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4454 ◽  
Author(s):  
Marek Piorecky ◽  
Vlastimil Koudelka ◽  
Jan Strobl ◽  
Martin Brunovsky ◽  
Vladimir Krajca

Simultaneous recordings of electroencephalogram (EEG) and functional magnetic resonance imaging (fMRI) are at the forefront of technologies of interest to physicians and scientists because they combine the benefits of both modalities: better time resolution (hdEEG) and better spatial resolution (fMRI). However, EEG measurements in the scanner are contaminated by an electromagnetic field induced in the leads as a result of gradient switching, slight head movements and vibrations, and are corrupted by changes in the measured potential due to the Hall phenomenon. The aim of this study is to design and test a methodology for inspecting hidden EEG structures with respect to artifacts. We propose a top-down strategy to obtain additional information that is not visible in a single recording. The time-domain independent component analysis algorithm was employed to obtain independent components and spatial weights. A nonlinear dimension reduction technique, t-distributed stochastic neighbor embedding (t-SNE), was used to create a low-dimensional space, which was then partitioned using density-based spatial clustering of applications with noise (DBSCAN). The relationships between the discovered data structure and the criteria used were investigated. As a result, we were able to extract information from the data structure regarding electrooculographic, electrocardiographic, electromyographic and gradient artifacts. This new methodology could facilitate the identification of artifacts and their residues in simultaneous EEG recorded during fMRI.
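The partitioning step of the pipeline can be sketched with a minimal DBSCAN on synthetic 2-D points standing in for the t-SNE embedding of component features. The eps and min_pts values below are illustrative, not the study's settings.

```python
import numpy as np

def dbscan(pts, eps=0.3, min_pts=5):
    """Minimal DBSCAN: label -1 marks noise, clusters are numbered from 0."""
    n = len(pts)
    dist = np.linalg.norm(pts[:, None] - pts[None], axis=-1)
    neighbors = [np.flatnonzero(dist[i] <= eps) for i in range(n)]
    labels = np.full(n, -1)
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or len(neighbors[i]) < min_pts:
            continue                          # already claimed, or not a core point
        labels[i] = cluster                   # new core point seeds a cluster
        stack = list(neighbors[i])
        while stack:
            j = stack.pop()
            if labels[j] == -1:
                labels[j] = cluster
                if len(neighbors[j]) >= min_pts:   # expand through core points only
                    stack.extend(neighbors[j])
        cluster += 1
    return labels

rng = np.random.default_rng(6)
blob1 = rng.normal([0, 0], 0.1, (50, 2))      # e.g. cardiac-artifact components
blob2 = rng.normal([2, 2], 0.1, (50, 2))      # e.g. gradient-artifact components
labels = dbscan(np.vstack([blob1, blob2]))
print(len(set(labels.tolist()) - {-1}))       # number of density clusters found
```

DBSCAN is a natural fit here because it needs no preset cluster count and leaves stray components unlabeled as noise instead of forcing them into a cluster.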


2009 ◽  
Vol 610-613 ◽  
pp. 450-453
Author(s):  
Hong Yan Duan ◽  
You Tang Li ◽  
Jin Zhang ◽  
Gui Ping He

The fracture behavior of an ecomaterial (aluminum-alloyed cast iron) under extra-low-cycle rotating bending fatigue loading was studied using artificial neural networks (ANN) in this paper. The experimental data were used to form the training set of the ANN. The ANN model was in excellent agreement with the experimental results, and the fracture design parameters predicted by the trained neural network model appear more reasonable than those obtained by approximate methods; ANN is thus a fairly promising prediction technique if properly used. The ANN training procedure is introduced first, after which the training data for the development of the neural network model were obtained from the experiments. The input parameters were the notch depth, the preset deflection and the tip radius of the notch, and the output parameter was the number of cycles to fracture. The ANN model was developed using a back-propagation architecture with three layers and jump connections, where every layer was connected to every previous layer, and the number of hidden neurons was determined according to a special formula. Finally, the performance of the system is summarized: to facilitate comparison of the predicted values, the error evaluation and the mean relative error were obtained. The results show that the trained model performs well, and the experimental data and the data predicted by the ANN are in good agreement.
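The kind of network described, a small feed-forward net trained by backpropagation to map three notch parameters to cycles to fracture, can be sketched as below. The data are synthetic and the layer sizes, learning rate and lack of jump connections are simplifying assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.random((200, 3))                   # notch depth, deflection, tip radius
y = (np.sin(X @ np.array([2.0, 1.0, 0.5])) + 1.0)[:, None]  # synthetic target

# One hidden tanh layer, linear output; trained by full-batch backpropagation
W1 = rng.standard_normal((3, 16)) * 0.5; b1 = np.zeros(16)
W2 = rng.standard_normal((16, 1)) * 0.5; b2 = np.zeros(1)
lr = 0.1

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)               # forward pass: hidden layer
    pred = h @ W2 + b2                     # forward pass: output layer
    err = pred - y
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)      # output-layer grads
    dh = (err @ W2.T) * (1 - h ** 2)                      # backprop through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)        # hidden-layer grads
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float((err ** 2).mean())
print(round(mse, 5))                       # training error after fitting
```

In the paper's setting the same loop would run on measured notch parameters and fracture cycle counts, with the mean relative error on held-out experiments as the reported performance figure.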


2021 ◽  
Vol 11 (3) ◽  
pp. 1013
Author(s):  
Zvezdan Lončarević ◽  
Rok Pahič ◽  
Aleš Ude ◽  
Andrej Gams

Autonomous robot learning in unstructured environments often faces the problem that the dimensionality of the search space is too large for practical applications. Dimensionality reduction techniques have been developed to address this problem and describe motor skills in low-dimensional latent spaces. Most of these techniques require the availability of a sufficiently large database of example task executions to compute the latent space. However, the generation of many example task executions on a real robot is tedious, and prone to errors and equipment failures. The main result of this paper is a new approach for efficient database gathering by performing a small number of task executions with a real robot and applying statistical generalization, e.g., Gaussian process regression, to generate more data. We have shown in our experiments that the data generated this way can be used for dimensionality reduction with autoencoder neural networks. The resulting latent spaces can be exploited to implement robot learning more efficiently. The proposed approach has been evaluated on the problem of robotic throwing at a target. Simulation and real-world results with the humanoid robot TALOS are provided. They confirm the effectiveness of generalization-based database acquisition and the efficiency of learning in a low-dimensional latent space.
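The database-gathering idea can be sketched with Gaussian process regression in one dimension: a handful of real executions (parameter to outcome pairs) are generalized into many synthetic examples via the GP posterior mean. The kernel, its lengthscale and the toy outcome function are illustrative assumptions, not the paper's throwing setup.

```python
import numpy as np

def rbf(a, b, ell=0.3):
    """Squared-exponential kernel between two 1-D point sets."""
    d2 = (a[:, None] - b[None]) ** 2
    return np.exp(-0.5 * d2 / ell ** 2)

rng = np.random.default_rng(8)
x_train = np.linspace(0, 1, 7)                    # few real robot trials
y_train = np.sin(4 * x_train)                     # measured outcomes (synthetic)

# GP posterior mean at many new query parameters: k(X*, X) K^{-1} y
K = rbf(x_train, x_train) + 1e-6 * np.eye(len(x_train))   # jitter for stability
x_query = np.linspace(0, 1, 200)                  # synthetic database entries
y_query = rbf(x_query, x_train) @ np.linalg.solve(K, y_train)

err = float(np.max(np.abs(y_query - np.sin(4 * x_query))))
print(round(err, 5))                              # generalization error
```

The 200 generated pairs would then serve as the training database for the autoencoder, sparing the robot from executing each of them physically.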

