Equation-Free Continuation of Maximal Vibration Amplitudes in a Nonlinear Rotor-Bearing Model of a Turbocharger

Author(s):  
Michael Elmegaard ◽  
Jan Rübel ◽
Mizuho Inagaki ◽  
Atsushi Kawamoto ◽  
Jens Starke

Mechanical systems are typically described by finite element models, resulting in high-dimensional dynamical systems. This high dimensionality precludes the application of investigation methods such as numerical continuation and bifurcation analysis for studying the dynamical behaviour and its parameter dependence. Nevertheless, the dynamical behaviour usually lives on a low-dimensional manifold, although typically no closed equations are available for the macroscopic quantities of interest. Therefore, an equation-free approach is suggested here to analyse the vibration behaviour of nonlinear rotating machinery. As a next step, this allows the rotor design specifications to be optimized to reduce unbalance vibrations of a rotor-bearing system with nonlinear factors such as the oil film dynamics. As an example, we provide a simple model of a passenger car turbocharger and investigate how the maximal vibration amplitude of the rotor depends on the viscosity of the oil used in the bearings.
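The paper tracks solution branches of the full rotor model without closed macroscopic equations; the underlying continuation idea can be shown on a much smaller scale. The sketch below is a minimal illustration only: it continues the steady-state amplitude of a harmonically forced Duffing oscillator (a stand-in model, not the rotor-bearing system) as a damping-like parameter is stepped, reusing each converged amplitude as the Newton starting guess for the next parameter value. All parameter values are assumptions chosen for the toy example.

```python
import numpy as np

def duffing_amp_residual(a, c, w=1.0, w0=1.0, beta=0.05, F=1.0):
    """Steady-state amplitude equation of a harmonically forced Duffing
    oscillator (single-harmonic balance); a root a > 0 is the response
    amplitude at forcing frequency w with damping c."""
    r = (w0**2 - w**2) * a + 0.75 * beta * a**3
    return r**2 + (c * w * a)**2 - F**2

def continuation_in_damping(cs, a0=2.5):
    """Natural-parameter continuation: step the damping parameter c and
    reuse the previous amplitude as the Newton initial guess."""
    amps, a = [], a0
    for c in cs:
        for _ in range(50):  # scalar Newton iteration on the residual
            h = duffing_amp_residual(a, c)
            # finite-difference derivative keeps the sketch short
            dh = (duffing_amp_residual(a + 1e-7, c) - h) / 1e-7
            a -= h / dh
            if abs(h) < 1e-12:
                break
        amps.append(a)
    return np.array(amps)

cs = np.linspace(0.2, 1.0, 9)
amps = continuation_in_damping(cs)
# Higher damping (playing the role of oil viscosity) lowers the
# resonance amplitude along the continued branch.
```

The same predictor-corrector structure carries over to the equation-free setting, where the residual is evaluated by short bursts of full-model simulation instead of a closed formula.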

Author(s):  
MIAO CHENG ◽  
BIN FANG ◽  
YUAN YAN TANG ◽  
HENGXIN CHEN

Many problems in pattern classification and feature extraction involve dimensionality reduction as a necessary processing step. Traditional manifold learning algorithms, such as ISOMAP, LLE, and Laplacian Eigenmap, seek the low-dimensional manifold in an unsupervised way, while local discriminant analysis methods identify the underlying supervised submanifold structures. In addition, it is well known that the intraclass null subspace contains the most discriminative information when the original data lie in a high-dimensional space. In this paper, we seek the local null space in accordance with the null space LDA (NLDA) approach and show that its computational expense depends mainly on the number of connected edges in the graphs, which may still be unacceptable when a large number of samples are involved. To address this limitation, an improved local null space algorithm is proposed that employs the penalty subspace to approximate the local discriminant subspace. Compared with the traditional approach, the proposed method is more efficient, so the overload problem is avoided, while only slight discriminant power is lost in theory. A comparative classification study shows that the performance of the approximative algorithm is quite close to that of the genuine one.
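The core null-space observation can be illustrated in a few lines. The sketch below is not the paper's local algorithm; it only demonstrates the classical NLDA setting it builds on: when there are fewer samples than dimensions, the within-class scatter matrix is rank deficient, and projecting onto its null space collapses each class to a single point while class means stay separated. The synthetic data, sizes, and thresholds are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# High-dimensional data (d=20) with few samples per class (n=5 each),
# so the within-class scatter matrix is rank deficient and has a
# nontrivial null space -- the setting NLDA exploits.
d, n_per_class = 20, 5
means = [rng.normal(size=d), rng.normal(size=d)]
X = np.vstack([m + 0.1 * rng.normal(size=(n_per_class, d)) for m in means])
y = np.repeat([0, 1], n_per_class)

# Within-class scatter S_w: sum over classes of centered outer products.
Sw = np.zeros((d, d))
for c in (0, 1):
    Xc = X[y == c] - X[y == c].mean(axis=0)
    Sw += Xc.T @ Xc

# Null space of S_w: right singular vectors whose singular values are
# numerically zero.
U, s, Vt = np.linalg.svd(Sw)
null_basis = Vt[s < 1e-10 * s.max()]      # shape: (d - rank(S_w), d)

# Projecting onto the null space annihilates within-class variation:
# each class collapses to a point, yet the class means remain apart.
Z = X @ null_basis.T
```

The paper's contribution concerns computing such subspaces locally and cheaply; the expensive step being approximated is precisely this eigendecomposition over graph neighbourhoods.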


Author(s):  
Muhammad Amjad

Advances in manifold learning have proven to be of great benefit in reducing the dimensionality of large, complex datasets. Elements in an intricate dataset will typically belong to a high-dimensional space, as the number of individual features or independent variables will be extensive. However, these elements can be integrated into a low-dimensional manifold with well-defined parameters. By constructing a low-dimensional manifold embedded in the high-dimensional feature space, the dataset can be simplified for easier interpretation. Despite this dimensionality reduction, the dataset's constituents do not lose information; rather, the information is filtered in the hope of elucidating the appropriate knowledge. This paper explores the importance of this method of data analysis, its applications, and its extensions into topological data analysis.
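The pipeline this survey describes can be made concrete with a minimal, self-contained Isomap-style sketch: build a neighbourhood graph, approximate geodesic distances along the manifold by shortest paths, then recover low-dimensional coordinates with classical MDS. The toy dataset (a 1-D helix in 3-D) and all sizes below are assumptions for illustration, not from the paper.

```python
import numpy as np

# Toy "intricate" dataset: a 1-D manifold (a helix) embedded in 3-D.
t = np.linspace(0.0, 3.0, 40)
X = np.column_stack([np.cos(t), np.sin(t), t])

# 1) Pairwise Euclidean distances and a k-nearest-neighbour graph.
diff = X[:, None, :] - X[None, :, :]
E = np.sqrt((diff ** 2).sum(-1))
k = 2
G = np.full_like(E, np.inf)
for i in range(len(X)):
    nbrs = np.argsort(E[i])[1 : k + 1]     # skip self at distance 0
    G[i, nbrs] = E[i, nbrs]
G = np.minimum(G, G.T)                     # symmetrise the graph
np.fill_diagonal(G, 0.0)

# 2) Geodesic distances: all-pairs shortest paths (Floyd-Warshall).
D = G.copy()
for m in range(len(X)):
    D = np.minimum(D, D[:, m, None] + D[None, m, :])

# 3) Classical MDS on the geodesic distances (double centering).
n = len(X)
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
w, V = np.linalg.eigh(B)
embedding = V[:, -1] * np.sqrt(w[-1])      # 1-D coordinate ~ arc length
```

The recovered one-dimensional coordinate orders the points by arc length along the helix, which is exactly the sense in which the manifold's parameters summarise the high-dimensional data without discarding its structure.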


2016 ◽  
Vol 2016 ◽  
pp. 1-16 ◽  
Author(s):  
Li Jiang ◽  
Shunsheng Guo

The high-dimensional features of defective bearings usually include redundant and irrelevant information, which degrades diagnosis performance. It is therefore critical to extract sensitive low-dimensional characteristics to improve diagnosis performance. This paper proposes modified kernel marginal Fisher analysis (MKMFA) for feature extraction with dimensionality reduction. Owing to its outstanding performance in enhancing intraclass compactness and interclass dispersibility, MKMFA can effectively extract the sensitive low-dimensional manifold characteristics beneficial to subsequent pattern classification, even with few training samples. An MKMFA-based fault diagnosis model is presented and applied to identify different bearing faults. It first uses MKMFA to extract the low-dimensional manifold characteristics directly from the raw time-series signal samples in the high-dimensional ambient space. The sensitive low-dimensional characteristics in feature space are then fed into a K-nearest neighbor classifier to distinguish the various fault patterns. Results of four-fault-type and ten-fault-severity bearing fault diagnosis experiments show the feasibility and superiority of the proposed scheme in comparison with five other methods.
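The classification stage of such a pipeline is straightforward to sketch. MKMFA itself is not reproduced here; instead, synthetic 2-D clusters stand in for its output (intraclass-compact, interclass-dispersed features), and a plain K-nearest-neighbor vote assigns fault labels. All data and parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for MKMFA output: 2-D "sensitive" features for three
# hypothetical fault classes, each a compact, well-separated cluster.
centers = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])
Xtr = np.vstack([c + 0.3 * rng.normal(size=(20, 2)) for c in centers])
ytr = np.repeat([0, 1, 2], 20)
Xte = np.vstack([c + 0.3 * rng.normal(size=(10, 2)) for c in centers])
yte = np.repeat([0, 1, 2], 10)

def knn_predict(Xtr, ytr, Xte, k=5):
    """Majority vote among the k nearest training samples (Euclidean)."""
    d = np.linalg.norm(Xte[:, None, :] - Xtr[None, :, :], axis=-1)
    idx = np.argsort(d, axis=1)[:, :k]
    votes = ytr[idx]
    return np.array([np.bincount(v).argmax() for v in votes])

pred = knn_predict(Xtr, ytr, Xte)
accuracy = (pred == yte).mean()
```

The point of the feature-extraction step is visible here: once classes form compact, dispersed clusters in the low-dimensional space, even this simplest classifier separates them reliably.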


2020 ◽  
Vol 34 (04) ◽  
pp. 3666-3675
Author(s):  
Marissa Connor ◽  
Christopher Rozell

Deep generative networks have been widely used for learning mappings from a low-dimensional latent space to a high-dimensional data space. In many cases, data transformations are defined by linear paths in this latent space. However, the Euclidean structure of the latent space may be a poor match for the underlying latent structure in the data. In this work, we incorporate a generative manifold model into the latent space of an autoencoder in order to learn the low-dimensional manifold structure from the data and adapt the latent space to accommodate this structure. In particular, we focus on applications in which the data has closed transformation paths which extend from a starting point and return to nearly the same point. Through experiments on data with natural closed transformation paths, we show that this model introduces the ability to learn the latent dynamics of complex systems, generate transformation paths, and classify samples that belong on the same transformation path.


2021 ◽  
Vol 118 (29) ◽  
pp. e2100473118
Author(s):  
Duluxan Sritharan ◽  
Shu Wang ◽  
Sahand Hormoz

Most high-dimensional datasets are thought to be inherently low-dimensional—that is, data points are constrained to lie on a low-dimensional manifold embedded in a high-dimensional ambient space. Here, we study the viability of two approaches from differential geometry to estimate the Riemannian curvature of these low-dimensional manifolds. The intrinsic approach relates curvature to the Laplace–Beltrami operator using the heat-trace expansion and is agnostic to how a manifold is embedded in a high-dimensional space. The extrinsic approach relates the ambient coordinates of a manifold’s embedding to its curvature using the Second Fundamental Form and the Gauss–Codazzi equation. We found that the intrinsic approach fails to accurately estimate the curvature of even a two-dimensional constant-curvature manifold, whereas the extrinsic approach was able to handle more complex toy models, even when confounded by practical constraints like small sample sizes and measurement noise. To test the applicability of the extrinsic approach to real-world data, we computed the curvature of a well-studied manifold of image patches and recapitulated its topological classification as a Klein bottle. Lastly, we applied the extrinsic approach to study single-cell transcriptomic sequencing (scRNAseq) datasets of blood, gastrulation, and brain cells to quantify the Riemannian curvature of scRNAseq manifolds.
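The extrinsic approach can be illustrated on the simplest constant-curvature case the abstract mentions: points sampled near a point of a sphere. A least-squares quadratic patch is fitted to heights above the tangent plane; the Hessian of the fitted patch estimates the Second Fundamental Form, whose determinant gives the Gaussian curvature. This is a minimal sketch, not the paper's estimator; the sphere radius, neighbourhood size, and sample count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
R = 2.0                                   # sphere radius; true K = 1/R^2

# Neighbourhood of the "north pole" of a sphere of radius R: tangent-plane
# coordinates (u, v) and the exact height of the surface above that plane.
uv = rng.uniform(-0.5, 0.5, size=(50, 2))
u, v = uv[:, 0], uv[:, 1]
h = np.sqrt(R**2 - u**2 - v**2) - R       # height above the tangent plane

# Extrinsic estimate: least-squares fit of a quadratic patch
# h ~ a*u + b*v + c*u^2 + d*u*v + e*v^2. At the base point the surface
# normal is the z-axis, so the Hessian of the fitted patch estimates the
# Second Fundamental Form there.
A = np.column_stack([u, v, u**2, u * v, v**2])
coef, *_ = np.linalg.lstsq(A, h, rcond=None)
a, b, c, d, e = coef
H = np.array([[2 * c, d], [d, 2 * e]])    # estimated Second Fundamental Form
K_est = np.linalg.det(H)                  # Gaussian curvature estimate
```

On noisy real data the same idea requires first estimating the tangent plane (e.g. by local PCA) and contending with sample-size effects, which is where the practical constraints discussed in the paper enter.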


2021 ◽  
Author(s):  
Duluxan Sritharan ◽  
Shu Wang ◽  
Sahand Hormoz

Most high-dimensional datasets are thought to be inherently low-dimensional; that is, data points are constrained to lie on a low-dimensional manifold embedded in a high-dimensional ambient space. Here we study the viability of two approaches from differential geometry to estimate the Riemannian curvature of these low-dimensional manifolds. The intrinsic approach relates curvature to the Laplace-Beltrami operator using the heat-trace expansion, and is agnostic to how a manifold is embedded in a high-dimensional space. The extrinsic approach relates the ambient coordinates of a manifold's embedding to its curvature using the Second Fundamental Form and the Gauss-Codazzi equation. Keeping in mind practical constraints of real-world datasets, such as small sample sizes and measurement noise, we found that estimating curvature is feasible, even for simple low-dimensional toy manifolds, only when the extrinsic approach is used. To test the applicability of the extrinsic approach to real-world data, we computed the curvature of a well-studied manifold of image patches, and recapitulated its topological classification as a Klein bottle. Lastly, we applied the approach to study single-cell transcriptomic sequencing (scRNAseq) datasets of blood, gastrulation, and brain cells, revealing for the first time the intrinsic curvature of scRNAseq manifolds.


2017 ◽  
Vol 19 (12) ◽  
pp. 125012 ◽  
Author(s):  
Carlos Floyd ◽  
Christopher Jarzynski ◽  
Garegin Papoian

2020 ◽  
Author(s):  
Wei Guo ◽  
Jie J. Zhang ◽  
Jonathan P. Newman ◽  
Matthew A. Wilson

Latent learning allows the brain to transform experiences into cognitive maps, a form of implicit memory, without reinforced training. Its mechanism is unclear. We tracked the internal states of hippocampal neural ensembles and discovered that, during latent learning of a spatial map, the state space evolved into a low-dimensional manifold that topologically resembled the physical environment. This process requires repeated experiences and sleep in between. Further investigation revealed that a subset of hippocampal neurons, instead of rapidly forming place fields in a novel environment, remained weakly tuned but gradually developed correlated activity with other neurons. These 'weakly spatial' neurons bind the activity of neurons with stronger spatial tuning, linking discrete place fields into a map that supports flexible navigation.

