An Emulator-Based Prediction of Dynamic Stiffness for Redundant Parallel Kinematic Mechanisms

2015 ◽  
Vol 8 (2) ◽  
Author(s):  
Mario Luces ◽  
Pinar Boyraz ◽  
Masih Mahmoodi ◽  
Farhad Keramati ◽  
James K. Mills ◽  
...  

The accuracy of a parallel kinematic mechanism (PKM) is directly related to its dynamic stiffness, which in turn is configuration dependent. For PKMs with kinematic redundancy, configurations with higher stiffness can be chosen during motion-trajectory planning for optimal performance. Herein, dynamic stiffness refers to the deformation of the mechanism structure, subject to dynamic loads of changing frequency. The stiffness-optimization problem has two computational constraints: (i) calculation of the dynamic stiffness of any considered PKM configuration, at a given task-space location, and (ii) searching for the PKM configuration with the highest stiffness at this location. Due to the lack of available analytical models, the former subproblem is addressed herein via a novel and effective emulator that provides a computationally efficient approximation of the high-dimensional dynamic-stiffness function suitable for optimization. The proposed method for emulator development identifies the mechanism's structural modes in order to break down the high-dimensional stiffness function into multiple functions of lower dimension. Despite their computational efficiency, emulators approximating high-dimensional functions are often difficult to develop and implement due to the large amount of data required for training. Reducing the dimensionality of the approximation function thus results in a smaller training data set, which in turn can be obtained accurately via finite-element analysis (FEA). Moving least-squares (MLS) approximation is proposed herein to compute the low-dimensional functions for stiffness approximation. Extensive simulations, some of which are described herein, demonstrate that the proposed emulator can predict the dynamic stiffness of a PKM at any given configuration with high accuracy and low computational expense, making it well suited to high-precision applications. For example, our results show that the proposed methodology can choose configurations along given trajectories within a few percentage points of the optimal ones.
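
As a rough illustration of the moving least-squares idea used for the low-dimensional component functions, the sketch below fits a locally weighted polynomial to scattered samples. The toy stiffness function, Gaussian weight width, and polynomial degree are assumptions for illustration, not the authors' emulator.

```python
# A minimal moving least-squares (MLS) sketch: a locally weighted polynomial
# fit evaluated at a query point. All data here are illustrative placeholders.
import numpy as np

def mls_predict(x_query, X_train, y_train, h=0.5, degree=1):
    """Locally weighted polynomial fit evaluated at x_query (1-D inputs)."""
    # Gaussian weights: training points near the query dominate the fit.
    w = np.exp(-((X_train - x_query) ** 2) / (2 * h ** 2))
    # Polynomial basis [1, x, x^2, ...] at the training points.
    P = np.vander(X_train, degree + 1, increasing=True)
    # Solve the weighted normal equations P^T W P c = P^T W y.
    W = np.diag(w)
    coeffs = np.linalg.solve(P.T @ W @ P, P.T @ W @ y_train)
    # Evaluate the local polynomial at the query point.
    return np.polyval(coeffs[::-1], x_query)

# Toy low-dimensional "stiffness" function standing in for FEA samples.
rng = np.random.default_rng(0)
X = np.linspace(0.0, 2.0, 40)
y = 1.0 / (1.0 + (X - 1.2) ** 2) + 0.01 * rng.standard_normal(40)
print(mls_predict(1.0, X, y))
```

The width h controls how local the fit is; in an emulator setting, the training samples would come from FEA runs rather than a toy function.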

2014 ◽  
Vol 6 ◽  
pp. 238928 ◽  
Author(s):  
Hai-wei Luo ◽  
Hui Wang ◽  
Jun Zhang ◽  
Qi Li

Based on substructure synthesis and modal reduction techniques, a computationally efficient elastodynamic model for a fully flexible 3-RPS parallel kinematic machine (PKM) tool is proposed, in which the frequency response function (FRF) at the end of the tool can be obtained at any given position throughout its workspace. In the proposed elastodynamic model, the whole system is divided into a moving platform subsystem and three identical RPS limb subsystems, with all joint compliances included. The spherical joint and the revolute joint are treated as lumped virtual springs of equal stiffness; the platform is treated as a rigid body, and the RPS limbs are modelled with modal reduction techniques. With the compatibility conditions at the interfaces between the limbs and the platform, an analytical governing differential equation for the system is derived. Based on the derived model, position-dependent dynamic characteristics such as natural frequencies, mode shapes, and FRFs of the 3-RPS PKM are simulated. The simulation results indicate that the distributions of natural frequencies throughout the workspace are strongly dependent on the mechanism's configuration and exhibit an axially symmetric tendency. Subsequent finite element analysis and modal tests both validate the analytical results for the natural frequencies, mode shapes, and FRFs.
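
For readers unfamiliar with FRF computation, the sketch below shows the generic receptance calculation H(w) = (K - w^2 M + iwC)^(-1) on an arbitrary reduced 2-DOF system; the matrices and proportional damping are assumptions for illustration, not the synthesized 3-RPS model.

```python
# A hedged sketch of computing a frequency response function (FRF) from
# reduced mass/stiffness/damping matrices, as one would after substructure
# synthesis. The 2-DOF matrices below are illustrative, not the 3-RPS model.
import numpy as np

M = np.array([[2.0, 0.0], [0.0, 1.0]])          # reduced mass matrix
K = np.array([[4e6, -1e6], [-1e6, 1e6]])        # reduced stiffness matrix
C = 1e-4 * K                                     # proportional damping (assumed)

freqs = np.linspace(1.0, 500.0, 1000)            # Hz
tip = 1                                          # DOF taken as the "tool tip"
frf = []
for f in freqs:
    w = 2 * np.pi * f
    # Dynamic stiffness: Z(w) = K - w^2 M + i w C; the FRF is its inverse.
    Z = K - w**2 * M + 1j * w * C
    H = np.linalg.inv(Z)
    frf.append(abs(H[tip, tip]))                 # driving-point receptance

# Natural frequencies from the undamped eigenproblem K x = w^2 M x.
w2 = np.linalg.eigvals(np.linalg.inv(M) @ K)
print(np.sqrt(np.sort(w2.real)) / (2 * np.pi))   # in Hz
```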


2021 ◽  
Vol 50 (1) ◽  
pp. 138-152
Author(s):  
Mujeeb Ur Rehman ◽  
Dost Muhammad Khan

Anomaly detection has recently drawn considerable attention from data mining researchers, with steadily growing adoption in practical domains such as product marketing, fraud detection, medical diagnosis, and fault detection, among other fields. Outlier detection in high-dimensional data poses exceptional challenges because of the curse of dimensionality and the diminishing contrast between distant and nearby points. Traditional algorithms perform detection over the full feature space and concentrate largely on low-dimensional data, so they prove ineffective at discovering anomalies in data sets with many dimensions. Exhaustively exploring all subspace projections to uncover such anomalies is computationally prohibitive. Moreover, in high-dimensional data all points come to resemble one another, since the relative contrast between pairwise distances vanishes as the number of dimensions grows toward infinity. This work proposes a novel technique that measures deviation among all data points and embeds its findings within well-established density-based techniques. The approach opens a new line of research into the inherent problems of high-dimensional data in which outliers reside within clusters of differing densities. A high-dimensional data set from the UCI Machine Learning Repository is used to test the proposed technique, and its results are compared with those of density-based techniques to evaluate its efficiency.
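
As a point of reference, the sketch below runs a standard density-based detector (local outlier factor) of the kind the proposed technique is compared against; the synthetic clusters of differing densities and the parameter choices are illustrative, not the paper's UCI experiment.

```python
# A minimal density-based baseline (local outlier factor); the data and
# parameters are illustrative, not those of the paper's experiments.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(1)
# Two clusters of different densities plus a few injected outliers.
dense = rng.normal(0.0, 0.3, size=(200, 10))
sparse = rng.normal(5.0, 1.5, size=(100, 10))
outliers = rng.uniform(-4.0, 9.0, size=(5, 10))
X = np.vstack([dense, sparse, outliers])

lof = LocalOutlierFactor(n_neighbors=20)
labels = lof.fit_predict(X)              # -1 flags outliers
print(np.where(labels == -1)[0])         # indices of flagged points
```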


2008 ◽  
Vol 18 (03) ◽  
pp. 195-205 ◽  
Author(s):  
WEIBAO ZOU ◽  
ZHERU CHI ◽  
KING CHUEN LO

Image classification is a challenging problem in organizing large image databases, and an effective method remains under investigation. This paper presents a method based on wavelet analysis to extract features for image classification. After an image is decomposed by the wavelet transform, feature statistics are obtained from histograms of the wavelet coefficients projected onto two orthogonal axes, the x and y directions. The nodes of a tree representation of the image can then be characterized by these distributions. The high-level features are described in a low-dimensional space of 16 attributes, significantly decreasing the computational complexity. In the experiments, 2800 images drawn from seven categories are used: half for training a neural network and half for testing. Both the wavelet-based features and conventional features are evaluated to demonstrate the efficacy of the proposed method. With wavelet analysis, the classification rate reaches 91% on the training set and 89% on the testing set. Experimental results show that the proposed approach to image classification is the more effective.
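
A sketch of this style of feature extraction is shown below, using the PyWavelets library to decompose an image and histogram coefficient profiles projected onto the x and y axes. The wavelet choice, bands, and bin counts are assumptions that happen to yield 16 attributes, not necessarily the authors' exact construction.

```python
# A sketch of wavelet-histogram features: decompose the image, then
# histogram coefficient energies projected onto the x and y axes.
import numpy as np
import pywt

def wavelet_axis_features(image, wavelet="haar", bins=8):
    # One-level 2-D wavelet decomposition.
    cA, (cH, cV, cD) = pywt.dwt2(image, wavelet)
    feats = []
    for band in (cH, cV):
        # Project squared coefficients onto each axis, then histogram.
        for axis in (0, 1):
            profile = (band ** 2).sum(axis=axis)
            hist, _ = np.histogram(profile, bins=bins // 2)
            feats.extend(hist / max(hist.sum(), 1))
    return np.array(feats)               # 16 attributes with these settings

img = np.random.default_rng(0).random((64, 64))
print(wavelet_axis_features(img).shape)  # (16,)
```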


Author(s):  
Bharat Gupta ◽  
Durga Toshniwal

In high-dimensional data, many outliers are embedded in low-dimensional subspaces; these are known as projected outliers. Most existing outlier detection techniques cannot find them because they detect abnormal patterns in the full data space, so outlier detection in high-dimensional data remains an important research problem. In this paper, we propose an approach for outlier detection in high-dimensional data by modifying the existing SPOT approach with three new concepts: adaptation of the Sparse Subspace Template (SST), different combinations of PCS parameters, and a set of non-outlying cells for the testing data set.
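
For intuition only, the sketch below scores points by the sparsity of their cells across random low-dimensional projections. This is a generic illustration of projected-outlier detection, not the SPOT algorithm or the SST/PCS modifications themselves; the subspace size and grid resolution are assumptions.

```python
# Generic projected-outlier scoring: a point that lands in a sparse cell
# of some low-dimensional projection is a candidate projected outlier.
import numpy as np

def projected_outlier_scores(X, n_subspaces=20, dim=2, grid=5, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    scores = np.full(n, np.inf)
    for _ in range(n_subspaces):
        cols = rng.choice(d, size=dim, replace=False)
        sub = X[:, cols]
        # Discretize each chosen dimension into equal-width cells.
        edges = [np.linspace(sub[:, j].min(), sub[:, j].max(), grid + 1)
                 for j in range(dim)]
        cells = np.stack([np.clip(np.digitize(sub[:, j], edges[j]) - 1,
                                  0, grid - 1) for j in range(dim)], axis=1)
        # Count how many points share each point's cell; keep the minimum
        # count seen over all sampled subspaces.
        _, inverse, counts = np.unique(cells, axis=0, return_inverse=True,
                                       return_counts=True)
        scores = np.minimum(scores, counts[inverse])
    return scores                        # low score = likely projected outlier

X = np.random.default_rng(2).normal(size=(300, 15))
X[0] += 8.0                              # inject one outlier
print(projected_outlier_scores(X).argsort()[:3])
```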


Author(s):  
Felix Jimenez ◽  
Amanda Koepke ◽  
Mary Gregg ◽  
Michael Frey

A generative adversarial network (GAN) is an artificial neural network with a distinctive training architecture, designed to create examples that faithfully reproduce a target distribution. GANs have recently had particular success in applications involving high-dimensional distributions in areas such as image processing. Little work has been reported for low dimensions, where properties of GANs may be better identified and understood. We studied GAN performance in simulated low-dimensional settings, allowing us to transparently assess effects of target distribution complexity and training data sample size on GAN performance in a simple experiment. This experiment revealed two important forms of GAN error, tail underfilling and bridge bias, where the latter is analogous to the tunneling observed in high-dimensional GANs.
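
A minimal low-dimensional GAN of the kind studied here can be sketched as follows, with a bimodal 1-D target where bridge bias (spurious samples between the modes) can be observed. The network sizes and training settings are assumptions, not the authors' experimental setup.

```python
# A minimal 1-D GAN sketch with a toy bimodal target distribution.
import torch
import torch.nn as nn

torch.manual_seed(0)
G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def sample_target(n):
    # Two Gaussian modes; "bridge bias" would show up between them.
    mode = torch.randint(0, 2, (n, 1)).float()
    return torch.randn(n, 1) * 0.2 + (mode * 4.0 - 2.0)

for step in range(2000):
    real, z = sample_target(128), torch.randn(128, 1)
    fake = G(z)
    # Discriminator step: push real toward 1, fake toward 0.
    loss_d = (bce(D(real), torch.ones(128, 1))
              + bce(D(fake.detach()), torch.zeros(128, 1)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator step: try to fool the discriminator.
    loss_g = bce(D(fake), torch.ones(128, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(G(torch.randn(1000, 1)).detach().mean())   # inspect generated samples
```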


Author(s):  
I.A. Borisova ◽  
O.A. Kutnenko

The paper proposes a new approach to data censoring that corrects diagnostic errors in data sets whose samples are described in high-dimensional feature spaces. Treating this case as a separate task is justified by the fact that in high-dimensional spaces most outlier detection and data filtering methods, both statistical and metric, stop working. At the same time, for medical diagnostics tasks, given the complexity of the objects and phenomena studied, a large number of descriptive characteristics is the norm rather than the exception. To solve this problem, an approach is proposed that focuses on local similarity between objects belonging to the same class and uses the function of rival similarity (FRiS function) as the similarity measure. To clean the data of misclassified objects efficiently, the approach selects the most informative and relevant low-dimensional feature subspace, in which the separability of the classes after correction is maximal. Class separability here means the similarity of objects within a class to each other and their dissimilarity to objects of the other class. Cleaning the data of class errors can consist both of correcting labels and of removing outlier objects from the data set. The described method was implemented as the FRiS-LCFS algorithm (FRiS Local Censoring with Feature Selection) and tested on model and real biomedical problems, including the diagnosis of prostate cancer from DNA microarray analysis. The developed algorithm proved competitive with standard methods for filtering data in high-dimensional spaces.
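
The rival-similarity idea can be sketched as follows: each object's score contrasts the distance to its nearest same-class neighbour with the distance to its nearest rival-class neighbour, so strongly negative scores flag likely label errors. This is only an illustration of the FRiS-style measure, not the full FRiS-LCFS algorithm with feature selection and censoring.

```python
# A hedged sketch of a rival-similarity score for flagging label errors.
import numpy as np

def fris_scores(X, y):
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(D, np.inf)          # exclude self-distance
    scores = np.empty(n)
    for i in range(n):
        r_own = D[i, y == y[i]].min()    # nearest neighbour of the same class
        r_rival = D[i, y != y[i]].min()  # nearest neighbour of a rival class
        # FRiS-style score in [-1, 1]: negative values suggest a label error.
        scores[i] = (r_rival - r_own) / (r_rival + r_own)
    return scores

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (50, 20)), rng.normal(3, 1, (50, 20))])
y = np.array([0] * 50 + [1] * 50)
y[0] = 1                                 # inject one label error
print(fris_scores(X, y).argsort()[:3])   # most suspicious objects first
```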


2004 ◽  
Vol 3 (2) ◽  
pp. 109-122 ◽  
Author(s):  
Alistair Morrison ◽  
Matthew Chalmers

The problem of exploring or visualising data of high dimensionality is central to many tools for information visualisation. Through representing a data set in terms of inter-object proximities, multidimensional scaling may be employed to generate a configuration of objects in low-dimensional space in such a way as to preserve high-dimensional relationships. An algorithm is presented here for a heuristic hybrid model for the generation of such configurations. Building on a model introduced in 2002, the algorithm functions by means of sampling, spring model and interpolation phases. The most computationally complex stage of the original algorithm involved the execution of a series of nearest-neighbour searches. In this paper, we describe how the complexity of this phase has been reduced by treating all high-dimensional relationships as a set of discretised distances to a constant number of randomly selected items: pivots. In improving this computational bottleneck, the algorithmic complexity is reduced from O(N√N) to O(N^(5/4)). As well as documenting this improvement, the paper describes an evaluation with a data set of 108,000 13-dimensional items and a set of 23,141 17-dimensional items. Results illustrate that the reduction in complexity is reflected in significantly improved run times and that no negative impact is made upon the quality of the layouts produced.
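
The pivot idea can be sketched as follows: each item is represented by discretised distances to a constant number of randomly chosen pivots, and items sharing bucket codes become candidate near neighbours. The pivot and bin counts here are assumptions, not the paper's settings.

```python
# A sketch of pivot-based discretised distances for cheap near-neighbour
# candidate generation in high-dimensional data.
import numpy as np

def pivot_buckets(X, n_pivots=8, n_bins=16, seed=0):
    rng = np.random.default_rng(seed)
    pivots = X[rng.choice(len(X), n_pivots, replace=False)]
    # Distance from every item to every pivot: shape (N, n_pivots).
    d = np.linalg.norm(X[:, None, :] - pivots[None, :, :], axis=2)
    # Discretise each pivot's distances into equal-width buckets; items
    # sharing bucket codes are candidate near neighbours.
    edges = np.linspace(d.min(axis=0), d.max(axis=0), n_bins + 1)
    return np.stack([np.digitize(d[:, j], edges[:, j])
                     for j in range(n_pivots)], axis=1)

X = np.random.default_rng(4).random((1000, 13))
print(pivot_buckets(X).shape)            # (1000, 8) bucket codes
```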


2010 ◽  
Vol 132 (5) ◽  
Author(s):  
Songqing Shan ◽  
G. Gary Wang

Computational tools such as finite element analysis and simulation are widely used in engineering, but they are mostly used for design analysis and validation. If these tools can be integrated for design optimization, they will undoubtedly enhance a manufacturer's competitiveness. Such integration, however, faces three main challenges: (1) the high computational expense of simulation, (2) the simulation process being a black-box function, and (3) design problems being high dimensional. In the past two decades, metamodeling has been intensively developed to deal with expensive black-box functions, and has achieved success for low-dimensional design problems. But when high dimensionality is also present in design, as is often found in practice, practical methods are lacking for the so-called high-dimensional, expensive, and black-box (HEB) problems. This paper proposes the first metamodel of its kind to tackle the HEB problem, integrating the radial basis function with high-dimensional model representation into a new model, RBF-HDMR. The developed RBF-HDMR model offers an explicit function expression and can reveal (1) the contribution of each design variable, (2) inherent linearity/nonlinearity with respect to input variables, and (3) correlation relationships among input variables. An accompanying algorithm to construct the RBF-HDMR has also been developed. Together, the model and the algorithm fundamentally reduce the computational cost from exponential to polynomial growth. Testing and comparison confirm the efficiency and capability of RBF-HDMR for HEB problems.
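
A first-order HDMR sketch conveys the flavour of the decomposition: f(x) ≈ f0 + Σ_i f_i(x_i), with each component fitted by a radial basis function along a cut line through a reference point. The test function, sampling plan, and use of SciPy's RBFInterpolator are assumptions for illustration, not the authors' RBF-HDMR construction algorithm.

```python
# A hedged first-order HDMR sketch: f(x) ~ f0 + sum_i f_i(x_i), with each
# component fitted by an RBF along a cut line through a reference point.
import numpy as np
from scipy.interpolate import RBFInterpolator

def f(x):                                # expensive black-box stand-in
    return x[..., 0] ** 2 + np.sin(x[..., 1]) + 0.5 * x[..., 2]

d, x0 = 3, np.zeros(3)                   # reference (cut) point
f0 = f(x0)
components = []
for i in range(d):
    xi = np.linspace(-2.0, 2.0, 9)       # samples along the i-th cut line
    pts = np.tile(x0, (9, 1))
    pts[:, i] = xi
    # f_i(x_i) = f on the cut line minus the constant term f0.
    components.append(RBFInterpolator(xi[:, None], f(pts) - f0))

def hdmr_predict(x):                     # x: 1-D array of length d
    return f0 + sum(float(c(np.array([[x[i]]])))
                    for i, c in enumerate(components))

x_test = np.array([1.0, 0.5, -1.0])
print(hdmr_predict(x_test), f(x_test))   # first-order model vs. truth
```

Because the stand-in test function is exactly additive, the first-order model reproduces it closely; real HEB problems would also require the higher-order terms the paper's algorithm accounts for.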


2021 ◽  
Vol 104 (1) ◽  
pp. 003685042110033
Author(s):  
Junqing Yin ◽  
Jinyu Gu ◽  
Yongdang Chen ◽  
Wenbin Tang ◽  
Feng Zhang

Fixed beam structures are widely used in engineering, and a common problem is determining the load conditions resulting from impacts on these structures. In this study, a method is proposed for accurately identifying the location and magnitude of the load causing plastic deformation of a fixed beam, using a backpropagation artificial neural network (BP-ANN). First, a load of known location and magnitude is applied to a finite element model of a fixed beam to create plastic deformation, and a polynomial expression is fitted to the resulting deformed shape. A basic data set is established from a series of such calculations; it consists of the location and magnitude of the applied load and the polynomial coefficients. Then, a BP-ANN model is established to expand the sample set, addressing the common problem of insufficient samples. Finally, using the extended sample set as training data, a BP-ANN prediction model is established with the polynomial coefficients describing the plastic deformation of the fixed beam as input data and the position and magnitude of the load as output data. The prediction results are compared with finite element analysis results to verify the effectiveness of the method.
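
The inverse mapping can be sketched as follows, with a small multilayer perceptron trained to recover load location and magnitude from polynomial shape coefficients. The synthetic shape generator stands in for the FEA runs, and all settings are assumptions, not the paper's model.

```python
# A minimal sketch of the inverse mapping: polynomial coefficients of the
# deformed shape in, load location and magnitude out.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)

def deformed_shape_coeffs(loc, mag, degree=4):
    # Stand-in for FEA: a smooth dip centred at the load location,
    # scaled by magnitude, fitted with a polynomial over the beam span.
    x = np.linspace(0.0, 1.0, 50)
    w = -mag * np.exp(-((x - loc) ** 2) / 0.02) * x * (1 - x)
    return np.polyfit(x, w, degree)

locs = rng.uniform(0.2, 0.8, 500)
mags = rng.uniform(1.0, 10.0, 500)
X = np.array([deformed_shape_coeffs(l, m) for l, m in zip(locs, mags)])
Y = np.column_stack([locs, mags])

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
net.fit(X[:400], Y[:400])                # train on 400 samples
print(net.score(X[400:], Y[400:]))       # R^2 on the held-out 100
```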

