An Enhanced Joint Hilbert Embedding-Based Metric to Support Mocap Data Classification with Preserved Interpretability

Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4443
Author(s):  
Cristian Kaori Valencia-Marin ◽  
Juan Diego Pulgarin-Giraldo ◽  
Luisa Fernanda Velasquez-Martinez ◽  
Andres Marino Alvarez-Meza ◽  
German Castellanos-Dominguez

Motion capture (Mocap) data are widely used as time series to study human movement. Indeed, animation movies, video games, and biomechanical systems for rehabilitation are significant applications related to Mocap data. However, classifying multi-channel time series from Mocap requires coding the intrinsic dependencies (even nonlinear relationships) between human body joints. Furthermore, the same human action varies between executions because individuals alter their movement, which increases inter/intraclass variability. Here, we introduce an enhanced Hilbert embedding-based approach from a cross-covariance operator, termed EHECCO, to map the input Mocap time series to a tensor space built from both 3D skeletal joints and a principal component analysis-based projection. The obtained results demonstrate how EHECCO represents and discriminates joint probability distributions through kernel-based evaluations of the input time series within a tensor reproducing kernel Hilbert space (RKHS). Our approach achieves competitive classification results for style/subject and action recognition tasks on well-known publicly available databases. Moreover, EHECCO favors the interpretation of relevant anthropometric variables correlated with players’ expertise and acted movement on a Tennis-Mocap database (also made publicly available with this work). Thereby, our EHECCO-based framework provides a unified representation (through the tensor RKHS) of the Mocap time series to compute linear correlations between a metric coded from joint distributions and player properties, i.e., age, body measurements, and sport movement (action class).
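The Hilbert embedding at the core of this line of work can be illustrated in miniature: each probability distribution is mapped to its kernel mean in an RKHS, and the RKHS distance between embeddings (the Maximum Mean Discrepancy) discriminates distributions. A minimal sketch with an RBF kernel and synthetic 3D "joint-position" samples — the tensor construction and cross-covariance operator of EHECCO itself are beyond this toy, and all data here are invented:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and Y."""
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * d2)

def mmd2(X, Y, gamma=1.0):
    """Squared Maximum Mean Discrepancy: the RKHS distance between the
    kernel mean embeddings of the two empirical distributions."""
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2.0 * rbf_kernel(X, Y, gamma).mean())

# Synthetic stand-ins for 3D joint-position samples of two actions.
rng = np.random.default_rng(0)
action_a = rng.normal(0.0, 1.0, size=(200, 3))
action_b = rng.normal(2.0, 1.0, size=(200, 3))   # shifted distribution
print(mmd2(action_a, action_b))                   # large: distributions differ
```

A same-distribution pair yields an MMD near zero, which is what makes the embedding usable as a classification metric.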

2021 ◽  
Vol 30 (1) ◽  
pp. 159-167
Author(s):  
Chunsheng Jiang

Abstract A new method of orbit determination (OD) is proposed: distribution regression. The paper focuses on the process of using sparse observation data to determine a spacecraft's orbit without any prior information. The standard regression process learns a map from real numbers to real numbers, but the approach put forward in this paper maps from probability distributions to real-valued responses. Under the new algorithm, the orbital elements can be predicted by embedding the probability distribution of the observations into a reproducing kernel Hilbert space. While making full use of the advantages of big data, it also avoids the problem that precise OD algorithms fail to converge when given improper initial values. Simulation experiments demonstrate the effectiveness, robustness, and speed of the algorithm in the presence of noise in the measurement data.
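Distribution regression of this kind can be sketched as kernel ridge regression in which the kernel between two observation sets is the inner product of their kernel mean embeddings. The toy below is not the paper's OD setup — the bags, responses, and hyperparameters are invented — but it shows the distribution-to-real-value map:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * d2)

def embedding_kernel(bags_a, bags_b, gamma=0.5):
    """K[i, j] = <mu_i, mu_j>: inner product of the kernel mean embeddings
    of bag i and bag j (the mean over all pairwise kernel evaluations)."""
    K = np.empty((len(bags_a), len(bags_b)))
    for i, A in enumerate(bags_a):
        for j, B in enumerate(bags_b):
            K[i, j] = rbf_kernel(A, B, gamma).mean()
    return K

# Toy data: each bag of noisy scalar observations encodes its response (the mean).
rng = np.random.default_rng(1)
targets = rng.uniform(-2.0, 2.0, size=40)
bags = [rng.normal(m, 0.3, size=(50, 1)) for m in targets]

K = embedding_kernel(bags, bags)
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(bags)), targets)  # kernel ridge fit

new_bags = [rng.normal(m, 0.3, size=(50, 1)) for m in (-1.0, 1.0)]
pred = embedding_kernel(new_bags, bags) @ alpha
print(pred)  # close to the true bag means
```

In the OD setting the responses would be orbital elements rather than bag means, but the regression machinery is the same.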


2019 ◽  
Vol 20 (6) ◽  
pp. 562-591
Author(s):  
Sonia Barahona ◽  
Pablo Centella ◽  
Ximo Gual-Arnau ◽  
M. Victoria Ibáñez ◽  
Amelia Simó

The aim of this article is to model an ordinal response variable in terms of vector-valued functional data contained in a vector-valued reproducing kernel Hilbert space (RKHS). In particular, we focus on the vector-valued RKHS obtained when a geometrical object (body) is characterized by a current, and on the ordinal regression model. A common way to solve this problem in functional data analysis is to express the data in the orthonormal basis given by decomposition of the covariance operator. However, our data differ in important ways from the usual functional data setting. On the one hand, they are vector-valued functions, and on the other, they are functions in an RKHS with a previously defined norm. We propose to use three different bases: the orthonormal basis given by the kernel that defines the RKHS, a basis obtained from decomposition of the integral operator defined using the covariance function, and a third basis that combines the previous two. The three approaches are compared and applied to an interesting problem: building a model to predict the fit of children's garment sizes, based on a 3D database of the Spanish child population. Our proposal has been compared with alternative classifiers (Support Vector Machine and k-NN) and with the result of applying the classification method proposed in this work to different characterizations of the objects (landmarks and multivariate anthropometric measurements instead of currents), obtaining worse results in all these cases.
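The covariance-decomposition route mentioned above can be sketched for ordinary scalar-valued functional data: expand each curve in the eigenbasis of the empirical covariance operator and work with the leading coefficients. The sketch below uses a plain nearest-centroid rule on invented two-class curves — not the paper's currents representation, vector-valued RKHS, or ordinal model:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 50)

def curves(amplitude, n):
    """Noisy sine curves on a common grid; amplitude plays the class role."""
    return amplitude * np.sin(2 * np.pi * t)[None, :] + 0.1 * rng.normal(size=(n, t.size))

X = np.vstack([curves(1.0, 30), curves(2.0, 30)])
y = np.repeat([0, 1], 30)

# Eigenbasis of the empirical covariance operator (discretised covariance matrix).
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / len(X)
_, eigvecs = np.linalg.eigh(cov)           # eigh returns ascending eigenvalues
basis = eigvecs[:, ::-1][:, :3]            # three leading eigenfunctions
coef = Xc @ basis                          # basis coefficients of each curve

# Classify in the low-dimensional coefficient space.
centroids = np.stack([coef[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((coef[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
print((pred == y).mean())
```

The paper's contribution lies in choosing this basis when the functions live in an RKHS with its own norm, where the plain L2 covariance decomposition above is no longer the natural choice.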


Entropy ◽  
2020 ◽  
Vol 22 (12) ◽  
pp. 1363
Author(s):  
Michael R. Lindstrom ◽  
Hyuntae Jung ◽  
Denis Larocque

We present an unsupervised method to detect anomalous time series among a collection of time series. To do so, we extend traditional Kernel Density Estimation for estimating probability distributions in Euclidean space to Hilbert spaces. The estimated probability densities we derive can be obtained formally through treating each series as a point in a Hilbert space, placing a kernel at those points, and summing the kernels (a “point approach”), or through using Kernel Density Estimation to approximate the distributions of Fourier mode coefficients to infer a probability density (a “Fourier approach”). We refer to these approaches as Functional Kernel Density Estimation for Anomaly Detection as they both yield functionals that can score a time series for how anomalous it is. Both methods naturally handle missing data and apply to a variety of settings, performing well when compared with an outlyingness score derived from a boxplot method for functional data, with a Principal Component Analysis approach for functional data, and with the Functional Isolation Forest method. We illustrate the use of the proposed methods with aviation safety report data from the International Air Transport Association (IATA).
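The "point approach" is straightforward to sketch: treat each sampled series as a point in L2, place a Gaussian kernel at every such point, and sum; a series sitting in a low-density region is flagged as anomalous. The series, bandwidth, and grid below are invented for illustration:

```python
import numpy as np

def functional_kde_density(series, bandwidth, dt):
    """Point approach: each series is a point in the Hilbert space L2; the
    density at series i sums Gaussian kernels centred at every other series,
    using the (discretised) L2 distance between curves."""
    S = np.asarray(series)
    sq = ((S[:, None, :] - S[None, :, :]) ** 2).sum(-1) * dt   # pairwise L2^2
    K = np.exp(-sq / (2.0 * bandwidth ** 2))
    np.fill_diagonal(K, 0.0)                                    # leave-one-out
    return K.sum(axis=1) / (len(S) - 1)

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 100)
dt = t[1] - t[0]
normal = [np.sin(2 * np.pi * t) + 0.1 * rng.normal(size=t.size) for _ in range(20)]
outlier = np.cos(2 * np.pi * t)            # same range, different shape
dens = functional_kde_density(normal + [outlier], bandwidth=0.5, dt=dt)
print(np.argmin(dens))                     # the outlier has the lowest density
```

Missing observations are handled naturally in this view because the L2 distance can be computed over whichever grid points both curves share; the Fourier approach of the paper instead estimates densities of mode coefficients.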


2003 ◽  
Vol 03 (04) ◽  
pp. 529-544 ◽  
Author(s):  
A. J. LAWRANCE ◽  
RODNEY C. WOLFF

This paper examines stochastic pairwise dependence structures in binary time series obtained from discretised versions of standard chaotic logistic maps. It is motivated by applications in communications modelling which make use of so-called chaotic binary sequences. The strength of non-linear stochastic dependence of the binary sequences is explored. In contrast to the original chaotic sequence, the binary version is non-chaotic with non-Markovian non-linear dependence, except in a special case. Marginal and joint probability distributions, and autocorrelation functions are elicited. Multivariate binary and more discretised time series from a single realisation of the logistic map are developed from the binary paradigm. Proposals for extension of the methodology to other cases of the general logistic map are developed. Finally, a brief illustration of the place of chaos-based binary processes in chaos communications is given.
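The construction can be sketched for the special symmetric case alluded to above: iterating the fully chaotic logistic map x_{n+1} = 4 x_n (1 - x_n) and thresholding at 1/2 yields a binary sequence that behaves like independent fair-coin flips (sample mean near 1/2, autocorrelations near zero). A sketch with an invented seed and burn-in:

```python
import numpy as np

def logistic_binary(x0, n, burn=100):
    """Binary sequence from the fully chaotic logistic map x -> 4x(1-x),
    discretised by thresholding at 1/2."""
    x = x0
    for _ in range(burn):            # discard transient
        x = 4.0 * x * (1.0 - x)
    bits = np.empty(n, dtype=int)
    for i in range(n):
        x = 4.0 * x * (1.0 - x)
        bits[i] = int(x >= 0.5)
    return bits

def autocorr(b, lag):
    """Sample autocorrelation of a binary sequence at a given lag."""
    c = b - b.mean()
    return np.dot(c[:-lag], c[lag:]) / np.dot(c, c)

bits = logistic_binary(0.1234, 20000)
print(bits.mean(), autocorr(bits, 1))   # near 0.5 and near 0.0
```

Asymmetric thresholds, or other members of the general logistic family, break this independence and produce the non-Markovian nonlinear dependence the paper analyses.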

