Decoding of human hand actions to handle missing limbs in Neuroprosthetics

2015 ◽  
Author(s):  
Jovana Belic ◽  
Aldo Faisal

The only way we can interact with the world is through movement, and our primary interactions are via the hands, so any loss of hand function has an immediate impact on our quality of life. However, to date it has not been systematically assessed how coordination of the hand's joints affects everyday actions. This is important for two fundamental reasons: firstly, to understand the representations and computations underlying motor control in "in-the-wild" situations, and secondly, to develop smarter controllers for prosthetic hands that have the same functionality as natural limbs. In this work we exploit the correlation structure of our hand and finger movements in daily life. The novelty of our approach is that instead of averaging variability out, we take the view that the structure of variability may contain valuable information about the task being performed. We asked seven subjects to interact in 17 daily-life situations and quantified behaviour in a principled manner using CyberGlove body sensor networks that, after accurate calibration, track all major joints of the hand. Our key findings are: 1. We confirmed that hand control in daily-life tasks is very low-dimensional, with four to five dimensions being sufficient to explain 80-90% of the variability in the natural movement data. 2. We established a universally applicable measure of manipulative complexity that allowed us to measure and compare limb movements across tasks. We used Bayesian latent variable models to model the low-dimensional structure of finger joint angles in natural actions. 3. This allowed us to build a naive classifier that, within the first 1000 ms of action initiation (from a flat-hand start configuration), predicted which of the 17 actions was going to be executed, enabling us to reliably predict the action intention from very short-time-scale initial data and revealing the predictable nature of hand movements for neuroprosthetic control and teleoperation purposes. 4. Using the Expectation-Maximization algorithm on our latent variable model permitted us to reconstruct with high accuracy (below 5-6° mean absolute error) the movement trajectory of missing fingers by simply tracking the remaining fingers. Overall, our results support the hypothesis that specific hand actions are orchestrated by the brain in such a way that in the natural tasks of daily life there is sufficient redundancy and predictability to be directly exploitable for neuroprosthetics.
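The low-dimensionality finding (four to five dimensions explaining 80-90% of movement variability) can be illustrated with ordinary PCA. The sketch below uses synthetic "joint angle" data built from a few latent synergies; it is our toy illustration, not the authors' dataset or pipeline.

```python
import numpy as np

# Toy sketch: synthetic joint-angle recordings whose variance is dominated by
# a few latent "synergies", mimicking the finding that 4-5 dimensions explain
# 80-90% of natural hand-movement variability.
rng = np.random.default_rng(0)
n_samples, n_joints, n_synergies = 500, 20, 4

latent = rng.normal(size=(n_samples, n_synergies))   # low-dim synergies
mixing = rng.normal(size=(n_synergies, n_joints))    # joint couplings
angles = latent @ mixing + 0.1 * rng.normal(size=(n_samples, n_joints))

# PCA via SVD on centered data: cumulative explained-variance ratio.
centered = angles - angles.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = np.cumsum(s**2) / np.sum(s**2)

print(f"variance explained by 4 PCs: {explained[3]:.3f}")
```

Because the synthetic data really do have four synergies, the first four principal components capture nearly all of the variance; on real movement data the paper reports the softer 80-90% figure.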

2020 ◽  
Author(s):  
Archit Verma ◽  
Barbara Engelhardt

Joint analysis of multiple single cell RNA-sequencing (scRNA-seq) data is confounded by technical batch effects across experiments, biological or environmental variability across cells, and different capture processes across sequencing platforms. Manifold alignment is a principled, effective tool for integrating multiple data sets and controlling for confounding factors. We demonstrate that the semi-supervised t-distributed Gaussian process latent variable model (sstGPLVM), which projects the data onto a mixture of fixed and latent dimensions, can learn a unified low-dimensional embedding for multiple single cell experiments with minimal assumptions. We show the efficacy of the model as compared with state-of-the-art methods for single cell data integration on simulated data, pancreas cells from four sequencing technologies, induced pluripotent stem cells from male and female donors, and mouse brain cells from both spatial seqFISH+ and traditional scRNA-seq. Code and data are available at https://github.com/architverma1/sc-manifold-alignment
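The core idea of mixing fixed dimensions (known covariates such as batch) with learned latent dimensions can be sketched with a deliberately simplified linear analogue; the following is our illustration only, not the sstGPLVM itself. Known batch labels are regressed out as a fixed design, and the residuals are embedded with PCA so the latent dimensions are not dominated by the batch effect.

```python
import numpy as np

# Illustrative linear analogue of fixed + latent dimensions (not the sstGPLVM).
rng = np.random.default_rng(1)
n_cells, n_genes = 200, 50
batch = rng.integers(0, 2, size=n_cells)             # two "experiments"
biology = rng.normal(size=(n_cells, 2)) @ rng.normal(size=(2, n_genes))
expr = biology + 3.0 * batch[:, None]                # strong batch shift

# Fixed dimension: intercept + one-hot batch covariate, regressed out.
design = np.column_stack([np.ones(n_cells), batch])
coef, *_ = np.linalg.lstsq(design, expr, rcond=None)
residual = expr - design @ coef

# Latent dimensions: PCA of the batch-corrected residuals.
u, s, _ = np.linalg.svd(residual - residual.mean(axis=0), full_matrices=False)
embedding = u[:, :2] * s[:2]
print(embedding.shape)
```

In the actual model the mapping from the joint fixed-plus-latent space to expression is a Gaussian process with heavy-tailed noise rather than a linear regression, but the division of labour between known and inferred dimensions is the same.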


2020 ◽  
Author(s):  
Aditya Arie Nugraha ◽  
Kouhei Sekiguchi ◽  
Kazuyoshi Yoshii

This paper describes a deep latent variable model of speech power spectrograms and its application to semi-supervised speech enhancement with a deep speech prior. By integrating two major deep generative models, a variational autoencoder (VAE) and a normalizing flow (NF), in a mutually beneficial manner, we formulate a flexible latent variable model called the NF-VAE that can extract low-dimensional latent representations from high-dimensional observations, akin to the VAE, and does not need to explicitly represent the distribution of the observations, akin to the NF. In this paper, we consider a variant of NF called the generative flow (GF, a.k.a. Glow) and formulate a latent variable model called the GF-VAE. We experimentally show that the proposed GF-VAE is better than the standard VAE at capturing fine-structured harmonics of speech spectrograms, especially in the high-frequency range. A similar finding is also obtained when the GF-VAE and the VAE are used to generate speech spectrograms from latent variables randomly sampled from the standard Gaussian distribution. Lastly, when these models are used as speech priors for statistical multichannel speech enhancement, the GF-VAE outperforms the VAE and the GF.
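The normalizing-flow ingredient can be sketched in a few lines. The minimal example below (our illustration, not the paper's GF-VAE) shows an affine coupling layer, the building block of Glow-style flows: half of the variables are transformed conditioned on the other half, the map is exactly invertible, and the Jacobian log-determinant reduces to a sum of log-scales, which is what makes exact likelihoods tractable.

```python
import numpy as np

# Minimal affine coupling layer (our sketch). scale_fn/shift_fn stand in for
# the learned networks of a real flow; here they are fixed toy functions.
def affine_coupling(x, scale_fn, shift_fn):
    x1, x2 = np.split(x, 2, axis=-1)
    log_s = scale_fn(x1)
    y2 = x2 * np.exp(log_s) + shift_fn(x1)       # transform second half
    y = np.concatenate([x1, y2], axis=-1)        # first half passes through
    log_det = log_s.sum(axis=-1)                 # exact Jacobian log-det
    return y, log_det

rng = np.random.default_rng(2)
x = rng.normal(size=(4, 6))
y, log_det = affine_coupling(x, scale_fn=np.tanh, shift_fn=np.sin)
print(y.shape, log_det.shape)
```

Because the first half passes through unchanged, the inverse is obtained by subtracting the shift and dividing by the scale, so densities can be evaluated exactly via the change-of-variables formula.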


2020 ◽  
Vol 117 (27) ◽  
pp. 15403-15408
Author(s):  
Lawrence K. Saul

We propose a latent variable model to discover faithful low-dimensional representations of high-dimensional data. The model computes a low-dimensional embedding that aims to preserve neighborhood relationships encoded by a sparse graph. The model both leverages and extends current leading approaches to this problem. Like t-distributed Stochastic Neighborhood Embedding, the model can produce two- and three-dimensional embeddings for visualization, but it can also learn higher-dimensional embeddings for other uses. Like LargeVis and Uniform Manifold Approximation and Projection, the model produces embeddings by balancing two goals—pulling nearby examples closer together and pushing distant examples further apart. Unlike these approaches, however, the latent variables in our model provide additional structure that can be exploited for learning. We derive an Expectation–Maximization procedure with closed-form updates that monotonically improve the model’s likelihood: In this procedure, embeddings are iteratively adapted by solving sparse, diagonally dominant systems of linear equations that arise from a discrete graph Laplacian. For large problems, we also develop an approximate coarse-graining procedure that avoids the need for negative sampling of nonadjacent nodes in the graph. We demonstrate the model’s effectiveness on datasets of images and text.
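The linear-algebra core described above, embeddings updated by solving sparse, diagonally dominant systems built from a graph Laplacian, can be sketched on a toy graph. This is our illustration of that kind of solve, not the paper's exact EM update.

```python
import numpy as np

# Toy graph: build its (dense, for clarity) Laplacian L = D - A.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4
L = np.zeros((n, n))
for i, j in edges:
    L[i, i] += 1.0
    L[j, j] += 1.0
    L[i, j] -= 1.0
    L[j, i] -= 1.0

# A regularized system (L + lam*I) y = b is symmetric positive definite and
# diagonally dominant, so it is cheap and stable to solve; the Laplacian term
# pulls each node's coordinate toward the average of its neighbors.
lam = 0.1
b = np.array([1.0, -1.0, 1.0, -1.0])
y = np.linalg.solve(L + lam * np.eye(n), b)
print(np.round(y, 3))
```

At the scale of real embedding problems the same system would be stored sparsely and solved iteratively, which is where the diagonal dominance pays off.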


2018 ◽  
Author(s):  
Archit Verma ◽  
Barbara E. Engelhardt

Modern developments in single cell sequencing technologies enable broad insights into cellular state. Single cell RNA sequencing (scRNA-seq) can be used to explore cell types, states, and developmental trajectories to broaden understanding of cell heterogeneity in tissues and organs. Analysis of these sparse, high-dimensional experimental results requires dimension reduction. Several methods have been developed to estimate low-dimensional embeddings for filtered and normalized single cell data. However, methods have yet to be developed for unfiltered and unnormalized count data. We present a nonlinear latent variable model with robust, heavy-tailed error and adaptive kernel learning to estimate low-dimensional nonlinear structure in scRNA-seq data. Gene expression in a single cell is modeled as a noisy draw from a Gaussian process in high dimensions from low-dimensional latent positions. This model is called the Gaussian process latent variable model (GPLVM). We model residual errors with a heavy-tailed Student's t-distribution to estimate a manifold that is robust to technical and biological noise. We compare our approach to common dimension reduction tools to highlight our model's ability to enable important downstream tasks, including clustering and inferring cell developmental trajectories, on available experimental data. We show that our robust nonlinear manifold is well suited for raw, unfiltered gene counts from high-throughput sequencing technologies for visualization and exploration of cell states.
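Why a Student's t error model buys robustness can be shown directly from the likelihoods. In the sketch below (ours, not the authors' code) the Gaussian negative log-likelihood grows quadratically in the residual, while the Student-t one grows only logarithmically, so a single outlying count perturbs a t-based fit far less.

```python
from math import lgamma, log, pi

# Negative log-likelihood of a residual r under a unit-scale Gaussian.
def gauss_nll(r, sigma=1.0):
    return 0.5 * (r / sigma) ** 2 + 0.5 * log(2 * pi * sigma**2)

# Negative log-likelihood under a Student's t with df degrees of freedom:
# quadratic near zero, but only logarithmic in |r| for large residuals.
def student_t_nll(r, df=3.0, sigma=1.0):
    c = lgamma((df + 1) / 2) - lgamma(df / 2) - 0.5 * log(df * pi * sigma**2)
    return -c + (df + 1) / 2 * log(1 + (r / sigma) ** 2 / df)

for r in (1.0, 10.0):
    print(f"r={r:4.1f}  gaussian={gauss_nll(r):7.2f}  student-t={student_t_nll(r):7.2f}")
```

For a residual of 10 standard deviations the Gaussian cost exceeds 50 nats while the t cost stays below 10, which is the mechanism that keeps the estimated manifold from chasing technical noise.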


Author(s):  
Sanjeev Arora ◽  
Yuanzhi Li ◽  
Yingyu Liang ◽  
Tengyu Ma ◽  
Andrej Risteski

Semantic word embeddings represent the meaning of a word via a vector, and are created by diverse methods. Many use nonlinear operations on co-occurrence statistics, and have hand-tuned hyperparameters and reweighting methods. This paper proposes a new generative model, a dynamic version of the log-linear topic model of Mnih and Hinton (2007). The methodological novelty is to use the prior to compute closed form expressions for word statistics. This provides a theoretical justification for nonlinear models like PMI, word2vec, and GloVe, as well as some hyperparameter choices. It also helps explain why low-dimensional semantic embeddings contain linear algebraic structure that allows solution of word analogies, as shown by Mikolov et al. (2013a) and many subsequent papers. Experimental support is provided for the generative model assumptions, the most important of which is that latent word vectors are fairly uniformly dispersed in space.
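The linear algebraic structure the model explains, that analogies are solvable by vector arithmetic, can be demonstrated with a toy vocabulary. The vectors below are hand-invented for illustration, not trained embeddings.

```python
import numpy as np

# Hand-built toy vectors chosen so that gender and royalty are linear
# directions; real embeddings acquire this structure from co-occurrence data.
vocab = {
    "man":   np.array([1.0, 0.0, 0.0]),
    "woman": np.array([0.0, 1.0, 0.0]),
    "king":  np.array([1.0, 0.0, 1.0]),
    "queen": np.array([0.0, 1.0, 1.0]),
    "apple": np.array([0.5, 0.5, -1.0]),
}

def nearest(v, exclude):
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in vocab if w not in exclude),
               key=lambda w: cos(vocab[w], v))

# king - man + woman should land nearest to queen.
query = vocab["king"] - vocab["man"] + vocab["woman"]
print(nearest(query, exclude={"king", "man", "woman"}))
```

The paper's contribution is explaining *why* trained low-dimensional embeddings admit such linear offsets; the arithmetic itself is the standard analogy test of Mikolov et al. (2013a).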


2019 ◽  
Author(s):  
Zita Oravecz ◽  
Joachim Vandekerckhove

The Extended Condorcet Model allows us to explore interindividual consensus concerning culturally held knowledge. At the same time, it enables a process-level description of interindividual differences in the knowledge a person has of the consensus, their willingness to guess in the absence of knowledge, and their bias in guessing. These person-specific characteristics potentially have an influence on one's everyday life experiences. Here, we develop a cognitive latent variable model in which dynamic process parameters from intensive longitudinal daily life data are systematically linked to parameters of the Extended Condorcet Model. We apply this joint model of consensus and longitudinal dynamics to study whether subjective beliefs on what makes people feel loved are linked to daily life experiences of love.
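The generative intuition behind the Condorcet-style model, that each person answers from knowledge with some probability and otherwise guesses with a personal bias, can be simulated in a few lines. This is our toy sketch of that response process, not the authors' Extended Condorcet Model or its Bayesian estimation.

```python
import numpy as np

# Simulate binary responses: each person knows the latent consensus answer
# with person-specific probability ("knowledge"); otherwise they guess with a
# personal bias toward answering 1.
rng = np.random.default_rng(3)
n_people, n_items = 30, 40
truth = rng.integers(0, 2, size=n_items)            # latent consensus answers
knowledge = rng.uniform(0.5, 0.95, size=n_people)   # competence per person
guess_bias = rng.uniform(0.3, 0.7, size=n_people)   # bias toward "1" when guessing

knows = rng.random((n_people, n_items)) < knowledge[:, None]
guesses = (rng.random((n_people, n_items)) < guess_bias[:, None]).astype(int)
responses = np.where(knows, truth[None, :], guesses)

# Even simple majority vote recovers most of the latent consensus; the actual
# model infers consensus and person parameters jointly.
estimate = (responses.mean(axis=0) > 0.5).astype(int)
print("consensus recovery:", (estimate == truth).mean())
```

The full model replaces the majority vote with joint inference over consensus answers, knowledge, guessing willingness, and bias, which is what allows those person-level parameters to be linked to daily-life dynamics.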


2020 ◽  
Vol 2020 ◽  
pp. 1-8
Author(s):  
Xin Jin ◽  
Jia Guo ◽  
Zhong Li ◽  
Ruihao Wang

With the development of powered exoskeletons in recent years, one important limitation is their capability to collaborate with humans. Human-machine interaction requires the exoskeleton to accurately predict the human motion of the upcoming movement. Many recent works apply neural network algorithms such as recurrent neural networks (RNNs) to motion prediction. However, these remain insufficient in efficiency and accuracy. In this paper, a Gaussian process latent variable model (GPLVM) is employed to transform high-dimensional data into low-dimensional data. Combined with a nonlinear autoregressive (NAR) neural network, the GPLVM-NAR method is proposed to predict human motions. Experiments with volunteers wearing a powered exoskeleton and performing different types of motion are conducted. Results validate that the proposed method can forecast future human motion with a relative error of 2%-5% and an average calculation time of 120 s-155 s, depending on the type of motion.
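The second stage of such a pipeline, forecasting a low-dimensional trajectory autoregressively, can be sketched under simplifying assumptions. Below, a toy sinusoid stands in for the GPLVM's latent trajectory, and a linear least-squares AR model stands in for the NAR network; neither is the paper's actual implementation.

```python
import numpy as np

# Toy 1-D "latent trajectory" standing in for GPLVM output.
t = np.arange(200)
z = np.sin(0.1 * t)

# Fit an autoregressive predictor of order p: z[k+p] from the p previous
# samples, via least squares (a linear stand-in for the NAR network).
p = 4
X = np.column_stack([z[i:len(z) - p + i] for i in range(p)])
y = z[p:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# One-step-ahead forecast from the last p observed samples.
pred = z[-p:] @ coef
print("forecast error:", abs(pred - np.sin(0.1 * 200)))
```

A sinusoid satisfies an exact linear recurrence, so the forecast here is essentially perfect; real motion data are noisier, which is why the paper pairs the autoregression with a learned nonlinear model and reports 2%-5% relative error.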

