MEG Source Localization Via Deep Learning

Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4278
Author(s):  
Dimitrios Pantazis ◽  
Amir Adler

We present a deep learning solution to the problem of localization of magnetoencephalography (MEG) brain signals. The proposed deep model architectures are tuned to single and multiple time point MEG data, and can estimate varying numbers of dipole sources. Results from simulated MEG data on the cortical surface of a real human subject demonstrated improvements over the popular RAP-MUSIC localization algorithm in specific scenarios with varying SNR levels, inter-source correlation values, and numbers of sources. Importantly, the deep learning models were robust to forward model errors resulting from head translation and rotation, and reduced computation time to a fraction of a millisecond, paving the way to real-time MEG source localization.
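The constant-time inference that makes sub-millisecond localization possible can be sketched as a single forward pass through a small regressor; the layer sizes and the 306-sensor layout below are illustrative assumptions, not the authors' architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

n_sensors, n_hidden = 306, 128  # 306-channel sensor array is an assumption

# Toy single-time-point MEG measurement (one sensor snapshot).
b = rng.standard_normal(n_sensors)

# Randomly initialised two-layer regressor mapping sensor data to one
# dipole's (x, y, z) position; a trained model would have learned weights,
# this only illustrates the fixed-cost inference path that replaces
# iterative scanning algorithms such as RAP-MUSIC.
W1 = rng.standard_normal((n_hidden, n_sensors)) * 0.01
W2 = rng.standard_normal((3, n_hidden)) * 0.01

def localize(b):
    h = np.maximum(0.0, W1 @ b)  # ReLU hidden layer
    return W2 @ h                # predicted dipole position (x, y, z)

pos = localize(b)
print(pos.shape)  # (3,)
```

Because inference is a fixed sequence of matrix products, its cost is independent of the source-space grid size, which is what enables the sub-millisecond timing.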

2021 ◽  
Author(s):  
Abhishek S. Bhutada ◽  
Chang Cai ◽  
Danielle Mizuiri ◽  
Anne Findlay ◽  
Jessie Chen ◽  
...  

Magnetoencephalography (MEG) is a robust method for non-invasive functional brain mapping of sensory cortices due to its exceptional spatial and temporal resolution. The clinical standard for MEG source localization of functional landmarks from sensory evoked responses is the equivalent current dipole (ECD) localization algorithm, known to be sensitive to initialization, noise, and the manual choice of the number of dipoles. Recently, many automated and robust algorithms have been developed, including the Champagne algorithm, an empirical Bayesian algorithm with powerful abilities for MEG source reconstruction and time course estimation (Wipf et al. 2010; Owen et al. 2012). Here, we evaluate automated Champagne performance in a clinical population of tumor patients where there was minimal failure in localizing sensory evoked responses using the clinical standard ECD localization algorithm. MEG data of auditory and somatosensory evoked potentials from 21 brain tumor patients were analyzed using Champagne, and the results were compared with ECD fits. Across both somatosensory and auditory evoked field localization, we found strong agreement between Champagne and ECD localizations in all cases. Given an 8 mm voxel resolution, Champagne peak source localizations were within 10 mm of the ECD peak localizations. The Champagne algorithm provides a robust and automated alternative to manual ECD fits for clinical localization of sensory evoked potentials and can contribute to improved clinical MEG data processing workflows.
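The agreement criterion reported above reduces to a Euclidean distance check between peak coordinates; the coordinates in this sketch are hypothetical, and only the 10 mm threshold comes from the abstract:

```python
import numpy as np

# Hypothetical peak source coordinates (mm) for one subject; the study's
# criterion is that Champagne peaks fall within 10 mm of the ECD peak
# at an 8 mm voxel resolution.
ecd_peak = np.array([42.0, -18.0, 55.0])
champagne_peak = np.array([46.0, -14.0, 51.0])

dist = np.linalg.norm(champagne_peak - ecd_peak)
print(f"peak separation: {dist:.1f} mm")
assert dist < 10.0  # agreement criterion from the abstract
```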


2021 ◽  
Vol 10 (1) ◽  
pp. 18
Author(s):  
Quentin Cabanes ◽  
Benaoumeur Senouci ◽  
Amar Ramdane-Cherif

Cyber-Physical Systems (CPSs) are a mature research topic that deals with Artificial Intelligence (AI) and Embedded Systems (ES). They interact with the physical world via sensors/actuators to solve problems in several applications (robotics, transportation, health, etc.). These CPSs deal with data analysis, which needs powerful algorithms combined with robust hardware architectures. On one hand, Deep Learning (DL) is proposed as the main solution algorithm. On the other hand, the standard design and prototyping methodologies for ES are not adapted to modern DL-based CPSs. In this paper, we investigate AI design for CPSs around embedded DL. The main contribution of this work is threefold: (1) We define an embedded DL methodology based on a Multi-CPU/FPGA platform. (2) We propose a new hardware design architecture of a Neural Network Processor (NNP) for DL algorithms. The computation time of a feed-forward sequence is estimated at 23 ns per parameter. (3) We validate the proposed methodology and the DL-based NNP using a smart LIDAR application use case. The input of our NNP is a voxel grid computed in hardware from a 3D point cloud. Finally, the results show that our NNP is able to process Dense Neural Network (DNN) architectures without bias.
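The reported 23 ns-per-parameter estimate makes inference latency a simple function of network size; the bias-free layer sizes below are illustrative assumptions, not taken from the paper:

```python
# Back-of-the-envelope latency for a feed-forward pass on the NNP, using
# the reported estimate of 23 ns per parameter.
NS_PER_PARAM = 23

# Hypothetical bias-free dense network (the NNP processes DNNs without
# bias), so the parameter count is just the sum of weight-matrix sizes.
layers = [2048, 512, 128, 8]
params = sum(a * b for a, b in zip(layers, layers[1:]))

latency_us = params * NS_PER_PARAM / 1e3
print(f"{params} parameters -> ~{latency_us:.0f} us per inference")
```

At roughly a million parameters this illustrative network would take on the order of tens of milliseconds per pass, which shows why the per-parameter constant matters for sizing DNNs to real-time LIDAR budgets.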


2019 ◽  
Vol 2019 ◽  
pp. 1-8
Author(s):  
Jiaqi Song ◽  
Haihong Tao

Noncircular signals are widely used in radar, sonar, and wireless communication array systems, as they can offer more accurate estimates and allow more sources to be detected. In this paper, noncircular signals are employed to improve source localization accuracy and identifiability. First, an extended real-valued covariance matrix is constructed to transform complex-valued computation into real-valued computation. Based on the properties of noncircular signals and a symmetric uniform linear array (SULA) consisting of dual-polarization sensors, the array steering vectors can be separated into the source position parameters and the nuisance parameter. Therefore, rank reduction (RARE) estimators are adopted to estimate the source localization parameters in sequence. By utilizing the polarization information of the sources and real-valued computation, the maximum number of resolvable sources, estimation accuracy, and resolution can all be improved. Numerical simulations demonstrate that the proposed method outperforms existing methods in both resolution and estimation accuracy.
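The signal property the method builds on can be checked numerically: a strictly noncircular source such as BPSK has a nonzero pseudo-covariance E[x xᵀ] (no conjugate), while a circular source such as QPSK does not; this extra statistic is what extended-covariance constructions exploit. A minimal sketch, with the simulated symbol models as assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

# BPSK symbols are real-valued (+/-1): strictly noncircular.
bpsk = rng.choice([-1.0, 1.0], size=N).astype(complex)
# QPSK symbols are circularly symmetric: the pseudo-covariance vanishes.
qpsk = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=N) / np.sqrt(2)

def pseudo_cov(x):
    """Sample pseudo-covariance E[x x^T] (note: no conjugate)."""
    return np.mean(x * x)

print(abs(pseudo_cov(bpsk)))  # ~1.0 (noncircular: extra information available)
print(abs(pseudo_cov(qpsk)))  # ~0.0 (circular: no extra information)
```

Stacking each snapshot with its conjugate, [x; x*], folds this extra statistic into an extended covariance matrix, which is the first step described in the abstract.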


Energies ◽  
2020 ◽  
Vol 13 (14) ◽  
pp. 3517 ◽  
Author(s):  
Anh Ngoc-Lan Huynh ◽  
Ravinesh C. Deo ◽  
Duc-Anh An-Vo ◽  
Mumtaz Ali ◽  
Nawin Raj ◽  
...  

This paper aims to develop a long short-term memory (LSTM) network modelling strategy based on deep learning principles, tailored for very short-term, near-real-time global solar radiation (GSR) forecasting. To build the prescribed LSTM model, the partial autocorrelation function is applied to a high-resolution, 1 min scaled solar radiation dataset to generate statistically significant lagged predictor variables describing the antecedent behaviour of GSR. The LSTM algorithm is adopted to capture the short- and long-term dependencies within the GSR data series patterns and accurately predict future GSR at 1, 5, 10, 15, and 30 min forecasting horizons. This objective model is benchmarked at a solar-energy-resource-rich study site (Bac-Ninh, Vietnam) against competing counterpart methods employing other deep learning models, a statistical model, a single-hidden-layer model, and a machine learning-based model. The LSTM model generates satisfactory predictions at multiple time-step horizons, achieving a correlation coefficient exceeding 0.90 and outperforming all of the counterparts. In accordance with robust statistical metrics and visual analysis of all tested data, the study ascertains the practicality of the proposed LSTM approach for generating reliable GSR forecasts. The Diebold–Mariano statistical test also shows that LSTM outperforms the counterparts in most cases. The study confirms the practical utility of LSTM in renewable energy studies, and broadly in energy-monitoring devices tailored for other energy variables (e.g., hydro and wind energy).
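Turning PACF-selected lags into antecedent predictor variables amounts to building a lagged design matrix; a minimal sketch, where the choice of significant lags is assumed to happen upstream:

```python
import numpy as np

def lagged_matrix(series, lags):
    """Build antecedent-GSR predictors for a forecasting model.

    `lags` would come from the statistically significant spikes of the
    partial autocorrelation function (assumed chosen upstream). Returns
    (X, y) where row i of X holds series[t - lag] for each lag and y[i]
    is the value to forecast at time t.
    """
    series = np.asarray(series, dtype=float)
    start = max(lags)
    X = np.column_stack(
        [series[start - lag:len(series) - lag] for lag in lags]
    )
    y = series[start:]
    return X, y

# Toy 1-min GSR samples, with lags 1, 2, and 5 taken as significant.
X, y = lagged_matrix(np.arange(20.0), lags=[1, 2, 5])
print(X.shape, y.shape)  # (15, 3) (15,)
```

Each row of X is then one input step fed to the LSTM (or any other forecaster), with y as the target at the chosen horizon.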


2016 ◽  
Vol 14 (1) ◽  
pp. 172988141769231 ◽  
Author(s):  
Yingfeng Cai ◽  
Youguo He ◽  
Hai Wang ◽  
Xiaoqiang Sun ◽  
Long Chen ◽  
...  

The emergence and development of deep learning theory in the machine learning field provide a new method for vision-based pedestrian recognition technology. To achieve better performance in this application, an improved weakly supervised hierarchical deep learning pedestrian recognition algorithm based on two-dimensional deep belief networks is proposed. The improvements take into consideration the weaknesses of the structure and training methods of existing classifiers. First, the traditional one-dimensional deep belief network is expanded to two dimensions, which allows image matrices to be loaded directly and preserves more information of the sample space. Then, a discrimination regularization term with a small weight is added to the traditional unsupervised training objective function. This modification transforms the original unsupervised training into weakly supervised training, which gives the extracted features discriminative ability. Multiple sets of comparative experiments show that the proposed algorithm achieves a better recognition rate than other deep learning algorithms and outperforms most existing state-of-the-art methods on a non-occlusion pedestrian data set, while performing fairly on weakly and heavily occluded data sets.
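The weakly supervised objective described above is an unsupervised reconstruction loss plus a small-weight discrimination term; the exact form of both terms in this sketch is an assumption, and only the structure (unsupervised loss + small lam × discriminative loss) follows the text:

```python
import numpy as np

def weakly_supervised_loss(x, x_recon, logits, labels, lam=0.01):
    """Reconstruction error plus a small-weight discrimination term.

    `lam` plays the role of the paper's 'small weight'; the squared-error
    reconstruction and softmax cross-entropy forms are assumptions.
    """
    recon = np.mean((x - x_recon) ** 2)
    # Softmax cross-entropy as the discrimination regularizer.
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    disc = -np.mean(log_probs[np.arange(len(labels)), labels])
    return recon + lam * disc

# Perfect reconstruction, uninformative logits: only the small
# discrimination term contributes.
x = np.ones((4, 8)); x_recon = np.ones((4, 8))
logits = np.zeros((4, 2)); labels = np.array([0, 1, 0, 1])
print(weakly_supervised_loss(x, x_recon, logits, labels))
```

Because lam is small, training remains dominated by the unsupervised term, which is what keeps the scheme "weakly" rather than fully supervised.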


2011 ◽  
Vol 317-319 ◽  
pp. 1078-1083 ◽  
Author(s):  
Qing Tao Lin ◽  
Xiang Bing Zeng ◽  
Xiao Feng Jiang ◽  
Xin Yu Jin

This paper establishes a 3-D localization model and, based on this model, proposes a collaborative localization framework. In this framework, a node that observes the object sends its attitude information and the relative position of the object's projection in its camera to the cluster head. The cluster head adopts an algorithm proposed in this paper to select nodes to participate in localization. The localization algorithm is based on the least squares method. Because the localization framework is based on a 3-D model, the size of the object and other prerequisites are not necessary. At the end of this paper, a simulation is conducted on the number of nodes selected for localization and the resulting accuracy. The results imply that selecting 3 to 4 nodes is appropriate. The theoretical analysis and the simulation results also imply that the framework pays a constant computation time cost while achieving high localization accuracy (a 0.01 m error in our simulation environment).
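The least-squares step can be sketched as finding the point closest to all observation rays, each ray formed from a node's position (via its attitude information) and its viewing direction toward the object's projection; this mirrors the framework's least-squares formulation in spirit, with the details as assumptions:

```python
import numpy as np

def localize_3d(origins, directions):
    """Least-squares intersection of camera observation rays in 3D.

    Minimising the summed squared distance from a point p to every ray
    (o_i, d_i) yields the linear system A p = b with
    A = sum(I - d_i d_i^T) and b = sum((I - d_i d_i^T) o_i).
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projector orthogonal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Three nodes observing an object at (1, 2, 3); with exact rays the
# solution recovers the object position.
target = np.array([1.0, 2.0, 3.0])
origins = [np.array([0.0, 0.0, 0.0]),
           np.array([5.0, 0.0, 0.0]),
           np.array([0.0, 5.0, 5.0])]
directions = [target - o for o in origins]
print(localize_3d(origins, directions))  # ≈ [1. 2. 3.]
```

Note the solve is over a fixed 3×3 system regardless of how many nodes report, consistent with the constant computation time the paper claims.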

