Joint Sparse Representation of Brain Activity Patterns Related to Perceptual and Cognitive Components of a Speech Comprehension Task

Author(s):  
Mahdi Ramezani ◽  
Purang Abolmaesumi ◽  
Kris Marble ◽  
H. MacDonald ◽  
Ingrid Johnsrude
2015 ◽  
Vol 34 (1) ◽  
pp. 2-12 ◽  
Author(s):  
M. Ramezani ◽  
K. Marble ◽  
H. Trang ◽  
I. S. Johnsrude ◽  
P. Abolmaesumi

2020 ◽  
Vol 6 (2) ◽  
pp. 71-83
Author(s):  
Mohammad Raouf ◽  
Somayeh Raiesdana
Background: Spatial learning and navigation is a fundamental cognitive ability comprising multiple cognitive components. Despite intensive efforts using virtual reality technology and functional Magnetic Resonance Imaging (fMRI), the effect of music on this form of cognition and the neuronal mechanisms involved remain elusive. Objectives: We aimed to investigate the effect of familiarity with music on human spatial learning performance in a goal-directed virtual-navigation task combined with an fMRI study. Materials and Methods: Healthy adult participants navigated, using fMRI-compatible equipment, through a 3D virtual maze developed with the MazeSuite application; their task was to learn the environment and find the positions of hidden objects. The fMRI data were acquired, processed, and analyzed to map brain activity and to identify differences in Blood Oxygen Level Dependent (BOLD) activity between the research groups during the searching and finding phases. Both behavioral and image analyses were performed, and three T-contrasts were defined to compare activity patterns between the study groups. A Mozart sonata was selected as the music stimulus owing to its known facilitating effect on cognition. Results: Participants who heard the music prior to the test performed better: they navigated faster and committed fewer errors. Activation of regions related to spatial cognition, such as the parahippocampal gyrus, was observed during the searching phase, while activation of the cerebellum, superior temporal, and marginal gyrus, more probably related to music processing, was observed during the finding phase. Conclusion: The active regions found in this work indicate an interplay between the neural substrates underlying spatial-temporal tasks and music processing.
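The T-contrasts mentioned in the abstract compare condition-specific activation estimates from a general linear model (GLM) of the BOLD signal. A minimal sketch of how a contrast t-statistic is computed for one voxel; the design matrix, data, and contrast vector here are illustrative stand-ins, not the study's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative GLM for one voxel: y = X @ beta + noise.
# X has two condition regressors (e.g. "searching" vs "finding" blocks).
n_scans = 120
X = np.zeros((n_scans, 2))
X[:60, 0] = 1.0   # condition 1 active in the first half of the run
X[60:, 1] = 1.0   # condition 2 active in the second half
beta_true = np.array([1.5, 0.5])
y = X @ beta_true + rng.normal(0.0, 1.0, n_scans)

# Ordinary least squares estimate of the condition effects.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# T-contrast c = [1, -1]: is condition 1 more active than condition 2?
c = np.array([1.0, -1.0])
residuals = y - X @ beta_hat
dof = n_scans - np.linalg.matrix_rank(X)
sigma2 = residuals @ residuals / dof              # residual variance
var_contrast = sigma2 * c @ np.linalg.pinv(X.T @ X) @ c
t_stat = (c @ beta_hat) / np.sqrt(var_contrast)
print(float(t_stat) > 0)
```

In whole-brain analysis this statistic is computed per voxel and thresholded (with multiple-comparison correction) to produce the activation maps compared between groups.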


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Meir Meshulam ◽  
Liat Hasenfratz ◽  
Hanna Hillman ◽  
Yun-Fei Liu ◽  
Mai Nguyen ◽  
...  

Despite major advances in measuring human brain activity during and after educational experiences, it is unclear how learners internalize new content, especially in real-life and online settings. In this work, we introduce a neural approach to predicting and assessing learning outcomes in a real-life setting. Our approach hinges on the idea that successful learning involves forming the right set of neural representations, which are captured in canonical activity patterns shared across individuals. Specifically, we hypothesized that learning is mirrored in neural alignment: the degree to which an individual learner’s neural representations match those of experts, as well as those of other learners. We tested this hypothesis in a longitudinal functional MRI study that regularly scanned college students enrolled in an introduction to computer science course. We additionally scanned graduate student experts in computer science. We show that alignment among students successfully predicts overall performance in a final exam. Furthermore, within individual students, we find better learning outcomes for concepts that evoke better alignment with experts and with other students, revealing neural patterns associated with specific learned concepts in individuals.
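The neural-alignment measure described here amounts to quantifying how similar a learner's activity pattern is to a reference pattern, such as the expert average. A minimal sketch under that assumption, using Pearson correlation on synthetic patterns (all shapes, noise levels, and names are illustrative, not the study's data or exact pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative response patterns over 200 "voxels" for a single concept.
canonical = rng.normal(size=200)                       # shared signal
experts = canonical + rng.normal(0, 0.3, size=(5, 200))  # 5 expert patterns
learner = canonical + rng.normal(0, 0.8, size=200)       # noisier learner

def alignment(pattern, reference):
    """Pearson correlation between one pattern and a reference pattern."""
    return float(np.corrcoef(pattern, reference)[0, 1])

# Learner-expert alignment: correlate the learner with the expert mean.
expert_mean = experts.mean(axis=0)
score = alignment(learner, expert_mean)
print(0.0 < score <= 1.0)
```

Student-student alignment would follow the same scheme with each learner correlated against the average of the remaining learners, and the per-concept scores can then be related to exam performance.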


Information ◽  
2021 ◽  
Vol 12 (6) ◽  
pp. 226
Author(s):  
Lisa-Marie Vortmann ◽  
Leonid Schwenke ◽  
Felix Putze

Augmented reality is the fusion of virtual components and our real surroundings. The simultaneous visibility of generated and natural objects often requires users to direct their selective attention to a specific target that is either real or virtual. In this study, we used machine learning techniques to classify whether this attentional target was real or virtual, based on electroencephalographic (EEG) and eye-tracking data collected in augmented reality scenarios. A shallow convolutional neural net classified 3-second EEG data windows from 20 participants in a person-dependent manner with an average accuracy above 70% when the testing data and training data came from different trials. This accuracy could be significantly increased to 77% using a multimodal late-fusion approach that included the recorded eye-tracking data. Person-independent EEG classification was possible above chance level for 6 out of 20 participants. Thus, the reliability of such a brain–computer interface is high enough for it to be treated as a useful input mechanism for augmented reality applications.
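The multimodal late-fusion step can be sketched as combining per-window class probabilities from the two unimodal classifiers after they have each made their predictions. The probability arrays and the weighted-average rule below are illustrative assumptions, not the paper's recorded data or exact fusion scheme:

```python
import numpy as np

# Per-window class probabilities for [real, virtual] from two classifiers.
# These numbers are illustrative stand-ins for the EEG CNN and the
# eye-tracking model outputs.
p_eeg = np.array([[0.62, 0.38],
                  [0.45, 0.55],
                  [0.70, 0.30]])
p_eye = np.array([[0.40, 0.60],
                  [0.35, 0.65],
                  [0.80, 0.20]])

def late_fusion(p_a, p_b, w_a=0.5):
    """Weighted average of class probabilities, then argmax decision."""
    fused = w_a * p_a + (1.0 - w_a) * p_b
    return fused, fused.argmax(axis=1)

fused, labels = late_fusion(p_eeg, p_eye, w_a=0.6)
print(labels.tolist())  # → [0, 1, 0]
```

Late fusion keeps each modality's model independent, so the EEG network and the eye-tracking classifier can be trained and tuned separately and combined only at decision time.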


2011 ◽  
Vol 228 (2) ◽  
pp. 200-205 ◽  
Author(s):  
Naim Haddad ◽  
Rathinaswamy B. Govindan ◽  
Srinivasan Vairavan ◽  
Eric Siegel ◽  
Jessica Temple ◽  
...  

Neuroreport ◽  
2021 ◽  
Vol Publish Ahead of Print ◽  
Author(s):  
Yan Tong ◽  
Xin Huang ◽  
Chen-Xing Qi ◽  
Yin Shen
