S10H2 Neuronal and secretory activities analysed by using two-photon microscopy (State-of-the-art Techniques to Build the Coming Generation in Biophysics: Novel Approaches to Revealing Molecular Mechanisms of Cells and Proteins)

2007 ◽  
Vol 47 (supplement) ◽  
pp. S14
Author(s):  
Tomomi Nemoto
2016 ◽  
Author(s):  
Marius Pachitariu ◽  
Carsen Stringer ◽  
Mario Dipoppa ◽  
Sylvia Schröder ◽  
L. Federico Rossi ◽  
...  

Abstract: Two-photon microscopy of calcium-dependent sensors has enabled unprecedented recordings from vast populations of neurons. While the sensors and microscopes have matured over several generations of development, computational methods to process the resulting movies remain inefficient and can give results that are hard to interpret. Here we introduce Suite2p: a fast, accurate and complete pipeline that registers raw movies, detects active cells, extracts their calcium traces and infers their spike times. Suite2p runs on standard workstations, operates faster than real time, and recovers ~2 times more cells than the previous state-of-the-art method. Its low computational load allows routine detection of ~10,000 cells simultaneously with standard two-photon resonant-scanning microscopes. Recordings at this scale promise to reveal the fine structure of activity in large populations of neurons or large populations of subcellular structures such as synaptic boutons.
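The pipeline stages named in the abstract (cell detection, calcium-trace extraction, spike inference) can be illustrated with a minimal toy sketch. This is not Suite2p's actual API; the array shapes, the ROI mask, and the rise-threshold spike heuristic below are illustrative assumptions only.

```python
import numpy as np

def extract_trace(movie, mask):
    """Average fluorescence over an ROI mask for each frame.

    movie: (T, H, W) array of frames; mask: boolean (H, W) ROI.
    """
    return movie[:, mask].mean(axis=1)

def infer_spikes(trace, threshold=1.0):
    """Toy spike inference (stand-in for real deconvolution):
    flag frames where fluorescence rises faster than `threshold`."""
    rise = np.diff(trace, prepend=trace[0])
    return rise > threshold

# Toy movie: 5 frames of 4x4 pixels, with a calcium transient
# starting at frame 2 inside a hypothetical 2x2 ROI.
movie = np.zeros((5, 4, 4))
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
movie[2:, mask] = 3.0

trace = extract_trace(movie, mask)   # one value per frame
spikes = infer_spikes(trace)         # True only at the transient onset
```

In the real pipeline each stage is far more sophisticated (motion registration, activity-based ROI detection, neuropil correction, deconvolution), but the data flow is the same: movie in, per-cell traces and spike estimates out.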


2016 ◽  
Vol 54 (12) ◽  
pp. 1343-1404
Author(s):  
A Ghallab ◽  
R Reif ◽  
R Hassan ◽  
AS Seddek ◽  
JG Hengstler

Energies ◽  
2014 ◽  
Vol 7 (8) ◽  
pp. 4757-4780 ◽  
Author(s):  
Alistair McCay ◽  
Thomas Harley ◽  
Paul Younger ◽  
David Sanderson ◽  
Alan Cresswell

Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 4233
Author(s):  
Bogdan Mocanu ◽  
Ruxandra Tapu ◽  
Titus Zaharia

Emotion is a form of high-level paralinguistic information that is intrinsically conveyed by human speech. Automatic speech emotion recognition is an essential challenge for various applications, including mental disease diagnosis, audio surveillance, human behavior understanding, e-learning, and human–machine/robot interaction. In this paper, we introduce a novel speech emotion recognition method, based on the Squeeze and Excitation ResNet (SE-ResNet) model and fed with spectrogram inputs. In order to overcome the limitations of the state-of-the-art techniques, which fail to provide a robust feature representation at the utterance level, the CNN architecture is extended with a trainable discriminative GhostVLAD clustering layer that aggregates the audio features into a compact, single-utterance vector representation. In addition, an end-to-end neural embedding approach is introduced, based on an emotionally constrained triplet loss function. The loss function integrates the relations between the various emotional patterns and thus improves the latent space data representation. The proposed methodology achieves 83.35% and 64.92% global accuracy rates on the RAVDESS and CREMA-D publicly available datasets, respectively. When compared with the results provided by human observers, the gains in global accuracy exceed 24%. Finally, an objective comparative evaluation against state-of-the-art techniques demonstrates accuracy gains of more than 3%.
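The emotionally constrained triplet loss described above builds on the standard triplet formulation, which pulls an anchor embedding toward a same-class (same emotion) positive and pushes it away from a different-class negative by at least a margin. A minimal sketch of that base formulation follows; the margin value, embeddings, and function name are illustrative assumptions, not the paper's exact constrained variant.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet loss on embedding vectors:
    max(0, d(anchor, positive) - d(anchor, negative) + margin)."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

# Toy 3-D utterance embeddings: the anchor sits close to the
# positive (same emotion) and far from the negative.
anchor = np.array([1.0, 0.0, 0.0])
positive = np.array([0.9, 0.1, 0.0])
negative = np.array([0.0, 1.0, 0.0])

loss = triplet_loss(anchor, positive, negative)
```

With the triplet already well separated, the hinge clips the loss to zero; swapping the positive and negative roles yields a positive loss, which is the gradient signal that reshapes the latent space during training.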
