Growth of difference tone level [L(f2−f1)] with input level as a function of f2/f1

1979 ◽  
Vol 65 (S1) ◽  
pp. S41-S41
Author(s):  
L. E. Humes
Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1792
Author(s):  
Juan Hagad ◽  
Tsukasa Kimura ◽  
Ken-ichi Fukui ◽  
Masayuki Numao

Two of the biggest challenges in building models for detecting emotions from electroencephalography (EEG) devices are the relatively small number of labeled samples and the strong variability of signal feature distributions between subjects. In this study, we propose a context-generalized model that tackles the data constraints and subject variability simultaneously using a deep neural network architecture optimized for normally distributed, subject-independent feature embeddings. Variational autoencoders (VAEs) at the input level allow the lower feature layers of the model to be trained on both labeled and unlabeled samples, maximizing the use of the limited data resources. Meanwhile, variational regularization encourages the model to learn Gaussian-distributed feature embeddings, resulting in robustness to small dataset imbalances. Subject-adversarial regularization applied to the bi-lateral features further enforces subject independence on the final feature embedding used for emotion classification. The results from subject-independent performance experiments on the SEED and DEAP EEG-emotion datasets show that our model generalizes better across subjects than other state-of-the-art feature embeddings when paired with deep learning classifiers. Furthermore, qualitative analysis of the embedding space reveals that our proposed subject-invariant bi-lateral variational domain adversarial neural network (BiVDANN) architecture may improve subject-independent performance by discovering normally distributed features.
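As a rough illustration of the two mechanisms described above, the sketch below combines a variational (Gaussian-regularized) encoder with a subject-adversarial classifier trained through a gradient reversal layer. This is a minimal single-branch sketch, not the paper's BiVDANN implementation: the layer sizes, class counts, and names such as BiVDANNSketch are assumptions for illustration, and the bi-lateral (per-hemisphere) branches and the VAE decoder are omitted.

```python
# Minimal sketch of two ideas behind BiVDANN-style training:
# (1) a variational (Gaussian-regularized) feature embedding, and
# (2) a subject-adversarial branch trained through a gradient reversal layer.
# Layer sizes, class counts, and the single-branch layout are illustrative only.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass, negated (scaled) gradient on the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class VariationalEncoder(nn.Module):
    def __init__(self, in_dim=310, z_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, z_dim)       # mean of q(z|x)
        self.logvar = nn.Linear(128, z_dim)   # log-variance of q(z|x)

    def forward(self, x):
        h = self.backbone(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return z, mu, logvar

class BiVDANNSketch(nn.Module):
    def __init__(self, in_dim=310, z_dim=64, n_emotions=3, n_subjects=15):
        super().__init__()
        self.encoder = VariationalEncoder(in_dim, z_dim)
        self.emotion_head = nn.Linear(z_dim, n_emotions)   # task classifier
        self.subject_head = nn.Linear(z_dim, n_subjects)   # adversarial classifier

    def forward(self, x, lambd=1.0):
        z, mu, logvar = self.encoder(x)
        emotion_logits = self.emotion_head(z)
        # The subject classifier sees reversed gradients, pushing z toward subject-invariance.
        subject_logits = self.subject_head(GradReverse.apply(z, lambd))
        # KL term keeps the embedding close to a standard Gaussian (the variational regularizer).
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return emotion_logits, subject_logits, kl
```

In training, the total loss would combine the emotion cross-entropy, the subject cross-entropy (whose gradient is reversed into the encoder), the KL term, and, in a full VAE, a reconstruction loss from a decoder omitted here.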


2003 ◽  
Author(s):  
Vishal Monga ◽  
Niranjan Damera-Venkata ◽  
Brian L. Evans

2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Endre Grøvik ◽  
Darvin Yi ◽  
Michael Iv ◽  
Elizabeth Tong ◽  
Line Brennhaug Nilsen ◽  
...  

The purpose of this study was to assess the clinical value of a deep learning (DL) model for automatic detection and segmentation of brain metastases, in which a neural network is trained on four distinct MRI sequences using an input-level dropout layer, thus simulating the scenario of missing MRI sequences by training on the full set and all possible subsets of the input data. This retrospective, multicenter study evaluated 165 patients with brain metastases. The proposed input-level dropout (ILD) model was trained on multisequence MRI from 100 patients and validated/tested on 10/55 patients, in which the test set was missing one of the four MRI sequences used for training. The segmentation results were compared with the performance of a state-of-the-art DeepLab V3 model. The MR sequences in the training set included pre-gadolinium and post-gadolinium (Gd) T1-weighted 3D fast spin echo, post-Gd T1-weighted inversion recovery (IR) prepped fast spoiled gradient echo, and 3D fluid-attenuated inversion recovery (FLAIR), whereas the test set did not include the IR-prepped image series. The ground-truth segmentations were established by experienced neuroradiologists. The results were evaluated using precision, recall, Intersection over Union (IoU) score, Dice score, and receiver operating characteristic (ROC) curve statistics, while the Wilcoxon rank-sum test was used to compare the performance of the two neural networks. The area under the ROC curve (AUC), averaged across all test cases, was 0.989 ± 0.029 for the ILD model and 0.989 ± 0.023 for the DeepLab V3 model (p = 0.62). The ILD model showed a significantly higher Dice score (0.795 ± 0.104 vs. 0.774 ± 0.104, p = 0.017) and IoU score (0.561 ± 0.225 vs. 0.492 ± 0.186, p < 0.001) than the DeepLab V3 model, and a significantly lower average false-positive rate of 3.6/patient vs. 7.0/patient (p < 0.001) using a 10 mm³ lesion-size limit. The ILD model, trained on all possible combinations of four MRI sequences, may facilitate accurate detection and segmentation of brain metastases on a multicenter basis, even when the test cohort is missing input MRI sequences.
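The input-level dropout idea can be illustrated with a small sketch that randomly zeroes whole MRI-sequence channels during training, so the network effectively sees the full set and every subset of its inputs. This is a minimal sketch under assumed conventions (channel-first arrays, zero-filling for missing sequences, a 0.5 keep probability); it is not the paper's exact implementation.

```python
# Illustrative input-level "sequence dropout" for a 4-channel MRI input
# (pre-Gd T1, post-Gd T1, post-Gd IR-prepped, FLAIR stacked as channels).
# The subset-sampling policy and zero-filling below are assumptions.
import numpy as np

def drop_input_sequences(volume: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """volume: array of shape (n_sequences, D, H, W). Returns a copy in which a
    randomly chosen non-empty subset of sequences is kept and the rest are zeroed,
    so the network is also trained on every possible subset of its inputs."""
    n_seq = volume.shape[0]
    keep = rng.random(n_seq) > 0.5          # each sequence kept with probability 0.5
    if not keep.any():                      # always keep at least one sequence
        keep[rng.integers(n_seq)] = True
    out = volume.copy()
    out[~keep] = 0.0                        # missing sequences are zero-filled
    return out

# Example: simulate a test case that is missing one series (here index 2,
# standing in for the IR-prepped sequence absent from the test set).
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 32, 64, 64)).astype(np.float32)
x_train = drop_input_sequences(x, rng)      # random subset during training
x_test = x.copy()
x_test[2] = 0.0                             # fixed missing sequence at test time
```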


2010 ◽  
Vol 103 (4) ◽  
pp. 2185-2194 ◽  
Author(s):  
Nina Deisig ◽  
Martin Giurfa ◽  
Jean Christophe Sandoz

Local networks within the primary olfactory centers reformat odor representations from olfactory receptor neurons to second-order neurons. By studying the rules underlying mixture representation at the input to the antennal lobe (AL), the primary olfactory center of the insect brain, we recently found that mixture representation follows a strict elemental rule in honeybees: the more a component activates the AL when presented alone, the more it is represented in a mixture. We now studied mixture representation at the output of the AL by imaging a population of second-order neurons, which convey AL-processed odor information to higher brain centers. We systematically measured odor-evoked activity in 22 identified glomeruli in response to four single odorants and all their possible binary, ternary and quaternary mixtures. By comparing input and output responses, we determined how the AL network reformats mixture representation and what advantage this confers for odor discrimination. We show that increased inhibition within the AL leads to a more synthetic, less elemental mixture representation at the output level than at the input level. As a result, mixture representations become more separable in olfactory space, thus allowing better differentiation among floral blends in nature.
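As a toy illustration of what "more separable in olfactory space" means operationally, each odorant or mixture can be treated as a vector of responses across the identified glomeruli, and separability summarized as the mean pairwise distance between these vectors at the input versus the output level. The response values below are synthetic placeholders; only the 22-glomerulus, 15-stimulus bookkeeping follows the abstract, and this is not the authors' analysis code.

```python
# Toy computation of "separability in olfactory space": each stimulus (4 single
# odorants + 11 binary/ternary/quaternary mixtures = 15) is a vector of responses
# across 22 glomeruli, and separability is the mean pairwise distance between
# stimulus vectors. All response values here are synthetic placeholders.
from itertools import combinations
import numpy as np

def mean_pairwise_distance(responses: np.ndarray) -> float:
    """responses: (n_stimuli, n_glomeruli) matrix of odor-evoked activity."""
    dists = [np.linalg.norm(responses[i] - responses[j])
             for i, j in combinations(range(len(responses)), 2)]
    return float(np.mean(dists))

rng = np.random.default_rng(1)
input_resp = rng.random((15, 22))    # stand-in for AL input (receptor-neuron) responses
output_resp = rng.random((15, 22))   # stand-in for AL output (second-order neuron) responses

# The comparison of interest: does the output-level separability exceed the input-level one?
print(mean_pairwise_distance(input_resp), mean_pairwise_distance(output_resp))
```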


Author(s):  
Jae Young Choi

Recently, considerable research effort has been devoted to the effective use of facial color information for improved recognition performance. Of all color-based face recognition (FR) methods, the most widely used approach is a color FR method based on input-level fusion. In this method, augmented input vectors of the color images are first generated by concatenating different color components (including both luminance and chrominance information) in column order at the input level, and a feature subspace is then trained with the set of augmented input vectors. In practical applications, however, a testing image may be captured as a grayscale image rather than a color image, mainly because of different, heterogeneous image acquisition environments. A grayscale testing image causes a so-called dimensionality mismatch between the trained feature subspace and the testing input vector. This disparity in dimensionality degrades FR performance and even imposes a significant restriction on carrying out FR operations in practical color FR systems. To resolve the dimensionality mismatch, we propose a novel approach that estimates a new feature subspace suitable for recognizing a grayscale testing image. In particular, the new feature subspace is estimated from a given feature subspace created using color training images. The effectiveness of the proposed solution was tested on four public face databases (DBs): the CMU, FERET, XM2VTSDB, and ORL DBs. Extensive, comparative experiments showed that the proposed solution works well for resolving the dimensionality mismatch that arises in real-life color FR systems.
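To make the dimensionality mismatch concrete, the sketch below forms an input-level fused vector by concatenating luminance and chrominance planes and contrasts it with the vector a grayscale test image would yield. The YCbCr component choice and the 32×32 image size are illustrative assumptions, not details taken from the paper.

```python
# Sketch of input-level color fusion and the resulting dimensionality mismatch.
# The YCbCr component choice and image size are illustrative assumptions.
import numpy as np

def augmented_color_vector(y: np.ndarray, cb: np.ndarray, cr: np.ndarray) -> np.ndarray:
    """Stack luminance and chrominance planes into one augmented input vector."""
    return np.concatenate([y.ravel(), cb.ravel(), cr.ravel()])

h, w = 32, 32
rng = np.random.default_rng(0)
y, cb, cr = (rng.random((h, w)) for _ in range(3))

train_vec = augmented_color_vector(y, cb, cr)   # 3*h*w dims, used to train the subspace
gray_vec = y.ravel()                            # grayscale test image: h*w dims only

print(train_vec.shape, gray_vec.shape)          # (3072,) vs. (1024,): the mismatch
# A subspace (e.g., a PCA basis) trained on 3072-dimensional vectors cannot be applied
# directly to the 1024-dimensional grayscale vector; the paper instead estimates a new
# subspace compatible with grayscale inputs from the color-trained one.
```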

