Amplifying the Neural Power Spectrum

2019 ◽  
Author(s):  
J. Andrew Doyle ◽  
Paule-Joanne Toussaint ◽  
Alan C. Evans

Abstract We introduce a novel method that employs a parametric model of human electroencephalographic (EEG) brain signal power spectra to evaluate cognitive science experiments and test scientific hypotheses. We develop the Neural Power Amplifier (NPA), a data-driven approach to EEG pre-processing that can replace current filtering strategies with a principled method based on combining filters with logarithmic and Gaussian magnitude responses. Presenting the first time-domain evidence to validate an increasingly popular model for neural power spectra [1], we show that filtering out the 1/f background signal and selecting peaks improves a time-domain decoding experiment for visual stimuli of human faces versus random noise.
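The parametric spectral model referred to here (an aperiodic 1/f background plus Gaussian peaks, in log power) can be sketched as follows. The function name, parameter values, and the single alpha-band peak are illustrative assumptions, not the NPA implementation:

```python
import numpy as np

def model_spectrum(freqs, offset, exponent, peaks):
    """Log-power spectrum: aperiodic 1/f background plus Gaussian peaks.
    'peaks' is a list of (center_hz, amplitude, bandwidth_hz) tuples."""
    background = offset - exponent * np.log10(freqs)
    gaussians = sum(a * np.exp(-(freqs - c) ** 2 / (2 * w ** 2))
                    for c, a, w in peaks)
    return background + gaussians

freqs = np.linspace(1, 40, 200)                        # Hz
spec = model_spectrum(freqs, offset=1.0, exponent=1.0,
                      peaks=[(10.0, 0.6, 2.0)])        # one alpha-band peak
background = 1.0 - 1.0 * np.log10(freqs)
flattened = spec - background                          # peaks only, 1/f removed
```

Subtracting the fitted background leaves only the oscillatory peaks, which is the "filtering out the 1/f background" step the abstract describes.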

Sensors ◽  
2019 ◽  
Vol 19 (4) ◽  
pp. 758 ◽  
Author(s):  
Jialin Li ◽  
Xueyi Li ◽  
David He ◽  
Yongzhi Qu

Research on data-driven fault diagnosis methods has received much attention in recent years. The deep belief network (DBN) is a commonly used deep learning method for fault diagnosis. In the past, when DBNs were used to diagnose gear pitting faults, the diagnosis results were poor when continuous time domain vibration signals were used as direct inputs. Therefore, most researchers extracted features from the time domain vibration signals as inputs into the DBN. However, it is desirable to use raw vibration signals as direct inputs and still achieve good fault diagnosis results. This paper therefore proposes a novel method that stacks a sparse autoencoder (SAE) and a Gauss-Binary restricted Boltzmann machine (GBRBM) for early gear pitting fault diagnosis with raw vibration signals as direct inputs. The SAE layer is used to compress the raw vibration data and the GBRBM layer is used to effectively process continuous time domain vibration signals. Vibration signals of seven early gear pitting faults collected from a gear test rig are used to validate the proposed method. The validation results show that the proposed method maintains good diagnosis performance under different working conditions and gives higher diagnosis accuracy than other traditional methods.
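A minimal sketch of the stacked architecture's forward pass, under assumed layer sizes and with untrained weights (the actual network is learned; a Gauss-Binary RBM has real-valued visible units and binary hidden units):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical layer sizes: raw vibration window -> SAE code -> GBRBM hidden
n_raw, n_code, n_hidden = 1024, 256, 64

# Sparse autoencoder encoder (in training, activations would carry an
# L1/KL sparsity penalty; weights here are untrained, for shapes only)
W_enc = rng.normal(scale=0.01, size=(n_code, n_raw))
b_enc = np.zeros(n_code)

# Gauss-Binary RBM: real-valued (Gaussian) visible units, binary hidden units
W_rbm = rng.normal(scale=0.01, size=(n_hidden, n_code))
b_hid = np.zeros(n_hidden)
sigma = 1.0  # visible-unit standard deviation

x = rng.normal(size=n_raw)                  # one raw vibration window
code = sigmoid(W_enc @ x + b_enc)           # SAE compresses the raw signal
p_hidden = sigmoid(W_rbm @ (code / sigma**2) + b_hid)  # hidden-unit probabilities
```

The point of the GBRBM layer is that its Gaussian visible units accept the continuous-valued SAE code directly, rather than requiring binarized inputs as a standard Bernoulli RBM would.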


JAMIA Open ◽  
2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Fuchiang R Tsui ◽  
Lingyun Shi ◽  
Victor Ruiz ◽  
Neal D Ryan ◽  
Candice Biernesser ◽  
...  

Abstract Objective Limited research exists in predicting first-time suicide attempts, which account for two-thirds of suicide decedents. We aimed to predict first-time suicide attempts using a large data-driven approach that applies natural language processing (NLP) and machine learning (ML) to unstructured (narrative) clinical notes and structured electronic health record (EHR) data. Methods This case-control study included patients aged 10–75 years who were seen between 2007 and 2016 in emergency departments and inpatient units. Cases were first-time suicide attempts from coded diagnoses; controls were randomly selected patients without suicide attempts regardless of demographics, following a ratio of nine controls per case. Four data-driven ML models were evaluated using 2 years of historical EHR data prior to suicide attempt or control index visits, with prediction windows from 7 to 730 days. Patients without any historical notes were excluded. Model evaluation on accuracy and robustness was performed on a blind dataset (30% of the cohort). Results The study cohort included 45 238 patients (5099 cases, 40 139 controls) comprising 54 651 variables from 5.7 million structured records and 798 665 notes. Using both unstructured and structured data resulted in significantly greater accuracy compared to structured data alone (area under the curve [AUC]: 0.932 vs 0.901, P < .001). The best-predicting model utilized 1726 variables with AUC = 0.932 (95% CI, 0.922–0.941). The model was robust across multiple prediction windows and across subgroups by demographics, point of most recent clinical contact, and depression diagnosis history. Conclusions Our large data-driven approach using both structured and unstructured EHR data demonstrated accurate and robust first-time suicide attempt prediction, and has the potential to be deployed across various populations and clinical settings.


Author(s):  
Anil Kumar ◽  
Amina Khatun ◽  
Sanjib Kumar Agarwalla ◽  
Amol Dighe

Abstract Atmospheric neutrino experiments can show the “oscillation dip” feature in data, due to their sensitivity over a large L/E range. In experiments that can distinguish between neutrinos and antineutrinos, like INO, oscillation dips can be observed in both these channels separately. We present a dip-identification algorithm employing a data-driven approach – one that uses the asymmetry in the upward-going and downward-going events, binned in the reconstructed L/E of muons – to demonstrate the dip, which would confirm the oscillation hypothesis. We further propose, for the first time, the identification of an “oscillation valley” in the reconstructed (E_μ, cos θ_μ) plane, feasible for detectors like ICAL having excellent muon energy and direction resolutions. We illustrate how this two-dimensional valley would offer a clear visual representation and test of the L/E dependence, with the alignment of the valley quantifying the atmospheric mass-squared difference. Owing to the charge identification capability of the ICAL detector at INO, we always present our results using μ⁻ and μ⁺ events separately. Taking into account the statistical fluctuations and systematic errors, and varying oscillation parameters over their currently allowed ranges, we estimate the precision to which atmospheric neutrino oscillation parameters would be determined with 10-year simulated data at ICAL using our procedure.
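The dip's location in L/E can be illustrated with the standard two-flavor survival probability. The parameter values below are illustrative, near current atmospheric best fits; this is not the authors' dip-identification algorithm:

```python
import numpy as np

dm2 = 2.5e-3       # atmospheric mass-squared difference, eV^2 (illustrative)
sin2_2theta = 1.0  # maximal mixing (illustrative)

# Two-flavor muon-neutrino survival probability over L/E in km/GeV;
# the "oscillation dip" is the first survival minimum.
L_over_E = np.linspace(100, 1000, 2000)
P_surv = 1 - sin2_2theta * np.sin(1.27 * dm2 * L_over_E) ** 2

dip = L_over_E[np.argmin(P_surv)]  # near pi / (2 * 1.27 * dm2) km/GeV
```

Because the dip position scales inversely with Δm², locating it (or the valley's alignment in the two-dimensional plane) directly constrains the mass-squared difference.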


2019 ◽  
Vol 20 (S15) ◽  
Author(s):  
Liyuan Liu ◽  
Bingchen Yu ◽  
Meng Han ◽  
Shanshan Yuan ◽  
Na Wang

Abstract Background Cognitive decline has emerged as a significant threat to both public health and personal welfare, and mild cognitive decline/impairment (MCI) can further develop into dementia/Alzheimer’s disease. While treatment of dementia/Alzheimer’s disease can be expensive and sometimes ineffective, preventing MCI by identifying modifiable risk factors is a complementary and effective strategy. Results In this study, based on data collected by the Centers for Disease Control and Prevention (CDC) through a nationwide telephone survey, we apply a data-driven approach to re-examine previously identified risk factors and to discover new ones. We found that depression, physical health, cigarette usage, education level, and sleep time play an important role in cognitive decline, which is consistent with previous findings. Beyond that, for the first time, we point out that other factors such as arthritis, pulmonary disease, stroke, asthma, and marital status also contribute to MCI risk, which has been less explored previously. We also incorporate machine learning and deep learning algorithms to weigh the importance of the various factors contributing to MCI and to predict cognitive decline. Conclusion By incorporating the data-driven approach, we can determine the risk factors significantly correlated with the disease. These correlations could also be extended to other medical diagnoses besides MCI.


2019 ◽  
Author(s):  
Friederike Ehrhart ◽  
Egon L. Willighagen ◽  
Martina Kutmon ◽  
Max van Hoften ◽  
Nasim Bahram Sangani ◽  
...  

Abstract This dataset provides information about monogenic, rare diseases with a known genetic cause, supplemented with manually extracted provenance of both the disease and the discovery of the underlying genetic cause of the disease. We collected 4166 rare monogenic diseases according to their OMIM identifier and linked them to 3163 causative genes, which are annotated with Ensembl identifiers and HGNC symbols. The PubMed identifier of the scientific publication that describes the rare disease for the first time, and of the publication that found the gene causing this disease, were added using information from OMIM, Wikipedia, Google Scholar, Whonamedit, and PubMed. The data are available as a spreadsheet and as RDF in a semantic model modified from DisGeNET. This dataset relies on publicly available data and publications with PubMed IDs, but to our knowledge this is the first time these data have been linked and made available for further study under a liberal license. Analysis of the data reveals the timeline of rare disease and causative gene discovery and links them to developments in methods and databases.


2019 ◽  
Vol 185 (17) ◽  
pp. 540-540 ◽  
Author(s):  
Hannah Schubert ◽  
Sarah Wood ◽  
Kristen Reyher ◽  
Harriet Mills

Background Knowledge of accurate weights of cattle is crucial for effective dosing of individual animals and for reporting antimicrobial usage. For the first time, we provide an evidence-based estimate of the average weight of UK dairy cattle to better inform farmers, veterinarians and the scientific community. Methods Data were collected for 2747 lactating dairy cattle from 20 farms in the UK. The data were used to calculate a mean weight for lactating dairy cattle by breed and a UK-specific mean weight. Trends in weight by lactation number and production level were also explored. Results The mean weight for adult dairy cattle in this study was 617 kg (sd=85.6 kg). Mean weight varied across breeds, ranging from 466 kg (sd=56.0 kg, Jersey) to 636 kg (sd=84.1 kg, Holstein). When scaled to UK breed proportions, the estimated UK-specific mean weight was 620 kg. Conclusion This study is the first to calculate a mean weight of adult dairy cattle in the UK based on on-farm data. The overall mean weight was higher than that most often proposed in the literature (600 kg). Evidence-informed weights are crucial as the UK works to better monitor and report metrics measuring antimicrobial use, and are useful to farmers and veterinarians to inform dosing decisions.
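The scaling to UK breed proportions is a breed-weighted mean. In this sketch only the Jersey and Holstein means come from the abstract; the "Other" mean and all breed proportions are made up for illustration:

```python
# Breed-weighted mean weight (kg). Only the Jersey and Holstein means are
# from the abstract; "Other" and the proportions are illustrative.
breed_means = {"Holstein": 636.0, "Jersey": 466.0, "Other": 610.0}
breed_props = {"Holstein": 0.80, "Jersey": 0.05, "Other": 0.15}
uk_mean = sum(breed_props[b] * breed_means[b] for b in breed_means)
```

With real breed proportions and per-breed means, the same weighted sum yields the reported UK-specific figure.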


2020 ◽  
Author(s):  
Christian Herrera ◽  
Charlie Chubb ◽  
Ted Wright ◽  
Peng Sun ◽  
George Sperling

In this paper we set out to enumerate and characterize mechanisms sensitive to color scrambles. A color scramble is a texture made of a finite number of elements drawn from a set Ω, in this case small colored squares, distributed according to a histogram. We use a novel method to derive eight equiluminant lights along the red-green cardinal axis. We then generate a background annulus and a target disk to be detected in one of eight pre-defined locations. We model the mechanisms available to the subjects to perform the task using the seed-expansion weighting procedure. This theory-free, data-driven approach, constrained only by the size of the set Ω, makes no assumptions about the number of mechanisms used to perform the task, or about what they are sensitive to. We found that a model of three mechanisms explains the data well: one half-wave rectified mechanism sensitive to green, one half-wave rectified mechanism sensitive to red, and a mechanism sensitive to gray. We discuss the implications of this result.
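A half-wave rectified mechanism responds only to one sign of excursion along the color axis. In this sketch the eight lights are coded as positions on the red-green axis (negative = green, positive = red); the axis coding and the example histogram are illustrative, not the paper's stimuli:

```python
import numpy as np

# Eight equiluminant lights coded as positions along a red-green axis,
# and an example color-scramble histogram (illustrative values).
axis_values = np.linspace(-1, 1, 8)
histogram = np.array([3, 5, 2, 6, 4, 1, 7, 2], float)
histogram /= histogram.sum()                    # proportion of each light

# Half-wave rectification: each mechanism sees only one side of the axis.
red_response = np.sum(np.maximum(axis_values, 0) * histogram)
green_response = np.sum(np.maximum(-axis_values, 0) * histogram)
```

Because each mechanism zeroes out half the axis, the red and green responses are independent summaries of the same histogram, which is what lets a small set of rectified mechanisms account for discrimination of color scrambles.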


Author(s):  
David C. Wilson ◽  
Emily Wolin ◽  
William L. Yeck ◽  
Robert E. Anthony ◽  
Adam T. Ringler

Abstract Estimating the detection threshold of a seismic network (the minimum magnitude earthquake that can be reliably located) is a critical part of network design and can drive network maintenance efforts. The ability of a station to detect an earthquake is often estimated by assuming the spectral amplitude for an earthquake of a given size, assuming an attenuation relationship, and comparing the predicted amplitude with the average station background noise level. This approach has significant uncertainty because of unknown regional attenuation and complications in computing small event power spectra, and it fails to account for the specific capabilities of the automatic seismic phase picker used in monitoring. We develop a data-driven approach to determine network detection thresholds using a multiband phase picking algorithm that is currently in use at the U.S. Geological Survey National Earthquake Information Center. We apply this picking algorithm to cataloged earthquakes to determine an empirical relationship of the observability of earthquakes as a function of magnitude and distance. Using this relationship, we produce maps of detection threshold using station spatial configuration and station noise levels. We show that quiet, well-sited stations significantly increase the detection capabilities of a network compared with a network composed of many noisy stations. Because our method is data driven, it has two distinct advantages: (1) it is less dependent on theoretical assumptions of source spectra and models of regional attenuation, and (2) it can easily be applied to any seismic network. This tool allows for an objective approach to the management of stations in regional seismic networks.
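The mapping from an empirical magnitude-distance observability relationship to a network threshold can be sketched as follows. The logistic form and all coefficients are illustrative assumptions, not the fitted NEIC relationship or picker:

```python
import numpy as np

def p_detect(mag, dist_km, a=1.0, b=-0.0025, c=-2.0):
    """Hypothetical pick probability as a function of magnitude and
    epicentral distance -- the logistic shape such a data-driven
    relationship might take. Coefficients are illustrative."""
    return 1.0 / (1.0 + np.exp(-(a * mag + b * np.asarray(dist_km) + c)))

def network_threshold(dists_km, n_required=3, mags=np.arange(0.0, 6.0, 0.1)):
    """Smallest magnitude at which the expected number of detecting
    stations reaches the count needed to attempt a location (illustrative)."""
    for m in mags:
        if p_detect(m, dists_km).sum() >= n_required:
            return float(m)
    return None

t_quiet_near = network_threshold([10, 20, 30, 40])      # dense, nearby stations
t_sparse_far = network_threshold([200, 400, 600, 800])  # sparse, distant stations
```

Repeating this calculation over a grid of hypothetical epicenters yields the detection-threshold maps described, and swapping in noisier stations (lower per-station pick probability) raises the threshold.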


2016 ◽  
Vol 113 (50) ◽  
pp. 14183-14188 ◽  
Author(s):  
Huan Lei ◽  
Nathan A. Baker ◽  
Xiantao Li

We present a data-driven approach to determine the memory kernel and random noise in generalized Langevin equations. To facilitate practical implementations, we parameterize the kernel function in the Laplace domain by a rational function, with coefficients directly linked to the equilibrium statistics of the coarse-grained variables. We show that such an approximation can be constructed to arbitrarily high order and the resulting generalized Langevin dynamics can be embedded in an extended stochastic model without explicit memory. We demonstrate how to introduce the stochastic noise so that the second fluctuation-dissipation theorem is exactly satisfied. Results from several numerical tests are presented to demonstrate the effectiveness of the proposed method.
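For the simplest rational kernel, a single pole K(s) = c / (s + 1/τ), i.e. K(t) = c·e^(-t/τ), the extended embedding needs one auxiliary variable. The sketch below illustrates that idea under illustrative parameter values; it is not the authors' high-order construction:

```python
import numpy as np

# Markovian embedding of a GLE with single-pole rational kernel
# K(s) = c / (s + 1/tau), i.e. K(t) = c * exp(-t/tau). The auxiliary
# variable z replaces the memory integral; its noise amplitude is chosen
# so the second fluctuation-dissipation theorem,
# <F(t)F(t')> = kT * K(|t - t'|), holds. All values are illustrative.
c, tau, kT, dt = 1.0, 0.5, 1.0, 1e-3
rng = np.random.default_rng(1)

x, v, z = 1.0, 0.0, 0.0   # position, velocity, auxiliary memory variable
for _ in range(10_000):   # Euler-Maruyama integration
    z += dt * (-z / tau - c * v) + np.sqrt(2 * kT * c / tau * dt) * rng.normal()
    v += dt * (-x + z)    # unit-frequency harmonic force plus memory/noise term
    x += dt * v
```

Solving the z-equation shows z(t) carries both the friction term -∫ c·e^(-(t-s)/τ) v(s) ds and a colored noise with exactly the kernel's correlation, so no explicit memory integral is needed; higher-order rational kernels add one auxiliary variable per pole.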

