Automatic parameter estimation of multicompartmental neuron models via minimization of trace error with control adjustment

2014 ◽  
Vol 112 (9) ◽  
pp. 2332-2348 ◽  
Author(s):  
Ted Brookings ◽  
Marie L. Goeritz ◽  
Eve Marder

We describe a new technique to fit conductance-based neuron models to intracellular voltage traces from isolated biological neurons. The biological neurons are recorded in current clamp with pink (1/f) noise injected to perturb the activity of the neuron. The new algorithm finds a set of parameters that allows a multicompartmental model neuron to match the recorded voltage trace. Attempting to match a recorded voltage trace directly has a well-known problem: mismatch in the timing of action potentials between the biological and model neurons is inevitable and results in a poor phenomenological match between the model and the data. Our approach avoids this by applying a weak control adjustment to the model to promote alignment during the fitting procedure. This approach is closely related to the control-theoretic concept of a Luenberger observer. We tested this approach on synthetic data and on data recorded from an anterior gastric receptor neuron from the stomatogastric ganglion of the crab Cancer borealis. To test the flexibility of this approach, the synthetic data were constructed with conductance models that were different from the ones used in the fitting model. For both synthetic and biological data, the resultant models had good spike-timing accuracy.
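The control adjustment can be sketched in a few lines: the model's voltage derivative gets an extra feedback term proportional to the mismatch with the recorded trace, so spike-timing errors do not compound during integration. The single-compartment leak model, parameter names, and gain below are illustrative assumptions, not the authors' actual multicompartmental model.

```python
import numpy as np

def simulate_with_observer(v_rec, dt, params, gain=0.5):
    """Integrate a toy single-compartment leak model whose voltage is
    weakly pulled toward the recorded trace (Luenberger-observer-style
    feedback). `params` = (g_leak, e_leak, c_m); all names are
    illustrative."""
    g_leak, e_leak, c_m = params
    v = np.empty_like(v_rec)
    v[0] = v_rec[0]
    for t in range(1, len(v_rec)):
        dvdt = -g_leak * (v[t - 1] - e_leak) / c_m
        # weak control adjustment: nudge the model toward the data so
        # timing mismatch does not dominate the trace error
        dvdt += gain * (v_rec[t - 1] - v[t - 1])
        v[t] = v[t - 1] + dt * dvdt
    return v

def trace_error(v_rec, dt, params, gain=0.5):
    """Mean squared error between the observed trace and the
    observer-assisted model trace; the quantity minimized over params."""
    v = simulate_with_observer(v_rec, dt, params, gain)
    return np.mean((v - v_rec) ** 2)
```

A fitting loop would then minimize `trace_error` over `params` with any standard optimizer; at the correct parameters the feedback term vanishes because the model already tracks the data.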

2021 ◽  
Author(s):  
Andrew J Kavran ◽  
Aaron Clauset

Abstract
Background: Large-scale biological data sets are often contaminated by noise, which can impede accurate inferences about underlying processes. Such measurement noise can arise from endogenous biological factors like cell cycle and life-history variation, and from exogenous technical factors like sample preparation and instrument variation.
Results: We describe a general method for automatically reducing noise in large-scale biological data sets. This method uses an interaction network to identify groups of correlated or anti-correlated measurements that can be combined or “filtered” to better recover an underlying biological signal. Similar to the process of denoising an image, a single network filter may be applied to an entire system, or the system may first be decomposed into distinct modules and a different filter applied to each. Applied to synthetic data with known network structure and signal, network filters accurately reduce noise across a wide range of noise levels and structures. Applied to a machine-learning task of predicting changes in human protein expression in healthy and cancerous tissues, network filtering prior to training increases accuracy by up to 43% compared to using unfiltered data.
Conclusions: Network filters are a general way to denoise biological data and can account for both correlation and anti-correlation between different measurements. Furthermore, we find that partitioning a network prior to filtering can significantly reduce errors in networks with heterogeneous data and correlation patterns, and this approach outperforms existing diffusion-based methods. Our results on proteomics data indicate the broad potential utility of network filters for applications in systems biology.
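As a concrete illustration of the filtering idea, here is a minimal sketch: each measurement is blended with the average of its network neighbors, with anti-correlated neighbors sign-flipped. The adjacency/sign-matrix representation and the blending weight `alpha` are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def network_filter(x, adj, sign=None, alpha=0.5):
    """Denoise a vector of measurements x by blending each node's value
    with the mean of its network neighbors. `adj` is a 0/1 adjacency
    matrix; `sign` (+1/-1 per edge) flips anti-correlated neighbors.
    A simplified sketch of the network-filter idea."""
    x = np.asarray(x, float)
    adj = np.asarray(adj, float)
    if sign is None:
        sign = np.ones_like(adj)
    deg = adj.sum(axis=1)
    # neighbor average, with anti-correlated neighbors sign-flipped
    nbr = (adj * sign) @ x / np.maximum(deg, 1)
    filtered = (1 - alpha) * x + alpha * nbr
    filtered[deg == 0] = x[deg == 0]   # isolated nodes pass through
    return filtered
```

Module-wise filtering, as described in the abstract, would amount to partitioning the nodes first and calling a (possibly different) filter on each block of `adj`.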


2019 ◽  
Vol 141 (6) ◽  
Author(s):  
Edith Osorio de la Rosa ◽  
Guillermo Becerra Nuñez ◽  
Alfredo Omar Palafox Roca ◽  
René Ledesma-Alonso

This paper presents a methodology to estimate solar irradiance using an empiric-stochastic approach, which is based on the computation of normalization parameters from the solar irradiance data. For this study, the solar irradiance data were collected in a weather station during a year. Post-treatment included a trimmed moving average to smooth the data, a fitting procedure using a simple model to recover the normalization parameters, and the estimation of a probability density, which evolves along the daytime, by means of a kernel density estimation method. The normalization parameters correspond to characteristic physical variables that allow us to decouple the short- and long-term behaviors of solar irradiance and to describe their average trends with simple equations. The normalization parameters and the probability densities allowed us to build an empiric-stochastic methodology that generates an estimate of the solar irradiance. Finally, in order to validate our method, we ran simulations of solar irradiance and afterward computed the theoretical generation of solar power, which in turn was compared with experimental data retrieved from a commercial photovoltaic system. Since the simulation results show good agreement with the experimental data, this simple methodology can generate synthetic data of solar power production and may help to design and test a photovoltaic system before installation.
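The kernel density estimation step can be sketched as follows. A Gaussian kernel with Silverman's rule-of-thumb bandwidth is a common default; the paper does not specify these particular choices, so treat them as assumptions.

```python
import numpy as np

def gaussian_kde_pdf(samples, grid, bandwidth=None):
    """Kernel density estimate with a Gaussian kernel, as one could use
    to model the distribution of normalized solar irradiance at a given
    hour of day (a generic sketch, not the authors' exact pipeline)."""
    samples = np.asarray(samples, float)
    grid = np.asarray(grid, float)
    n = samples.size
    if bandwidth is None:
        # Silverman's rule of thumb
        bandwidth = 1.06 * samples.std() * n ** (-1 / 5)
    z = (grid[:, None] - samples[None, :]) / bandwidth
    return np.exp(-0.5 * z**2).sum(axis=1) / (n * bandwidth * np.sqrt(2 * np.pi))
```

Evaluating one such density per daytime bin, and sampling from it, yields the kind of synthetic irradiance traces the abstract describes.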


Geophysics ◽  
2007 ◽  
Vol 72 (2) ◽  
pp. I13-I22 ◽  
Author(s):  
Fernando J. Silva Dias ◽  
Valeria C. Barbosa ◽  
João B. Silva

We present a new semiautomatic gravity interpretation method for estimating a complex interface between two media containing density heterogeneities (referred to as interfering sources) that give rise to a complex and interfering gravity field. The method combines a robust fitting procedure with the constraint that the interface is very smooth near the interfering sources, whose approximate horizontal coordinates are defined by the user. The proposed method differs from regional-residual separation techniques in that it makes no spectral-content assumption about the anomaly produced by the interface to be estimated, i.e., the interface can produce a gravity response containing both low- and high-wavenumber features. As a result, it may be applied to map the relief of a complex interface in a geologic setting containing either shallow or deep-seated interfering sources. Tests conducted with synthetic data show that the method can be of utility in estimating the basement relief of a sedimentary basin in the presence of salt layers and domes, or in the presence of mafic intrusions in the basement or in both the basement and the sedimentary section. The method was applied to real gravity data from two geologic settings having different kinds of interfering sources and interfaces to be interpreted: (1) the interface between the upper and lower crusts over the Bavali shear zone of southern India and (2) the anorthosite-tonalite interface over the East Bull Lake gabbro-anorthosite complex outcrop in Ontario, Canada.


2020 ◽  
Author(s):  
Grigoriy Gogoshin ◽  
Sergio Branciamore ◽  
Andrei S. Rodin

Abstract
Bayesian Network (BN) modeling is a prominent and increasingly popular computational systems biology method. It aims to construct, from large heterogeneous biological datasets, probabilistic networks that reflect the underlying networks of biological relationships. Currently, a variety of strategies exist for evaluating BN methodology performance, ranging from utilizing artificial benchmark datasets and models, to specialized biological benchmark datasets, to simulation studies that generate synthetic data from predefined network models. The latter is arguably the most comprehensive approach; however, existing implementations are typically limited by their reliance on the SEM (structural equation modeling) framework, which includes many explicit and implicit assumptions that may be unrealistic in a typical biological data analysis scenario. In this study, we develop an alternative, purely probabilistic, simulation framework that fits more appropriately with real biological data and biological network models. In conjunction, we also expand on our current understanding of the theoretical notions of causality and dependence/conditional independence in BNs and the Markov blankets within them.


2014 ◽  
Author(s):  
Conrad Burden ◽  
Sumaira Qureshi ◽  
Susan R Wilson

A number of algorithms exist for analysing RNA-sequencing data to infer profiles of differential gene expression. Problems inherent in building algorithms around statistical models of overdispersed count data are formidable and frequently lead to non-uniform p-value distributions for null-hypothesis data and to inaccurate estimates of false discovery rates (FDRs). This can lead to an inaccurate measure of significance and a loss of power to detect differential expression. We use synthetic and real biological data to assess the ability of several available R packages to accurately estimate FDRs. The packages surveyed are based on statistical models of overdispersed Poisson data and include edgeR, DESeq, DESeq2, PoissonSeq and QuasiSeq. Also tested is an add-on package to edgeR and DESeq, which we introduce, called Polyfit. Polyfit aims to address the problem of a non-uniform null p-value distribution for two-class datasets by adapting the Storey-Tibshirani procedure. We find that the QLSpline implementation of QuasiSeq is the best-performing package, in the sense that it achieves a low FDR which is accurately estimated over the full range of p-values, albeit with a very slow run time. This finding holds provided the number of biological replicates in each condition is at least 4. The next best performing packages are edgeR and DESeq2. When the number of biological replicates is sufficiently high, and within a range accessible to multiplexed experimental designs, the Polyfit extension improves the performance of DESeq (for approximately 6 or more replicates per condition), making its performance comparable with that of edgeR and DESeq2 in our tests with synthetic data.
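The Storey-Tibshirani procedure that Polyfit adapts can be sketched as follows: estimate the proportion of true nulls pi0 from the p-value distribution above a threshold lambda, then convert p-values to q-values (estimated FDRs). This is a generic sketch of the standard procedure, not Polyfit's adaptation.

```python
import numpy as np

def storey_qvalues(pvals, lam=0.5):
    """Storey-Tibshirani FDR estimation: p-values above `lam` are assumed
    to be mostly null, giving an estimate of the null proportion pi0;
    q-values are then a pi0-scaled Benjamini-Hochberg step-up."""
    p = np.asarray(pvals, float)
    m = p.size
    # fraction of p-values above lam, rescaled by the null density there
    pi0 = min(1.0, (p > lam).mean() / (1 - lam))
    order = np.argsort(p)
    q = np.empty(m)
    # q(p_(rank)) = min over ranks >= rank of pi0 * m * p_(j) / j
    running = 1.0
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        running = min(running, pi0 * m * p[i] / rank)
        q[i] = running
    return q
```

A non-uniform null p-value distribution violates the assumption behind the pi0 estimate, which is exactly the failure mode the abstract says Polyfit addresses by first adjusting the null distribution.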


2020 ◽  
Author(s):  
Kristoffer Sahlin ◽  
Veli Mäkinen

Abstract
Long-read RNA sequencing techniques are quickly establishing themselves as the primary sequencing technique for studying the transcriptome landscape. Many such analyses depend on splice alignment of reads to the genome. However, the error rate and sequencing length of long-read technologies create new challenges for accurately aligning these reads. We present an alignment method, uLTRA, that, on simulated and synthetic data, shows higher accuracy than the state of the art, with substantially higher accuracy for small exons. We show several examples on biological data where uLTRA aligns to known and novel isoforms with exon structures that are not detected by other aligners. uLTRA is available at https://github.com/ksahlin/ultra.


2020 ◽  
Vol 21 (1) ◽  
Author(s):  
Miha Moškon

Abstract
Background: Even though several computational methods for rhythmicity detection and analysis of biological data have been proposed in recent years, classical trigonometric regression based on the cosinor still has several advantages over these methods and is still widely used. Different software packages for cosinor-based rhythmometry exist, but they lack certain functionalities and require data in different, non-unified input formats.
Results: We present CosinorPy, a Python implementation of cosinor-based methods for rhythmicity detection and analysis. CosinorPy merges and extends the functionalities of existing cosinor packages. It supports the analysis of rhythmic data using single- or multi-component cosinor models, automatic selection of the best model, population-mean cosinor regression, and differential rhythmicity assessment. Moreover, it implements functions that can be used in the design of experiments, a synthetic data generator, and the import and export of data in different formats.
Conclusion: CosinorPy is an easy-to-use Python package for straightforward detection and analysis of rhythmicity that requires minimal statistical knowledge and produces publication-ready figures. Its code, examples, and documentation are available for download from https://github.com/mmoskon/CosinorPy. CosinorPy can be installed manually or by using pip, the package manager for Python. The implementation reported in this paper corresponds to software release v1.1.
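A single-component cosinor model is linear in its parameters once the cosine is expanded, so it can be fitted by ordinary least squares. The sketch below shows the generic cosinor method, not CosinorPy's actual API.

```python
import numpy as np

def fit_cosinor(t, y, period=24.0):
    """Fit the single-component cosinor model
        y(t) = M + A*cos(2*pi*t/period + phi)
    via the linearized form
        y = M + beta*cos(w*t) + gamma*sin(w*t),
    where beta = A*cos(phi) and gamma = -A*sin(phi).
    Returns (mesor M, amplitude A, acrophase phi)."""
    w = 2 * np.pi / period
    X = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
    (M, beta, gamma), *_ = np.linalg.lstsq(X, y, rcond=None)
    A = np.hypot(beta, gamma)
    phi = np.arctan2(-gamma, beta)
    return M, A, phi
```

A multi-component model simply adds cosine/sine column pairs for each extra harmonic to the design matrix, and model selection then compares fits of increasing order.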


2017 ◽  
Vol 28 (09) ◽  
pp. 1750116 ◽  
Author(s):  
Fabiola León-Bejarano ◽  
Miguel Ramírez-Elías ◽  
Martin O. Mendez ◽  
Guadalupe Dorantes-Méndez ◽  
Ma. del Carmen Rodríguez-Aranda ◽  
...  

Raman spectroscopy of biological samples presents undesirable noise and fluorescence generated by biomolecular excitation. The reduction of these types of noise is a fundamental task for obtaining the valuable information in the sample under analysis. This paper proposes the application of empirical mode decomposition (EMD) for noise elimination. EMD is a parameter-free and adaptive signal processing method useful for the analysis of nonstationary signals. EMD performance was compared with the commonly used Vancouver algorithm (VRA) through artificial (Teflon), synthetic (vitamin E and paracetamol), and biological (mouse brain and human nails) Raman spectra. The correlation coefficient ([Formula: see text]) was used as the performance measure. Results on synthetic data showed a better performance of EMD ([Formula: see text]) at high noise levels compared with VRA ([Formula: see text]). The methods with simulated fluorescence added to the artificial material exhibited a similar shape of fluorescence in both cases ([Formula: see text] for VRA and [Formula: see text] for EMD). For synthetic data, Raman spectra of vitamin E were used, and the results showed a good performance for both methods ([Formula: see text] for EMD and [Formula: see text] for VRA). Finally, on biological data, EMD and VRA displayed a similar behavior ([Formula: see text] for EMD and [Formula: see text] for VRA), but with the advantage that EMD maintains small-amplitude Raman peaks. The results suggest that EMD could be an effective method for denoising biological Raman spectra: it retains information and correctly eliminates the fluorescence without parameter tuning.
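The core of EMD is the sifting loop: repeatedly subtract the mean of the upper and lower envelopes (interpolated through local extrema) until the fastest oscillation is isolated, then peel it off and repeat on the residual. The sketch below uses linear envelope interpolation and fixed iteration counts for brevity; real EMD implementations use spline envelopes and proper stopping criteria.

```python
import numpy as np

def _extrema(h):
    """Indices of interior local maxima and minima."""
    up = np.flatnonzero((h[1:-1] > h[:-2]) & (h[1:-1] > h[2:])) + 1
    dn = np.flatnonzero((h[1:-1] < h[:-2]) & (h[1:-1] < h[2:])) + 1
    return up, dn

def sift(x, n_iter=8):
    """Extract one candidate intrinsic mode function (IMF) by repeatedly
    removing the mean of the upper and lower envelopes."""
    h = np.asarray(x, float).copy()
    t = np.arange(h.size)
    for _ in range(n_iter):
        up, dn = _extrema(h)
        if up.size < 2 or dn.size < 2:
            break
        upper = np.interp(t, up, h[up])   # linear envelopes for brevity
        lower = np.interp(t, dn, h[dn])
        h = h - 0.5 * (upper + lower)
    return h

def emd_baseline(x, n_imfs=2):
    """Peel off the first IMFs (fast Raman peaks and noise); the residual
    approximates the slowly varying fluorescence baseline."""
    residual = np.asarray(x, float).copy()
    detail = np.zeros_like(residual)
    for _ in range(n_imfs):
        imf = sift(residual)
        detail += imf
        residual -= imf
    return detail, residual
```

Subtracting the residual (baseline) from the spectrum is then the fluorescence-removal step, with no tunable parameters beyond the number of IMFs discarded.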


Geophysics ◽  
1985 ◽  
Vol 50 (11) ◽  
pp. 1701-1720 ◽  
Author(s):  
Glyn M. Jones ◽  
D. B. Jovanovich

A new technique is presented for the inversion of head‐wave traveltimes to infer near‐surface structure. Traveltimes computed along intersecting pairs of refracted rays are used to reconstruct the shape of the first refracting horizon beneath the surface and variations in refractor velocity along this boundary. The information derived can be used as the basis for further processing, such as the calculation of near‐surface static delays. One advantage of the method is that the shape of the refractor is determined independently of the refractor velocity. With multifold coverage, rapid lateral changes in refractor geometry or velocity can be mapped. Two examples of the inversion technique are presented: one uses a synthetic data set; the other is drawn from field data shot over a deep graben filled with sediment. The results obtained using the synthetic data validate the method and support the conclusions of an error analysis, in which errors in the refractor velocity determined using receivers to the left and right of the shots are of opposite sign. The true refractor velocity therefore falls between the two sets of estimates. The refraction image obtained by inversion of the set of field data is in good agreement with a constant‐velocity reflection stack and illustrates that the ray inversion method can handle large lateral changes in refractor velocity or relief.

