Efficient Identification of Assembly Neurons within Massively Parallel Spike Trains

2010 · Vol 2010 · pp. 1-18
Author(s): Denise Berger, Christian Borgelt, Sebastien Louis, Abigail Morrison, Sonja Grün

The chance of detecting assembly activity is expected to increase if the spiking activities of large numbers of neurons are recorded simultaneously. Although such massively parallel recordings are now becoming available, methods able to analyze such data for spike correlation are still rare, as a combinatorial explosion often makes it infeasible to extend methods developed for smaller data sets. By evaluating pattern complexity distributions, the existence of correlated groups can be detected, but their member neurons cannot be identified. In this contribution, we present approaches to actually identify the individual neurons involved in assemblies. Our results may complement other methods and also provide a way to reduce data sets to the “relevant” neurons, which shortens computation time and thus allows a refined analysis of the detailed correlation structure.
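As a rough illustration of the kind of analysis involved (a minimal sketch, not the authors' actual method), the code below bins parallel spike trains and counts, for each neuron, how often it participates in high-complexity coincidence patterns. The function names and the complexity threshold `min_neurons` are illustrative assumptions:

```python
def bin_spikes(spike_times, bin_width, t_max):
    """Discretise a spike train (list of spike times) into 0/1 time bins."""
    n_bins = int(t_max / bin_width)
    binned = [0] * n_bins
    for t in spike_times:
        b = int(t / bin_width)
        if b < n_bins:
            binned[b] = 1
    return binned

def coincidence_participation(binned_trains, min_neurons=3):
    """For each neuron, count the bins in which it fires together with
    at least (min_neurons - 1) other neurons, i.e. the bins belonging
    to coincidence patterns of complexity >= min_neurons."""
    n_neurons = len(binned_trains)
    n_bins = len(binned_trains[0])
    counts = [0] * n_neurons
    for b in range(n_bins):
        firing = [i for i in range(n_neurons) if binned_trains[i][b]]
        if len(firing) >= min_neurons:
            for i in firing:
                counts[i] += 1
    return counts
```

Neurons whose participation counts clearly exceed what independent firing would predict become candidates for assembly membership; the actual statistical comparison against chance is the subject of the paper.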

2015 · Vol 2015 · pp. 1-12
Author(s): David Picado Muiño, Christian Borgelt

In recent years numerous improvements have been made in multiple-electrode recordings (i.e., parallel spike-train recordings) and spike sorting, to the extent that it is now possible to monitor the activity of up to hundreds of neurons simultaneously. Due to these improvements it is now potentially possible to identify assembly activity (roughly understood as significant synchronous spiking of a group of neurons) from these recordings, which—if it can be demonstrated reliably—would significantly improve our understanding of neural activity and neural coding. However, several methodological problems remain when trying to do so and, among them, a principal one is the combinatorial explosion that one faces when considering all potential neuronal assemblies, since in principle every subset of the recorded neurons constitutes a candidate set for an assembly. We present several statistical tests to identify assembly neurons (i.e., neurons that participate in a neuronal assembly) from parallel spike trains, with the aim of reducing the set of neurons to a relevant subset and in this way easing the task of identifying neuronal assemblies in further analyses. These tests improve on those introduced in the work by Berger et al. (2010), drawing on additional features like spike weight or pairwise overlap and on alternative ways to identify spike coincidences (e.g., by avoiding time binning, which tends to lose information).
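The bin-free coincidence detection mentioned at the end can be sketched as follows: instead of discretising time, two spikes are declared coincident if they fall within a small time window of each other. This two-pointer sweep over sorted spike trains is an illustrative sketch, not the tests from the paper, and the window width `delta` is an assumed parameter:

```python
def pairwise_coincidences(train_a, train_b, delta=0.003):
    """Count coincident spike pairs between two time-sorted spike
    trains without binning: spikes coincide if their times differ
    by less than `delta` seconds. Each spike is matched at most once."""
    i = j = count = 0
    while i < len(train_a) and j < len(train_b):
        if abs(train_a[i] - train_b[j]) < delta:
            count += 1
            i += 1
            j += 1
        elif train_a[i] < train_b[j]:
            i += 1
        else:
            j += 1
    return count
```

Because no bin boundaries are involved, a pair of spikes 1 ms apart is always detected, whereas with binning it would be missed whenever a bin edge happens to fall between the two spikes.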


2019
Author(s): Ulrike Niemeier, Claudia Timmreck, Kirstin Krüger

Abstract. In 1963 a series of eruptions of Mt. Agung, Indonesia, resulted in the third-largest eruption of the 20th century and claimed about 1900 lives. Two eruptions of this series injected SO2 into the stratosphere, a requirement for a long-lasting stratospheric sulfate layer. The first eruption, on 17 March, injected 4.7 Tg SO2 into the stratosphere; the second, on 16 May, injected 2.3 Tg SO2. In recent volcanic emission data sets these eruption phases are merged into one large eruption phase for Mt. Agung in March 1963 with an injection of 7 Tg SO2. The injected sulfur forms a sulfate layer in the stratosphere. The evolution of sulfur is non-linear and depends on the injection rate and aerosol background conditions. We performed ensembles of two model experiments, one with a single eruption and one with two eruptions. The two smaller eruptions result in a lower burden, smaller particles and a 0.1 to 0.3 Wm−2 (10–20 %) lower radiative forcing in the monthly mean global average compared to the single-eruption experiment. The differences are the consequence of slightly stronger meridional transport due to the different seasons of the eruptions, the lower injection height of the second eruption and the resulting different aerosol evolution. The differences between the two experiments are significant but smaller than the variance of the individual ensemble means. Overall, the evolution of the volcanic clouds differs between the two-eruption and single-eruption cases. We conclude that there is no justification for using only one eruption, and both climatic eruptions should be taken into account in future emission data sets.


2021 · pp. M56-2021-22
Author(s): Mirko Scheinert, Olga Engels, Ernst J. O. Schrama, Wouter van der Wal, Martin Horwath

Abstract. Geodynamic processes in Antarctica such as glacial isostatic adjustment (GIA) and post-seismic deformation are measured by geodetic observations such as GNSS and satellite gravimetry. GNSS measurements, comprising both continuous and episodic campaigns, have been carried out since the mid-1990s. The estimated velocities typically reach an accuracy of 1 mm/a for horizontal and 2 mm/a for vertical velocities. However, the elastic deformation due to present-day ice-load change needs to be considered accordingly. Space gravimetry derives mass changes from small variations in the inter-satellite distance of a pair of satellites, starting with the GRACE satellite mission in 2002 and continuing with the GRACE-FO mission launched in 2018. The spatial resolution of the measurements is low (about 300 km) but the measurement error is homogeneous across Antarctica. The estimated trends contain signals from ice-mass change as well as local and global GIA. To combine the strengths of the individual data sets, statistical combinations of GNSS, GRACE and satellite altimetry data have been developed. These combinations rely on realistic error estimates and assumptions about snow density. Nevertheless, they capture signal that is missing from geodynamic forward models, such as the large uplift in the Amundsen Sea sector due to the low-viscous response to century-scale ice-mass changes.


Development · 1994 · Vol 120 (4) · pp. 853-859
Author(s): M. Leptin, S. Roth

The mesoderm in Drosophila invaginates by a series of characteristic cell shape changes. Mosaics of wild-type cells in an environment of mutant cells incapable of making mesodermal invaginations show that this morphogenetic behaviour does not require interactions between large numbers of cells but that small patches of cells can invaginate independent of their neighbours' behaviour. While the initiation of cell shape change is locally autonomous, the shapes the cells assume are partly determined by the individual cell's environment. Cytoplasmic transplantation experiments show that areas of cells expressing mesodermal genes ectopically at any position in the egg form an invagination. We propose that ventral furrow formation is the consequence of all prospective mesodermal cells independently following their developmental program. Gene expression at the border of the mesoderm is induced by the apposition of mesodermal and non-mesodermal cells.


MycoKeys · 2018 · Vol 39 · pp. 29-40
Author(s): Sten Anslan, R. Henrik Nilsson, Christian Wurzbacher, Petr Baldrian, Leho Tedersoo, ...

Along with recent developments in high-throughput sequencing (HTS) technologies and the resulting fast accumulation of HTS data, there has been a growing need and interest in developing tools for HTS data processing and communication. In particular, a number of bioinformatics tools have been designed for analysing metabarcoding data, each with specific features, assumptions and outputs. To evaluate the potential effect of applying different bioinformatics workflows on the results, we compared the performance of different analysis platforms on two contrasting high-throughput sequencing data sets. Our analysis revealed that the computation time, the quality of error filtering and hence the output of a specific bioinformatics process depend largely on the platform used. Our results show that none of the bioinformatics workflows appears to perfectly filter out the accumulated errors when generating Operational Taxonomic Units (OTUs), although PipeCraft, LotuS and PIPITS perform better than QIIME2 and Galaxy for the tested fungal amplicon dataset. We conclude that the output of each platform requires manual validation of the OTUs by examining the taxonomy assignment values.


2011 · Vol 44 (1) · pp. 32-42
Author(s): Thomas Vad, Wiebke F. C. Sager

Two simple iterative desmearing procedures – the Lake algorithm and the Van Cittert method – have been investigated by introducing different convergence criteria using both synthetic and experimental small-angle neutron scattering data. Implementing appropriate convergence criteria resulted in stable and reliable solutions in correcting resolution errors originating from instrumental smearing, i.e. finite collimation and polychromaticity of the incident beam. Deviations at small momentum transfer for concentrated ensembles of spheres encountered in earlier studies are not observed. Amplification of statistical errors can be reduced by applying a noise filter after desmearing. In most cases investigated, the modified Lake algorithm yields better results with a significantly smaller number of iterations and is, therefore, suitable for automated desmearing of large numbers of data sets.
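The textbook form of the Van Cittert iteration, f_{k+1} = f_k + (g − S f_k), where g is the measured curve and S the instrumental smearing operator, can be sketched in a few lines. This is a generic illustration under assumed conditions (a short normalised convolution kernel as the smearing operator, and a mean-squared-update stopping rule as the convergence criterion), not the modified algorithms of the paper:

```python
def smear(f, kernel):
    """Apply the smearing operator S: convolve f with a short,
    normalised resolution kernel (zero-padded at the edges)."""
    half = len(kernel) // 2
    out = [0.0] * len(f)
    for i in range(len(f)):
        for j, k in enumerate(kernel):
            idx = i + j - half
            if 0 <= idx < len(f):
                out[i] += k * f[idx]
    return out

def van_cittert(measured, kernel, max_iter=200, tol=1e-8):
    """Iterative Van Cittert desmearing: f_{k+1} = f_k + (g - S f_k).
    Stops when the mean squared update (= residual) drops below tol,
    a simple example of a convergence criterion."""
    f = list(measured)  # start from the measured curve itself
    for _ in range(max_iter):
        residual = [m - s for m, s in zip(measured, smear(f, kernel))]
        f = [fi + r for fi, r in zip(f, residual)]
        if sum(r * r for r in residual) / len(residual) < tol:
            break
    return f
```

Without such a stopping rule the iteration amplifies statistical noise, which is exactly why the choice of convergence criterion (and a post-desmearing noise filter) matters in practice.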


1994 · Vol 346 (1317) · pp. 333-343

High mutation rates are generally considered to be detrimental to the fitness of multicellular organisms because mutations untune finely tuned biological machinery. However, high mutation rates may be favoured by a need to evade an immune system that has been strongly stimulated to recognize those variants that reproduced earlier during the infection. HIV infections conform to this situation because they are characterized by large numbers of viruses that are continually breaking latency and large numbers that are actively replicating throughout a long period of infection. To be transmitted, HIVs are thus generally exposed to an immune system that has been activated to destroy them in response to prior viral replication in the individual. Increases in sexual contact should contribute to this predicament by favouring evolution toward relatively high rates of replication early during infection. Because rapid replication and a high mutation rate probably contribute to rapid progression of infections to AIDS, the interplay of sexual activity, replication rate, and mutation rate helps explain why HIV-1 has only recently caused a lethal pandemic, even though molecular data suggest that it may have been present in humans for more than a century. This interplay also offers an explanation for geographic differences in progression to cancer found among infections due to the other major group of human retroviruses, the human T-cell lymphotropic viruses (HTLV). Finally, it suggests ways in which we can use natural selection as a tool to control the AIDS pandemic and prevent similar pandemics from arising in the future.


2007 · Vol 46 (03) · pp. 324-331
Author(s): P. Jäger, S. Vogel, A. Knepper, T. Kraus, T. Aach, ...

Summary
Objectives: Pleural thickenings, as a biomarker of exposure to asbestos, may evolve into malignant pleural mesothelioma. For its early stage, pleurectomy with perioperative treatment can reduce morbidity and mortality. The diagnosis is based on a visual investigation of CT images, which is a time-consuming and subjective procedure. Our aim is to develop an automatic image processing approach to detect and quantitatively assess pleural thickenings.
Methods: We first segment the lung areas and identify the pleural contours. A convexity model is then used together with a Hounsfield unit threshold to detect pleural thickenings. The assessment of the detected pleural thickenings is based on a spline-based model of the healthy pleura.
Results: Tests were carried out on 14 data sets from three patients. In all cases, pleural contours were reliably identified and pleural thickenings detected. PC-based computation times were 85 min for a data set of 716 slices, 35 min for 401 slices, and 4 min for 75 slices, resulting in an average computation time of about 5.2 s per slice. Visualizations of pleurae and detected thickenings were provided.
Conclusion: Results obtained so far indicate that our approach is able to assist physicians in the tedious task of finding and quantifying pleural thickenings in CT data. In the next step, our system will undergo an evaluation in a clinical test setting using routine CT data to quantify its performance.
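The Hounsfield-threshold step of such a pipeline can be illustrated with a toy sketch: given a CT slice (HU values) and a binary lung mask, flag boundary pixels whose HU value falls in a soft-tissue window. The window limits (here −50 to 150 HU) and the 4-neighbourhood boundary test are illustrative assumptions, not the paper's actual parameters or convexity model:

```python
HU_MIN, HU_MAX = -50, 150  # assumed soft-tissue window, for illustration only

def candidate_thickening_pixels(slice_hu, lung_mask):
    """Return (row, col) pixels that lie just outside the lung mask
    (4-neighbourhood) and whose Hounsfield value falls in the
    soft-tissue window, i.e. candidate pleural-thickening pixels."""
    rows, cols = len(slice_hu), len(slice_hu[0])

    def on_boundary(r, c):
        # a non-lung pixel adjacent to at least one lung pixel
        if lung_mask[r][c]:
            return False
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols and lung_mask[rr][cc]:
                return True
        return False

    return [(r, c) for r in range(rows) for c in range(cols)
            if on_boundary(r, c) and HU_MIN <= slice_hu[r][c] <= HU_MAX]
```

In the full approach these candidates would then be checked against the convexity model and the spline-based model of the healthy pleura before being reported as thickenings.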


Author(s): M. McDermott, S. K. Prasad, S. Shekhar, X. Zhou

Discovery of interesting paths and regions in spatio-temporal data sets is important to many fields, such as the earth and atmospheric sciences, GIS, public safety and public health, both as a goal in itself and as a preliminary step in a larger series of computations. This discovery is usually an exhaustive procedure that quickly becomes extremely time-consuming using traditional paradigms and hardware, and given the rapidly growing sizes of today's data sets it is quickly outpacing the rate at which computational capacity is growing. In our previous work (Prasad et al., 2013a) we achieved a 50-fold speedup over a sequential implementation using a single GPU. We were able to achieve near-linear speedup over this result on interesting-path discovery by using Apache Hadoop to distribute the workload across multiple GPU nodes. Leveraging the parallel architecture of GPUs, we were able to drastically reduce the computation time of a three-dimensional spatio-temporal interest-region search on a single tile of normalized difference vegetation index data for Saudi Arabia. We were further able to see an almost linear speedup in compute performance by distributing this workload across several GPUs with a simple MapReduce model. This increases the speed of processing tenfold over the comparable sequential implementation while simultaneously increasing the amount of data being processed 384-fold. This allowed us to process the entirety of the selected data set instead of a constrained window.
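The MapReduce pattern described above can be sketched in miniature: each worker scores one data tile independently (the map step), and the per-tile results are merged to a global best (the reduce step). A thread pool stands in for the GPU/Hadoop workers, and the "interest measure" is simply the maximum value in a tile, a placeholder assumption for the NDVI interest statistic of the original work:

```python
from functools import reduce
from multiprocessing.dummy import Pool  # thread pool stands in for GPU/Hadoop workers

def score_tile(tile):
    """Map step: compute the local interest score of one tile.
    A tile is a (tile_id, values) pair; the score here is just the
    maximum value, standing in for a real interest statistic."""
    tile_id, values = tile
    return (tile_id, max(values))

def merge(a, b):
    """Reduce step: keep the tile with the higher score."""
    return a if a[1] >= b[1] else b

def find_interest_region(tiles, workers=4):
    """Score all tiles in parallel, then reduce to the global best."""
    with Pool(workers) as pool:
        local_best = pool.map(score_tile, tiles)
    return reduce(merge, local_best)
```

Because the map step touches each tile independently, the work distributes across nodes with essentially no coordination, which is what makes the near-linear multi-GPU scaling reported above plausible.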


2020
Author(s): Anna M. Sozanska, Charles Fletcher, Dóra Bihary, Shamith A. Samarajiwa

Abstract. More than three decades ago, the microarray revolution brought high-throughput data generation capability to biology and medicine. Subsequently, the emergence of massively parallel sequencing technologies led to many big-data initiatives such as the Human Genome Project and the Encyclopedia of DNA Elements (ENCODE) project. These, in combination with cheaper, faster massively parallel DNA sequencing capabilities, have democratised multi-omic (genomic, transcriptomic, translatomic and epigenomic) data generation, leading to a data deluge in biomedicine. While some of these data sets are trapped in inaccessible silos, the vast majority are stored in public data resources and controlled-access data repositories, enabling their wider use (or misuse). Currently, most peer-reviewed publications require the deposition of the data set associated with a study in one of these public data repositories. However, clunky and difficult-to-use interfaces and subpar or incomplete annotation prevent the discovery, searching and filtering of these multi-omic data and hinder their re-purposing in other use cases. In addition, the proliferation of a multitude of different data repositories, with partially redundant storage of similar data, is yet another obstacle to their continued usefulness. Similarly, interfaces where annotation is spread across multiple web pages, the use of accession identifiers with ambiguous and multiple interpretations, and a lack of good curation make these data sets difficult to use. We have produced SpiderSeqR, an R package whose main features include integration between the NCBI GEO and SRA databases, enabling a unified search of SRA and GEO data sets and associated annotations, conversion between database accessions, as well as convenient filtering of results and saving past queries for future use. All of the above features aim to promote data reuse, to facilitate making new discoveries and to maximise the potential of existing data sets.
Availability: https://github.com/ss-lab-cancerunit/SpiderSeqR

