Neural data science: accelerating the experiment-analysis-theory cycle in large-scale neuroscience

2017
Author(s):
L. Paninski
J.P. Cunningham

Abstract: Modern large-scale multineuronal recording methodologies, including multielectrode arrays, calcium imaging, and optogenetic techniques, produce single-neuron-resolution data of a magnitude and precision that were the realm of science fiction twenty years ago. The major bottlenecks in systems and circuit neuroscience no longer lie in simply collecting data from large neural populations, but rather in understanding these data: developing novel scientific questions, with corresponding analysis techniques and experimental designs to fully harness these new capabilities and meaningfully interrogate these questions. Advances in methods for signal processing, network analysis, dimensionality reduction, and optimal control, developed in lockstep with advances in experimental neurotechnology, promise major breakthroughs in multiple fundamental neuroscience problems. These trends are clear in a broad array of subfields of modern neuroscience; this review focuses on recent advances in methods for analyzing neural time-series data with single-neuron precision.

Figure 1. The central role of data science in modern large-scale neuroscience. Topics reviewed herein are indicated in black.
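The review's emphasis on dimensionality reduction can be made concrete with a generic sketch (not taken from the paper; all sizes and names are illustrative assumptions): binned spike counts from a simulated neural population are projected onto their leading principal components to recover a low-dimensional latent trajectory.

```python
import numpy as np

# Hypothetical example: reduce a population of 50 neurons' binned spike
# counts to a 3-dimensional latent trajectory via PCA (plain SVD).
rng = np.random.default_rng(0)
T, n_neurons, n_latent = 200, 50, 3

# Simulate low-dimensional latent dynamics projected into neuron space.
latents = np.cumsum(rng.normal(size=(T, n_latent)), axis=0)
loading = rng.normal(size=(n_latent, n_neurons))
counts = latents @ loading + 0.1 * rng.normal(size=(T, n_neurons))

# PCA: center the data, then project onto the top right-singular vectors.
centered = counts - counts.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
trajectory = centered @ Vt[:n_latent].T          # (T, 3) latent trajectory

explained = (S[:n_latent] ** 2).sum() / (S ** 2).sum()
print(trajectory.shape, round(float(explained), 3))
```

Because the simulated activity is genuinely low-dimensional, the top three components capture nearly all the variance; real population recordings are messier, which is what motivates the more sophisticated methods the review surveys.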

1980
Vol 45 (2)
pp. 246-267
Author(s):
Robert L. Hamblin
Brian L. Pitcher

Several lines of archaeological evidence are presented in this paper to suggest the existence of class warfare among the Classic Maya, and of issues that historically have been associated with class conflict. This evidence indicates that class warfare may have halted the rule of the monument-producing, or Classic, elites and precipitated the depopulation of the lowland area. The theory is evaluated quantitatively by testing for time-related mathematical patterns that have been found to characterize large-scale conflicts in historical societies. The evaluation uses time-series data on the duration of rule by Classic elites, as inferred from the production of monuments with Long Count dates at a sample of 82 ceremonial centers. The analyses confirm that the Maya data exhibit the temporal and geographical patterns predicted by the class-conflict explanation of the Classic Maya collapse. Alternative predictions from other theories are considered but generally not found to be supported by these data.


Author(s):
Ronald Rateiwa
Meshach J. Aziakpono

Background: In order for the post-2015 world development agenda, termed the sustainable development goals (SDGs), to succeed, there is a pronounced need to ensure that available resources are used more effectively and that additional financing is accessed from the private sector. Given that traditional bank lending has slowed down, the development of non-bank financing has become imperative. To this end, this article empirically tests the role of non-bank financial institutions (NBFIs) in stimulating economic growth.
Aim: The aim of this article is to empirically test for a long-run equilibrium relationship between economic growth and the development of NBFIs, and the causality thereof.
Setting: The empirical assessment uses time-series data from Africa's three largest economies, namely Egypt, Nigeria and South Africa, over the period 1971–2013.
Methods: This article uses the Johansen cointegration and vector error correction model within a country-specific setting.
Results: The results showed that the long-run relationship between NBFI development and economic growth is relatively stronger in Egypt and South Africa than in Nigeria, where the evidence shows that such a relationship is weak. The relationship between NBFI development and economic growth in Egypt is positive, significant and predominantly bidirectional, suggesting a virtuous relationship between NBFIs and economic growth. In South Africa, the relationship is positive and significant and predominantly runs from NBFI development to economic growth, implying a supply-leading phenomenon. In Nigeria, the results are weak and mixed.
Conclusion: The study concludes that in countries with more developed financial systems, the role of NBFIs and their importance to the economic growth process are more pronounced. There is thus a need for policies targeted at developing the NBFI sector, given its potential to contribute to economic growth.
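The long-run equilibrium idea behind the article's tests can be sketched numerically. The authors use the Johansen procedure and a VECM; the simpler Engle–Granger-style two-step below, on simulated data (not the authors' 1971–2013 series), only illustrates what cointegration means: two series sharing one stochastic trend leave a stationary OLS residual.

```python
import numpy as np

# Illustrative sketch: "financial development" f and "GDP" g share a
# common random-walk trend, so the OLS equilibrium error does not wander
# the way the levels do. Names and parameters are assumptions.
rng = np.random.default_rng(1)
T = 500
trend = np.cumsum(rng.normal(size=T))        # common stochastic trend
f = trend + rng.normal(scale=0.5, size=T)
g = 2.0 * trend + rng.normal(scale=0.5, size=T)

# Step 1: estimate the long-run relationship g = a + b*f by OLS.
X = np.column_stack([np.ones(T), f])
a, b = np.linalg.lstsq(X, g, rcond=None)[0]

# Step 2: the equilibrium error stays bounded while the levels trend.
resid = g - (a + b * f)
print(round(float(b), 2), bool(resid.std() < 0.2 * g.std()))
```

A formal test would check the residual for a unit root (or use Johansen's trace statistic directly); the point here is only that the residual's spread is a small fraction of the level series' spread.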


Stats
2021
Vol 4 (1)
pp. 71-85
Author(s):
Hossein Hassani
Mohammad Reza Yeganegi
Xu Huang

Fusing nature with computational science has proved to be of paramount importance, and researchers have shown growing enthusiasm for inventing and developing nature-inspired algorithms to solve complex problems across subjects. Inevitably, these advancements have rapidly promoted the development of data science, where nature-inspired algorithms are changing the traditional way of data processing. This paper proposes a hybrid approach, SSA-GA, which incorporates the optimization merits of the genetic algorithm (GA) to advance Singular Spectrum Analysis (SSA). The approach boosts the performance of SSA forecasting via better and more efficient grouping. Evaluated on 100 real time series across various subjects, the proposed SSA-GA approach proves computationally efficient and robust, with improved forecasting performance.
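A minimal sketch of the SSA step helps fix ideas. The GA-driven grouping that the paper contributes is not reproduced here; components are simply kept in leading order, and the window length and component count are assumptions.

```python
import numpy as np

def ssa_reconstruct(x, window, n_components):
    """Embed x in a Hankel trajectory matrix, SVD it, and reconstruct
    the series from the first n_components via diagonal averaging."""
    N = len(x)
    K = N - window + 1
    X = np.column_stack([x[i:i + window] for i in range(K)])  # (window, K)
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :n_components] * S[:n_components]) @ Vt[:n_components]
    # Anti-diagonal averaging maps the rank-reduced matrix back to a series.
    out = np.zeros(N)
    cnt = np.zeros(N)
    for i in range(window):
        for j in range(K):
            out[i + j] += Xr[i, j]
            cnt[i + j] += 1
    return out / cnt

rng = np.random.default_rng(2)
t = np.arange(400)
clean = np.sin(2 * np.pi * t / 50)
noisy = clean + 0.3 * rng.normal(size=400)
smooth = ssa_reconstruct(noisy, window=100, n_components=2)
print(round(float(np.abs(smooth - clean).mean()), 3))
```

A pure sinusoid occupies exactly two SSA components, so keeping the leading pair denoises it well; for real series the choice of which components to group is nontrivial, which is the search problem the paper hands to the GA.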


2021
Author(s):
Sadnan Al Manir
Justin Niestroy
Maxwell Adam Levinson
Timothy Clark

Introduction: Transparency of computation is a requirement for assessing the validity of computed results and the research claims based upon them, and it is essential for access to, assessment of, and reuse of computational components. These components may be subject to methodological or other challenges over time. While reference to archived software and/or data is increasingly common in publications, a single machine-interpretable, integrative representation of how results were derived, one that supports defeasible reasoning, has been absent.
Methods: We developed the Evidence Graph Ontology, EVI, in OWL 2, with a set of inference rules, to provide deep representations of supporting and challenging evidence for computations, services, software, data, and results, across arbitrarily deep networks of computations, in connected or fully distinct processes. EVI integrates FAIR practices on data and software with important concepts from provenance models and argumentation theory. It extends PROV for additional expressiveness, with support for defeasible reasoning. EVI treats any computational result or component of evidence as a defeasible assertion, supported by a DAG of the computations, software, data, and agents that produced it.
Results: We have successfully deployed EVI for very-large-scale predictive analytics on clinical time-series data. Every result may reference its own evidence graph as metadata, which can be extended when subsequent computations are executed.
Discussion: Evidence graphs support transparency and defeasible reasoning about results. They are first-class computational objects, and reference the datasets and software from which they are derived. They support fully transparent computation, with challenge and support propagation. The EVI approach may be extended to include instruments, animal models, and critical experimental reagents.
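The defeasible, DAG-structured notion of evidence can be sketched in a few lines. This is a toy adjacency-list stand-in, not the EVI OWL 2 ontology or PROV; the node names are invented. It shows only the core propagation idea: a challenge to any upstream component challenges every result derived from it.

```python
# Each node lists the components it was derived from (its supports).
evidence = {
    "result": ["analysis_run"],
    "analysis_run": ["software_v1", "dataset_A"],
    "software_v1": [],
    "dataset_A": [],
}

def upstream(node, graph):
    """All components a node's validity depends on (transitive closure)."""
    seen = set()
    stack = [node]
    while stack:
        for parent in graph[stack.pop()]:
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

def challenged(node, graph, challenged_nodes):
    """Defeasibility: a node is challenged if it or any support is."""
    return bool(({node} | upstream(node, graph)) & set(challenged_nodes))

print(challenged("result", evidence, {"dataset_A"}))   # True
print(challenged("result", evidence, {"other"}))       # False
```

EVI's actual representation carries typed relations and inference rules in OWL 2, so support and challenge propagate by reasoning rather than by an explicit traversal like this one.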


2021
Vol 2021
pp. 1-10
Author(s):
Jing Zhao
Shubo Liu
Xingxing Xiong
Zhaohui Cai

Privacy protection is one of the major obstacles to data sharing. Time-series data are characterized by autocorrelation, continuity, and large scale. Current research on time-series data publication largely ignores this correlation, leaving privacy under-protected. In this paper, we study the problem of correlated time-series data publication and propose a sliding-window-based autocorrelated time-series data publication algorithm, called SW-ATS. Instead of using the global sensitivity of traditional differential privacy mechanisms, we propose periodic sensitivity to provide a stronger degree of privacy guarantee. SW-ATS introduces a sliding window mechanism, with the correlation between the noise-adding sequence and the original time-series data guaranteed by sequence indistinguishability, to protect the privacy of the latest data. We prove that SW-ATS satisfies ε-differential privacy. Compared with the state-of-the-art algorithm, SW-ATS reduces the mean absolute error (MAE) by about 25%, improves data utility, and provides stronger privacy protection.
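A generic sketch of the noise-adding step is shown below: the plain ε-DP Laplace mechanism applied to the latest window of a series. The paper's actual contributions, periodic sensitivity and the sequence-indistinguishability coupling of noise to the data's autocorrelation, are not reproduced; the sensitivity bound here is simply an assumed constant.

```python
import numpy as np

def publish_window(series, window, epsilon, sensitivity):
    """Release the latest `window` values with Laplace(sensitivity/epsilon)
    noise, the standard eps-DP mechanism for a query of that sensitivity."""
    rng = np.random.default_rng()
    recent = np.asarray(series[-window:], dtype=float)
    scale = sensitivity / epsilon
    return recent + rng.laplace(scale=scale, size=recent.shape)

series = [10.0, 12.0, 11.0, 13.0, 12.5, 14.0]
released = publish_window(series, window=4, epsilon=1.0, sensitivity=1.0)
print(released.shape)
```

SW-ATS improves on this baseline precisely because calibrating to a global, per-point sensitivity either adds too much noise or leaks correlated information; tailoring the noise sequence to the series' autocorrelation is what recovers utility.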


2021
Vol 22 (1)
pp. 55-73
Author(s):
Ali Mohammed Khalel Al-Shawaf
Tahira Yasmin

With the pace of development and competitiveness, innovation plays an important role in capturing market share. Various countries have effective strategies to enhance research and development (R&D) and trade value-added products in international markets. The aim of this research is therefore to examine the role of R&D, industrial design, and charges for intellectual property in the innovative exports of the South Korean economy. Using time-series data for the period 1998 to 2017, Ordinary Least Squares (OLS) and Generalized Method of Moments (GMM) models are applied to determine the dynamic interrelationships among the study variables. In summary, the overall results show a cointegration rank in both the trace and eigenvalue tests at the 1% significance level. Moreover, the OLS and GMM findings show significant positive coefficients for industrial design (ID) and R&D, indicating a positive impact on high-technology exports (HT), whereas intellectual-property charges (IP) display a negative and significant relationship with high-technology exports. Lastly, the diagnostic tests show that the model is stable over the study period and the results are reliable. The study also suggests policy implications that can enhance South Korea's innovative export products while strengthening R&D.
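The OLS step can be sketched on simulated data. The series below are hypothetical, not the paper's 1998–2017 Korean data; the signs of the true coefficients merely mimic the reported findings (R&D and ID positive, IP negative).

```python
import numpy as np

# Regress high-technology exports (HT) on R&D spending (RD), industrial
# design (ID), and intellectual-property charges (IP); all simulated.
rng = np.random.default_rng(3)
n = 200
RD = rng.normal(size=n)
ID = rng.normal(size=n)
IP = rng.normal(size=n)
HT = 1.0 + 0.8 * RD + 0.5 * ID - 0.3 * IP + 0.1 * rng.normal(size=n)

# OLS via least squares on the design matrix [1, RD, ID, IP].
X = np.column_stack([np.ones(n), RD, ID, IP])
beta = np.linalg.lstsq(X, HT, rcond=None)[0]
print(np.round(beta, 2))
```

With only 20 annual observations, as in the actual study, standard errors would be far larger than in this n = 200 toy, which is one reason the authors complement OLS with GMM and diagnostic tests.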


Sensor Review
2019
Vol 39 (2)
pp. 208-217
Author(s):
Jinghan Du
Haiyan Chen
Weining Zhang

Purpose: In large-scale monitoring systems, sensors deployed in different locations collect massive amounts of useful time-series data, which can support real-time data analytics and related applications. However, affected by the hardware itself, sensor nodes often fail to work, so the collected data are commonly incomplete. The purpose of this study is to predict and recover the missing data in sensor networks.
Design/methodology/approach: Considering the spatio-temporal correlation of large-scale sensor data, this paper proposes a data recovery model for sensor networks based on a deep learning method, the deep belief network (DBN). Specifically, when one sensor fails, the historical time-series data of the failed sensor and real-time data from surrounding sensor nodes, identified as highly similar to it by the proposed similarity filter, are collected first. Then, the high-level feature representation of these spatio-temporally correlated data is extracted by the DBN. Moreover, a reconstruction error-based algorithm is proposed to determine the structure of the DBN model. Finally, the missing data are predicted from these features by a single-layer neural network.
Findings: This paper collects a noisy data set from an airport monitoring system for experiments. Various comparative experiments show that the proposed algorithms are effective. The proposed data recovery model is compared with several other classical models, and the experimental results show that the deep learning-based model achieves not only better prediction accuracy but also better training time and model robustness.
Originality/value: A deep learning method is investigated for the data recovery task and proves effective compared with previous methods. This may provide practical experience for applying deep learning methods.
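The similarity-filter step (neighbor selection only; the DBN feature extraction is not sketched) might look like the following, with all data simulated and the correlation threshold replaced by a simple top-k selection for brevity.

```python
import numpy as np

# Simulate a failed sensor's history and three surrounding sensors:
# two track the same underlying signal, one is unrelated.
rng = np.random.default_rng(4)
T = 300
base = np.cumsum(rng.normal(size=T))
target = base + 0.2 * rng.normal(size=T)          # failed sensor's history
neighbors = np.stack([
    base + 0.2 * rng.normal(size=T),              # highly similar
    base + 0.2 * rng.normal(size=T),              # highly similar
    rng.normal(size=T),                           # unrelated sensor
])

# Rank neighbors by Pearson correlation over the shared history
# (everything up to the missing final reading).
corr = np.array([np.corrcoef(target[:-1], nb[:-1])[0, 1] for nb in neighbors])
top = np.argsort(corr)[-2:]                       # keep the 2 most similar

# Impute the "missing" last reading from the similar neighbors' live values.
estimate = neighbors[top, -1].mean()
print(sorted(top.tolist()), round(float(abs(estimate - target[-1])), 3))
```

In the paper, the selected neighbors' real-time data and the failed sensor's own history feed a DBN, which learns a far richer mapping than this direct averaging; the filter's job in both cases is to exclude uncorrelated sensors before modeling.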


Algorithms
2020
Vol 13 (4)
pp. 95
Author(s):
Johannes Stübinger
Katharina Adler

This paper develops the generalized causality algorithm and applies it to a multitude of data sets from the fields of economics and finance. Specifically, our parameter-free algorithm efficiently determines the optimal non-linear mapping and identifies varying lead–lag effects between two given time series. This procedure allows an elastic adjustment of the time axis to find similar but phase-shifted sequences; structural breaks in their relationship are also captured. A large-scale simulation study validates its outperformance across the vast majority of parameter constellations in terms of efficiency, robustness, and feasibility. Finally, the presented methodology is applied to real data from the areas of macroeconomics, finance, and metals. The highest similarity is shown by the pairs of gross domestic product and consumer price index (macroeconomics), the S&P 500 index and the Deutscher Aktienindex (finance), and gold and silver (metals). In addition, the algorithm makes full use of its flexibility and identifies various structural breaks and regime patterns over time, which are (partly) well documented in the literature.
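The algorithm itself adjusts the time axis elastically (DTW-like) and handles lags that vary over time; the much simpler cross-correlation sketch below recovers only a single constant lag on synthetic data, just to make the lead–lag notion concrete. All series and parameters are illustrative.

```python
import numpy as np

def best_lag(x, y, max_lag):
    """Return lag k where k > 0 means y trails x by k steps
    (y[t] tracks x[t-k]), chosen to maximize correlation."""
    def corr_at(k):
        if k >= 0:
            a, b = y[k:], x[:len(x) - k]
        else:
            a, b = y[:k], x[-k:]
        return np.corrcoef(a, b)[0, 1]
    return max(range(-max_lag, max_lag + 1), key=corr_at)

rng = np.random.default_rng(5)
driver = np.cumsum(rng.normal(size=500))
follower = np.roll(driver, 7) + 0.1 * rng.normal(size=500)  # trails by 7

# Trim the ends to discard np.roll's wrap-around region.
lag = best_lag(driver[50:-50], follower[50:-50], max_lag=20)
print(lag)
```

A fixed-lag scan like this fails as soon as the lead–lag relationship shifts or breaks, which is exactly the case the paper's elastic, break-aware mapping is designed to handle.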

