The Neo-Alexandrians: A Review Essay on Data Handbooks in Political Science

1974 ◽  
Vol 68 (1) ◽  
pp. 243-252 ◽  
Author(s):  
Ted Robert Gurr

Four major new compilations of macropolitical data are compared and evaluated. Each summarizes a large-scale research effort to code or to collect data suitable for theoretically relevant, cross-national comparisons. As a group the new handbooks incorporate many improvements and innovations on earlier handbooks, which concentrated mainly on cross-sectional, aggregate data or simplistically coded judgments about nation-states. About a third of their measures consist of “made” data, derived by coding journalistic and historical sources. All provide some measures for cross-time comparisons; one is devoted exclusively to time-series data. Many of their measures denote properties of internal and international conflict and of international transactions. All but one are painfully self-conscious about problems of reliability and comparability of data. One criticism is the reliance of several of the handbooks on “counts” of conflict events rather than assessment of more theoretically relevant properties of conflict. A second is the paucity of indicators of inequality and, more generally, of measures which give a “view from the bottom” of political systems.

2019 ◽  
Author(s):  
Sacha Epskamp

Researchers in the field of network psychometrics often focus on the estimation of Gaussian graphical models (GGM)---an undirected network model of partial correlations---between observed variables of cross-sectional data or single-subject time-series data. This assumes that all variables are measured without measurement error, which may be implausible. In addition, cross-sectional data cannot distinguish between within-subject and between-subject effects. This paper provides a general framework that extends GGM modeling with latent variables, including relationships over time. These relationships can be estimated from time-series data or panel data featuring at least three waves of measurement. The model takes the form of a graphical vector-autoregression model between latent variables and is termed the ts-lvgvar when estimated from time-series data and the panel-lvgvar when estimated from panel data. These methods have been implemented in the software package psychonetrics, which is exemplified in two empirical examples, one using time-series data and one using panel data, and evaluated in two large-scale simulation studies. The paper concludes with a discussion on ergodicity and generalizability. Although within-subject effects may in principle be separated from between-subject effects, the interpretation of these results rests on the intensity and the time interval of measurement and on the plausibility of the assumption of stationarity.
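The edges of a GGM are partial correlations: associations between two variables after controlling for the others. As a hedged illustration of that idea (not the psychonetrics estimator, which works with latent variables and lags), a minimal first-order partial correlation can be computed from pairwise Pearson correlations:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / sqrt(vx * vy)

def partial_corr(x, y, z):
    """First-order partial correlation of x and y controlling for z."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

# Toy data: x and y are both driven by a common cause z, so their
# partial correlation given z is far weaker than their raw correlation.
z = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
x = [v + e for v, e in zip(z, [0.1, -0.2, 0.0, 0.3, -0.1, 0.2, -0.3, 0.1])]
y = [v + e for v, e in zip(z, [-0.1, 0.2, 0.1, -0.2, 0.3, -0.1, 0.0, 0.2])]
```

In a full GGM, each edge partials out all remaining variables (via the inverse covariance matrix) rather than a single third variable.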


2020 ◽  
Vol 12 (3) ◽  
pp. 895 ◽  
Author(s):  
Cephas Paa Kwasi Coffie ◽  
Hongjiang Zhao ◽  
Isaac Adjei Mensah

The financial landscape of sub-Saharan Africa is undergoing major changes due to the advent of FinTech, which has seen mobile payments boom in the region. This paper examines the salient role of mobile payments in traditional banks’ drive toward financial accessibility in sub-Saharan Africa, using panel econometric approaches that account for dependence among cross-sectional residuals. Using data from the World Development Index (WDI) 2011–2017 on 11 countries in the region, empirical results from cross-sectional dependence (CD) tests, a panel unit root test, a panel cointegration test, and the fully modified ordinary least squares (FMOLS) approach indicate that (i) the panel time series data are cross-sectionally independent, (ii) the variables have the same order of integration and are cointegrated, and (iii) growth in mobile payment transactions had a significant positive relationship with formal account ownership, the number of ATMs, and the number of new bank branches in the long run. The paper therefore confirms that the institutional structure of traditional banks, which keeps them competitive irrespective of emerging disruptive technologies, has stimulated overall financial accessibility in the region, leading to sustainable growth in the financial sector. We conclude the paper with feasible policy suggestions.
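The CD test mentioned above aggregates the pairwise correlations of residuals across countries. A minimal sketch of the Pesaran CD statistic (only one ingredient of the paper's pipeline, and not their implementation) looks like this:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / sqrt(vx * vy)

def pesaran_cd(panel):
    """Pesaran CD statistic for cross-sectional dependence.
    `panel` is a list of N equal-length residual series, one per
    country, each observed over the same T periods. Under the null of
    cross-sectional independence, CD is approximately standard normal."""
    n, t = len(panel), len(panel[0])
    rho_sum = sum(pearson(panel[i], panel[j])
                  for i in range(n) for j in range(i + 1, n))
    return sqrt(2.0 * t / (n * (n - 1))) * rho_sum
```

A CD value close to zero is consistent with the paper's finding (i) that the panel series are cross-sectionally independent.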


Author(s):  
Andrew Q. Philips

In cross-sectional time-series data with a dichotomous dependent variable, failing to account for duration dependence when it exists can lead to faulty inferences. A common solution is to include duration dummies, polynomials, or splines to proxy for duration dependence. Because creating these is not easy for the common practitioner, I introduce a new command, mkduration, that is a straightforward way to generate a duration variable for binary cross-sectional time-series data in Stata. mkduration can handle various forms of missing data and allows the duration variable to easily be turned into common parametric and nonparametric approximations.
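mkduration itself is a Stata command, but the counter it automates is simple to state: for each unit, count the periods elapsed since the last event, restarting the clock after each occurrence. A hedged Python sketch for a single unit (the function name and the missing-data rule here are my own simplifications, not the command's options):

```python
def make_duration(y, missing=None):
    """Periods since the last event (y == 1) for one unit's binary
    dependent variable in time order. Entries equal to `missing`
    restart the counter, one simple choice among the several
    missing-data treatments a full implementation would offer."""
    duration, count = [], 0
    for obs in y:
        if obs == missing:
            duration.append(None)
            count = 0
            continue
        count += 1
        duration.append(count)
        if obs == 1:
            count = 0  # event occurred: the clock restarts next period
    return duration
```

The resulting variable can then feed the dummies, polynomials, or splines the abstract describes, e.g. by adding `t`, `t**2`, `t**3` as regressors.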


1980 ◽  
Vol 45 (2) ◽  
pp. 246-267 ◽  
Author(s):  
Robert L. Hamblin ◽  
Brian L. Pitcher

Several lines of archaeological evidence are presented in this paper to suggest the existence of class warfare among the Classic Maya and of issues that historically have been associated with class conflict. This evidence indicates that class warfare may have halted the rule of the monument-producing, or Classic, elites and precipitated the depopulation of the lowland area. The theory is evaluated quantitatively by testing for time-related mathematical patterns that have been found to characterize large-scale conflicts in historical societies. The information used in the evaluation involves the time series data on the duration of rule by Classic elites as inferred from the production of monuments with Long Count dates at a sample of 82 ceremonial centers. The analyses confirm that the Maya data do exhibit the temporal and geographical patterns predicted from the class conflict explanation of the Classic Maya collapse. Alternative predictions from the other theories are considered but generally not found to be supported by these data.


2021 ◽  
Author(s):  
Sadnan Al Manir ◽  
Justin Niestroy ◽  
Maxwell Adam Levinson ◽  
Timothy Clark

Introduction: Transparency of computation is a requirement for assessing the validity of computed results and research claims based upon them, and it is essential for access to, assessment, and reuse of computational components. These components may be subject to methodological or other challenges over time. While reference to archived software and/or data is increasingly common in publications, a single machine-interpretable, integrative representation of how results were derived, one that supports defeasible reasoning, has been absent. Methods: We developed the Evidence Graph Ontology, EVI, in OWL 2, with a set of inference rules, to provide deep representations of supporting and challenging evidence for computations, services, software, data, and results, across arbitrarily deep networks of computations, in connected or fully distinct processes. EVI integrates FAIR practices on data and software with important concepts from provenance models and argumentation theory. It extends PROV for additional expressiveness, with support for defeasible reasoning. EVI treats any computational result or component of evidence as a defeasible assertion, supported by a DAG of the computations, software, data, and agents that produced it. Results: We have successfully deployed EVI for very-large-scale predictive analytics on clinical time-series data. Every result may reference its own evidence graph as metadata, which can be extended when subsequent computations are executed. Discussion: Evidence graphs support transparency and defeasible reasoning on results. They are first-class computational objects that reference the datasets and software from which they are derived. They support fully transparent computation, with challenge and support propagation. The EVI approach may be extended to include instruments, animal models, and critical experimental reagents.
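The core intuition of challenge and support propagation can be shown with a toy graph, with the caveat that this is an illustrative sketch only and not the EVI ontology or its OWL 2 inference rules: an assertion currently stands if everything supporting it stands and nothing standing challenges it.

```python
def stands(node, supports, challenges, _seen=None):
    """A defeasible assertion stands if every supporting node stands
    and no standing node challenges it. `supports` and `challenges`
    map each node to the nodes supporting / challenging it."""
    _seen = (_seen or set()) | {node}
    if any(s in _seen for s in supports.get(node, [])):
        return False  # guard against cycles; EVI graphs are DAGs
    if not all(stands(s, supports, challenges, _seen)
               for s in supports.get(node, [])):
        return False
    return not any(stands(c, supports, challenges, _seen)
                   for c in challenges.get(node, []))

# Hypothetical graph: a result rests on a computation, which rests on
# software and data; the software version is later challenged.
supports = {
    "result": ["computation"],
    "computation": ["software", "data"],
}
challenges = {
    "software": ["retraction"],  # e.g. the software release was retracted
}
```

Because the retraction stands unchallenged, the challenge propagates upward and the result no longer stands; removing the challenge restores it.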


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Jing Zhao ◽  
Shubo Liu ◽  
Xingxing Xiong ◽  
Zhaohui Cai

Privacy protection is one of the major obstacles to data sharing. Time-series data are characterized by autocorrelation, continuity, and large scale. Current research on time-series data publication largely ignores the correlation within time-series data, leaving privacy insufficiently protected. In this paper, we study the problem of correlated time-series data publication and propose a sliding-window-based autocorrelated time-series data publication algorithm, called SW-ATS. Instead of the global sensitivity used in traditional differential privacy mechanisms, we propose periodic sensitivity to provide a stronger degree of privacy guarantee. SW-ATS introduces a sliding window mechanism, with the correlation between the noise-added sequence and the original time-series data guaranteed by sequence indistinguishability, to protect the privacy of the latest data. We prove that SW-ATS satisfies ε-differential privacy. Compared with the state-of-the-art algorithm, SW-ATS reduces the mean absolute error (MAE) by about 25%, improving the utility of the data while providing stronger privacy protection.
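For orientation, the standard ε-differential-privacy building block adds Laplace noise with scale sensitivity/ε to each released value. The sketch below applies that mechanism over a sliding window; it is a generic baseline under a supplied sensitivity bound, not SW-ATS itself, whose periodic sensitivity and sequence-indistinguishability guarantee are the paper's contribution.

```python
import math
import random

def laplace_noise(scale, rng):
    """Laplace(0, scale) sample via the inverse-CDF transform."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def publish_window(series, window, epsilon, sensitivity, rng):
    """Release the most recent `window` points with Laplace noise of
    scale sensitivity / epsilon. `sensitivity` stands in for the
    periodic sensitivity SW-ATS derives from the autocorrelation
    structure; here it is simply a supplied bound."""
    scale = sensitivity / epsilon
    return [x + laplace_noise(scale, rng) for x in series[-window:]]
```

Larger ε (weaker privacy) shrinks the noise scale and so the error of the published window, which is the utility-privacy trade-off the abstract's MAE comparison quantifies.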


Author(s):  
Arini Wahyu Utami ◽  
Jamhari Jamhari ◽  
Suhatmini Hardyastuti

Paddy and maize are two important food crops in Indonesia and are mainly produced on Java Island. This research aimed to assess the impact of El Niño and La Niña on farmers’ supply of paddy and maize in Java. Cross-sectional data from four provinces in Java were combined with time-series data for 1987–2006. Paddy supply was estimated using a log model, while maize supply used an autoregressive model; each was estimated with two regression specifications. The first included dummy variables for El Niño and La Niña to capture their influence on paddy and maize supply. The second used the Southern Oscillation Index to analyze how supply changed when El Niño or La Niña occurred. The results showed that El Niño and La Niña did not influence paddy supply, whereas La Niña influenced maize supply in Java: maize supply increased when La Niña occurred.


Author(s):  
Josep Escrig Escrig ◽  
Buddhika Hewakandamby ◽  
Georgios Dimitrakis ◽  
Barry Azzopardi

Intermittent gas-liquid two-phase flow was generated in a 6 m long, 67 mm diameter pipe mounted on a rotatable frame (from vertical to −20°). Air and a 5 mPa s silicone oil at atmospheric pressure were studied. Gas superficial velocities between 0.17 and 2.9 m/s and liquid superficial velocities between 0.023 and 0.47 m/s were employed. These runs were repeated at 7 angles, making a total of 420 runs. Cross-sectional void fraction time series were measured over 60 seconds for each run using a Wire Mesh Sensor and a twin-plane Electrical Capacitance Tomography system. The void fraction time-series data were analysed to extract the average void fraction, structure velocities, and structure frequencies. Results are presented to illustrate how the pipe angle and the phase superficial velocities affect the behaviour of intermittent flows. Existing correlations proposed to predict the average void fraction and the gas structure velocity and frequency in slug flow have been compared with the new experimental results for all intermittent flows, including slug, cap bubble, and churn. Good agreement was found for the gas structure velocity and the mean void fraction. On the other hand, no correlation was found to predict the gas structure frequency well, especially in vertical and inclined pipes.
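With twin-plane measurements, a structure velocity is commonly obtained by cross-correlating the two plane signals: the lag that best aligns them gives the transit time between planes. The sketch below shows that standard idea with made-up signals, spacing, and sample rate; it is not the authors' exact processing.

```python
def best_lag(upstream, downstream, max_lag):
    """Sample lag at which the two plane signals best align, found by
    maximising the cross-covariance of the overlapping samples."""
    def cross_cov(lag):
        pairs = list(zip(upstream, downstream[lag:]))
        n = len(pairs)
        mu = sum(a for a, _ in pairs) / n
        md = sum(b for _, b in pairs) / n
        return sum((a - mu) * (b - md) for a, b in pairs) / n
    return max(range(1, max_lag + 1), key=cross_cov)

def structure_velocity(upstream, downstream, spacing_m, sample_rate_hz, max_lag):
    """Structure velocity as plane spacing divided by the transit time
    implied by the best-matching lag between the twin-plane signals."""
    return spacing_m * sample_rate_hz / best_lag(upstream, downstream, max_lag)

# Hypothetical signals: a void-fraction pulse passes plane 1, then
# plane 2 three samples later (spacing and sample rate are invented).
plane1 = [0, 0, 5, 9, 5, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
plane2 = [0, 0, 0] + plane1[:-3]
```

With 0.03 m plane spacing and a 1 kHz sample rate, a 3-sample lag corresponds to a 10 m/s structure velocity.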


Sensor Review ◽  
2019 ◽  
Vol 39 (2) ◽  
pp. 208-217 ◽  
Author(s):  
Jinghan Du ◽  
Haiyan Chen ◽  
Weining Zhang

Purpose In large-scale monitoring systems, sensors in different locations are deployed to collect massive amounts of useful time-series data, which can support real-time data analytics and related applications. However, affected by the hardware itself, sensor nodes often fail to work, with the common result that the collected data are incomplete. The purpose of this study is to predict and recover the missing data in sensor networks. Design/methodology/approach Considering the spatio-temporal correlation of large-scale sensor data, this paper proposes a data recovery model for sensor networks based on a deep learning method, i.e. the deep belief network (DBN). Specifically, when one sensor fails, the historical time-series data of the failed sensor and the real-time data from surrounding sensor nodes that are highly similar to it, identified using the proposed similarity filter, are collected first. Then, a high-level feature representation of these spatio-temporally correlated data is extracted by the DBN. Moreover, to determine the structure of the DBN model, a reconstruction error-based algorithm is proposed. Finally, the missing data are predicted from these features by a single-layer neural network. Findings This paper collects a noise data set from an airport monitoring system for experiments. Various comparative experiments show that the proposed algorithms are effective. The proposed data recovery model is compared with several other classical models, and the experimental results prove that the deep learning-based model achieves not only better prediction accuracy but also better training time and model robustness. Originality/value A deep learning method is investigated for the data recovery task and proved to be effective compared with previous methods. This may provide practical experience in the application of deep learning methods.
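The first step of the pipeline, selecting surrounding sensors similar to the failed one, can be sketched simply. The function below uses mean squared difference as a stand-in for the paper's similarity measure (which is not specified in the abstract); names and the toy data are hypothetical, and the DBN stage is not reproduced here.

```python
def similarity_filter(target_history, neighbors, k):
    """Keep the k neighbouring sensors whose co-recorded histories are
    closest to the failed sensor's. Mean squared difference is used
    here as a simple stand-in for the paper's similarity measure."""
    def msd(series):
        return sum((a - b) ** 2
                   for a, b in zip(target_history, series)) / len(target_history)
    return sorted(neighbors, key=lambda s: msd(neighbors[s]))[:k]
```

The histories of the selected neighbours, together with the failed sensor's own history, would then form the spatio-temporal input to the feature-extraction stage.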


2018 ◽  
Vol 1 (1) ◽  
pp. 62-75
Author(s):  
Pradip Raj Poudel ◽  
Narayan Raj Joshi ◽  
Shanta Pokhrel

A study on the effects of climate change on rice (Oryza sativa) production in Tharu communities of Dang district, Nepal, was conducted in 2018 to investigate the perceptions and major adaptation strategies of Tharu farmers. The study areas were selected purposively. Cross-sectional data were collected through a household survey of 120 households chosen by simple random sampling with the lottery method. Primary data were collected using a semi-structured, pretested interview schedule, focus group discussions, and key informant interviews, whereas monthly and annual time-series data on temperature and precipitation over 21 years (1996–2016) were collected from the Department of Hydrology and Meteorology, Kathmandu, as secondary data. Descriptive statistics and trend analysis were used to analyze the data. The ratio of males to females was roughly equal, and the literacy rate in the study area was higher than the district average. Most farmers depended solely on agriculture for their livelihood, and land distribution varied widely. Farmers had better access to FM radio as a source of agricultural extension information. Tharu farmers of Dang perceived changes in all climate parameters, with temperature and rainfall the most noticeably changing components. Trend analysis of the temperature data for Dang over 21 years showed that the maximum, minimum, and average temperatures were increasing at rates of 0.031°C, 0.021°C, and 0.072°C per year respectively, which supports the farmers' perception, whereas rainfall was decreasing by 7.56 mm per year. The yearly maximum rainfall amount increased by 1.15 mm per year. The production of local indigenous rice varieties was decreasing, while that of hybrid and improved varieties was increasing. The district's rice production trend was increasing, which supports the farmers' perception.
The study revealed that climate change affected paddy production and that farmers used various adaptation strategies to cope in Dang district.
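Per-year trend rates like those reported above are typically the slope of an ordinary least-squares line fitted to the annual series. A minimal sketch, using made-up numbers rather than the Dang station records:

```python
def linear_trend(years, values):
    """Ordinary least-squares slope (units per year) and intercept."""
    n = len(years)
    mx, my = sum(years) / n, sum(values) / n
    sxx = sum((x - mx) ** 2 for x in years)
    sxy = sum((x - mx) * (v - my) for x, v in zip(years, values))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical series warming by 0.03 °C per year over 1996-2016.
years = list(range(1996, 2017))
temps = [30.0 + 0.03 * (y - 1996) for y in years]
slope, intercept = linear_trend(years, temps)
```

The fitted slope is the per-year rate of change; applied to a rainfall series, a negative slope would correspond to the declining rainfall trend the study reports.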

