Theory Driven Bias in Ideal Point Estimates—A Monte Carlo Study

2011 ◽  
Vol 19 (1) ◽  
pp. 87-102 ◽  
Author(s):  
Alexander V. Hirsch

This paper analyzes the use of ideal point estimates for testing pivot theories of lawmaking such as Krehbiel's (1998, Pivotal Politics: A Theory of U.S. Lawmaking. Chicago, IL: University of Chicago Press) pivotal politics model and Cox and McCubbins's (2005, Setting the Agenda: Responsible Party Government in the U.S. House of Representatives. New York: Cambridge University Press) party cartel model. Among the predictions of pivot theories is that all pivotal legislators will vote identically on all successful legislation. Clinton (2007, Lawmaking and roll calls. Journal of Politics 69:455–67) argues that the estimated ideal points of the pivotal legislators are therefore predicted to be statistically indistinguishable when estimated from the set of successful final-passage roll call votes, which implies that ideal point estimates cannot logically be used to test pivot theories. I show using Monte Carlo simulation that when pivot theories are augmented with probabilistic voting, Clinton's prediction holds only in small samples when voting is near perfect. I furthermore show that the predicted bias is unlikely to be consequential with U.S. congressional voting data. My analysis suggests that the methodology of estimating ideal points to compute theoretically relevant quantities for empirical tests is not inherently flawed in the case of pivot theories.
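The probabilistic-voting augmentation can be illustrated with a small simulation. The sketch below is not Hirsch's actual Monte Carlo design: the quadratic utility, the logit noise scale `beta`, and the pivot positions are all illustrative assumptions. It only demonstrates the key mechanism: with probabilistic voting, pivotal legislators do not vote identically on every bill, so their estimated ideal points need not collapse together.

```python
import math
import random

random.seed(1)

def p_yea(x, proposal, status_quo, beta=4.0):
    # Quadratic spatial utility with a logit error term (probabilistic voting).
    u = (x - status_quo) ** 2 - (x - proposal) ** 2
    return 1.0 / (1.0 + math.exp(-beta * u))

# Hypothetical pivotal legislators on a unidimensional space.
pivots = (-0.3, 0.3)

identical = 0
trials = 2000
for _ in range(trials):
    sq = random.uniform(-1, 1)      # status quo
    prop = random.uniform(-1, 1)    # proposal
    votes = [random.random() < p_yea(x, prop, sq) for x in pivots]
    if votes[0] == votes[1]:
        identical += 1

# Under probabilistic voting the pivots agree often but not always, which is
# why their ideal points remain distinguishable in finite samples.
print(f"pivots agree on {identical / trials:.0%} of simulated votes")
```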

2015 ◽  
Vol 5 (2) ◽  
pp. 397-408 ◽  
Author(s):  
Lindsay Nielson ◽  
Neil Visalvanich

Primary elections in the United States have been under-studied in the political science literature. Using new data to estimate the ideal points of primary election candidates and constituents, we examine the link between the ideological leanings of primary electorates and the ideological orientation of US congressional candidates. We use district-level data from the Cooperative Congressional Election Study and ideal point estimates for congressional primary election candidates to examine the role of primary electorate ideology in the selection of party nominees. We find that more extreme Republicans are more likely to win their party’s primary and that Republican and Democratic candidates are responsive to different electoral constituencies.


2009 ◽  
Vol 17 (3) ◽  
pp. 261-275 ◽  
Author(s):  
Royce Carroll ◽  
Jeffrey B. Lewis ◽  
James Lo ◽  
Keith T. Poole ◽  
Howard Rosenthal

DW-NOMINATE scores for the U.S. Congress are widely used measures of legislators' ideological locations over time. These scores have been used in a large number of studies in political science and closely related fields. In this paper, we extend the work of Lewis and Poole (2004) on the parametric bootstrap to DW-NOMINATE and obtain standard errors for the legislator ideal points. These standard errors amount to 1%–4% of the range of the DW-NOMINATE coordinates.
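The parametric bootstrap idea can be shown in miniature. As a hedged stand-in for the full NOMINATE model, the sketch below uses a simple Bernoulli "yea rate" (an illustrative simplification, not the Lewis–Poole implementation): fit the model, simulate replicate datasets from the fitted parameters, re-estimate on each replicate, and take the spread of the replicate estimates as the standard error.

```python
import random
import statistics

random.seed(0)

# Toy stand-in for an ideal-point estimator: the MLE of a legislator's
# probability of voting yea.
votes = [random.random() < 0.7 for _ in range(500)]
p_hat = sum(votes) / len(votes)

# Parametric bootstrap: simulate new roll-call data from the fitted model,
# re-estimate, and use the standard deviation of the replicate estimates.
reps = []
for _ in range(1000):
    sim = [random.random() < p_hat for _ in range(len(votes))]
    reps.append(sum(sim) / len(sim))

se_boot = statistics.stdev(reps)
se_analytic = (p_hat * (1 - p_hat) / len(votes)) ** 0.5
print(f"bootstrap SE = {se_boot:.4f}, analytic SE = {se_analytic:.4f}")
```

For this toy model the bootstrap standard error can be checked against the known analytic formula; for DW-NOMINATE, where no closed form exists, the bootstrap spread is the deliverable itself.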


2017 ◽  
Vol 17 (68) ◽  
pp. 43-59 ◽
Author(s):  
Alexander Hudson ◽  
Ivar Alberto Hartmann

Brazil's Supremo Tribunal Federal (STF) is an especially interesting case for scholars with an interest in judicial behavior. The justices of the STF rule in tens of thousands of cases per year, in a great variety of legal disputes. The ideological breakdown of the STF remains puzzling. Observers of the STF find that a single left-right dimension is entirely inadequate to describe the voting coalitions that form in the court. In this paper, we utilize a new dataset covering a representative sample of all cases decided by the STF between 1992 and 2013. The first important finding is that the voting patterns of the STF show that at least four dimensions are necessary to describe the justices' ideal points. We then estimate ideal points for 23 justices on each of four dimensions, and associate these dimensions with the dominant areas of law with which the STF deals. Finally, we seek to use these ideal point estimates to compare the votes of the justices in key cases with their broader voting pattern.
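Why a single left-right dimension can fail is easy to demonstrate. The sketch below is a hypothetical two-dimensional court, not the STF data: cases load on one of two legal dimensions, and a justice votes "yes" when her position on that dimension exceeds the case cutpoint. Two justices with identical first-dimension positions can still split regularly, so any one-dimensional score hides the second cleavage.

```python
import random

random.seed(2)

# Hypothetical justices: identical position on dimension 0, opposite
# positions on dimension 1 (all values illustrative assumptions).
j1 = (0.5, 0.9)
j2 = (0.5, -0.9)

# Each case is (dimension it loads on, cutpoint).
cases = [(random.choice([0, 1]), random.uniform(-1, 1)) for _ in range(400)]

def vote(justice, case):
    dim, cut = case
    return justice[dim] > cut

def agreement(dim):
    subset = [c for c in cases if c[0] == dim]
    same = sum(vote(j1, c) == vote(j2, c) for c in subset)
    return same / len(subset)

# Perfect agreement on dimension-0 cases, frequent splits on dimension-1
# cases: a single left-right ordering cannot describe both.
print(f"dim-0 agreement: {agreement(0):.0%}, dim-1 agreement: {agreement(1):.0%}")
```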


2017 ◽  
Vol 21 (3) ◽  
Author(s):  
Lucie Kraicová ◽  
Jozef Baruník

Abstract This work studies a wavelet-based Whittle estimator of the fractionally integrated exponential generalized autoregressive conditional heteroscedasticity (FIEGARCH) model, often used for modeling long memory in the volatility of financial assets. The newly proposed estimator approximates the spectral density using the wavelet transform, which makes it more robust to certain types of irregularities in the data. Based on an extensive Monte Carlo study, we assess both the behavior of the proposed estimator and its performance relative to traditional estimators. In addition, we study the properties of the estimators in the presence of jumps, which yields an interesting discussion. We find that the wavelet-based estimator may be an attractive, robust, and fast alternative to traditional estimation methods; in particular, a localized version of our estimator becomes attractive in small samples.
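The connection between wavelet variances and the long-memory parameter d can be sketched with a sanity check. The code below is a crude log-variance regression across Haar wavelet scales, not the wavelet Whittle objective itself, and it uses white noise (for which d = 0) rather than a FIEGARCH process, since simulating long memory needs more machinery. For a process with spectral density behaving like |λ|^(-2d) near zero, the wavelet detail variance grows like 2^(2dj) in the scale j, so the regression slope divided by two estimates d.

```python
import math
import random

random.seed(3)

def haar_detail_variance(x, level):
    """Variance of normalized Haar detail coefficients at a given scale."""
    block = 2 ** level
    half = block // 2
    coeffs = []
    for start in range(0, len(x) - block + 1, block):
        left = sum(x[start:start + half])
        right = sum(x[start + half:start + block])
        coeffs.append((left - right) / math.sqrt(block))
    m = sum(coeffs) / len(coeffs)
    return sum((c - m) ** 2 for c in coeffs) / len(coeffs)

# White noise: log2 wavelet variance should be flat across scales.
x = [random.gauss(0, 1) for _ in range(2 ** 14)]
levels = [1, 2, 3, 4, 5, 6]
logvar = [math.log2(haar_detail_variance(x, j)) for j in levels]

# Least-squares slope of log2-variance on scale; slope / 2 estimates d.
n = len(levels)
mj, mv = sum(levels) / n, sum(logvar) / n
slope = (sum((j - mj) * (v - mv) for j, v in zip(levels, logvar))
         / sum((j - mj) ** 2 for j in levels))
d_hat = slope / 2
print(f"estimated d = {d_hat:.3f} (truth: 0)")
```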


1977 ◽  
Vol 71 (1) ◽  
pp. 111-130 ◽  
Author(s):  
John H. Aldrich ◽  
Richard D. McKelvey

A method of scaling is proposed to estimate the positions of candidates and voters on a common issue dimension. The scaling model assumes that candidates occupy true positions in an issue space and that individual-level perceptual data arise from these in a two-step process. The first step consists of a stochastic component, satisfying the standard Gauss–Markov assumptions, which reflects true misperception. The second step consists of a linear distortion introduced in the survey situation. Estimates of the model's parameters are developed by applying the least squares criterion, and the distributions of the estimates are investigated by Monte Carlo methods. The scaling technique is applied to the seven-point issue scales asked in the 1968 and 1972 SRC surveys. The resulting ideal point estimates are related to candidate positions in 1968 to test a simple Downsian voting model.
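The two-step perceptual model is easy to simulate, and even a cruder estimator than the paper's least-squares solution recovers the candidate ordering. In the sketch below, every numeric value is an illustrative assumption, and the recovery step (standardize each respondent's placements, then average) is a simplification of the Aldrich–McKelvey estimator, not the estimator itself.

```python
import random
import statistics

random.seed(4)

# Hypothetical true candidate positions on the latent issue dimension.
z = [-0.8, -0.3, 0.1, 0.6, 0.9]

# Step 1: stochastic misperception of z_j; step 2: respondent-specific
# linear distortion c_i + w_i * (.) introduced in the survey situation.
reports = []
for _ in range(300):
    c = random.gauss(0, 1)           # intercept shift
    w = random.uniform(0.5, 2.0)     # stretch
    reports.append([c + w * (zj + random.gauss(0, 0.15)) for zj in z])

# Crude recovery: standardizing each respondent's row strips the linear
# distortion (up to sign here, since all w are positive); averaging the
# standardized placements then estimates the common space.
def standardize(row):
    m, s = statistics.mean(row), statistics.stdev(row)
    return [(r - m) / s for r in row]

est = [statistics.mean(col) for col in zip(*(standardize(r) for r in reports))]
order = sorted(range(len(z)), key=lambda j: est[j])
print("recovered left-to-right order:", order)
```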


Author(s):  
Martin Elff ◽  
Jan Paul Heisig ◽  
Merlin Schaeffer ◽  
Susumu Shikano

Abstract Quantitative comparative social scientists have long worried about the performance of multilevel models when the number of upper-level units is small. Adding to these concerns, an influential Monte Carlo study by Stegmueller (2013) suggests that standard maximum-likelihood (ML) methods yield biased point estimates and severely anti-conservative inference with few upper-level units. In this article, the authors seek to rectify this negative assessment. First, they show that ML estimators of coefficients are unbiased in linear multilevel models; the apparent bias in coefficient estimates found by Stegmueller can be attributed to Monte Carlo error and a flaw in the design of his simulation study. Second, they demonstrate how inferential problems can be overcome by using restricted ML estimators for variance parameters and a t-distribution with appropriate degrees of freedom for statistical inference. Thus, accurate multilevel analysis is possible within the framework most practitioners are familiar with, even when there are only a few upper-level units.
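The core inferential point, that normal-based intervals under-cover with few upper-level units while t-based intervals with appropriate degrees of freedom restore nominal coverage, can be shown in a stripped-down Monte Carlo. The sketch below reduces the multilevel model to its cluster means (an illustrative simplification; the cluster count, effect sizes, and J − 1 degrees-of-freedom rule are assumptions for this toy setup, not the authors' full procedure).

```python
import random
import statistics

random.seed(5)

J = 5                        # few upper-level units (clusters)
Z95 = 1.96                   # normal 97.5% critical value
T95_DF4 = 2.776              # t critical value with J - 1 = 4 df
beta = 0.0                   # true fixed effect

cover_z = cover_t = 0
sims = 4000
for _ in range(sims):
    # Cluster means: fixed effect plus a cluster-level random effect.
    means = [beta + random.gauss(0, 1) for _ in range(J)]
    b_hat = statistics.mean(means)
    se = statistics.stdev(means) / J ** 0.5
    cover_z += abs(b_hat - beta) <= Z95 * se
    cover_t += abs(b_hat - beta) <= T95_DF4 * se

# With only 5 clusters the normal interval is anti-conservative (well below
# 95% coverage), while the t interval with J - 1 df is close to nominal.
print(f"normal coverage: {cover_z / sims:.1%}, t coverage: {cover_t / sims:.1%}")
```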

