Taking the Dogma out of Econometrics: Structural Modeling and Credible Inference

2010 ◽  
Vol 24 (2) ◽  
pp. 69-82 ◽  
Author(s):  
Aviv Nevo ◽  
Michael D Whinston

Without a doubt, there has been a “credibility revolution” in applied econometrics. One contributing development has been the improvement and increased use in data analysis of “structural methods”; that is, the use of models based in economic theory. Structural modeling attempts to use data to identify the parameters of an underlying economic model, based on models of individual choice or aggregate relations derived from them. Structural estimation has a long tradition in economics, but better and larger data sets, more powerful computers, improved modeling methods, faster computational techniques, and new econometric methods have allowed researchers to make significant improvements. While Angrist and Pischke extol the successes of empirical work that estimates “treatment effects” based on actual or quasi-experiments, they are much less sanguine about structural analysis and hold industrial organization up as an example where “progress is less dramatic.” Indeed, reading their article one comes away with the impression that there is only a single way to conduct credible empirical analysis. This seems to us a very narrow and dogmatic approach to empirical work; credible analysis can come in many guises, both structural and nonstructural, and for some questions structural analysis offers important advantages. In this comment, we address the criticism of structural analysis and its use in industrial organization, and consider why empirical analysis in industrial organization differs in such striking ways from that in fields such as labor, which have recently emphasized the methods favored by Angrist and Pischke.

2013 ◽  
Vol 2 (1) ◽  
pp. 97-117 ◽  
Author(s):  
Tuukka Saarimaa ◽  
Janne Tukiainen

The efficiency of local public goods provision and the functioning of local democracy crucially depend on the size and number of local jurisdictions. This article empirically analyzes voluntary municipal mergers in Finland. Our main focus is on aspects that have been somewhat neglected in prior empirical work: whether local democracy considerations, representation, and voter preferences are involved in shaping the resulting municipal structure. The main results imply that some municipalities are forced to merge due to fiscal pressure and have to trade off political power to be accepted by their partners. The study also finds that the median voter's distance from services matters, while population size does not. The latter, somewhat surprising, observation is possibly explained by existing municipal co-operation, which already exhausts potential economies of scale.


2021 ◽  
pp. 089443932110415
Author(s):  
Vanessa Russo ◽  
Emiliano del Gobbo

The object of this research is to exploit the algorithm of Twitter’s trending topic (TT) and identify the elements capable of guiding public opinion in the Italian panorama. The underlying hypotheses that guide the whole article, confirmed by the research results, concern the existence of (a) a limited number of elements at the base of each popular hashtag with very high viral power and (b) hashtags transversal to the themes detected by the Twitter algorithm that define specific opinion polls. Through computational techniques, it was possible to extract and process data sets from six specific hashtags highlighted by TT. In a first step, we analyzed the hashtag semantic network through social network analysis to identify the hashtags transversal to the six TTs. Subsequently, we selected for each data set the content with high sharing power and created a “potential opinion leader” index to identify users with influencer characteristics. Finally, a cross section of social actors able to guide public opinion in the Twittersphere emerged from the intersection between potentially influential users and the viral content.
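As a rough illustration of the kind of pipeline described above, the sketch below builds a hashtag co-occurrence network, ranks candidate transversal hashtags by centrality, and computes a toy "potential opinion leader" score. The scoring formula, field names, and the choice of centrality are illustrative assumptions, not the article's actual definitions.

```python
# Minimal sketch (not the authors' code) of a hashtag co-occurrence network
# plus a toy "potential opinion leader" score. The score formula and the
# field names (retweets, followers, n_tweets) are illustrative assumptions.
from itertools import combinations
import networkx as nx

def hashtag_network(tweets):
    """Co-occurrence graph: nodes are hashtags, edge weights count how often
    two hashtags appear in the same tweet."""
    g = nx.Graph()
    for t in tweets:
        for a, b in combinations(sorted(set(t["hashtags"])), 2):
            if g.has_edge(a, b):
                g[a][b]["weight"] += 1
            else:
                g.add_edge(a, b, weight=1)
    return g

def transversal_hashtags(g, k=10):
    """Hashtags bridging several trending-topic communities, ranked here by
    betweenness centrality (one possible choice among several)."""
    bc = nx.betweenness_centrality(g)
    return sorted(bc, key=bc.get, reverse=True)[:k]

def opinion_leader_index(users):
    """Toy composite score: sharing power (retweets per tweet) scaled by reach."""
    return {
        u["screen_name"]: (u["retweets"] / max(u["n_tweets"], 1)) * u["followers"]
        for u in users
    }

# Tiny usage example on made-up tweets.
tweets = [
    {"hashtags": ["#lockdown", "#COVID19"]},
    {"hashtags": ["#COVID19", "#vaccini"]},
    {"hashtags": ["#lockdown", "#vaccini", "#COVID19"]},
]
g = hashtag_network(tweets)
print(transversal_hashtags(g, k=2))
```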


2015 ◽  
Vol 20 (1) ◽  
pp. 1-26 ◽  
Author(s):  
Dong-Hyeon Kim ◽  
Shu-Chin Lin ◽  
Yi-Chen Wu

Recent empirical work on globalization and inflation analyzes multicountry data sets in panel and/or cross-section frameworks and reaches inconclusive results. This paper highlights their shortcomings and reexamines the issue utilizing heterogeneous panel cointegration techniques that allow for cross-section heterogeneity and dependence. It finds that in a sample of developing countries globalization of both trade and finance, on average, exerts a significant and positive effect on inflation, whereas in a sample of developed countries there is, on average, no significant impact of openness. Neither type of openness disciplines inflationary policy. Despite this, there are large variations in the effect across countries, possibly due to differences in the quality of political institutions, central bank independence, exchange-rate regimes, financial development, and/or legal traditions.
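The paper's estimators are heterogeneous panel cointegration techniques; the minimal sketch below only illustrates the cross-country heterogeneity the abstract emphasizes, using a mean-group style estimator on toy data (country-by-country regressions whose slopes are then averaged). It does not implement cointegration tests or cross-section dependence corrections, and all variable names and data are placeholders.

```python
# Mean-group style illustration (toy data, assumed variable names): the
# long-run inflation/openness relation is estimated country by country and
# the slopes are then averaged, so the effect can differ across countries
# instead of being pooled into a single coefficient.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def mean_group_slopes(panel, y="inflation", x="openness", group="country"):
    """OLS slope of y on x for each country, plus the cross-country mean."""
    slopes = {}
    for c, df in panel.groupby(group):
        X = sm.add_constant(df[x])
        slopes[c] = sm.OLS(df[y], X).fit().params[x]
    return pd.Series(slopes), float(np.mean(list(slopes.values())))

# Toy panel with a country-specific long-run effect.
rng = np.random.default_rng(0)
rows = []
for c in ["A", "B", "C"]:
    beta = rng.normal(0.3, 0.2)
    for _ in range(30):
        openness = rng.uniform(20, 120)
        rows.append({"country": c, "openness": openness,
                     "inflation": 2 + beta * openness / 10 + rng.normal(0, 0.5)})
panel = pd.DataFrame(rows)
per_country, mg = mean_group_slopes(panel)
print(per_country.round(3))
print("mean-group estimate:", round(mg, 3))
```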


2012 ◽  
Vol 21 (06) ◽  
pp. 1250040
Author(s):  
Niall Rooney

In this paper we present a novel method that forms a weighted combination of a range of Stacking-based methods for regression problems, without adding any major computational overhead compared to Stacking itself. The intention of the technique is to benefit from the variation in performance of individual Stacking methods across different data sets, in order to provide a more robust technique overall. We detail an empirical analysis of the technique, referred to as weighted Meta-Combiner (wMetaComb), and compare its performance to that of its underlying techniques.
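A minimal sketch of the general idea, not the wMetaComb algorithm itself: several Stacking ensembles with different meta-learners are combined, with weights derived from their cross-validated error. The inverse-MSE weighting rule and the particular learners are illustrative assumptions.

```python
# Sketch of a weighted combination of Stacking regressors: each Stacking
# variant is weighted by its cross-validated accuracy (inverse MSE here,
# an illustrative choice, not the paper's formula).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import StackingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge, Lasso
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_score, train_test_split

X, y = make_regression(n_samples=400, n_features=10, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

base = [("rf", RandomForestRegressor(n_estimators=50, random_state=0)),
        ("knn", KNeighborsRegressor()),
        ("lasso", Lasso())]

# Different Stacking variants: same base learners, different meta-learners.
stackers = [
    StackingRegressor(estimators=base, final_estimator=Ridge()),
    StackingRegressor(estimators=base,
                      final_estimator=RandomForestRegressor(random_state=0)),
]

# Weight each stacker by its cross-validated error.
weights = []
for s in stackers:
    mse = -cross_val_score(s, X_tr, y_tr, cv=5,
                           scoring="neg_mean_squared_error").mean()
    weights.append(1.0 / mse)
weights = np.array(weights) / np.sum(weights)

# Combined prediction is the weighted average of the stackers' predictions.
preds = np.column_stack([s.fit(X_tr, y_tr).predict(X_te) for s in stackers])
combined = preds @ weights
print("combined MSE:", float(np.mean((combined - y_te) ** 2)))
```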


Author(s):  
Pierre Salmon

This chapter mainly discusses empirical work in the domain of local (subcentral) finance involving yardstick competition. It begins with a short section on the systemic novelty introduced by yardstick competition into the theory of fiscal federalism. The central part of the chapter focuses on the empirical arguments developed to probe the presence of yardstick competition and yardstick voting in various data sets. Then, some queries are formulated about the exact nature of what has been established empirically so far. It seems clearly confirmed that some form of yardstick competition or yardstick voting is at work in different settings. That result is important, but the findings should not be taken to validate the game-theoretical analysis or the pure mimicking-behavior assumption typically associated with the empirical studies. Alternative approaches are considered toward the end of the chapter.


2019 ◽  
Vol 21 (1) ◽  
pp. 79 ◽  
Author(s):  
Jörn Lötsch ◽  
Alfred Ultsch

Advances in flow cytometry enable the acquisition of large and high-dimensional data sets per patient. Novel computational techniques allow the visualization of structures in these data and, finally, the identification of relevant subgroups. Correct data visualizations and projections from the high-dimensional space to the visualization plane require the correct representation of the structures in the data. This work shows that frequently used techniques are unreliable in this respect. One of the most important methods for data projection in this area is t-distributed stochastic neighbor embedding (t-SNE). We analyzed its performance on artificial and real biomedical data sets. t-SNE introduced a cluster structure for homogeneously distributed data that did not contain any subgroup structure. In other data sets, t-SNE occasionally suggested the wrong number of subgroups or projected data points belonging to different subgroups as if they belonged to the same subgroup. As an alternative approach, emergent self-organizing maps (ESOM) were used in combination with U-matrix methods. This approach allowed the correct identification of homogeneous data, while in data sets containing distance- or density-based subgroup structures, the number of subgroups and the data point assignments were correctly displayed. The results highlight possible pitfalls in the use of a currently widely applied algorithmic technique for the detection of subgroups in high-dimensional cytometric data and suggest a robust alternative.
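A small demonstration of the failure mode described, not the paper's experiments: t-SNE applied to a single homogeneous Gaussian cloud with no subgroup structure, comparing a clustering score computed in the original space and on the embedding. The ESOM/U-matrix alternative the authors use is not available in scikit-learn and is omitted here.

```python
# Toy check: t-SNE on homogeneous data (no clusters). A clustering score that
# is noticeably higher on the embedding than in the original space hints at
# structure introduced by the projection rather than present in the data.
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))          # homogeneous Gaussian cloud

emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)

for data, name in [(X, "original space"), (emb, "t-SNE embedding")]:
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(data)
    print(name, "silhouette:", round(silhouette_score(data, labels), 3))
```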


2018 ◽  
Vol 56 (1) ◽  
pp. 157-184 ◽  
Author(s):  
Ali Hortaçsu ◽  
David McAdams

Abundant data has led to new opportunities for empirical auctions research in recent years, with much of the newest work on auctions of multiple objects, including: (1) auctions of ranked objects (such as sponsored search ads), (2) auctions of identical objects (such as Treasury bonds), and (3) auctions of dissimilar objects (such as FCC spectrum licenses). This paper surveys recent developments in the empirical analysis of such auctions. (JEL D44, H82)

