Estimating Party Positions across Countries and Time—A Dynamic Latent Variable Model for Manifesto Data

2013 ◽  
Vol 21 (4) ◽  
pp. 468-491 ◽  
Author(s):  
Thomas König ◽  
Moritz Marbach ◽  
Moritz Osnabrügge

This article presents a new method for estimating the positions of political parties across country- and time-specific contexts by introducing a latent variable model for manifesto data. We estimate latent positions and exploit bridge observations to make the scales comparable. We also incorporate expert survey data as prior information in the estimation process to avoid ex post facto interpretation of the latent space. To illustrate the empirical contribution of our method, we estimate the left-right positions of 388 parties competing in 238 elections across twenty-five countries and over sixty years. Compared to the puzzling volatility of existing estimates, we find that parties change their left-right positions more modestly over time. We also show that estimates without country- and time-specific bias parameters risk serious, systematic bias in about two-thirds of our data. This suggests that researchers should carefully consider the comparability of party positions across countries and/or time.
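The bridging idea can be illustrated with a minimal sketch: parties scored on two country-specific scales act as anchors for an affine map that places both scales in a common space. The function name and toy data below are illustrative assumptions, not the authors' estimation procedure, which is a full Bayesian latent variable model with expert-survey priors.

```python
# Minimal sketch: align two country-specific latent scales using
# "bridge" parties observed on both. Least-squares fit of
# scale_b ≈ alpha + beta * scale_a.

def fit_affine_bridge(scale_a, scale_b):
    """Estimate the affine map from scale A to scale B
    using parties scored on both scales."""
    n = len(scale_a)
    mean_a = sum(scale_a) / n
    mean_b = sum(scale_b) / n
    cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(scale_a, scale_b))
    var = sum((a - mean_a) ** 2 for a in scale_a)
    beta = cov / var
    alpha = mean_b - beta * mean_a
    return alpha, beta

# Toy bridge parties: scale B is scale A shifted by +0.5
bridge_a = [-1.0, 0.0, 1.0, 2.0]
bridge_b = [-0.5, 0.5, 1.5, 2.5]

alpha, beta = fit_affine_bridge(bridge_a, bridge_b)

def to_scale_b(x):
    # Map any party position from scale A into scale B's units.
    return alpha + beta * x
```

Once the map is estimated from the bridges, every party on scale A can be expressed in scale B's units, which is the intuition behind making country- and time-specific scales comparable.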

Author(s):  
Wei Xu ◽  
Alan Ritter ◽  
Chris Callison-Burch ◽  
William B. Dolan ◽  
Yangfeng Ji

We present MultiP (Multi-instance Learning Paraphrase Model), a new model for identifying paraphrases within short messages on Twitter. We jointly model paraphrase relations between word and sentence pairs, assuming only sentence-level annotations during learning. Using this principled latent variable model alone, we achieve performance competitive with a state-of-the-art method that combines a latent space model with a feature-based supervised classifier. Our model also captures lexically divergent paraphrases that complement those found by previous methods; combining our model with previous work significantly outperforms the state of the art. In addition, we present a novel annotation methodology that has allowed us to crowdsource a paraphrase corpus from Twitter. We make this new dataset available to the research community.
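The multi-instance setup can be sketched as follows: each sentence pair is a bag of word-pair instances with latent paraphrase indicators, and only the sentence-level label is observed. The toy scoring function and the at-least-k aggregation below are illustrative assumptions, not the paper's actual model, which learns the word-pair indicators jointly with the sentence-level decision.

```python
# Sketch of the multi-instance assumption: a sentence pair is labeled
# a paraphrase if enough of its word-pair instances are anchors.

def word_pair_score(w1, w2):
    # Toy indicator: identical words (case-insensitive) count as anchors.
    # In the real model this is a learned latent variable.
    return 1 if w1.lower() == w2.lower() else 0

def sentence_pair_label(sent1, sent2, k=2):
    """At-least-k aggregation over the bag of word-pair instances."""
    anchors = sum(word_pair_score(w1, w2)
                  for w1 in sent1.split()
                  for w2 in sent2.split())
    return anchors >= k
```

During learning, only the output of `sentence_pair_label` is observed; the per-word-pair indicators remain latent, which is what makes this a multi-instance learning problem.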


2013 ◽  
Vol 46 (3) ◽  
pp. 786-797 ◽  
Author(s):  
Katherina K. Hauner ◽  
Richard E. Zinbarg ◽  
William Revelle

2021 ◽  
Vol 421 ◽  
pp. 244-259
Author(s):  
Hao Xiong ◽  
Yuan Yan Tang ◽  
Fionn Murtagh ◽  
Leszek Rutkowski ◽  
Shlomo Berkovsky

Energies ◽  
2021 ◽  
Vol 14 (11) ◽  
pp. 3137
Author(s):  
Amine Tadjer ◽  
Reider B. Bratvold ◽  
Remus G. Hanea

Production forecasting is the basis for decision making in the oil and gas industry and can be quite challenging, especially in terms of complex geological modeling of the subsurface. To help solve this problem, assisted history matching built on ensemble-based analysis, such as the ensemble smoother and ensemble Kalman filter, is useful for estimating models that preserve geological realism and have predictive capabilities. These methods tend, however, to be computationally demanding, as they require a large ensemble size for stable convergence. In this paper, we propose a novel method of uncertainty quantification and reservoir model calibration with much-reduced computation time. The approach is based on a sequential combination of nonlinear dimensionality reduction techniques, t-distributed stochastic neighbor embedding (t-SNE) or the Gaussian process latent variable model (GPLVM), with K-means clustering, followed by ensemble smoother with multiple data assimilation (ES-MDA). The cluster analysis with t-SNE and GPLVM is used to reduce the number of initial geostatistical realizations and select a set of optimal reservoir models that have production performance similar to the reference model. We then apply ES-MDA to provide reliable assimilation results. Experimental results based on the Brugge field case data verify the efficiency of the proposed approach.
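The model-selection step described above can be sketched in miniature: cluster the low-dimensional embeddings of the geostatistical realizations and keep one representative per cluster, shrinking the ensemble before data assimilation. The embedding is assumed given (e.g., from t-SNE or GPLVM); the pure-Python K-means and medoid selection below are illustrative, not the authors' implementation.

```python
# Sketch: shrink an ensemble of realizations by clustering their
# 2-D embeddings and keeping the medoid of each cluster.

def dist2(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def mean(cluster):
    n = len(cluster)
    return tuple(sum(p[i] for p in cluster) / n for i in range(len(cluster[0])))

def kmeans(points, k, iters=20):
    centers = points[:k]  # deterministic init, for the sketch only
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: dist2(p, centers[c]))
            clusters[j].append(p)
        centers = [mean(cl) if cl else centers[j]
                   for j, cl in enumerate(clusters)]
    return centers, clusters

def representatives(points, k):
    centers, clusters = kmeans(points, k)
    # Medoid: the actual realization closest to each cluster center,
    # so selected members are real models, not averages.
    return [min(cl, key=lambda p: dist2(p, c))
            for c, cl in zip(centers, clusters) if cl]

# Toy 2-D embeddings of six realizations forming two groups
embedded = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 4.9),
            (0.0, 0.1), (5.0, 5.1)]
reps = representatives(embedded, k=2)
```

The reduced set `reps` would then feed into ES-MDA, which is where the computational savings over running the full ensemble come from.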


2021 ◽  
Vol 11 (2) ◽  
pp. 624
Author(s):  
In-su Jo ◽  
Dong-bin Choi ◽  
Young B. Park

Ancient Chinese books contain many corrupted characters, and extraneous objects are often mixed in when the characters are extracted as images. To turn these incomplete images into accurate data, we use image completion technology, which removes unnecessary objects and restores corrupted images. In this paper, we propose a variational autoencoder with classification (VAE-C) model. This model is characterized by the use of classification areas and a class activation map (CAM). Through the classification area, the data distribution is disentangled, and the node to be adjusted is then tracked using the CAM. By reducing the value of that node in the latent variable, an image with the unnecessary objects removed is generated. The VAE-C model can be utilized not only to eliminate unnecessary objects but also to restore corrupted images. By comparing object-removal performance with the mask region-based convolutional neural network (Mask R-CNN), one of the prevalent object detection technologies, and image restoration performance with the partial convolution model (PConv) and the gated convolution model (GConv), which are image inpainting technologies, our model is shown to perform excellently in terms of removing objects and restoring corrupted areas.
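The latent-editing step can be sketched as follows: a CAM-like attribution identifies which latent node drives the unwanted object, and that node's value is scaled down before decoding. The function name, the toy latent code, and the attribution scores below are illustrative assumptions, not the VAE-C implementation.

```python
# Sketch: suppress the latent node that a CAM-like attribution
# associates with the unwanted object, then decode the edited code.

def suppress_object(latent, attribution, factor=0.1):
    """Scale down the latent node with the highest attribution score.
    Returns the edited latent code and the index of the edited node."""
    target = max(range(len(latent)), key=lambda i: attribution[i])
    edited = list(latent)
    edited[target] *= factor
    return edited, target

# Toy latent code and per-node attribution for the unwanted-object class
z = [0.8, -1.2, 2.5, 0.3]
cam = [0.05, 0.10, 0.90, 0.02]  # node 2 dominates the object class

z_clean, node = suppress_object(z, cam)
```

In the full model, `z_clean` would be passed through the decoder to produce the image with the unwanted object removed, while the untouched nodes preserve the rest of the character.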

