A Gamma-Poisson Mixture Topic Model for Short Text

2020 ◽  
Vol 2020 ◽  
pp. 1-17
Author(s):  
Jocelyn Mazarura ◽  
Alta de Waal ◽  
Pieter de Villiers

Most topic models are constructed under the assumption that documents follow a multinomial distribution. The Poisson distribution is an alternative for describing count data; in topic modelling, it models the number of occurrences of a word in documents of fixed length. The Poisson distribution has been applied successfully in text classification, but its application to topic modelling is not well documented, specifically in the context of a generative probabilistic model. Furthermore, the few Poisson topic models in the literature are admixture models, which assume that a document is generated from a mixture of topics. In this study, we focus on short text. Many studies have shown that the simpler assumption of a mixture model fits short text better. With mixture models, as opposed to admixture models, the generative assumption is that a document is generated from a single topic. One topic model that makes this one-topic-per-document assumption is the Dirichlet-multinomial mixture model. The main contributions of this work are a new Gamma-Poisson mixture model and a collapsed Gibbs sampler for it. A benefit of the collapsed Gibbs sampler derivation is that the model can automatically select the number of topics contained in the corpus. The results show that the Gamma-Poisson mixture model performs better than the Dirichlet-multinomial mixture model at selecting the number of topics in labelled corpora. Furthermore, the Gamma-Poisson mixture produces better topic coherence scores than the Dirichlet-multinomial mixture model, making it a viable option for the challenging task of topic modelling of short text.
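
The abstract describes the sampler only at a high level; the following is a minimal sketch of a collapsed Gibbs sampler for a one-topic-per-document Gamma-Poisson mixture, written from standard Gamma-Poisson conjugacy rather than from the authors' implementation. The hyperparameter names (alpha and beta for the Gamma prior, gamma for the mixture weights) and the negative-binomial predictive used below are assumptions; initialising with more topics than expected and letting empty topics die out is one way such a sampler can estimate the number of topics.

import numpy as np
from scipy.special import gammaln

def collapsed_gibbs_gpm(X, K, alpha=1.0, beta=1.0, gamma=0.1, n_iter=200, seed=0):
    """X: (D, V) matrix of word counts; K: initial (generous) number of topics."""
    rng = np.random.default_rng(seed)
    D, V = X.shape
    z = rng.integers(K, size=D)                       # one topic per document
    m = np.bincount(z, minlength=K).astype(float)     # documents per topic
    s = np.zeros((K, V))                              # word counts per topic
    for d in range(D):
        s[z[d]] += X[d]
    for _ in range(n_iter):
        for d in range(D):
            # remove document d from its current topic
            m[z[d]] -= 1
            s[z[d]] -= X[d]
            # log p(z_d = k | rest): mixture-weight term plus the
            # Poisson-Gamma (negative-binomial) predictive of the counts
            a = alpha + s                              # (K, V) posterior shapes
            b = beta + m[:, None]                      # (K, 1) posterior rates
            log_pred = (gammaln(a + X[d]) - gammaln(a)
                        + a * np.log(b)
                        - (a + X[d]) * np.log(b + 1.0)).sum(axis=1)
            log_prob = np.log(m + gamma) + log_pred
            log_prob -= log_prob.max()                 # stabilise before exponentiating
            p = np.exp(log_prob)
            k_new = rng.choice(K, p=p / p.sum())
            # add document d to its new topic
            z[d] = k_new
            m[k_new] += 1
            s[k_new] += X[d]
    return z, m, s

After sampling, the number of non-empty topics (entries of m greater than zero) serves as the estimate of the number of topics in the corpus.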

1990 ◽  
Vol 132 (supp1) ◽  
pp. 183-191 ◽  
Author(s):  
ROBERT D. GIBBONS ◽  
DAVID C. CLARK ◽  
JAN FAWCETT

Abstract The absence of any standard definition of suicide cluster events hinders understanding of the prevalence of the problem, hinders the development of appropriate public health responses to observed clusters, and ultimately hinders investigation of the mechanisms underlying contagious communication of suicidal behavior. The authors introduce a Poisson mixture model for assessing potential clusters of adolescent suicide, apply that model to the monthly incidence rates of adolescent suicide for one populous US county over the last 11 years, and generate 99% tolerance limits with 95% confidence for the number of suicides which may occur by chance within specific intervals of time in that county. The suicide incidence data showed a remarkable fit to a single Poisson distribution, suggesting it is not unreasonable to consider the cases as randomly distributed and independent events. The authors conclude that there is no evidence that adolescent suicides occurred in clusters in the place and time frame under study, and recommend the Poisson mixture model both for ascertaining clusters and for implementing cluster surveillance.
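
As an illustration of the surveillance rule described above, the sketch below computes an approximate upper tolerance limit for monthly Poisson counts (99% content with 95% confidence) by taking an exact upper confidence limit on the monthly rate and then the 99th percentile of a Poisson at that rate. This is one standard approximation, not necessarily the construction the authors used, and the example data are purely illustrative.

import numpy as np
from scipy import stats

def poisson_upper_tolerance(counts, content=0.99, confidence=0.95):
    """counts: observed monthly incidence counts, assumed i.i.d. Poisson."""
    n = len(counts)
    total = int(np.sum(counts))
    # exact (Garwood) upper confidence limit for the rate, scaled to one month
    lam_upper = stats.chi2.ppf(confidence, 2 * (total + 1)) / (2.0 * n)
    # smallest c with P(X <= c) >= content under Poisson(lam_upper)
    return int(stats.poisson.ppf(content, lam_upper))

# flag a month as a potential cluster if its count exceeds the limit
monthly = [2, 1, 0, 3, 1, 2, 0, 1, 2, 1, 0, 2]   # illustrative data only
limit = poisson_upper_tolerance(monthly)
print("flag months with count >", limit)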


Author(s):  
Ximing Li ◽  
Jiaojiao Zhang ◽  
Jihong Ouyang

Conventional topic models suffer from a severe sparsity problem when facing extremely short texts such as social media posts. The family of Dirichlet multinomial mixture (DMM) models can handle the sparsity problem; however, they are still very sensitive to ordinary and noisy words, resulting in inaccurate topic representations at the document level. In this paper, we alleviate this problem by preserving the local neighborhood structure of short texts, enabling topical signals to spread among neighboring documents so as to correct inaccurate topic representations. This is achieved by variational manifold regularization, which constrains close short texts to have similar variational topic representations. Building on this idea, we propose a novel Laplacian DMM (LapDMM) topic model. During document graph construction, we further use the word mover's distance with word embeddings to measure document similarities at the semantic level. To evaluate LapDMM, we compare it against state-of-the-art short text topic models on several traditional tasks. Experimental results demonstrate that LapDMM achieves very significant performance gains over baseline models, e.g., scores that are higher by about 0.2 on clustering and classification tasks in many cases.
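
The neighborhood graph is the piece of LapDMM that is easiest to make concrete. Below is a minimal sketch of the document-graph construction using a relaxed (lower-bound) word mover's distance over pretrained word embeddings; the paper uses the full word mover's distance, and the embedding lookup emb, the uniform token weights, and the choice of k here are assumptions for illustration only.

import numpy as np

def relaxed_wmd(doc_a, doc_b, emb):
    """doc_a, doc_b: token lists whose words appear in emb; emb: word -> vector."""
    A = np.array([emb[w] for w in doc_a])
    B = np.array([emb[w] for w in doc_b])
    # pairwise Euclidean distances between the two documents' word vectors
    dists = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    # each word moves all its mass to its closest counterpart; taking the
    # larger of the two directed costs gives a tighter lower bound on WMD
    cost_ab = dists.min(axis=1).mean()
    cost_ba = dists.min(axis=0).mean()
    return max(cost_ab, cost_ba)

def knn_document_graph(docs, emb, k=5):
    """Return, for each document, the indices of its k nearest neighbors."""
    n = len(docs)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = relaxed_wmd(docs[i], docs[j], emb)
    np.fill_diagonal(D, np.inf)
    return [np.argsort(D[i])[:k] for i in range(n)]

The resulting neighbor lists would define the graph Laplacian used to regularize the variational topic representations of nearby documents.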


2019 ◽  
Vol 174 ◽  
pp. 105-116 ◽  
Author(s):  
Katharina Falkner ◽  
Hermine Mitter ◽  
Elena Moltchanova ◽  
Erwin Schmid

2007 ◽  
Author(s):  
Gloria Haro ◽  
Gregory Randall ◽  
Guillermo Sapiro

BMC Genomics ◽  
2008 ◽  
Vol 9 (Suppl 2) ◽  
pp. S23 ◽  
Author(s):  
Weixing Feng ◽  
Yunlong Liu ◽  
Jiejun Wu ◽  
Kenneth P Nephew ◽  
Tim HM Huang ◽  
...  
