Topics (Automated Content Analysis)

Author(s):  
Valerie Hase

Topics describe the main issue discussed in an article, for example: Does an article deal with politics, economics, or sports?

Field of application/theoretical foundation: In the context of "Agenda Setting", studies analyze which issues are on the public agenda. In the context of "News Values", studies may analyze why some topics are covered more prominently than others.

References/combination with other methods of data collection: Many studies combine manual inspection of topics with their automated detection. Quinn et al. (2010) demonstrate, for their analysis of legislative speeches, how manual inspection may increase the validity of results. Similarly, Hase et al. (2020) use automated content analysis to find and map similar topics, for which manual coding is then conducted. Such combinations may contribute to a better and more detailed understanding of topics than automated analyses by themselves.

The datasets referred to in the table are described in the following paragraph: Puschmann (2019a) uses New York Times articles (1996-2006, N = 30,862) as well as articles from Die Zeit (2011-2016, N = 377) to identify topics using supervised machine learning. In another tutorial, Puschmann (2019b) uses Sherlock Holmes stories (19th century, N = 12), articles from Die Zeit (2011-2016, N = 377), and United Nations General Debate transcripts (1970-2017, N = 7,897) to apply LDA and structural topic modeling. In her tutorials, Silge (2018a, 2018b) likewise uses Sherlock Holmes stories (19th century, N = 12) and a news corpus that also contains comments (2006-ongoing, N = 100,000). Silge and Robinson (2020) apply LDA topic modeling to news stories by the Associated Press (1992, N = 2,246) as well as books by Dickens, Wells, Verne, and Austen (19th century, N = 4). Roberts et al. (2019) use blogposts (2008, N = 13,248) for structural topic modeling. Watanabe and Müller (2019) apply LDA topic modeling to newspaper articles from The Guardian (2016, N = 6,000). Van Atteveldt and Welbers (2019, 2020) use State of the Union speeches (1981-2017, N = 10 and 1789-2017, N = 58) for their analyses. Lastly, Wiedemann and Niekler (2017) use the same data containing State of the Union speeches (1790-2017, N = 223).

Table 1. Measurement of "Topics" using automated content analysis.

Author(s) | Sample | Procedure | Formal validity check with manual coding as benchmark* | Code
Puschmann (2019a) | (a) Newspaper articles; (b) Newspaper articles | Supervised machine learning | Reported | http://inhaltsanalyse-mit-r.de/maschinelles_lernen.html
Puschmann (2019b) | (a) Sherlock Holmes stories; (b) Newspaper articles; (c) United Nations General Debate transcripts | LDA topic modeling; structural topic modeling | Not reported | http://inhaltsanalyse-mit-r.de/themenmodelle.html
Silge (2018a) & Silge (2018b) | (a) Sherlock Holmes stories; (b) News stories and comments | Structural topic modeling | Not reported | https://juliasilge.com/blog/sherlock-holmes-stm/ & https://juliasilge.com/blog/evaluating-stm/
Silge & Robinson (2020) | (a) News articles; (b) Books | LDA topic modeling | Not reported | https://www.tidytextmining.com/topicmodeling.html
Roberts et al. (2019) | Blogposts | Structural topic modeling | Not reported | https://www.jstatsoft.org/article/view/v091i02
Watanabe & Müller (2019) | Newspaper articles | LDA topic modeling | Not reported | https://tutorials.quanteda.io/machine-learning/topicmodel/
van Atteveldt & Welbers (2019) | State of the Union speeches | Structural topic modeling | Not reported | https://github.com/ccs-amsterdam/r-course-material/blob/master/tutorials/r_text_stm.md
van Atteveldt & Welbers (2020) | State of the Union speeches | LDA topic modeling | Not reported | https://github.com/ccs-amsterdam/r-course-material/blob/master/tutorials/r_text_lda.md
Wiedemann & Niekler (2017) | State of the Union speeches | LDA topic modeling | Not reported | https://tm4ss.github.io/docs/Tutorial_6_Topic_Models.html
Wiedemann & Niekler (2017) | State of the Union speeches | Supervised machine learning | Reported | https://tm4ss.github.io/docs/Tutorial_7_Klassifikation.html

*Please note that many of the sources listed here are tutorials on how to conduct automated analyses and are therefore not focused on the validation of results. This column simply indicates which sources readers can consult if they are interested in the validation of results.

References

Hase, V., Engelke, K., & Kieslich, K. (2020). The things we fear. Combining automated and manual content analysis to uncover themes, topics and threats in fear-related news. Journalism Studies, 21(10), 1384-1402.
Puschmann, C. (2019). Automatisierte Inhaltsanalyse mit R. Retrieved from http://inhaltsanalyse-mit-r.de/index.html
Quinn, K. M., Monroe, B. L., Colaresi, M., Crespin, M. H., & Radev, D. R. (2010). How to analyze political attention with minimal assumptions and costs. American Journal of Political Science, 54(1), 209–228.
Roberts, M. E., Stewart, B. M., & Tingley, D. (2019). stm: An R package for structural topic models. Journal of Statistical Software, 91(2), 1–40.
Silge, J. (2018a). The game is afoot! Topic modeling of Sherlock Holmes stories. Retrieved from https://juliasilge.com/blog/sherlock-holmes-stm/
Silge, J. (2018b). Training, evaluating, and interpreting topic models. Retrieved from https://juliasilge.com/blog/evaluating-stm/
Silge, J., & Robinson, D. (2020). Text mining with R. A tidy approach. Retrieved from https://www.tidytextmining.com/
van Atteveldt, W., & Welbers, K. (2019). Structural topic modeling. Retrieved from https://github.com/ccs-amsterdam/r-course-material/blob/master/tutorials/r_text_stm.md
van Atteveldt, W., & Welbers, K. (2020). Fitting LDA models in R. Retrieved from https://github.com/ccs-amsterdam/r-course-material/blob/master/tutorials/r_text_lda.md
Watanabe, K., & Müller, S. (2019). Quanteda tutorials. Retrieved from https://tutorials.quanteda.io/
Wiedemann, G., & Niekler, A. (2017). Hands-on: A five day text mining course for humanists and social scientists in R. Proceedings of the 1st Workshop on Teaching NLP for Digital Humanities (Teach4DH@GSCL 2017), Berlin. Retrieved from https://tm4ss.github.io/docs/index.html

Sentiment/Tone (Automated Content Analysis)

Author(s):  
Valerie Hase

Sentiment/tone describes the way issues or specific actors are portrayed in coverage. Many analyses differentiate between negative, neutral/balanced, and positive sentiment/tone as broader categories, but analyses might also measure more granular types of sentiment/tone, for example expressions of incivility, fear, or happiness. Analyses can detect sentiment/tone in full texts (e.g., general sentiment in financial news) or concerning specific issues or actors (e.g., sentiment towards the stock market in financial news).

Field of application/theoretical foundation: Related to theories of "Framing" and "Bias" in coverage, many analyses are concerned with the way the news evaluates and interprets specific issues and actors.

References/combination with other methods of data collection: Manual coding is needed for many automated analyses, including those concerned with sentiment. Studies, for example, use manual content analysis to develop dictionaries, to create training sets on which algorithms used for automated classification are trained, or to validate the results of automated analyses (Song et al., 2020).

The datasets referred to in the table are described in the following paragraph: Puschmann (2019) uses four data sets to demonstrate how sentiment/tone may be analyzed automatically. Using Sherlock Holmes stories (19th century, N = 12), tweets (2016, N = 18,826), Swiss newspaper articles (2007-2012, N = 21,280), and German parliament debate transcripts (2013-2017, N = 205,584), he illustrates how dictionaries may be applied for this task. Rauh (2018) uses three data sets to validate his German-language sentiment dictionary: sentences from German parliament speeches (1991-2013, N = 1,500), German-language quasi-sentences from German, Austrian, and Swiss party manifestos (1998-2013, N = 14,008), and newspaper, journal, and news wire articles (2011-2012, N = 4,038). Silge and Robinson (2020) use six Jane Austen novels to demonstrate how dictionaries may be used for sentiment analysis. Van Atteveldt and Welbers (2020) use State of the Union speeches (1789-2017, N = 58) for the same purpose. The same authors (van Atteveldt & Welbers, 2019) show, based on a dataset of movie reviews (N = 2,000), how supervised machine learning can also be used for this task. In their Quanteda tutorials, Watanabe and Müller (2019) demonstrate the use of dictionaries and supervised machine learning for sentiment analysis on UK newspaper articles (2012-2016, N = 6,000) as well as the same set of movie reviews (N = 2,000). Lastly, Wiedemann and Niekler (2017) use State of the Union speeches (1790-2017, N = 233) to demonstrate how sentiment/tone can be coded automatically via a dictionary approach.

Table 1. Measurement of "Sentiment/Tone" using automated content analysis.

Author(s) | Sample | Procedure | Formal validity check with manual coding as benchmark* | Code
Puschmann (2019) | (a) Sherlock Holmes stories; (b) Tweets; (c) Swiss newspaper articles; (d) German parliament transcripts | Dictionary approach | Not reported | http://inhaltsanalyse-mit-r.de/sentiment.html
Rauh (2018) | (a) Bundestag speeches; (b) Quasi-sentences from German, Austrian and Swiss party manifestos; (c) Newspapers, journals, agency reports | Dictionary approach | Reported | https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/BKBXWD
Silge & Robinson (2020) | Books by Jane Austen | Dictionary approach | Not reported | https://www.tidytextmining.com/sentiment.html
van Atteveldt & Welbers (2020) | State of the Union speeches | Dictionary approach | Reported | https://github.com/ccs-amsterdam/r-course-material/blob/master/tutorials/sentiment_analysis.md
van Atteveldt & Welbers (2019) | Movie reviews | Supervised machine learning | Reported | https://github.com/ccs-amsterdam/r-course-material/blob/master/tutorials/r_text_ml.md
Watanabe & Müller (2019) | Newspaper articles | Dictionary approach | Not reported | https://tutorials.quanteda.io/advanced-operations/targeted-dictionary-analysis/
Watanabe & Müller (2019) | Movie reviews | Supervised machine learning | Reported | https://tutorials.quanteda.io/machine-learning/nb/
Wiedemann & Niekler (2017) | State of the Union speeches | Dictionary approach | Not reported | https://tm4ss.github.io/docs/Tutorial_3_Frequency.html

*Please note that many of the sources listed here are tutorials on how to conduct automated analyses and are therefore not focused on the validation of results. This column simply indicates which sources readers can consult if they are interested in the validation of results.

References

Puschmann, C. (2019). Automatisierte Inhaltsanalyse mit R. Retrieved from http://inhaltsanalyse-mit-r.de/index.html
Rauh, C. (2018). Validating a sentiment dictionary for German political language—A workbench note. Journal of Information Technology & Politics, 15(4), 319–343. doi:10.1080/19331681.2018.1485608
Silge, J., & Robinson, D. (2020). Text mining with R. A tidy approach. Retrieved from https://www.tidytextmining.com/
Song, H., Tolochko, P., Eberl, J.-M., Eisele, O., Greussing, E., Heidenreich, T., Lind, F., Galyga, S., & Boomgaarden, H. G. (2020). In validations we trust? The impact of imperfect human annotations as a gold standard on the quality of validation of automated content analysis. Political Communication, 37(4), 550-572.
van Atteveldt, W., & Welbers, K. (2019). Supervised text classification. Retrieved from https://github.com/ccs-amsterdam/r-course-material/blob/master/tutorials/r_text_ml.md
van Atteveldt, W., & Welbers, K. (2020). Supervised sentiment analysis in R. Retrieved from https://github.com/ccs-amsterdam/r-course-material/blob/master/tutorials/sentiment_analysis.md
Watanabe, K., & Müller, S. (2019). Quanteda tutorials. Retrieved from https://tutorials.quanteda.io/
Wiedemann, G., & Niekler, A. (2017). Hands-on: A five day text mining course for humanists and social scientists in R. Proceedings of the 1st Workshop on Teaching NLP for Digital Humanities (Teach4DH@GSCL 2017), Berlin. Retrieved from https://tm4ss.github.io/docs/index.html
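At its core, the dictionary approach used in most of these tutorials counts matches against lists of positive and negative terms and normalizes by text length. The following Python sketch illustrates the idea; the word lists are toy assumptions, whereas real analyses rely on validated lexicons such as Rauh's German sentiment dictionary.

```python
# Toy dictionary-based sentiment scoring (illustrative only; real analyses
# use validated lexicons, not these hand-picked word lists).
import re

POSITIVE = {"good", "great", "success", "win", "happy"}
NEGATIVE = {"bad", "fear", "crisis", "loss", "angry"}

def sentiment_score(text):
    """Return (#positive - #negative) / #tokens, i.e. net tone per token."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return (pos - neg) / len(tokens)

print(sentiment_score("A great success for the team"))   # positive score
print(sentiment_score("Fear of a crisis grips markets")) # negative score
```

The per-token normalization keeps long and short texts comparable; the need to validate such scores against manual coding (Song et al., 2020) applies regardless of the lexicon used.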


2018, Vol 46 (1)

Author(s):  
Damian Trilling & Jelle Boumans

Automated analysis of Dutch language-based texts. An overview and research agenda

While automated methods of content analysis are increasingly popular in today's communication research, these methods have hardly been adopted by communication scholars studying texts in Dutch. This essay offers an overview of the possibilities and current limitations of automated text analysis approaches in the context of the Dutch language. Particularly in dictionary-based approaches, research on Dutch is far less prolific than research on the English language. We divide the most common types of content-analytical research questions into three categories: 1) research problems for which automated methods ought to be used, 2) research problems for which automated methods could be used, and 3) research problems for which automated methods (currently) cannot be used. Finally, we give suggestions for the advancement of automated text analysis approaches for Dutch texts.

Keywords: automated content analysis, Dutch, dictionaries, supervised machine learning, unsupervised machine learning


2018, Vol 220, pp. 254-261
Author(s):  
Marie Chandelier ◽  
Agnès Steuckardt ◽  
Raphaël Mathevet ◽  
Sascha Diwersy ◽  
Olivier Gimenez

2020, Vol 110 (S3), pp. S331-S339

Author(s):  
Amelia Jamison
David A. Broniatowski
Michael C. Smith
Kajal S. Parikh
Adeena Malik
...  

Objectives. To adapt and extend an existing typology of vaccine misinformation to classify the major topics of discussion across the total vaccine discourse on Twitter. Methods. Using 1.8 million vaccine-relevant tweets compiled from 2014 to 2017, we adapted an existing typology to Twitter data, first in a manual content analysis and then using latent Dirichlet allocation (LDA) topic modeling to extract 100 topics from the data set. Results. Manual annotation identified 22% of the data set as antivaccine, of which safety concerns and conspiracies were the most common themes. Seventeen percent of content was identified as provaccine, with roughly equal proportions of vaccine promotion, criticizing antivaccine beliefs, and vaccine safety and effectiveness. Of the 100 LDA topics, 48 contained provaccine sentiment and 28 contained antivaccine sentiment, with 9 containing both. Conclusions. Our updated typology successfully combines manual annotation with machine-learning methods to estimate the distribution of vaccine arguments, with greater detail on the most distinctive topics of discussion. With this information, communication efforts can be developed to better promote vaccines and avoid amplifying antivaccine rhetoric on Twitter.
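Validating automated labels against manual annotation as the benchmark, as in the study above, is typically summarized with precision, recall, and F1. A brief Python sketch with invented labels (not the study's data) shows how these are computed:

```python
# Comparing automated labels against manual coding as the gold standard.
# The labels below are made up for illustration only.
from sklearn.metrics import precision_score, recall_score, f1_score

manual    = [1, 0, 1, 1, 0, 0, 1, 0]  # human coders' labels (gold standard)
automated = [1, 0, 1, 0, 0, 1, 1, 0]  # classifier output for the same items

# Of 4 items the classifier flagged, 3 match the manual coding.
print(precision_score(manual, automated))  # 0.75
# Of 4 manually coded positives, 3 were recovered by the classifier.
print(recall_score(manual, automated))     # 0.75
# F1 is the harmonic mean of precision and recall.
print(f1_score(manual, automated))         # 0.75
```

Note that such validation is only as good as the manual gold standard itself: imperfect human annotations can distort all three metrics.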


2020

Author(s):  
Sicheng Zhou
Yunpeng Zhao
Jiang Bian
Ann F Haynos
Rui Zhang

Background. Eating disorders (EDs) are a group of mental illnesses that have an adverse effect on both mental and physical health. As social media platforms (eg, Twitter) have become an important data source for public health research, some studies have qualitatively explored the ways in which EDs are discussed on these platforms. Initial results suggest that such research offers a promising method for further understanding this group of diseases. Nevertheless, an efficient computational method is needed to further identify and analyze tweets relevant to EDs on a larger scale. Objective. This study aims to develop and validate a machine learning–based classifier to identify tweets related to EDs and to explore factors (ie, topics) related to EDs using a topic modeling method. Methods. We collected potential ED-relevant tweets using keywords from previous studies and annotated these tweets into different groups (ie, ED relevant vs irrelevant and then promotional information vs laypeople discussion). Several supervised machine learning methods, such as convolutional neural network (CNN), long short-term memory (LSTM), support vector machine, and naïve Bayes, were developed and evaluated using annotated data. We used the classifier with the best performance to identify ED-relevant tweets and applied a topic modeling method—Correlation Explanation (CorEx)—to analyze the content of the identified tweets. To validate these machine learning results, we also collected a cohort of ED-relevant tweets on the basis of manually curated rules. Results. A total of 123,977 tweets were collected during the set period. We randomly annotated 2219 tweets for developing the machine learning classifiers. We developed a CNN-LSTM classifier to identify ED-relevant tweets published by laypeople in 2 steps: first relevant versus irrelevant (F1 score=0.89) and then promotional versus published by laypeople (F1 score=0.90). A total of 40,790 ED-relevant tweets were identified using the CNN-LSTM classifier. We also identified another set of tweets (ie, 17,632 ED-relevant and 83,557 ED-irrelevant tweets) posted by laypeople using manually specified rules. Using CorEx on all ED-relevant tweets, the topic model identified 162 topics. Overall, the coherence rate for topic modeling was 77.07% (1264/1640), indicating a high quality of the produced topics. The topics were further reviewed and analyzed by a domain expert. Conclusions. A developed CNN-LSTM classifier could improve the efficiency of identifying ED-relevant tweets compared with the traditional manual-based method. The CorEx topic model was applied on the tweets identified by the machine learning–based classifier and the traditional manual approach separately. Highly overlapping topics were observed between the 2 cohorts of tweets. The produced topics were further reviewed by a domain expert. Some of the topics identified by the potential ED tweets may provide new avenues for understanding this serious set of disorders.
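The supervised classification step described above can be illustrated with naïve Bayes, one of the baseline methods the study compares. The labeled examples below are invented for demonstration and bear no relation to the study's annotated data or its CNN-LSTM architecture.

```python
# Minimal supervised text classification sketch (naïve Bayes baseline).
# Toy hand-labeled data: 1 = ED-relevant, 0 = irrelevant (invented examples).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = [
    "struggling with anorexia recovery today",
    "binge eating again and feeling guilty",
    "new restaurant opened downtown, great pizza",
    "match day! big game tonight",
    "relapsed on my eating disorder behaviors",
    "traffic was terrible on my commute",
]
labels = [1, 1, 0, 0, 1, 0]

# TF-IDF features feed a multinomial naïve Bayes classifier.
clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(texts, labels)

# Predict relevance for unseen tweets: one 0/1 label per input.
preds = clf.predict([
    "eating disorder recovery is hard",
    "the game went to overtime",
])
print(preds)
```

In practice, as the abstract describes, the annotated data would be far larger (2219 tweets) and the best of several competing models would be selected on held-out performance before classifying the full corpus.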


doi:10.2196/18273, 2020, Vol 8 (10), pp. e18273

