Natural language processing and machine learning methods in public health surveillance: a narrative review (Preprint)

2020
Author(s):  
Patrick James Ward ◽  
April M Young

BACKGROUND Public health surveillance is critical to detecting emerging population health threats and improvements. Surveillance data have increased in size and complexity, posing challenges to data management and analysis. Natural language processing (NLP) and machine learning (ML) are valuable tools for analyzing unstructured, free-text data and have been used in innovative ways to examine a variety of health outcomes. OBJECTIVE Given the cross-disciplinary applications of NLP and ML, research on their use in surveillance has been disseminated in a variety of outlets. The aim of this narrative review was therefore to describe the current state of NLP and ML use in surveillance science and to identify directions for future research. METHODS Information was abstracted from articles, identified through a PubMed search, describing the use of natural language processing and machine learning in public health surveillance. RESULTS Twenty-two articles met review criteria, 12 involving traditional surveillance data sources and 10 involving online media sources. Traditional surveillance sources analyzed with NLP and ML consisted primarily of death certificates (n=6) and hospital data (n=5); among the online media studies, sources such as Twitter were the most common (n=8). CONCLUSIONS The reviewed articles demonstrate the potential of NLP and ML to enhance surveillance by improving timeliness, identifying cases in the absence of standardized case definitions, and enabling mining of social media for public health surveillance.

2021
Author(s):  
Jillian RYAN ◽  
Hamza Sellak ◽  
Emily Brindal

BACKGROUND Natural language processing is a machine learning technique that uses intelligent computer algorithms to detect patterns and themes in unstructured datasets, commonly those containing text data. Machine learning can aid with understanding the impacts of novel and disruptive events, and therefore offers myriad public health applications. OBJECTIVE This study aims to explore community sentiment towards COVID-19 and the nature of the impacts that COVID-19 has had on people, using natural language processing on a linked research dataset. METHODS Stanford CoreNLP was used to detect sentiment in qualitative COVID-19 impact stories from 3,483 Australian adults. Common themes were categorised according to the Theoretical Life Domains framework, and a multinomial regression analysis was conducted to identify psychological and demographic predictors of sentiment. RESULTS About one-third of participants (33%) expressed negative sentiment towards COVID-19, while a further 44% expressed neutral sentiment and 23% expressed positive sentiment. Of the Theoretical Life Domains, behavioural regulation was by far the most commonly impacted life domain, followed by environmental context and resources, emotion, and social influences. Negative sentiment was predicted by financial stress and lower subjective wellbeing. CONCLUSIONS COVID-19 and its containment measures have had dramatic impacts on Australian adults. The ability to regulate health and social behaviours was among the most common impacts, raising concerns about the effects of public health crises on chronic health and mental health conditions. Positive effects of COVID-19, related to greater flexibility in working arrangements and reductions in life 'busyness', were also documented. CLINICALTRIAL N/A
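The multinomial regression step described above can be sketched as follows. This is a minimal illustration on synthetic data, not the study's dataset: the `financial_stress` and `wellbeing` predictors, their scales, and the simulated relationship to sentiment are all assumptions chosen only to mirror the reported direction of association.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
financial_stress = rng.uniform(0, 10, n)  # hypothetical 0-10 scale
wellbeing = rng.uniform(0, 10, n)         # hypothetical 0-10 scale

# Simulate the reported pattern: higher stress and lower wellbeing
# push the latent score towards negative sentiment.
score = 0.5 * financial_stress - 0.4 * wellbeing + rng.normal(0, 1, n)
sentiment = np.digitize(score, [-1.0, 1.5])  # 0=positive, 1=neutral, 2=negative

X = np.column_stack([financial_stress, wellbeing])
# scikit-learn's default lbfgs solver fits a multinomial model for 3+ classes
model = LogisticRegression(max_iter=1000).fit(X, sentiment)
print(model.score(X, sentiment))
```

The fitted per-class coefficients then indicate, for each predictor, which sentiment category it makes more likely, which is the kind of evidence behind "negative sentiment was predicted by financial stress."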


2021
Vol 17 (1)
pp. 39-52
Author(s):  
Aditya Kamleshbhai Lakkad ◽  
Rushit Dharmendrabhai Bhadaniya ◽  
Vraj Nareshkumar Shah ◽  
Lavanya K.

The explosive growth of news content generated worldwide, coupled with its spread through online media and rapid access to data, has made monitoring and screening news tedious. There is a growing need for a model that can preprocess, analyze, and organize news content to extract interpretable information, specifically by recognizing topics and producing content-driven groupings of articles. This paper proposes automated analysis of heterogeneous news through complex event processing (CEP) and machine learning (ML) algorithms. News content is first streamed using Apache Kafka and stored in Apache Druid, then processed by a blend of natural language processing (NLP) and unsupervised machine learning (ML) techniques.
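The unsupervised grouping step can be sketched with TF-IDF features and k-means clustering. This is an illustrative stand-in, not the paper's pipeline: Kafka/Druid ingestion is out of scope here, and `headlines` is an invented list standing in for the streamed news content.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

headlines = [
    "Flu outbreak spreads across the city",
    "City hospitals report rising flu cases",
    "Stock markets rally on strong earnings",
    "Markets close higher as tech stocks rally",
]

# TF-IDF turns each headline into a sparse term-weight vector;
# k-means then groups similar vectors into content-driven clusters.
X = TfidfVectorizer(stop_words="english").fit_transform(headlines)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)
```

Headlines sharing vocabulary (the two flu stories, the two market stories) land in the same cluster, which is the "content-driven groupings of articles" the abstract calls for.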


2019
Vol 26 (11)
pp. 1355-1359
Author(s):  
Joshua Feldman ◽  
Andrea Thomas-Bachli ◽  
Jack Forsyth ◽  
Zaki Hasnain Patel ◽  
Kamran Khan

Abstract Objective We assessed whether machine learning can be used to efficiently extract infectious disease activity information from online media reports. Materials and Methods We curated a data set of labeled media reports (n = 8322) indicating which articles contain updates about disease activity, and trained a classifier on this data set. To validate our system, we used a held-out test set and compared our articles to the World Health Organization (WHO) Disease Outbreak News reports. Results Our classifier achieved a recall and precision of 88.8% and 86.1%, respectively. The overall surveillance system detected 94% of the outbreaks identified by the WHO that were covered by online media (89%), and did so 43.4 days earlier on average (IQR: 9.5–61). Discussion We constructed a global real-time disease activity database surveilling 114 illnesses and syndromes. We must further assess our system for bias, representativeness, granularity, and accuracy. Conclusion Machine learning, natural language processing, and human expertise can be used to efficiently identify disease activity from digital media reports.
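The labeled-report classification and evaluation described above can be sketched as follows. This is a toy illustration under stated assumptions: the report snippets are invented, the feature/model choice (TF-IDF plus logistic regression) is a common baseline rather than the authors' actual architecture, and precision/recall are computed exactly as in the abstract's evaluation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score

# Invented stand-ins for labeled media reports: 1 = contains a
# disease-activity update, 0 = does not.
train_texts = [
    "New measles cases confirmed in the province",
    "Cholera outbreak reported after flooding",
    "Health ministry announces new hospital funding",
    "Conference on vaccine policy opens today",
]
train_labels = [1, 1, 0, 0]

vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(train_texts), train_labels)

# Held-out examples, scored with the same precision/recall metrics
test_texts = [
    "Officials confirmed new cholera cases this week",
    "Hospital funding bill passes parliament",
]
true_labels = [1, 0]
pred = clf.predict(vec.transform(test_texts))
print(precision_score(true_labels, pred), recall_score(true_labels, pred))
```

Recall here measures the share of true disease-activity reports the classifier catches; precision measures how many flagged reports are genuine, matching the 88.8%/86.1% figures' definitions.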


Author(s):  
Rohan Pandey ◽  
Vaibhav Gautam ◽  
Ridam Pal ◽  
Harsh Bandhey ◽  
Lovedeep Singh Dhingra ◽  
...  

BACKGROUND The COVID-19 pandemic has uncovered the potential of digital misinformation to shape the health of nations. The deluge of unverified information that spreads faster than the epidemic itself is an unprecedented phenomenon that has put millions of lives in danger. Mitigating this 'infodemic' requires strong health messaging systems that are engaging, vernacular, scalable, and effective, and that continuously learn new patterns of misinformation. OBJECTIVE We created WashKaro, a multi-pronged intervention for mitigating misinformation through conversational AI, machine translation, and natural language processing. WashKaro provides the right information, matched against WHO guidelines through AI, and delivers it in the right format in local languages. METHODS We theorize (i) an NLP-based AI engine that continuously incorporates user feedback to improve the relevance of information, (ii) bite-sized audio in the local language to improve penetrance in a country with skewed gender literacy ratios, and (iii) conversational, interactive AI engagement with users to increase health awareness in the community. RESULTS A total of 5026 people downloaded the app during the study window; of these, 1545 were active users. Our study shows that 3.4 times more females than males engaged with the app in Hindi, the relevance of AI-filtered news content doubled within 45 days of continuous machine learning, and the prudence of the integrated AI chatbot "Satya" increased, demonstrating the usefulness of an mHealth platform for mitigating health misinformation. CONCLUSIONS We conclude that a multi-pronged machine learning application delivering vernacular bite-sized audios and conversational AI is an effective approach to mitigate health misinformation. CLINICALTRIAL Not Applicable
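The core matching step, pairing a user query with the most relevant guideline text, can be sketched with TF-IDF cosine similarity. This is a hedged illustration only: the `guidelines` snippets and the `best_match` helper are hypothetical stand-ins, not WashKaro's actual engine or the WHO's wording.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical guideline snippets standing in for the curated corpus
guidelines = [
    "Wash hands frequently with soap and water for at least 20 seconds",
    "Maintain physical distance from people who are coughing or sneezing",
    "Wear a mask in crowded indoor settings",
]

vec = TfidfVectorizer().fit(guidelines)
G = vec.transform(guidelines)

def best_match(query: str) -> str:
    """Return the guideline snippet most similar to the user's query."""
    sims = cosine_similarity(vec.transform([query]), G)[0]
    return guidelines[sims.argmax()]

print(best_match("how long should I wash my hands"))
```

A retrieved snippet like this would then be translated and rendered as local-language audio; the retrieval ranking is also the natural place to fold in the user feedback the METHODS section describes.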

