Adapting for Informal Language in Arabic Twitter Improves Monitoring of COVID-19 Pandemic and Influenza Epidemic (Preprint)

2021 ◽  
Author(s):  
Lama Alsudias ◽  
Paul Rayson

BACKGROUND Twitter is a real-time messaging platform widely used by people and organisations to share information on many topics. Analysing tweets could potentially be useful for infectious disease monitoring, reducing reporting lag time and providing an independent, complementary source of data compared to traditional approaches. However, such analysis is currently not possible in the Arabic-speaking world due to a lack of basic building blocks for research. OBJECTIVE We collect around 4,000 Arabic tweets related to COVID-19 and influenza. We clean and label the tweets relative to the Arabic Infectious Diseases Ontology, which includes non-standard terminology, 11 core concepts, and 21 relations. The aim of this study is to analyse Arabic tweets to estimate their usefulness for health surveillance, understand the impact of informal terms on the analysis, show the effect of deep learning methods in the classification process, and identify the locations where the infection is spreading. METHODS We apply multi-label classification techniques: Binary Relevance, Classifier Chains, Label Powerset, Adapted Algorithm (MLkNN), NBSVM, BERT, and AraBERT to identify infected people. We also use Named Entity Recognition to predict the locations affected. RESULTS We achieve an F1-score of up to 88% in the influenza case study and 94% in the COVID-19 one. Adapting for non-standard terminology and informal language helps to improve accuracy by as much as 15%, with an average improvement of 8%. Deep learning methods achieve a Hamming loss of around 5% during classification. Our geo-location detection algorithm predicts the location of users from tweet content with 54% accuracy on average. CONCLUSIONS This study identifies two Arabic social media datasets for monitoring tweets related to influenza and COVID-19.
It demonstrates the importance of including informal terms, which are regularly used by social media users, in the analysis. It also shows that BERT achieves good results when used with new terms in COVID-19 tweets. Finally, tweet content may contain useful information for determining the location of disease spread.
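One of the multi-label techniques named in the abstract, Binary Relevance, trains an independent binary classifier per label. A minimal sketch with scikit-learn, using invented placeholder tweets and toy concept labels rather than the study's ontology-based data:

```python
# Binary Relevance multi-label classification sketch (not the authors' code).
# Each label column gets its own binary classifier via MultiOutputClassifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier
from sklearn.pipeline import make_pipeline

texts = [
    "I have a fever and a cough today",
    "My brother caught the flu last week",
    "Lovely weather in Riyadh this morning",
    "Feeling sick, staying home from work",
]
# Toy label columns: [mentions_infection, mentions_other_person].
labels = [[1, 0], [1, 1], [0, 0], [1, 0]]

clf = make_pipeline(
    TfidfVectorizer(),
    MultiOutputClassifier(LogisticRegression(max_iter=1000)),
)
clf.fit(texts, labels)
pred = clf.predict(["I think I am sick with the flu"])
print(pred.shape)  # one row, one binary prediction per label
```

Classifier Chains differ only in feeding earlier labels' predictions as extra features to later classifiers; scikit-learn's `ClassifierChain` is a drop-in alternative here.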

2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Yahya Albalawi ◽  
Jim Buckley ◽  
Nikola S. Nikolov

Abstract This paper presents a comprehensive evaluation of data pre-processing and word-embedding techniques in the context of Arabic document classification in the domain of health-related communication on social media. We evaluate 26 text pre-processing techniques applied to Arabic tweets within the process of training a classifier to identify health-related tweets. For this task we use the (traditional) machine learning classifiers KNN, SVM, Multinomial NB, and Logistic Regression. Furthermore, we report experimental results with the deep learning architectures BLSTM and CNN for the same text classification problem. Since word embeddings are more typically used as the input layer in deep networks, in the deep learning experiments we evaluate several state-of-the-art pre-trained word embeddings with the same text pre-processing applied. To achieve these goals, we use two data sets: one for both training and testing, and another for testing the generality of our models only. Our results point to the conclusion that only four of the 26 pre-processing techniques improve classification accuracy significantly. For the first data set of Arabic tweets, we found that Mazajak CBOW pre-trained word embeddings as the input to a BLSTM deep network led to the most accurate classifier, with an F1 score of 89.7%. For the second data set, Mazajak Skip-Gram pre-trained word embeddings as the input to a BLSTM led to the most accurate model, with an F1 score of 75.2% and accuracy of 90.7%, compared to an F1 score of 90.8% but a lower accuracy of 70.89% achieved by Mazajak CBOW with the same architecture. Our results also show that the performance of the best traditional classifier we trained is comparable to that of the deep learning methods on the first data set, but significantly worse on the second.
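The evaluation loop the paper describes — apply each candidate pre-processing, train the same classifier, compare scores — can be sketched as follows. The texts, labels, and the two pre-processings here are toy stand-ins, not the paper's 26 techniques or its Arabic data:

```python
# Sketch of a pre-processing comparison loop (illustrative data only).
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

texts = ["new vaccine clinic opens", "free health checkup today",
         "great football match!!!", "concert tickets on sale!!!"] * 5
labels = [1, 1, 0, 0] * 5  # 1 = health-related

preprocessings = {
    "identity": lambda t: t,
    "strip_punct": lambda t: re.sub(r"[^\w\s]", " ", t),
}

scores = {}
for name, fn in preprocessings.items():
    X = [fn(t) for t in texts]
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(X, labels)
    scores[name] = f1_score(labels, clf.predict(X))  # resubstitution, for brevity
print(scores)
```

A faithful reproduction would hold out a test split (and a second data set for generality, as the paper does) rather than scoring on the training data.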


Cancers ◽  
2021 ◽  
Vol 13 (11) ◽  
pp. 2764
Author(s):  
Xin Yu Liew ◽  
Nazia Hameed ◽  
Jeremie Clos

A computer-aided diagnosis (CAD) expert system is a powerful tool to efficiently assist a pathologist in achieving an early diagnosis of breast cancer. This process identifies the presence of cancer in breast tissue samples and the distinct cancer stage. In a standard CAD system, the main process involves image pre-processing, segmentation, feature extraction, feature selection, classification, and performance evaluation. In this review paper, we survey the existing state-of-the-art machine learning approaches applied at each stage, covering both conventional and deep learning methods, compare the methods, and provide technical details with their advantages and disadvantages. The aims are to investigate the impact of CAD systems using histopathology images, to identify deep learning methods that outperform conventional methods, and to provide a summary for future researchers to analyse and improve the existing techniques. Lastly, we discuss the research gaps in existing machine learning approaches and propose future direction guidelines for upcoming researchers.
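The staged CAD pipeline the review enumerates maps naturally onto a scikit-learn `Pipeline`. A minimal sketch of the feature selection → classification → evaluation stages, on synthetic feature vectors standing in for features already extracted from tissue images:

```python
# Sketch of the back half of a conventional CAD pipeline
# (feature selection, classification, evaluation) on synthetic data.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))           # 30 extracted features per sample
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # toy ground truth from 2 features

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
cad = Pipeline([
    ("select", SelectKBest(f_classif, k=5)),  # feature selection stage
    ("clf", SVC()),                           # classification stage
])
cad.fit(X_tr, y_tr)
acc = accuracy_score(y_te, cad.predict(X_te))  # performance evaluation stage
print(round(acc, 2))
```

In the deep learning methods the review compares, the pre-processing, segmentation, and feature-extraction stages are typically absorbed into the network itself.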


Diagnostics ◽  
2021 ◽  
Vol 11 (9) ◽  
pp. 1672
Author(s):  
Luya Lian ◽  
Tianer Zhu ◽  
Fudong Zhu ◽  
Haihua Zhu

Objectives: Deep learning methods have achieved impressive diagnostic performance in the field of radiology. The current study aimed to use deep learning methods to detect caries lesions, classify different radiographic extensions on panoramic films, and compare the classification results with those of expert dentists. Methods: A total of 1160 dental panoramic films were evaluated by three expert dentists. All caries lesions in the films were marked with circles, whose combination was defined as the reference dataset. A training and validation dataset (1071 films) and a test dataset (89 films) were then established from the reference dataset. A convolutional neural network, nnU-Net, was applied to detect caries lesions, and DenseNet121 was applied to classify the lesions according to their depths (lesions in the outer, middle, or inner third of dentin: D1/D2/D3). Performance on the test dataset with the trained nnU-Net and DenseNet121 models was compared with the results of six expert dentists in terms of the intersection over union (IoU), Dice coefficient, accuracy, precision, recall, negative predictive value (NPV), and F1-score metrics. Results: nnU-Net yielded caries lesion segmentation IoU and Dice coefficient values of 0.785 and 0.663, respectively, and the accuracy and recall rate of nnU-Net were 0.986 and 0.821, respectively. The results of the expert dentists and the neural network showed no difference in terms of accuracy, precision, recall, NPV, and F1-score. For caries depth classification, DenseNet121 showed an overall accuracy of 0.957 for D1 lesions, 0.832 for D2 lesions, and 0.863 for D3 lesions. The recall results for the D1/D2/D3 lesions were 0.765, 0.652, and 0.918, respectively. All metric values, including accuracy, precision, recall, NPV, and F1-score, were shown to be no different from those of the experienced dentists.
Conclusion: In detecting and classifying caries lesions on dental panoramic radiographs, the performance of deep learning methods was similar to that of expert dentists. The impact of applying these well-trained neural networks for disease diagnosis and treatment decision making should be explored.
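The two segmentation metrics reported for nnU-Net, IoU and the Dice coefficient, can be computed directly from binary masks. A minimal sketch on toy masks (not radiograph data):

```python
# IoU and Dice coefficient on binary segmentation masks (toy example).
import numpy as np

def iou(pred, target):
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0

def dice(pred, target):
    inter = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    return 2 * inter / total if total else 1.0

pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
print(iou(pred, target), dice(pred, target))  # 0.5 and 2/3
```

Since Dice = 2·IoU/(1+IoU), Dice is always at least as large as IoU on a single mask pair; the abstract's 0.785 IoU vs. 0.663 Dice therefore suggests the two figures were averaged over different sets of cases.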


In this digitized world, the Internet has become a prominent source for gleaning various kinds of information. In today's scenario, people often prefer virtual interaction to one-to-one communication. The majority of the population prefers social networking sites to voice themselves through posts, blogs, comments, likes, and dislikes. Their sentiments can be traced using opinion mining, or sentiment analysis. Sentiment analysis of social media text is a useful technique for identifying people's positive, negative, or neutral emotions, sentiments, and opinions. Sentiment analysis has gained special attention from researchers over the last few years. Traditionally, many machine learning algorithms were used to implement it, such as Naive Bayes, Support Vector Machines, and others. To overcome the drawbacks of these machine learning approaches, in terms of complex classification algorithms, various deep learning-based algorithms have been introduced, such as CNN, RNN, and HNN. In this paper, we study different deep learning algorithms and propose a deep learning-based model to analyse the behaviour of an individual using social media text. The results given by the proposed model can be utilized in a range of fields, such as business, education, industry, politics, psychology, and security.
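The traditional baseline the paragraph mentions, Naive Bayes, can be sketched in a few lines with scikit-learn. The posts and three-way sentiment labels below are invented for illustration:

```python
# Naive Bayes sentiment baseline sketch (illustrative toy data).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

posts = ["i love this phone", "what a great day",
         "this is terrible service", "i hate waiting",
         "the meeting is at noon", "the report is attached"]
sentiments = ["pos", "pos", "neg", "neg", "neu", "neu"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(posts, sentiments)
pred = model.predict(["i love great service"])[0]
print(pred)
```

The deep models the paper studies (CNN, RNN) replace the bag-of-words counts with learned embeddings and sequence-aware layers, which is what lets them capture word order and context that this baseline ignores.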


2020 ◽  
Vol 10 (22) ◽  
pp. 8035
Author(s):  
Jenq-Haur Wang ◽  
Ting-Wei Liu ◽  
Xiong Luo

With the wide popularity of social media, it is becoming more convenient for people to express their opinions online. To better understand what the public thinks about a topic, sentiment classification techniques have been widely used to estimate the overall orientation of opinions in post contents. However, users might have various degrees of influence depending on their participation in discussions on different topics. In this paper, we address the issue of combining sentiment classification and link analysis techniques for extracting public stances from social media. Since social media posts are usually very short, word embedding models are first used to learn different word usages in various contexts. Then, deep learning methods such as Long Short-Term Memory (LSTM) are used to learn the long-distance context dependency among words for better estimation of sentiments. Third, we account for the major user participation in popular social media by adjusting users' weights to reflect their relative influence in user-post interaction graphs. Finally, we combine post sentiments and user influences into a total opinion score for extracting public stances. In the experiments, we evaluated the performance of our proposed approach on tweets about the 2016 U.S. Presidential Election. The best performance of sentiment classification was observed with an F-measure of 72.97% for LSTM classifiers. This shows the effectiveness of deep learning methods in learning word usage in social media contexts. The experimental results on stance extraction showed a best performance of 0.68% Mean Absolute Error (MAE) in aggregating public stances on election candidates. This shows the potential of combining tweet sentiments and user participation structures for extracting the aggregate stances of the public on popular topics. Further investigation is needed to verify the performance on different social media sources.
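The final aggregation step described above — each post's sentiment weighted by its author's influence — can be sketched in plain Python. Here a normalized participation count stands in for the paper's graph-derived influence weights, and the posts are invented:

```python
# Sketch of combining post sentiments and user influence into an opinion
# score; influence here is a simple participation share, not the paper's
# graph-based weighting.
posts = [
    {"user": "a", "sentiment": +1.0},
    {"user": "a", "sentiment": +0.5},
    {"user": "b", "sentiment": -1.0},
]
participation = {"a": 2, "b": 1}  # posts per user in the discussion

total = sum(participation.values())
influence = {u: n / total for u, n in participation.items()}

opinion = sum(p["sentiment"] * influence[p["user"]] for p in posts)
print(round(opinion, 3))  # positive overall: user "a" dominates
```

Swapping the participation share for a centrality score on the user-post interaction graph would bring the sketch closer to the link-analysis weighting the paper combines with sentiment.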


2021 ◽  
Vol 3 (1) ◽  
pp. 243-262
Author(s):  
Antoine Pirovano ◽  
Hippolyte Heuberger ◽  
Sylvain Berlemont ◽  
Saïd Ladjal ◽  
Isabelle Bloch

Deep learning methods are widely used in medical applications to assist medical doctors in their daily routine. While performance reaches expert level, interpretability (highlighting how and what a trained model learned and why it makes a specific decision) is the next important challenge that deep learning methods need to answer to be fully integrated into the medical field. In this paper, we address the question of interpretability in the context of whole slide image (WSI) classification by formalizing the design of WSI classification architectures, and we propose a piece-wise interpretability approach relying on gradient-based methods, feature visualization, and the multiple instance learning context. After training two WSI classification architectures on the Camelyon-16 WSI dataset, highlighting the discriminative features learned, and validating our approach with pathologists, we propose a novel way of computing interpretability slide-level heat-maps, based on the extracted features, that improves tile-level classification performance. We measure the improvement using the tile-level AUC, which we call Localization AUC, and show an improvement of more than 0.2. We also validate our results with a RemOve And Retrain (ROAR) measure. Then, after studying the impact of the number of features used for heat-map computation, we propose a corrective approach, relying on activation colocalization of selected features, that improves the performance and stability of our proposed method.
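The general mechanics of a slide-level heat-map — per-tile scores assembled and upsampled into pixel space — can be sketched with NumPy. The random tile scores below stand in for the feature-derived scores the paper computes, which are omitted here:

```python
# Sketch of assembling a slide-level heat-map from per-tile scores.
# Random scores stand in for the paper's feature-based tile scores.
import numpy as np

rng = np.random.default_rng(0)
grid = (4, 6)                    # tiles per slide: rows x cols
tile_scores = rng.random(grid)   # one interpretability score per tile

# Upsample each tile score to its pixel footprint to form the heat-map.
tile_px = 32
heatmap = np.kron(tile_scores, np.ones((tile_px, tile_px)))
print(heatmap.shape)  # grid scaled up by the tile size
```

The paper's contribution lies in how the per-tile scores are derived (gradients and feature visualization within a multiple instance learning setup), not in this assembly step.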


Author(s):  
Marcel Bengs ◽  
Finn Behrendt ◽  
Julia Krüger ◽  
Roland Opfer ◽  
Alexander Schlaefer

Abstract Purpose Brain Magnetic Resonance Images (MRIs) are essential for the diagnosis of neurological diseases. Recently, deep learning methods for unsupervised anomaly detection (UAD) have been proposed for the analysis of brain MRI. These methods rely on healthy brain MRIs and eliminate the requirement for pixel-wise annotated data compared to supervised deep learning. While a wide range of methods for UAD have been proposed, these methods are mostly 2D and only learn from MRI slices, disregarding that brain lesions are inherently 3D, so the spatial context of MRI volumes remains unexploited. Methods We investigate whether increased spatial context, by using MRI volumes combined with spatial erasing, leads to improved unsupervised anomaly segmentation performance compared to learning from slices. We evaluate and compare 2D variational autoencoders (VAEs) to their 3D counterparts, propose 3D input erasing, and systematically study the impact of data set size on performance. Results Using two publicly available segmentation data sets for evaluation, 3D VAEs outperform their 2D counterparts, highlighting the advantage of volumetric context. Also, our 3D erasing methods allow for further performance improvements. Our best performing 3D VAE with input erasing leads to an average Dice score of 31.40%, compared to 25.76% for the 2D VAE. Conclusions We propose 3D deep learning methods for UAD in brain MRI combined with 3D erasing and demonstrate that 3D methods clearly outperform their 2D counterparts for anomaly segmentation. Also, our spatial erasing method allows for further performance improvements and reduces the requirement for large data sets.
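3D input erasing of the kind proposed above amounts to zeroing out a random cuboid inside the input volume before it reaches the VAE. A minimal sketch with NumPy; the volume size and erased fraction are illustrative, not the paper's settings:

```python
# Sketch of 3D input erasing: zero a random cuboid in an MRI-like volume.
# Sizes and the erased fraction are illustrative assumptions.
import numpy as np

def erase_3d(volume, frac=0.25, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    out = volume.copy()
    dz, dy, dx = (max(1, int(s * frac)) for s in volume.shape)
    z = rng.integers(0, volume.shape[0] - dz + 1)
    y = rng.integers(0, volume.shape[1] - dy + 1)
    x = rng.integers(0, volume.shape[2] - dx + 1)
    out[z:z + dz, y:y + dy, x:x + dx] = 0.0  # erase the cuboid
    return out

vol = np.ones((32, 32, 32))
erased = erase_3d(vol, frac=0.25, rng=np.random.default_rng(0))
print(vol.sum() - erased.sum())  # 8*8*8 = 512 voxels erased
```

Because the VAE is trained to reconstruct the full volume from the erased input, the network is forced to use surrounding 3D context, which is the mechanism behind the reported gains.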


Electronics ◽  
2020 ◽  
Vol 10 (1) ◽  
pp. 5
Author(s):  
Mudasir Ahmad Wani ◽  
Nancy Agarwal ◽  
Patrick Bours

The abundant dissemination of misinformation regarding coronavirus disease 2019 (COVID-19) presents another unprecedented issue to the world, alongside the health crisis. Online social network (OSN) platforms intensify this problem by allowing their users to easily distort and fabricate information and disseminate it farther and more rapidly. In this paper, we study the impact of misinformation with a religious inflection on the psychology and behavior of OSN users. The article presents a detailed study to understand the reaction of social media users when exposed to unverified content related to the Islamic community during the COVID-19 lockdown period in India. The analysis was carried out on Twitter users, where the data were collected using three scraping packages, Tweepy, Selenium, and Beautiful Soup, to cover more users affected by this misinformation. A labeled dataset was prepared in which each tweet is assigned one of four reaction polarities: E (endorse), D (deny), Q (question), and N (neutral). Analysis of the collected data was carried out in five phases, in which we investigate the engagement of E, D, Q, and N users, the tone of the tweets, and the consequences of repeated exposure to such information. The evidence demonstrates that the circulation of such content during the pandemic and lockdown phase made people more vulnerable to perceiving unreliable tweets as fact. It was also observed that people absorbed the negativity of the online content, which induced feelings of hatred, anger, distress, and fear among them. People with similar mindsets formed online groups and expressed negative attitudes toward other groups based on their opinions, indicating strong signals of social unrest and public tension in society. The paper also presents a deep learning-based stance detection model as an automated mechanism for tracking news on Twitter that is potentially false.
The stance classifier aims to predict the attitude of a tweet towards a news headline and thereby assists in determining the veracity of the news by monitoring the distribution of different user reactions towards it. The proposed model, employing deep learning (a convolutional neural network (CNN)) and sentence embedding (bidirectional encoder representations from transformers (BERT)) techniques, outperforms existing systems. The performance is evaluated on the benchmark SemEval stance dataset. Furthermore, a newly annotated dataset is prepared and released with this study to support research in this domain.
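The monitoring idea — inferring veracity from the distribution of E/D/Q/N reactions towards a headline — can be sketched with a simple tally. The reactions and the flagging threshold below are illustrative, not values from the paper:

```python
# Sketch of veracity monitoring from reaction-polarity distributions.
# The reaction list and 0.5 threshold are illustrative assumptions.
from collections import Counter

reactions = ["E", "D", "D", "Q", "N", "D", "Q"]  # stances toward one headline
counts = Counter(reactions)

doubt = counts["D"] + counts["Q"]
flagged = doubt / len(reactions) > 0.5  # flag when doubt dominates
print(counts, flagged)
```

In the paper's pipeline, the per-tweet labels feeding such a tally come from the CNN+BERT stance classifier rather than manual annotation, which is what makes the monitoring automatic.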

