Clinical Text Classification with Word Embedding Features vs. Bag-of-Words Features

Author(s):  
Yijun Shao ◽  
Stephanie Taylor ◽  
Nell Marshall ◽  
Craig Morioka ◽  
Qing Zeng-Treitler
PLoS ONE ◽  
2020 ◽  
Vol 15 (5) ◽  
pp. e0232525
Author(s):  
Yaakov HaCohen-Kerner ◽  
Daniel Miller ◽  
Yair Yigal

Author(s):  
S Hasanzadeh ◽  
S M Fakhrahmad ◽  
M Taheri

Abstract Recommender systems play an important role in providing helpful information to users, especially in e-commerce applications. Many of the proposed models use users' rating histories to predict unknown ratings. Recently, user reviews have attracted researchers' attention as a valuable source of knowledge, and a new category of review-based recommender systems has emerged. In this study, we use the information contained in user reviews, together with the available rating scores, to develop a review-based rating prediction system. The proposed scheme addresses the uncertainty in rating histories by fuzzifying the given ratings. Another advantage of the proposed system is its use of a word embedding representation of textual reviews instead of traditional models such as the binary bag-of-words and TF-IDF vector spaces. It also makes use of helpfulness voting scores to prune the data and achieve better results. The effectiveness of the rating prediction scheme, as well as of the final recommender system, was evaluated on the Amazon dataset. Experimental results revealed that the proposed recommender system outperforms its counterparts and can serve as a suitable tool in e-commerce environments.
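The contrast the abstract draws between bag-of-words/TF-IDF vectors and word embedding representations can be sketched with a toy example. The 3-dimensional "embeddings" below are made-up illustrative values, not trained vectors; real systems would use trained embeddings (e.g. word2vec or GloVe) of 100–300 dimensions.

```python
# A bag-of-words vector counts vocabulary words (word order and semantics
# are lost); an averaged embedding vector places the review in a dense
# semantic space where related words are close together.

VOCAB = ["great", "battery", "poor", "screen"]

# Hypothetical toy embeddings, for illustration only.
EMBEDDINGS = {
    "great":   [0.9, 0.1, 0.0],
    "battery": [0.0, 0.8, 0.2],
    "poor":    [-0.9, 0.1, 0.0],
    "screen":  [0.0, 0.7, 0.3],
}

def bag_of_words(tokens):
    """Count occurrences of each vocabulary word."""
    return [tokens.count(w) for w in VOCAB]

def mean_embedding(tokens):
    """Average the embedding vectors of the known tokens."""
    known = [EMBEDDINGS[t] for t in tokens if t in EMBEDDINGS]
    n = len(known)
    return [sum(v[i] for v in known) / n for i in range(3)]

review = ["great", "battery", "great", "screen"]
print(bag_of_words(review))    # [2, 1, 0, 1]
print(mean_embedding(review))
```

Unlike the sparse count vector, the averaged embedding stays the same length regardless of vocabulary size, which is one reason review-based systems favor it.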


Author(s):  
Muhammad Zulqarnain ◽  
Rozaida Ghazali ◽  
Muhammad Ghulam Ghouse ◽  
Muhammad Faheem Mushtaq

Text classification has become a serious problem for large organizations that must manage large amounts of online data, and it has been extensively applied in Natural Language Processing (NLP) tasks. Text classification helps users manage and exploit meaningful information that must be classified into various categories for further use. To classify texts as well as possible, our research develops a deep learning approach that achieves better text classification performance than other RNN approaches. The main challenges in text classification are improving classification accuracy and coping with the sparsity and context sensitivity of data semantics, which often hinder classification performance. To overcome these weaknesses, in this paper we propose a unified structure to investigate the effects of word embeddings and the Gated Recurrent Unit (GRU) for text classification on two benchmark datasets (Google Snippets and TREC). The GRU is a well-known type of recurrent neural network (RNN) that is able to process sequential data through its recurrent architecture. Empirically, semantically related words tend to lie near each other in embedding spaces. First, words in posts are converted into vectors via a word embedding technique. Then, the word sequences in sentences are fed to the GRU to extract the contextual semantics between words. The experimental results showed that the proposed GRU model can effectively learn word usage in the context of texts, given training data; the quantity and quality of the training data significantly affect performance. We compared the proposed approach with traditional recurrent approaches (RNN, MV-RNN, and LSTM); it obtains better results on both benchmark datasets in terms of accuracy and error rate.
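The GRU's recurrent update, which the abstract relies on, can be made concrete with a single scalar-state step using the standard GRU gate equations (update gate, reset gate, candidate state). The weights below are hand-picked hypothetical values for illustration, not trained parameters, and a real model would use vector states and matrix weights.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h, p):
    """One GRU step for scalar input x and hidden state h."""
    z = sigmoid(p["wz"] * x + p["uz"] * h + p["bz"])                # update gate
    r = sigmoid(p["wr"] * x + p["ur"] * h + p["br"])                # reset gate
    h_tilde = math.tanh(p["wh"] * x + p["uh"] * (r * h) + p["bh"])  # candidate state
    return (1.0 - z) * h + z * h_tilde                              # interpolated new state

# Hypothetical weights, purely for illustration.
params = {"wz": 0.5, "uz": 0.5, "bz": 0.0,
          "wr": 0.5, "ur": 0.5, "br": 0.0,
          "wh": 1.0, "uh": 1.0, "bh": 0.0}

h = 0.0
for x in [1.0, -0.5, 0.3]:  # a toy "sentence" of embedded word values
    h = gru_step(x, h, params)
print(h)  # final hidden state summarizing the sequence
```

The final hidden state plays the role of the sentence representation that a downstream classification layer would consume.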


Author(s):  
Hung D. Nguyen ◽  
Tru H. Cao

Electronic medical records (EMRs) have emerged as an important source of data for research in medicine and information technology, as they contain much valuable human medical knowledge about healthcare and patient treatment. This paper tackles the problem of coreference resolution in Vietnamese EMRs. Unlike in English clinical texts, in Vietnamese clinical texts verbs are often used to describe disease symptoms. We therefore first define rules to annotate verbs as mentions and allow coreference between verbs and other noun or adjective mentions. We then propose a support vector machine classifier on a bag-of-words vector representation of mentions that takes into account the special characteristics of the Vietnamese language to resolve their coreference. The achieved F1 score on our dataset of real Vietnamese EMRs, provided by a hospital in Ho Chi Minh City, is 91.4%. To the best of our knowledge, this is the first research work on coreference resolution for Vietnamese clinical texts.
Keywords: Clinical text, support vector machine, bag-of-words vector, lexical similarity, unrestricted coreference
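One way a bag-of-words representation feeds a lexical-similarity feature for a mention pair, as the keywords above suggest, is cosine similarity between the two mentions' count vectors. This is a minimal sketch of such a feature, with hypothetical English tokens standing in for Vietnamese ones; the paper's actual feature set is not specified here.

```python
import math

def bow(tokens, vocab):
    """Bag-of-words count vector over a fixed vocabulary."""
    return [tokens.count(w) for w in vocab]

def cosine(u, v):
    """Cosine similarity between two count vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical mention pair from a clinical note.
m1 = ["chest", "pain"]
m2 = ["pain", "in", "chest"]
vocab = sorted(set(m1) | set(m2))

feature = cosine(bow(m1, vocab), bow(m2, vocab))
print(round(feature, 3))  # 0.816
```

In a pairwise coreference classifier, a feature like this would be concatenated with other mention-pair features and passed to the SVM to decide whether the two mentions corefer.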

