Applying Natural Language Processing, Information Retrieval and Machine Learning to Decision Support in Medical Coordination in an Emergency Medicine Context

Author(s):  
Juliana Tarossi Pollettini ◽  
Hugo Cesar Pessotti ◽  
Antonio Pazin Filho ◽  
Evandro Eduardo Seron Ruiz ◽  
Mario Sergio Adolfi Junior
2012 ◽  
Vol 24 (2) ◽  
pp. 117-126 ◽  
Author(s):  
Mahmuda Rahman

Key words: Natural language processing; C4.5 classification; DSS; machine learning; KNN clustering; SVM
DOI: http://dx.doi.org/10.3329/bjsr.v24i2.10768
Bangladesh J. Sci. Res. 24(2):117-126, 2011 (December)


2019 ◽  
Vol 53 (2) ◽  
pp. 3-10
Author(s):  
Muthu Kumar Chandrasekaran ◽  
Philipp Mayr

The 4th joint BIRNDL workshop was held at the 42nd ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2019) in Paris, France. BIRNDL 2019 aimed to stimulate IR researchers and digital library professionals to elaborate on new approaches in natural language processing, information retrieval, scientometrics, and recommendation techniques that can advance the state of the art in scholarly document understanding, analysis, and retrieval at scale. The workshop comprised several paper sessions and the 5th edition of the CL-SciSumm Shared Task.


Author(s):  
Saravanakumar Kandasamy ◽  
Aswani Kumar Cherukuri

Quantifying semantic similarity between concepts is an essential component of domains such as Natural Language Processing, Information Retrieval, and Question Answering, where it helps systems better understand texts and their relationships. Over the last few decades, many measures have been proposed that incorporate various corpus-based and knowledge-based resources; WordNet and Wikipedia are two such knowledge-based resources. WordNet's contribution to these domains is substantial owing to its richness in defining a word and all of its relationships with other words. In this paper, we propose an approach to quantifying the similarity between concepts that exploits their synsets and gloss definitions in WordNet. Our method considers the gloss definitions, the contextual words that help define a word, the synsets of those contextual words, and the confidence of a word's occurrence in another word's definition when calculating similarity. Evaluation on different gold-standard benchmark datasets shows the efficiency of our system in comparison with existing taxonomical and definitional measures.
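The abstract describes the authors' own gloss-based measure, which is not reproduced here. As a minimal, illustrative sketch of the general idea only (a Lesk-style gloss-overlap similarity, not the paper's formula), one might compare the definition texts of two concepts; the tiny hand-built gloss dictionary and function names below are assumptions standing in for WordNet synsets and glosses:

```python
# Hypothetical Lesk-style gloss-overlap similarity sketch.
# GLOSSES is an invented stand-in for WordNet gloss definitions.

def tokenize(text):
    """Lowercase and split a gloss into a set of word tokens."""
    return set(text.lower().split())

GLOSSES = {
    "car": "a motor vehicle with four wheels used to carry people",
    "automobile": "a motor vehicle with wheels that carries people",
    "banana": "an elongated curved tropical fruit with soft pulp",
}

def gloss_similarity(a, b):
    """Jaccard overlap between the gloss definitions of two concepts."""
    ta, tb = tokenize(GLOSSES[a]), tokenize(GLOSSES[b])
    return len(ta & tb) / len(ta | tb)

# Concepts with similar definitions score higher than unrelated ones.
print(gloss_similarity("car", "automobile"))  # → 0.5
print(gloss_similarity("car", "banana") < gloss_similarity("car", "automobile"))
```

The paper's measure additionally weighs contextual words, their synsets, and occurrence confidence; this sketch captures only the core intuition that shared gloss vocabulary signals relatedness.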


2021 ◽  
Author(s):  
Yusuf Yilmaz ◽  
Alma Jurado Nunez ◽  
Ali Ariaeinejad ◽  
Mark Lee ◽  
Jonathan Sherbino ◽  
...  

BACKGROUND Residents receive a numeric performance rating (e.g., on a 1-7 scoring scale) along with narrative (i.e., qualitative) feedback based on their performance in each workplace-based assessment (WBA). Aggregated qualitative data from WBAs can be overwhelming to process and fairly adjudicate as part of a global decision about learner competence. Current approaches to qualitative data require a human rater to maintain attention and appropriately weigh various data inputs within the constraints of working memory before rendering a global judgment of performance. OBJECTIVE This study evaluates the accuracy of a decision support system for raters using natural language processing (NLP) and machine learning (ML). METHODS NLP was performed retrospectively on a complete dataset of narrative comments (i.e., text-based feedback to residents based on their performance on a task) derived from WBAs completed by faculty members from multiple hospitals associated with a single, large residency program at McMaster University, Canada. Narrative comments were vectorized to quantitative ratings using the bag-of-n-grams technique with three input types: unigrams, bigrams, and trigrams. Supervised machine learning models using linear regression were trained for two outputs: the original ratings and dichotomized ratings (at risk or not). Sensitivity, specificity, and accuracy metrics are reported. RESULTS The database consisted of 7,199 unique direct-observation assessments, each containing a narrative comment and a rating from 3 to 7, in an imbalanced distribution (3-5: 726 ratings; 6-7: 4,871 ratings). A total of 141 unique raters from five different hospitals and 45 unique residents participated over the course of five academic years. When classifying whether a trainee would be rated low (i.e., 1-5) or high (i.e., 6 or 7), accuracy was 87% for trigrams, 86% for bigrams, and 82% for unigrams. We also found that all three input types had better prediction accuracy when using a bimodal cut (i.e., lower or higher) than when predicting performance along the full 7-point scale (50-52%). CONCLUSIONS The ML models can accurately identify underperforming residents from the narrative comments provided in workplace-based assessments. The words generated in WBAs can be a worthy dataset to augment human decisions for educators tasked with processing large volumes of narrative assessments. CLINICALTRIAL N/A
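The METHODS section above describes two preprocessing steps: bag-of-n-grams vectorization of narrative comments and dichotomization of the 1-7 ratings. A minimal pure-Python sketch of those steps, assuming whitespace tokenization (an assumption; the study's code and exact pipeline are not published in the abstract, and the example comment is invented), might look like this:

```python
# Illustrative sketch of bag-of-n-grams counting and rating dichotomization.
# Not the study's code; the example comment and labels are assumptions.
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, joined with spaces."""
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bag_of_ngrams(comment, max_n=3):
    """Count unigrams, bigrams, and trigrams in one narrative comment."""
    tokens = comment.lower().split()
    counts = Counter()
    for n in range(1, max_n + 1):
        counts.update(ngrams(tokens, n))
    return counts

def dichotomize(rating):
    """Map the 1-7 scale to the study's bimodal cut: low (1-5) vs. high (6-7)."""
    return "at risk" if rating <= 5 else "not at risk"

features = bag_of_ngrams("resident needs more supervision")
print(features["needs more"])        # → 1 (a bigram feature)
print(dichotomize(5), dichotomize(6))
```

In practice these sparse counts would feed a supervised model (the study used linear regression); the dichotomized target is what yielded the 82-87% accuracies reported above.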

