Analysis of Risk Factors and Symptoms of Burnout Syndrome in Colombian School Teachers under Statutes 2277 and 1278 Using Machine Learning Interpretation

2020 ◽  
Vol 9 (3) ◽  
pp. 30
Author(s):  
Hugo F. Posada-Quintero ◽  
Paula N. Molano-Vergara ◽  
Ronald M. Parra-Hernández ◽  
Jorge I. Posada-Quintero

In 2002, the Colombian Ministry of Education released statute 1278, for teaching professionalization, superseding statute 2277 of 1979. Although statute 1278 was intended to increase the quality of the education service and teachers’ remuneration, there is evidence that the abundant evaluations and hindered promotion system introduced by statute 1278 resulted in an impairment of teachers’ quality of life and a higher incidence of burnout syndrome. We used two machine learning interpretability techniques, SHapley Additive exPlanation (SHAP) summary plots and predictor importance, to interpret support vector machine and decision tree models, respectively, to better understand the differences in risk factors and symptoms of burnout syndrome among school teachers under statutes 2277 and 1278. We surveyed 54 school teachers between August and October 2018: 17 under statute 2277 and 37 under statute 1278. Among the risk factors and symptoms of burnout syndrome considered in this study, we found that satisfaction with earned income was the most relevant risk factor, followed by overtime work and the perceived severity of sanctions for low performance. The most relevant symptoms of burnout were fatigue at the end of the day and frequent headaches. This methodology can potentially be used in other contexts and social groups, allowing institutional authorities and policy makers to allocate resources to the specific issues affecting a particular group of workers.
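The predictor-importance side of this approach can be sketched with scikit-learn on synthetic data; the feature names below are illustrative stand-ins for the survey variables, not the study's actual predictors (the SHAP side would additionally require the `shap` package):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the 54-teacher survey: each column is a
# hypothetical risk factor; the target is a binary burnout indicator.
X, y = make_classification(n_samples=54, n_features=5, n_informative=3,
                           random_state=0)
feature_names = ["income_satisfaction", "overtime_hours",
                 "sanction_severity", "fatigue_end_of_day", "headaches"]

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Predictor importance: the impurity reduction attributed to each
# feature across the tree's splits (normalized to sum to 1).
importance = dict(zip(feature_names, tree.feature_importances_))
ranked = sorted(importance, key=importance.get, reverse=True)
print(ranked[0])  # the most influential predictor in this toy model
```

The same ranking idea underlies the paper's comparison of risk factors between the two statutes, though the real ordering comes from the survey data, not this toy fit.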

Author(s):  
David Opeoluwa Oyewola ◽  
Emmanuel Gbenga Dada ◽  
Juliana Ngozi Ndunagu ◽  
Terrang Abubakar Umar ◽  
Akinwunmi S.A

Since the declaration of COVID-19 as a global pandemic, it has spread to more than 200 nations of the world. The harmful impact of the pandemic on national economies is greater than anything suffered in almost a century. The main objective of this paper is to apply Structural Equation Modeling (SEM) and Machine Learning (ML) to determine the relationships among COVID-19 risk factors, epidemiology factors, and economic factors. Structural equation modeling is a statistical technique for calculating and evaluating the relationships of manifest and latent variables; it explores the causal relationships between variables while taking measurement error into account. Bagging (BAG), Boosting (BST), Support Vector Machine (SVM), Decision Tree (DT), and Random Forest (RF) machine learning techniques were applied to predict the impact of COVID-19 risk factors. Data on patients who came into contact with coronavirus disease were collected from a Kaggle database covering 23 January 2020 to 24 June 2020. Results indicate that COVID-19 risk factors have negative effects on both epidemiology factors and economic factors.
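The five-model comparison named above can be sketched with scikit-learn on synthetic data; the actual Kaggle records are not reproduced here, and `AdaBoostClassifier` stands in for generic boosting:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              RandomForestClassifier)
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Illustrative stand-in for the patient records: features are
# hypothetical risk-factor scores, the target a binary outcome.
X, y = make_classification(n_samples=400, n_features=8, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=1)

models = {
    "BAG": BaggingClassifier(random_state=1),
    "BST": AdaBoostClassifier(random_state=1),
    "SVM": SVC(),
    "DT": DecisionTreeClassifier(random_state=1),
    "RF": RandomForestClassifier(random_state=1),
}
# Fit each model and report held-out accuracy, best first.
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te)
          for name, m in models.items()}
for name, acc in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {acc:.3f}")
```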


2021 ◽  
Author(s):  
S. H. Al Gharbi ◽  
A. A. Al-Majed ◽  
A. Abdulraheem ◽  
S. Patil ◽  
S. M. Elkatatny

Abstract Due to the high demand for energy, oil and gas companies started to drill wells in remote areas and unconventional environments. This raised the complexity of drilling operations, which were already challenging and complex. To adapt, drilling companies expanded their use of the real-time operation center (RTOC) concept, in which real-time drilling data are transmitted from remote sites to companies’ headquarters. In the RTOC, groups of subject matter experts monitor drilling operations live and provide real-time advice to improve them. With the increase in drilling operations, processing the volume of generated data is beyond human capability, limiting the RTOC’s impact on certain components of drilling operations. To overcome this limitation, artificial intelligence and machine learning (AI/ML) technologies were introduced to monitor and analyze the real-time drilling data, discover hidden patterns, and provide fast decision-support responses. AI/ML technologies are data-driven, and the quality of their output depends directly on the quality of the input data. Unfortunately, due to the harsh environments of drilling sites and the transmission setups, not all of the drilling data are of good quality, which negatively affects the AI/ML results. The objective of this paper is to utilize AI/ML technologies to improve the quality of real-time drilling data. The paper fed a large real-time drilling dataset, consisting of over 150,000 raw data points, into Artificial Neural Network (ANN), Support Vector Machine (SVM), and Decision Tree (DT) models. The models were trained on data points labeled as valid or not valid. Confusion matrices were used to evaluate the different AI/ML models, including different internal architectures. Despite its slower training, the ANN achieved the best result, with an accuracy of 78%, compared to 73% and 41% for the DT and SVM, respectively.
The paper concludes by presenting a process for using AI technology to improve real-time drilling data quality. To the authors’ knowledge, based on literature in the public domain, this paper is one of the first to compare the use of multiple AI/ML techniques for quality improvement of real-time drilling data. The paper provides a guide for improving the quality of real-time drilling data.
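The confusion-matrix evaluation the paper relies on can be illustrated with a tiny hand-made example; the labels below are invented, not the drilling dataset:

```python
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix

# Hypothetical validity labels for a batch of real-time drilling data
# points (1 = valid, 0 = not valid) and a model's predictions.
y_true = np.array([1, 1, 1, 0, 0, 1, 0, 1, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 1, 0, 1, 1, 0])

# For binary labels {0, 1}, rows are true classes and columns are
# predicted classes: [[TN, FP], [FN, TP]].
cm = confusion_matrix(y_true, y_pred)
tn, fp, fn, tp = cm.ravel()
accuracy = (tp + tn) / cm.sum()
print(cm)
print(f"accuracy = {accuracy:.2f}")  # 8 of 10 correct -> 0.80
```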


Author(s):  
Monalisa Ghosh ◽  
Chetna Singhal

Video streaming services dominate internet traffic, driving a competitive push to deliver the best quality of experience (QoE) to users. The standard codecs used in video transmission systems eliminate spatiotemporal redundancies in order to decrease the bandwidth requirement, which may adversely affect the perceptual quality of videos. Both subjective and objective parameters can be used to rate video quality, so it is essential to construct frameworks that measure the integrity of a video the way humans would. This chapter focuses on the application of machine learning to evaluate QoE without requiring human effort, achieving accuracies of 86% and 91% with linear regression and support vector regression, respectively. A machine learning model is developed from subjective scores to forecast the subjective quality of H.264 videos streamed over wireless networks.
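A minimal sketch of the two regression approaches named above, fit to synthetic feature/score pairs; the features and coefficients are invented, not the chapter's dataset:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR

# Toy stand-in: objective video features (e.g. bitrate, packet loss,
# frame rate) versus a subjective quality score. Data are synthetic.
rng = np.random.RandomState(2)
X = rng.uniform(0, 1, size=(120, 3))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.05, 120)

lin = LinearRegression().fit(X, y)
svr = SVR(kernel="rbf", C=10.0).fit(X, y)

# R^2 on the training data, just to compare the two fits.
print(f"linear R^2: {lin.score(X, y):.3f}")
print(f"SVR R^2:    {svr.score(X, y):.3f}")
```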


Biology ◽  
2021 ◽  
Vol 11 (1) ◽  
pp. 47
Author(s):  
Shi-Jer Lou ◽  
Ming-Feng Hou ◽  
Hong-Tai Chang ◽  
Hao-Hsien Lee ◽  
Chong-Chi Chiu ◽  
...  

Machine learning algorithms have proven effective for predicting survival after surgery, but their use for predicting 10-year survival after breast cancer surgery has not yet been discussed. This study compares the accuracy of predicting 10-year survival after breast cancer surgery among the following five models: a deep neural network (DNN), K-nearest neighbor (KNN), support vector machine (SVM), naive Bayes classifier (NBC), and Cox regression (COX), and optimizes the weighting of significant predictors. The subjects recruited for this study were breast cancer patients who had received breast cancer surgery (ICD-9-CM 174–174.9) at one of three southern Taiwan medical centers during the three-year period from June 2007 to June 2010. The registry data for the patients were randomly allocated to three datasets: one for training (n = 824), one for testing (n = 177), and one for validation (n = 177). Prediction performance comparisons revealed that all performance indices for the DNN model were significantly (p < 0.001) higher than those of the other forecasting models. Notably, the best predictor of 10-year survival after breast cancer surgery was the preoperative Physical Component Summary score on the SF-36. The next best predictors were the preoperative Mental Component Summary score on the SF-36, postoperative recurrence, and tumor stage. The deep-learning DNN model is the most clinically useful method for predicting and identifying risk factors for 10-year survival after breast cancer surgery. Future research should explore designs for two-level or multi-level models that provide information on the contextual effects of the risk factors on breast cancer survival.
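The 824/177/177 random allocation described above can be reproduced on a synthetic patient index with scikit-learn (the registry data themselves are not public):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# The study splits 1178 patient records into 824 training, 177 testing,
# and 177 validation cases; two chained splits reproduce those counts.
records = np.arange(824 + 177 + 177)

train, rest = train_test_split(records, train_size=824, random_state=0)
test, validation = train_test_split(rest, train_size=177, random_state=0)

print(len(train), len(test), len(validation))  # 824 177 177
```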


Water ◽  
2021 ◽  
Vol 13 (18) ◽  
pp. 2457
Author(s):  
Manel Naloufi ◽  
Françoise S. Lucas ◽  
Sami Souihi ◽  
Pierre Servais ◽  
Aurélie Janne ◽  
...  

Exposure to contaminated water during aquatic recreational activities can lead to gastrointestinal diseases. In order to decrease the exposure risk, the fecal indicator bacterium Escherichia coli is routinely monitored, which is time-consuming, labor-intensive, and costly. To assist stakeholders in the daily management of bathing sites, models have been developed to predict microbiological quality. However, model performance is highly dependent on the quality of the input data, which are usually scarce. In our study, we proposed a conceptual framework for optimizing the selection of the most adapted model and for enriching the training dataset. This framework was successfully applied to the prediction of Escherichia coli concentrations in the Marne River (Paris area, France). We compared the performance of six machine learning (ML)-based models: K-nearest neighbors, Decision Tree, Support Vector Machines, Bagging, Random Forest, and Adaptive boosting. Based on several statistical metrics, the Random Forest model presented the best accuracy compared to the other models. However, 53.2 ± 3.5% of the predicted E. coli densities were inaccurately estimated according to the mean absolute percentage error (MAPE). Four parameters (temperature, conductivity, 24 h cumulative rainfall on the day before sampling, and river flow) were identified as key variables to be monitored for optimization of the ML model. The set of values to be optimized will feed an alert system for monitoring the microbiological quality of the water through a combined strategy of in situ manual sampling and the deployment of a network of sensors. Based on these results, we propose a guideline for ML model selection and sampling optimization.
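The MAPE-based evaluation of a Random Forest model can be sketched on synthetic data; the four covariates mirror the key variables the study identifies, but all values and coefficients are invented:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Toy stand-in for the river data: columns mimic temperature,
# conductivity, previous-day rainfall, and river flow; the target
# mimics an E. coli concentration. Entirely synthetic.
rng = np.random.RandomState(3)
X = rng.uniform(size=(200, 4))
ecoli = 500 + 2000 * X[:, 2] + 800 * X[:, 3] + rng.normal(0, 50, 200)

rf = RandomForestRegressor(n_estimators=100, random_state=3).fit(X, ecoli)
pred = rf.predict(X)

# Mean absolute percentage error, the metric used in the study
# (computed here on training data purely for illustration).
mape = np.mean(np.abs((ecoli - pred) / ecoli)) * 100
print(f"MAPE: {mape:.1f}%")
```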


2019 ◽  
Author(s):  
Rüdiger Pryss ◽  
Winfried Schlee ◽  
Burkhard Hoppenstedt ◽  
Manfred Reichert ◽  
Myra Spiliopoulou ◽  
...  

BACKGROUND Tinnitus is often described as the phantom perception of a sound and is experienced by 5.1% to 42.7% of the population worldwide, at least once during their lifetime. The symptoms often reduce the patient’s quality of life. The TrackYourTinnitus (TYT) mobile health (mHealth) crowdsensing platform was developed for two operating systems (OS)—Android and iOS—to help patients demystify the daily moment-to-moment variations of their tinnitus symptoms. For any platform developed for more than one OS, it is important to investigate whether the crowdsensed data predict the OS used, in order to understand the degree to which the OS is a confounder that must be considered. OBJECTIVE In this study, we explored whether the mobile OS—Android or iOS—used during user assessments can be predicted from the dynamic daily-life TYT data. METHODS TYT mainly applies the paradigms of ecological momentary assessment (EMA) and mobile crowdsensing to collect dynamic EMA (EMA-D) daily-life data. The dynamic daily-life TYT data analyzed included eight questions from the EMA-D questionnaire. In this study, 518 TYT users were analyzed, each of whom completed at least 11 EMA-D questionnaires. Of these, 221 were iOS users and 297 were Android users. The iOS users completed, in total, 14,708 EMA-D questionnaires; the number of EMA-D questionnaires completed by the Android users was randomly reduced to the same number to properly address the research question of the study. Machine learning methods—a feedforward neural network, a decision tree, a random forest classifier, and a support vector machine—were applied to address the research question. RESULTS Machine learning was able to predict the mobile OS used with an accuracy of up to 78.94%, based on the provided EMA-D questionnaires at the assessment level.
In this context, the daily measurements of how well users could concentrate on their current activity were particularly suitable for predicting the mobile OS used. CONCLUSIONS In the work at hand, two particular aspects were revealed. First, machine learning can contribute to the analysis of EMA-D data in the medical context. Second, based on the EMA-D data of TYT, we found that the accuracy in predicting the mobile OS used has several implications. In particular, in clinical studies using mobile devices, the OS should be assessed as a covariate, as it might be a confounder.
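The class-balancing step described in the Methods (randomly reducing the Android questionnaires to the 14,708 iOS questionnaires) can be sketched as follows; the Android total used here is illustrative, since the abstract does not state it:

```python
import numpy as np

# Hypothetical questionnaire ids: the Android count below is invented
# for illustration; only the iOS count (14,708) comes from the study.
rng = np.random.RandomState(8)
ios_count = 14708
android_rows = np.arange(21000)  # hypothetical Android questionnaire ids

# Sample without replacement so the two groups have equal size.
balanced_android = rng.choice(android_rows, size=ios_count, replace=False)
print(len(balanced_android))  # 14708
```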


10.2196/32662 ◽  
2021 ◽  
Vol 9 (11) ◽  
pp. e32662
Author(s):  
Imjin Ahn ◽  
Hansle Gwon ◽  
Heejun Kang ◽  
Yunha Kim ◽  
Hyeram Seo ◽  
...  

Background Effective resource management in hospitals can improve the quality of medical services by reducing labor-intensive burdens on staff, decreasing inpatient waiting time, and securing the optimal treatment time. The use of hospital processes requires effective bed management; a stay in the hospital that is longer than the optimal treatment time hinders bed management. Therefore, predicting a patient’s hospitalization period may support the making of judicious decisions regarding bed management. Objective First, this study aims to develop a machine learning (ML)–based predictive model for predicting the discharge probability of inpatients with cardiovascular diseases (CVDs). Second, we aim to assess the outcome of the predictive model and explain the primary risk factors of inpatients for patient-specific care. Finally, we aim to evaluate whether our ML-based predictive model helps manage bed scheduling efficiently and detects long-term inpatients in advance to improve the use of hospital processes and enhance the quality of medical services. Methods We set up the cohort criteria and extracted the data from CardioNet, a manually curated database that specializes in CVDs. We processed the data to create a suitable data set by reindexing the date-index, integrating the present features with past features from the previous 3 years, and imputing missing values. Subsequently, we trained the ML-based predictive models and evaluated them to find an elaborate model. Finally, we predicted the discharge probability within 3 days and explained the outcomes of the model by identifying, quantifying, and visualizing its features. Results We experimented with 5 ML-based models using 5 cross-validations. 
Extreme gradient boosting, which was selected as the final model, achieved an average area under the receiver operating characteristic curve of 0.865, higher than that of the other models (ie, logistic regression, random forest, support vector machine, and multilayer perceptron). Furthermore, we performed feature reduction, represented the feature importance, and assessed prediction outcomes. One of the outcomes, the individual explainer, provides a discharge score during hospitalization and a daily feature influence score to the medical team and patients. Finally, we visualized simulated bed management to use the outcomes. Conclusions In this study, we propose an individual explainer based on an ML-based predictive model, which provides the discharge probability and relative contributions of individual features. Our model can assist medical teams and patients in identifying individual and common risk factors in CVDs and can support hospital administrators in improving the management of hospital beds and other resources.
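The evaluation protocol (5 cross-validations scored by AUROC) can be sketched on synthetic data; scikit-learn's `GradientBoostingClassifier` stands in here for the XGBoost model the study actually used, and the CardioNet records are not public:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the discharge-prediction task: features mimic
# patient attributes, the target a "discharged within 3 days" flag.
X, y = make_classification(n_samples=500, n_features=20, random_state=4)

gb = GradientBoostingClassifier(random_state=4)
# Score each of the 5 folds by area under the ROC curve.
auroc = cross_val_score(gb, X, y, cv=5, scoring="roc_auc")
print(f"mean AUROC over 5 folds: {auroc.mean():.3f}")
```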


2022 ◽  
Vol 355 ◽  
pp. 03008
Author(s):  
Yang Zhang ◽  
Lei Zhang ◽  
Yabin Ma ◽  
Jinsen Guan ◽  
Zhaoxia Liu ◽  
...  

In this study, an electronic nose composed of seven kinds of metal oxide semiconductor sensors was developed to distinguish the milk source (the dairy farm to which the milk belongs) and to estimate milk fat and protein content, in order to identify the authenticity and evaluate the quality of milk. The developed electronic nose is low-cost, non-destructive testing equipment. (1) For the identification of milk sources, this paper combines the electronic nose odor characteristics of milk with its component characteristics to distinguish different milk sources, using Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) for dimensionality reduction. Three machine learning algorithms, Logistic Regression (LR), Support Vector Machine (SVM), and Random Forest (RF), were then used to build milk-source (dairy farm) identification models, whose classification performance was evaluated and compared. The experimental results show that the SVM-LDA model based on the electronic nose odor characteristics outperforms the other single-feature models, reaching a test-set accuracy of 91.5%. The RF-LDA and SVM-LDA models based on the fusion of the two feature types perform best, with test-set accuracies as high as 96%. (2) Three algorithms, Gradient Boosting Decision Tree (GBDT), Extreme Gradient Boosting (XGBoost), and Random Forest (RF), were used to build models estimating milk fat and protein rates from the electronic nose odor data. The results show that the RF model has the best estimation performance (R² = 0.9399 for milk fat; R² = 0.9301 for milk protein). This demonstrates that the proposed method can improve the estimation accuracy of milk fat and protein, providing a technical basis for predicting the quality of dairy products.
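The SVM-LDA pipeline can be sketched with scikit-learn on synthetic "sensor" data; the seven columns and three classes mimic the seven sensors and multiple milk sources, but the data are entirely invented:

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Synthetic stand-in for the e-nose readings: seven columns mimic the
# seven metal-oxide sensor responses, three classes mimic milk sources.
X, y = make_classification(n_samples=300, n_features=7, n_informative=5,
                           n_classes=3, n_clusters_per_class=1,
                           random_state=5)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=5)

# LDA projects onto at most n_classes - 1 = 2 discriminant axes;
# the SVM then classifies in that reduced space.
svm_lda = make_pipeline(LinearDiscriminantAnalysis(n_components=2), SVC())
svm_lda.fit(X_tr, y_tr)
print(f"test accuracy: {svm_lda.score(X_te, y_te):.3f}")
```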


Author(s):  
Haewon Byeon

Background and Objectives: This study developed a support vector machine (SVM) algorithm-based prediction model that uses influence factors associated with swallowing quality of life as the predictor variables, and provides baseline information for enhancing the swallowing quality of elderly people’s lives in the future. Methods and Material: This study sampled 142 elderly people aged 65 years or older who were using a senior welfare center. Swallowing-problem-associated quality of life was defined by the swallowing quality-of-life questionnaire (SWAL-QOL). In order to verify the predictive power of the model, this study compared the predictive power of the Gaussian kernel with that of linear, polynomial, and sigmoid kernels. Results: A total of 33.9% of the subjects showed decreased swallowing quality of life. The swallowing quality-of-life prediction model for the elderly, based on the SVM, identified both preventive factors and risk factors. Risk factors were denture use, experience of aspiration in the past month, being economically inactive, having a mean monthly household income <2 million KRW, being an elementary school graduate or below, being female, being 75 years old or older, living alone, requiring ≤15 min or ≥40 min on average to finish one meal, and having depression, stress, and cognitive impairment. Conclusions: It is necessary to monitor the high-risk group constantly in order to maintain swallowing quality of life in the elderly, based on the preventive and risk factors derived from this prediction model.
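The kernel comparison described in the Methods (Gaussian versus linear, polynomial, and sigmoid) can be sketched with scikit-learn on synthetic data standing in for the 142 survey records:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for the survey: features mimic predictors such as
# denture use or depression score; the target mimics reduced
# swallowing quality of life. Data are invented.
X, y = make_classification(n_samples=142, n_features=10, random_state=6)

kernels = ["rbf", "linear", "poly", "sigmoid"]  # rbf = Gaussian kernel
scores = {k: cross_val_score(SVC(kernel=k), X, y, cv=5).mean()
          for k in kernels}
for kernel, acc in scores.items():
    print(f"{kernel}: {acc:.3f}")
```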


2017 ◽  
Vol 35 (15_suppl) ◽  
pp. 6596-6596
Author(s):  
Frank Po-Yen Lin ◽  
Chloe Martin ◽  
Simon Kocbek ◽  
Anthony M. Joshua ◽  
Rachel Fitz-Gerald Dear ◽  
...  

Background: Knowing which factors compromise quality of life (QoL) in patients undergoing cancer treatments can help oncologists provide more effective care. To identify these factors, we conducted a single-centered cross-sectional study examining the relationships between patient-reported QoL, adverse events (AEs), and treatment characteristics. Methods: Consecutive patients attending an outpatient chemotherapy unit completed two questionnaires (EORTC QLQ-C30 and National Cancer Institute PRO-CTCAE) per visit to identify factors contributing to the lowest global QoL score [QLQ-C30 QL2, range 0 (worst) to 100 (best)] over a 6-week period. QL2 was correlated with each PRO-CTCAE item and treatment characteristic (tumor type, drug class, number of cycles, and treatment intent) using multiple regression, adjusted for age, sex, and use of concurrent radiotherapy. To determine whether QoL can be reliably modeled by machine learning, ten algorithms were compared for performance in classifying patients into dichotomized QL2 subgroups. Results: One hundred and fifteen of 130 patients (157/244 visits) completed up to 6 sets of questionnaires (median QL2: 67, IQR: 50–83). No difference was found between QL2 and treatment characteristics (at Bonferroni-corrected α = 5×10⁻⁴). However, QL2 was significantly associated with AEs in the gastrointestinal, respiratory, attention, pain, sleep/wake, and mood categories. Using AEs as covariates, a support vector machine with a radial basis function kernel was the best at classifying patients into QoL groups (mean bootstrapped area under the ROC curve 0.812, 95% CI 0.700–0.925). Conclusions: Patient-reported QoL is associated with multiple AEs, but not with characteristics of systemic therapy. Machine learning analysis suggests that a combined AE analysis may reliably characterize a patient’s QoL.
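The bootstrapped-AUROC evaluation of an RBF-kernel SVM can be sketched on synthetic data; the features stand in for PRO-CTCAE adverse-event scores and the target for the dichotomized QL2 group, with the AUROC computed on training data purely for brevity:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.svm import SVC

# Synthetic stand-in for the 157 assessments: columns mimic
# adverse-event scores, the target a dichotomized QoL group.
rng = np.random.RandomState(7)
X, y = make_classification(n_samples=157, n_features=6, random_state=7)

svm = SVC(kernel="rbf", probability=True, random_state=7).fit(X, y)
probs = svm.predict_proba(X)[:, 1]

# Bootstrap the AUROC to get a 95% confidence interval.
aucs = []
for _ in range(200):
    idx = rng.randint(0, len(y), len(y))
    if len(np.unique(y[idx])) < 2:
        continue  # a bootstrap sample needs both classes for AUROC
    aucs.append(roc_auc_score(y[idx], probs[idx]))
lo, hi = np.percentile(aucs, [2.5, 97.5])
print(f"AUROC {np.mean(aucs):.3f} (95% CI {lo:.3f}-{hi:.3f})")
```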

