Automated Smart Home Assessment to Support Pain Management: Multiple Methods Analysis

10.2196/23943 ◽  
2020 ◽  
Vol 22 (11) ◽  
pp. e23943 ◽  
Author(s):  
Roschelle L Fritz ◽  
Marian Wilson ◽  
Gordana Dermody ◽  
Maureen Schmitter-Edgecombe ◽  
Diane J Cook

Background Poorly managed pain can lead to substance use disorders, depression, suicide, worsening health, and increased use of health services. Most pain assessments occur in clinical settings away from patients’ natural environments. Advances in smart home technology may allow observation of pain in the home setting. Smart homes recognizing human behaviors may be useful for quantifying functional pain interference, thereby creating new ways of assessing pain and supporting people living with pain. Objective This study aimed to determine if a smart home can detect pain-related behaviors to perform automated assessment and support intervention for persons with chronic pain. Methods A multiple methods, secondary data analysis was conducted using historic ambient sensor data and weekly nursing assessment data from 11 independent older adults reporting pain across 1-2 years of smart home monitoring. A qualitative approach was used to interpret sensor-based data of 27 unique pain events to support clinician-guided training of a machine learning model. A periodogram was used to calculate circadian rhythm strength, and a random forest containing 100 trees was employed to train a machine learning model to recognize pain-related behaviors. The model extracted 550 behavioral markers for each sensor-based data segment. These were treated as both a binary classification problem (event, control) and a regression problem. Results We found 13 clinically relevant behaviors, revealing 6 pain-related behavioral qualitative themes. Quantitative results were classified using a clinician-guided random forest technique that yielded a classification accuracy of 0.70, sensitivity of 0.72, specificity of 0.69, area under the receiver operating characteristic curve of 0.756, and area under the precision-recall curve of 0.777 in comparison to using standard anomaly detection techniques without clinician guidance (0.16 accuracy achieved; P<.001). 
The regression formulation achieved moderate correlation, with r=0.42. Conclusions Findings of this secondary data analysis reveal that a pain-assessing smart home may recognize pain-related behaviors. Utilizing clinicians’ real-world knowledge when developing pain-assessing machine learning models improves the model’s performance. A larger study focusing on pain-related behaviors is warranted to improve and test model performance.
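The classification setup the abstract describes, a 100-tree random forest over 550 behavioral markers per data segment, evaluated by accuracy, sensitivity, specificity, and ROC AUC, can be sketched as follows. All data below are synthetic stand-ins, not the study's sensor data, and the event-generating rule is a pure illustration.

```python
# Sketch of the abstract's binary (event vs. control) formulation:
# a 100-tree random forest over 550-dimensional behavioral-marker vectors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, d = 400, 550                                   # segments x behavioral markers
X = rng.normal(size=(n, d))
# hypothetical labels: a few markers carry signal, the rest are noise
y = (X[:, :5].sum(axis=1) + rng.normal(scale=2.0, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
```

The same feature matrix could feed a `RandomForestRegressor` for the abstract's regression formulation, scored with Pearson's r instead of AUC.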




Author(s):  
R. Meenal ◽  
Prawin Angel Michael ◽  
D. Pamela ◽  
E. Rajasekaran

Complex numerical climate models pose a major challenge for scientists in weather prediction, especially for tropical systems. This paper presents the importance of weather prediction using machine learning (ML) techniques. Many researchers have recently suggested that machine learning models can produce sensible weather predictions despite having no precise knowledge of atmospheric physics. In this work, global solar radiation (GSR) in MJ/m2/day and wind speed in m/s are predicted for Tamil Nadu, India using a random forest ML model. The random forest model is validated with measured wind and solar radiation data collected from IMD, Pune. Its predictions are compared with those of statistical regression models and an SVM ML model. Overall, the random forest model achieves a minimum error of 0.750 MSE and an R2 score of 0.97; compared to the regression models and the SVM model, its predictions are more accurate. This study thus obviates the need for an expensive measuring instrument at every potential location to acquire solar radiation and wind speed data.
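The comparison described above can be sketched with scikit-learn: a random forest regressor against a linear regression baseline, scored with MSE and R². The data here are a synthetic nonlinear target, not the IMD measurements, and all parameter choices are illustrative assumptions.

```python
# Illustrative random forest vs. linear regression comparison on a
# synthetic nonlinear target, scored with MSE and R^2 as in the abstract.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(500, 3))      # stand-ins for meteorological inputs
y = np.sin(2 * np.pi * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
rf = RandomForestRegressor(n_estimators=100, random_state=1).fit(X_tr, y_tr)
lin = LinearRegression().fit(X_tr, y_tr)

rf_mse = mean_squared_error(y_te, rf.predict(X_te))
lin_mse = mean_squared_error(y_te, lin.predict(X_te))
rf_r2 = r2_score(y_te, rf.predict(X_te))
```

On a target with a nonlinear component like this, the forest's MSE comes out well below the linear baseline's, which mirrors the abstract's ranking of the models.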


2020 ◽  
Author(s):  
Nicola Bodini ◽  
Mike Optis

Abstract. The extrapolation of wind speeds measured at a meteorological mast to wind turbine hub heights is a key component in a bankable wind farm energy assessment and a significant source of uncertainty. Industry-standard methods for extrapolation include the power law and logarithmic profile. The emergence of machine-learning applications in wind energy has led to several studies demonstrating substantial improvements in vertical extrapolation accuracy in machine-learning methods over these conventional power law and logarithmic profile methods. In all cases, these studies assess relative model performance at a measurement site where, critically, the machine-learning algorithm requires knowledge of the hub-height wind speeds in order to train the model. This prior knowledge provides fundamental advantages to the site-specific machine-learning model over the power law and log profile, which, by contrast, are not highly tuned to hub-height measurements but rather can generalize to any site. Furthermore, there is no practical benefit in applying a machine-learning model at a site where hub-height winds are known; rather, its performance at nearby locations (i.e., across a wind farm site) without hub-height measurements is of most practical interest. To more fairly and practically compare machine-learning-based extrapolation to standard approaches, we implemented a round-robin extrapolation model comparison, in which a random forest machine-learning model is trained and evaluated at different sites and then compared against the power law and logarithmic profile. We consider 20 months of lidar and sonic anemometer data collected at four sites between 50–100 kilometers apart in the central United States. We find that the random forest outperforms the standard extrapolation approaches, especially when incorporating surface measurements as inputs to include the influence of atmospheric stability. 
When compared at a single site (the traditional comparison approach), the machine-learning improvement in mean absolute error was 28 % and 23 % over the power law and logarithmic profile, respectively. Using the round-robin approach proposed here, this improvement drops to 19 % and 14 %, respectively. These latter values better represent practical model performance, and we conclude that round-robin validation should be the standard for machine-learning-based wind-speed extrapolation methods.
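The round-robin scheme above amounts to leave-one-site-out validation: train the extrapolation model on some sites and score it on a held-out site whose hub-height targets it never saw. A minimal sketch, with synthetic sites and features standing in for the lidar/sonic data:

```python
# Round-robin (leave-one-site-out) evaluation sketch: each site in turn
# is held out, the model is trained on the rest, and MAE is recorded.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(2)
sites = ["A", "B", "C", "D"]
data = {}
for i, s in enumerate(sites):
    X = rng.uniform(2, 10, size=(200, 2))   # e.g. near-surface speed, stability proxy
    y = 1.2 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * i + 0.2 * rng.normal(size=200)
    data[s] = (X, y)

errors = {}
for held_out in sites:                       # round-robin over sites
    X_tr = np.vstack([data[s][0] for s in sites if s != held_out])
    y_tr = np.concatenate([data[s][1] for s in sites if s != held_out])
    model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X_tr, y_tr)
    X_te, y_te = data[held_out]
    errors[held_out] = mean_absolute_error(y_te, model.predict(X_te))
```

Averaging `errors` across sites gives the cross-site performance figure that the paper argues better reflects practical use than same-site scores.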


Machine learning is a prominent tool for extracting knowledge from large amounts of data. While a good deal of machine learning research has targeted improving the accuracy and efficiency of training and inference algorithms, less attention has been paid to the equally important problem of monitoring the quality of the data fed into the machine learning model. The quality of big data is far from perfect. Recent studies have shown that poor data quality can introduce serious errors into the results of big data analysis and prevent more precise conclusions from being drawn from the data. The advantages of data preprocessing in the context of ML are earlier detection of errors, improved model quality through the use of better data, and savings in engineering hours otherwise spent debugging issues.
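A minimal sketch of the kind of preprocessing the paragraph alludes to, using pandas: detect missing and implausible values before training, then repair them by imputation. The columns, thresholds, and values are hypothetical.

```python
# Data-quality checks before model training: count missing values,
# flag out-of-range entries, mask them, and impute with column medians.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "age": [25, np.nan, 31, 210, 40],            # 210 is an implausible outlier
    "income": [50_000, 62_000, np.nan, 58_000, 61_000],
})

# 1. early detection of errors: missingness and range violations
n_missing = int(df.isna().sum().sum())           # 2 missing cells here

# 2. better data for the model: mask implausible ages, impute with medians
clean = df.copy()
clean.loc[clean["age"] > 120, "age"] = np.nan
clean = clean.fillna(clean.median())
```

Catching the out-of-range age here, rather than letting it reach the model, is exactly the "advanced detection of errors" benefit the paragraph names.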


2020 ◽  
Vol 143 (1) ◽  
Author(s):  
Jinlong Liu ◽  
Christopher Ulishney ◽  
Cosmin Emil Dumitrescu

Abstract Engine calibration requires detailed feedback information that can reflect the combustion process as the optimized objective. Indicated mean effective pressure (IMEP) is such an indicator, describing an engine’s capacity to do work under different combinations of control variables. In this context, it is of interest to find cost-effective solutions that reduce the number of experimental tests. This paper proposes a random forest machine learning model as a cost-effective tool for optimizing engine performance. Specifically, the model estimated IMEP for a natural gas spark-ignited engine obtained from a converted diesel engine. The goal was to develop an economical and robust tool that can help reduce the large number of experiments usually required throughout the design and development of internal combustion engines. The data used for building this correlative model came from engine experiments that varied the spark advance, fuel-air ratio, and engine speed; the inlet conditions and the coolant/oil temperature were held constant. As a result, the model inputs were the key engine operation variables that affect engine performance. The trained model was shown to predict the combustion-related feedback information with good accuracy (R2 ≈ 0.9 and MSE ≈ 0). In addition, the model accurately reproduced the effect of control variables on IMEP, which would help narrow the choice of operating conditions for future designs of experiments. Overall, the machine learning approach presented here can provide new opportunities for cost-efficient engine analysis and diagnostics work.
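The input-output mapping described above can be sketched as follows. The engine response surface here is a synthetic surrogate with an invented interior optimum, not the paper's experimental data; only the choice of inputs (spark advance, equivalence ratio, engine speed) and the R²/MSE scoring follow the abstract.

```python
# Random forest surrogate for IMEP over the abstract's three control
# variables, trained on a hypothetical smooth response surface.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 300
spark = rng.uniform(10, 40, n)          # spark advance, deg BTDC
phi = rng.uniform(0.7, 1.1, n)          # fuel-air equivalence ratio
speed = rng.uniform(900, 1300, n)       # engine speed, rev/min

# hypothetical IMEP surface with an interior optimum in spark and phi
imep = (6 - 0.005 * (spark - 25) ** 2 - 8 * (phi - 0.95) ** 2
        + 0.0005 * speed + 0.05 * rng.normal(size=n))

X = np.column_stack([spark, phi, speed])
X_tr, X_te, y_tr, y_te = train_test_split(X, imep, random_state=3)
rf = RandomForestRegressor(n_estimators=200, random_state=3).fit(X_tr, y_tr)

r2 = r2_score(y_te, rf.predict(X_te))
mse = mean_squared_error(y_te, rf.predict(X_te))
```

Once trained, such a surrogate can be evaluated over a dense grid of control settings to narrow down promising operating points before running physical tests.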


2022 ◽  
Author(s):  
Joko Sampurno ◽  
Valentin Vallaeys ◽  
Randy Ardianto ◽  
Emmanuel Hanert

Abstract. Flood forecasting based on water level modeling is an essential non-structural measure against compound flooding worldwide. With vulnerability increasing under climate change, every coastal area urgently needs a water level model for better flood risk management. Unfortunately, for local water management agencies in developing countries, building such a model is challenging due to limited computational resources and the scarcity of observational data. Here, we attempt to solve this issue by proposing an integrated hydrodynamic and machine learning approach to predict compound flooding in those areas. As a case study, the integrated approach is implemented in Pontianak, the densest coastal urban area on the Kapuas River delta, Indonesia. First, we built a hydrodynamic model to simulate several compound flooding scenarios, whose outputs were then used to train the machine learning model. To obtain a robust machine learning model, we considered three algorithms: random forest (RF), multiple linear regression (MLR), and support vector machine (SVM). The results show that the integrated scheme works successfully. Random forest is the most accurate algorithm for predicting flooding hazards in the study area, with RMSE = 0.11 m, compared to SVM (RMSE = 0.18 m) and MLR (RMSE = 0.19 m). The RF-based machine learning model predicted ten out of seventeen compound flooding events during the testing phase. We therefore propose random forest as the most appropriate algorithm for building a reliable ML model capable of assessing compound flood hazards in the area of interest.
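The three-way model comparison in the abstract can be sketched as below: fit RF, MLR, and SVM regressors to the same data and rank them by RMSE. The water-level data here are a synthetic stand-in for the hydrodynamic model outputs, and the feature names are assumptions.

```python
# RF vs. MLR vs. SVM on a shared (synthetic) water-level dataset,
# compared by RMSE as in the abstract.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(4)
X = rng.uniform(0, 1, size=(400, 3))   # e.g. river discharge, tide, rainfall proxies
y = (0.5 * X[:, 0] + 0.8 * np.sin(3 * X[:, 1])
     + 0.3 * X[:, 2] ** 2 + 0.05 * rng.normal(size=400))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=4)
models = {
    "RF": RandomForestRegressor(n_estimators=100, random_state=4),
    "MLR": LinearRegression(),
    "SVM": SVR(),
}
rmse = {name: mean_squared_error(y_te, m.fit(X_tr, y_tr).predict(X_te)) ** 0.5
        for name, m in models.items()}
```

With a nonlinear driver in the target, the forest's RMSE comes in below the linear baseline's, the same ordering the study reports for its flood events.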


Gut ◽  
2021 ◽  
pp. gutjnl-2021-324060
Author(s):  
Raghav Sundar ◽  
Nesaretnam Barr Kumarakulasinghe ◽  
Yiong Huak Chan ◽  
Kazuhiro Yoshida ◽  
Takaki Yoshikawa ◽  
...  

Objective To date, there are no predictive biomarkers to guide selection of patients with gastric cancer (GC) who benefit from paclitaxel. The Stomach cancer Adjuvant Multi-Institutional group Trial (SAMIT) was a 2×2 factorial randomised phase III study in which patients with GC were randomised to Pac-S-1 (paclitaxel + S-1), Pac-UFT (paclitaxel + UFT), S-1 alone or UFT alone after curative surgery. Design The primary objective of this study was to identify a gene signature that predicts survival benefit from paclitaxel chemotherapy in patients with GC. SAMIT GC samples were profiled using a customised 476-gene NanoString panel. A random forest machine-learning model was applied to the NanoString profiles to develop a gene signature. An independent cohort of patients with metastatic GC treated with paclitaxel and ramucirumab (Pac-Ram) served as an external validation cohort. Results From the SAMIT trial, 499 samples were analysed in this study. From the Pac-S-1 training cohort, the random forest model generated a 19-gene signature assigning patients to two groups: Pac-Sensitive and Pac-Resistant. In the Pac-UFT validation cohort, Pac-Sensitive patients exhibited a significant improvement in disease-free survival (DFS): 3-year DFS 66% vs 40% (HR 0.44, p=0.0029). There was no survival difference between Pac-Sensitive and Pac-Resistant in the UFT or S-1 alone arms (test of interaction p<0.001). In the external Pac-Ram validation cohort, the signature predicted benefit for Pac-Sensitive patients (median PFS 147 days vs 112 days, HR 0.48, p=0.022). Conclusion Using machine-learning techniques on one of the largest GC trials (SAMIT), we identified a gene signature representing the first predictive biomarker of paclitaxel benefit. Trial registration number UMIN Clinical Trials Registry: C000000082 (SAMIT); ClinicalTrials.gov identifier: 02628951 (South Korean trial)
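The abstract does not specify how the 19 genes were chosen, but one common random-forest route to such a signature is to rank genes by impurity-based feature importance and keep the top k. A hedged sketch under that assumption, with entirely synthetic expression data sized to the 476-gene panel:

```python
# Hypothetical signature derivation: fit a random forest on expression
# profiles and keep the 19 genes with the highest feature importance.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(5)
n_samples, n_genes = 200, 476          # panel size taken from the abstract
X = rng.normal(size=(n_samples, n_genes))
# hypothetical outcome driven by a few "true" signature genes
y = (X[:, [3, 17, 42]].sum(axis=1) > 0).astype(int)

rf = RandomForestClassifier(n_estimators=500, random_state=5).fit(X, y)
signature = np.argsort(rf.feature_importances_)[::-1][:19]   # 19-gene signature
```

In practice such a signature would then be frozen and re-scored on the held-out validation cohorts, as the study does with the Pac-UFT and Pac-Ram arms.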


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Shuwei Yin ◽  
Xiao Tian ◽  
Jingjing Zhang ◽  
Peisen Sun ◽  
Guanglin Li

Abstract Background Circular RNA (circRNA) is a novel type of RNA with a closed-loop structure. Increasing numbers of circRNAs are being identified in plants and animals, and recent studies have shown that circRNAs play an important role in gene regulation. Identifying circRNAs from the growing volume of RNA-seq data is therefore very important. However, traditional circRNA recognition methods have limitations. In recent years, emerging machine learning techniques have provided a good approach for identifying circRNAs in animals. However, using the same features to identify plant circRNAs is infeasible because the characteristics of plant circRNA sequences differ from those of animal circRNAs. For example, plant genomes are extremely rich in splicing signals and transposable elements, and sequence conservation in rice, for example, is far lower than in mammals. To solve these problems and better identify circRNAs in plants, there is an urgent need for circRNA recognition software that applies machine learning to the characteristics of plant circRNAs. Results In this study, we built a software program named Pcirc that uses a machine learning method to predict plant circRNAs from RNA-seq data. First, we extracted different features, including open reading frames, k-mer counts, and splicing junction sequence coding, from rice circRNA and lncRNA data. Second, we trained a machine learning model with the random forest algorithm using tenfold cross-validation on the training set. Third, we evaluated the classifier by accuracy, precision, and F1 score; all scores on the model test data were above 0.99. Fourth, we tested the model on other plant species and obtained good results, with accuracy scores above 0.8. Finally, we packaged the trained model and the programming scripts into a locally run circular RNA prediction tool, Pcirc (https://github.com/Lilab-SNNU/Pcirc).
Conclusion Based on rice circRNA and lncRNA data, a machine learning model for plant circRNA recognition was constructed in this study using the random forest algorithm, and the model can also be applied to circRNA recognition in other plants, such as Arabidopsis thaliana and maize. The trained model and the programming scripts used in this study are packaged into the locally run circRNA prediction tool Pcirc, for the convenience of plant circRNA researchers.
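The feature-plus-classifier pipeline the abstract describes, k-mer counts feeding a random forest evaluated with tenfold cross-validation, can be sketched as below. The sequences are random stand-ins whose two classes differ only in GC content, a toy proxy for the circRNA/lncRNA distinction; Pcirc's actual features also include ORFs and splice-junction coding.

```python
# k-mer counting plus random forest classification with tenfold CV,
# mirroring the shape of the Pcirc pipeline on toy sequences.
from itertools import product

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

KMERS = ["".join(p) for p in product("ACGT", repeat=3)]   # all 64 3-mers

def kmer_counts(seq, k=3):
    """Count occurrences of every k-mer in a sequence (fixed feature order)."""
    counts = dict.fromkeys(KMERS, 0)
    for i in range(len(seq) - k + 1):
        counts[seq[i:i + k]] += 1
    return np.array([counts[m] for m in KMERS], dtype=float)

rng = np.random.default_rng(6)

def random_seq(gc):
    """Random 300-nt sequence with the given GC content."""
    p = [(1 - gc) / 2, gc / 2, gc / 2, (1 - gc) / 2]      # A, C, G, T
    return "".join(rng.choice(list("ACGT"), size=300, p=p))

# two toy classes differing in GC content, standing in for circRNA vs. lncRNA
X = np.array([kmer_counts(random_seq(0.6)) for _ in range(60)] +
              [kmer_counts(random_seq(0.4)) for _ in range(60)])
y = np.array([1] * 60 + [0] * 60)

scores = cross_val_score(RandomForestClassifier(n_estimators=100, random_state=6),
                         X, y, cv=10)                     # tenfold CV as in the paper
acc = scores.mean()
```

Because the fixed `KMERS` ordering makes every sequence a comparable 64-dimensional vector, the same featurization can be applied unchanged to sequences from other plant species at prediction time.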

