A Machine Learning Approach to Predict Hypotensive Events in ICU Settings

2019
Author(s):
Mina Chookhachizadeh Moghadam
Ehsan Masoumi
Nader Bagherzadeh
Davinder Ramsingh
Guann-Pyng Li
...  

Abstract
Purpose: Predicting hypotension well in advance gives physicians enough time to respond with appropriate therapeutic measures. However, real-time prediction of hypotension with a high positive predictive value (PPV) is challenging because patients’ physiological status changes dynamically under drug administration, which limits the amount of useful data available to the algorithm.
Methods: To mimic real-time monitoring, we developed a machine learning algorithm that uses most of the available data points from patients’ records for training and testing. The algorithm predicts hypotension up to 30 minutes in advance based on only 5 minutes of a patient’s physiological history. A novel evaluation method is proposed to assess the algorithm’s performance as a function of time, at every timestamp within the 30 minutes preceding hypotension. This evaluation approach provides statistical tools for finding the best possible prediction window.
Results: Over 181,000 minutes of monitoring of about 400 patients, the algorithm demonstrated 94% accuracy, 85% sensitivity, and 96% specificity in predicting hypotension within 30 minutes of the events. A high PPV of 81% was obtained, and the algorithm predicted 80% of the events 25 minutes before their onset. Choosing a classification threshold that maximizes the F1 score during the training phase was shown to contribute to the high PPV and sensitivity.
Conclusion: This study reveals the promising potential of machine learning algorithms for real-time prediction of hypotensive events in the ICU setting based on short-term physiological history.
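The threshold-selection step described in the Results is straightforward to reproduce. Below is a minimal Python sketch, assuming a gradient-boosted classifier as a stand-in for the paper's (unspecified) model and synthetic features in place of the 5-minute physiological windows; `fit_with_f1_threshold` and `predict_hypotension` are hypothetical helper names.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import f1_score

def fit_with_f1_threshold(X_train, y_train):
    """Fit a classifier, then pick the probability cutoff that maximizes
    F1 on the training set, as the abstract describes."""
    clf = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
    probs = clf.predict_proba(X_train)[:, 1]
    thresholds = np.linspace(0.05, 0.95, 91)
    scores = [f1_score(y_train, probs >= t) for t in thresholds]
    return clf, thresholds[int(np.argmax(scores))]

def predict_hypotension(clf, threshold, X):
    """Flag an upcoming hypotensive event when P(event) clears the cutoff."""
    return clf.predict_proba(X)[:, 1] >= threshold

# Synthetic stand-in for features extracted from 5-minute physiological windows.
rng = np.random.default_rng(0)
X = rng.random((500, 10))
y = (X[:, 0] + 0.3 * rng.random(500) > 0.8).astype(int)
clf, cutoff = fit_with_f1_threshold(X, y)
alarms = predict_hypotension(clf, cutoff, X[:20])
```

Tuning the cutoff on F1 rather than accuracy trades a few false negatives for far fewer false alarms, which is what drives the reported PPV.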

2021
Author(s):
Catherine Ollagnier
Claudia Kasper
Anna Wallenbeck
Linda Keeling
Siavash A Bigdeli

Tail biting is a detrimental behaviour that impacts the welfare and health of pigs. Early detection of precursor signs of tail biting allows preventive measures to be taken, thus avoiding the occurrence of a tail biting event. This study aimed to build a machine-learning algorithm for real-time detection of upcoming tail biting outbreaks, using feeding behaviour data recorded by an electronic feeder. The prediction capacities of seven machine learning algorithms (e.g., random forest, neural networks) were evaluated on daily feeding data collected from 65 pens originating from two herds of grower-finisher pigs (25-100 kg), in which 27 tail biting events occurred. Data were divided into training and testing sets either by randomly splitting the data 75% (training set) to 25% (testing set), or by randomly selecting whole pens to constitute the testing set. The random forest algorithm was able to predict 70% of the upcoming events with an accuracy of 94% when predicting events in pens for which it had previous data. Detection of events in unknown pens was less sensitive: the neural network model detected 14% of the upcoming events with an accuracy of 63%. A machine-learning algorithm based on ongoing data collection should be considered for implementation in automatic feeder systems for real-time prediction of tail biting events.
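The two evaluation schemes (a random 75/25 split versus holding out whole pens) can be sketched with scikit-learn's `GroupShuffleSplit`, which keeps all rows of a pen together. The feature columns, pen layout, and labels below are hypothetical placeholders, not the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupShuffleSplit, train_test_split

rng = np.random.default_rng(0)
X = rng.random((650, 6))                # hypothetical daily feeding features
y = rng.integers(0, 2, 650)             # 1 = tail biting outbreak upcoming
pens = np.repeat(np.arange(65), 10)     # 65 pens, 10 daily records each

# Scheme 1: plain random 75/25 split (rows from one pen may land in both sets).
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Scheme 2: hold out whole pens, so tested pens are unseen during training.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups=pens))
clf = RandomForestClassifier(random_state=0).fit(X[train_idx], y[train_idx])
print("accuracy on unseen pens:", clf.score(X[test_idx], y[test_idx]))
```

The gap between the two schemes in the abstract (70% vs 14% of events detected) is exactly what this pen-wise hold-out is designed to expose.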


Author(s):  
Yuan Gong
Boyang Li
Christian Poellabauer
Yiyu Shi

In recent years, many efforts have demonstrated that modern machine learning algorithms are vulnerable to adversarial attacks, where small, but carefully crafted, perturbations on the input can make them fail. While these attack methods are very effective, they only focus on scenarios where the target model takes static input, i.e., an attacker can observe the entire original sample and then add a perturbation at any point of the sample. These attack approaches are not applicable to situations where the target model takes streaming input, i.e., an attacker is only able to observe past data points and add perturbations to the remaining (unobserved) data points of the input. In this paper, we propose a real-time adversarial attack scheme for machine learning models with streaming inputs.
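To make the streaming constraint concrete, here is a toy illustration (not the scheme proposed in the paper): the attacker can never alter samples the model has already consumed, only the one about to arrive, and here uses a surrogate linear model's gradient sign, FGSM-style, at each step.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=8)               # surrogate linear model's weights
stream = rng.normal(size=(100, 8))   # data points arriving one at a time
eps = 0.1                            # per-step perturbation budget

score = 0.0
for x in stream:
    # Past points are already consumed and fixed; only the incoming point
    # can still be perturbed, here to push the cumulative score downward.
    x_adv = x - eps * np.sign(w)
    score += w @ x_adv               # the model consumes the perturbed point
print("final score after streaming attack:", score)
```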


2021
Vol 11 (1)
Author(s):
Osama Siddig
Hany Gamal
Salaheldin Elkatatny
Abdulazeez Abdulraheem

Abstract
Rock elastic properties such as Poisson’s ratio influence wellbore stability, in-situ stress estimation, drilling performance, and hydraulic fracturing design. Conventionally, Poisson’s ratio is either measured in laboratory experiments or derived from sonic logs; the main concerns with these methods are data and sample availability, cost, and time consumption. In this paper, an alternative real-time technique utilizing drilling parameters and machine learning is presented. The main added value of this approach is that drilling parameters are more likely to be available and can be collected in real time during the drilling operation without additional cost. These parameters include weight on bit, penetration rate, pump rate, standpipe pressure, and torque. Two machine learning algorithms were used: an artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS). To train and test the models, 2905 data points from one well were used, while 2912 data points from a different well were used for model validation. The lithology of both wells comprises carbonate, sandstone, and shale. The algorithms’ tuning parameters were optimized to ensure the best possible predictions. A good match between actual and predicted Poisson’s ratio was achieved with both methods, with correlation coefficients between 0.98 and 0.99 using ANN and between 0.97 and 0.98 using ANFIS. Average absolute percentage errors were between 1 and 2% for ANN predictions and around 2% for ANFIS. Based on these results, combining drilling data with machine learning is a strong tool for real-time prediction of geomechanical properties without additional cost.
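A minimal sketch of the ANN branch of this workflow follows, using scikit-learn's `MLPRegressor` as a stand-in since the paper's exact architecture is not given here; the layer sizes and all values are illustrative assumptions, with the five inputs following the drilling parameters listed in the abstract.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Columns: weight on bit, penetration rate, pump rate, standpipe pressure,
# torque (synthetic values; real inputs would come from the drilling recorder).
rng = np.random.default_rng(0)
X_train = rng.random((2905, 5))
y_train = 0.25 + 0.1 * rng.random(2905)   # synthetic Poisson's ratio targets

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)
poisson_pred = model.predict(X_train[:5])
```

Scaling the inputs before the network matters here because the five drilling parameters live on very different numeric ranges.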


2018
Author(s):
Robbin Bouwmeester
Lennart Martens
Sven Degroeve

Abstract
Liquid chromatography is a core component of almost all mass spectrometric analyses of (bio)molecules. Because of the high-throughput nature of mass spectrometric analyses, the interpretation of these chromatographic data increasingly relies on informatics solutions that attempt to predict an analyte’s retention time. The key components of such predictive algorithms are the features they are supplied with, and the actual machine learning algorithm used to fit the model parameters.
We therefore evaluate here the performance of seven machine learning algorithms on 36 distinct metabolomics data sets, using two distinct feature sets. Interestingly, the results show that no single learning algorithm performs optimally for all data sets, with different algorithm types achieving top performance for different types of analytes or different protocols. Our results can thus be used to find an optimal retention time prediction algorithm for specific analytes or protocols. Importantly, however, our results also show that blending different types of models together decreases the error on outliers, indicating that the combination of several approaches holds substantial promise for the development of more generic, high-performing algorithms.
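The blending idea from the conclusion can be sketched with scikit-learn's `VotingRegressor`, which averages the predictions of several regressor families; the specific learners and the synthetic descriptors below are illustrative assumptions, not the models evaluated in the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, VotingRegressor
from sklearn.linear_model import BayesianRidge
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.random((200, 8))            # hypothetical analyte feature vectors
rt = 10.0 * (X @ rng.random(8))     # hypothetical retention times (minutes)

# VotingRegressor averages the member models' predictions, damping the
# worst outliers of any single model family.
blend = VotingRegressor([
    ("rf", RandomForestRegressor(random_state=0)),
    ("ridge", BayesianRidge()),
    ("svr", SVR()),
])
blend.fit(X, rt)
print(blend.predict(X[:3]))
```

Averaging works because the member models tend to fail on different analytes, so the blend rarely inherits any single model's worst errors.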

