STEP-OP: Short-term Event Prediction in the Operating Room using Hybrid Deep Learning to Forecast Five-Minute Intraoperative Hypotension (Preprint)

10.2196/31311 ◽  
2021 ◽  
Author(s):  
Sooho Choe ◽  
Eunjeong Park ◽  
Wooseok Shin ◽  
Bonah Koo ◽  
Dongjin Shin ◽  
...  

BACKGROUND Intraoperative hypotension has an adverse impact on postoperative outcomes. However, it is difficult to predict and treat intraoperative hypotension in advance using individual clinical parameters. OBJECTIVE To develop a prediction model that forecasts five-minute intraoperative hypotension based on a weighted average ensemble of individual neural networks, which use the biosignals recorded during non-cardiac surgery. METHODS In this retrospective observational study, arterial waveforms were recorded during non-cardiac operations performed between August 2016 and December 2019 at Seoul National University Hospital, Seoul, South Korea. We analyzed arterial waveforms from the VitalDB repository of electronic health records. We defined 2-s hypotension as a 2-s moving average of arterial pressure under 65 mm Hg, and an intraoperative hypotensive event as 2-s hypotension lasting for at least 60 s. We developed an artificial intelligence-enabled process called short-term event prediction in the operating room (STEP-OP) for predicting short-term intraoperative hypotension. RESULTS The study was performed on 18,813 subjects undergoing non-cardiac surgery. Deep-learning algorithms (convolutional neural network [CNN] and recurrent neural network [RNN]) using raw waveforms as input achieved greater area under the precision-recall curve (AUPRC) scores (0.698 [95% confidence interval {CI}, 0.690–0.705] and 0.706 [95% CI, 0.698–0.715], respectively) than the logistic regression algorithm (0.673 [95% CI, 0.665–0.682]). STEP-OP performed better still, with a greater AUPRC than the RNN and CNN algorithms (0.716 [95% CI, 0.708–0.723]). CONCLUSIONS We developed STEP-OP, a weighted average ensemble of deep-learning models. It predicted intraoperative hypotension more accurately than the CNN, RNN, and logistic regression models.
CLINICALTRIAL The study was approved by the institutional review board of Seoul National University Hospital (H-2008-175-1152). Trial registration: ClinicalTrials.gov NCT02914444. Keywords: arterial pressure; artificial intelligence; biosignals; deep learning; hypotension; machine learning
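The event definition in the abstract is simple to operationalize. The following is a minimal sketch (not the authors' implementation), assuming a mean arterial pressure (MAP) trace sampled at a hypothetical rate `fs`; it flags runs where the 2-s moving average stays below 65 mm Hg for at least 60 s:

```python
import numpy as np

def hypotensive_events(map_wave, fs=100, threshold=65.0,
                       window_s=2.0, min_event_s=60.0):
    """Flag intraoperative hypotensive events in a MAP trace:
    a 2-s moving average below 65 mm Hg sustained >= 60 s,
    per the definition given in the abstract."""
    win = int(window_s * fs)
    # 2-s moving average of the raw pressure samples
    smooth = np.convolve(map_wave, np.ones(win) / win, mode="valid")
    below = smooth < threshold
    events, start = [], None
    # Scan for runs of consecutive below-threshold samples.
    for i, b in enumerate(np.append(below, False)):
        if b and start is None:
            start = i
        elif not b and start is not None:
            if i - start >= min_event_s * fs:
                events.append((start, i))  # sample indices into `smooth`
            start = None
    return events
```

For instance, a 70-s excursion to 60 mm Hg at 10 Hz sampling yields one event, while a 5-s dip yields none.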


2021 ◽  
Vol 296 ◽  
pp. 126564
Author(s):  
Md Alamgir Hossain ◽  
Ripon K. Chakrabortty ◽  
Sondoss Elsawah ◽  
Michael J. Ryan

2021 ◽  
pp. 1-1
Author(s):  
Lianjie Jiang ◽  
Xinli Wang ◽  
Wei Li ◽  
Lei Wang ◽  
Xiaohong Yin ◽  
...  

2021 ◽  
Vol 11 (6) ◽  
pp. 2742
Author(s):  
Fatih Ünal ◽  
Abdulaziz Almalaq ◽  
Sami Ekici

Short-term load forecasting models play a critical role in distribution companies in making effective decisions in their planning and scheduling for production and load balancing. Unlike aggregated load forecasting at the distribution level or substations, forecasting the load profiles of many end-users at the customer level, now possible thanks to smart meters, is a complicated problem due to the high variability and uncertainty of load consumption as well as customer privacy issues. For customer-level short-term load forecasting, these models must capture highly nonlinear relationships between input data and output predictions, demanding greater robustness, higher prediction accuracy, and generalizability. In this paper, we develop an advanced preprocessing technique coupled with a hybrid sequential learning-based energy forecasting model that employs a convolutional neural network (CNN) and bidirectional long short-term memory (BLSTM) within a unified framework for accurate energy consumption prediction. Energy consumption outliers and feature clusters are extracted at the advanced preprocessing stage. The novel hybrid deep learning approach, based on data feature coding and decoding, is implemented in the prediction stage. The proposed approach is tested and validated using real-world datasets from Turkey, and the results show that it outperforms the traditional prediction models compared in this paper.
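As an illustrative sketch (not the paper's actual pipeline), the preprocessing and sequence framing that typically precede a CNN-BLSTM forecaster could look like the following, with simple IQR clipping standing in for the paper's richer outlier-extraction and clustering stage:

```python
import numpy as np

def clip_outliers(series, k=1.5):
    """Clip load values outside the Tukey fences (Q1 - k*IQR, Q3 + k*IQR).
    A simple stand-in for the paper's outlier extraction stage."""
    q1, q3 = np.percentile(series, [25, 75])
    iqr = q3 - q1
    return np.clip(series, q1 - k * iqr, q3 + k * iqr)

def make_supervised(series, n_lags=24, horizon=1):
    """Frame a univariate consumption series as (samples, timesteps, 1)
    windows suitable for a Conv1D + BLSTM stack, with the value
    `horizon` steps ahead as the target."""
    X, y = [], []
    for t in range(len(series) - n_lags - horizon + 1):
        X.append(series[t:t + n_lags])
        y.append(series[t + n_lags + horizon - 1])
    # trailing channel axis so a 1-D convolution can consume the windows
    return np.asarray(X)[..., np.newaxis], np.asarray(y)
```

The lag count (24, i.e. one day of hourly readings) and the clipping factor are assumptions for illustration, not values from the paper.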


Atmosphere ◽  
2021 ◽  
Vol 12 (7) ◽  
pp. 924
Author(s):  
Moslem Imani ◽  
Hoda Fakour ◽  
Wen-Hau Lan ◽  
Huan-Chin Kao ◽  
Chi Ming Lee ◽  
...  

Despite the great significance of precise wind speed forecasting for the development of new and clean energy technologies and stable grid operation, the stochasticity of wind speed makes prediction a complex and challenging task. Accurate short-term wind power forecasting is crucial for improving the security and economic performance of power grids. In this paper, a deep learning model (Long Short-Term Memory (LSTM)) is proposed for wind speed prediction. Because wind speed time series are nonlinear and stochastic, the mutual information (MI) approach was used to find the best subset of the data by maximizing the joint MI between the subset and the target output. To enhance accuracy and reduce input dimensionality and data uncertainty, rough set and interval type-2 fuzzy set theory are combined in the proposed deep learning model. Wind speed data from an international airport station in Bandar Abbas City, on the southern coast of Iran, were used as the original input dataset for the optimized deep learning model. Based on the statistical results, the rough set LSTM (RST-LSTM) model showed better prediction accuracy than the fuzzy and original LSTM models, as well as traditional neural networks, with the lowest errors on training and testing datasets across different time horizons. The suggested model can support the optimization of control approaches and the smooth operation of power systems. The results confirm the superior capabilities of deep learning techniques for wind speed forecasting, which could also inspire new applications in meteorology assessment.
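The MI-based feature selection mentioned above can be sketched with a histogram MI estimate and a greedy pick of the highest-scoring features. This is a simplification (marginal rather than joint MI, and all names are illustrative), not the paper's exact procedure:

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of mutual information (in nats)
    between two 1-D arrays."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)   # marginal p(y)
    nz = pxy > 0                           # skip empty cells in the log
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def select_features(X, y, k=3, bins=16):
    """Pick the k columns of X with the highest marginal MI to the
    target, a simplification of maximizing the joint MI of the subset."""
    scores = [mutual_information(X[:, j], y, bins) for j in range(X.shape[1])]
    return sorted(np.argsort(scores)[-k:].tolist())
```

Columns strongly (even inversely) related to the target score far above independent noise columns, so they are kept.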


Author(s):  
Claire Brenner ◽  
Jonathan Frame ◽  
Grey Nearing ◽  
Karsten Schulz

Abstract: Evapotranspiration is a decisive process in the global water, energy, and carbon cycles. Data on the spatio-temporal dynamics of evapotranspiration are therefore of great importance for climate modeling, for estimating the impacts of the climate crisis, and not least for agriculture.

In this work we apply two machine- and deep-learning methods to predict evapotranspiration at daily and half-hourly resolution for sites of the FLUXNET dataset. The Long Short-Term Memory network is a recurrent neural network that explicitly accounts for storage effects and analyzes time series of the input variables (analogous to physically based water-balance models). This is contrasted with models built with XGBoost, a decision-tree method that in this case receives only information for the time step to be predicted (analogous to physically based energy-balance models). This comparison of the two modeling approaches is intended to examine to what extent accounting for storage effects benefits the modeling.

The analyses show that both modeling approaches achieve good results and exhibit higher model skill than an evaluated reference dataset. Comparing the two models, the LSTM shows, on average across all 153 sites studied, better agreement with the observations. However, the quality of the evapotranspiration prediction depends on the vegetation class of the site; warmer, dry sites with short vegetation in particular are better represented by the LSTM, whereas in wetlands, for example, XGBoost agrees better with the observations. The relevance of storage effects therefore appears to vary between ecosystems and sites. The presented results underline the potential of artificial intelligence methods for describing evapotranspiration.
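The two input framings contrasted above differ only in what each training sample sees. A minimal shape-level sketch (the sequence length and driver count are hypothetical, not taken from the study):

```python
import numpy as np

def lstm_frame(drivers, seq_len=365):
    """Sequence framing: each sample carries the previous `seq_len`
    days of meteorological drivers, so the network can represent
    storage effects (analogous to a water-balance model)."""
    return np.stack([drivers[t - seq_len:t]
                     for t in range(seq_len, len(drivers) + 1)])

def xgboost_frame(drivers):
    """Instantaneous framing: each sample is only the current day's
    drivers (analogous to an energy-balance model)."""
    return drivers
```

With 40 days of 3 driver variables and a 10-day window, the sequence framing yields 31 samples of shape (10, 3), while the instantaneous framing keeps all 40 single-day rows.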


Electronics ◽  
2021 ◽  
Vol 10 (10) ◽  
pp. 1151
Author(s):  
Carolina Gijón ◽  
Matías Toril ◽  
Salvador Luna-Ramírez ◽  
María Luisa Marí-Altozano ◽  
José María Ruiz-Avilés

Network dimensioning is a critical task in current mobile networks, as any failure in this process leads to degraded user experience or unnecessary upgrades of network resources. For this purpose, radio planning tools often predict monthly busy-hour data traffic to detect capacity bottlenecks in advance. Supervised Learning (SL) arises as a promising solution to improve predictions obtained with legacy approaches. Previous works have shown that deep learning outperforms classical time series analysis when predicting data traffic in cellular networks in the short term (seconds/minutes) and medium term (hours/days) from long historical data series. However, long-term forecasting (a horizon of several months) performed in radio planning tools relies on short and noisy time series, thus requiring a separate analysis. In this work, we present the first study comparing SL and time series analysis approaches to predict monthly busy-hour data traffic on a cell basis in a live LTE network. To this end, an extensive dataset is collected, comprising data traffic per cell for a whole country over 30 months. The considered methods include Random Forest, different Neural Networks, Support Vector Regression, Seasonal Auto-Regressive Integrated Moving Average, and Additive Holt-Winters. Results show that SL models outperform time series approaches while reducing data storage requirements. More importantly, unlike in short-term and medium-term traffic forecasting, non-deep SL approaches are competitive with deep learning while being more computationally efficient.
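Of the classical baselines named, additive Holt-Winters is compact enough to sketch. The version below is a minimal illustration with fixed smoothing parameters (a real radio-planning tool would fit alpha, beta, and gamma per cell); m = 12 matches a monthly busy-hour series:

```python
import numpy as np

def holt_winters_additive(y, m=12, alpha=0.3, beta=0.1, gamma=0.2, h=1):
    """One-shot additive Holt-Winters forecast h steps ahead (h <= m).
    Requires len(y) >= 2*m; initialization from the first two seasons
    is deliberately simple."""
    y = np.asarray(y, dtype=float)
    level = y[:m].mean()
    trend = (y[m:2 * m].mean() - y[:m].mean()) / m
    season = list(y[:m] - level)
    for t in range(m, len(y)):
        prev_level = level
        level = alpha * (y[t] - season[t - m]) + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        season.append(gamma * (y[t] - level) + (1 - gamma) * season[t - m])
    # Reuse the seasonal component from the matching month of the last cycle.
    return level + h * trend + season[len(y) - 1 + h - m]
```

On a series that exactly matches the model's assumptions (constant level plus a period-12 seasonal pattern), the recursions reproduce the components and the one-step forecast is exact.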

