Deep Learning Can Differentiate IDH-Mutant from IDH-Wild GBM

2021 ◽  
Vol 11 (4) ◽  
pp. 290
Author(s):  
Luca Pasquini ◽  
Antonio Napolitano ◽  
Emanuela Tagliente ◽  
Francesco Dellepiane ◽  
Martina Lucignani ◽  
...  

Isocitrate dehydrogenase (IDH) mutant and wildtype glioblastoma multiforme (GBM) often show overlapping features on magnetic resonance imaging (MRI), representing a diagnostic challenge. Deep learning has shown promising results for IDH identification in mixed low/high-grade glioma populations; however, a GBM-specific model is still lacking in the literature. Our aim was to develop a GBM-tailored deep-learning model for IDH prediction by applying convolutional neural networks (CNNs) to multiparametric MRI. We selected 100 adult patients with pathologically demonstrated WHO grade IV gliomas and IDH testing. MRI sequences included: MPRAGE, T1, T2, FLAIR, rCBV and ADC. The model consisted of a 4-block 2D CNN, applied to each MRI sequence. The probability of IDH mutation was obtained from the last dense layer through a softmax activation function. Model performance was evaluated in the test cohort using categorical cross-entropy loss (CCEL) and accuracy. Calculated performance was: rCBV (accuracy 83%, CCEL 0.64), T1 (accuracy 77%, CCEL 1.4), FLAIR (accuracy 77%, CCEL 1.98), T2 (accuracy 67%, CCEL 2.41), MPRAGE (accuracy 66%, CCEL 2.55). Lower performance was achieved on ADC maps. We present a GBM-specific deep-learning model for IDH mutation prediction, with a maximal accuracy of 83% on rCBV maps. The highest predictivity, achieved on perfusion images, possibly reflects the known link between IDH and neoangiogenesis through hypoxia-inducible factor.
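The abstract reports two metrics taken from the softmax output of the last dense layer: accuracy and categorical cross-entropy loss (CCEL). A minimal NumPy sketch of that evaluation step (the two-class layout and function names are assumptions for illustration, not the authors' code):

```python
import numpy as np

def softmax(logits):
    # Subtract the row max for numerical stability before exponentiating.
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def evaluate(logits, labels):
    """Return (accuracy, mean categorical cross-entropy loss).

    logits: (n_patients, 2) raw outputs of the last dense layer
    labels: (n_patients,) with 1 = IDH-mutant, 0 = IDH-wildtype
    """
    probs = softmax(logits)
    accuracy = float((probs.argmax(axis=1) == labels).mean())
    # CCEL is the mean negative log-probability of the true class.
    ccel = float(-np.log(probs[np.arange(len(labels)), labels]).mean())
    return accuracy, ccel
```

Lower CCEL on rCBV than on the other sequences means the model was not only more often right on perfusion maps but also more confident when right.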

2019 ◽  
Vol 39 (5) ◽  
pp. 47-59 ◽  
Author(s):  
Sugeerth Murugesan ◽  
Sana Malik ◽  
Fan Du ◽  
Eunyee Koh ◽  
Tuan Manh Lai

BMJ Open ◽  
2020 ◽  
Vol 10 (9) ◽  
pp. e036423
Author(s):  
Zhigang Song ◽  
Chunkai Yu ◽  
Shuangmei Zou ◽  
Wenmiao Wang ◽  
Yong Huang ◽  
...  

Objectives The microscopic evaluation of slides has been gradually moving towards fully digital workflows in recent years, opening the possibility of computer-aided diagnosis. It is worthwhile to know the similarities between deep learning models and pathologists before we put them into practical scenarios. The simple criteria of colorectal adenoma diagnosis make it a perfect testbed for this study. Design The deep learning model was trained on 177 accurately labelled training slides (156 with adenoma). The detailed labelling was performed on a self-developed iPad-based annotation system. We built the model on DeepLab v2 with ResNet-34. Model performance was tested on 194 test slides and compared with five pathologists. Furthermore, the generalisation ability of the model was tested on an extra 168 slides (111 with adenoma) collected from two other hospitals. Results The deep learning model achieved an area under the curve of 0.92 and obtained a slide-level accuracy of over 90% on slides from the two other hospitals. The performance was on par with that of experienced pathologists, exceeding the average pathologist. By investigating the feature maps and the cases misdiagnosed by the model, we found concordance in the diagnostic thinking process between the deep learning model and pathologists. Conclusions The deep learning model for colorectal adenoma diagnosis is quite similar to pathologists: it is on par with pathologists' performance, makes similar mistakes and learns rational reasoning logic. Meanwhile, it obtains high accuracy on slides collected from different hospitals with significant staining configuration variations.
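The paper reports slide-level accuracy derived from a pixel-wise segmentation network. One common way to aggregate a per-pixel probability map into a whole-slide call is to threshold the flagged lesion area; the rule and both thresholds below are illustrative assumptions, not the authors' published post-processing:

```python
import numpy as np

def slide_level_call(prob_map, pixel_thresh=0.5, area_frac_thresh=0.001):
    """Call a whole slide 'adenoma' from a pixel-wise probability map.

    prob_map: 2D array of per-pixel adenoma probabilities from the
    segmentation network. A slide is flagged when the fraction of
    pixels above pixel_thresh exceeds area_frac_thresh.
    """
    lesion = prob_map >= pixel_thresh   # binarise pixel predictions
    frac = lesion.mean()                # fraction of the slide flagged
    return frac >= area_frac_thresh
```

In practice the area threshold would be tuned on the validation slides, trading sensitivity against false alarms from small staining artefacts.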


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1064
Author(s):  
I Nyoman Kusuma Wardana ◽  
Julian W. Gardner ◽  
Suhaib A. Fahmy

Accurate air quality monitoring requires processing of multi-dimensional, multi-location sensor data, which has previously been handled by centralised machine learning models. These are often unsuitable for resource-constrained edge devices. In this article, we address this challenge by: (1) designing a novel hybrid deep learning model for hourly PM2.5 pollutant prediction; (2) optimising the obtained model for edge devices; and (3) examining model performance on the edge devices in terms of both accuracy and latency. The hybrid deep learning model in this work combines a 1D Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) network to predict hourly PM2.5 concentration. The results show that our proposed model outperforms other deep learning models, as evaluated by RMSE and MAE. The proposed model was optimised for two edge devices, the Raspberry Pi 3 Model B+ (RPi3B+) and Raspberry Pi 4 Model B (RPi4B). Optimisation reduced the file size to a quarter of the original, with further size reduction achieved by different post-training quantisation modes. In total, 8272 hourly samples were continuously fed to the edge devices, with the RPi4B executing the model twice as fast as the RPi3B+ in all quantisation modes. Full-integer quantisation produced the lowest execution time, with latencies of 2.19 s and 4.73 s on the RPi4B and RPi3B+, respectively.
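The four-fold size reduction comes from storing float32 weights as 8-bit integers. A NumPy sketch of the affine int8 mapping that underlies full-integer quantisation (an illustration of the principle, not the TensorFlow Lite pipeline the paper presumably used):

```python
import numpy as np

def quantize_int8(weights):
    """Affine full-integer quantisation of a float32 tensor to int8.

    Returns (q, scale, zero_point) such that
    weights ~= (q - zero_point) * scale. Each 4-byte float shrinks
    to 1 byte, i.e. a quarter of the original storage.
    """
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / 255.0 or 1.0      # guard all-equal tensors
    zero_point = int(round(-128 - w_min / scale))
    q = np.clip(np.round(weights / scale) + zero_point, -128, 127)
    return q.astype(np.int8), scale, zero_point

def dequantize(q, scale, zero_point):
    # Recover approximate float weights for comparison.
    return (q.astype(np.float32) - zero_point) * scale
```

The reconstruction error per weight is bounded by one quantisation step, which is why accuracy typically degrades only slightly while latency and file size drop sharply.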


2019 ◽  
Vol 15 ◽  
pp. P280-P281
Author(s):  
Shangran Qiu ◽  
Megan S. Heydari ◽  
Matthew I. Miller ◽  
Prajakta S. Joshi ◽  
Benjamin C. Wong ◽  
...  

2021 ◽  
Vol 9 (9) ◽  
pp. 1006
Author(s):  
Jiahao Qi ◽  
Jundong Zhang ◽  
Qingyan Meng

In the intelligent perception of the marine engine room, visual identification of auxiliary equipment is the prerequisite for defect recognition and anomaly detection. To improve detection accuracy, this study presents a deep-learning-based detector for auxiliary equipment in the cabin. Owing to the compact layout of pipeline networks and the large disparity in equipment scales, we initially adopted RetinaNet as the basic framework and introduced the single-channel plain architecture RepVGG as the feature extraction network to reduce complexity and improve real-time performance. Secondly, the Neighbor Erasing and Transferring Mechanism (NETM) was applied in the feature pyramid to deal with more complicated scale variations. Then, the complete IoU (CIoU) regression loss function was used instead of smooth L1, and a DIoU Soft-NMS mechanism was proposed to alleviate misdetection in congested cabins. Further, comparison and ablation experiments were performed on the auxiliary equipment in a marine engine room (AEMER) dataset to validate the efficacy of these strategies for boosting model performance. Specifically, our model can correctly detect 93.44% of coolers, 100.00% of diesel engines, 60.26% of meters, 95.30% of pumps, 55.01% of reservoirs, 97.68% of oil separators, and 74.37% of valves in a practical cabin.
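DIoU Soft-NMS replaces hard suppression of overlapping boxes with a confidence decay driven by Distance-IoU, which keeps near-duplicate detections of tightly packed equipment from being discarded outright. A sketch of the two ingredients (the Gaussian decay form follows the standard Soft-NMS formulation with DIoU substituted for IoU; it is illustrative, not the authors' exact implementation):

```python
import math

def diou(a, b):
    """Distance-IoU between boxes in (x1, y1, x2, y2) form:
    DIoU = IoU - d^2 / c^2, where d is the distance between box centres
    and c the diagonal of the smallest box enclosing both."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    iou = inter / (area_a + area_b - inter)
    cx_a, cy_a = (a[0] + a[2]) / 2, (a[1] + a[3]) / 2
    cx_b, cy_b = (b[0] + b[2]) / 2, (b[1] + b[3]) / 2
    d2 = (cx_a - cx_b) ** 2 + (cy_a - cy_b) ** 2
    ex1, ey1 = min(a[0], b[0]), min(a[1], b[1])
    ex2, ey2 = max(a[2], b[2]), max(a[3], b[3])
    c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2
    return iou - d2 / c2

def soft_nms_decay(score, d, sigma=0.5):
    """Gaussian Soft-NMS: instead of discarding an overlapping box,
    decay its confidence according to its (D)IoU with the kept box."""
    return score * math.exp(-max(d, 0.0) ** 2 / sigma)
```

Because DIoU also penalises centre-point distance, two adjacent pumps with similar boxes decay each other less than two boxes sitting on the same object.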


2021 ◽  
Author(s):  
Anass El Houd ◽  
Charbel El Hachem ◽  
Loic Painvin

Abstract Visual inspection of welding seams is still performed manually in many companies, so test results remain highly subjective and expensive. At present, the integration of deep learning methods for weld classification is a research focus in engineering applications. This work intends to apprehend and emphasize the contribution of deep-learning model explainability to improving the accuracy and reliability of welding-seam classification, two of the various metrics affecting production lines and cost in the automotive industry. For this purpose, we implement a novel hybrid method that combines the model's prediction scores with its visual explanation heatmap in order to classify welding seam defects more accurately and improve both performance and reliability. The results show that the hybrid model exceeds our target performance and increases accuracy by at least 18%, which opens new perspectives for developments in deep learning explainability and interpretability.
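The abstract describes fusing the classifier's prediction score with evidence extracted from its explanation heatmap, but not the fusion rule itself. A hypothetical sketch of one such rule (the weighted average, the heatmap-derived score, and both parameters are assumptions for illustration only):

```python
def fuse_scores(pred_score, heatmap_defect_frac, alpha=0.6, threshold=0.5):
    """Hypothetical fusion of a classifier's defect probability with a
    score derived from its visual-explanation heatmap (e.g. the fraction
    of high-activation pixels that fall on the seam region).

    alpha weights the raw prediction against the explanation evidence;
    both alpha and the decision threshold are illustrative assumptions.
    """
    combined = alpha * pred_score + (1 - alpha) * heatmap_defect_frac
    return ("defect" if combined >= threshold else "ok"), combined
```

The intuition is that a high defect probability backed by activations on the seam itself is more trustworthy than one driven by background pixels, which is how explainability can raise reliability as well as accuracy.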


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Veerraju Gampala ◽  
Praful Vijay Nandankar ◽  
M. Kathiravan ◽  
S. Karunakaran ◽  
Arun Reddy Nalla ◽  
...  

Purpose The purpose of this paper is to analyze and build a deep learning model that can furnish statistics on COVID-19 and forecast pandemic outbreaks using the Kaggle open research COVID-19 data set. As governments maintain up-to-date COVID-19 data collections, deep learning techniques can be used to predict future outbreaks of coronavirus. The existing long short-term memory (LSTM) model is fine-tuned to forecast the outbreak of COVID-19 with better accuracy, and empirical data exploration with advanced visualization has been carried out to comprehend the outbreak of coronavirus. Design/methodology/approach This research work presents a fine-tuned LSTM deep learning model using three hidden layers, 200 LSTM unit cells, the ReLU activation function, the Adam optimizer, a mean-squared-error loss function, 200 training epochs and, finally, one dense layer to predict one value at a time. Findings LSTM is found to be effective in forecasting future values. Hence, the fine-tuned LSTM model predicts accurate results when applied to the COVID-19 data set. Originality/value To the authors' knowledge, the fine-tuned LSTM model is developed and tested for the first time on a COVID-19 data set to forecast the outbreak of the pandemic.
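A sequence-to-one LSTM with a single dense output, as described, is trained on sliding windows of past counts mapped to the next value. A NumPy sketch of that data preparation (the 7-day lookback is an assumption; the paper does not state its window length):

```python
import numpy as np

def make_windows(series, lookback=7):
    """Turn a 1D case-count series into (X, y) pairs for a
    sequence-to-one LSTM: each sample holds `lookback` past values
    and the target is the value that follows.
    """
    X, y = [], []
    for i in range(len(series) - lookback):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback])
    # LSTM layers expect input shaped (samples, timesteps, features).
    return np.array(X)[..., np.newaxis], np.array(y)
```

Each prediction from the final dense layer is one step ahead; multi-day forecasts are produced by feeding predictions back into the window.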


Author(s):  
Rui Tian ◽  
Ping Zhou ◽  
Mengyuan Li ◽  
Jinfeng Tan ◽  
Zifeng Cui ◽  
...  

Abstract Integration of human papillomavirus (HPV) into the human genome is the main cause of cervical carcinogenesis. HPV integration site selection shows a strong dependence on the local genomic environment, which makes it possible to predict HPV integration sites; however, no published bioinformatics tool is available to date. We therefore developed DeepHPV, an attention-based deep learning model that predicts HPV integration sites by learning environmental features automatically. In total, 3608 known HPV integration sites were used to train the model, and 584 reviewed HPV integration sites were used as the testing dataset. DeepHPV achieved an area under the receiver-operating characteristic curve (AUROC) of 0.6336 and an area under the precision-recall curve (AUPR) of 0.5670. Adding RepeatMasker and TCGA Pan Cancer peaks improved the model performance to 0.8464 and 0.8501 in AUROC and 0.7985 and 0.8106 in AUPR, respectively. Next, we tested these trained models on the independent database VISDB and found that the model adding TCGA Pan Cancer peaks performed better (AUROC: 0.7175, AUPR: 0.6284) than the model adding RepeatMasker peaks (AUROC: 0.6102, AUPR: 0.5577). Moreover, we introduced an attention mechanism in DeepHPV and found transcription factor binding sites enriched near attention-intensive sites, including BHLHA15, CHR, COUP-TFII, DMRTA2, E2A, HIC1, INR, NPAS, Nr5a2, RARa, SCL, Snail1, Sox10, Sox3, Sox4, Sox6, STAT6, Tbet, Tbx5, TEAD, Tgif2, ZNF189 and ZNF416. Together, DeepHPV is a robust and explainable deep learning model, providing new insights into HPV integration preference and mechanism. Availability: DeepHPV is available as open-source software and can be downloaded from https://github.com/JiuxingLiang/DeepHPV.git. Contact: [email protected], [email protected], [email protected]
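The AUROC values reported throughout equal the probability that a randomly chosen true integration site is scored above a randomly chosen negative site, which can be computed directly from the Mann-Whitney statistic. A NumPy sketch of that metric (not the authors' evaluation code):

```python
import numpy as np

def auroc(scores, labels):
    """AUROC via the Mann-Whitney statistic: the fraction of
    (positive, negative) pairs where the positive outscores the
    negative, counting ties as half.
    """
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))
```

By this reading, the baseline model (AUROC 0.6336) ranks a true site above a negative about 63% of the time, rising to roughly 85% once the genomic-annotation peaks are added.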


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Yuta Kumazu ◽  
Nao Kobayashi ◽  
Naoki Kitamura ◽  
Elleuch Rayan ◽  
Paul Neculoiu ◽  
...  

Abstract The prediction of anatomical structures within the surgical field by artificial intelligence (AI) is expected to support surgeons' experience and cognitive skills. We aimed to develop a deep-learning model to automatically segment the loose connective tissue fibers (LCTFs) that define a safe dissection plane. Annotation was performed on video frames capturing robot-assisted gastrectomies performed by trained surgeons. A deep-learning model based on U-Net was developed to output segmentation results. Twenty randomly sampled frames were used to evaluate model performance by comparing Recall and F1/Dice scores against a ground truth, and through a two-item questionnaire on sensitivity and misrecognition completed by 20 surgeons. The model produced high Recall scores (mean 0.606, maximum 0.861). Mean F1/Dice scores reached 0.549 (range 0.335-0.691), showing acceptable spatial overlap with the target objects. Surgeon evaluators gave a mean sensitivity score of 3.52 out of 4 (range 2.45-3.95), with 88.0% assigning the highest score of 4. The mean misrecognition score was a low 0.14 (range 0-0.7), indicating very few acknowledged over-detection failures. Thus, AI can be trained to predict fine, difficult-to-discern anatomical structures at a level convincing to expert surgeons. This technology may help reduce adverse events by identifying safe dissection planes.
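Both reported metrics are standard pixel-wise overlap measures between the predicted and ground-truth masks. A minimal NumPy sketch of how Recall and F1/Dice are computed for one frame (illustrative, not the study's evaluation code):

```python
import numpy as np

def recall_and_dice(pred, truth):
    """Pixel-wise Recall and F1/Dice for binary segmentation masks.

    pred, truth: boolean arrays of the same shape
    (True = loose connective tissue fiber).
    """
    tp = np.logical_and(pred, truth).sum()   # correctly flagged pixels
    recall = tp / truth.sum()                # share of truth recovered
    dice = 2 * tp / (pred.sum() + truth.sum())
    return float(recall), float(dice)
```

Dice penalises both missed fibers and over-segmentation, which is why it sits below Recall here: the model recovers most fibers but also flags some surrounding tissue.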

