Automated Final Lesion Segmentation in Posterior Circulation Acute Ischemic Stroke Using Deep Learning

Diagnostics ◽  
2021 ◽  
Vol 11 (9) ◽  
pp. 1621
Author(s):  
Riaan Zoetmulder ◽  
Praneeta R. Konduri ◽  
Iris V. Obdeijn ◽  
Efstratios Gavves ◽  
Ivana Išgum ◽  
...  

Final lesion volume (FLV) is a surrogate outcome measure in anterior circulation stroke (ACS). In posterior circulation stroke (PCS), this relation is plausibly understudied due to a lack of methods that automatically quantify FLV. The applicability of deep learning approaches to PCS is limited by its lower incidence compared to ACS. We evaluated strategies to develop a convolutional neural network (CNN) for PCS lesion segmentation by using image data from both ACS and PCS patients. We included follow-up non-contrast computed tomography scans of 1018 patients with ACS and 107 patients with PCS. First, to assess whether an ACS lesion segmentation model generalizes to PCS, a CNN was trained on ACS data (ACS-CNN). Second, to evaluate the performance of including only PCS patients, a CNN was trained on PCS data. Third, to evaluate the performance of combining the datasets, a CNN was trained on both. Finally, to evaluate the performance of transfer learning, the ACS-CNN was fine-tuned using PCS patients. The transfer learning strategy outperformed the other strategies in volume agreement, with an intra-class correlation of 0.88 (95% CI: 0.83–0.92) vs. 0.55–0.83, and in lesion detection rate, 87% vs. 41–77%. Hence, transfer learning improved the FLV quantification and detection rate of PCS lesions compared to the other strategies.
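The fine-tuning strategy that performed best, reusing a network trained on the large ACS cohort and updating it on the small PCS cohort, can be illustrated in miniature. This is a conceptual sketch, not the authors' pipeline: a frozen random projection stands in for the pretrained ACS-CNN trunk, the data are synthetic, and only a new linear head is trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a feature extractor pretrained on the large "ACS" dataset:
# a fixed (frozen) random projection. In the paper this would be the
# convolutional trunk of the ACS-CNN; the name W_frozen is hypothetical.
W_frozen = rng.normal(size=(16, 8))

def features(x):
    return np.tanh(x @ W_frozen)  # frozen: never updated during fine-tuning

# Small synthetic "PCS" training set: 40 samples with a binary lesion label.
X = rng.normal(size=(40, 16))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

# Fine-tuning: train only a new logistic head on the frozen features.
w = np.zeros(8)
b = 0.0
lr = 0.5
for _ in range(300):
    z = features(X) @ w + b
    p = 1.0 / (1.0 + np.exp(-z))   # sigmoid
    grad = p - y                   # gradient of mean binary cross-entropy w.r.t. z
    w -= lr * features(X).T @ grad / len(y)
    b -= lr * grad.mean()

acc = ((1.0 / (1.0 + np.exp(-(features(X) @ w + b))) > 0.5) == y).mean()
print(f"training accuracy after fine-tuning the head: {acc:.2f}")
```

Freezing the pretrained trunk and updating only a small head is one common way to limit overfitting when the fine-tuning cohort, like the 107 PCS patients here, is small.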

Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 3068
Author(s):  
Soumaya Dghim ◽  
Carlos M. Travieso-González ◽  
Radim Burget

The use of image processing tools, machine learning, and deep learning approaches has become very useful and robust in recent years. This paper introduces the detection of Nosema disease, which is considered one of the most economically significant diseases today. This work shows a solution for recognizing and identifying Nosema cells among the other objects present in the microscopic image. Two main strategies are examined. The first strategy uses image processing tools to extract the most valuable information and features from the dataset of microscopic images; machine learning methods such as an artificial neural network (ANN) and a support vector machine (SVM) are then applied to detect and classify the Nosema disease cells. The second strategy explores deep learning and transfer learning. Several approaches were examined, including a convolutional neural network (CNN) classifier and several transfer learning methods (AlexNet, VGG-16, and VGG-19), which were fine-tuned and applied to the object sub-images in order to distinguish the Nosema images from the other object images. The best accuracy, 96.25%, was reached by the pre-trained VGG-16 network.
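The first strategy, handcrafted features fed to a classical classifier such as an SVM, can be sketched with scikit-learn. The feature values below are synthetic stand-ins (e.g. for cell area, elongation, intensity), not the paper's actual features.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)

# Synthetic per-object feature vectors: one cluster for Nosema cells,
# one for the other objects found in the microscopic images.
n = 200
nosema = rng.normal(loc=[3.0, 1.8, 0.4], scale=0.3, size=(n // 2, 3))
other = rng.normal(loc=[2.2, 1.1, 0.7], scale=0.3, size=(n // 2, 3))
X = np.vstack([nosema, other])
y = np.array([1] * (n // 2) + [0] * (n // 2))

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# RBF-kernel SVM, standardizing features first.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")
```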


Reports ◽  
2019 ◽  
Vol 2 (4) ◽  
pp. 26 ◽  
Author(s):  
Govind Chada

Increasing radiologist workloads and the expansion of primary care radiology services make it relevant to explore the use of artificial intelligence (AI), and particularly deep learning, to provide diagnostic assistance to radiologists and primary care physicians and to improve the quality of patient care. This study investigates new model architectures and deep transfer learning to improve performance in detecting abnormalities of the upper extremities while training with limited data. DenseNet-169, DenseNet-201, and InceptionResNetV2 deep learning models were implemented and evaluated on the humerus and finger radiographs from MURA, a large public dataset of musculoskeletal radiographs. These architectures were selected because of their high recognition accuracy in a benchmark study. The DenseNet-201 and InceptionResNetV2 models, employing deep transfer learning to optimize training on limited data, detected abnormalities in the humerus radiographs with 95% CI accuracies of 83–92% and sensitivities greater than 0.9, allowing these models to serve as useful initial screening tools that prioritize studies for expedited review. The performance on finger radiographs was not as promising, possibly due to large inter-radiologist variation. It is suggested that the causes of this variation be further explored using machine learning approaches, which may lead to appropriate remediation.
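The kind of sensitivity figure with a 95% confidence interval reported above can be computed from confusion-matrix counts with a normal-approximation interval. A small sketch; the counts below are hypothetical, not MURA results.

```python
import numpy as np

def sensitivity_with_ci(tp, fn, z=1.96):
    """Sensitivity (recall) with a normal-approximation 95% CI."""
    n = tp + fn
    p = tp / n
    half = z * np.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

# Hypothetical counts for a humerus test set (illustrative only).
sens, lo, hi = sensitivity_with_ci(tp=92, fn=8)
print(f"sensitivity = {sens:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

For small samples or proportions near 0 or 1, a Wilson interval would be preferable to this normal approximation.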


2020 ◽  
Vol 14 ◽  
Author(s):  
Chenyi Zeng ◽  
Lin Gu ◽  
Zhenzhong Liu ◽  
Shen Zhao

In recent years, multiple literature reviews have covered methods for automatically segmenting multiple sclerosis (MS) lesions. However, no review has systematically and individually examined deep learning-based MS lesion segmentation methods. Although previous reviews included deep learning-based methods, some such methods were not covered, and the deep learning methods that were reviewed were not examined by specific category of Convolutional Neural Network (CNN); they were instead reviewed only in generalized terms, such as by supervision strategy or input data handling strategy. This paper presents a systematic review of the literature on automated multiple sclerosis lesion segmentation based on deep learning. The deep learning algorithms reviewed are classified into two categories by CNN style, and their strengths and weaknesses are given through our investigation and analysis. We give a quantitative comparison of the reviewed methods using two metrics: the Dice Similarity Coefficient (DSC) and the Positive Predictive Value (PPV). Finally, future directions for the application of deep learning to MS lesion segmentation are discussed.
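The two comparison metrics used in the review can be stated directly in code. A minimal NumPy sketch on toy binary masks:

```python
import numpy as np

def dice(pred, gt):
    """Dice Similarity Coefficient: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

def ppv(pred, gt):
    """Positive Predictive Value: TP / (TP + FP)."""
    tp = np.logical_and(pred, gt).sum()
    return tp / pred.sum() if pred.sum() else 0.0

# Toy 2D lesion masks: a 4x4 ground-truth square and a prediction
# shifted down by one row.
gt = np.zeros((8, 8), dtype=bool)
gt[2:6, 2:6] = True
pred = np.zeros((8, 8), dtype=bool)
pred[3:7, 2:6] = True

print(f"DSC = {dice(pred, gt):.2f}, PPV = {ppv(pred, gt):.2f}")
```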


2012 ◽  
Vol 116 (6) ◽  
pp. 1258-1266 ◽  
Author(s):  
Adnan H. Siddiqui ◽  
Adib A. Abla ◽  
Peter Kan ◽  
Travis M. Dumont ◽  
Shady Jahshan ◽  
...  

Object The use of flow-diverting stents has gained momentum as a curative approach in the treatment of complex proximal anterior circulation intracranial aneurysms. There have been some reported attempts of treating formidable lesions in the posterior circulation. Posterior circulation giant fusiform aneurysms have a particularly aggressive natural history. To date, no one approach has been shown to be comprehensively effective or low risk. The authors report the initial results, including the significant morbidity and mortality encountered, with flow diversion in the treatment of large or giant fusiform vertebrobasilar aneurysms at Millard Fillmore Gates Circle Hospital. Methods The authors retrospectively reviewed their prospectively collected endovascular database to identify patients with intracranial aneurysms who underwent treatment with flow-diverting devices and determined that 7 patients had presented with symptomatic large or giant fusiform vertebrobasilar aneurysms. The outcomes of these patients, based on the modified Rankin Scale (mRS), were tabulated, as were the complications experienced. Results Among the 7 patients, Pipeline devices were placed in 6 patients and Silk devices in 1 patient. At the last follow-up evaluation, 4 patients had died (mRS score of 6), all of whom were treated with the Pipeline device. The other 3 patients had mRS scores of 5 (severe disability), 1, and 0. The deaths included posttreatment aneurysm ruptures in 2 patients and lack of improvement in neurological status related to presenting brainstem infarcts and subsequent withdrawal of care in the other 2 patients. Conclusions Whether flow diversion will be an effective strategy for treatment of large or giant fusiform vertebrobasilar aneurysms remains to be seen. The authors' initial experience suggests substantial morbidity and mortality associated with the treatment and with the natural history. 
As outcomes data slowly become available for patients receiving these devices for fusiform posterior circulation aneurysms, practitioners should use these devices judiciously.


2018 ◽  
Author(s):  
Bhavna J. Antony ◽  
Stefan Maetschke ◽  
Rahil Garnavi

Abstract Spectral-domain optical coherence tomography (SDOCT) is a non-invasive imaging modality that generates high-resolution volumetric images. This modality finds widespread usage in ophthalmology for the diagnosis and management of various ocular conditions. The volumes generated can contain 200 or more B-scans. Manual inspection of such a large quantity of scans is time-consuming and error-prone in most clinical settings. Here, we present a method for the generation of visual summaries of SDOCT volumes, wherein a small set of B-scans that highlight the most clinically relevant features in a volume is extracted. The method was trained and evaluated on data acquired from age-related macular degeneration patients, and “relevance” was defined as the presence of visibly discernible structural abnormalities. The summarisation system consists of a detection module, where relevant B-scans are extracted from the volume, and a set of rules that determines which B-scans are included in the visual summary. Two deep learning approaches, transfer learning and de novo learning, are presented and compared for the classification of B-scans. Both approaches performed comparably, with AUCs of 0.97 and 0.96, respectively, obtained on an independent test set. The de novo network, however, was 98% smaller than the transfer learning approach and had a significantly shorter run-time.
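The summarisation step, a classifier score per B-scan followed by rules deciding which scans enter the summary, can be sketched as follows. The threshold and the cap on summary size are assumptions for illustration, not the paper's actual rules.

```python
import numpy as np

def summarise(scores, threshold=0.5, max_scans=5):
    """Pick B-scans for a visual summary: keep those the classifier marks
    relevant; if there are too many, take an evenly spaced subset so the
    summary still spans the whole volume."""
    relevant = np.flatnonzero(np.asarray(scores) >= threshold)
    if len(relevant) <= max_scans:
        return relevant.tolist()
    idx = np.linspace(0, len(relevant) - 1, max_scans).round().astype(int)
    return relevant[idx].tolist()

# Synthetic per-B-scan relevance scores for a 12-scan volume.
scores = [0.1, 0.2, 0.8, 0.9, 0.7, 0.6, 0.9, 0.8, 0.3, 0.7, 0.2, 0.1]
print(summarise(scores))
```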


Healthcare ◽  
2021 ◽  
Vol 9 (11) ◽  
pp. 1579
Author(s):  
Wansuk Choi ◽  
Seoyoon Heo

The purpose of this study was to classify ULTT videos through transfer learning with pre-trained deep learning models and to compare the performance of the models. We conducted transfer learning by combining a pre-trained convolutional neural network (CNN) model into a Python-based deep learning process. Videos were sourced from YouTube, and 103,116 frames converted from the video clips were analyzed. In the modeling implementation, the steps of importing the required modules, performing the necessary data preprocessing for training, defining the model, compiling, creating the model, and fitting were applied in sequence. The comparative models were Xception, InceptionV3, DenseNet201, NASNetMobile, DenseNet121, VGG16, VGG19, and ResNet101, and fine-tuning was performed. They were trained in a high-performance computing environment, and validation accuracy and loss were measured as comparative indicators of performance. Relatively low validation loss and high validation accuracy were obtained from the Xception, InceptionV3, and DenseNet201 models, indicating excellent performance compared with the other models. On the other hand, VGG16, VGG19, and ResNet101 produced relatively high validation loss and low validation accuracy compared with the other models. There was a narrow range of difference between the validation accuracy and the validation loss of the Xception, InceptionV3, and DenseNet201 models. This study suggests that training applied with transfer learning can classify ULTT videos and that there is a difference in performance between models.
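The comparative indicators used here, validation accuracy and validation loss, can be sketched with scikit-learn standing in for the fine-tuned Keras models; the frame-level features below are synthetic stand-ins, not ULTT data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, log_loss

rng = np.random.default_rng(2)

# Synthetic stand-ins for frame-level feature vectors from two classes
# (the real pipeline fine-tunes pretrained CNNs directly on video frames).
X = rng.normal(size=(400, 20))
y = (X[:, :3].sum(axis=1) > 0).astype(int)

# Preprocess -> define -> fit -> validate, mirroring the described sequence.
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
model.fit(X_tr, y_tr)

val_acc = accuracy_score(y_val, model.predict(X_val))
val_loss = log_loss(y_val, model.predict_proba(X_val))
print(f"validation accuracy = {val_acc:.2f}, validation loss = {val_loss:.3f}")
```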


2021 ◽  
Vol 23 (Supplement_6) ◽  
pp. vi134-vi134
Author(s):  
Jacob Ellison ◽  
Francesco Caliva ◽  
Pablo Damasceno ◽  
Tracy Luks ◽  
Marisa LaFontaine ◽  
...  

Abstract Although current advances in automated glioma lesion segmentation and volumetric measurement using deep learning have yielded high performance on newly-diagnosed patients, response assessment in neuro-oncology still relies on manually-drawn, cross-sectional areas of the tumor, because these models do not generalize to patients in the post-treatment setting, where they are most needed in the clinic. Surgical resection, adjuvant treatment, or disease progression can alter the characteristics of these lesions on T2-weighted imaging, causing measures of segmentation accuracy, typically measured by Dice coefficients of overlap (DCs), to drop by ~15%. To improve the generalizability of T2-lesion segmentation to patients with glioma post-treatment, we evaluated the effects of: 1) training with different proportions of newly-diagnosed and treated gliomas, 2) applying transfer learning from the pre- to the post-treatment domain, and 3) incorporating a loss term that spatially weights the lesion boundaries with greater emphasis in training. Using 425 patients (208 newly-diagnosed, 217 post-Tx, with 25 treated patients withheld as a test set) and a top-performing model previously trained on newly-diagnosed gliomas, we found that DCs increased by 10% (to 0.84) and then plateaued after including ~25% of post-treatment patients in training. Transfer learning (pre-training on newly-diagnosed data and fine-tuning with post-treatment data) significantly improved Hausdorff distances (HDs), a measure more sensitive to changes at the lesion boundaries, by 17% after including 26% post-treatment images in training, while DCs remained similar. Although modifying our loss functions with boundary-weighted penalizations resulted in DCs comparable to using the standard DC loss, HD measures were further reduced by 26%, suggesting that HDs may be a more sensitive metric to subtle changes in segmentation accuracy than DCs. 
Current work is evaluating the utility of these models in providing accurate volumes for real-time response assessment in the clinic, using workflows that have recently been deployed on our clinical PACS system.
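The boundary-sensitive metric referenced above, the Hausdorff distance, can be computed for binary masks with SciPy. A small sketch on toy masks, not the study's data:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(mask_a, mask_b):
    """Symmetric Hausdorff distance between the foreground voxels of two
    binary masks, in pixel units."""
    a = np.argwhere(mask_a)
    b = np.argwhere(mask_b)
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

# Two toy lesion masks: identical 3x3 squares, one shifted down two pixels.
m1 = np.zeros((10, 10), dtype=bool)
m1[2:5, 2:5] = True
m2 = np.zeros((10, 10), dtype=bool)
m2[4:7, 2:5] = True

print(f"Hausdorff distance = {hausdorff(m1, m2):.1f} px")
```

Unlike the Dice coefficient, which is insensitive to where mismatched voxels lie, this distance grows directly with how far the predicted boundary strays from the true one, which is why the abstract treats it as the more sensitive metric.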


Mathematics ◽  
2020 ◽  
Vol 8 (9) ◽  
pp. 1606
Author(s):  
Daniela Onita ◽  
Adriana Birlutiu ◽  
Liviu P. Dinu

Images and text represent types of content that are used together to convey a message. The process of mapping images to text can provide very useful information and can be included in many applications, from the medical domain to applications for blind people, social networking, etc. In this paper, we investigate an approach for mapping images to text using a Kernel Ridge Regression model. We considered two types of features: simple RGB pixel-value features and image features extracted with deep-learning approaches. We investigated several neural network architectures for image feature extraction: VGG16, Inception V3, ResNet50, and Xception. The experimental evaluation was performed on three data sets from different domains. The texts associated with the images represent objective descriptions for two of the three data sets and subjective descriptions for the other. The experimental results show that the more complex deep-learning approaches used for feature extraction perform better than the simple RGB pixel-value approaches. Moreover, the ResNet50 architecture performs best among the four deep network architectures considered for extracting image features: the model error obtained using ResNet50 is approximately 0.30 lower than that of the other architectures. We extracted natural language descriptors of images and compared the original and generated descriptive words. Furthermore, we investigated whether there is a difference in performance between the types of text associated with the images: subjective or objective. The proposed model generated descriptions more similar to the original ones for the data set containing objective descriptions, whose vocabulary is simpler, larger, and clearer.
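The core model, Kernel Ridge Regression from image features to a text representation, can be sketched with scikit-learn. All data below are synthetic stand-ins; the paper's actual feature and text encodings are not reproduced here.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(3)

# Stand-ins: 100 "images" with 50-dim feature vectors (as a CNN such as
# ResNet50 might produce) mapped to 10-dim bag-of-words text vectors.
X = rng.normal(size=(100, 50))
W = rng.normal(size=(50, 10))
Y = X @ W + 0.01 * rng.normal(size=(100, 10))  # noisy text targets

# Kernel Ridge Regression supports multi-output targets directly.
krr = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.01)
krr.fit(X[:80], Y[:80])
pred = krr.predict(X[80:])

# The predicted vector could then be decoded to descriptive words,
# e.g. by taking the highest-scoring vocabulary entries.
print("predicted text-vector shape:", pred.shape)
```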


2021 ◽  
Vol 11 ◽  
Author(s):  
Nam Nhut Phan ◽  
Chi-Cheng Huang ◽  
Ling-Ming Tseng ◽  
Eric Y. Chuang

We proposed a highly versatile two-step transfer learning pipeline for predicting the gene signature defining the intrinsic breast cancer subtypes using unannotated pathological images. Deciphering breast cancer molecular subtypes with deep learning approaches could provide a convenient and efficient method for diagnosing breast cancer patients, reducing the costs associated with transcriptional profiling and the subtyping discrepancy between IHC assays and mRNA expression. Four pretrained models (VGG16, ResNet50, ResNet101, and Xception) were trained with our in-house pathological images from breast cancer patients with recurrence status in the first transfer learning step, and with the TCGA-BRCA dataset in the second transfer learning step. Furthermore, we also trained a ResNet101 model with ImageNet weights for comparison with the aforementioned models. The two-step deep learning models showed promising classification results for the four breast cancer intrinsic subtypes, with accuracy ranging from 0.68 (ResNet50) to 0.78 (ResNet101) in both validation and testing sets. Additionally, slide-wise prediction showed an even higher average accuracy of 0.913 with the ResNet101 model. The micro- and macro-average area under the curve (AUC) for these models ranged from 0.88 (ResNet50) to 0.94 (ResNet101), whereas ResNet101_imgnet, weighted with ImageNet, achieved an AUC of 0.92. We also show that the deep learning models' prediction performance is significantly improved relative to the common Genefu tool for breast cancer classification. Our study demonstrated the capability of deep learning models to classify breast cancer intrinsic subtypes without region-of-interest annotation, which will facilitate the clinical applicability of the proposed models.
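The micro- and macro-average AUCs reported for the four intrinsic subtypes can be computed with scikit-learn. A sketch on synthetic softmax scores (not the study's predictions); micro-averaging is done by flattening the binarized labels so it works across scikit-learn versions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

rng = np.random.default_rng(4)

# Synthetic softmax scores for 4 subtypes over 200 image tiles.
n, k = 200, 4
y_true = rng.integers(0, k, size=n)
logits = rng.normal(size=(n, k))
logits[np.arange(n), y_true] += 2.0  # make the scores informative
y_score = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# Macro: mean of per-class one-vs-rest AUCs.
macro = roc_auc_score(y_true, y_score, multi_class="ovr", average="macro")

# Micro: a single AUC over the flattened one-vs-rest problems.
y_bin = label_binarize(y_true, classes=list(range(k)))
micro = roc_auc_score(y_bin.ravel(), y_score.ravel())

print(f"macro AUC = {macro:.2f}, micro AUC = {micro:.2f}")
```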

