Identifying Patient–Ventilator Asynchrony on a Small Dataset Using Image-Based Transfer Learning

Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 4149
Author(s):  
Qing Pan ◽  
Mengzhe Jia ◽  
Qijie Liu ◽  
Lingwei Zhang ◽  
Jie Pan ◽  
...  

Mechanical ventilation is an essential life-support treatment for patients who cannot breathe independently. Patient–ventilator asynchrony (PVA) occurs when ventilatory support does not match the needs of the patient and is associated with a series of adverse clinical outcomes. Deep learning methods have shown a strong discriminative ability for PVA detection, but they require a large amount of annotated data for model training, which hampers their application to this task. We developed a transfer learning architecture based on pretrained convolutional neural networks (CNN) and used it for PVA recognition on small datasets. The one-dimensional signal was converted to a two-dimensional image, and features were extracted by the CNN using pretrained weights for classification. A partial dropping cross-validation technique was developed to evaluate model performance on small datasets. When using large datasets, the performance of the proposed method was similar to that of non-transfer learning methods. However, when the amount of data was reduced to 1%, the accuracy of transfer learning was approximately 90%, whereas the accuracy of non-transfer learning was less than 80%. The findings suggest that the proposed transfer learning method can obtain satisfactory accuracies for PVA detection when using small datasets. Such a method can promote the application of deep learning to detect more types of PVA under various ventilation modes.
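The abstract does not specify how the one-dimensional waveform is converted into a two-dimensional image, so the sketch below is only one plausible minimal approach (resample, min-max normalise, reshape row by row, replicate to three channels for an RGB-pretrained CNN); the function name `signal_to_image` and all parameters are hypothetical.

```python
import numpy as np

def signal_to_image(signal, size=224):
    """Convert a 1-D ventilator waveform into a (size, size, 3) image array."""
    signal = np.asarray(signal, dtype=np.float64)
    # Resample to exactly size*size points by linear interpolation.
    x_old = np.linspace(0.0, 1.0, num=len(signal))
    x_new = np.linspace(0.0, 1.0, num=size * size)
    resampled = np.interp(x_new, x_old, signal)
    # Min-max normalise to [0, 1]; a constant signal maps to zeros.
    lo, hi = resampled.min(), resampled.max()
    resampled = (resampled - lo) / (hi - lo) if hi > lo else np.zeros_like(resampled)
    # Reshape row by row, then replicate the single channel to 3 channels,
    # matching the input layout of CNNs pretrained on RGB images.
    image = resampled.reshape(size, size)
    return np.stack([image, image, image], axis=-1)

# Example: a synthetic airway-pressure-like waveform.
t = np.linspace(0, 10, 5000)
waveform = np.abs(np.sin(2 * np.pi * 0.5 * t))
img = signal_to_image(waveform)
```

In practice the resulting array would be fed to the pretrained CNN as a regular image batch.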

2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Young-Gon Kim ◽  
Sungchul Kim ◽  
Cristina Eunbee Cho ◽  
In Hye Song ◽  
Hee Jin Lee ◽  
...  

Abstract Fast and accurate confirmation of metastasis on the frozen tissue section of intraoperative sentinel lymph node biopsy is an essential tool for critical surgical decisions. However, accurate diagnosis by pathologists is difficult within the time limitations. Training a robust and accurate deep learning model is also difficult owing to the limited number of frozen datasets with high quality labels. To overcome these issues, we validated the effectiveness of transfer learning from CAMELYON16 to improve performance of the convolutional neural network (CNN)-based classification model on our frozen dataset (N = 297) from Asan Medical Center (AMC). Among the 297 whole slide images (WSIs), 157 and 40 WSIs were used to train deep learning models with different dataset ratios at 2, 4, 8, 20, 40, and 100%. The remaining 100 WSIs were used to validate model performance in terms of patch- and slide-level classification. An additional 228 WSIs from Seoul National University Bundang Hospital (SNUBH) were used as an external validation. Three initial weights, i.e., scratch-based (random initialization), ImageNet-based, and CAMELYON16-based models, were used to validate their effectiveness in external validation. In the patch-level classification results on the AMC dataset, CAMELYON16-based models trained with a small dataset (up to 40%, i.e., 62 WSIs) showed a significantly higher area under the curve (AUC) of 0.929 than those of the scratch- and ImageNet-based models at 0.897 and 0.919, respectively, while CAMELYON16-based and ImageNet-based models trained with 100% of the training dataset showed comparable AUCs at 0.944 and 0.943, respectively. For the external validation, CAMELYON16-based models showed higher AUCs than those of the scratch- and ImageNet-based models. These results validate the feasibility of transfer learning to enhance model performance on frozen section datasets with limited numbers of samples.
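Patch-level classification of whole slide images presupposes tiling each slide into fixed-size patches. The abstract does not describe the tiling procedure, so the following is a minimal sketch under the common assumption of non-overlapping square tiles; the function name `extract_patches` and the 256-pixel patch size are hypothetical.

```python
import numpy as np

def extract_patches(wsi, patch_size=256):
    """Tile a whole-slide image array into non-overlapping square patches.

    wsi: (H, W, 3) array. Partial tiles at the right/bottom borders are
    discarded. Returns shape (n_patches, patch_size, patch_size, 3).
    """
    h, w = wsi.shape[:2]
    rows, cols = h // patch_size, w // patch_size
    patches = [
        wsi[r * patch_size:(r + 1) * patch_size,
            c * patch_size:(c + 1) * patch_size]
        for r in range(rows) for c in range(cols)
    ]
    return np.stack(patches)

# Example: a dummy 1024x768 "slide" yields a 4x3 grid of patches.
slide = np.zeros((1024, 768, 3), dtype=np.uint8)
patches = extract_patches(slide)
```

Each patch would then be scored by the CNN, with patch scores aggregated for slide-level classification.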


2021 ◽  
Author(s):  
Süleyman UZUN ◽  
Sezgin KAÇAR ◽  
Burak ARICIOĞLU

Abstract In this study, for the first time in the literature, we aim to identify different chaotic systems by classifying graphic images of their time series with deep learning methods. For this purpose, a dataset is generated that consists of the graphic images of time series of the three best-known chaotic systems: the Lorenz, Chen, and Rossler systems. The time series are obtained for different parameter values, initial conditions, step sizes, and time lengths. After generating the dataset, a high-accuracy classification is performed using the transfer learning method. In the study, the most widely accepted deep learning models of the transfer learning methods are employed. These models are SqueezeNet, VGG-19, AlexNet, ResNet50, ResNet101, DenseNet201, ShuffleNet and GoogLeNet. As a result of the study, classification accuracy is found to be between 96% and 97% depending on the problem. Thus, this study makes it possible to associate real-time random signals with a mathematical system.
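Generating such a dataset starts from numerically integrating the chaotic systems. As a sketch of the first step, here is a standard 4th-order Runge-Kutta integration of the Lorenz system with its classic parameters (the abstract does not state the integration scheme or parameter values used; `lorenz_series` and its defaults are assumptions).

```python
import numpy as np

def lorenz_series(n_steps=10000, dt=0.01, state=(1.0, 1.0, 1.0),
                  sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz system with 4th-order Runge-Kutta.

    Returns an (n_steps, 3) array of the x, y, z time series, which
    could then be rendered to graphic images for classification.
    """
    def f(s):
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

    s = np.array(state, dtype=np.float64)
    out = np.empty((n_steps, 3))
    for i in range(n_steps):
        k1 = f(s)
        k2 = f(s + 0.5 * dt * k1)
        k3 = f(s + 0.5 * dt * k2)
        k4 = f(s + dt * k3)
        s = s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        out[i] = s
    return out

series = lorenz_series()
```

Varying the parameters, initial conditions, step size and length, as the study describes, would produce the diversity needed in the image dataset.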


2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Yixiang Deng ◽  
Lu Lu ◽  
Laura Aponte ◽  
Angeliki M. Angelidi ◽  
Vera Novak ◽  
...  

Abstract Accurate prediction of blood glucose variations in type 2 diabetes (T2D) will facilitate better glycemic control and decrease the occurrence of hypoglycemic episodes as well as the morbidity and mortality associated with T2D, hence increasing the quality of life of patients. Owing to the complexity of the blood glucose dynamics, it is difficult to design accurate predictive models in every circumstance, i.e., hypo/normo/hyperglycemic events. We developed deep-learning methods that use patient-specific 30-min-long windows of continuous glucose monitoring (CGM) measurements to predict each patient's glucose levels 5 min to 1 h into the future. In general, the major challenges to address are (1) the dataset of each patient is often too small to train a patient-specific deep-learning model, and (2) the dataset is usually highly imbalanced given that hypo- and hyperglycemic episodes are usually much less common than normoglycemia. We tackle these two challenges using transfer learning and data augmentation, respectively. We systematically examined three neural network architectures, different loss functions, four transfer-learning strategies, and four data augmentation techniques, including mixup and generative models. Taken together, utilizing these methodologies we achieved over 95% prediction accuracy and 90% sensitivity for a time period within the clinically useful 1 h prediction horizon that would allow a patient to react and correct hypoglycemia and/or hyperglycemia. We have also demonstrated that the same network architecture and transfer-learning methods perform well for the type 1 diabetes OhioT1DM public dataset.
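Of the augmentation techniques named, mixup has a particularly compact definition: each sample is blended with a randomly chosen partner using a Beta-distributed coefficient. A minimal numpy sketch applied to CGM windows (the function signature, window length, and alpha value below are illustrative, not from the paper):

```python
import numpy as np

def mixup(x, y, alpha=0.4, rng=None):
    """Mixup augmentation for a batch of glucose windows.

    x: (batch, window_len) CGM windows; y: (batch, n_classes) one-hot
    labels. Each row is blended with a random partner row using a
    Beta(alpha, alpha) coefficient, yielding soft labels.
    """
    rng = np.random.default_rng(rng)
    lam = rng.beta(alpha, alpha, size=(len(x), 1))   # one coefficient per sample
    perm = rng.permutation(len(x))                   # random partner assignment
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y + (1.0 - lam) * y[perm]
    return x_mix, y_mix

# Example: 4 windows of 6 readings, 3 classes (hypo/normo/hyper).
x = np.arange(24, dtype=np.float64).reshape(4, 6)
y = np.eye(3)[[0, 1, 2, 1]]
x_mix, y_mix = mixup(x, y, rng=0)
```

Because mixed labels are convex combinations of one-hot vectors, each augmented label still sums to one, which is what lets minority (hypo/hyper) examples be blended into the training stream.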


2021 ◽  
Vol 13 (1) ◽  
Author(s):  
Michael Franco-Garcia ◽  
Alex Benasutti ◽  
Larry Pearlstein ◽  
Mohammed Alabsi

Intelligent fault diagnosis utilizing deep learning algorithms has been widely investigated recently. Although previous results demonstrated excellent performance, features learned by Deep Neural Networks (DNN) are part of a large black box. Consequently, a lack of understanding of the underlying physical meanings embedded within the features can lead to poor performance when applied to different but related datasets, i.e., transfer learning applications. This study investigates the transfer learning performance of a Convolutional Neural Network (CNN) considering 4 different operating conditions. Utilizing the Case Western Reserve University (CWRU) bearing dataset, the CNN is trained to classify 12 classes. Each class represents a unique fault scenario with varying severity, e.g., inner race faults of 0.007" and 0.014" diameter. Initially, zero-load data are utilized for model training, and the model is tuned until a testing accuracy above 99% is obtained. The model performance is then evaluated by feeding in vibration data collected when the load is varied to 1, 2 and 3 HP. Initial results indicated that the classification accuracy degrades substantially under changing loads. Hence, this paper visualizes convolution kernels in the time and frequency domains and investigates the influence of changing loads on fault characteristics, network classification mechanism and activation strength.
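Visualizing a learned convolution kernel in the frequency domain amounts to taking the magnitude of its Fourier transform, which shows which vibration bands the filter responds to. A minimal sketch (the 12 kHz sampling rate is a common CWRU setting; the function name and FFT length are illustrative):

```python
import numpy as np

def kernel_spectrum(kernel, fs=12000, n_fft=512):
    """Magnitude spectrum of a 1-D convolution kernel.

    Zero-pads the kernel to n_fft points and returns the positive
    half-spectrum as (freqs_hz, magnitude).
    """
    spectrum = np.fft.rfft(kernel, n=n_fft)
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)
    return freqs, np.abs(spectrum)

# Example: a 64-tap kernel shaped like a 1 kHz tone burst; its spectrum
# should peak near 1 kHz.
taps = np.sin(2 * np.pi * 1000 * np.arange(64) / 12000)
freqs, mag = kernel_spectrum(taps)
peak_hz = freqs[np.argmax(mag)]
```

Comparing such spectra against known bearing fault frequencies is one way to attach physical meaning to the black-box features.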


2020 ◽  
Author(s):  
Gherman Novakovsky ◽  
Manu Saraswat ◽  
Oriol Fornes ◽  
Sara Mostafavi ◽  
Wyeth W. Wasserman

Abstract Background Deep learning has proven to be a powerful technique for transcription factor (TF) binding prediction, but requires large training datasets. Transfer learning can reduce the amount of data required for deep learning, while improving overall model performance, compared to training a separate model for each new task. Results We assess a transfer learning strategy for TF binding prediction consisting of a pre-training step, wherein we train a multi-task model with multiple TFs, and a fine-tuning step, wherein we initialize single-task models for individual TFs with the weights learned by the multi-task model, after which the single-task models are trained at a lower learning rate. We corroborate that transfer learning improves model performance, especially if in the pre-training step the multi-task model is trained with biologically-relevant TFs. We show the effectiveness of transfer learning for TFs with ∼500 ChIP-seq peak regions. Using model interpretation techniques, we demonstrate that the features learned in the pre-training step are refined in the fine-tuning step to resemble the binding motif of the target TF (i.e. the recipient of transfer learning in the fine-tuning step). Moreover, pre-training with biologically-relevant TFs allows single-task models in the fine-tuning step to learn features other than the motif of the target TF. Conclusions Our results confirm that transfer learning is a powerful technique for TF binding prediction.
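The fine-tuning initialization described (copy shared weights from the multi-task model, keep only the target TF's output head, then train at a lower learning rate) can be sketched with plain arrays. The layer names, shapes, and learning-rate value below are hypothetical, not from the paper:

```python
import numpy as np

def init_single_task(multi_task_weights, tf_index):
    """Initialise a single-task model from trained multi-task weights.

    Shared layers are copied as-is; the multi-task output layer of shape
    (hidden_dim, n_tfs) is sliced to the single column for the target TF.
    """
    single = {k: v.copy() for k, v in multi_task_weights.items()
              if k != "output"}
    single["output"] = multi_task_weights["output"][:, [tf_index]].copy()
    return single

# Example: toy multi-task model with 3 TF output heads.
rng = np.random.default_rng(0)
multi = {"conv1": rng.standard_normal((4, 8)),
         "dense": rng.standard_normal((8, 16)),
         "output": rng.standard_normal((16, 3))}
single = init_single_task(multi, tf_index=1)
fine_tune_lr = 1e-4   # lower than the pre-training learning rate
```

Training would then resume on the single-task data with `fine_tune_lr`, so the pre-trained features are refined rather than overwritten.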


2021 ◽  
Vol 25 (1) ◽  
pp. 87-100
Author(s):  
Meng Gao ◽  
Haodong Wang ◽  
Weizheng Shen ◽  
Zhongbin Su ◽  
...  

In dairy herd management, rapid and effective diagnosis of dairy cow diseases by veterinarians is essential and irreplaceable. Based on electronic medical records, deep learning (DL) has been widely used to support clinical decisions for humans. However, this method is rarely adopted in veterinary diagnosis. In addition, most DL models are driven by large datasets, failing to utilize the knowledge veterinarians acquire through subjective experience, which is critical to disease diagnosis. To address these problems, this paper proposes a DL method for disease diagnosis of dairy cows: a convolutional neural network (CNN) based on a knowledge graph and transfer learning (KGTL_CNN). Firstly, structural knowledge was extracted from a knowledge graph of dairy cow diseases and treated as part of the inputs to the CNN based on the knowledge graph (KG_CNN). Then, the model performance was enhanced through pre-training by transfer learning. To verify its performance, experiments were carried out on dairy cow clinical datasets. The results show that our model performed satisfactorily on disease diagnosis: the KG_CNN and KGTL_CNN achieved F1-scores of 85.87% and 86.77%, respectively, higher than that of a typical CNN by 6.58% and 7.7%. The research results greatly promote effective, fast, and automatic clinical diagnosis of dairy cow diseases.
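The abstract does not detail how structural knowledge from the graph enters the CNN, so the sketch below shows only one simple possibility: scoring each disease by how many observed symptoms link to it in a disease-symptom graph, producing an extra feature vector. The graph edges, function name, and disease names are all hypothetical illustrations.

```python
import numpy as np

def kg_feature_vector(symptoms, disease_symptom_graph, diseases):
    """One structural feature per disease: the count of observed
    symptoms linked to that disease in the knowledge graph."""
    return np.array([
        sum(1 for s in symptoms if s in disease_symptom_graph.get(d, set()))
        for d in diseases
    ], dtype=np.float64)

# Example: a toy knowledge graph of dairy cow diseases.
graph = {
    "mastitis": {"swollen udder", "fever", "reduced milk"},
    "ketosis": {"reduced appetite", "weight loss", "reduced milk"},
}
diseases = ["mastitis", "ketosis"]
features = kg_feature_vector({"fever", "reduced milk"}, graph, diseases)
```

Such a vector could be concatenated with the learned text features as part of the CNN's input.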


2020 ◽  
Vol 10 (7) ◽  
pp. 442 ◽  
Author(s):  
You Wang ◽  
Ming Zhang ◽  
RuMeng Wu ◽  
Han Gao ◽  
Meng Yang ◽  
...  

Silent speech decoding is a novel application of the Brain–Computer Interface (BCI) based on articulatory neuromuscular activities, reducing difficulties in data acquisition and processing. In this paper, spatial features and decoders that can be used to recognize the neuromuscular signals are investigated. Surface electromyography (sEMG) data are recorded from human subjects in mimed speech situations. Specifically, we propose to utilize transfer learning and deep learning methods by transforming the sEMG data into spectrograms that contain abundant information in the time and frequency domains and are regarded as channel-interactive. For transfer learning, an Xception model pre-trained on a large image dataset is used for feature generation. Three deep learning methods, Multi-Layer Perceptron, Convolutional Neural Network and bidirectional Long Short-Term Memory, are then trained using the extracted features and evaluated for recognizing the articulatory muscles' movements in our word set. The proposed decoders successfully recognized the silent speech, and bidirectional Long Short-Term Memory achieved the best accuracy of 90%, outperforming the other two algorithms. Experimental results demonstrate the validity of spectrogram features and deep learning algorithms.
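Transforming an sEMG channel into a spectrogram is a short-time Fourier transform over windowed frames. A minimal numpy sketch (frame length, hop size, and the 1 kHz example sampling rate are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def spectrogram(x, frame_len=128, hop=64):
    """Magnitude spectrogram of a 1-D signal via a Hann-windowed STFT.

    Returns shape (n_frames, frame_len // 2 + 1); trailing samples that
    do not fill a frame are dropped.
    """
    window = np.hanning(frame_len)
    n_frames = 1 + (len(x) - frame_len) // hop
    frames = np.stack([x[i * hop:i * hop + frame_len] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))

# Example: 1 s of a synthetic 1 kHz-sampled sEMG-like signal.
t = np.arange(1000) / 1000.0
signal = np.sin(2 * np.pi * 80 * t) + 0.1 * np.sin(2 * np.pi * 200 * t)
spec = spectrogram(signal)
```

Stacking the per-channel spectrograms would give the channel-interactive image that the pre-trained Xception model consumes for feature generation.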


2020 ◽  
Author(s):  
Eric Yi ◽  
Yanling Liu

Abstract Background Tumor classification and feature quantification from H&E histology images are critical tasks for cancer diagnosis, cancer research, and treatment. However, both tasks involve tedious and time-consuming manual examination of histology images. We explored the usage of deep learning methods in segmentation and classification of histology images of cancer tissue for their potential in computer-aided tumor diagnosis and other clinical and research applications. Specifically, we evaluated the performance of selected deep learning methods in stroma and glandular object segmentation in tumor image data and in tumor image classification. We automated these tasks to help facilitate downstream tumor image analysis, reduce the labor load of pathologists, and provide them with a second opinion on their analysis. Methods We modified a patch-based U-Net model and trained it to perform stroma detection and segmentation in cancer tissue. Then the semantic segmentation capabilities of the U-Net model were compared with those of a DeepLabV3+ model. We explored the possible use of transfer learning to train a patch-based model to classify cancer tissue images as carcinoma or sarcoma and to further classify them as carcinoma subtypes. Results In spite of the limited dataset available for the pilot study, we found that the DeepLabV3+ model performed biomedical image segmentation more effectively than U-Net when k-fold cross-validation was utilized, but U-Net still showed promise as an effective and efficient model when we used a customized validation approach. We believe that the DeepLabV3+ model can perform segmentation with even more accuracy if computation resource constraints are removed or if more data is used to augment the result. In terms of tumor classification, our selected models consistently achieved test accuracies above 80%, with a model trained using transfer learning, with the VGG-16 network as the feature extractor (convolutional base), performing best. For multi-class tumor subtype classification, we also observed promising test accuracies from our models; a customized post-processing method provided even higher prediction accuracy on test set images, and this method can be further investigated. Conclusions This pilot exploratory study provided strong evidence for the powerful potential of deep learning models for segmentation and classification of tumor image data.
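The customized post-processing is not described in the abstract; one common way to lift patch-based predictions to an image-level decision is a majority vote, sketched below purely as an illustration (the function name and class labels are hypothetical).

```python
from collections import Counter

def slide_label(patch_predictions):
    """Aggregate patch-level class predictions into one image-level label
    by majority vote (ties broken by first-seen class)."""
    counts = Counter(patch_predictions)
    return counts.most_common(1)[0][0]

# Example: 7 patch predictions for one tissue image.
preds = ["carcinoma", "carcinoma", "sarcoma", "carcinoma",
         "sarcoma", "carcinoma", "carcinoma"]
label = slide_label(preds)
```

Voting over many patches tends to smooth out isolated patch misclassifications, which is one reason image-level accuracy can exceed patch-level accuracy.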


2021 ◽  
Author(s):  
kaiwen wu ◽  
Bo Xu ◽  
Ying Wu

Abstract Manual recognition of breast ultrasound images is a heavy workload for radiologists and prone to misdiagnosis. Traditional machine learning methods and deep learning methods require huge datasets and a lot of time for training. To solve these problems, this paper proposed a deep transfer learning method, comparing ResNet18 and ResNet50 models pre-trained on the ImageNet dataset with ResNet18 and ResNet50 models without pre-training. The dataset consists of 131 breast ultrasound images (109 benign and 22 malignant), all of which were collected, labeled and provided by the UDIAT Diagnostic Center. The experimental results showed that the pre-trained ResNet18 model has the best classification performance on breast ultrasound images. It achieved an accuracy of 93.9%, an F1-score of 0.94, and an area under the receiver operating characteristic curve (AUC) of 0.944. Compared with ordinary deep learning models, its classification performance was greatly improved, demonstrating the significant advantages of deep transfer learning in the classification of small samples of medical images.
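With only 22 malignant cases, AUC is a more informative metric than accuracy because it is insensitive to the class imbalance. For reference, AUC can be computed directly from scores via the Mann-Whitney U statistic (this is a standard definition, not code from the paper; the example scores are made up):

```python
import numpy as np

def roc_auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a random positive outscores a random negative,
    with ties counted as one half."""
    labels = np.asarray(labels)
    scores = np.asarray(scores, dtype=np.float64)
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Example: 3 positives and 3 negatives with one mis-ranked pair.
auc = roc_auc([1, 1, 0, 0, 1, 0], [0.9, 0.8, 0.7, 0.2, 0.6, 0.4])
```

Here 8 of the 9 positive-negative pairs are ranked correctly, so the AUC is 8/9.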

