Deep Learning Methods in the Diagnosis of Sacroiliitis from Plain Pelvic Radiographs

2021 ◽  
Author(s):  
Kemal Üreten ◽  
Yüksel Maraş ◽  
Semra Duran ◽  
Kevser Gök

Abstract Objectives The aim of this study is to develop a computer-aided diagnosis method to assist physicians in evaluating sacroiliac radiographs. Methods Convolutional neural networks, a deep learning method, were used in this retrospective study. Transfer learning was implemented with pre-trained VGG-16, ResNet-101 and Inception-v3 networks. Normal pelvic radiographs (n = 290) and pelvic radiographs with sacroiliitis (n = 295) were used for training the networks. Results The training results were evaluated with the criteria of accuracy, sensitivity, specificity and precision, calculated from the confusion matrix, and the AUC (area under the ROC curve), calculated from the ROC (receiver operating characteristic) curve. The pre-trained VGG-16 model achieved accuracy, sensitivity, specificity, precision and AUC figures of 89.9%, 90.9%, 88.9%, 88.9% and 0.96 with test images, respectively. These results were 84.3%, 91.9%, 78.8%, 75.6% and 0.92 with pre-trained ResNet-101, and 82.0%, 79.6%, 85.0%, 86.7% and 0.90 with pre-trained Inception-v3, respectively. Conclusions Successful results were obtained with all three models in this study, in which transfer learning was applied with pre-trained VGG-16, ResNet-101 and Inception-v3 networks. This method can assist clinicians in the diagnosis of sacroiliitis, provide them with a second objective interpretation, and also reduce the need for advanced imaging methods such as magnetic resonance imaging (MRI).
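For reference, the four confusion-matrix criteria reported in this abstract can be computed directly from the raw counts. The sketch below is illustrative only and not code from the study; the function name and inputs are hypothetical:

```python
def confusion_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, specificity and precision from binary
    confusion-matrix counts (positive class = sacroiliitis)."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # recall on the positive class
    specificity = tn / (tn + fp)   # recall on the negative class
    precision = tp / (tp + fp)     # positive predictive value
    return accuracy, sensitivity, specificity, precision
```

The AUC, in contrast, is threshold-independent and is computed from the full ROC curve rather than from a single confusion matrix.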

2021 ◽  
Vol 14 (3) ◽  
pp. 1231-1247
Author(s):  
Lokesh Singh ◽  
Rekh Ram Janghel ◽  
Satya Prakash Sahu

Purpose: Low contrast between lesions and skin, blurriness, darkened lesion images, and the presence of bubbles and hairs are artifacts that make timely and accurate diagnosis of melanoma challenging. In addition, the strong similarity between nevus lesions and melanoma complicates the investigation of melanoma even for expert dermatologists. Method: In this work, a computer-aided diagnosis system for melanoma detection (CAD-MD) is designed and evaluated for the early and accurate detection of melanoma, using the potential of machine learning and deep learning-based transfer learning for the classification of pigmented skin lesions. The designed CAD-MD comprises preprocessing, segmentation, feature extraction and classification. Experiments are conducted on dermoscopic images from the publicly available PH2 and ISIC 2016 datasets using machine learning and deep learning-based transfer learning models in two folds: first with the actual images, and second with augmented images. Results: Optimal results are obtained on augmented lesion images using machine learning and deep learning models on the PH2 and ISIC-16 datasets. The performance of the CAD-MD system is evaluated using accuracy, sensitivity, specificity, the Dice coefficient, and the Jaccard index. Conclusion: Empirical results show that the deep learning-based transfer learning model VGG-16 significantly outperformed all employed models, with an accuracy of 99.1% on the PH2 dataset.
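The two segmentation metrics named in this abstract, the Jaccard index and the Dice coefficient, are standard overlap measures on binary masks. A minimal sketch (illustrative only; mask representation is my own, as flat 0/1 lists):

```python
def jaccard_dice(pred, truth):
    """Jaccard index and Dice coefficient between two binary masks
    given as equal-length flat lists of 0/1 pixels."""
    inter = sum(p & t for p, t in zip(pred, truth))   # overlap pixels
    union = sum(p | t for p, t in zip(pred, truth))   # pixels in either mask
    total = sum(pred) + sum(truth)
    jaccard = inter / union if union else 1.0
    dice = 2 * inter / total if total else 1.0
    return jaccard, dice
```

The two are monotonically related (Dice = 2J / (1 + J)), so they rank segmentations identically but penalize partial overlap differently.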


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Shu-Hui Wang ◽  
Xin-Jun Han ◽  
Jing Du ◽  
Zhen-Chang Wang ◽  
Chunwang Yuan ◽  
...  

Abstract Background The imaging features of focal liver lesions (FLLs) are diverse and complex. Diagnosing FLLs with imaging alone remains challenging. We developed and validated an interpretable deep learning model for the classification of seven categories of FLLs on multisequence MRI and compared the differential diagnosis between the proposed model and radiologists. Methods In all, 557 lesions examined by multisequence MRI were utilised in this retrospective study and divided into training–validation (n = 444) and test (n = 113) datasets. The area under the receiver operating characteristic curve (AUC) was calculated to evaluate the performance of the model. The accuracy and confusion matrix of the model and individual radiologists were compared. Saliency maps were generated to highlight the activation region based on the model perspective. Results The AUC of the two- and seven-way classifications of the model were 0.969 (95% CI 0.944–0.994) and from 0.919 (95% CI 0.857–0.980) to 0.999 (95% CI 0.996–1.000), respectively. The model accuracy (79.6%) of the seven-way classification was higher than that of the radiology residents (66.4%, p = 0.035) and general radiologists (73.5%, p = 0.346) but lower than that of the academic radiologists (85.4%, p = 0.291). Confusion matrices showed the sources of diagnostic errors for the model and individual radiologists for each disease. Saliency maps detected the activation regions associated with each predicted class. Conclusion This interpretable deep learning model showed high diagnostic performance in the differentiation of FLLs on multisequence MRI. The analysis principle contributing to the predictions can be explained via saliency maps.
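The AUC used to evaluate the model above has a useful probabilistic reading: it equals the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative case, with ties counting half. A pure-Python sketch of this rank-based estimate (illustrative only, not code from the paper):

```python
def auc(scores_pos, scores_neg):
    """Rank-based AUC estimate: fraction of (positive, negative) score
    pairs where the positive case outranks the negative (ties count 0.5)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

This O(n²) form is fine for illustration; production implementations use a sorting-based O(n log n) equivalent.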


2021 ◽  
Author(s):  
Zheng Wang ◽  
Qingjun Qian ◽  
Jianfang Zhang ◽  
Caihong Duo ◽  
Wen He ◽  
...  

Abstract Background: The diagnosis of pneumoconiosis relies primarily on chest radiographs and exhibits significant variability between physicians. Computer-aided diagnosis (CAD) can improve the accuracy and consistency of these diagnoses. However, CAD based on conventional machine learning requires extensive human intervention and time-consuming training. As such, deep learning has become a popular tool for the development of CAD models. In this study, the clinical applicability of CAD based on deep learning was verified for pneumoconiosis patients. Methods: Chest radiographs were collected from 5424 occupational health examinees who met the inclusion criteria. The data were divided into training, validation, and test sets. The CAD algorithm was trained and applied to the validation set, while the test set was used to evaluate diagnostic efficacy. Three junior and three senior physicians provided independent diagnoses using images from the test set and a comprehensive diagnosis for comparison with the CAD results. A receiver operating characteristic (ROC) curve was used to evaluate the diagnostic efficiency of the proposed CAD system. A McNemar test was used to evaluate diagnostic sensitivity and specificity for pneumoconiosis, both before and after the use of CAD. A kappa consistency test was used to evaluate the diagnostic consistency of both the algorithm and the clinicians. Results: ROC results suggested the proposed CAD model achieved high accuracy in the diagnosis of pneumoconiosis, with a kappa value of 0.90. The sensitivity, specificity, and kappa values for the junior doctors increased from 0.86 to 0.98, 0.68 to 0.86, and 0.54 to 0.84, respectively (p < 0.05), when CAD was applied. However, metrics for the senior doctors were not significantly different. Conclusion: Deep learning-based CAD can improve the diagnostic sensitivity, specificity, and consistency of pneumoconiosis diagnoses, particularly for junior physicians.
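The McNemar test used above compares paired diagnoses (the same radiographs read before and after CAD) and depends only on the discordant pairs. A minimal sketch with the continuity-corrected chi-square statistic (illustrative only; the study does not publish its test implementation):

```python
import math

def mcnemar(b, c):
    """McNemar test on discordant pairs of a paired 2x2 table:
    b = cases correct only before CAD, c = cases correct only after CAD.
    Returns the continuity-corrected chi-square statistic (1 df) and
    its p-value. Assumes b + c > 0."""
    chi2 = (abs(b - c) - 1) ** 2 / (b + c)
    # Survival function of a chi-square with 1 degree of freedom.
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p
```

With small discordant counts an exact binomial version is preferred; the chi-square form shown is the common large-sample approximation.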


Medicina ◽  
2021 ◽  
Vol 57 (11) ◽  
pp. 1148
Author(s):  
Marie Takahashi ◽  
Tomoyuki Fujioka ◽  
Toshihiro Horii ◽  
Koichiro Kimura ◽  
Mizuki Kimura ◽  
...  

Background and Objectives: This study aimed to investigate whether predictive indicators for the deterioration of respiratory status can be derived from the deep learning data analysis of initial chest computed tomography (CT) scans of patients with coronavirus disease 2019 (COVID-19). Materials and Methods: Out of 117 CT scans of 75 patients with COVID-19 admitted to our hospital between April and June 2020, we retrospectively analyzed 79 CT scans that had a definite time of onset and were performed prior to any medication intervention. Patients were grouped according to the presence or absence of increased oxygen demand after CT scan. Quantitative volume data of lung opacity were measured automatically using a deep learning-based image analysis system. The sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) of the opacity volume data were calculated to evaluate the accuracy of the system in predicting the deterioration of respiratory status. Results: All 79 CT scans were included (median age, 62 years; interquartile range, 46–77 years); 56 patients (70.9%) were male. The volume of opacity was significantly higher for the increased oxygen demand group than for the non-increased oxygen demand group (585.3 vs. 132.8 mL, p < 0.001). The sensitivity, specificity, and AUC were 76.5%, 68.2%, and 0.737, respectively, in the prediction of increased oxygen demand. Conclusion: Deep learning-based quantitative analysis of the affected lung volume in the initial CT scans of patients with COVID-19 can predict the deterioration of respiratory status to improve treatment and resource management.
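Turning a continuous measurement like opacity volume into a sensitivity/specificity pair requires choosing a cutoff; a common rule is to maximize Youden's J = sensitivity + specificity - 1 over candidate thresholds. The sketch below is illustrative only (the paper does not state how its operating point was chosen); the function name and the toy volumes in the test are my own:

```python
def best_cutoff(pos, neg):
    """Threshold on a continuous marker (e.g. opacity volume in mL) that
    maximizes Youden's J. pos: values for patients whose oxygen demand
    later increased; neg: values for those whose demand did not."""
    best_t, best_j = None, -1.0
    for t in sorted(set(pos + neg)):
        sens = sum(v >= t for v in pos) / len(pos)  # predicted positive: value >= t
        spec = sum(v < t for v in neg) / len(neg)
        j = sens + spec - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j
```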


2021 ◽  
Vol 26 (2) ◽  
pp. 191-200
Author(s):  
Prasenjit Das ◽  
Jay Kant Pratap Singh Yadav ◽  
Arun Kumar Yadav

Tomato maturity classification is the process of classifying tomatoes by their maturity stage across the life cycle. A tomato is green when it starts to grow, yellow at its pre-ripening stage, and red when it has ripened. Thus, a tomato maturity classification task can be performed based on the color of the tomato. Conventional skill-based methods cannot meet the precise selection criteria of modern manufacturing management in the agriculture sector, since they are time-consuming and have poor accuracy. The automatic feature extraction behavior of deep learning networks is highly effective in image classification and recognition tasks. Hence, this paper outlines an automated grading system for tomato maturity classification in terms of color (red, green, yellow) using the pre-trained network 'AlexNet', based on transfer learning. This study aims to formulate a low-cost solution with the best performance and accuracy for tomato maturity grading. The results are reported in terms of accuracy, loss curves, and a confusion matrix. The results show that the proposed model outperforms the other deep learning and machine learning (ML) techniques used by researchers for tomato classification tasks in the last few years, obtaining 100% accuracy.
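To make the color-to-maturity mapping concrete: a crude baseline needs nothing more than the mean red and green channels of a fruit crop. The toy rule below is my own illustration of that idea with hypothetical thresholds; it is not the AlexNet model from the paper, which learns features automatically instead of using hand-set rules:

```python
def maturity_from_color(r, g):
    """Toy maturity rule from mean red/green channel values (0-255).
    Thresholds are hypothetical; the blue channel is ignored because
    the three target classes differ mainly in red vs. green."""
    if r > 1.5 * g:
        return "Red"      # ripened
    if g > 1.5 * r:
        return "Green"    # early growth
    return "Yellow"       # pre-ripening: red and green comparably strong
```

A rule like this fails under uneven lighting and occlusion, which is precisely the gap the learned CNN features are meant to close.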


2017 ◽  
Vol 59 (9) ◽  
pp. 1102-1109
Author(s):  
Seonji Jeong ◽  
Ja-Young Choi ◽  
Yu Suhn Kang ◽  
Hye Jin Yoo ◽  
Sae Hoon Kim ◽  
...  

Background Deep, high-grade bursal-sided supraspinatus tendon tears are sometimes preoperatively misinterpreted as full-thickness tears on shoulder magnetic resonance imaging (MRI). Purpose To determine the usefulness of the disproportionate fluid sign for differentiating high-grade bursal-sided partial-thickness tears from full-thickness tears on conventional MRI. Material and Methods Preoperative MRIs of 198 patients with arthroscopically confirmed high-grade bursal-sided partial-thickness tears or full-thickness tears were independently reviewed by two readers on two occasions. The presence of high-grade bursal-sided partial-thickness tears was assessed, with confidence rated on a five-point grading scale, based on tear depth alone and in combination with the disproportionate fluid sign, defined as prominent subdeltoid or subacromial-subdeltoid bursal fluid distension with a relative paucity of effusion in the glenohumeral joint. The sensitivity, specificity, accuracy, and area under the receiver operating characteristic curve (AUC) were calculated, as well as inter-observer reliability. Results The disproportionate fluid sign was identified in 60/74 (81.2%) bursal-sided partial-thickness tears and 9/124 (7.5%) full-thickness tears. The sensitivity and accuracy of the diagnosis of bursal-sided tear were higher when the disproportionate fluid sign was used in conjunction with tear depth, compared with tear depth alone (P < 0.001). There was excellent inter-observer agreement for the disproportionate fluid sign and deep bursal-sided tear. The AUCs were significantly higher in combination with the disproportionate fluid sign. Conclusion The disproportionate fluid sign indicates the presence of a deep, high-grade bursal-sided partial-thickness tear, which can be misinterpreted as a full-thickness tear. Thus, it can provide greater diagnostic assistance to less-experienced radiologists and clinicians.
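The inter-observer reliability reported above is conventionally quantified with Cohen's kappa, which corrects raw agreement between two readers for the agreement expected by chance. A minimal sketch (illustrative only; the study's statistical software is not specified):

```python
def cohens_kappa(reader1, reader2):
    """Cohen's kappa between two readers' categorical labels
    (equal-length lists). kappa = (po - pe) / (1 - pe), where po is
    observed agreement and pe is chance agreement from the marginals."""
    n = len(reader1)
    labels = set(reader1) | set(reader2)
    po = sum(a == b for a, b in zip(reader1, reader2)) / n
    pe = sum((reader1.count(l) / n) * (reader2.count(l) / n) for l in labels)
    return (po - pe) / (1 - pe)
```

For the five-point confidence scale a weighted kappa, which penalizes near-misses less than distant disagreements, would often be used instead.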


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Veerayuth Kittichai ◽  
Theerakamol Pengsakul ◽  
Kemmapon Chumchuen ◽  
Yudthana Samung ◽  
Patchara Sriwichai ◽  
...  

Abstract Microscopic observation of mosquito species, which is the basis of morphological identification, is a time-consuming and challenging process, particularly owing to the differing skills and experience of public health personnel. We present deep learning models based on the well-known you-only-look-once (YOLO) algorithm. The model can be used to simultaneously classify and localize mosquitoes in images to identify the species and gender of field-caught specimens. The results indicated that the model concatenating two YOLO v3 networks exhibited the optimal performance in identifying the mosquitoes, as the mosquitoes were relatively small objects compared with the large surrounding environment in each image. The robustness testing of the proposed model yielded a mean average precision and sensitivity of 99% and 92.4%, respectively. The model exhibited high performance in terms of specificity and accuracy, with an extremely low rate of misclassification. The area under the receiver operating characteristic curve (AUC) was 0.958 ± 0.011, which further demonstrated the model accuracy. Thirteen classes were detected with an accuracy of 100% based on a confusion matrix. Nevertheless, the relatively low detection rates for two species were likely a result of the limited number of wild-caught biological samples available. The proposed model can help establish the population densities of mosquito vectors in remote areas to predict disease outbreaks in advance.
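Detection metrics such as the mean average precision cited above rest on matching predicted boxes to ground-truth boxes by intersection-over-union (IoU). A minimal sketch of the IoU computation (illustrative only, not the paper's code; boxes are assumed to be corner-format tuples):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as
    (x1, y1, x2, y2) corner coordinates."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))   # overlap width
    ih = max(0, min(ay2, by2) - max(ay1, by1))   # overlap height
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0
```

A prediction typically counts as a true positive only when its IoU with a ground-truth box exceeds a threshold (commonly 0.5), which is why small objects like mosquitoes in wide scenes are hard: small localization errors cause large IoU drops.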


Author(s):  
Hwaseong Ryu ◽  
Seung Yeon Shin ◽  
Jae Young Lee ◽  
Kyoung Mu Lee ◽  
Hyo-jin Kang ◽  
...  

Abstract Objectives To develop a convolutional neural network system to jointly segment and classify a hepatic lesion selected by user clicks in ultrasound images. Methods In total, 4309 anonymized ultrasound images of 3873 patients with hepatic cyst (n = 1214), hemangioma (n = 1220), metastasis (n = 1001), or hepatocellular carcinoma (HCC) (n = 874) were collected and annotated. The images were divided into 3909 training and 400 test images. Our network is composed of one shared encoder and two inference branches used for segmentation and classification, and takes as input the concatenation of an input image and two Euclidean distance maps of foreground and background clicks provided by a user. The performance of hepatic lesion segmentation was evaluated based on the Jaccard index (JI), and the performance of classification was evaluated based on accuracy, sensitivity, specificity, and the area under the receiver operating characteristic curve (AUROC). Results We achieved performance improvements by jointly conducting segmentation and classification. In the segmentation-only system, the mean JI was 68.5%. In the classification-only system, the accuracy of classifying four types of hepatic lesions was 79.8%. The mean JI and classification accuracy were 68.5% and 82.2%, respectively, for the proposed joint system. The optimal sensitivity and specificity and the AUROC for classifying benign and malignant hepatic lesions with the joint system were 95.0%, 86.0%, and 0.970, respectively. The respective sensitivity, specificity, and AUROC for classifying the four hepatic lesions with the joint system were 86.7%, 89.7%, and 0.947. Conclusions The proposed joint system exhibited favorable performance compared to the segmentation-only and classification-only systems. Key Points • The joint segmentation and classification system using deep learning accurately segmented and classified hepatic lesions selected by user clicks in US examination. 
• The joint segmentation and classification system for hepatic lesions in US images exhibited higher performance than segmentation only and classification only systems. • The joint segmentation and classification system could assist radiologists with minimal experience in US imaging by characterizing hepatic lesions.
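The abstract describes concatenating the input image with two Euclidean distance maps, one for foreground clicks and one for background clicks. A minimal sketch of how one such map can be computed (illustrative only; the grid representation and function name are my own, and real implementations use a fast distance transform rather than this brute-force loop):

```python
import math

def click_distance_map(height, width, clicks):
    """Per-pixel Euclidean distance to the nearest user click.
    clicks: list of (x, y) pixel coordinates. One map is built from
    foreground clicks and one from background clicks, then both are
    concatenated with the image channels as network input."""
    return [[min(math.hypot(x - cx, y - cy) for cx, cy in clicks)
             for x in range(width)]
            for y in range(height)]
```

Encoding clicks as smooth distance maps, rather than single hot pixels, gives every pixel a gradient-friendly signal about how close it is to the user's guidance.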


2021 ◽  
Vol 2128 (1) ◽  
pp. 012012
Author(s):  
Mohamed R. Shoaib ◽  
Mohamed R. Elshamy ◽  
Taha E. Taha ◽  
Adel S. El-Fishawy ◽  
Fathi E. Abd El-Samie

Abstract Brain tumor is an acute cancerous disease that results from abnormal and uncontrollable cell division. Brain tumors are classified via biopsy, which is not normally performed before definitive brain surgery. Recent advances and improvements in deep learning technology have helped the health industry obtain accurate disease diagnoses. In this paper, a Convolutional Neural Network (CNN) with image pre-processing is adopted to classify brain Magnetic Resonance (MR) images into four classes: glioma tumor, meningioma tumor, pituitary tumor and normal patients. We use a transfer learning model, a CNN-based model designed from scratch (BRAIN-TUMOR-net), a pre-trained Inception-ResNet-v2 model and a pre-trained Inception-v3 model. The performance of the four proposed models is tested using evaluation metrics including accuracy, sensitivity, specificity, precision, F1-score, Matthews correlation coefficient, error, kappa and false positive rate. The obtained results show that two of the proposed models are very effective, achieving accuracies of 93.15% and 91.24% for the transfer learning model and the CNN-based BRAIN-TUMOR-net, respectively. The Inception-ResNet-v2 model achieves an accuracy of 86.80% and the Inception-v3 model achieves an accuracy of 85.34%. Practical implementation of the proposed models is presented.
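Among the metrics listed, the Matthews correlation coefficient (MCC) is the least familiar: it is a correlation between predicted and true labels that stays informative even on imbalanced classes. For the binary case it reduces to a closed form over the confusion-matrix counts, sketched below (illustrative only, not code from the paper):

```python
import math

def mcc(tp, fp, tn, fn):
    """Matthews correlation coefficient from binary confusion-matrix
    counts. Ranges from -1 (total disagreement) through 0 (chance)
    to +1 (perfect prediction)."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0
```

For the four-class problem in this paper, a multiclass generalization of the MCC over the full confusion matrix would be used; the binary form above conveys the idea.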

