P3‐1: Differential model of the deep convolutional neural network between sarcoidosis and lymphoma in 18F‐FDG‐PET/CT

Respirology ◽  
2021 ◽  
Vol 26 (S3) ◽  
pp. 136-137
Author(s):  
Keisuke Kawauchi ◽  
Sho Furuya ◽  
Kenji Hirata ◽  
Chietsugu Katoh ◽  
Osamu Manabe ◽  
...  

Abstract Background: As the number of PET/CT scanners increases and FDG PET/CT becomes a common imaging modality in oncology, the demand for automated detection systems based on artificial intelligence (AI) to prevent human oversight and misdiagnosis is growing rapidly. We aimed to develop a convolutional neural network (CNN)-based system that can classify whole-body FDG PET studies as 1) benign, 2) malignant, or 3) equivocal. Methods: This retrospective study investigated 3,485 consecutive patients with malignant or suspected malignant disease who underwent whole-body FDG PET/CT at our institute. All cases were classified into the 3 categories by a nuclear medicine physician. A residual network (ResNet)-based CNN architecture was built to classify patients into the 3 categories. In addition, we performed a region-based analysis of the CNN (head-and-neck, chest, abdomen, and pelvic region). Results: There were 1,280 (37%), 1,450 (42%), and 755 (22%) patients classified as benign, malignant, and equivocal, respectively. In the patient-based analysis, the CNN predicted benign, malignant, and equivocal images with 99.4%, 99.4%, and 87.5% accuracy, respectively. In the region-based analysis, the prediction was correct in 97.3% (head-and-neck), 96.6% (chest), 92.8% (abdomen), and 99.6% (pelvic region) of cases. Conclusion: The CNN-based system reliably classified FDG PET images into the 3 categories, indicating that it could help physicians as a double-checking system to prevent oversight and misdiagnosis.
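The patient-based accuracies reported above are per-class figures: of all cases the physician labeled benign, the fraction the CNN also called benign, and likewise for the other two categories. A minimal stdlib sketch of that metric (the function name and the toy labels are illustrative, not from the paper):

```python
from collections import defaultdict

def per_class_accuracy(labels, preds):
    """Fraction of cases in each true class that the model predicted correctly."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for y, p in zip(labels, preds):
        total[y] += 1
        if y == p:
            correct[y] += 1
    return {c: correct[c] / total[c] for c in total}

# Toy example with the study's three categories (illustrative data only).
labels = ["benign", "malignant", "equivocal", "benign", "equivocal"]
preds  = ["benign", "malignant", "malignant", "benign", "equivocal"]
print(per_class_accuracy(labels, preds))  # benign 1.0, malignant 1.0, equivocal 0.5
```

With real data, `labels` would hold the physician's category per patient and `preds` the CNN's output.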


2019 ◽  
Author(s):  
Keisuke Kawauchi ◽  
Sho Furuya ◽  
Kenji Hirata ◽  
Chietsugu Katoh ◽  
Osamu Manabe ◽  
...  

Abstract Background: As the number of PET/CT scanners increases and FDG PET/CT becomes a common imaging modality in oncology, the demand for automated detection systems based on artificial intelligence (AI) to prevent human oversight and misdiagnosis is growing rapidly. We aimed to develop a convolutional neural network (CNN)-based system that can classify whole-body FDG PET studies as 1) benign, 2) malignant, or 3) equivocal. Methods: This retrospective study investigated 3,485 consecutive patients with malignant or suspected malignant disease who underwent whole-body FDG PET/CT at our institute. All cases were classified into the 3 categories by a nuclear medicine physician. A residual network (ResNet)-based CNN architecture was built to classify patients into the 3 categories. This network was trained with PET images. Five-fold cross-validation was carried out to estimate the classification performance. In addition, we examined whether the CNN could determine the location of the malignant uptake, be it in the head-and-neck region, chest, abdomen, or pelvic region. Results: There were 1,280 (37%), 1,450 (42%), and 755 (22%) patients classified as benign, malignant, and equivocal, respectively. In the patient-based analysis, the CNN predicted benign and malignant images with 99.4% and 99.4% accuracy, respectively. Furthermore, in the region-based analysis, the prediction was correct in 97.3% (head-and-neck), 96.6% (chest), 92.8% (abdomen), and 99.6% (pelvic region) of cases. Conclusion: The CNN-based system reliably classified FDG PET images into the 3 categories, indicating that it would help physicians as a double-checking system to prevent oversight and misdiagnosis.
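The five-fold cross-validation mentioned in the Methods partitions the patients into five disjoint folds; each fold serves once as the validation set while the remaining four train the network. A stdlib sketch of the index bookkeeping (the function name and seed are assumptions; only indices are handled, no images):

```python
import random

def five_fold_indices(n_samples, seed=0):
    """Shuffle sample indices, split them into five disjoint folds, and
    return five (train, val) pairs: each fold is the validation set
    exactly once, as in five-fold cross-validation."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    folds = [idx[k::5] for k in range(5)]
    splits = []
    for k in range(5):
        val = folds[k]
        train = [i for j in range(5) if j != k for i in folds[j]]
        splits.append((train, val))
    return splits

# 3,485 patients as in the study; each split trains one model.
splits = five_fold_indices(3485)
```

Averaging the per-split validation metrics then estimates the classification performance the abstract reports.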


BMC Cancer ◽  
2020 ◽  
Vol 20 (1) ◽  
Author(s):  
Keisuke Kawauchi ◽  
Sho Furuya ◽  
Kenji Hirata ◽  
Chietsugu Katoh ◽  
Osamu Manabe ◽  
...  

PLoS ONE ◽  
2019 ◽  
Vol 14 (10) ◽  
pp. e0223141 ◽  
Author(s):  
Paul Blanc-Durand ◽  
Maya Khalife ◽  
Brian Sgard ◽  
Sandeep Kaushik ◽  
Marine Soret ◽  
...  

2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Zhou Tao ◽  
Huo Bing-qiang ◽  
Lu Huiling ◽  
Shi Hongbin ◽  
Yang Pengfei ◽  
...  

In the context of 18F-FDG-PET/CT multimodal whole-body imaging for lung tumor diagnosis, and to address network degradation and high-dimensional features during convolutional neural network (CNN) training, this paper proposes an E-ResNet-NRC (ensemble ResNet nonnegative representation classifier) model built by partitioning the sample space. The model comprises the following steps: (1) the parameters of a pretrained ResNet model are initialized using transfer learning; (2) based on the differences between the multimodal PET/CT medical images, the samples are divided into three sample spaces (CT, PET, and PET/CT), and the ROI of each lesion is extracted; (3) the ResNet network extracts ROI features to obtain feature vectors; (4) an individual ResNet-NRC classifier is constructed by applying nonnegative representation classification (NRC) at the fully connected layer; (5) the ensemble classifier E-ResNet-NRC is constructed using relative majority voting. Finally, two network models (AlexNet and ResNet-50) and three classification algorithms (nearest neighbor classification (NNC), softmax, and nonnegative representation classification (NRC)) were combined and compared with the proposed E-ResNet-NRC model. The experimental results show that the overall classification performance of the ensemble E-ResNet-NRC model is better than that of the individual ResNet-NRC, with higher specificity and sensitivity, and that E-ResNet-NRC has better robustness and generalization ability.
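The "relative majority voting" in step (5) is plurality voting: the ensemble outputs whichever label receives the most votes from the individual CT, PET, and PET/CT classifiers. A minimal sketch (the function name and the tie-breaking rule are assumptions; the abstract does not say how ties are resolved):

```python
from collections import Counter

def relative_majority_vote(votes):
    """Return the label with the most votes (plurality); ties are broken
    in favor of the earliest-voting classifier among the tied labels."""
    counts = Counter(votes)
    top = max(counts.values())
    for v in votes:  # preserve voting order for tie-breaking
        if counts[v] == top:
            return v

# Three modality-specific ResNet-NRC classifiers (CT, PET, PET/CT)
# each vote on a lesion; the ensemble emits the plurality label.
print(relative_majority_vote(["malignant", "benign", "malignant"]))  # malignant
```

Unlike absolute majority voting, a plurality suffices, so the ensemble always produces a label even when no class wins more than half the votes.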


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Xiaonan Shao ◽  
Rong Niu ◽  
Xiaoliang Shao ◽  
Jianxiong Gao ◽  
Yunmei Shi ◽  
...  

Abstract Purpose: This work aims to train, validate, and test a dual-stream three-dimensional convolutional neural network (3D-CNN) based on fluorine 18 (18F)-fluorodeoxyglucose (FDG) PET/CT to distinguish benign lesions from invasive adenocarcinoma (IAC) in ground-glass nodules (GGNs). Methods: We retrospectively analyzed patients with suspicious GGNs who underwent 18F-FDG PET/CT at our hospital from November 2011 to November 2020. Patients with benign lesions or IAC were selected for this study. The data were randomly divided into training and testing sets at a ratio of 7:3. Partial image feature extraction software was used to segment the PET and CT images, and the augmented training data were used for the training and validation (fivefold cross-validation) of the three CNNs (PET, CT, and PET/CT networks). Results: A total of 23 benign nodules and 92 IAC nodules from 106 patients were included in this study. In the training set, the performance of the PET network (accuracy, sensitivity, and specificity of 0.92 ± 0.02, 0.97 ± 0.03, and 0.76 ± 0.15) was better than that of the CT network (accuracy, sensitivity, and specificity of 0.84 ± 0.03, 0.90 ± 0.07, and 0.62 ± 0.16); the difference in accuracy was significant (P = 0.001). In the testing set, the performance of both networks declined, but the accuracy and sensitivity of the PET network remained higher than those of the CT network (0.76 vs. 0.67; 0.85 vs. 0.70). The dual-stream PET/CT network performed almost the same as the PET network in the training set (P = 0.372–1.000), while in the testing set, although its performance decreased, its accuracy and sensitivity (0.85 and 0.96) were still higher than those of both the CT and PET networks. Moreover, the accuracy of the PET/CT network exceeded that of two nuclear medicine physicians [physician 1 (3 years of experience): 0.70; physician 2 (10 years of experience): 0.73].
Conclusion: The 3D-CNN based on 18F-FDG PET/CT can distinguish benign lesions from IAC in GGNs, and performance is better when CT and PET images are used together.
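The Methods describe a random 7:3 split into training and testing sets. One common way to realize such a split under the strong class imbalance here (23 benign vs. 92 IAC nodules) is per-class, i.e. stratified, sampling; the abstract says only "randomly divided", so the stratification in this stdlib sketch is an assumption, as are the function name and seed:

```python
import random

def stratified_split(labels, test_frac=0.3, seed=0):
    """Split sample indices 7:3 within each class so that the class
    balance is preserved in both the training and testing subsets."""
    rng = random.Random(seed)
    by_class = {}
    for i, y in enumerate(labels):
        by_class.setdefault(y, []).append(i)
    train, test = [], []
    for idx in by_class.values():
        rng.shuffle(idx)
        n_test = round(len(idx) * test_frac)
        test += idx[:n_test]
        train += idx[n_test:]
    return sorted(train), sorted(test)

# Nodule counts from the study; labels only, no images.
labels = ["benign"] * 23 + ["IAC"] * 92
train, test = stratified_split(labels)
```

Without stratification, a small minority class like the 23 benign nodules can end up badly under-represented in one of the subsets.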


2020 ◽  
Vol 47 (3) ◽  
pp. 1058-1066
Author(s):  
Xiaofan Xiong ◽  
Timothy J. Linhardt ◽  
Weiren Liu ◽  
Brian J. Smith ◽  
Wenqing Sun ◽  
...  
