Lesion-Based Convolutional Neural Network in Diagnosis of Early Gastric Cancer

2020
Vol 53 (2)
pp. 127-131
Author(s):
Hong Jin Yoon
Jie-Hyun Kim

2019
Vol 23 (1)
pp. 126-132
Author(s):
Lan Li
Yishu Chen
Zhe Shen
Xuequn Zhang
Jianzhong Sang
...

2021
Vol 11 (23)
pp. 11275
Author(s):
Atsushi Teramoto
Tomoyuki Shibata
Hyuga Yamada
Yoshiki Hirooka
Kuniaki Saito
...

Upper gastrointestinal endoscopy is widely performed to detect early gastric cancer. An automated detection method based on an object detection model, a deep learning technique, was previously proposed for early gastric cancer in endoscopic images; however, reducing the number of false positives in its detections remained a challenge. In this study, we proposed a novel object detection model, U-Net R-CNN, based on a semantic segmentation technique that extracts target objects by analyzing the images locally. U-Net was introduced as the semantic segmentation stage to detect candidate regions of early gastric cancer; these candidates were then classified as gastric cancer or false positives by box classification using a convolutional neural network. In the experiments, detection performance was evaluated by 5-fold cross-validation on 1208 images of healthy subjects and 533 images of gastric cancer patients. With DenseNet169 as the convolutional neural network for box classification, the lesion-based detection sensitivity was 98% with 0.01 false positives per image, an improvement over the previous method. These results indicate that the proposed method will be useful for the automated detection of early gastric cancer in endoscopic images.
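As a rough illustration of the two-stage pipeline this abstract describes, the sketch below runs a U-Net over an endoscopic image, turns the thresholded mask into candidate boxes via connected components, and rejects false positives with a DenseNet169 box classifier. This is a minimal sketch, assuming PyTorch/torchvision (the abstract does not name a framework); the `unet` argument, both thresholds, and the 224x224 crop size are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch of the U-Net R-CNN two-stage idea: U-Net proposes candidate
# regions, a DenseNet169 box classifier filters out false positives.
# Hypothetical models/thresholds; not the authors' released code.
import torch
import torchvision
from scipy import ndimage

def detect_candidates(unet, image, mask_threshold=0.5):
    """Stage 1: U-Net segmentation -> connected components -> candidate boxes."""
    with torch.no_grad():
        prob_map = torch.sigmoid(unet(image.unsqueeze(0)))[0, 0]   # (H, W)
    binary = (prob_map > mask_threshold).cpu().numpy()
    labeled, _ = ndimage.label(binary)            # group pixels into regions
    slices = ndimage.find_objects(labeled)        # one bounding slice per region
    return [(s[1].start, s[0].start, s[1].stop, s[0].stop) for s in slices]

def classify_boxes(classifier, image, boxes, cancer_threshold=0.5):
    """Stage 2: crop each candidate and reject false positives with a CNN."""
    kept = []
    for (x0, y0, x1, y1) in boxes:
        crop = image[:, y0:y1, x0:x1].unsqueeze(0)
        crop = torch.nn.functional.interpolate(crop, size=(224, 224), mode="bilinear")
        with torch.no_grad():
            score = torch.softmax(classifier(crop), dim=1)[0, 1].item()
        if score > cancer_threshold:              # keep likely cancer regions
            kept.append(((x0, y0, x1, y1), score))
    return kept

# DenseNet169 with a 2-class head (cancer vs. false positive), as in the paper.
classifier = torchvision.models.densenet169(weights=None)
classifier.classifier = torch.nn.Linear(classifier.classifier.in_features, 2)
classifier.eval()
```

The design point is that stage 1 trades precision for sensitivity (segmentation catches nearly every lesion candidate) and stage 2 restores precision by re-examining each candidate crop in isolation.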


2021
Vol 11 (1)
Author(s):
Muhammad Aqeel Aslam
Cuili Xue
Yunsheng Chen
Amin Zhang
Manhua Liu
...

Abstract Deep learning is an emerging tool that is regularly used for disease diagnosis in the medical field, and a new research direction has developed around the detection of early-stage gastric cancer. Computer-aided diagnosis (CAD) systems reduce the mortality rate due to their effectiveness. In this study, we proposed a new feature-extraction method that uses a stacked sparse autoencoder to extract discriminative features from unlabeled breath-sample data; a Softmax classifier was then integrated into the method to classify gastric cancer from the breath samples. Specifically, we identified fifty peaks in each spectrum to distinguish early gastric cancer (EGC), advanced gastric cancer (AGC), and healthy persons. The CAD system reduces the distance between input and output by learning features that preserve the structure of the breath-sample data set. After unsupervised training was complete, the autoencoders were cascaded with the Softmax classifier to develop a deep stacked sparse autoencoder neural network. Finally, the network was fine-tuned with labeled training data to make the model more reliable and repeatable. The proposed architecture exhibits excellent results, with an overall accuracy of 98.7% for advanced gastric cancer classification and 97.3% for early gastric cancer detection using breath analysis. Moreover, the model produces excellent recall, precision, and F-score values, making it suitable for clinical application.
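The training recipe in this abstract, greedy unsupervised pretraining of sparse autoencoders on the 50-peak breath spectra, cascading the encoders with a Softmax classifier, then supervised fine-tuning, can be sketched as below. This is a minimal PyTorch illustration: the layer sizes, epoch counts, and the L1 penalty (standing in for the KL-divergence sparsity term commonly used in sparse autoencoders) are assumptions, not the authors' configuration.

```python
# Sketch of a deep stacked sparse autoencoder with a softmax head.
# All hyperparameters and the random placeholder data are illustrative.
import torch
import torch.nn as nn

class SparseAE(nn.Module):
    def __init__(self, n_in, n_hidden):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
        self.decoder = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        h = self.encoder(x)
        return self.decoder(h), h

def pretrain(ae, data, epochs=50, sparsity_weight=1e-3, lr=1e-3):
    """Unsupervised training: reconstruction loss + L1 sparsity penalty."""
    opt = torch.optim.Adam(ae.parameters(), lr=lr)
    for _ in range(epochs):
        recon, hidden = ae(data)
        loss = nn.functional.mse_loss(recon, data) + sparsity_weight * hidden.abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

# Fifty peaks per breath spectrum; two stacked encoders, pretrained greedily.
ae1 = SparseAE(50, 25)
ae2 = SparseAE(25, 12)
unlabeled = torch.rand(200, 50)                  # placeholder unlabeled spectra
pretrain(ae1, unlabeled)
pretrain(ae2, ae1.encoder(unlabeled).detach())   # greedy layer-wise pretraining

# Cascade the encoders with a 3-class head (healthy / EGC / AGC) and fine-tune.
model = nn.Sequential(ae1.encoder, ae2.encoder, nn.Linear(12, 3))
labels = torch.randint(0, 3, (200,))             # placeholder labels
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
for _ in range(20):                              # supervised fine-tuning
    loss = nn.functional.cross_entropy(model(unlabeled), labels)  # softmax inside
    opt.zero_grad()
    loss.backward()
    opt.step()
```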


Endoscopy
2019
Vol 51 (06)
pp. 522-531
Author(s):
Lianlian Wu
Wei Zhou
Xinyue Wan
Jun Zhang
Lei Shen
...

Abstract
Background: Gastric cancer is the third most lethal malignancy worldwide. A novel deep convolutional neural network (DCNN) for visual tasks has recently been developed. The aim of this study was to build a system using the DCNN to detect early gastric cancer (EGC) without blind spots during esophagogastroduodenoscopy (EGD).
Methods: 3170 gastric cancer and 5981 benign images were collected to train the DCNN to detect EGC. A total of 24549 images from different parts of the stomach were collected to train the DCNN to monitor blind spots. Class activation maps were developed to automatically highlight suspicious cancerous regions. A grid model of the stomach was used to indicate the existence of blind spots in unprocessed EGD videos.
Results: The DCNN distinguished EGC from non-malignancy with an accuracy of 92.5%, a sensitivity of 94.0%, a specificity of 91.0%, a positive predictive value of 91.3%, and a negative predictive value of 93.8%, outperforming all levels of endoscopists. In the task of classifying gastric locations into 10 or 26 parts, the DCNN achieved accuracies of 90% and 65.9%, respectively, on a par with the performance of experts. In real-time unprocessed EGD videos, the DCNN automatically detected EGC and monitored blind spots.
Conclusions: We developed a DCNN-based system that detects EGC and recognizes gastric locations more accurately than endoscopists, proactively tracks suspicious cancerous lesions, and monitors blind spots during EGD.
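The class activation maps mentioned in the Methods can be reproduced in miniature as follows. This hedged sketch uses a torchvision ResNet-18 as a stand-in for the paper's DCNN (whose architecture and weights are not given here): the fully connected layer's weights for the "cancer" class re-weight the final convolutional feature maps, yielding a heatmap that localizes suspicious regions.

```python
# Sketch of a class activation map (CAM) for highlighting suspicious regions.
# The backbone, class indices, and input are illustrative assumptions.
import torch
import torchvision

model = torchvision.models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)   # cancer vs. benign head
model.eval()

features = {}
def hook(module, inputs, output):
    features["maps"] = output                          # last conv feature maps

model.layer4.register_forward_hook(hook)

def class_activation_map(image, target_class=1):
    """CAM = FC weights of the target class applied to the final conv maps."""
    with torch.no_grad():
        model(image.unsqueeze(0))
    maps = features["maps"][0]                         # (C, h, w)
    weights = model.fc.weight[target_class]            # (C,)
    cam = torch.relu(torch.einsum("c,chw->hw", weights, maps))
    return cam / (cam.max() + 1e-8)                    # normalize to [0, 1]

# Upsample the coarse map to image size to overlay it on the endoscopic frame.
cam = class_activation_map(torch.rand(3, 224, 224))
heat = torch.nn.functional.interpolate(cam[None, None], size=(224, 224), mode="bilinear")
```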


2021
Vol 2 (3)
pp. 70-77
Author(s):
Xin-Yi Feng
Xi Xu
Yun Zhang
Ye-Min Xu
Qiang She
...
