Automated Detection of Gastric Cancer by Retrospective Endoscopic Image Dataset Using U-Net R-CNN

2021 ◽  
Vol 11 (23) ◽  
pp. 11275
Author(s):  
Atsushi Teramoto ◽  
Tomoyuki Shibata ◽  
Hyuga Yamada ◽  
Yoshiki Hirooka ◽  
Kuniaki Saito ◽  
...  

Upper gastrointestinal endoscopy is widely performed to detect early gastric cancers. An automated method for detecting early gastric cancer from endoscopic images using an object detection model, a deep learning technique, has previously been proposed; however, reducing false positives in the detected results remained a challenge. In this study, we proposed a novel object detection model, U-Net R-CNN, based on a semantic segmentation technique that extracts target objects by performing a local analysis of the images. U-Net was introduced as a semantic segmentation method to detect candidate regions of early gastric cancer. These candidates were then classified as gastric cancer or false positives by box classification using a convolutional neural network. In the experiments, detection performance was evaluated via 5-fold cross-validation using 1208 images of healthy subjects and 533 images of gastric cancer patients. When DenseNet169 was used as the convolutional neural network for box classification, the detection sensitivity and the number of false positives evaluated on a lesion basis were 98% and 0.01 per image, respectively, improving on the detection performance of the previous method. These results indicate that the proposed method will be useful for the automated detection of early gastric cancer from endoscopic images.
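The first stage of such a pipeline can be sketched in plain Python: threshold the U-Net's segmentation probability map and convert each connected component of the binary mask into a candidate bounding box. This is a minimal illustrative sketch, not the paper's implementation; the function name, threshold value, and 4-connectivity choice are assumptions.

```python
from collections import deque

def extract_candidates(prob_map, thresh=0.5):
    """Threshold a segmentation probability map and return one bounding
    box (x1, y1, x2, y2) per 4-connected component of the binary mask."""
    h, w = len(prob_map), len(prob_map[0])
    mask = [[prob_map[y][x] >= thresh for x in range(w)] for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                # BFS over the connected component, tracking its extent
                q = deque([(y, x)])
                seen[y][x] = True
                x1 = x2 = x
                y1 = y2 = y
                while q:
                    cy, cx = q.popleft()
                    x1, x2 = min(x1, cx), max(x2, cx)
                    y1, y2 = min(y1, cy), max(y2, cy)
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
                boxes.append((x1, y1, x2, y2))
    return boxes
```

In the second stage described in the abstract, each candidate box would be cropped from the endoscopic image and passed to a CNN classifier (e.g., DenseNet169) that accepts or rejects it as gastric cancer, which is where the false-positive reduction occurs.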

2019 ◽  
Vol 32 (7) ◽  
pp. 1949-1958 ◽  
Author(s):  
Laigang Zhang ◽  
Zhou Sheng ◽  
Yibin Li ◽  
Qun Sun ◽  
Ying Zhao ◽  
...  

2019 ◽  
Vol 11 (3) ◽  
pp. 286 ◽  
Author(s):  
Jiangqiao Yan ◽  
Hongqi Wang ◽  
Menglong Yan ◽  
Wenhui Diao ◽  
Xian Sun ◽  
...  

Recently, methods based on the Faster region-based convolutional neural network (R-CNN) have become popular for multi-class object detection in remote sensing images due to their outstanding detection performance. These methods generally propose candidate regions of interest (ROIs) through a region proposal network (RPN), and regions with sufficiently high intersection-over-union (IoU) values against the ground truth are treated as positive samples for training. In this paper, we find that the detection results of such methods are sensitive to the choice of IoU threshold. Specifically, detection performance on small objects is poor when choosing a normal, higher threshold, while a lower threshold results in poor localization accuracy caused by a large number of false positives. To address these issues, we propose a novel IoU-Adaptive Deformable R-CNN framework for multi-class object detection. Specifically, by analyzing the different roles that IoU can play in different parts of the network, we propose an IoU-guided detection framework to reduce the loss of small-object information during training. In addition, an IoU-based weighted loss is designed, which learns the IoU information of positive ROIs to effectively improve detection accuracy. Finally, a class-aspect-ratio-constrained non-maximum suppression (CARC-NMS) is proposed, which further improves the precision of the results. Extensive experiments validate the effectiveness of our approach, and we achieve state-of-the-art detection performance on the DOTA dataset.
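The two quantities at the center of this abstract can be made concrete with a short sketch: the IoU of two boxes, and one plausible form of an IoU-weighted loss in which each positive ROI's loss contribution is scaled by its IoU with the ground truth. The exact weighting scheme in the paper may differ; the normalization used here is an assumption for illustration.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def iou_weighted_loss(per_roi_loss, per_roi_iou):
    """Weight each positive ROI's loss by its IoU with the ground truth,
    so well-localized samples contribute more (hypothetical weighting)."""
    total_w = sum(per_roi_iou)
    if total_w == 0:
        return 0.0
    return sum(l * w for l, w in zip(per_roi_loss, per_roi_iou)) / total_w
```

A lower RPN threshold (e.g., accepting ROIs with IoU ≥ 0.3 instead of 0.5) keeps more small-object samples in training, and the IoU weighting then down-weights the poorly localized ones, which is the trade-off the paper's framework navigates.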


2019 ◽  
Vol 11 (13) ◽  
pp. 1516 ◽  
Author(s):  
Chang Lai ◽  
Jiyao Xu ◽  
Jia Yue ◽  
Wei Yuan ◽  
Xiao Liu ◽  
...  

With the development of ground-based all-sky airglow imager (ASAI) technology, a large amount of airglow image data needs to be processed to study atmospheric gravity waves. We developed a program to automatically extract gravity wave patterns from ASAI images. The auto-extraction program includes a classification model based on a convolutional neural network (CNN) and an object detection model based on a faster region-based convolutional neural network (Faster R-CNN). The classification model selects images of clear nights from the ASAI raw images, and the object detection model locates the region of wave patterns. The wave parameters (horizontal wavelength, period, direction, etc.) can then be calculated within the region of the wave patterns. Beyond auto-extraction, we applied a wavelength check to remove the interference of wave-like mist near the imager. To validate the auto-extraction program, a case study was conducted on images captured in 2014 at Linqu (36.2°N, 118.7°E), China. Compared to a manual check, the auto-extraction recognized fewer wave-containing images (28.9% of the manual result) due to its strict threshold, but the results show the same seasonal variation as the references. The auto-extraction program applies a uniform criterion to avoid the accidental errors of manual identification of gravity waves and offers a reliable method to process large volumes of ASAI images for efficiently studying the climatology of atmospheric gravity waves.
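The pipeline described above (clear-night classification, wave-region detection, parameter calculation, wavelength check) can be sketched as a single driver function. All four stages are placeholders here: the callables stand in for the trained CNN and Faster R-CNN models, and the 5 km wavelength cutoff is an illustrative assumption, not the paper's actual threshold.

```python
def auto_extract(images, is_clear_night, detect_wave_regions, wave_parameters,
                 min_wavelength_km=5.0):
    """Sketch of the auto-extraction pipeline: keep clear-night images,
    locate wave regions, compute wave parameters, then discard detections
    whose horizontal wavelength is short enough to be mist near the imager."""
    results = []
    for img in images:
        if not is_clear_night(img):              # CNN classification stage
            continue
        for region in detect_wave_regions(img):  # Faster R-CNN stage
            params = wave_parameters(img, region)
            if params["wavelength_km"] >= min_wavelength_km:  # mist filter
                results.append(params)
    return results
```

The wavelength check is what makes the criterion uniform: every detection, regardless of who inspects the image, passes or fails the same numeric cutoff.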


2019 ◽  
Vol 23 (1) ◽  
pp. 126-132 ◽  
Author(s):  
Lan Li ◽  
Yishu Chen ◽  
Zhe Shen ◽  
Xuequn Zhang ◽  
Jianzhong Sang ◽  
...  

2021 ◽  
Vol 11 ◽  
Author(s):  
Ruixin Yang ◽  
Yingyan Yu

In the era of digital medicine, a vast number of medical images are produced every day, and there is great demand for intelligent equipment that assists medical doctors across disciplines with adjuvant diagnosis. With the development of artificial intelligence, convolutional neural network (CNN) algorithms have progressed rapidly. CNNs and their extensions play important roles in medical image classification, object detection, and semantic segmentation. While medical image classification has been widely reported, object detection and semantic segmentation in medical imaging are rarely described. In this review article, we introduce the progress of object detection and semantic segmentation in medical imaging studies and discuss how to accurately define the location and boundary of diseases.


2021 ◽  
Vol 1 (100) ◽  
pp. 68-77
Author(s):  
YAROSLAV M. TROFIMENKO ◽  
EVGENIY V. ERSHOV

The article discusses a model for detecting objects in images and a model for identifying steel-teeming ladles. The object detection model is based on a convolutional neural network; the identification model is based on comparing steel-teeming ladle features specific to the production process. The authors describe how the models were adapted to the YOLOv3 architecture and to the parameters of steel-teeming ladles in steel production. Simulation results are given at the end of the article.
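The identification stage, matching an observed ladle against known ladle profiles by feature comparison, can be sketched as a nearest-neighbor lookup. The feature vectors, the Euclidean metric, and the rejection threshold are all hypothetical stand-ins for the production-specific features described in the article.

```python
import math

def identify_ladle(observed, known_ladles, max_distance=1.0):
    """Match an observed feature vector against known steel-teeming ladle
    profiles by Euclidean distance; return the best-matching ladle id, or
    None if no profile is close enough (threshold is illustrative)."""
    best_id, best_d = None, float("inf")
    for ladle_id, features in known_ladles.items():
        d = math.dist(observed, features)
        if d < best_d:
            best_id, best_d = ladle_id, d
    return best_id if best_d <= max_distance else None
```

In the full system, the detection model (YOLOv3) would first localize each ladle in the frame, and the crop's features would then be fed to a comparison step like this one.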

