Computer Aided Diagnosis System for Detection of Cancer Cells on Cytological Pleural Effusion Images

2018 ◽  
Vol 2018 ◽  
pp. 1-21 ◽  
Author(s):  
Khin Yadanar Win ◽  
Somsak Choomchuay ◽  
Kazuhiko Hamamoto ◽  
Manasanan Raveesunthornkiat ◽  
Likit Rangsirattanakul ◽  
...  

Cytological screening plays a vital role in the diagnosis of cancer from microscope slides of pleural effusion specimens. However, this manual screening method is subjective and time-intensive, and it suffers from inter- and intra-observer variation. In this study, we propose a novel computer-aided diagnosis (CAD) system for the detection of cancer cells in cytological pleural effusion (CPE) images. First, intensity adjustment and median filtering were applied to improve image quality. Cell nuclei were extracted through a hybrid segmentation method based on the fusion of Simple Linear Iterative Clustering (SLIC) superpixels and K-means clustering. A series of morphological operations was applied to correct segmented nuclei boundaries and eliminate false findings. A combination of shape analysis and contour concavity analysis was carried out to detect overlapped nuclei and split them into individual ones. After the cell nuclei were accurately delineated, we extracted 14 morphometric features, 6 colorimetric features, and 181 texture features from each nucleus. The texture features were derived from a combination of color-component-based first-order statistics, the gray-level co-occurrence matrix, and the gray-level run-length matrix. A novel hybrid feature selection method based on simulated annealing combined with an artificial neural network (SA-ANN) was developed to select the most discriminant and biologically interpretable features. An ensemble classifier of bagged decision trees was used as the classification model to differentiate cells as either benign or malignant using the selected features. The experiment was carried out on 125 CPE images containing more than 10,500 cells. The proposed method achieved a sensitivity of 87.97%, a specificity of 99.40%, an accuracy of 98.70%, and an F-score of 87.79%.
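As a concrete illustration of the co-occurrence texture step, the sketch below builds a normalised gray-level co-occurrence matrix in pure NumPy and computes two classic statistics (contrast and energy). The 4x4 patch, 4-level quantisation, and single horizontal offset are illustrative assumptions; the paper's 181 texture features combine many more components and offsets.

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Normalised gray-level co-occurrence matrix for one (dx, dy) offset."""
    m = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()  # joint probability of gray-level pairs

def contrast(p):
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())

def energy(p):
    return float((p ** 2).sum())

# Toy 4x4 patch already quantised to 4 gray levels
patch = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 2, 2, 2],
                  [2, 2, 3, 3]])
p = glcm(patch, levels=4)
```

Libraries such as scikit-image provide the same matrix (`graycomatrix`) with multiple distances and angles in one call.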

2019 ◽  
Author(s):  
Bei Hui ◽  
Jia-Jun Qiu ◽  
Jin-Heng Liu ◽  
Neng-Wen Ke

Abstract

Background: In the pathological examination of pancreaticoduodenectomy specimens for pancreatic head adenocarcinoma, a resection is considered R0 if the resection margins contain no cancer cells within 1 mm, and R1 if cancer cells are present within 1 mm of the margins. These pathological examinations are complicated and depend to some extent on the subjective experience of the physician. This study aims to design a computer-aided diagnosis (CAD) system based on texture features of preoperative computed tomography (CT) images to evaluate whether a resection margin is R0 or R1.

Methods: This study retrospectively analyzed 86 patients diagnosed with pancreatic head adenocarcinoma on preoperative abdominal CT examination. These patients underwent pancreaticoduodenectomy, and their resection margins were pathologically diagnosed as R0 or R1. The CAD system consists of five stages: (i) delineate and segment regions of interest (ROIs); (ii) fit ROIs to rectangular regions by solving discrete Laplacian equations with Dirichlet boundary conditions; (iii) enhance ROI textures by combining the wavelet transform and fractional differentiation; (iv) extract texture features by combining the wavelet transform with statistical analysis methods; (v) reduce features using principal component analysis (PCA) and classify with a support vector machine (SVM), using a linear kernel and leave-one-out cross-validation to reduce overfitting. The Mann-Whitney U-test was used to explore associations between texture features and histopathological characteristics.

Results: The developed CAD system achieved an AUC (area under the receiver operating characteristic curve) of 0.8614 and an accuracy of 84.88%. With the significance level set at p ≤ 0.01 in the Mann-Whitney U-test, two run-length-matrix features, derived from the diagonal subbands of the wavelet decomposition, showed statistically significant differences between R0 and R1.

Conclusions: The developed CAD system is effective at discriminating R0 from R1, and texture features can potentially enhance physicians' diagnostic ability.
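The feature-screening step rests on the Mann-Whitney U statistic; below is a minimal pure-Python sketch of its pairwise-comparison form (in practice, `scipy.stats.mannwhitneyu` also supplies the p-value needed for the p ≤ 0.01 threshold). The two small samples are made-up illustrations, not study data.

```python
def mann_whitney_u(a, b):
    """U statistic for sample a vs sample b: the number of pairs (x, y),
    x from a and y from b, with x > y, counting each tie as one half."""
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Hypothetical texture-feature values for R0 and R1 margins
r0 = [3.0, 4.0, 5.0]
r1 = [1.0, 2.0, 3.0]
u1 = mann_whitney_u(r0, r1)  # 8.5
```

A useful sanity check is that the two complementary statistics always sum to len(a) * len(b).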


2021 ◽  
Vol 11 (2) ◽  
pp. 760
Author(s):  
Yun-ji Kim ◽  
Hyun Chin Cho ◽  
Hyun-chong Cho

Gastric cancer has a high mortality rate worldwide, but it can be prevented through early detection by regular gastroscopy. Herein, we propose a deep-learning-based computer-aided diagnosis (CADx) system that applies data augmentation to help doctors classify gastroscopy images as normal or abnormal. Improving the performance of deep learning requires a large amount of training data; however, the collection of medical data is, by its nature, highly expensive and time consuming. Therefore, additional data were generated through deep convolutional generative adversarial networks (DCGAN), and 25 augmentation policies optimized for the CIFAR-10 dataset were implemented through AutoAugment. Accordingly, the gastroscopy images were augmented, only high-quality images were selected through an image-quality measurement method, and the images were classified as normal or abnormal by the Xception network. We compared the performance of the original (unaugmented) training dataset, the dataset generated through the DCGAN, the dataset augmented through the CIFAR-10 augmentation policies, and the dataset combining the two methods. The combined dataset delivered the best accuracy (0.851), an improvement of 0.06 over the original training dataset. We confirmed that augmenting data through the DCGAN and the CIFAR-10 augmentation policies is the most suitable approach for this classification model for normal and abnormal gastric endoscopy images. The proposed method not only alleviates the medical-data scarcity problem but also improves the accuracy of gastric disease diagnosis.
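The AutoAugment-style policy mechanism can be sketched as follows: each sub-policy is a short list of (operation, probability) pairs applied in order. The three operations and the demo sub-policy here are toy stand-ins, not the 25 CIFAR-10 policies used in the paper (which draw on shear, colour, contrast, and similar transforms).

```python
import random
import numpy as np

# Toy operation vocabulary for single-channel integer images
OPS = {
    "flip_lr": lambda im: im[:, ::-1],
    "rot90":   lambda im: np.rot90(im),
    "invert":  lambda im: im.max() - im,
}

def apply_subpolicy(image, subpolicy, rng):
    """Apply each operation of a sub-policy with its own probability."""
    out = image
    for name, prob in subpolicy:
        if rng.random() < prob:
            out = OPS[name](out)
    return out

rng = random.Random(0)
img = np.arange(9).reshape(3, 3)
# Probabilities of 1.0 make this demo deterministic
aug = apply_subpolicy(img, [("flip_lr", 1.0), ("invert", 1.0)], rng)
```

At training time, one sub-policy is typically drawn at random per image, which is what makes the augmented dataset diverse.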


2018 ◽  
Vol 2 (1) ◽  
pp. 14-18
Author(s):  
Gokalp Cinarer ◽  
Bulent Gursel Emiroglu ◽  
Ahmet Hasim Yurttakal

Breast cancer forms in the cells of the breasts and is the most common cancer diagnosed in women worldwide. It can occur in both men and women, but it is far more common in women. Early detection of breast cancer tumours is crucial for treatment. In this study, we presented a computer-aided diagnosis approach that applies expectation-maximization segmentation, extracts co-occurrence texture features from the wavelet approximation of the tumour image in each slice, and evaluates the performance of an SVM classifier. We tested the model on 50 patients, of whom 25 were benign and 25 malignant. 80% of the images were allocated for training and 20% reserved for testing. Under 5-fold cross-validation, the proposed model misclassified 2 of the 10 test patients, a success rate of 80%.

Keywords: Breast Cancer, Computer-Aided Diagnosis (CAD), Magnetic Resonance Imaging (MRI)
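The expectation-maximization segmentation step can be illustrated on 1-D intensity data with a two-component Gaussian mixture. This is a minimal sketch of the algorithm, not the authors' implementation, which operates on 2-D MRI tumour slices.

```python
import numpy as np

def em_two_gaussians(x, iters=50):
    """EM for a two-component 1-D Gaussian mixture; returns the fitted
    component means and a hard label per sample."""
    mu = np.array([x.min(), x.max()], dtype=float)   # crude initialisation
    var = np.array([x.var(), x.var()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each intensity
        d = x[:, None] - mu[None, :]
        lik = pi * np.exp(-0.5 * d ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = lik / lik.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        n = r.sum(axis=0)
        pi = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        d = x[:, None] - mu[None, :]
        var = (r * d ** 2).sum(axis=0) / n + 1e-6
    return mu, r.argmax(axis=1)
```

Segmenting a slice then amounts to assigning each pixel the label of the component that claims the highest responsibility for its intensity.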


Early recognition and classification of pulmonary nodules with computer-aided diagnosis (CAD) tools can help reduce the death rate from lung cancer. This paper devises a new CAD tool utilizing a segmentation-based classification process for lung CT images. Initially, the input CT images are pre-processed by image enhancement and noise removal. Then, a watershed segmentation model is employed to segment the pre-processed images. Subsequently, feature extraction is carried out using the Xception model, and a random forest (RF) classifier is used to identify lung CT images as normal, benign, or malignant. The use of the RF model results in effective classification of the applied images. The model underwent extensive experimentation against a benchmark lung CT image dataset, and the results were investigated under several aspects. The outcomes point to the significant performance of the presented model over the compared methods.
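The noise-removal pre-processing step can be sketched with a 3x3 median filter in pure NumPy (an illustrative choice; the paper does not specify which filter is used). Median filtering is a standard way to suppress impulse noise in CT slices without blurring edges as much as a mean filter would.

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter with edge replication at the borders."""
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            out[y, x] = np.median(padded[y:y + 3, x:x + 3])
    return out

# A single hot pixel (impulse noise) is removed entirely
noisy = np.array([[10, 10, 10],
                  [10, 255, 10],
                  [10, 10, 10]])
clean = median_filter3(noisy)
```

Production pipelines would use a vectorised implementation such as `scipy.ndimage.median_filter` rather than the explicit double loop shown here.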

