Representation of Differential Learning Method for Mitosis Detection

2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Haider Ali ◽  
Hansheng Li ◽  
Ephrem Afele Retta ◽  
Imran Ul Haq ◽  
Zhenzhen Guo ◽  
...  

Breast cancer microscopy images carry information about the patient's ailment, and automated mitotic cell detection results are widely used to ease pathologists' massive workload and help them make clinical decisions quickly. Several previous methods have been introduced to automate mitotic cell counting. However, they fail to differentiate between mitotic and nonmitotic cells and suffer from a class-imbalance problem, which degrades performance. This paper proposes a Representation Differential Learning Method (RDLM) for mitosis detection through deep learning, which accurately locates mitotic cell areas in pathological images. The proposed method consists of two parts: a Global bank Feature Pyramid Network (GLB-FPN) and a focal loss (FL). The GLB feature fusion method with FPN essentially makes the encoder-decoder attend to, and further extract, regions of interest (ROIs) for mitotic cells. On this basis, we extend the GLB-FPN with a focal loss to mitigate the data-imbalance problem during the training stage. Extensive experiments show that RDLM significantly outperforms other proposed approaches on the MITOS-ATYPIA-14 contest dataset, both in qualitative visualization and in quantitative metrics. Our framework reaches a 0.692 F1-score. Additionally, RDLM achieves a 5% improvement in F1-score over GLB with FPN on the mitosis detection task.
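The focal loss used here to counter class imbalance is the standard formulation of Lin et al.; a minimal NumPy sketch of its binary form (the `alpha` and `gamma` values are assumed defaults, not taken from the paper):

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss: down-weights easy examples so that rare
    positives (here, mitotic cells) dominate the gradient.
    p: predicted probability of the positive class; y: 0/1 label."""
    p_t = np.where(y == 1, p, 1.0 - p)            # prob. of the true class
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)

# A confident correct prediction contributes almost nothing ...
easy = focal_loss(np.array([0.95]), np.array([1]))
# ... while a misclassified (hard) example keeps a large loss.
hard = focal_loss(np.array([0.05]), np.array([1]))
```

The `(1 - p_t) ** gamma` factor is what suppresses the flood of easy nonmitotic background examples during training.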

2020 ◽  
Vol 9 (3) ◽  
pp. 749 ◽  
Author(s):  
Tahir Mahmood ◽  
Muhammad Arsalan ◽  
Muhammad Owais ◽  
Min Beom Lee ◽  
Kang Ryoung Park

Breast cancer is the leading cause of mortality in women. Early diagnosis of breast cancer can reduce the mortality rate. In diagnosis, the mitotic cell count is an important biomarker for predicting the aggressiveness, prognosis, and grade of breast cancer. In general, pathologists manually examine histopathology images under high-resolution microscopes to detect mitotic cells. However, because of the minute differences between mitotic and normal cells, this process is tiresome, time-consuming, and subjective. To overcome these challenges, artificial-intelligence-based (AI-based) techniques have been developed that automatically detect mitotic cells in histopathology images. Such AI techniques accelerate the diagnosis and can serve as a second-opinion system for a medical doctor. Previously, conventional image-processing techniques were used to detect mitotic cells, but they have low accuracy and high computational cost. More recently, a number of deep-learning techniques with outstanding performance and low computational cost have been developed; however, they still require improvement in terms of accuracy and reliability. We therefore present a multistage mitotic-cell-detection method based on the Faster region-based convolutional neural network (Faster R-CNN) and deep CNNs. Two open datasets of breast cancer histopathology (international conference on pattern recognition (ICPR) 2012 and ICPR 2014 (MITOS-ATYPIA-14)) were used in our experiments. The experimental results show that our method achieves state-of-the-art results of 0.876 precision, 0.841 recall, and 0.858 F1-measure on the ICPR 2012 dataset, and 0.848 precision, 0.583 recall, and 0.691 F1-measure on the ICPR 2014 dataset, higher than those obtained with previous methods. Moreover, we evaluated the generalization capability of our technique on the tumor proliferation assessment challenge 2016 (TUPAC16) dataset and found that it also performs well in this cross-dataset experiment.
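The reported F1-measures follow directly from the stated precision and recall, since F1 is their harmonic mean; a one-line check:

```python
def f1(precision, recall):
    """F1-measure: harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# The abstract's ICPR 2012 and ICPR 2014 figures are internally consistent:
icpr2012 = f1(0.876, 0.841)   # ~0.858
icpr2014 = f1(0.848, 0.583)   # ~0.691
```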


2020 ◽  
Vol 34 (07) ◽  
pp. 12573-12580
Author(s):  
Jiangqiao Yan ◽  
Yue Zhang ◽  
Zhonghan Chang ◽  
Tengfei Zhang ◽  
Menglong Yan ◽  
...  

The feature pyramid is the mainstream approach to multi-scale object detection. In most detectors with a feature pyramid, each proposal is predicted from feature grids pooled from only one feature level, which is assigned heuristically. Recent studies report that the feature representation extracted in this way is sub-optimal, since it ignores valid information that exists on the other, unselected layers of the feature pyramid. To address this issue, researchers have proposed fusing valid information across all feature levels. However, these methods can be further improved: the feature fusion strategies, which rely on common operations (element-wise max or sum) in most detectors, should be replaced by a more flexible mechanism. In this work, a novel method called the feature adaptive selection subnetwork (FAS-Net) is proposed to construct effective features for detecting objects of different scales. In particular, its adaptation operates at two levels: global attention and local adaptive selection. First, we model the global context of each feature map with a global attention based feature selection module (GAFSM), which adaptively strengthens the effective features in each layer. Then we extract the features of each region of interest (RoI) from the entire feature pyramid to construct an RoI feature pyramid. Finally, the RoI feature pyramid is sent to the feature adaptive selection module (FASM), which adaptively integrates the strengthened features according to the input. FAS-Net can easily be extended to other two-stage object detectors with a feature pyramid, and it supports quantitative analysis of the importance of different feature levels for multi-scale objects. Moreover, FAS-Net can also be applied to the instance segmentation task, where it yields consistent improvements. Experiments on PASCAL07/12 and MSCOCO17 demonstrate the effectiveness and generalization of the proposed method.
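The abstract does not spell out the internals of the GAFSM; as a rough illustration of "global attention that strengthens effective features", here is a hypothetical squeeze-and-excitation-style sketch in NumPy (the projection matrix `w` and the sigmoid gating are assumptions, not the paper's actual module):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def global_attention_select(feat, w):
    """Hypothetical channel-attention simplification of a GAFSM-like module:
    pool the map to a per-channel context vector, gate it, and rescale.
    feat: (C, H, W) feature map; w: (C, C) learned projection (assumed)."""
    context = feat.mean(axis=(1, 2))     # global average pool -> (C,)
    gate = sigmoid(w @ context)          # per-channel weights in (0, 1)
    return feat * gate[:, None, None]    # strengthen effective channels

out = global_attention_select(np.ones((4, 3, 3)), np.zeros((4, 4)))
```

The key design point the abstract emphasizes is that the gate is computed from global context, so the reweighting adapts to each input rather than using a fixed element-wise max or sum.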


Symmetry ◽  
2021 ◽  
Vol 13 (9) ◽  
pp. 1623
Author(s):  
Ningwei Wang ◽  
Yaze Li ◽  
Hongzhe Liu

Neural networks have enabled state-of-the-art approaches to achieve impressive results on computer vision tasks such as object detection. However, previous works that tried to improve performance through various object detection necks have failed to extract features efficiently. To address this insufficient feature extraction, this work examines some of the most advanced and representative network models based on the Faster R-CNN architecture, such as Libra R-CNN, Grid R-CNN, guided anchoring, and GRoIE. We observed the precision at different scales of Neighbour Feature Pyramid Network (NFPN) fusion, ResNet Region of Interest Feature Extraction (ResRoIE), and the Recursive Feature Pyramid (RFP) architecture when these components were used in place of the corresponding original members of the various networks, evaluated on the MS COCO dataset. When the neck and RoIE parts of these models are replaced with our Reinforced Neighbour Feature Fusion (RNFF) model, the average precision (AP) increases by 3.2 percentage points over the baseline network.
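Neighbour-level fusion in FPN-style necks, which NFPN/RNFF build on, combines adjacent pyramid levels; a minimal sketch of the common base operation (2x nearest-neighbour upsample of the coarser map, then element-wise sum; this is the generic FPN step, not the paper's exact RNFF fusion):

```python
import numpy as np

def fuse_neighbours(p_fine, p_coarse):
    """Fuse two adjacent pyramid levels: upsample the coarser (C, H/2, W/2)
    map by 2x nearest neighbour and add it to the finer (C, H, W) map."""
    up = p_coarse.repeat(2, axis=-2).repeat(2, axis=-1)
    return p_fine + up

fused = fuse_neighbours(np.zeros((1, 4, 4)), np.ones((1, 2, 2)))
```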


Author(s):  
Zhenying Xu ◽  
Ziqian Wu ◽  
Wei Fan

Defect detection in electroluminescence (EL) images of solar cells is the core step in the production and preparation of solar cell modules, ensuring the conversion efficiency and long service life of the cells. However, because it lacks feature extraction capability for small defects, the traditional single shot multibox detector (SSD) algorithm does not perform well at high-accuracy EL defect detection. Consequently, an improved SSD algorithm with a modified feature-fusion scheme, in the framework of deep learning, is proposed to improve the recognition rate of multi-class EL defects. A dataset of EL images containing four different types of defects is established and augmented through rotation, denoising, and binarization. Borrowing the idea of feature pyramid networks, the proposed algorithm greatly improves the detection accuracy for small-scale defects. An experimental study on EL defect detection shows the effectiveness of the proposed algorithm. Moreover, a comparison study shows that the proposed method outperforms other detection methods, such as SIFT, Faster R-CNN, and YOLOv3, in detecting EL defects.
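Two of the dataset-preparation steps named above (rotation and binarization) are simple to sketch in NumPy; the 90-degree angle and the threshold value are assumptions for illustration, not values from the paper:

```python
import numpy as np

def augment(img, threshold=128):
    """Sketch of two augmentations: a 90-degree rotation and a
    binarized copy of a grayscale EL image (threshold assumed)."""
    rotated = np.rot90(img)
    binary = (img >= threshold).astype(np.uint8) * 255
    return rotated, binary

img = np.array([[0, 255],
                [255, 0]], dtype=np.uint8)
rotated, binary = augment(img)
```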


2019 ◽  
Vol 48 (7) ◽  
pp. 710004 ◽  
Author(s):  
刘辉 LIU Hui ◽  
何勇 HE Yong ◽  
何博侠 HE Bo-xia ◽  
刘志 LIU Zhi ◽  
顾士晨 GU Shi-chen

2019 ◽  
Vol 1372 ◽  
pp. 012026
Author(s):  
Tan Xiao Jian ◽  
Mustafa Nazahah ◽  
Mashor Mohd Yusoff ◽  
Ab Rahman Khairul Shakir
Author(s):  
Taye Girma Debelee ◽  
Abrham Gebreselasie ◽  
Friedhelm Schwenker ◽  
Mohammadreza Amirian ◽  
Dereje Yohannes

In this paper, a modified adaptive K-means (MAKM) method is proposed to extract regions of interest (ROIs) from local and public datasets. The local image dataset was collected from Bethezata General Hospital (BGH), and the public dataset is from the Mammographic Image Analysis Society (MIAS). The same number of images is used from both datasets: 112 abnormal and 208 normal. Two texture feature sets (GLCM and Gabor) from the ROIs and one set of CNN-based features are considered in the experiment. The CNN features are extracted using the Inception-V3 pre-trained model after simple preprocessing and cropping. The quality of the features is evaluated both individually and after fusing features with one another, and five classifiers (SVM, KNN, MLP, RF, and NB) are used to measure the descriptive power of the features using cross-validation. The proposed approach was first evaluated on the local dataset and then applied to the public dataset. The results of the classifiers are measured using accuracy, sensitivity, specificity, kappa, computation time, and AUC. The experimental analysis of GLCM features from the two datasets indicates that GLCM features from the BGH dataset outperformed those of the MIAS dataset across all five classifiers. However, Gabor features from the two datasets scored the best result with two classifiers (SVM and MLP). For BGH and MIAS respectively, SVM scored an accuracy of 99% and 97.46%, a sensitivity of 99.48% and 96.26%, and a specificity of 98.16% and 100%; MLP achieved an accuracy of 97% and 87.64%, a sensitivity of 97.40% and 96.65%, and a specificity of 96.26% and 75.73%. The relatively best feature-fusion performance was achieved by fusing Gabor and CNN-based features with the MLP classifier. However, the KNN, MLP, RF, and NB classifiers achieved almost 100% performance for GLCM texture features, and SVM scored an accuracy of 96.88%, a sensitivity of 97.14%, and a specificity of 96.36%. Compared to the other classifiers, NB had the lowest computation time in all experiments.
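The GLCM texture features discussed above are built from a grey-level co-occurrence matrix; a minimal NumPy sketch for a horizontal offset of 1, with the contrast descriptor as an example (the paper's exact offsets, angles, and descriptor set are not specified here):

```python
import numpy as np

def glcm(img):
    """Grey-level co-occurrence matrix for horizontally adjacent pixels.
    img: 2-D array of small integer grey levels."""
    levels = int(img.max()) + 1
    m = np.zeros((levels, levels), dtype=np.int64)
    for row in img:
        for a, b in zip(row[:-1], row[1:]):
            m[a, b] += 1                      # count co-occurring pairs
    return m

def glcm_contrast(m):
    """Contrast descriptor: sum of p(i, j) * (i - j)^2 over the normalised GLCM."""
    p = m / m.sum()
    i, j = np.indices(m.shape)
    return (p * (i - j) ** 2).sum()

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
contrast = glcm_contrast(glcm(img))
```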


2019 ◽  
Vol 9 (3) ◽  
pp. 565 ◽  
Author(s):  
Hao Qu ◽  
Lilian Zhang ◽  
Xuesong Wu ◽  
Xiaofeng He ◽  
Xiaoping Hu ◽  
...  

The development of object detection in infrared images has attracted increasing attention in recent years. However, there are few studies on multi-scale object detection in infrared street scene images, and the lack of high-quality infrared datasets hinders research into such algorithms. To address these issues, we make a series of modifications to the Faster Region-Convolutional Neural Network (Faster R-CNN). First, a double-layer region proposal network (RPN) is proposed to predict proposals of different scales on both fine and coarse feature maps. Second, a multi-scale pooling module is introduced into the backbone of the network to explore the response of objects at different scales; furthermore, the inception4 module and the position-sensitive region of interest (ROI) align (PSalign) pooling layer are utilized to extract richer object features. Third, this paper proposes instance-level data augmentation, which accounts for the imbalance between categories while enlarging the dataset. In the training stage, the online hard example mining method is used to further improve the robustness of the algorithm in complex environments. The experimental results show that, compared with the baseline, our detection method achieves state-of-the-art performance.
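Online hard example mining, used here in the training stage, keeps only the highest-loss proposals per batch; a minimal sketch of that selection step (the batch size `k` is an assumed hyperparameter):

```python
import numpy as np

def ohem_select(losses, k):
    """Online hard example mining (selection step only): return the indices
    of the k highest-loss proposals, so training focuses on hard cases."""
    order = np.argsort(losses)[::-1]   # sort indices by loss, descending
    return order[:k]

hard_idx = ohem_select(np.array([0.1, 0.9, 0.5]), k=2)
```

In a full pipeline, only the gradients of these selected proposals would be backpropagated.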

