Deep-learning method for tumor segmentation in breast DCE-MRI

Author(s):  
Lei Zhang ◽  
Zhimeng Luo ◽  
Ruimei Chai ◽  
Dooman Arefan ◽  
Jules Sumkin ◽  
...  
2019 ◽  
Author(s):  
Yuliia Kamkova ◽  
Hemin Ali Qadir ◽  
Ole Jakob ◽  
Rahul Prasanna Kumar

Author(s):  
Azimeh NV Dehkordi ◽  
Sedigheh Sina ◽  
Freshteh Khodadadi

Purpose: Glioma tumor segmentation is an essential step in clinical decision making. Recently, computer-aided methods have been widely used for rapid and accurate delineation of tumor regions. Methods based on image feature extraction are fast, while segmentation based on tissue physiology and pharmacokinetics is more accurate. This study compares the performance of tumor segmentation based on these two approaches. Materials and Methods: Nested Model Selection (NMS) based on the Extended Tofts model was applied to 190 Dynamic Contrast-Enhanced MRI (DCE-MRI) slices acquired from 25 Glioblastoma Multiforme (GBM) patients at 70 time points. A model with three pharmacokinetic parameters, Model 3, is usually assigned to tumor voxels based on the time-contrast concentration signal. We utilized Deep-Net, a CNN based on DeepLabv3+ with layers of a pre-trained ResNet-18, trained on 17,288 T1-contrast MRI slices with high-grade glioma (HGG) brain tumors, to predict the tumor region in our 190 DCE-MRI T1 images. The NMS-based physiological tumor segmentation was taken as the reference against which the Deep-Net segmentation was compared. Dice, Jaccard, and overlap similarity coefficients were used to evaluate the accuracy and reliability of the deep segmentation method. Results: The results showed relatively high similarity (Dice coefficient: 0.73±0.15, Jaccard coefficient: 0.66±0.17, overlap coefficient: 0.71±0.15) between the deep-learning segmentation and the tumor region identified by the NMS method, indicating that deep learning may serve as an accurate and robust tumor segmentation method. Conclusion: Deep learning-based segmentation can play a significant role in increasing segmentation accuracy in clinical applications, provided the training process is fully automatic and free of human error.
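The three reported metrics can all be computed from a pair of binary segmentation masks. Below is a minimal sketch; the helper name and array layout are illustrative (the abstract does not give code), and the "overlay" coefficient is assumed to be the standard overlap (Szymkiewicz-Simpson) coefficient:

```python
import numpy as np

def similarity_coefficients(pred, ref):
    """Dice, Jaccard, and overlap coefficients between two binary masks.

    pred, ref: numpy arrays of 0/1 values with the same shape, e.g.
    a predicted tumor mask and a reference (NMS-derived) mask.
    """
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()          # |A ∩ B|
    union = np.logical_or(pred, ref).sum()           # |A ∪ B|
    dice = 2.0 * inter / (pred.sum() + ref.sum())    # 2|A∩B| / (|A|+|B|)
    jaccard = inter / union                          # |A∩B| / |A∪B|
    overlap = inter / min(pred.sum(), ref.sum())     # |A∩B| / min(|A|,|B|)
    return dice, jaccard, overlap
```

Note that Dice and Jaccard are monotonically related (J = D / (2 - D)), which is why the two reported values (0.73 and 0.66) track each other closely.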


2021 ◽  
Author(s):  
Edson Damasceno Carvalho ◽  
Romuere Rodrigues Veloso Silva ◽  
Mano Joseph Mathew ◽  
Flavio Henrique Duarte Araujo ◽  
Antonio Oseas De Carvalho Filho

Author(s):  
Jingchao Sun ◽  
Jianqiang Li ◽  
Qing Wang ◽  
Jijiang Yang ◽  
Ting Yang ◽  
...  

2019 ◽  
Vol 9 (22) ◽  
pp. 4749
Author(s):  
Lingyun Jiang ◽  
Kai Qiao ◽  
Linyuan Wang ◽  
Chi Zhang ◽  
Jian Chen ◽  
...  

Decoding human brain activity, especially reconstructing visual stimuli from functional magnetic resonance imaging (fMRI), has gained increasing attention in recent years. However, the high dimensionality and small quantity of fMRI data hinder satisfactory reconstruction, especially for deep-learning methods that require large amounts of labelled samples. Unlike such methods, humans can recognize a new image because the human visual system naturally extracts features from any object and compares them. Inspired by this visual mechanism, we introduced comparison into the deep-learning method to achieve better visual reconstruction, making full use of each sample and of the relationship within each sample pair by learning to compare. On this basis, we proposed the Siamese reconstruction network (SRN). Using the SRN, we obtained improved results on two fMRI recording datasets: 72.5% accuracy on the digit dataset and 44.6% accuracy on the character dataset. Essentially, this scheme increases the training data from n samples to roughly 2n sample pairs, taking full advantage of the limited number of training samples. The SRN learns to draw together sample pairs of the same class and push apart sample pairs of different classes in feature space.
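The pairing step described above (n labelled samples expanded to roughly 2n sample pairs: one same-class and one different-class pair per sample) can be sketched as follows. The function name and the random sampling strategy are assumptions for illustration; the abstract does not specify the exact pairing scheme used by the SRN:

```python
import numpy as np

def make_pairs(x, y, rng):
    """Build (sample, sample) pairs with same/different-class targets.

    x: list of feature arrays, y: list of class labels, rng: numpy Generator.
    Returns ~2n pairs: for each sample, one positive pair (target 1) and
    one negative pair (target 0), when partners exist.
    """
    n = len(x)
    by_class = {}
    for i, label in enumerate(y):
        by_class.setdefault(label, []).append(i)

    pairs, targets = [], []
    for i in range(n):
        # Positive pair: another randomly chosen sample of the same class.
        same = [j for j in by_class[y[i]] if j != i]
        if same:
            pairs.append((x[i], x[rng.choice(same)]))
            targets.append(1)
        # Negative pair: a randomly chosen sample of a different class.
        diff = [j for j in range(n) if y[j] != y[i]]
        if diff:
            pairs.append((x[i], x[rng.choice(diff)]))
            targets.append(0)
    return pairs, targets
```

A Siamese network would then embed both elements of each pair with shared weights and be trained to pull target-1 pairs together and push target-0 pairs apart in feature space.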

