Liver Fibrosis: Deep Convolutional Neural Network for Staging by Using Gadoxetic Acid–enhanced Hepatobiliary Phase MR Images

Radiology ◽  
2018 ◽  
Vol 287 (1) ◽  
pp. 146-155 ◽  
Author(s):  
Koichiro Yasaka ◽  
Hiroyuki Akai ◽  
Akira Kunimatsu ◽  
Osamu Abe ◽  
Shigeru Kiryu
Author(s):  
Hong Lu ◽  
Xiaofei Zou ◽  
Longlong Liao ◽  
Kenli Li ◽  
Jie Liu

Compressive Sensing for Magnetic Resonance Imaging (CS-MRI) aims to reconstruct Magnetic Resonance (MR) images from under-sampled raw data. Improving CS-MRI methods poses two challenges: designing an under-sampling algorithm that achieves optimal sampling, and designing fast, compact deep neural networks that reconstruct MR images with superior quality. To improve reconstruction quality, we propose MRCSNet, a novel deep convolutional neural network architecture for CS-MRI. MRCSNet consists of three sub-networks: a compressive-sensing sampling sub-network, an initial reconstruction sub-network, and a refined reconstruction sub-network. Experimental results demonstrate that MRCSNet generates high-quality reconstructed MR images at various under-sampling ratios and meets the requirements of real-time CS-MRI applications. Compared to state-of-the-art CS-MRI approaches, MRCSNet offers a significant improvement in reconstruction accuracy, measured by Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM), and reduces reconstruction error as evaluated by the Normalized Root-Mean-Square Error (NRMSE). The source code is available at https://github.com/TaihuLight/MRCSNet .
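The under-sampling and initial-reconstruction steps described above can be illustrated with a minimal sketch. This is not the learned MRCSNet sampling sub-network; it simulates CS-MRI acquisition with a random k-space mask and uses a zero-filled inverse FFT as the baseline initial reconstruction, with the NRMSE metric the abstract mentions.

```python
import numpy as np

def undersample_kspace(image, ratio=0.25, seed=0):
    """Simulate CS-MRI acquisition: keep a random subset of k-space samples.

    Illustrative sketch only; the paper's sampling sub-network learns its
    sampling pattern rather than drawing it at random.
    """
    rng = np.random.default_rng(seed)
    kspace = np.fft.fft2(image)              # full k-space of the image
    mask = rng.random(image.shape) < ratio   # random sampling mask
    return kspace * mask, mask

def zero_filled_recon(kspace_masked):
    """Baseline 'initial reconstruction': inverse FFT of masked k-space."""
    return np.real(np.fft.ifft2(kspace_masked))

def nrmse(ref, est):
    """Normalized root-mean-square error between reference and estimate."""
    return np.sqrt(np.mean((ref - est) ** 2)) / (ref.max() - ref.min())

# Toy phantom: a bright square on a dark background.
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0
k_masked, mask = undersample_kspace(img, ratio=0.25)
recon = zero_filled_recon(k_masked)
print(f"sampling ratio: {mask.mean():.2f}, NRMSE: {nrmse(img, recon):.3f}")
```

The zero-filled reconstruction exhibits the aliasing artifacts that a refinement network such as MRCSNet's third sub-network is trained to remove.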


2021 ◽  
Vol 68 (2) ◽  
pp. 2413-2429
Author(s):  
Tapan Kumar Das ◽  
Pradeep Kumar Roy ◽  
Mohy Uddin ◽  
Kathiravan Srinivasan ◽  
Chuan-Yu Chang ◽  
...  

2018 ◽  
Vol 95 ◽  
pp. 43-54 ◽  
Author(s):  
Odelin Charron ◽  
Alex Lallement ◽  
Delphine Jarnet ◽  
Vincent Noblet ◽  
Jean-Baptiste Clavier ◽  
...  

2020 ◽  
Vol 15 (2) ◽  
pp. 94-108
Author(s):  
R. Kala ◽  
P. Deepa

Background: Accurate detection of a brain tumor and its severity is a challenging task in the medical field, so there is a need for brain tumor detection algorithms to support diagnosis, treatment planning, and outcome evaluation. Materials and Methods: A brain tumor segmentation method combining deep learning classification and multi-modal composition has been developed using deep convolutional neural networks. The different MRI modalities, T1, FLAIR, T1C, and T2, are given as input to the proposed method. The MR images from the different modalities are used in proportion to their information content: weights are calculated blockwise, with the standard deviation of each block taken as a proxy for its information content. Each modality image is then convolved with its corresponding weight, and the weighted modalities are summed to obtain a new composite image, which serves as the input to the deep convolutional neural network. The network performs segmentation through its successive layers, applying different filter operations in each layer to obtain enhanced classification and spatially consistent segmentation results. Analysis of the proposed method shows that the discriminatory information from the different modalities is effectively combined to increase the overall segmentation accuracy. Results: The proposed deep convolutional neural network for brain tumor segmentation was evaluated on the Brain Tumor Segmentation Challenge 2013 database (BRATS 2013).
The complete, core, and enhancing regions were validated with the Dice Similarity Coefficient and the Jaccard similarity index on the Challenge, Leaderboard, and Synthetic data sets. To evaluate classification rates, metrics such as accuracy, precision, sensitivity, specificity, under-segmentation, incorrect segmentation, and over-segmentation were also computed and compared with existing methods. Experimental results exhibit a higher degree of segmentation precision than existing methods. Conclusion: In this work, a deep convolutional neural network with different MR image modalities is used to detect brain tumors. A new input image is created by combining the images of the different modalities with their weights, the weights being determined from the blockwise standard deviation. Segmentation accuracy is high, with efficient appearance and spatial consistency, and the segmented images are assessed using well-established metrics. In future work, the proposed method will be evaluated on other databases, and segmentation accuracy will be analysed in the presence of different kinds of noise.
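The blockwise weighting scheme described in the abstract can be sketched as follows. The block size, the per-pixel normalization, and the direct weighted sum (in place of the paper's convolution step) are illustrative assumptions; only the idea of using each block's standard deviation as a proxy for information content comes from the abstract.

```python
import numpy as np

def blockwise_weights(img, block=8):
    """Per-block standard deviation as a proxy for information content.

    Returns a full-resolution weight map in which every pixel of a block
    carries that block's standard deviation. Block size is an assumption.
    """
    h, w = img.shape
    weights = np.zeros_like(img, dtype=float)
    for i in range(0, h, block):
        for j in range(0, w, block):
            weights[i:i+block, j:j+block] = img[i:i+block, j:j+block].std()
    return weights

def composite_image(modalities, block=8):
    """Weighted sum of modalities, normalized by the total weight per pixel."""
    weight_maps = [blockwise_weights(m, block) for m in modalities]
    total = np.sum(weight_maps, axis=0) + 1e-8  # avoid division by zero
    return sum(w * m for w, m in zip(weight_maps, modalities)) / total

# Toy example with two synthetic "modalities" of the same scene
# (standing in for, e.g., T1 and FLAIR volumes of one slice).
rng = np.random.default_rng(0)
t1 = rng.random((32, 32))
t2 = rng.random((32, 32))
comp = composite_image([t1, t2])
print(comp.shape)
```

Because the normalized weights form a convex combination at each pixel, the composite stays within the intensity range of the input modalities while emphasizing the modality with more local variation.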


2019 ◽  
Vol 30 (2) ◽  
pp. 1264-1273 ◽  
Author(s):  
Jeong Hyun Lee ◽  
Ijin Joo ◽  
Tae Wook Kang ◽  
Yong Han Paik ◽  
Dong Hyun Sinn ◽  
...  

2020 ◽  
Vol 2020 (4) ◽  
pp. 4-14
Author(s):  
Vladimir Budak ◽  
Ekaterina Ilyina

The article proposes a classification of lenses with different symmetrical beam angles and offers a scale as a spotlight palette. A collection of spotlight images was created and classified according to the proposed scale. An analysis of 788 existing lenses and reflectors with different LEDs and COBs was carried out, and the dependence of axial light intensity on beam angle was obtained. Using this collection, a new deep convolutional neural network (CNN) was trained by transfer learning from the pre-trained GoogLeNet. Grad-CAM analysis showed that the trained network correctly identifies the features of the objects. This approach classifies arbitrary spotlights with an accuracy of about 80 %. Thus, a lighting designer can use the new CNN-based model to determine the class of a spotlight and the corresponding type of lens with its technical parameters.
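The idea of a beam-angle scale can be sketched as a simple binning function. The class boundaries and labels below are hypothetical illustrations; the article's actual palette boundaries are not reproduced here.

```python
# Hypothetical beam-angle scale: boundaries and labels are illustrative
# assumptions, not the palette defined in the article.
BEAM_CLASSES = [
    (10.0, "very narrow spot"),
    (20.0, "narrow spot"),
    (35.0, "spot"),
    (60.0, "flood"),
    (180.0, "wide flood"),
]

def classify_beam_angle(angle_deg: float) -> str:
    """Map a symmetrical beam angle (degrees) to a coarse spotlight class."""
    for upper, label in BEAM_CLASSES:
        if angle_deg <= upper:
            return label
    raise ValueError(f"beam angle out of range: {angle_deg}")

print(classify_beam_angle(15))  # falls into the second assumed class
```

In the article's pipeline, a CNN predicts such a class directly from a spotlight image rather than from a measured beam angle.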

