3D Quantum-inspired Self-supervised Tensor Network for Volumetric Segmentation of Brain MR Images

Author(s):  
Debanjan Konar ◽  
Siddhartha Bhattacharyya ◽  
Tapan Kumar Gandhi ◽  
Bijaya Ketan Panigrahi ◽  
Richard Jiang

<div>This paper introduces a novel shallow self-supervised tensor neural network for volumetric segmentation of brain MR images that obviates training or supervision. The proposed network is a 3D version of the Quantum-Inspired Self-Supervised Neural Network (QIS-Net) architecture and is referred to as the 3D Quantum-inspired Self-supervised Tensor Neural Network (3D-QNet). The underlying architecture of 3D-QNet comprises a trinity of volumetric layers, viz. input, intermediate, and output layers, interconnected using a 26-connected third-order neighborhood-based topology for voxel-wise processing of 3D MR image data suitable for semantic segmentation. Each volumetric layer contains quantum neurons designated by qubits (quantum bits). The incorporation of tensor decomposition in the quantum formalism leads to faster convergence of the network operations, precluding the inherently slow convergence faced by self-supervised networks. The segmented volumes are obtained once the network converges. The suggested 3D-QNet is tailored and tested extensively on the BRATS 2019 data set in the experiments carried out. 3D-QNet has achieved promising Dice similarity as compared with the intensively supervised convolutional network-based models 3D-UNet, Vox-ResNet, DRINet, and 3D-ESPNet, thus facilitating annotation-free semantic segmentation using a self-supervised shallow network.</div>
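The 26-connected third-order neighborhood described above can be illustrated with a short sketch (an illustrative Python sketch, not the authors' implementation; the function name and signature are hypothetical):

```python
from itertools import product

def neighborhood_26(x, y, z, shape):
    """Return the 26-connected (third-order) neighbors of voxel (x, y, z):
    every voxel differing by at most 1 along each axis, excluding the voxel
    itself and any coordinate outside the volume bounds given by `shape`."""
    neighbors = []
    for dx, dy, dz in product((-1, 0, 1), repeat=3):
        if dx == dy == dz == 0:
            continue  # skip the center voxel itself
        nx, ny, nz = x + dx, y + dy, z + dz
        if 0 <= nx < shape[0] and 0 <= ny < shape[1] and 0 <= nz < shape[2]:
            neighbors.append((nx, ny, nz))
    return neighbors
```

An interior voxel has all 26 neighbors, while a corner voxel retains only 7; in a topology like 3D-QNet's, each voxel's neuron would aggregate inputs over such a neighborhood.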


2021 ◽  
Author(s):  
Debanjan Konar ◽  
Siddhartha Bhattacharyya ◽  
Tapan Kumar Gandhi ◽  
Bijaya Ketan Panigrahi ◽  
Richard Jiang

<div>This paper introduces a novel shallow 3D self-supervised tensor neural network for volumetric segmentation of medical images with the merit of obviating training and supervision. The proposed network is referred to as the 3D Quantum-inspired Self-supervised Tensor Neural Network (3D-QNet). The underlying architecture of 3D-QNet comprises a trinity of volumetric layers, viz. input, intermediate, and output layers, interconnected using an S-connected third-order neighborhood-based topology for voxel-wise processing of 3D medical image data suitable for semantic segmentation. Each volumetric layer contains quantum neurons designated by qubits (quantum bits). The incorporation of tensor decomposition in the quantum formalism leads to faster convergence of the network operations, precluding the inherently slow convergence faced by classical supervised and self-supervised networks. The segmented volumes are obtained once the network converges. The suggested 3D-QNet is tailored and tested extensively on the BRATS 2019 brain MR image data set and the Liver Tumor Segmentation Challenge (LiTS17) data set in our experiments. 3D-QNet has achieved promising Dice similarity as compared with intensively supervised convolutional network-based models such as 3D-UNet, Vox-ResNet, DRINet, and 3D-ESPNet, showing the potential advantage of our self-supervised shallow network in facilitating semantic segmentation.</div>
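The abstract's "quantum neurons designated by qubits" can be illustrated by encoding activations as phase angles of a single-qubit state cos(θ)|0⟩ + sin(θ)|1⟩ (a minimal sketch of the general quantum-inspired idea, not the authors' exact formulation; the function and its signature are hypothetical):

```python
import math

def qubit_neuron(inputs, weights):
    """Quantum-inspired neuron sketch: inputs and weights are treated as
    phase angles; the aggregated phase theta defines a qubit state
    cos(theta)|0> + sin(theta)|1>, and the output is the probability of
    observing |1>, i.e. sin(theta)**2."""
    theta = sum(w * x for w, x in zip(weights, inputs))
    return math.sin(theta) ** 2
```

The output is naturally bounded in [0, 1], which is one reason phase-based activations are convenient for self-supervised, threshold-style updates.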


2019 ◽  
Vol 1 (01) ◽  
pp. 11-19 ◽  
Author(s):  
James Deva Koresh H

The paper puts forward a real-time traffic sign sensing (detection and recognition) framework for enhancing a vehicle's capability for safe driving and path planning. The proposed method utilizes a capsule neural network, which outperforms the convolutional neural network by eluding the need for manual effort. The capsule network provides better resistance to spatial variance and higher reliability in sensing traffic signs compared with the convolutional network. Evaluation of the capsule network on an Indian traffic data set shows 15% higher accuracy compared with the CNN and the RNN.
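The capsule network's resistance to spatial variance rests on representing each entity as a vector whose length encodes presence probability. The standard "squash" non-linearity from the capsule literature can be sketched as follows (an illustrative sketch, not this paper's code):

```python
import math

def squash(v, eps=1e-9):
    """Capsule 'squash' non-linearity: shrinks a vector's length into [0, 1)
    while preserving its direction, so the length can serve as the
    probability that the entity the capsule encodes is present."""
    norm_sq = sum(x * x for x in v)
    norm = math.sqrt(norm_sq)
    scale = norm_sq / (1.0 + norm_sq) / (norm + eps)
    return [scale * x for x in v]
```

Long vectors are squashed to length just under 1 and short vectors to near 0, while the direction (the pose the capsule encodes) is untouched.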


2021 ◽  
Author(s):  
Ritu Lahoti ◽  
Sunil Kumar Vengalil ◽  
Punith B Venkategowda ◽  
Neelam Sinha ◽  
Vinod Veera Reddy

2020 ◽  
Vol 12 (6) ◽  
pp. 1015 ◽  
Author(s):  
Kan Zeng ◽  
Yixiao Wang

Classification algorithms for automatically detecting sea surface oil spills from spaceborne Synthetic Aperture Radars (SARs) can usually be regarded as part of a three-step processing framework, which briefly comprises image segmentation, feature extraction, and target classification. A Deep Convolutional Neural Network (DCNN), named the Oil Spill Convolutional Network (OSCNet), is proposed in this paper for SAR oil spill detection; it performs the latter two steps of the three-step framework. Based on VGG-16, the OSCNet is obtained by designing the architecture and adjusting hyperparameters with a data set of SAR dark patches. With the help of this large data set, containing more than 20,000 SAR dark patches, and data augmentation, the OSCNet can have as many as 12 weight layers, making it a relatively deep Deep Learning (DL) network for SAR oil spill detection. Experiments on the same data set show that the classification performance of OSCNet is significantly improved over that of traditional machine learning (ML): accuracy, recall, and precision are improved from 92.50%, 81.40%, and 80.95% to 94.01%, 83.51%, and 85.70%, respectively. An important reason for this improvement is that the distinguishability of the features OSCNet learns from the data set is significantly higher than that of the hand-crafted features required by traditional ML algorithms. In addition, experiments show that data augmentation plays an important role in avoiding over-fitting and hence improves classification performance. OSCNet has also been compared with other DL classifiers for SAR oil spill detection; due to the large differences between the data sets, only their similarities and differences are discussed at the level of principles.
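The reported accuracy, recall, and precision follow the standard confusion-matrix definitions, which can be sketched as (a generic sketch of the metric formulas, not the OSCNet evaluation code):

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard binary classification metrics from confusion-matrix counts:
    tp/fp = true/false positives, tn/fn = true/false negatives."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)   # fraction of all correct
    precision = tp / (tp + fp)                    # correctness of positives
    recall = tp / (tp + fn)                       # coverage of positives
    return accuracy, precision, recall
```

Reporting all three matters for oil spill detection because dark patches are dominated by look-alikes, so accuracy alone can hide poor recall on true spills.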


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Xingyu Xie ◽  
Bin Lv

Convolutional Neural Network- (CNN-) based GAN models mainly suffer from problems such as data set limitations and rendering efficiency in the segmentation and rendering of painting art. To solve these problems, this paper uses an improved cycle generative adversarial network (CycleGAN) to render the style of the current image. The method replaces the deep residual network (ResNet) in the original network's generator with a densely connected convolutional network (DenseNet) and uses a perceptual loss function for adversarial training. The painting art style rendering system built in this paper is based on a perceptual adversarial network (PAN) for the improved CycleGAN, which removes the network model's restriction to paired samples. The proposed method also improves the quality of the images generated in the painting's artistic style, further improves stability, and speeds up network convergence. Experiments were conducted on the painting art style rendering system based on the proposed model. The results show that the image style rendering method based on perceptual adversarial error, used to improve the CycleGAN + PAN model, achieves better results: the PSNR value of the generated images increases by 6.27% on average, and the SSIM values all increase by about 10%. Therefore, the improved CycleGAN + PAN image painting art style rendering method produces better painting art style images and has strong application value.
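A perceptual loss compares images in a feature space rather than pixel space. In schematic form (the features here are plain lists of floats; a real system would extract them with a fixed pretrained CNN, so this is a sketch of the concept, not the paper's implementation):

```python
def perceptual_loss(feat_gen, feat_ref):
    """Schematic perceptual loss: mean squared error between feature
    vectors of a generated image and a reference image, where the
    features come from a fixed, pretrained feature extractor."""
    assert len(feat_gen) == len(feat_ref)
    return sum((g - r) ** 2 for g, r in zip(feat_gen, feat_ref)) / len(feat_gen)
```

Because the extractor responds to textures and structures rather than raw pixels, minimizing this loss encourages stylistically plausible outputs even when no pixel-aligned paired sample exists.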


2021 ◽  
Author(s):  
Masaki Ikuta

<div>Many algorithms and methods have been proposed for Computed Tomography (CT) image reconstruction, particularly with the recent surge of interest in machine learning and deep learning methods. The majority of recently proposed methods are, however, limited to image-domain processing, where deep learning is used to learn the mapping from a noisy image data set to a true image data set. While deep learning-based methods can produce higher quality images than conventional model-based post-processing algorithms, they have limitations: applied in the image domain, they cannot compensate for information lost during the forward and backward projections of CT image reconstruction, especially in the presence of high noise. In this paper, we propose a new Recurrent Neural Network (RNN) architecture for CT image reconstruction. We propose the Gated Momentum Unit (GMU), which extends the Gated Recurrent Unit (GRU) and is specifically designed for inverse problems in image processing. This new RNN cell performs an iterative optimization with accelerated convergence. The GMU has a few gates to regulate information flow: the gates decide to keep important long-term information and discard insignificant short-term detail. In addition, the GMU has a likelihood term and a prior term analogous to Iterative Reconstruction (IR). The likelihood term helps ensure estimated images are consistent with the observation data, while the prior term keeps the likelihood term from overfitting each individual observation. We conducted a synthetic image study along with a real CT image study to demonstrate that the proposed method achieves the highest levels of Peak Signal to Noise Ratio (PSNR) and Structural Similarity (SSIM). We also show that the algorithm converges faster than other well-known methods.</div>
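The idea of a gate blending previous momentum with a new update, as in the GMU, can be sketched with a GRU-style update gate (an illustrative scalar sketch, not the paper's actual cell; in the real network the gate value would be computed by learned weights):

```python
def gated_momentum_step(x, m, grad, gate, lr=0.1):
    """One illustrative gated-momentum step: a gate in [0, 1] blends the
    previous momentum `m` with the current gradient `grad` (as a GRU
    update gate blends old and new state), and the estimate `x` moves
    along the gated momentum with step size `lr`."""
    m_new = gate * m + (1.0 - gate) * grad
    x_new = x - lr * m_new
    return x_new, m_new
```

A gate near 1 preserves long-term momentum (accelerating convergence along consistent directions), while a gate near 0 lets the current gradient dominate and discard stale detail.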


2021 ◽  
Vol 11 (2) ◽  
pp. 487-496
Author(s):  
Li Liu ◽  
Chi Hua ◽  
Zixuan Cheng ◽  
Yunfeng Ji

Advances in medical imaging techniques have extended the influence of medical imaging in neuroscience, and advanced medical imaging technology is essential for the medical industry. Magnetic resonance imaging (MRI) plays a central role in medical imaging and in the treatment of various human diseases. Doctors analyze brain size, shape, and location in brain MR images to assess brain disease and develop a treatment plan. Manual delineation of brain tissue by experts is laborious and subjective; therefore, the study of automatic segmentation of brain MR images has practical significance. Because brain MR images are characterized by low contrast and high noise, which seriously affect segmentation accuracy, current image segmentation methods have limitations in this application. In this paper, multiple self-organizing feature map neural networks (SOM-NNs) are utilized to construct a parallel self-organizing feature map neural network (PSOM-NN), which converts the segmentation problem of brain images into a classification problem for the PSOM-NN. The experiments show that the SOM has strong self-learning ability during training, and the parallelism of the PSOM-NN model greatly reduces segmentation time and improves the real-time performance of the model, helping to realize fully automatic or semi-automatic segmentation of the lesion area. The PSOM-NN can also improve segmentation accuracy and facilitate intelligent diagnosis.
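The core SOM training step underlying the SOM-NN — find the best-matching unit and pull its weights toward the input — can be sketched as follows (a minimal sketch without the neighborhood function or learning-rate decay of a full SOM):

```python
def som_update(weights, sample, lr):
    """One self-organizing map step: locate the best-matching unit (the
    weight vector closest to `sample` in squared Euclidean distance) and
    move it toward the sample by learning rate `lr`."""
    def dist_sq(w):
        return sum((wi - si) ** 2 for wi, si in zip(w, sample))
    bmu = min(range(len(weights)), key=lambda i: dist_sq(weights[i]))
    weights[bmu] = [wi + lr * (si - wi)
                    for wi, si in zip(weights[bmu], sample)]
    return bmu, weights
```

In a parallel scheme like the PSOM-NN described above, independent maps of this kind can be trained on disjoint image regions concurrently, which is where the reported reduction in segmentation time comes from.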

