Thyroid Nodule Classification in Ultrasound Images by Fusion of Conventional Features and Res-GAN Deep Features

2021 ◽  
Vol 2021 ◽  
pp. 1-7
Author(s):  
Yuan Hang

In spite of the large number of patients affected by thyroid nodules, detection at an early stage is still a challenging task. Thyroid ultrasonography (US) is a noninvasive, inexpensive procedure widely used to detect and evaluate thyroid nodules. Ultrasound image classification is a computer-aided diagnostic technology based on image features. In this paper, we present a method that combines deep features with conventional features to form a hybrid feature space. Several image enhancement techniques, such as histogram equalization, the Laplacian operator, logarithm transform, and gamma correction, are applied to improve the quality and characteristics of the image before feature extraction. Among these methods, histogram equalization not only improves the brightness and contrast of the image but also achieves the highest classification accuracy, at 69.8%. We extract features such as histograms of oriented gradients, local binary patterns, SIFT, and SURF and combine them with deep features of a residual generative adversarial network. We compare ResNet18, a residual convolutional neural network with 18 layers, with Res-GAN, a residual generative adversarial network; experimental results show that Res-GAN outperforms the former. In addition, fusing SURF features with deep features and classifying with a random forest model achieves 95% accuracy.
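As a concrete illustration of the preprocessing step that performed best above, histogram equalization remaps pixel intensities through the image's cumulative distribution so that they spread over the full dynamic range. A minimal pure-Python sketch (the 3x3 image and function name are invented for demonstration, not taken from the paper):

```python
def hist_equalize(img, levels=256):
    """Histogram equalization: remap intensities via the cumulative distribution."""
    flat = [p for row in img for p in row]
    n = len(flat)
    # intensity histogram
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    # cumulative distribution function
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    # standard equalization mapping to [0, levels - 1]
    def remap(p):
        return round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
    return [[remap(p) for p in row] for row in img]

# Toy low-contrast patch: values cluster in a narrow band
img = [[50, 50, 60], [60, 70, 70], [80, 90, 200]]
out = hist_equalize(img)  # darkest pixel -> 0, brightest -> 255
```

OpenCV's `cv2.equalizeHist` implements the same mapping for 8-bit grayscale images.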

2018 ◽  
Vol 9 (4) ◽  
pp. 48-63 ◽  
Author(s):  
S. Saranya Rubini ◽  
A. Kunthavai ◽  
M.B. Sachin ◽  
S. Deepak Venkatesh

Retinal image analysis plays an important part in identifying various eye-related diseases such as diabetic retinopathy (DR), glaucoma and many others. Accurate segmentation of blood vessels is key to identifying retinal diseases at an early stage. In this article, an unsupervised approach based on contour detection is proposed for effective segmentation of retinal blood vessels. The proposed morphological contour-based blood vessel segmentation (MCBVS) method performs preprocessing using contrast-limited adaptive histogram equalization followed by alternate sequential filtering to generate a noise-free image. The resultant image undergoes Otsu thresholding for candidate extraction, followed by contour detection to properly segment the blood vessels. The MCBVS method has been tested on the DRIVE dataset, and experimental results show that it achieved a sensitivity, specificity and accuracy of 58.79%, 90.77% and 86.7%, respectively. The MCBVS method performs better than the existing methods Sobel, Prewitt and Modified U-Net in terms of accuracy.
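Otsu thresholding, used above for candidate extraction, exhaustively searches for the gray level that maximizes the between-class variance of the resulting foreground/background split. A minimal sketch assuming an 8-bit intensity range (the bimodal toy data below stands in for real DRIVE images):

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = 0      # background pixel count
    sum0 = 0.0  # background intensity sum
    for t in range(levels):
        w0 += hist[t]
        sum0 += t * hist[t]
        if w0 == 0:
            continue
        w1 = n - w0
        if w1 == 0:
            break
        mu0 = sum0 / w0                     # background mean
        mu1 = (total_sum - sum0) / w1       # foreground mean
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy data: dark background vs bright vessel-like pixels
pixels = [10, 12, 11, 13, 10] * 10 + [200, 205, 210, 198] * 5
t = otsu_threshold(pixels)  # lands between the two intensity clusters
```

OpenCV exposes the same criterion via `cv2.threshold(..., cv2.THRESH_OTSU)`.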


2020 ◽  
Vol 10 (1) ◽  
pp. 375 ◽  
Author(s):  
Zetao Jiang ◽  
Yongsong Huang ◽  
Lirui Hu

The super-resolution generative adversarial network (SRGAN) is a seminal work capable of generating realistic textures during single-image super-resolution. However, the hallucinated details are often accompanied by unpleasant artifacts. To further enhance the visual quality, we propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between low- and high-resolution images. The method is based on a depthwise separable convolution super-resolution generative adversarial network (DSCSRGAN). A new depthwise separable convolution dense block (DSC Dense Block) was designed for the generator network, which improved the ability to represent and extract image features while greatly reducing the total number of parameters. For the discriminator network, the batch normalization (BN) layer was discarded, which reduced the problem of artifacts. A frequency energy similarity loss function was designed to constrain the generator network to generate better super-resolution images. Experiments on several different datasets showed that the peak signal-to-noise ratio (PSNR) was improved by more than 3 dB, the structural similarity index (SSIM) was increased by 16%, and the total number of parameters was reduced to 42.8% of the original model. Combining various objective indicators and subjective visual evaluation, the algorithm was shown to generate richer image details, clearer texture, and lower complexity.
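The parameter savings claimed for the DSC Dense Block come from the standard depthwise separable factorization: a k×k convolution mapping c_in to c_out channels costs c_in·c_out·k² weights, while the depthwise (c_in·k²) plus pointwise 1×1 (c_in·c_out) pair costs far less. A quick check with illustrative channel counts (not the paper's actual layer sizes):

```python
def conv_params(c_in, c_out, k):
    """Weights of a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def dsc_params(c_in, c_out, k):
    """Depthwise separable: per-channel k x k depthwise + 1 x 1 pointwise."""
    return c_in * k * k + c_in * c_out

std = conv_params(64, 64, 3)   # standard 3x3 conv, 64 -> 64 channels
dsc = dsc_params(64, 64, 3)    # depthwise separable equivalent
ratio = dsc / std              # fraction of the standard parameter count
```

For these sizes the separable version keeps roughly 12.7% of the standard convolution's weights; the same mechanism underlies the 42.8% total reported above, though the exact figure depends on the full architecture.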


2018 ◽  
Author(s):  
Gongbo Liang ◽  
Sajjad Fouladvand ◽  
Jie Zhang ◽  
Michael A. Brooks ◽  
Nathan Jacobs ◽  
...  

Abstract Computed tomography (CT) is a widely-used diagnostic image modality routinely used for assessing anatomical tissue characteristics. However, non-standardized imaging protocols are commonplace, which poses a fundamental challenge in large-scale cross-center CT image analysis. One approach to address the problem is to standardize CT images using generative adversarial network (GAN) models. A GAN learns the data distribution of training images and generates synthesized images under the same distribution. However, existing GAN models are not directly applicable to this task, mainly due to the lack of constraints on the mode of data to generate. Furthermore, they treat every image equally, but in real applications some images are more difficult to standardize than others. All of this may lead to the lack-of-detail problem in CT image synthesis. We present a new GAN model called GANai to mitigate the differences in radiomic features, such as intensity, across CT images captured using non-standard imaging protocols. Given source images, GANai composes new images by specifying a high-level goal that the image features of the synthesized images should be similar to those of the standard images. GANai introduces an alternative improvement training strategy to alternately and steadily improve model performance. The new training strategy enables a series of technical improvements, including phase-specific loss functions, phase-specific training data, and the adoption of ensemble learning, leading to better model performance. The experimental results show that GANai is significantly better than existing state-of-the-art image synthesis algorithms on CT image standardization. It also significantly improves the efficiency and stability of GAN model training.


Author(s):  
Saba Saleem ◽  
Javeria Amin ◽  
Muhammad Sharif ◽  
Muhammad Almas Anjum ◽  
Muhammad Iqbal ◽  
...  

Abstract White blood cells (WBCs) are the portion of the immune system that fights against germs. Leukemia is the most common blood cancer and may lead to death. It occurs due to the production of a large number of immature WBCs in the bone marrow that destroy healthy cells. To reduce the severity of this disease, it is necessary to diagnose the shapes of immature cells at an early stage, which ultimately reduces the mortality rate of patients. Recently, different segmentation and classification methods based on deep-learning (DL) models have been presented, but they still have some limitations. This research proposes a modified DL approach for the accurate segmentation of leukocytes and their classification. The proposed technique includes two core steps: preprocessing-based classification and segmentation. In preprocessing, synthetic images are generated using a generative adversarial network (GAN) and normalized by color transformation. Optimal deep features are extracted from each blood smear image using pretrained deep models, i.e., DarkNet-53 and ShuffleNet. The most informative features are selected by principal component analysis (PCA) and fused serially for classification. Morphological operations based on color thresholding, together with a deep semantic method, are utilized for leukemia segmentation of the classified cells. The classification accuracy achieved with the ALL-IDB and LISC datasets is 100% and 99.70%, respectively, for the classification of leukocytes, i.e., blast, no blast, basophils, neutrophils, eosinophils, lymphocytes, and monocytes, whereas semantic segmentation achieved 99.10% and 98.60% average and global accuracy, respectively. The proposed method achieves outstanding outcomes compared with the latest existing research.
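The "select by PCA, then fuse serially" step can be sketched for toy 2-D features: project each feature set onto its first principal component, then concatenate the reduced vectors (serial fusion). The closed-form 2x2 eigenvector below is a minimal stand-in for a full PCA, and all data values are invented:

```python
import math

def pca_first_component(X):
    """First principal component of 2-D points via the closed-form
    dominant eigenvector of the 2x2 covariance matrix."""
    n = len(X)
    mx = sum(x for x, _ in X) / n
    my = sum(y for _, y in X) / n
    # covariance matrix entries
    sxx = sum((x - mx) ** 2 for x, _ in X) / n
    syy = sum((y - my) ** 2 for _, y in X) / n
    sxy = sum((x - mx) * (y - my) for x, y in X) / n
    # largest eigenvalue of [[sxx, sxy], [sxy, syy]]
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    lam = tr / 2 + math.sqrt(tr * tr / 4 - det)
    # corresponding eigenvector (sxy, lam - sxx), normalized
    vx, vy = sxy, lam - sxx
    norm = math.hypot(vx, vy)
    if norm == 0:           # degenerate (axis-aligned) case
        return (1.0, 0.0)
    return vx / norm, vy / norm

def serial_fuse(f1, f2):
    """Serial fusion: concatenate the two reduced feature vectors."""
    return list(f1) + list(f2)

# Points along y = x: the first component is the diagonal direction
pc = pca_first_component([(0.0, 0.0), (1.0, 1.0), (2.0, 2.0), (3.0, 3.0)])
fused = serial_fuse([0.2, 0.5], [0.9])
```

In practice, `sklearn.decomposition.PCA` handles the general high-dimensional case before concatenation.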


Author(s):  
Jinning Li ◽  
Yexiang Xue

We propose the Dual Scribble-to-Painting Network (DSP-Net), which is able to produce artistic paintings based on user-generated scribbles. In scribble-to-painting transformation, a neural net has to infer additional details of the image, given the relatively sparse information contained in the outlines of the scribble. It is therefore more challenging than classical image style transfer, in which the information content is reduced from photos to paintings. Inspired by the human cognitive process, we propose a multi-task generative adversarial network consisting of two jointly trained neural nets -- one for generating artistic images and the other for semantic segmentation. We demonstrate that joint training on these two tasks brings additional benefit. Experimental results show that DSP-Net outperforms state-of-the-art models both visually and quantitatively. In addition, we publish a large dataset for scribble-to-painting transformation.


2019 ◽  
Author(s):  
Ingo Fruend ◽  
Jaykishan Patel ◽  
Elee D. Stalker

Abstract Higher levels of visual processing are progressively more invariant to low-level visual factors such as contrast. Although this invariance trend has been well documented for simple stimuli like gratings and lines, it is difficult to characterize such invariances in images with naturalistic complexity. Here, we use a generative image model based on a hierarchy of learned visual features—a Generative Adversarial Network—to constrain image manipulations to remain within the vicinity of the manifold of natural images. This allows us to quantitatively characterize visual discrimination behaviour for naturalistically complex, non-linear image manipulations. We find that human tuning to such manipulations has a factorial structure. The first factor governs image contrast with discrimination thresholds following a power law with an exponent between 0.5 and 0.6, similar to contrast discrimination performance for simpler stimuli. A second factor governs image content with approximately constant discrimination thresholds throughout the range of images studied. These results support the idea that human perception factors out image contrast relatively early on, allowing later stages of processing to extract higher level image features in a stable and robust way.
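The reported power law means discrimination thresholds grow sublinearly with pedestal contrast, ΔC ∝ C^a with a ≈ 0.5–0.6. A quick numeric illustration (the proportionality constant and the mid-range exponent 0.55 are invented for demonstration):

```python
def threshold(c, k=0.1, a=0.55):
    """Power-law contrast discrimination threshold: delta_c = k * c ** a."""
    return k * c ** a

# Quadrupling the pedestal contrast raises the threshold by 4**0.55 (about
# 2.14x), not 4x: the compressive behaviour reported for simple gratings.
growth = threshold(0.4) / threshold(0.1)
```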


2021 ◽  
Author(s):  
Dong Sui ◽  
Maozu Guo ◽  
Xiaoxuan Ma ◽  
Julian Baptiste ◽  
Lei Zhang

Abstract Background: Precision medicine, a popular treatment strategy, has become increasingly important to the development of targeted therapy. To correlate medical imaging with prognostic and genomic data, research in radiomics and radiogenomics has provided many pre-defined image features that describe image information quantitatively or qualitatively. However, previous studies present only statistical results proving high correlation among multi-source medical data; they cannot give an intuitive, visual result. Results: In this paper, a deep learning based radiogenomics framework is proposed to construct the linkage from lung tumor images to genomics data and to implement the generation process in reverse, forming a bidirectional framework that maps multi-source medical data. Imaging features are extracted from an autoencoder conditioned on genomics data, which obtains much more relevant features than traditional radiogenomics methods. Finally, we use a generative adversarial network to transform genomics data into tumor images, which gives a cogent result explaining the linkage between them. Conclusions: Our proposed framework provides a deep learning method to conduct radiogenomics research more functionally and intuitively.


2021 ◽  
Vol 40 ◽  
pp. 03013
Author(s):  
Kaustubh Gayadhankar ◽  
Rishi Patel ◽  
Hrithik Lodha ◽  
Swapnil Shinde

Plagiarism detection is a very important problem today because content originality is a client's primary requirement. Many people on the internet use others' images and gain publicity while the owner of the image or data gets nothing out of it. Many users copy data or image features from other users and modify them slightly, or create an artificial replica. With sufficient computational power and volume of data, GAN models are capable of producing fake images that look very similar to real images, and such images are generally not detected by modern plagiarism systems. GAN stands for generative adversarial network; it consists of two neural networks. The first is the generator, which generates a random image, and the second is the discriminator, which identifies whether the generated image is real or fake. In this paper, we propose a system that has been trained on both fake images (GAN-generated images) and real images and helps flag whether an image is plagiarised or real.
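The generator/discriminator interplay described here is the standard GAN minimax game: the discriminator D maximizes V(D, G) = E[log D(x_real)] + E[log(1 − D(x_fake))], while the generator minimizes it by making fakes indistinguishable. A toy 1-D evaluation (all numbers invented) shows why a well-matched generator defeats a fixed detector:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def gan_value(real, fake, D):
    """GAN minimax value V(D, G) = E[log D(x_real)] + E[log(1 - D(x_fake))].
    The discriminator maximizes V; the generator minimizes it."""
    v_real = sum(math.log(D(x)) for x in real) / len(real)
    v_fake = sum(math.log(1.0 - D(x)) for x in fake) / len(fake)
    return v_real + v_fake

real = [4.8, 5.1, 5.0, 4.9]    # "real image" statistics (toy 1-D stand-in)
fake_bad = [0.1, -0.2, 0.0]    # generator far from the real distribution
fake_good = [4.9, 5.05, 5.0]   # generator matching the real distribution

D = lambda x: sigmoid(2.0 * (x - 2.5))  # detector: "real" iff x > 2.5

v_bad = gan_value(real, fake_bad, D)    # high V: fakes are easily flagged
v_good = gan_value(real, fake_good, D)  # low V: fakes pass the detector
```

The drop from `v_bad` to `v_good` is exactly the difficulty the abstract describes: once the generator matches the real distribution, a fixed plagiarism detector can no longer separate the two.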

