Optimization of FireNet for Liver Lesion Classification

Electronics ◽  
2020 ◽  
Vol 9 (8) ◽  
pp. 1237
Author(s):  
Gedeon Kashala Kabe ◽  
Yuqing Song ◽  
Zhe Liu

In recent years, deep learning techniques, and in particular convolutional neural network (CNN) methods, have demonstrated superior performance in image classification and visual object recognition. In this work, we propose a classification of four types of liver lesions, namely hepatocellular carcinoma, metastases, hemangiomas, and healthy tissue, using convolutional neural networks with a succinct model called FireNet. We improved speed for quick classification and decreased the model size and the number of parameters by using Fire modules from SqueezeNet. We added bypass connections around the Fire modules to learn a residual function between input and output and to mitigate the vanishing gradient problem. We also propose a new Particle Swarm Optimization (NPSO) to optimize the network parameters in order to further boost the performance of the proposed FireNet. The experimental results show that FireNet has 9.5 times fewer parameters than GoogLeNet, 51.6 times fewer than AlexNet, and 75.8 times fewer than ResNet. The model size of FireNet is 16.6 times smaller than GoogLeNet, 75 times smaller than AlexNet, and 76.6 times smaller than ResNet. The final accuracy of our proposed FireNet model was 89.2%.
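The abstract does not specify the NPSO variant, but the underlying idea can be illustrated with a minimal generic particle swarm optimizer. The sketch below minimizes a toy sphere function standing in for a validation-loss surface over network hyperparameters; all constants (swarm size, inertia, cognitive/social coefficients) are illustrative assumptions, not the paper's settings.

```python
# Minimal generic PSO sketch; a stand-in for hyperparameter search,
# not the paper's NPSO variant.
import random

def pso(objective, dim, n_particles=20, iters=100, seed=0):
    rng = random.Random(seed)
    # Initialize particle positions in [-5, 5] with zero velocities.
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # personal bests
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # global best
    w, c1, c2 = 0.7, 1.5, 1.5                        # inertia, cognitive, social
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy objective: sphere function, minimum 0 at the origin.
best, best_val = pso(lambda x: sum(v * v for v in x), dim=3)
```

In the paper's setting the objective would instead be a (much more expensive) training-and-validation run of FireNet per particle.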

2020 ◽  
Vol 3 (1) ◽  
pp. 445-454
Author(s):  
Celal Buğra Kaya ◽  
Alperen Yılmaz ◽  
Gizem Nur Uzun ◽  
Zeynep Hilal Kilimci

Pattern classification concerns the automatic discovery of regularities in a dataset through the use of various learning techniques, thereby providing the assignment of objects to a set of categories or classes. This study evaluates deep learning methodologies for the classification of stock patterns. To classify patterns obtained from stock charts, convolutional neural networks (CNNs), recurrent neural networks (RNNs), and long short-term memory networks (LSTMs) are employed. To demonstrate the efficiency of the proposed models in categorizing patterns, a hand-crafted image dataset is constructed from stock charts of the Istanbul Stock Exchange and the NASDAQ Stock Exchange. Experimental results show that convolutional neural networks exhibit superior classification success in recognizing patterns compared to the other deep learning methodologies.
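The abstract does not describe how the charts are rendered, but a common way to feed a price series to a CNN is to rasterize it into a fixed-size binary grid. The sketch below is a hedged illustration of that preprocessing step only; the 0/1 encoding and grid resolution are assumptions.

```python
# Hedged sketch: rasterize a price series into a binary grid a CNN could consume.

def series_to_grid(prices, height=8):
    """Rasterize a price series into a height x len(prices) 0/1 grid."""
    lo, hi = min(prices), max(prices)
    span = (hi - lo) or 1.0          # avoid division by zero for flat series
    grid = [[0] * len(prices) for _ in range(height)]
    for col, p in enumerate(prices):
        # Map the price to a row; row 0 is the top of the chart.
        row = height - 1 - int((p - lo) / span * (height - 1))
        grid[row][col] = 1
    return grid

# A small "peak" pattern: rises to a high, then falls back.
grid = series_to_grid([1.0, 2.0, 3.0, 4.0, 3.0, 2.0], height=4)
```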


2021 ◽  
Author(s):  
Veerayuth Kittichai ◽  
Morakot Kaewthamasorn ◽  
Suchansa Thanee ◽  
Rangsan Jomtarak ◽  
Kamonpob Klanboot ◽  
...  

Abstract Background: Infection with an avian malaria parasite (Plasmodium gallinaceum) in domestic chickens presents a major threat to the poultry industry because it causes economic loss in both the quality and quantity of meat and egg production. Deep learning algorithms have been developed to identify avian malaria infections and classify their blood-stage development. Methods: In this study, four types of deep convolutional neural networks, namely Darknet, Darknet19, darknet19_448x448, and Densenet201, are used to classify P. gallinaceum blood stages. We randomly collected a dataset of 10,548 single-cell images consisting of four parasite stages from ten infected blood films stained with Giemsa. All images were confirmed by three well-trained examiners. Results: In the model-wise comparison, the four neural network models achieved a mean average precision of at least 95%. Darknet outperformed the other model architectures in the classification of the P. gallinaceum development stages. In addition, Darknet also had the best performance in class-wise classification, with average values greater than 99% in accuracy, specificity, sensitivity, precision, and F1-score. Conclusions: Therefore, the Darknet model is more suitable for the classification of P. gallinaceum blood stages than the other three models. The results may help us develop a rapid screening tool to assist non-experts in field studies where specific instruments for avian malaria diagnosis are lacking.


2019 ◽  
Author(s):  
Astrid A. Zeman ◽  
J. Brendan Ritchie ◽  
Stefania Bracci ◽  
Hans Op de Beeck

Abstract: Deep Convolutional Neural Networks (CNNs) are gaining traction as the benchmark model of visual object recognition, with performance now surpassing humans. While CNNs can accurately assign one image to potentially thousands of categories, network performance could be the result of layers that are tuned to represent the visual shape of objects, rather than object category, since both are often confounded in natural images. Using two stimulus sets that explicitly dissociate shape from category, we correlate these two types of information with each layer of multiple CNNs. We also compare CNN output with fMRI activation along the human visual ventral stream by correlating artificial with biological representations. We find that CNNs encode category information independently from shape, peaking at the final fully connected layer in all tested CNN architectures. Comparing CNNs with fMRI brain data, early visual cortex (V1) and early layers of CNNs encode shape information. Anterior ventral temporal cortex encodes category information, which correlates best with the final layer of CNNs. The interaction between shape and category that is found along the human visual ventral pathway is echoed in multiple deep networks. Our results suggest CNNs represent category information independently from shape, much like the human visual system.
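The correlation analysis described above is typically carried out on representational dissimilarity matrices (RDMs). A minimal sketch, with toy numbers rather than the study's data: flatten the upper triangle of a layer's RDM and of a model RDM (shape or category), then compute their Pearson correlation.

```python
# Hedged RDM-correlation sketch with toy values.

def pearson(x, y):
    """Pearson correlation between two equal-length vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Upper-triangle entries of two toy 4x4 RDMs (6 pairwise dissimilarities):
# a CNN layer's RDM versus a binary category-model RDM.
layer_rdm = [0.1, 0.9, 0.8, 0.9, 0.7, 0.2]
category_model_rdm = [0.0, 1.0, 1.0, 1.0, 1.0, 0.0]
r = pearson(layer_rdm, category_model_rdm)
```

A high `r` for the category model (and a low one for a shape model) at a given layer would indicate that the layer encodes category information beyond shape.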


2021 ◽  
Vol 18 (2) ◽  
pp. 27-39
Author(s):  
Michel Costa ◽  
Vanessa Rezende ◽  
Cledisson Martins ◽  
Adam Santos ◽  
...  

Convolutional neural networks (CNNs) are one of the deep learning techniques that, owing to the computational advances of recent years, have leveraged the field of computer vision, allowing substantial gains in the most varied classification problems, especially those involving digital images. In this context, this paper proposes a methodology for the classification of multiple pathologies related to different plant species. Initially, this methodology involved image processing and the generation of ten new databases, varying between 50 and 66 of the most represented classes. After training the models (VGG16, ResNet101v1, ResNet101v2, ResNetXt50, and DenseNet169), a comparative study was conducted based on widely used classification metrics, such as test accuracy, F1-score, and area under the curve. To attest to the significance of the results, Friedman's nonparametric statistical test and two post-hoc procedures were performed, which demonstrated that ResNetXt50 and DenseNet169 obtained superior performance compared with VGG16 and the ResNets.
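Friedman's test, used above for the model comparison, ranks the models within each database and tests whether the average ranks differ more than chance would allow. A minimal sketch of the test statistic, with illustrative accuracy scores (k = 3 hypothetical models on n = 4 databases) and no tie handling, which is a simplifying assumption:

```python
# Hedged sketch of Friedman's chi-square statistic (no tie correction).

def friedman_statistic(scores):
    """scores[i][j] = score of model j on dataset i (higher is better)."""
    n, k = len(scores), len(scores[0])
    avg_rank = [0.0] * k
    for row in scores:
        # Rank models within this dataset: rank 1 = best score.
        order = sorted(range(k), key=lambda j: -row[j])
        for rank, j in enumerate(order, start=1):
            avg_rank[j] += rank / n
    return 12 * n / (k * (k + 1)) * (
        sum(r * r for r in avg_rank) - k * (k + 1) ** 2 / 4)

chi2 = friedman_statistic([
    [0.91, 0.88, 0.85],
    [0.93, 0.90, 0.86],
    [0.89, 0.91, 0.84],
    [0.94, 0.90, 0.87],
])
```

When the statistic exceeds the chi-square critical value for k-1 degrees of freedom, post-hoc procedures (as in the paper) identify which model pairs actually differ.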


Entropy ◽  
2021 ◽  
Vol 23 (4) ◽  
pp. 423
Author(s):  
Gabriel Díaz ◽  
Billy Peralta ◽  
Luis Caro ◽  
Orietta Nicolis

Automatic recognition of visual objects using a deep learning approach has been successfully applied to multiple areas. However, deep learning techniques require a large amount of labeled data, which is usually expensive to obtain. An alternative is to use semi-supervised models, such as co-training, where multiple complementary views are combined using a small amount of labeled data. A simple way to associate views to visual objects is through the application of a degree of rotation or a type of filter. In this work, we propose a co-training model for visual object recognition using deep neural networks by adding layers of self-supervised neural networks as intermediate inputs to the views, where the views are diversified through the cross-entropy regularization of their outputs. Since the model merges the concepts of co-training and self-supervised learning by considering the differentiation of outputs, we call it Differential Self-Supervised Co-Training (DSSCo-Training). This paper presents experiments applying the DSSCo-Training model to well-known image datasets such as MNIST, CIFAR-100, and SVHN. The results indicate that the proposed model is competitive with state-of-the-art models and shows an average relative improvement of 5% in accuracy across several datasets, despite its greater simplicity with respect to more recent approaches.
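The output-differentiation idea can be illustrated in isolation: a cross-entropy term between the two views' softmax outputs is larger when the views disagree, so it can be used to push the views apart. This is a toy sketch of the regularizer only, with made-up logits, not the full DSSCo-Training pipeline.

```python
# Hedged sketch of cross-entropy between two views' output distributions.
import math

def softmax(logits):
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(p, q, eps=1e-12):
    """H(p, q) = -sum p_i log q_i; grows as the distributions diverge."""
    return -sum(pi * math.log(qi + eps) for pi, qi in zip(p, q))

view_a = softmax([2.0, 0.1, 0.1])       # view A favors class 0
view_b = softmax([0.1, 2.0, 0.1])       # view B favors class 1
divergent = cross_entropy(view_a, view_b)
similar = cross_entropy(view_a, view_a)  # self cross-entropy = entropy of A
```

Maximizing such a term (or adding its negative to the loss) encourages the complementary views that co-training relies on.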


2017 ◽  
Author(s):  
Courtney J. Spoerer ◽  
Patrick McClure ◽  
Nikolaus Kriegeskorte

Feedforward neural networks provide the dominant model of how the brain performs visual object recognition. However, these networks lack the lateral and feedback connections, and the resulting recurrent neuronal dynamics, of the ventral visual pathway in the human and nonhuman primate brain. Here we investigate recurrent convolutional neural networks with bottom-up (B), lateral (L), and top-down (T) connections. Combining these types of connections yields four architectures (B, BT, BL, and BLT), which we systematically test and compare. We hypothesized that recurrent dynamics might improve recognition performance in the challenging scenario of partial occlusion. We introduce two novel occluded object recognition tasks to test the efficacy of the models, digit clutter (where multiple target digits occlude one another) and digit debris (where target digits are occluded by digit fragments). We find that recurrent neural networks outperform feedforward control models (approximately matched in parametric complexity) at recognising objects, both in the absence of occlusion and in all occlusion conditions. Recurrent networks were also found to be more robust to the inclusion of additive Gaussian noise. Recurrent neural networks are better in two respects: (1) they are more neurobiologically realistic than their feedforward counterparts; (2) they are better in terms of their ability to recognise objects, especially under challenging conditions. This work shows that computer vision can benefit from using recurrent convolutional architectures and suggests that the ubiquitous recurrent connections in biological brains are essential for task performance.
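The recurrent dynamics of a bottom-up-plus-lateral (BL) unit can be shown with a toy unrolled update: at each time step the state combines a constant bottom-up drive with a lateral term from the previous state. Weights and sizes here are illustrative scalars; the actual models use convolutional bottom-up, lateral, and top-down connections.

```python
# Hedged toy of recurrent BL dynamics, unrolled over discrete time steps.

def relu(v):
    return [max(0.0, x) for x in v]

def unroll_bl(x, w_b, w_l, steps=4):
    """h_t = relu(w_b * x + w_l * h_{t-1}), elementwise for simplicity."""
    h = [0.0] * len(x)
    history = []
    for _ in range(steps):
        h = relu([w_b * xi + w_l * hi for xi, hi in zip(x, h)])
        history.append(h)
    return history

# With w_l = 0.5 the state grows toward x / (1 - w_l) over time,
# i.e. recurrence amplifies and sharpens the bottom-up evidence.
states = unroll_bl(x=[1.0, 0.5], w_b=1.0, w_l=0.5)
```

Under occlusion, iterating such dynamics gives the network multiple passes to integrate partial evidence, which is the intuition the paper tests.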


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Veerayuth Kittichai ◽  
Morakot Kaewthamasorn ◽  
Suchansa Thanee ◽  
Rangsan Jomtarak ◽  
Kamonpob Klanboot ◽  
...  

Abstract: The infection of an avian malaria parasite (Plasmodium gallinaceum) in domestic chickens presents a major threat to the poultry industry because it causes economic loss in both the quality and quantity of meat and egg production. Computer-aided diagnosis has been developed to automatically identify avian malaria infections and classify the blood infection stage development. In this study, four types of deep convolutional neural networks, namely Darknet, Darknet19, Darknet19-448 and Densenet201, are used to classify P. gallinaceum blood stages. We randomly collected a dataset of 12,761 single-cell images consisting of three parasite stages from ten-infected blood films stained by Giemsa. All images were confirmed by three well-trained examiners. The study mainly compared several image classification models and used both qualitative and quantitative data for the evaluation of the proposed models. In the model-wise comparison, the four neural network models gave us high values with a mean average accuracy of at least 97%. The Darknet can reproduce a superior performance in the classification of the P. gallinaceum development stages across any other model architectures. Furthermore, the Darknet has the best performance in multiple class-wise classification, with average values of greater than 99% in accuracy, specificity, and sensitivity. It also has a lower misclassification rate (<1%) than the other three models. Therefore, the model is more suitable for the classification of P. gallinaceum blood stages. The findings could help us create a fast-screening method to help non-experts in field studies where there is a lack of specialized instruments for avian malaria diagnostics.
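The class-wise metrics reported above (accuracy, specificity, sensitivity, precision, F1-score) are all derived one-vs-rest from a confusion matrix. A minimal sketch with toy counts, not the paper's results:

```python
# Hedged sketch: one-vs-rest metrics for one class of a multi-class confusion matrix.

def class_metrics(cm, c):
    """cm[i][j] = count of true class i predicted as class j; c = class index."""
    total = sum(sum(row) for row in cm)
    tp = cm[c][c]
    fn = sum(cm[c]) - tp                  # class-c samples predicted elsewhere
    fp = sum(row[c] for row in cm) - tp   # other samples predicted as class c
    tn = total - tp - fn - fp
    sensitivity = tp / (tp + fn)          # a.k.a. recall
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / total
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, specificity, sensitivity, precision, f1

cm = [[98, 1, 1],    # toy 3-stage confusion matrix
      [2, 95, 3],
      [0, 2, 98]]
acc, spec, sens, prec, f1 = class_metrics(cm, 0)
```

The paper's ">99%" class-wise averages correspond to computing these per stage and averaging over the classes.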


2021 ◽  
Vol 87 (4) ◽  
pp. 295-308
Author(s):  
Qimin Cheng ◽  
Yuan Xu ◽  
Peng Fu ◽  
Jinling Li ◽  
Wei Wang ◽  
...  

Deep learning techniques, especially convolutional neural networks, have boosted performance in analyzing and understanding remotely sensed images to a great extent. However, existing scene-classification methods generally neglect local and spatial information that is vital to scene classification of remotely sensed images. In this study, a method of scene classification for remotely sensed images based on pretrained densely connected convolutional neural networks combined with an ensemble classifier is proposed to tackle the under-utilization of local and spatial information for image classification. Specifically, we first exploit a pretrained DenseNet and fine-tune it to release its potential in remote-sensing image feature representation. Second, a spatial-pyramid structure and an improved Fisher-vector coding strategy are leveraged to further strengthen the representation capability and robustness of the feature map captured from convolutional layers. Then we integrate an ensemble classifier into our network architecture, considering the limited attention usually given to classifying feature descriptors. Extensive experiments are conducted, and the proposed method achieves superior performance on the UC Merced, AID, and NWPU-RESISC45 data sets.
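The spatial-pyramid idea mentioned above can be sketched simply: pool a convolutional feature map over successively finer grids (here 1x1 and 2x2) and concatenate the results, so coarse spatial layout survives pooling. The 4x4 map, average pooling, and two levels are illustrative assumptions; the paper additionally applies Fisher-vector coding, which is omitted here.

```python
# Hedged sketch of spatial-pyramid average pooling over a 2D feature map.

def avg_pool_region(fmap, r0, r1, c0, c1):
    vals = [fmap[r][c] for r in range(r0, r1) for c in range(c0, c1)]
    return sum(vals) / len(vals)

def spatial_pyramid(fmap, levels=(1, 2)):
    h, w = len(fmap), len(fmap[0])
    feats = []
    for g in levels:                      # g x g grid at each pyramid level
        for i in range(g):
            for j in range(g):
                feats.append(avg_pool_region(
                    fmap, i * h // g, (i + 1) * h // g,
                    j * w // g, (j + 1) * w // g))
    return feats

fmap = [[1, 2, 3, 4],
        [5, 6, 7, 8],
        [9, 10, 11, 12],
        [13, 14, 15, 16]]
features = spatial_pyramid(fmap)          # 1 global + 4 quadrant averages
```

The concatenated vector keeps both a global summary and per-quadrant statistics, which is the local/spatial information the method aims to preserve.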

