A Configurable Architecture for Running Hybrid Convolutional Neural Networks in Low-Density FPGAs

IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 107229-107243 ◽  
Author(s):  
Mário P. Véstias ◽  
Rui P. Duarte ◽  
José T. de Sousa ◽  
Horácio C. Neto

Electronics ◽  
2019 ◽  
Vol 8 (11) ◽  
pp. 1321 ◽  
Author(s):  
Mário P. Véstias ◽  
Rui Policarpo Duarte ◽  
José T. de Sousa ◽  
Horácio C. Neto

Edge devices are becoming smarter with the integration of machine learning methods, such as deep learning, and are therefore used in many application domains where decisions have to be made without human intervention. Deep learning and, in particular, convolutional neural networks (CNNs) are more efficient than previous algorithms for several computer vision applications, such as security and surveillance, where image and video analysis are required. This better efficiency comes at the cost of high computation and memory requirements. Hence, running CNNs in embedded computing devices is a challenge for both algorithm and hardware designers. New processing devices, dedicated system architectures and optimizations of the networks have been researched to deal with these computation requirements. In this paper, we improve the inference execution times of CNNs in low-density FPGAs (Field-Programmable Gate Arrays) using fixed-point arithmetic, zero-skipping and weight pruning. The developed architecture supports the execution of large CNNs in FPGA devices with reduced on-chip memory and computing resources. With the proposed architecture, it is possible to infer an image in AlexNet in 2.9 ms on a ZYNQ7020 and 1.0 ms on a ZYNQ7045 with less than 1% accuracy degradation. These results improve on previous state-of-the-art architectures for CNN inference.
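The fixed-point arithmetic and zero-skipping mentioned in the abstract can be illustrated with a minimal software sketch. This is not the authors' hardware architecture; it is a toy model, with illustrative function names, of the two ideas: quantize floats to fixed-point integers, and skip multiply-accumulate steps whenever a (pruned) weight or an activation is zero.

```python
import numpy as np

def quantize_fixed_point(x, frac_bits=8):
    """Quantize a float array to fixed-point integers with `frac_bits`
    fractional bits (value ~= integer / 2**frac_bits)."""
    scale = 1 << frac_bits
    return np.round(x * scale).astype(np.int32)

def dot_zero_skip(weights_q, acts_q):
    """Fixed-point dot product that skips any term where the (pruned)
    weight or the activation is zero. Returns the accumulator and the
    number of multiplications actually performed."""
    acc = 0
    mults = 0
    for w, a in zip(weights_q, acts_q):
        if w == 0 or a == 0:  # zero-skipping: no multiply issued
            continue
        acc += int(w) * int(a)
        mults += 1
    return acc, mults

# Weight pruning typically forces many weights to exactly zero,
# so `mults` drops well below len(weights_q).
w_q = quantize_fixed_point(np.array([0.5, 0.0, -0.25]))
a_q = quantize_fixed_point(np.array([1.0, 2.0, 0.0]))
acc, mults = dot_zero_skip(w_q, a_q)  # only one real multiplication
```

In hardware, the same idea saves DSP cycles and memory bandwidth: pruned-to-zero weights need not be fetched or multiplied at all.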


2020 ◽  
Vol 77 ◽  
pp. 103136 ◽  
Author(s):  
Mário P. Véstias ◽  
Rui P. Duarte ◽  
José T. de Sousa ◽  
Horácio C. Neto

2020 ◽  
Vol 2020 (10) ◽  
pp. 28-1-28-7 ◽  
Author(s):  
Kazuki Endo ◽  
Masayuki Tanaka ◽  
Masatoshi Okutomi

Classification of degraded images is very important in practice because images are usually degraded by compression, noise, blurring, etc. Nevertheless, most research in image classification focuses only on clean images without any degradation. Some papers have already proposed deep convolutional neural networks composed of an image restoration network and a classification network to classify degraded images. This paper proposes an alternative approach in which a degraded image and an additional degradation parameter are used together for classification. The proposed classification network has two inputs: the degraded image and the degradation parameter. An estimation network for the degradation parameter is also incorporated for cases where the degradation parameters of the degraded images are unknown. The experimental results showed that the proposed method outperforms a straightforward approach in which the classification network is trained on degraded images only.
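The two-input design described in the abstract can be sketched as follows. This is a minimal toy model, not the paper's actual network: the image and the scalar degradation parameter are embedded by separate (here, single-layer) branches, concatenated, and passed to a classifier head. All weight names and shapes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative random weights: image branch, parameter branch, classifier head.
W_img = rng.standard_normal((64, 16))   # 8x8 image -> 16 features
W_param = rng.standard_normal((1, 4))   # scalar degradation level -> 4 features
W_out = rng.standard_normal((20, 10))   # concatenated 20 features -> 10 classes

def classify(image, degradation_param):
    """Toy two-input classifier: embed the degraded image and the
    degradation parameter separately, concatenate, then classify."""
    img_feat = np.maximum(image.ravel() @ W_img, 0.0)              # ReLU image branch
    par_feat = np.maximum(np.array([degradation_param]) @ W_param, 0.0)
    joint = np.concatenate([img_feat, par_feat])                   # fuse both inputs
    logits = joint @ W_out
    return int(np.argmax(logits))

# When the degradation level is unknown, the paper adds an estimation
# network; here that role would be a function mapping image -> parameter.
label = classify(rng.standard_normal((8, 8)), degradation_param=0.3)
```

The key point the sketch captures is that the degradation level is an explicit input to the classifier rather than something the network must infer implicitly from the degraded pixels.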
