Reducing weight precision of convolutional neural networks towards large-scale on-chip image recognition

2015 ◽  
Author(s):  
Zhengping Ji ◽  
Ilia Ovsiannikov ◽  
Yibing Wang ◽  
Lilong Shi ◽  
Qiang Zhang
2018 ◽  
Vol 2018 ◽  
pp. 1-15 ◽  
Author(s):  
Qi Zhao ◽  
Shuchang Lyu ◽  
Boxue Zhang ◽  
Wenquan Feng

Convolutional neural networks (CNNs) are becoming increasingly popular. CNNs are now a widely used feature extractor in image processing, big data processing, fog computing, and related fields. CNNs usually consist of several basic units, such as convolutional, pooling, and activation units. In CNNs, the conventional pooling methods are 2×2 max-pooling and average-pooling, applied after the convolutional or ReLU layers. In this paper, we propose a Multiactivation Pooling (MAP) method that makes CNNs more accurate on classification tasks without increasing depth or the number of trainable parameters. We add more convolutional layers before each pooling layer and expand the pooling region to 4×4, 8×8, 16×16, and even larger. When performing large-scale subsampling, we select the top-k activations, sum them, and constrain the sum with a hyperparameter σ. We pick VGG, ALL-CNN, and DenseNets as our baseline models and evaluate the proposed MAP method on benchmark datasets: CIFAR-10, CIFAR-100, SVHN, and ImageNet. The classification results are competitive.
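The abstract describes the pooling step only at a high level; the exact MAP formulation is in the paper. A minimal NumPy sketch under assumed semantics (sum of the top-k activations per window, scaled by σ; the function name and the role of σ as a simple multiplicative constraint are assumptions for illustration):

```python
import numpy as np

def map_pool(feature_map, region=4, k=4, sigma=0.25):
    """Multiactivation Pooling sketch (assumed formulation): for each
    region x region window, sum the top-k activations and scale by sigma.
    With k=1 and sigma=1.0 this reduces to ordinary max-pooling."""
    h, w = feature_map.shape
    out_h, out_w = h // region, w // region
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            window = feature_map[i * region:(i + 1) * region,
                                 j * region:(j + 1) * region].ravel()
            topk = np.sort(window)[-k:]       # k largest activations
            out[i, j] = sigma * topk.sum()    # constrained aggregate
    return out
```

Setting `region=4` and `k=1` makes the large-window behaviour easy to compare against the conventional 2×2 max-pooling the abstract contrasts with.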


Author(s):  
Zhengsu Chen ◽  
Jianwei Niu ◽  
Xuefeng Liu ◽  
Shaojie Tang

Convolutional neural networks (CNNs) have achieved remarkable success in image recognition. Although the internal patterns of the input images are effectively learned by CNNs, these patterns constitute only a small proportion of the useful patterns contained in the input images. This can be attributed to the fact that a CNN stops learning once the learned patterns are sufficient for a correct classification. Network regularization methods like dropout and SpatialDropout can ease this problem: during training, they randomly drop features. These dropout methods, in essence, change the patterns learned by the network and, in turn, force the network to learn other patterns that support a correct classification. However, these methods have an important drawback: randomly dropping features is generally inefficient and can introduce unnecessary noise. To tackle this problem, we propose SelectScale. Instead of randomly dropping units, SelectScale selects the important features in the network and adjusts them during training. Using SelectScale, we improve the performance of CNNs on CIFAR and ImageNet.
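The abstract states only that SelectScale "selects the important features and adjusts them"; the selection criterion and adjustment rule are not given. A hedged NumPy sketch of the general idea, with the importance measure (mean absolute channel activation) and the down-scaling of selected channels both being illustrative assumptions, not the paper's method:

```python
import numpy as np

def select_scale(activations, ratio=0.5, scale=0.7):
    """SelectScale-style sketch (assumed formulation): rank channels by
    mean absolute activation, then down-scale the most important ones,
    pushing the network to learn complementary patterns. Contrast with
    dropout, which zeroes randomly chosen features instead."""
    # activations: (channels, height, width)
    importance = np.abs(activations).mean(axis=(1, 2))
    n_select = max(1, int(len(importance) * ratio))
    top = np.argsort(importance)[-n_select:]  # most important channels
    out = activations.copy()
    out[top] *= scale                         # deterministic adjustment
    return out
```

The key contrast with dropout is that the features to adjust are chosen deterministically by importance rather than at random, which is the inefficiency the abstract criticizes.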


2018 ◽  
Vol 22 (S4) ◽  
pp. 9371-9383 ◽  
Author(s):  
Xiaoning Zhu ◽  
Qingyue Meng ◽  
Bojian Ding ◽  
Lize Gu ◽  
Yixian Yang

2018 ◽  
Vol 7 (3.3) ◽  
pp. 119
Author(s):  
B Lokesh ◽  
Ravoori Charishma ◽  
Natuva Hiranmai

Farmers today face a multitude of problems, such as lower crop production, tumultuous weather patterns, and crop infections. All of these issues can be addressed if farmers have access to the right information. The current methods of information retrieval, such as search engine lookup or consulting an Agriculture Officer, have several drawbacks. A more suitable solution, which we propose, is an Android application, available at all times, that can give succinct answers to any question a farmer may pose. The application will include an image recognition component able to identify a variety of crop diseases in case the farmer cannot name or describe the problem. Image recognition is the ability of a computer to recognize and distinguish between different objects, and it is a much harder problem than it seems. We use TensorFlow, a machine-learning framework, to implement it with convolutional neural networks.
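The abstract names TensorFlow but gives no model details. Independent of any framework, the primitive underlying such a recognizer is the 2-D convolution that slides a learned filter over the image; a minimal self-contained NumPy sketch of that operation (a valid cross-correlation, as deep-learning libraries implement "convolution"):

```python
import numpy as np

def conv2d(image, kernel):
    """Minimal valid-mode 2-D convolution (cross-correlation): the core
    operation a CNN-based disease recognizer is built from. Each output
    pixel is the dot product of the kernel with one image patch."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (image[i:i + kh, j:j + kw] * kernel).sum()
    return out
```

In the actual application, TensorFlow stacks many such filtered layers with pooling and nonlinearities, and learns the kernel values from labeled images of diseased crops.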

