Fruit Image Classification Using Convolutional Neural Networks

2019 ◽  
Vol 7 (4) ◽  
pp. 51-70
Author(s):  
Shawon Ashraf ◽  
Ivan Kadery ◽  
Md Abdul Ahad Chowdhury ◽  
Tahsin Zahin Mahbub ◽  
Rashedur M. Rahman

Convolutional neural networks (CNN) are currently the most popular class of models for image recognition and classification tasks. Most superstores and fruit vendors rely on human inspection to check the quality of the fruit in their inventory. However, this process can be automated. We propose a system that can be trained on a fruit image dataset and then, given an input image, detect whether a fruit is rotten or fresh. We built the initial model on the Inception V3 architecture and trained it on our dataset using transfer learning.
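The transfer-learning recipe this abstract describes — keep a pretrained backbone frozen and train only a new classification head — can be illustrated without a deep-learning framework. In the minimal sketch below, a fixed random projection stands in for the frozen Inception V3 feature extractor, and synthetic two-class data stands in for fresh/rotten fruit images; everything here is illustrative, not the authors' actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen, pretrained backbone (e.g. Inception V3 without its
# top layer): a fixed random projection from "pixels" to a feature vector.
W_backbone = rng.normal(size=(64, 16)) / 8.0

def extract_features(x):
    """Frozen feature extractor: never updated during training."""
    return np.maximum(x @ W_backbone, 0.0)  # ReLU features

# Synthetic dataset: two classes ("fresh" vs "rotten") that are separable
# in feature space. Real training would use labelled fruit photos.
n = 200
x_fresh = rng.normal(loc=1.0, size=(n, 64))
x_rotten = rng.normal(loc=-1.0, size=(n, 64))
X = np.vstack([x_fresh, x_rotten])
y = np.array([1] * n + [0] * n)

# New trainable head: logistic regression on the frozen features.
F = extract_features(X)
w = np.zeros(F.shape[1])
b = 0.0
lr = 0.1
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))   # sigmoid
    grad_w = F.T @ (p - y) / len(y)          # only the head is updated,
    grad_b = np.mean(p - y)                  # never the backbone
    w -= lr * grad_w
    b -= lr * grad_b

pred = (F @ w + b > 0).astype(int)
accuracy = np.mean(pred == y)
print(f"training accuracy of the new head: {accuracy:.2f}")
```

In a real Keras pipeline the same two roles are played by the pretrained Inception V3 base (with its layers frozen) and a small dense classifier stacked on top.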

Geosciences ◽  
2021 ◽  
Vol 11 (8) ◽  
pp. 336
Author(s):  
Rafael Pires de Lima ◽  
David Duarte

Convolutional neural networks (CNN) are currently the most widely used tool for the classification of images, especially if such images have large within- and small between-group variance. Thus, one of the main factors driving the development of CNN models is the creation of large, labelled computer vision datasets, some containing millions of images. Thanks to transfer learning, a technique that modifies a model trained on a primary task to execute a secondary task, the adaptation of CNN models trained on such large datasets has rapidly gained popularity in many fields of science, geosciences included. However, the trade-off between two main components of the transfer learning methodology for geoscience images is still unclear: the difference between the datasets used in the primary and secondary tasks; and the amount of available data for the primary task itself. We evaluate the performance of CNN models pretrained with different types of image datasets—specifically, dermatology, histology, and raw food—that are fine-tuned to the task of petrographic thin-section image classification. Results show that CNN models pretrained on ImageNet achieve higher accuracy due to the larger number of samples, as well as a larger variability in the samples in ImageNet compared to the other datasets evaluated.
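The trade-off studied here rests on the basic fine-tuning workflow: train on a large primary task, then continue training from those weights on a smaller secondary task. A toy numpy sketch of that workflow (linear models and synthetic tasks standing in for CNNs, ImageNet, and thin-section images; all names and data are illustrative, not the authors' setup) shows why a related warm start beats a cold start:

```python
import numpy as np

rng = np.random.default_rng(42)
d = 10  # toy "model" is just a weight vector of this size

def make_task(w_true, n):
    """Synthetic regression task defined by a ground-truth weight vector."""
    X = rng.normal(size=(n, d))
    y = X @ w_true + 0.01 * rng.normal(size=n)
    return X, y

def mse(X, y, w):
    r = X @ w - y
    return float(np.mean(r * r))

def train(X, y, w_init, lr=0.05, steps=200):
    """Plain gradient descent on mean squared error."""
    w = w_init.copy()
    for _ in range(steps):
        w -= lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

# Primary task: plenty of labelled data (stand-in for, e.g., ImageNet).
w_primary = rng.normal(size=d)
Xp, yp = make_task(w_primary, n=500)
w_pretrained = train(Xp, yp, np.zeros(d))

# Secondary task: related but not identical, and with far less data
# (stand-in for petrographic thin sections).
w_secondary = w_primary + 0.1 * rng.normal(size=d)
Xs, ys = make_task(w_secondary, n=40)

loss_from_pretrained = mse(Xs, ys, w_pretrained)     # warm start
loss_from_scratch = mse(Xs, ys, rng.normal(size=d))  # cold start

# Fine-tune: continue training from the pretrained weights.
w_finetuned = train(Xs, ys, w_pretrained)
```

The closer the secondary task is to the primary one (here, a small perturbation of the same weights), the lower the warm-start loss before any fine-tuning — the same intuition behind the dataset-similarity question the abstract raises.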


2020 ◽  
Vol 34 (10) ◽  
pp. 13943-13944
Author(s):  
Kira Vinogradova ◽  
Alexandr Dibrov ◽  
Gene Myers

Convolutional neural networks have become state-of-the-art in a wide range of image recognition tasks. The interpretation of their predictions, however, is an active area of research. Whereas various interpretation methods have been suggested for image classification, the interpretation of image segmentation still remains largely unexplored. To that end, we propose SEG-GRAD-CAM, a gradient-based method for interpreting semantic segmentation. Our method is an extension of the widely-used Grad-CAM method, applied locally to produce heatmaps showing the relevance of individual pixels for semantic segmentation.
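The core of Grad-CAM — and of the segmentation extension described here — is a weighted sum of the last convolutional feature maps, where each channel's weight is the global average of the gradient of a score of interest with respect to that channel; for segmentation, the score is the class logit summed over a chosen set of pixels rather than a single classification logit. A minimal numpy sketch, with made-up activations and gradients standing in for a real backward pass:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from one convolutional layer.

    activations: (C, H, W) feature maps of the last conv layer.
    gradients:   (C, H, W) d(score)/d(activations). For classic Grad-CAM the
                 score is a class logit; for the segmentation variant it is
                 the class score summed over a chosen region of pixels.
    """
    # Channel weights alpha_k: global-average-pooled gradients.
    weights = gradients.mean(axis=(1, 2))                  # (C,)
    # Weighted combination of feature maps, then ReLU.
    cam = np.maximum(np.tensordot(weights, activations, axes=1), 0.0)
    # Normalize to [0, 1] for display as a heatmap.
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Toy example: channel 0 fires in the top-left corner and receives a
# positive gradient; channel 1 is background noise with zero gradient.
acts = np.zeros((2, 4, 4))
acts[0, :2, :2] = 1.0
acts[1] = 0.3
grads = np.zeros((2, 4, 4))
grads[0] = 1.0
heatmap = grad_cam(acts, grads)
```

In practice the activations and gradients come from forward/backward hooks on the network; only channel 0 contributes here, so the heatmap highlights exactly the top-left region where that channel is active.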


2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Jahanzaib Latif ◽  
Shanshan Tu ◽  
Chuangbai Xiao ◽  
Sadaqat Ur Rehman ◽  
Mazhar Sadiq ◽  
...  

In parallel with the development of various emerging fields such as computer vision and related technologies, e.g., iris identification and glaucoma detection, criminals are developing their methods as well. Glaucoma, which damages the eye's optic nerve, is a foremost cause of blindness in human beings. Fundus photography is carried out to examine this eye disease, and medical experts evaluate the photographs through a time-consuming visual inspection. Most current systems for automated glaucoma detection in fundus images rely on segmentation-based features, which are sensitive to the underlying segmentation methods. Convolutional neural networks (CNNs) are powerful tools for image classification tasks, as they can learn highly discriminative features from raw pixel intensities. However, their applicability to medical image analysis is limited by the unavailability of the large annotated datasets required for training. In this work, we aim to accelerate diagnosis of this severe disease with a computer-aided system based on transfer learning with deep convolutional neural networks. We adopt the Inception V3 architecture for CNN-based image classification. Our model has the potential to improve classification accuracy, and on imaging data our proposed method outperforms recent state-of-the-art approaches. Case studies in digital forensics are an essential component of emerging technologies, and glaucoma detection plays a vital role in them.


2020 ◽  
Vol 5 (4) ◽  
pp. 48-53
Author(s):  
Mohamad Shahmil Saari ◽  
Romiza Md Nor ◽  
Huzaifah A Hamid

The Harumanis mango cultivar is special to Perlis (the northern state of Malaysia) and has been declared a special fruit in the national agenda. For those not acquainted with this aromatic mango, it is difficult to tell Harumanis apart from other cultivars. With image recognition, people can identify the distinguishing features of Harumanis: an algorithm is applied to recognize the mango from an image. The convolutional neural network method is suitable for building a real-time multi-fruit classification sorter with a camera and for detecting moving fruit. Furthermore, classification accuracy can be improved by increasing the number of images in the dataset, controlling the distance of the fruit from the camera, and refining the labelling process. This project used the MobileNet architecture model because it consumes less computational power while maintaining accuracy. A web-based image recognition system for detecting Harumanis mangoes, known as CamPauh, was developed to recognize four classes of mango: Harumanis, apple mango, other types of mango, and not a mango. CamPauh can identify the different types of mango; the result is stored in a database and displayed on the website. An evaluation of accuracy was conducted and discussed to support users' satisfaction in identifying the correct mango type.

