Disease and Defect Detection System for Raspberries Based on Convolutional Neural Networks

2021
Vol 11 (24)
pp. 11868
Author(s):  
José Naranjo-Torres ◽  
Marco Mora ◽  
Claudio Fredes ◽  
Andres Valenzuela

Raspberries are a fruit of great importance to humans, and their products are segmented by quality. However, estimating raspberry quality is a manual process carried out at the reception of the fruit processing plant, and it is thus exposed to factors that can distort the measurement. The agriculture industry has increasingly adopted deep learning (DL) in computer vision systems. To solve the problem of estimating the quality of raspberries in a picking tray, prototype equipment is developed that determines tray quality using non-destructive computer vision techniques and convolutional neural networks applied to images captured in the visible RGB spectrum. The Faster R-CNN object-detection algorithm is used, and different pretrained CNN networks are evaluated as backbones in the software developed for the equipment. To avoid imbalance in the dataset, an individual object-detection model is trained and optimized for each detection class. Finally, hardware and software are effectively integrated. A proof-of-concept test performed in a real industrial scenario achieved automatic evaluation of raspberry tray quality, eliminating the intervention of the human expert and the errors involved in visual analysis. Excellent results were obtained, in some cases reaching a precision of 100% and reducing the evaluation time per tray image to 30 s on average, which allows a larger and more representative sample of each raspberry batch arriving at the processing plant to be evaluated.
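The strategy of training one detector per class and merging the per-class outputs at inference can be sketched with plain NumPy; the greedy NMS, the (x1, y1, x2, y2) box format, and the threshold below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def iou(box, boxes):
    """Intersection-over-union between one box and an array of boxes (x1, y1, x2, y2)."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression; returns indices of kept boxes."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        order = rest[iou(boxes[i], boxes[rest]) < thresh]
    return keep

def merge_per_class(detections):
    """Merge outputs of independently trained per-class detectors.

    `detections` maps class name -> (boxes, scores); NMS runs per class,
    so one class's detector never suppresses another's.
    """
    merged = []
    for cls, (boxes, scores) in detections.items():
        for i in nms(boxes, scores):
            merged.append((cls, boxes[i], scores[i]))
    return merged
```

Running NMS inside each class, rather than across all classes at once, mirrors the abstract's per-class training: each model's score distribution stays independent of the others.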

2019
Vol 9 (14)
pp. 2865
Author(s):  
Kyungmin Jo ◽  
Yuna Choi ◽  
Jaesoon Choi ◽  
Jong Woo Chung

More than half of post-operative complications can be prevented, and operative performance can be improved, based on feedback gathered from operations or real-time notification of risks during them. However, existing surgical analysis methods are limited because they involve time-consuming processes and subjective opinions. Detecting surgical instruments is therefore necessary for (a) conducting objective analyses, and (b) providing real-time risk notifications during a surgical procedure. We propose a new real-time algorithm for detecting surgical instruments using convolutional neural networks (CNNs). It builds on the YOLO9000 object-detection system and ensures continuity of detection of the surgical tools across successive frames via motion-vector prediction. The method exhibits constant performance irrespective of the surgical instrument class, with a mean average precision (mAP) of 84.7 over all tools at a speed of 38 frames per second (FPS).
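The motion-vector fallback that keeps a tool's track continuous when the detector misses a frame might look like the following constant-velocity sketch; the box format and fallback policy are assumptions for illustration, not the paper's algorithm:

```python
import numpy as np

def predict_box(prev_box, curr_box):
    """Constant-velocity prediction: extrapolate the next-frame box from the
    motion vector between the two most recent boxes."""
    motion = np.asarray(curr_box) - np.asarray(prev_box)
    return np.asarray(curr_box) + motion

def track_with_prediction(frames):
    """For each frame, use the detector output if present; otherwise fall back
    to the motion-predicted box so the track stays continuous.

    `frames` is a list of per-frame detections: a box (x1, y1, x2, y2) or
    None when the detector missed the tool.
    """
    history, output = [], []
    for det in frames:
        if det is not None:
            box = np.asarray(det, dtype=float)
        elif len(history) >= 2:
            box = predict_box(history[-2], history[-1])
        elif history:
            box = history[-1]  # no motion estimate yet; hold the last box
        else:
            box = None  # nothing detected so far; no track to continue
        if box is not None:
            history.append(box)
            output.append(box.tolist())
        else:
            output.append(None)
    return output
```

A detector moving a tool 2 px per frame that drops one frame still yields an unbroken track, because the gap is filled by extrapolation.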


Author(s):  
Anwaar Ulhaq

Invasive species are significant threats to global agriculture and food security, being major causes of crop loss. An operative biosecurity policy requires full automation of detection and habitat identification for potential pests and pathogens. Thermal imaging cameras mounted on Unmanned Aerial Vehicles (UAVs) can observe and detect pest animals and their habitats, and estimate their population size, around the clock. However, their effectiveness is limited by the manual detection of cryptic species in hours of captured flight video, failures in habitat disclosure, and the requirement for expensive high-resolution cameras. The cost-efficiency trade-off therefore often restricts the use of these systems. In this paper, we present an invasive animal species detection system that exploits the cost-effectiveness of consumer-level cameras while harnessing the power of transfer learning and an optimised small-object detection algorithm. Our proposed algorithm, Optimised YOLO (OYOLO), enhances YOLO (You Only Look Once) by improving its training and structure for remote detection of elusive targets. Trained on the massive dataset collected from New South Wales and Western Australia, our system can detect invasive species (rabbits, kangaroos and pigs) in real time with a higher probability of detection (85–100%) than manual detection. This work will enhance the visual analysis of pest species, perform well on low-, medium- and high-resolution thermal imagery, and be equally accessible to all stakeholders and end-users in Australia via a public cloud.
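One common way to help a YOLO-style detector find small targets in high-resolution frames is to split each frame into overlapping tiles before inference; this is a generic sketch of that idea, not OYOLO's actual optimisation, and the tile size and overlap are arbitrary assumptions:

```python
import numpy as np

def tile_image(img, tile=416, overlap=64):
    """Split a large frame into overlapping tiles so small, low-contrast
    targets keep enough pixels for the detector.

    Returns (y, x, crop) tuples; (y, x) is the tile's top-left offset, so
    any detections found in a crop can be shifted back into frame
    coordinates by adding (x, y) to their box corners.
    """
    h, w = img.shape[:2]
    step = tile - overlap  # stride between tile origins
    tiles = []
    for y in range(0, max(h - overlap, 1), step):
        for x in range(0, max(w - overlap, 1), step):
            crop = img[y:y + tile, x:x + tile]
            tiles.append((y, x, crop))
    return tiles
```

The overlap ensures an animal straddling a tile boundary appears whole in at least one crop; duplicates from overlapping tiles would then be removed with NMS on the shifted boxes.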


Computer vision is a scientific field that deals with how computers can acquire high-level understanding from digital images or videos. One of its keystones is object detection, which aims to identify relevant features in a video or image in order to detect objects. The backbone is the first stage of an object-detection pipeline and plays a crucial role in detection quality. Object detectors are usually built on backbone networks originally designed for image classification, and detection performance depends heavily on the features those backbones extract: simply replacing a backbone with a deeper variant can yield a large gain in accuracy. The backbone's efficiency is likewise decisive for real-time object detection. In this paper, we survey the crucial role of deep learning, and convolutional neural networks in particular, in object-detection tasks. We analyze a wide range of convolutional neural networks used as backbones of object-detection models, building a review of backbones that researchers and scientists can use as a guideline for their work.
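The idea that a detection head should be agnostic to which backbone produced its features can be illustrated with a toy interface; the classes and the stand-in "feature extraction" below are purely illustrative, not any real detector's API:

```python
import numpy as np

class Backbone:
    """Minimal feature-extractor interface: image -> feature map.

    A fixed random per-pixel projection stands in for a real conv stack;
    only the output channel count matters for the interface.
    """
    def __init__(self, channels):
        self.channels = channels
        rng = np.random.default_rng(0)
        self.kernel = rng.standard_normal((channels, 3))

    def extract(self, img):
        # img: (H, W, 3) -> features: (H, W, channels)
        return img @ self.kernel.T

class Detector:
    """Detection head that only depends on the backbone's interface,
    so backbones can be swapped without touching the head's code."""
    def __init__(self, backbone):
        self.backbone = backbone

    def forward(self, img):
        feats = self.backbone.extract(img)
        return feats.shape  # stand-in for the real detection head
```

Swapping `Backbone(64)` for `Backbone(256)` changes the feature map the head sees, which is exactly the substitution the survey credits with large accuracy gains.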


The global development and progress of scientific instrumentation and technology is the fundamental reason for the rapid increase in data volume. Owing to this advancement, several significant techniques have been introduced for image processing and object detection. The promising features and transfer-learning capability of the Convolutional Neural Network (CNN) have attracted much attention from researchers and the computer vision community, leading to several remarkable breakthroughs. This paper comprehensively reviews data classification, the history and architecture of CNNs, and well-known techniques together with their strengths and weaknesses. Finally, a discussion of applying CNNs to object detection for effective results, based on their critical analysis and performance, is presented.
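Transfer learning as described here (reusing fixed pretrained features and training only a new task head) can be demonstrated with a NumPy toy: a frozen random projection stands in for the pretrained conv stack, and only a logistic head is trained. All data, shapes and hyperparameters below are synthetic assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def frozen_features(x, w_frozen):
    """Stand-in for a pretrained conv stack: a fixed, non-trainable projection."""
    return np.tanh(x @ w_frozen)

# Toy data: two well-separated Gaussian blobs, 50 samples each.
x = np.vstack([rng.normal(-2, 1, (50, 4)), rng.normal(2, 1, (50, 4))])
y = np.array([0] * 50 + [1] * 50)

w_frozen = rng.standard_normal((4, 16))  # "pretrained" weights, never updated
w_head = np.zeros(16)                    # new task head, trained from scratch

feats = frozen_features(x, w_frozen)
for _ in range(500):  # gradient descent on the head only
    p = 1 / (1 + np.exp(-feats @ w_head))
    w_head -= 0.1 * feats.T @ (p - y) / len(y)

acc = np.mean((1 / (1 + np.exp(-feats @ w_head)) > 0.5) == (y == 1))
```

The frozen projection never changes during training, which is the whole point: only the cheap linear head adapts to the new task, just as fine-tuning a pretrained CNN often retrains only its final layers.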


Convolutional Neural Networks (CNNs) are a thriving area of deep learning. Nowadays, CNNs are used in the majority of object-recognition tasks, and in distinct application areas such as speech recognition, pattern recognition, computer vision, object detection and other image-processing applications. A CNN classifies data according to a probability value. In this work, an in-depth assessment of CNN structure and applications is established, and a comparative study of different CNN variants is also presented.
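The two operations behind these claims — convolution for feature extraction, and softmax for the probability value a CNN classifies with — reduce to a few lines of NumPy (a didactic sketch, not an optimized implementation):

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2D cross-correlation, the core operation of a CNN layer."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Each output cell is the kernel's dot product with one patch.
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def softmax(z):
    """Turn raw class scores into the probabilities a CNN classifies with."""
    e = np.exp(z - np.max(z))  # shift for numerical stability
    return e / e.sum()
```

A real CNN stacks many such convolutions (with learned kernels and nonlinearities) and ends with a softmax over the class scores; the class with the highest probability is the prediction.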


Author(s):  
Haoyu Dong ◽  
Shijie Liu ◽  
Shi Han ◽  
Zhouyu Fu ◽  
Dongmei Zhang

Spreadsheet table detection is the task of detecting all tables on a given sheet and locating their respective ranges. Automatic table detection is a key enabling technique and an initial step in spreadsheet data intelligence. However, the detection task is challenged by the diversity of table structures and layouts on spreadsheets. Considering the analogy between a spreadsheet's cell matrix and an image's pixel matrix, and encouraged by the successful application of Convolutional Neural Networks (CNNs) in computer vision, we have developed TableSense, a novel end-to-end framework for spreadsheet table detection. First, we devise an effective cell featurization scheme to better leverage the rich information in each cell; second, we develop an enhanced convolutional neural network model for table detection to meet the domain-specific requirement of precise table-boundary detection; third, we propose an effective uncertainty metric to guide an active-learning-based smart sampling algorithm, which enables the efficient build-up of a training dataset of 22,176 tables on 10,220 sheets with broad coverage of diverse table structures and layouts. Our evaluation shows that TableSense is highly effective, with 91.3% recall and 86.5% precision on the EoB-2 metric, a significant improvement over both the detection algorithms used in commodity spreadsheet tools and state-of-the-art convolutional neural networks from computer vision.
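The cell featurization idea — encoding each cell as a channel vector so the sheet becomes an image-like tensor — can be sketched as follows. The four features and the dict-based cell representation are simplified assumptions for illustration, not TableSense's actual feature set:

```python
def featurize_cell(cell):
    """Encode one spreadsheet cell as a binary feature vector.

    A simplified stand-in for the paper's cell featurization; the real
    scheme also covers formatting, formulas, borders, and more.
    """
    value = cell.get("value")
    return [
        1 if value not in (None, "") else 0,          # non-empty
        1 if isinstance(value, (int, float)) else 0,  # numeric
        1 if isinstance(value, str) else 0,           # text
        1 if cell.get("bold") else 0,                 # bold header hint
    ]

def featurize_sheet(grid):
    """Sheet -> H x W x C feature tensor, analogous to an image's pixel channels."""
    return [[featurize_cell(c) for c in row] for row in grid]
```

Once every cell is a fixed-length channel vector, the sheet can be fed to a convolutional detector exactly as an H x W x C image would be, which is the analogy the abstract builds on.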




Author(s):  
Muhammad Hanif Ahmad Nizar ◽  
Chow Khuen Chan ◽  
Azira Khalil ◽  
Ahmad Khairuddin Mohamed Yusof ◽  
Khin Wee Lai

Background: Valvular heart disease is a serious disease that leads to mortality and increases medical care costs. The aortic valve is the valve most commonly affected by this disease. Doctors rely on echocardiography for diagnosing and evaluating valvular heart disease; however, echocardiographic images are of poorer quality than Computed Tomography and Magnetic Resonance Imaging scans. This study proposes the development of Convolutional Neural Networks (CNNs) that can function optimally during a live echocardiographic examination for detection of the aortic valve. An automated detection system in an echocardiogram will improve the accuracy of medical diagnosis and can provide further medical analysis from the resulting detections. Methods: Two detection architectures, the Single Shot Multibox Detector (SSD) and the Faster Region-based Convolutional Neural Network (Faster R-CNN), with various feature extractors, were trained on echocardiography images from 33 patients. Thereafter, the models were tested on 10 echocardiography videos. Results: Faster R-CNN Inception v2 showed the highest accuracy (98.6%), followed closely by SSD Mobilenet v2. In terms of speed, SSD Mobilenet v2 suffered a loss of 46.81% in frames per second (FPS) during real-time detection but still performed better than the other neural network models. Additionally, SSD Mobilenet v2 used the least Graphics Processing Unit (GPU) resources, while Central Processing Unit (CPU) usage was relatively similar across all models. Conclusion: Our findings provide a foundation for implementing a convolutional detection system in echocardiography for medical purposes.
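The speed comparison reported here can be reproduced generically by timing a detector callable over a batch of frames; `detect` below is a placeholder for any model's inference function, not the study's code:

```python
import time

def measure_fps(detect, frames):
    """Average frames-per-second of a detector callable over a list of frames."""
    start = time.perf_counter()
    for frame in frames:
        detect(frame)  # run inference; the result is discarded for timing
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed if elapsed > 0 else float("inf")
```

Measuring both candidate models with the same frame list gives the kind of FPS trade-off the abstract reports for SSD Mobilenet v2 versus Faster R-CNN.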

