A Mature-Tomato Detection Algorithm Using Machine Learning and Color Analysis

Sensors ◽  
2019 ◽  
Vol 19 (9) ◽  
pp. 2023 ◽  
Author(s):  
Guoxu Liu ◽  
Shuyi Mao ◽  
Jae Ho Kim

An algorithm was proposed for automatic tomato detection in regular color images to reduce the influence of illumination and occlusion. In this method, the Histograms of Oriented Gradients (HOG) descriptor was used to train a Support Vector Machine (SVM) classifier. A coarse-to-fine scanning method was developed to detect tomatoes, followed by a proposed False Color Removal (FCR) method to remove false-positive detections. Non-Maximum Suppression (NMS) was used to merge overlapping results. Compared with other methods, the proposed algorithm showed substantial improvement in tomato detection. The results on the test images showed that the recall, precision, and F1 score of the proposed method were 90.00%, 94.41%, and 92.15%, respectively.
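The NMS merging step described above can be sketched as follows; the box format (x1, y1, x2, y2) and the IoU threshold are assumptions for illustration, not values from the paper:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def non_max_suppression(boxes, scores, iou_thresh=0.5):
    """Keep the highest-scoring box in each cluster of overlapping detections."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        # drop every remaining box that overlaps the kept one too strongly
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep
```

Detections of the same tomato from adjacent scan windows overlap heavily, so only the most confident one survives the threshold test.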

2014 ◽  
Vol 2014 ◽  
pp. 1-17 ◽  
Author(s):  
Chao Mi ◽  
Xin He ◽  
Haiwei Liu ◽  
Youfang Huang ◽  
Weijian Mi

With the development of port automation, most operational fields utilizing heavy equipment have gradually become unmanned. It is therefore imperative to monitor these fields in an effective, real-time manner. In this paper, a fast human-detection algorithm based on image processing is proposed. To speed up detection, an optimized histograms of oriented gradients (HOG) algorithm, which avoids the large number of duplicate calculations in the original HOG and ignores insignificant features, is used to describe the contour of the human body in real time. Based on the HOG features, a support vector machine (SVM) classifier combined with an AdaBoost classifier is trained on a sample set of scene images from a bulk port to detect humans. Finally, human-detection experiments at Tianjin Port show that the proposed optimized algorithm has roughly the same accuracy as the traditional algorithm while requiring only 1/7 of the computing time. The accuracy and computing time of the proposed fast human-detection algorithm were verified to meet the security requirements of unmanned port areas.
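One way the "avoid duplicate calculations" idea can be realized is to compute each cell's orientation histogram exactly once, so overlapping detection windows reuse the cached histograms instead of recomputing them. This minimal sketch assumes precomputed gradient magnitude and angle maps and standard HOG parameters (8-pixel cells, 9 unsigned-orientation bins); it is an illustration, not the paper's implementation:

```python
import numpy as np

def cell_histograms(magnitude, angle, cell=8, bins=9):
    """Compute one orientation histogram per cell, once; every overlapping
    detection window can then be assembled from these cached histograms."""
    h, w = magnitude.shape
    hists = np.zeros((h // cell, w // cell, bins))
    # unsigned gradients: map angle in degrees onto `bins` 20-degree bins
    bin_idx = ((angle % 180) / 180.0 * bins).astype(int)
    for cy in range(h // cell):
        for cx in range(w // cell):
            ys = slice(cy * cell, (cy + 1) * cell)
            xs = slice(cx * cell, (cx + 1) * cell)
            for b in range(bins):
                hists[cy, cx, b] = magnitude[ys, xs][bin_idx[ys, xs] == b].sum()
    return hists
```

A sliding window stepping by one cell then only concatenates and normalizes precomputed cell vectors, which is where the speedup over naive per-window HOG comes from.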


2012 ◽  
Vol 2012 ◽  
pp. 1-14 ◽  
Author(s):  
Hongyu Hu ◽  
Zhaowei Qu ◽  
Zhihui Li ◽  
Jinhui Hu ◽  
Fulu Wei

A fast pedestrian recognition algorithm based on multisensor fusion is presented in this paper. First, potential pedestrian locations are estimated by laser radar scanning in world coordinates, and their corresponding candidate regions in the image are then located through camera calibration and a perspective mapping model. To avoid the high training and recognition time costs caused by high-dimensional feature vectors, a region-of-interest-based integral histograms of oriented gradients (ROI-IHOG) feature extraction method is then proposed. A support vector machine (SVM) classifier is trained for online recognition on a novel pedestrian sample dataset adapted to the urban road environment. Finally, we test the validity of the proposed approach on several video sequences from realistic urban road scenarios. Reliable and real-time performance is demonstrated by our multisensor fusion method.
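The integral-histogram idea behind ROI-IHOG can be sketched as follows: per-bin integral images are built once, after which the histogram of any rectangular region is obtained in constant time per bin, regardless of the region's area. This is a generic illustration, not the authors' code:

```python
import numpy as np

def integral_histogram(bin_map, bins):
    """Per-bin integral images: ih[y, x, b] counts bin b in bin_map[:y, :x]."""
    h, w = bin_map.shape
    ih = np.zeros((h + 1, w + 1, bins), dtype=np.int64)
    for b in range(bins):
        ih[1:, 1:, b] = np.cumsum(np.cumsum(bin_map == b, axis=0), axis=1)
    return ih

def region_histogram(ih, y1, x1, y2, x2):
    """Histogram of the rectangle [y1:y2, x1:x2] from four corner lookups."""
    return ih[y2, x2] - ih[y1, x2] - ih[y2, x1] + ih[y1, x1]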


2020 ◽  
Vol 39 (4) ◽  
pp. 5725-5736
Author(s):  
Jiang Min

To address the shortcomings of traditional target detection and tracking algorithms in accurately detecting targets across different scenarios, and building on the current state of target detection and tracking research at home and abroad, this paper proposes a neural-network-based target detection and tracking method and applies it to an athlete training model. Based on the Alex-Net network structure, this paper designs three convolutional layers and two fully connected layers. The output of the last layer is used as the input of an SVM classifier, which produces the target classification result. In addition, this paper adds an SPP layer between the convolutional layers and the fully connected layers, so that feature maps of the same dimension are obtained before the fully connected layer for input images of different sizes. The results show that the proposed method achieves a useful recognition effect and can be applied to athlete training.
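A spatial pyramid pooling (SPP) layer of the kind described can be sketched as follows: max-pooling the final feature map over a few fixed grids yields a vector of constant length regardless of the input image size. The pyramid levels (1x1, 2x2, 4x4) are an assumption, not necessarily the paper's configuration:

```python
import numpy as np

def spp(feature_map, levels=(1, 2, 4)):
    """Max-pool a (channels, h, w) map over each n-by-n grid in `levels`
    and concatenate, giving a fixed-length vector for any h and w."""
    c, h, w = feature_map.shape
    out = []
    for n in levels:
        for i in range(n):
            for j in range(n):
                # integer grid boundaries; max() guards tiny cells
                ys = slice(i * h // n, max(i * h // n + 1, (i + 1) * h // n))
                xs = slice(j * w // n, max(j * w // n + 1, (j + 1) * w // n))
                out.append(feature_map[:, ys, xs].max(axis=(1, 2)))
    return np.concatenate(out)
```

With 3 channels the output is always 3 × (1 + 4 + 16) = 63 values, which is exactly the property that lets a fixed-size SVM input follow variable-size images.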


2017 ◽  
Vol 3 (2) ◽  
pp. 191-194 ◽  
Author(s):  
Tamer Abdulbaki Alshirbaji ◽  
Nour Aldeen Jalal ◽  
Lars Mündermann ◽  
Knut Möller

Smoke in laparoscopic videos usually appears due to the use of electrocautery when cutting or coagulating tissues. Detecting smoke can therefore be used for event-based annotation in laparoscopic surgeries by retrieving the events associated with electrocauterization. Furthermore, smoke detection can also be used for automatic smoke removal. However, detecting smoke in laparoscopic video is challenging because of the changeability of smoke patterns, the moving camera, and the varying lighting conditions. In this paper, we present a video-based smoke detection algorithm that detects smoke of different densities, such as fog, low, and high density, in laparoscopic videos. The proposed method depends on extracting various visual features from the laparoscopic images and providing them to a support vector machine (SVM) classifier. The features are based on the motion, colour, and texture patterns of the smoke. We validated our algorithm through experimental evaluation on four laparoscopic cholecystectomy videos, each manually annotated frame-by-frame as smoke or non-smoke. The algorithm was applied to the videos using different feature combinations for classification. Experimental results show that the combination of all proposed features gives the best classification performance. The overall accuracy (i.e. correctly classified frames) is around 84%, with a sensitivity (i.e. correctly detected smoke frames) of 89% and a specificity (i.e. correctly detected non-smoke frames) of 80%.
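As a rough illustration of the kind of per-frame motion, colour, and texture cues involved (the actual features in the paper are more elaborate, so this is a toy stand-in), one might compute:

```python
import numpy as np

def frame_features(frame_rgb, prev_gray):
    """Toy per-frame feature vector: smoke tends to desaturate colour,
    blur texture, and add motion energy. The feature choice is illustrative."""
    rgb = frame_rgb.astype(float)
    gray = rgb.mean(axis=2)
    sat = rgb.max(axis=2) - rgb.min(axis=2)         # crude saturation proxy
    motion = np.abs(gray - prev_gray).mean()        # frame-difference energy
    texture = np.abs(np.diff(gray, axis=1)).mean()  # local contrast proxy
    return np.array([sat.mean(), motion, texture]), gray
```

Each frame's vector (low saturation, raised motion, lowered contrast suggesting smoke) would then be scored by the trained SVM.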


2021 ◽  
Author(s):  
Muhammad Zubair

Traditionally, the heart sound classification process is performed by first finding the elementary heart sounds of the phonocardiogram (PCG) signal. After detecting sounds S1 and S2, features of these sounds such as envelograms, Mel frequency cepstral coefficients (MFCC), and kurtosis are extracted. These features are used for the classification of normal and abnormal heart sounds, which leads to an increase in computational complexity. In this paper, we have proposed a fully automated algorithm to localize heart sounds using K-means clustering. The K-means clustering model can differentiate between the primitive heart sounds like S1, S2, S3, and S4 and the rest of the insignificant sounds like murmurs without requiring excessive pre-processing of data. The peaks detected from the noisy data are validated by implementing five classification models with 30-fold cross-validation. These models have been implemented on the publicly available PhysioNet/CinC Challenge 2016 database. Lastly, to classify between normal and abnormal heart sounds, the localized labelled peaks from all the datasets were fed as input to various classifiers such as support vector machine (SVM), K-nearest neighbours (KNN), logistic regression, stochastic gradient descent (SGD), and multi-layer perceptron (MLP). To validate the superiority of the proposed work, we have compared our reported metrics with the latest state-of-the-art works. Simulation results show that the highest classification accuracy of 94.75% is achieved by the SVM classifier among all other classifiers.
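The K-means localization idea can be sketched as a 1-D clustering of detected peak amplitudes, separating prominent S1/S2-type peaks from low-amplitude murmur peaks; this toy implementation is an illustration, not the paper's pipeline:

```python
import numpy as np

def kmeans_1d(values, k=2, iters=50):
    """Plain 1-D k-means: cluster peak amplitudes so that prominent
    heart-sound peaks separate from low-amplitude (murmur-like) peaks."""
    centers = np.linspace(values.min(), values.max(), k)
    for _ in range(iters):
        # assign each value to its nearest center, then recompute centers
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers
```

Peaks landing in the high-amplitude cluster would be kept as candidate S1/S2 locations and passed on to the downstream classifiers.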


Author(s):  
Lakshmi Sarvani Videla ◽  
M. Ashok Kumar P

The detection of person fatigue is one of the important tasks for detecting drowsiness in the domain of image processing. Though much work has been carried out in this regard, little of it demonstrates exact correctness. In this chapter, the main objective is to present an efficient approach that combines eye-state detection and yawn detection in unconstrained environments. In the first proposed method, the face region and then the eyes and mouth are detected. Histograms of Oriented Gradients (HOG) features are extracted from the detected eyes and fed to a Support Vector Machine (SVM) classifier that classifies the eye state as closed or not closed. The distance between intensity changes in the mouth map is used to detect a yawn. In the second proposed method, off-the-shelf face detectors and facial landmark detectors are used to detect the features, and a novel eye and mouth metric is proposed. In both methods, the eye results are checked for consistency with the yawn detection results; if either result indicates fatigue, the outcome is considered fatigue. The second proposed method outperforms the first on two standard datasets.
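For the second method's landmark-based eye metric, a standard choice in this spirit (not necessarily the chapter's novel metric) is the eye aspect ratio (EAR) computed from six eye landmarks:

```python
import math

def eye_aspect_ratio(eye):
    """EAR from six (x, y) eye landmarks p1..p6; it drops toward 0 as the
    eye closes. The landmark ordering follows the common 68-point
    convention (an assumption)."""
    d = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    p1, p2, p3, p4, p5, p6 = eye
    # vertical eyelid distances over the horizontal eye width
    return (d(p2, p6) + d(p3, p5)) / (2.0 * d(p1, p4))
```

Thresholding EAR over a few consecutive frames distinguishes a blink from a sustained closure, which is the fatigue signal of interest.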




2021 ◽  
Vol 8 (2) ◽  
pp. 8-14
Author(s):  
Julkar Nine ◽  
Aarti Kishor Anapunje

Vehicle detection is one of the primary challenges of modern driver-assistance systems owing to numerous factors: complicated surroundings, diverse types of vehicles with varied appearance and size, low-resolution videos, and fast-moving vehicles. It is utilized for multitudinous applications, including traffic surveillance and collision prevention. This paper suggests a vehicle detection algorithm built on image processing and machine learning. The presented algorithm is predicated on a Support Vector Machine (SVM) classifier that employs feature vectors extracted via the Histogram of Oriented Gradients (HOG) approach, conducted on a semi-real-time basis. A comparison study is presented stating the performance metrics of the algorithm on different datasets.
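The scan underlying such a detector is typically a sliding window, each window being described by HOG features and scored by the trained SVM. The window size and stride below are assumptions for illustration:

```python
def sliding_windows(img_h, img_w, win=(64, 64), step=16):
    """Yield (y1, x1, y2, x2) windows covering the image; each window
    would be HOG-described and scored by the trained SVM classifier."""
    for y in range(0, img_h - win[0] + 1, step):
        for x in range(0, img_w - win[1] + 1, step):
            yield y, x, y + win[0], x + win[1]
```

In practice the scan is repeated at several image scales so that both near (large) and distant (small) vehicles fit the fixed window size.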


2020 ◽  
Author(s):  
Faisal Hussain ◽  
Muhammad Basit Umair ◽  
Muhammad Ehatisham-ul-Haq ◽  
Ivan Miguel Pires ◽  
Tânia Valente ◽  
...  

Falling is a commonly occurring mishap among elderly people, which may cause serious injuries. Rapid fall detection is therefore very important in order to mitigate the severe effects of falls among the elderly. Many accelerometer-based fall monitoring systems have been proposed for fall detection. However, many of them mistakenly identify daily-life activities as falls, or falls as daily-life activities. To this aim, an efficient machine learning-based fall detection algorithm is proposed in this paper. The proposed algorithm detects falls with high sensitivity, specificity, and accuracy compared to the state-of-the-art techniques. A publicly available dataset with a very simple and computationally efficient set of features is used to accurately detect fall incidents. The proposed algorithm reports an accuracy of 99.98% with the Support Vector Machine (SVM) classifier.
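A common accelerometer feature for this task is the signal magnitude vector (SMV); a free-fall dip followed by an impact spike is the classic fall signature. The thresholds below are illustrative, not the paper's values:

```python
import numpy as np

def signal_magnitude_vector(ax, ay, az):
    """SMV per sample: the orientation-free magnitude of acceleration."""
    return np.sqrt(ax**2 + ay**2 + az**2)

def simple_fall_flag(smv, g=9.81, impact=2.5, freefall=0.4):
    """Flag a window if a sub-free-fall sample precedes an impact spike.
    `impact` and `freefall` are hypothetical multiples of g."""
    low = np.where(smv < freefall * g)[0]
    high = np.where(smv > impact * g)[0]
    return bool(len(low) and len(high) and low[0] < high[0])
```

A learned classifier such as the SVM in the paper replaces these hand-set thresholds, which is what reduces confusion with vigorous daily-life activities.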


Symmetry ◽  
2018 ◽  
Vol 10 (12) ◽  
pp. 707 ◽  
Author(s):  
Yongchao Song ◽  
Yongfeng Ju ◽  
Kai Du ◽  
Weiyu Liu ◽  
Jiacheng Song

Shadows versus normal illumination and road versus non-road areas are two pairs of contradictory symmetrical elements. To achieve accurate road detection, it is necessary to remove interference caused by uneven illumination, such as shadows. This paper proposes a road detection algorithm based on learning and an illumination-independent image to solve the following problems: first, most road detection methods are sensitive to variations in illumination; second, with traditional road detection methods based on illumination invariance, it is difficult to determine the calibration angle of the camera axis, and the sampling of road samples can be distorted. The proposed method contains three stages: the establishment of a classifier, the online capturing of an illumination-independent image, and the road detection. During the establishment of the classifier, a support vector machine (SVM) classifier for road blocks is generated through training with a multi-feature fusion method. During the online capturing of an illumination-independent image, the road region of interest is obtained by using a cascaded Hough transform parameterized by a parallel coordinate system. Five road blocks are obtained through the SVM classifier, and the RGB (Red, Green, Blue) space of the combined road blocks is converted to a geometric-mean log chromaticity space. Next, the camera-axis calibration angle for each frame is determined according to the Shannon entropy, so that the illumination-independent image of the respective frame is obtained. During the road detection, road sample points are extracted with a random sampling method, and a confidence interval classifier that separates the road from its background is established. This work is based on public datasets and video sequences that record roads of Chinese cities, suburbs, and schools in different traffic scenes.
The authors compare the method proposed in this paper with other established video-based road detection methods; the results show that the proposed method achieves the desired detection result with high quality and robustness. Meanwhile, the whole detection system meets the real-time processing requirement.
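The geometric-mean log-chromaticity conversion and the entropy-based choice of projection angle can be sketched as follows; the histogram bin count and angle resolution are assumptions:

```python
import numpy as np

def log_chromaticity(rgb):
    """Geometric-mean log-chromaticity: each pixel's log(channel / geometric
    mean). The three coordinates sum to zero, so they live on a 2-D plane."""
    rgb = np.clip(rgb.reshape(-1, 3).astype(float), 1e-3, None)
    gm = rgb.prod(axis=1) ** (1.0 / 3.0)
    chi = np.log(rgb / gm[:, None])
    # orthonormal basis of the zero-sum plane
    u = np.array([[1, -1, 0], [1, 1, -2]]) / np.sqrt([2.0, 6.0])[:, None]
    return chi @ u.T

def best_angle(chi2, n_angles=180, bins=64):
    """Pick the 1-D projection angle whose histogram has minimum Shannon
    entropy; projecting along the illumination direction collapses shadow
    and lit pixels of the same surface together."""
    best, best_h = 0.0, np.inf
    for t in np.linspace(0, np.pi, n_angles, endpoint=False):
        proj = chi2 @ np.array([np.cos(t), np.sin(t)])
        p, _ = np.histogram(proj, bins=bins)
        p = p[p > 0] / p.sum()
        h = -(p * np.log(p)).sum()
        if h < best_h:
            best, best_h = t, h
    return best
```

In the paper this angle search is driven per frame by the SVM-selected road blocks; the sketch above runs it on an arbitrary pixel set.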

