Acceptability of Artificial Intelligence in Poultry Processing and Classification Efficiencies of Different Classification Models in the Categorisation of Breast Fillet Myopathies

2021 ◽  
Vol 12 ◽  
Author(s):  
Aftab Siddique ◽  
Samira Shirzaei ◽  
Alice E. Smith ◽  
Jaroslav Valenta ◽  
Laura J. Garner ◽  
...  

Breast meat from modern fast-growing big birds is affected by myopathies such as woody breast (WB), white striping, and spaghetti meat (SM). The detection and separation of myopathy-affected meat can be carried out at processing plants using technologies such as bioelectrical impedance analysis (BIA). However, BIA raw data from myopathy-affected breast meat are extremely complicated, especially because of the overlap of these myopathies in individual breast fillets and the human error associated with the assignment of fillet categories. Previous research has shown that traditional statistical techniques such as ANOVA and regression, among others, are insufficient for categorising fillets affected by myopathies from BIA data. Therefore, more complex data analysis tools, such as support vector machines (SVMs) and backpropagation neural networks (BPNNs), can be used to classify raw poultry breast myopathies from their BIA patterns, so that the technology can benefit the poultry industry in detecting myopathies. Freshly deboned (3–3.5 h post slaughter) breast fillets (n = 100 × 3 flocks) were analysed by hand palpation for WB (0-normal; 1-mild; 2-moderate; 3-severe) and SM (presence or absence) categorisation. BIA data (resistance and reactance) were collected on each breast fillet; the equipment's algorithm calculated a protein and fat index. The data were analysed by linear discriminant analysis (LDA), and by SVM and BPNN with a 70:30 training:test data split. Compared with the LDA analysis, SVM separated WB with higher accuracies of 71.04% for normal (data for normal and mild merged), 59.99% for moderate, and 81.48% for severe WB. Compared with SVM, the BPNN training model accurately (100%) separated normal WB fillets with and without SM, demonstrating the ability of BIA to detect SM. Supervised learning algorithms such as SVM and BPNN can be combined with BIA and successfully implemented in poultry processing to detect breast fillet myopathies.
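
The class-wise percentages quoted above (71.04% for normal, 59.99% for moderate, 81.48% for severe) are per-class accuracies, i.e. the fraction of each true category predicted correctly. A minimal sketch of that computation, with invented labels rather than the study's data:

```python
# Hypothetical per-class accuracy (recall) computation for WB categories.
# The labels and predictions below are illustrative, not the study's data.

def per_class_accuracy(y_true, y_pred, labels):
    """Fraction of samples of each true class that were predicted correctly."""
    acc = {}
    for lab in labels:
        idx = [i for i, t in enumerate(y_true) if t == lab]
        correct = sum(1 for i in idx if y_pred[i] == lab)
        acc[lab] = correct / len(idx) if idx else 0.0
    return acc

y_true = ["normal", "normal", "moderate", "severe", "moderate", "severe"]
y_pred = ["normal", "moderate", "moderate", "severe", "normal", "severe"]
print(per_class_accuracy(y_true, y_pred, ["normal", "moderate", "severe"]))
# → {'normal': 0.5, 'moderate': 0.5, 'severe': 1.0}
```

In a real pipeline the predictions would come from the trained SVM or BPNN evaluated on the held-out 30% test split.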

Signals ◽  
2020 ◽  
Vol 1 (2) ◽  
pp. 188-208
Author(s):  
Mert Sevil ◽  
Mudassir Rashid ◽  
Mohammad Reza Askari ◽  
Zacharie Maloney ◽  
Iman Hajizadeh ◽  
...  

Wearable devices continuously measure multiple physiological variables to inform users of health and behavior indicators. The computed health indicators must rely on informative signals obtained by processing the raw physiological variables with powerful noise- and artifact-filtering algorithms. In this study, we aimed to elucidate the effects of signal processing techniques on the accuracy of detecting and discriminating physical activity (PA) and acute psychological stress (APS) using physiological measurements (blood volume pulse, heart rate, skin temperature, galvanic skin response, and accelerometer) collected from a wristband. Data from 207 experiments involving 24 subjects were used to develop signal processing, feature extraction, and machine learning (ML) algorithms that can detect and discriminate PA and APS when they occur individually or concurrently, classify different types of PA and APS, and estimate energy expenditure (EE). Training data were used to generate feature variables from the physiological variables and develop ML models (naïve Bayes, decision tree, k-nearest neighbor, linear discriminant, ensemble learning, and support vector machine). Results from an independent labeled testing data set demonstrate that PA was detected and classified with an accuracy of 99.3%, and APS was detected and classified with an accuracy of 92.7%, whereas the simultaneous occurrence of both PA and APS was detected and classified with an accuracy of 89.9% (relative to actual class labels), and EE was estimated with a low mean absolute error of 0.02 metabolic equivalent of task (MET). The data filtering and adaptive noise cancellation techniques used to mitigate the effects of noise and artifacts on the classification results increased the detection and discrimination accuracy by 0.7% and 3.0% for PA and APS, respectively, and by 18% for EE estimation.
The results demonstrate that the physiological measurements from wristband devices are susceptible to noise and artifacts, and elucidate the effects of signal processing and feature extraction on the accuracy of detection, classification, and estimation of PA and APS.
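
As a toy illustration of the filtering step that precedes feature extraction, the sketch below applies a simple moving-average smoother to a noisy 1-D signal. It is a stand-in for the far more sophisticated adaptive noise cancellation the study uses; the window size and signal values are invented:

```python
# Minimal moving-average filter as an illustrative stand-in for the
# noise- and artifact-filtering applied to raw wristband signals.

def moving_average(signal, window):
    """Smooth a 1-D signal: each output is the mean of the last `window` samples."""
    out = []
    for i in range(len(signal)):
        start = max(0, i - window + 1)
        chunk = signal[start:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

noisy = [1.0, 9.0, 1.0, 9.0, 1.0, 9.0]  # alternating spikes (pure noise)
print(moving_average(noisy, 2))
# → [1.0, 5.0, 5.0, 5.0, 5.0, 5.0]
```

Note how the alternating spikes collapse to a near-constant level, which is the kind of stabilized signal the downstream classifiers would consume.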


Kybernetes ◽  
2019 ◽  
Vol 49 (10) ◽  
pp. 2547-2567 ◽  
Author(s):  
Himanshu Sharma ◽  
Anu G. Aggarwal

Purpose The experiential nature of travel and tourism services has popularized the importance of electronic word-of-mouth (EWOM) among potential customers. EWOM has a significant influence on customers' hotel booking intentions, as they tend to trust EWOM more than messages spread by marketers. Amid the abundance of reviews available online, it becomes difficult for travelers to identify the most significant ones. This calls into question the credibility of reviewers, as many online businesses allow reviewers to post feedback under a nickname or email address rather than a real name, photo, or other personal information. Therefore, this study aims to determine the factors leading to reviewer credibility. Design/methodology/approach The paper proposes an econometric model to determine the variables that affect a reviewer's credibility in the hospitality and tourism sector. The proposed model uses quantifiable variables of reviewers and reviews to estimate reviewer credibility, defined as the ratio of the number of helpful votes a reviewer receives to the total number of reviews they have written. This covers both aspects of source credibility, i.e. trustworthiness and expertness. The authors used a data set from TripAdvisor.com to validate the models. Findings Regression analysis significantly validated the econometric models proposed here. To check the predictive efficiency of the models, predictive modeling was performed using five commonly used classifiers: random forest (RF), linear discriminant analysis, k-nearest neighbor, decision tree, and support vector machine. RF gave the best accuracy for the overall model. Practical implications The findings of this research paper suggest various implications for hoteliers and managers to help retain credible reviewers in the online travel community. This will help them achieve long-term relationships with clients and increase their trust in the brand.
Originality/value To the best of the authors' knowledge, this study is the first to apply an econometric modeling approach to find the determinants of reviewer credibility. Moreover, the study differs from earlier works by treating reviewer credibility as an endogenous variable rather than an exogenous one.
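
The credibility measure defined above is a simple ratio. A minimal sketch, with the zero-review edge case handled explicitly (the field names are assumptions, not TripAdvisor's schema):

```python
# Reviewer credibility as defined in the abstract: helpful votes received
# divided by total reviews written. Inputs are illustrative.

def reviewer_credibility(helpful_votes, total_reviews):
    """Return the helpful-votes-per-review ratio; 0.0 for a reviewer with no reviews."""
    if total_reviews == 0:
        return 0.0
    return helpful_votes / total_reviews

print(reviewer_credibility(helpful_votes=30, total_reviews=40))  # → 0.75
```

In the paper this ratio is the dependent variable of the econometric model, with reviewer- and review-level quantities as regressors.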


Author(s):  
Clyde Coelho ◽  
Aditi Chattopadhyay

This paper proposes a computationally efficient methodology for classifying damage in structural hotspots. Data collected from a sensor-instrumented lug joint subjected to fatigue loading were preprocessed using linear discriminant analysis (LDA) to extract features relevant for classification and to reduce the dimensionality of the data. The data are then reduced in the feature space by analyzing the structure of the mapped clusters and removing the data points that do not affect the construction of the interclass separating hyperplanes. The reduced data set is used to train a support vector machine (SVM)-based classifier, and the classification results are compared to those obtained when the entire data set is used for training. To further improve the efficiency of the classification scheme, the SVM classifiers are arranged in a binary tree to reduce the number of comparisons required. The experimental results show that the data reduction does not diminish the classifier's ability to distinguish between classes while providing a nearly fourfold decrease in the amount of training data processed.
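
The pruning idea rests on the fact that points deep inside a class cluster rarely become support vectors. A hedged sketch of one such heuristic (not the authors' exact procedure): keep only points whose squared distance to the nearest opposite-class point falls under a margin, discarding the interior of each cluster.

```python
# Illustrative training-set reduction: retain points close to the opposite
# class, since only those can influence the separating hyperplane.
# Data, labels, and the margin are toy values.

def near_boundary(points, labels, margin2):
    """Indices of points whose squared distance to the nearest
    opposite-class point is at most margin2."""
    keep = []
    for i, p in enumerate(points):
        d_opp = min(
            sum((a - b) ** 2 for a, b in zip(p, q))
            for j, q in enumerate(points) if labels[j] != labels[i]
        )
        if d_opp <= margin2:
            keep.append(i)
    return keep

pts = [(0.0, 0.0), (0.1, 0.0), (1.0, 0.0), (1.1, 0.0)]
labs = [0, 0, 1, 1]
print(near_boundary(pts, labs, 0.9))  # → [1, 2], the two inner boundary points
```

The SVM would then be trained on the kept subset only, which is the source of the roughly fourfold reduction reported.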


2018 ◽  
Vol 57 (05/06) ◽  
pp. 272-279 ◽  
Author(s):  
Isadora Cardoso ◽  
Eliana Almeida ◽  
Hector Allende-Cid ◽  
Alejandro Frery ◽  
Rangaraj Rangayyan ◽  
...  

Background Diffuse lung diseases (DLDs) are a diverse group of pulmonary disorders, characterized by inflammation of lung tissue, which may lead to permanent loss of the ability to breathe and to death. Distinguishing among these diseases is challenging for physicians due to their wide variety and unknown causes. Computer-aided diagnosis (CAD) is a useful approach to improve diagnostic accuracy, by combining information provided by experts with Machine Learning (ML) methods. Objectives Exploring the potential of dimensionality reduction combined with ML methods for the diagnosis of DLDs; improving the classification accuracy over state-of-the-art methods. Methods A data set composed of 3252 regions of interest (ROIs) was used, from which 28 features were extracted per ROI. We used Principal Component Analysis, Linear Discriminant Analysis, and Stepwise Selection (Forward, Backward, and Forward-Backward) to reduce feature dimensionality. The feature subsets obtained were used as input to the following ML methods: Support Vector Machine, Gaussian Mixture Model, k-Nearest Neighbor, and Deep Feedforward Neural Network (DFNN). We also applied a Deep Convolutional Neural Network directly to the ROIs. Results We achieved the maximum reduction, from 28 to 5 dimensions, using LDA. The best classification results were obtained by the DFNN, with 99.60% overall accuracy. Conclusions This work contributes to the analysis and selection of features that can efficiently characterize the DLDs studied.
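
One of the stepwise-selection variants used above, forward selection, can be sketched in a few lines: greedily add whichever feature most improves a scoring function until the budget is reached. The scoring function here is a toy stand-in; in the study it would be cross-validated classifier accuracy.

```python
# Illustrative greedy forward feature selection. Feature names and the
# scoring function are invented for the example.

def forward_select(features, score, k):
    """Greedily add the feature that most improves score, up to k features."""
    chosen = []
    remaining = list(features)
    while remaining and len(chosen) < k:
        best = max(remaining, key=lambda f: score(chosen + [f]))
        if score(chosen + [best]) <= score(chosen):
            break  # no feature improves the score any further
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Toy score: rewards a known "useful" subset additively.
useful = {"f1": 0.5, "f2": 0.3, "f3": 0.1}
score = lambda subset: sum(useful.get(f, 0.0) for f in subset)
print(forward_select(["f1", "f2", "f3", "f4"], score, 2))  # → ['f1', 'f2']
```

Backward elimination is the mirror image (start with all 28 features, greedily drop the least useful), and forward-backward alternates the two moves.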


2018 ◽  
Vol 10 (2) ◽  
pp. 36 ◽  
Author(s):  
Michael James Kangas ◽  
Christina L Wilson ◽  
Raychelle M Burks ◽  
Jordyn Atwater ◽  
Rachel M Lukowicz ◽  
...  

Colorimetric sensor arrays incorporating red, green, and blue (RGB) image analysis use value changes from multiple sensors for the identification and quantification of various analytes. RGB data can be easily obtained using image analysis software such as ImageJ. Subsequent chemometric analysis is becoming a key component of colorimetric array RGB data analysis, though the literature contains mainly principal component analysis (PCA) and hierarchical cluster analysis (HCA). Seeking to expand the chemometric methods toolkit for array analysis, we compared the performance of nine chemometric methods on the task of classifying 631 solutions (0.1 to 3 M) of acetic acid, malonic acid, lysine, and ammonia using an eight-sensor colorimetric array. PCA and linear discriminant analysis (LDA) were effective for visualizing the dataset. For classification, LDA, k-nearest neighbors (KNN), soft independent modelling by class analogy (SIMCA), recursive partitioning and regression trees (RPART), and hit quality index (HQI) were very effective, with each method classifying compounds with over 90% correct assignments. Support vector machines (SVM) and partial least squares discriminant analysis (PLS-DA) struggled, with ~85% and 39% correct assignments, respectively. Additional mathematical treatments of the data set, such as incrementally increasing the exponents, did not improve the performance of LDA and KNN. Literature precedence indicates that the most common methods for analyzing colorimetric arrays are PCA, LDA, HCA, and KNN. To our knowledge, this is the first report comparing and contrasting several more diverse chemometric methods on the same colorimetric array data.
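
The raw input to all nine methods is a feature vector built from the per-sensor RGB value changes (before vs. after analyte exposure). A minimal sketch of that feature construction, with invented sensor readings:

```python
# Building a feature vector from a colorimetric sensor array:
# concatenate the (delta R, delta G, delta B) of every sensor spot.
# Readings are illustrative, not from the study.

def array_features(before, after):
    """before/after: list of (R, G, B) tuples per sensor; returns flat delta vector."""
    feats = []
    for (r0, g0, b0), (r1, g1, b1) in zip(before, after):
        feats.extend((r1 - r0, g1 - g0, b1 - b0))
    return feats

before = [(120, 200, 80), (90, 90, 90)]   # two sensors, pre-exposure
after = [(100, 210, 80), (95, 60, 120)]   # same sensors, post-exposure
print(array_features(before, after))  # → [-20, 10, 0, 5, -30, 30]
```

For the eight-sensor array in the study this yields a 24-dimensional vector per solution, which the chemometric methods then classify.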


2006 ◽  
Vol 15 (03) ◽  
pp. 397-410 ◽  
Author(s):  
MANNES POEL ◽  
TACO EKKEL

Based on the hypothesis that the sound of an infant's cry contains information on the infant's health status, research has been done on how to improve the classification of neonate crying sounds into categories called 'normal' and 'abnormal', the latter referring to some hypoxia-related disorder. Research in this field is hindered by a lack of test cases and limited understanding of feature relevance. The research described here combines various ways of dealing with the small data set problem. First, feature pre-selection is done using sequential backward elimination of possible combinations, where the performance of a set of features is tested by a Probabilistic Neural Network, which has the advantage of fast learning. Using these features, a neural network committee, consisting of Radial Basis Function Neural Networks, was trained on the data using bootstrapping. This construction yields a multi-classifier system with an overall classification performance of 85% on the so-called "All Cry Units" (ACU) data set, an increase of 34% with respect to the a priori probability of 51%. Several leave-one-out experiments for Linear Discriminant Analysis (LDA), Support Vector Machines (SVM), and Neural Networks (NN) were conducted to compare the performance of the multi-classifier system.
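
The bootstrap-committee construction can be sketched compactly: resample the training set with replacement, train one base learner per resample, and combine predictions by majority vote. The 1-nearest-neighbour base learner below is an illustrative stand-in for the paper's RBF networks, and the data are toy values:

```python
# Minimal bootstrap committee with majority voting. The 1-NN base learner
# and the data are illustrative substitutes for the paper's RBF networks.
import random

def bootstrap(data, rng):
    """Resample the data set with replacement (same size as the original)."""
    return [rng.choice(data) for _ in data]

def nn1(train, x):
    """1-nearest-neighbour prediction; train is a list of (value, label)."""
    return min(train, key=lambda t: abs(t[0] - x))[1]

def committee_predict(data, x, n_members, seed):
    """Train n_members base learners on bootstrap resamples; majority-vote."""
    rng = random.Random(seed)
    votes = [nn1(bootstrap(data, rng), x) for _ in range(n_members)]
    return max(set(votes), key=votes.count)

data = [(0.0, "normal"), (0.2, "normal"), (1.0, "abnormal"), (1.2, "abnormal")]
print(committee_predict(data, 0.1, n_members=5, seed=42))
```

Averaging over resamples in this way reduces the variance of the individual learners, which is precisely what makes bootstrapping attractive for a small data set.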


2005 ◽  
Vol 57 (3-4) ◽  
pp. 219-238
Author(s):  
Tapabrata Maiti ◽  
Pushpal Mukhopadhyay

Prostate cancer is one of the most common cancers in American men. Management of prostate cancer depends on its stage, because only cancers that are confined to the organ of origin are potentially curable by radical prostatectomy. In this article we consider different statistical methods to predict the probabilities of non-organ-confined prostate cancer based on its clinical stage. Modern computer-intensive methods such as bagging, neural networks, and support vector machines are compared to more classical methods such as linear, quadratic, and logistic discrimination, and to less computer-intensive nonparametric methods such as smoothing splines and classification trees. All these methods are applied to a dataset from a recent prostate cancer study. We present sensitivity, specificity, positive predictive value, negative predictive value, and overall accuracy for each of the methods. The study shows that linear discriminant analysis and support vector machines perform better than the other methods for this data set, at least from a predictive viewpoint.
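
The five reported metrics all derive from the same 2x2 confusion table. A minimal sketch, with illustrative counts (not the study's data):

```python
# The screening metrics reported in the study, computed from true/false
# positive and negative counts. The counts below are invented.

def screening_metrics(tp, fp, fn, tn):
    """Standard diagnostic metrics from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),          # true-positive rate
        "specificity": tn / (tn + fp),          # true-negative rate
        "ppv": tp / (tp + fp),                  # positive predictive value
        "npv": tn / (tn + fn),                  # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

print(screening_metrics(tp=40, fp=10, fn=10, tn=40))
```

Note that sensitivity and specificity depend only on the classifier, whereas PPV and NPV also shift with the prevalence of non-organ-confined disease in the cohort.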


Author(s):  
Shima Zarei

Face recognition is one of the most important tasks in image processing. It is important because it is used for various real-world purposes, such as criminal identification or detecting fraud in passport and visa checks at airports. Facebook is a well-known example of a face recognition application: it sends notifications to a user's friends when they are recognized in images the user has uploaded. To solve the face recognition problem, different methods have been introduced, such as Principal Component Analysis (PCA), Independent Component Analysis (ICA), Support Vector Machine (SVM), Linear Discriminant Analysis (LDA), and Hidden Markov Models (HMM), which are explained and analyzed. Algorithms such as Eigenface, Fisherface, and Local Binary Pattern Histogram (LBPH), which are among the simplest and most accurate methods, are implemented in this project on the AT&T dataset to recognize the face most similar to the other faces in the data set. To this end, these algorithms are explained and the advantages and disadvantages of each are analyzed. The best method is then selected by comparing the results of face reconstruction by the Eigenface, Fisherface, and LBPH methods; in this project, the Eigenface method gave the best result. It should be noted that color map methods are used when implementing the face recognition algorithms to distinguish facial features more precisely. In this work, a rainbow color map is used in the Eigenface algorithm and an HSV color map in the Fisherface algorithm, and the results show that the HSV color map is more accurate than the rainbow color map.
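
The recognition step shared by Eigenface-style methods is a nearest-neighbour match: each face is projected to a low-dimensional feature vector, and a query is assigned the label of the closest stored face. A hedged sketch of that final step, with the projection omitted and toy two-dimensional vectors standing in for eigenface coefficients:

```python
# Nearest-neighbour matching in a (here pre-computed, toy) feature space,
# the final step of Eigenface/Fisherface/LBPH-style recognition.

def nearest_face(gallery, query):
    """gallery: list of (feature_vector, label); returns label of the closest entry."""
    def dist2(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    return min(gallery, key=lambda g: dist2(g[0], query))[1]

gallery = [([0.9, 0.1], "subject_1"), ([0.1, 0.9], "subject_2")]
print(nearest_face(gallery, [0.8, 0.2]))  # → subject_1
```

The three algorithms differ only in how the feature vectors are computed (PCA projection, LDA projection, or local binary pattern histograms), not in this matching step.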


Sensors ◽  
2022 ◽  
Vol 22 (1) ◽  
pp. 367
Author(s):  
Janez Lapajne ◽  
Matej Knapič ◽  
Uroš Žibrat

Hyperspectral imaging is a popular tool for non-invasive plant disease detection. Data acquired with it usually consist of many correlated features; hence most of the acquired information is redundant. Dimensionality reduction methods are used to transform the data sets from high-dimensional to low-dimensional (in this study, to one or a few features). We chose six dimensionality reduction methods (partial least squares, linear discriminant analysis, principal component analysis, RandomForest, ReliefF, and extreme gradient boosting) and tested their efficacy on a hyperspectral data set of potato tubers. The extracted or selected features were pipelined to a support vector machine classifier and evaluated. Tubers were divided into two groups: healthy, and infested with Meloidogyne luci. The results show that all dimensionality reduction methods enabled successful identification of inoculated tubers. The best and most consistent results were obtained using linear discriminant analysis, with 100% accuracy on both the inside and outside images of the potato tubers. Classification success was generally higher in the outside data set than in the inside one. Nevertheless, accuracy was in all cases above 0.6.
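
When the data are reduced all the way to a single feature, the downstream two-class decision is essentially a 1-D threshold. The sketch below finds the threshold that best separates two groups on a toy feature; the values and labels are invented, not the study's spectra:

```python
# Finding the best separating threshold on a single (reduced) feature.
# Feature values and labels are illustrative.

def best_threshold(values, labels):
    """Return (threshold, accuracy) maximizing accuracy of 'value >= t -> 1'."""
    best = (None, 0.0)
    for t in sorted(set(values)):
        preds = [1 if v >= t else 0 for v in values]
        acc = sum(p == l for p, l in zip(preds, labels)) / len(labels)
        if acc > best[1]:
            best = (t, acc)
    return best

values = [0.2, 0.3, 0.7, 0.9]   # score on the one selected spectral feature
labels = [0, 0, 1, 1]           # 0 = healthy, 1 = infested
print(best_threshold(values, labels))  # → (0.7, 1.0)
```

An SVM on one feature learns exactly such a cut-off (with a margin); with more retained features it generalizes to a separating hyperplane.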


2013 ◽  
Vol 459 ◽  
pp. 228-231 ◽  
Author(s):  
Hao Yang ◽  
Song Wu

Electroencephalography (EEG) is generally used in brain-computer interface (BCI) applications to measure brain signals. However, multichannel EEG signals characterized by irrelevant and redundant features deteriorate classification accuracy. This paper presents a method based on the common spatial pattern (CSP) for feature extraction, with a support vector machine with a genetic algorithm (SVM-GA) as the classifier; the GA is used to optimize the kernel parameter settings. The proposed algorithm is evaluated on data set IVa of BCI Competition III. Results show that the proposed method outperforms conventional linear discriminant analysis (LDA) in average classification performance.
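
The GA's role here is kernel-parameter tuning: evolve a population of candidate parameter values, score each by classification fitness, and breed the fittest. A toy, self-contained version of that loop, with a stand-in fitness function peaked at 2.0 in place of cross-validated SVM accuracy:

```python
# Toy genetic algorithm of the kind used to tune an SVM kernel parameter.
# The fitness function and parameter range are illustrative.
import random

def ga_optimize(fitness, lo, hi, pop_size=20, generations=30, seed=1):
    """Evolve a 1-D parameter in [lo, hi] to maximize fitness."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]               # selection: keep fittest half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a + b) / 2                      # crossover: midpoint
            child += rng.gauss(0, (hi - lo) * 0.05)  # mutation: small jitter
            children.append(min(hi, max(lo, child)))
        pop = parents + children                     # elitist: parents survive
    return max(pop, key=fitness)

# Stand-in fitness with its maximum at 2.0; a real pipeline would use
# cross-validated SVM accuracy as a function of the kernel parameter.
best = ga_optimize(lambda x: -(x - 2.0) ** 2, lo=0.0, hi=10.0)
print(best)
```

Because parents are carried over unchanged, the best candidate found so far is never lost, so the population converges toward the fitness peak.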

