Application of texture features and machine learning methods to grains segmentation in rock material images

2020
Author(s):
Karolina Nurzynska,
Sebastian Iwaszenko

The segmentation of rock grains in images depicting bulk rock materials is considered. The rock material images are transformed by selected texture operators to obtain a set of features describing them. First-order features, second-order features, the run-length matrix, the grey tone difference matrix, and Laws' energies are used for this purpose. The features are classified using k-nearest neighbours, support vector machines, and artificial neural network classifiers. The results show that the borders of rock grains can be determined with above 70% accuracy. A multi-texture approach was also investigated, increasing accuracy to over 77% for early fusion of features. Attempts were made to reduce the feature space dimensionality both by manually picking features and by principal component analysis; these led to a significant decrease in accuracy. The obtained results were visually compared with the ground truth, and the observed agreement can be considered satisfactory.
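A patch-wise texture classification scheme of this kind can be sketched with first-order features alone; the patches below are synthetic stand-ins for the rock material images, and the feature set is a small subset of the operators named in the abstract.

```python
# Sketch of patch-wise texture classification with first-order features,
# assuming synthetic patches stand in for the rock material images.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

def first_order_features(patch):
    """First-order texture features: mean, variance, skewness, entropy."""
    p = patch.ravel().astype(float)
    mean, var = p.mean(), p.var()
    skew = ((p - mean) ** 3).mean() / (var ** 1.5 + 1e-9)
    hist, _ = np.histogram(p, bins=16, density=True)
    hist = hist[hist > 0]
    entropy = -(hist * np.log2(hist)).sum()
    return [mean, var, skew, entropy]

# Two synthetic texture classes: smooth "grain interior" vs noisy "border".
patches = [rng.normal(100, 5, (16, 16)) for _ in range(50)] + \
          [rng.normal(100, 40, (16, 16)) for _ in range(50)]
labels = [0] * 50 + [1] * 50

X = np.array([first_order_features(p) for p in patches])
clf = KNeighborsClassifier(n_neighbors=3).fit(X[::2], labels[::2])
print(clf.score(X[1::2], labels[1::2]))  # held-out accuracy
```

In practice each pixel of the image would receive such a feature vector from its neighbourhood, so the classifier output forms a segmentation map.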

Sensors
2019
Vol 19 (20)
pp. 4523
Author(s):
Carlos Cabo,
Celestino Ordóñez,
Fernando Sánchez-Lasheras,
Javier Roca-Pardiñas,
and Javier de Cos-Juez

We analyze the utility of multiscale supervised classification algorithms for object detection and extraction from laser scanning or photogrammetric point clouds. Only the geometric information (the point coordinates) was considered, making the method independent of the systems used to collect the data. A maximum of five features (input variables) was used, four of them related to the eigenvalues obtained from a principal component analysis (PCA). PCA was carried out at six scales, defined by the diameter of a sphere around each observation. Four multiclass supervised classification models were tested (linear discriminant analysis, logistic regression, support vector machines, and random forest) in two different scenarios, urban and forest, formed by artificial and natural objects, respectively. The results obtained were accurate (overall accuracy over 80% for the urban dataset and over 93% for the forest dataset), in the range of the best results found in the literature, regardless of the classification method. For both datasets, the random forest algorithm provided the best results when discrimination capacity, computing time, and the ability to estimate the relative importance of each variable are considered together.


2011
Vol 2011
pp. 1-28
Author(s):
Zhongqiang Chen,
Zhanyan Liang,
Yuan Zhang,
Zhongrong Chen

Grayware encyclopedias collect known species to provide information for incident analysis; however, their lack of categorization and generalization capability renders them ineffective in the development of defense strategies against clustered strains. A grayware categorization framework is therefore proposed here, not only to classify grayware according to diverse taxonomic features but also to facilitate evaluation of the risk grayware poses to cyberspace. Armed with Support Vector Machines, the framework builds learning models from training data extracted automatically from grayware encyclopedias and visualizes categorization results with Self-Organizing Maps. The features used in the learning models are selected by information gain, and the high dimensionality of the feature space is reduced by word stemming and stopword removal. The grayware categorizations over diversified features reveal that grayware typically attempts to improve its penetration rate by resorting to multiple installation mechanisms and reduced code footprints. The framework also shows that grayware evades detection by attacking victims' security applications and resists removal by enhancing its clotting capability with infected hosts. Our analysis further points out that species in the categories Spyware and Adware continue to dominate the grayware landscape and impose extremely critical threats to the Internet ecosystem.
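The text-categorization core of such a framework can be sketched in a few lines; the grayware descriptions below are toy examples, stemming is omitted, and scikit-learn's mutual information scorer stands in for the information-gain selection described in the abstract.

```python
# Sketch of encyclopedia-text categorization, assuming toy grayware
# descriptions; mutual_info_classif stands in for information gain,
# and stemming is omitted for brevity.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

docs = [
    "tracks browsing habits and sends reports to remote server",
    "logs keystrokes and monitors user activity covertly",
    "displays pop-up advertisements and banner ads",
    "injects sponsored ads into search results pages",
]
labels = ["Spyware", "Spyware", "Adware", "Adware"]

model = make_pipeline(
    CountVectorizer(stop_words="english"),   # stopword removal
    SelectKBest(mutual_info_classif, k=10),  # feature selection
    LinearSVC(),                             # SVM classifier
)
model.fit(docs, labels)
print(model.score(docs, labels))  # training accuracy on the toy corpus
```

A real deployment would train on thousands of encyclopedia entries and feed the resulting category assignments into the Self-Organizing Map visualization.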


2013
Vol 2013
pp. 1-6
Author(s):
Ersen Yılmaz

An expert system with two stages is proposed for cardiac arrhythmia diagnosis. In the first stage, the Fisher score is used for feature selection to reduce the dimensionality of the data set's feature space. The second stage is the classification stage, in which a least-squares support vector machine classifier is applied to the feature subset selected in the first stage to diagnose cardiac arrhythmia. The performance of the proposed expert system is evaluated on an arrhythmia data set taken from the UCI machine learning repository.
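The two-stage scheme can be sketched on synthetic data as below; the Fisher score is computed directly from its definition, and scikit-learn's standard SVC stands in for the least-squares SVM variant.

```python
# Sketch of the two-stage scheme on synthetic data: Fisher score for
# feature ranking, then an SVM (sklearn's SVC stands in for the
# least-squares SVM used in the paper).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)

def fisher_score(X, y):
    """Fisher score per feature for a binary-labelled data set."""
    scores = []
    for j in range(X.shape[1]):
        f = X[:, j]
        m0, m1 = f[y == 0].mean(), f[y == 1].mean()
        v0, v1 = f[y == 0].var(), f[y == 1].var()
        scores.append((m0 - m1) ** 2 / (v0 + v1 + 1e-9))
    return np.array(scores)

# 2 informative features hidden among 20 noisy ones.
X = rng.normal(0, 1, (200, 20))
y = (rng.uniform(size=200) < 0.5).astype(int)
X[:, 3] += 3 * y   # informative
X[:, 7] -= 3 * y   # informative

top = np.argsort(fisher_score(X, y))[::-1][:2]
print(sorted(int(i) for i in top))  # the two informative feature indices

clf = SVC().fit(X[:, top], y)
print(clf.score(X[:, top], y))
```

Selecting the subset before classification is what keeps the second stage tractable on the 279-feature UCI arrhythmia data.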


2019
Vol 11 (11)
pp. 3222
Author(s):
Pascal Schirmer,
Iosif Mporas

In this paper we evaluate several well-known and widely used machine learning algorithms for regression in the energy disaggregation task. Specifically, the Non-Intrusive Load Monitoring approach was considered, and the K-Nearest-Neighbours, Support Vector Machines, Deep Neural Networks and Random Forest algorithms were evaluated across five datasets using seven different sets of statistical and electrical features. The experimental results demonstrated the importance of selecting both appropriate features and regression algorithms. Analysis at the device level showed that linear devices can be disaggregated using statistical features, while for non-linear devices the use of electrical features significantly improves the disaggregation accuracy, as non-linear appliances have non-sinusoidal current draw and thus cannot be well parametrized by their active power consumption alone. The best performance in terms of energy disaggregation accuracy was achieved by the Random Forest regression algorithm.
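The regression formulation of disaggregation can be sketched as below; the aggregate signal and appliance are synthetic, and the window features are a small illustrative set rather than any of the paper's seven feature sets.

```python
# Sketch of regression-based disaggregation on a synthetic aggregate
# signal: predict one appliance's power from statistical features of a
# sliding window, using Random Forest (the paper's best performer).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)

# Synthetic signals: a 2 kW resistive heater plus other background load.
heater = 2000.0 * (rng.uniform(size=1000) < 0.3)
aggregate = heater + rng.normal(300, 50, 1000)

def window_features(signal, i, w=5):
    """Statistical features of a sliding window around sample i."""
    seg = signal[max(0, i - w):i + w + 1]
    return [signal[i], seg.mean(), seg.std(), seg.max()]

X = np.array([window_features(aggregate, i) for i in range(1000)])
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, heater)
mae = np.mean(np.abs(model.predict(X) - heater))
print(mae)  # mean absolute error in watts
```

A purely resistive load like this toy heater is recoverable from statistical features alone; per the abstract, non-linear appliances would additionally need electrical features such as harmonics of the current waveform.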


2015
Vol 2015
pp. 1-8
Author(s):
Oliver Kramer

Cascade support vector machines have been introduced as an extension of classic support vector machines that allows fast training on large data sets. In this work, we combine cascade support vector machines with dimensionality-reduction-based preprocessing. The cascade principle allows fast learning by dividing the training set into subsets and merging the cascade learning results based on the support vectors found at each cascade level. Combining this with dimensionality reduction as preprocessing yields a significant speedup, often without loss of classifier accuracy, while considering the high-dimensional pendants of the low-dimensional support vectors at each new cascade level. We analyze and compare various instantiations of dimensionality-reduction preprocessing and cascade SVMs with principal component analysis, locally linear embedding, and isometric mapping. The experimental analysis on various artificial and real-world benchmark problems covers cascade-specific parameters such as intermediate training set sizes and dimensionalities.
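The cascade principle itself fits in a few lines; the sketch below uses a two-subset, two-level cascade on synthetic blobs and omits the dimensionality-reduction preprocessing for brevity.

```python
# Minimal two-level cascade SVM sketch: train SVMs on subsets, keep only
# their support vectors, and train the next level on that union
# (the dimensionality-reduction preprocessing is omitted here).
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_blobs

X, y = make_blobs(n_samples=400, centers=[[-3, -3], [3, 3]],
                  cluster_std=1.5, random_state=0)

# Level 1: independent SVMs on two halves of the training set.
sv_idx = []
for part in (slice(0, 200), slice(200, 400)):
    svm = SVC(kernel="linear").fit(X[part], y[part])
    sv_idx.extend(np.arange(400)[part][svm.support_])

# Level 2: one SVM trained only on the union of level-1 support vectors.
final = SVC(kernel="linear").fit(X[sv_idx], y[sv_idx])
print(len(sv_idx), final.score(X, y))  # reduced set size, full-set accuracy
```

The speedup comes from the level-2 problem being much smaller than the full set; the paper's contribution is running the cascade in a reduced space while mapping the surviving support vectors back to their high-dimensional pendants.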


Author(s):  
Sanjay Kumar Sonbhadra,
Sonali Agarwal,
P. Nagabhushan

Existing dimensionality reduction (DR) techniques such as principal component analysis (PCA) and its variants are not suitable for target class mining because they neglect the unique statistical properties of class-of-interest (CoI) samples. Conventionally, these approaches utilize the higher- or lower-eigenvalued principal components (PCs) for data transformation; but the higher-eigenvalued PCs may split the target class, whereas the lower-eigenvalued PCs contribute no significant information, and a wrong selection of PCs leads to performance degradation. Considering these facts, the present research offers a novel target class-guided feature extraction method. In this approach, the eigendecomposition is first performed on the variance–covariance matrix of only the target class samples; the higher- and lower-valued eigenvectors are rejected via statistical analysis, and the selected eigenvectors are used to extract the most promising feature subspace. The extracted feature subset gives a tighter description of the CoI with enhanced associativity among target class samples and ensures strong separation from nontarget class samples. A one-class support vector machine (OCSVM) is evaluated to validate the performance of the learned features. To obtain optimized values of the OCSVM hyperparameters, a novel [Formula: see text]-ary search-based autonomous method is also proposed. Exhaustive experiments with a wide variety of datasets are performed in feature space (original and reduced) and eigenspace (obtained from original and reduced features) to validate the performance of the proposed approach in terms of accuracy, precision, specificity and sensitivity.
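The core idea can be sketched on synthetic data as below; the eigenvector rejection here simply drops the single highest- and lowest-valued components, a crude stand-in for the paper's statistical selection, and the hyperparameter search is omitted.

```python
# Sketch of target-class-guided feature extraction on synthetic data:
# eigendecompose the covariance of the class of interest only, drop the
# extreme eigenvectors (a crude stand-in for the paper's statistical
# rejection), project, then fit a one-class SVM.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(4)
target = rng.normal(0, 1, (300, 10)) * np.linspace(0.2, 3, 10)

# Eigendecomposition of the target class covariance only.
evals, evecs = np.linalg.eigh(np.cov(target.T))
order = np.argsort(evals)[::-1]
keep = order[1:-1]            # reject highest- and lowest-valued eigenvectors
Z = target @ evecs[:, keep]

ocsvm = OneClassSVM(nu=0.05).fit(Z)
outliers = rng.normal(5, 1, (50, 10)) @ evecs[:, keep]
print((ocsvm.predict(Z) == 1).mean(),        # target acceptance rate
      (ocsvm.predict(outliers) == -1).mean())  # nontarget rejection rate
```

Note that only target samples inform the subspace, which is exactly what distinguishes this from ordinary PCA over the full data set.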


2021
Vol 163 (A3)
Author(s):
B Shabani,
J Ali-Lavroff,
D S Holloway,
S Penev,
D Dessi,
...

An onboard monitoring system can measure features such as stress cycle counts and provide warnings due to slamming. Given current technology trends, there is an opportunity to incorporate machine learning methods into such monitoring systems. A hull monitoring system has been developed and installed on a 111 m wave-piercing catamaran (Hull 091) to remotely monitor the ship kinematics and hull structural responses. In parallel, an existing dataset from a similar vessel (Hull 061) was analysed using unsupervised and supervised learning models; these were found to be beneficial for the classification of bow entry events according to key kinematic parameters. A comparison of different algorithms, including linear support vector machines, naïve Bayes and decision trees, was conducted for the bow entry classification. In addition, using empirical probability distributions, the likelihood of wet-deck slamming was estimated given a vertical bow acceleration threshold of 1 in head seas, clustering the feature space with approximate probabilities of 0.001, 0.030 and 0.25.
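The event-classification idea can be sketched on synthetic kinematics; the features, scales and threshold below are illustrative only and not the vessel's actual parameters.

```python
# Sketch of bow-entry event classification on synthetic kinematics:
# label events by a vertical-acceleration threshold and fit a small
# decision tree (all features, scales and thresholds are illustrative).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(5)
n = 500
pitch = rng.normal(0, 1.0, n)         # deg, illustrative scale
rel_velocity = rng.normal(0, 2.0, n)  # m/s, illustrative scale
accel = (0.2 * np.abs(rel_velocity) + 0.1 * np.abs(pitch)
         + rng.normal(0, 0.05, n))    # vertical bow acceleration proxy

slam = (accel > 0.6).astype(int)      # thresholded-acceleration label
X = np.column_stack([pitch, rel_velocity])
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, slam)
print(tree.score(X, slam), slam.mean())  # training accuracy, empirical rate
```

The empirical event rate printed above mirrors how exceedance probabilities such as the reported 0.001, 0.030 and 0.25 can be estimated per cluster from measured distributions.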


Author(s):  
Vanika Singhal,
Preety Singh

Acute Lymphoblastic Leukemia is a cancer of the blood caused by an increase in the number of immature lymphocyte cells. Detection is done manually by skilled pathologists, which is time-consuming and depends on the pathologist's skill. The authors propose a methodology for discriminating a normal lymphocyte cell from a malignant one by processing the blood sample image. An automatic detection process will reduce the diagnosis time and not be limited by human interpretation. The lymphocyte images are classified based on two types of extracted features: shape and texture. To identify prominent shape features, Correlation-based Feature Selection is applied. Principal Component Analysis is applied to the texture features to reduce their dimensionality. A Support Vector Machine is used for classification. It is observed that 16 shape features give a classification accuracy of 92.3% and that changes in the geometrical properties of the nucleus emerge as significant features for detecting a malignant lymphocyte.
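The shape-plus-texture pipeline can be sketched on synthetic feature vectors; the feature counts below are illustrative, and the correlation-based selection of shape features is assumed to have already produced the 16-feature block.

```python
# Sketch of the shape+texture pipeline on synthetic feature vectors:
# PCA compresses the texture block, the (assumed pre-selected) shape
# features are kept as-is, and an SVM does the final classification.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(6)
n = 200
y = (rng.uniform(size=n) < 0.5).astype(int)          # 0 normal, 1 malignant
shape = rng.normal(0, 1, (n, 16)) + y[:, None] * 1.5  # 16 shape features
texture = rng.normal(0, 1, (n, 50))                   # 50 texture features

pca = PCA(n_components=5).fit(texture)
X = np.hstack([shape, pca.transform(texture)])        # combined feature set

clf = SVC().fit(X, y)
print(clf.score(X, y))  # training accuracy on the synthetic set
```

Here the discriminative signal sits entirely in the shape block, loosely mirroring the paper's finding that geometrical properties of the nucleus carry most of the diagnostic information.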

