3D Solid Texture Classification Using Locally-Oriented Wavelet Transforms

2017 ◽  
Vol 26 (4) ◽  
pp. 1899-1910 ◽  
Author(s):  
Yashin Dicente Cid ◽  
Henning Müller ◽  
Alexandra Platon ◽  
Pierre-Alexandre Poletti ◽  
Adrien Depeursinge
2016 ◽  
Vol 2016 ◽  
pp. 1-6 ◽  
Author(s):  
Juan Wang ◽  
Jiangshe Zhang ◽  
Jie Zhao

Texture classification is an important research topic in image processing. In 2012, the scattering transform, computed by iterating successive wavelet transforms and modulus operators, was introduced. This paper presents new approaches for texture feature extraction using the scattering transform. Scattering statistical features and scattering co-occurrence features are derived from the subbands of the scattering decomposition and from the original images. These features are then used for classification on four datasets containing 20, 30, 112, and 129 texture images, respectively. Experimental results show that the proposed approaches achieve promising classification performance.
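The iterated wavelet-plus-modulus construction behind these features can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it uses a single-level 2-D Haar transform as a hypothetical stand-in for the wavelets used, and collects the mean and standard deviation of each modulus subband as "scattering statistical" features.

```python
import numpy as np

def haar_subbands(img):
    """One level of a 2-D Haar transform: approximation plus three
    detail subbands (horizontal, vertical, diagonal), each half-size."""
    p00, p10 = img[::2, ::2], img[1::2, ::2]
    p01, p11 = img[::2, 1::2], img[1::2, 1::2]
    a = (p00 + p01 + p10 + p11) / 4.0
    h = (p00 + p01 - p10 - p11) / 4.0
    v = (p00 - p01 + p10 - p11) / 4.0
    d = (p00 - p01 - p10 + p11) / 4.0
    return a, [h, v, d]

def scattering_stats(img, depth=2):
    """Iterate wavelet transform + modulus; at each level, record the
    mean and standard deviation of every modulus subband as features."""
    feats = []
    layer = [np.asarray(img, dtype=float)]
    for _ in range(depth):
        nxt = []
        for x in layer:
            _, details = haar_subbands(x)
            for sub in details:
                m = np.abs(sub)          # modulus operator
                feats += [m.mean(), m.std()]
                nxt.append(m)            # feed modulus into next level
        layer = nxt
    return np.array(feats)
```

With `depth=2` a single image yields 3 first-level and 9 second-level subbands, i.e. a 24-dimensional feature vector; a constant image produces all-zero features since every detail subband vanishes.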


Author(s):  
Pullela S V V S R Kumar ◽  
Vasamsetti. Ch. Sekhararao ◽  
Ayanavalli Ramadevi ◽  
Ch. N.Durga Swathi ◽  
P Raviraja Reddy Mallidi

2007 ◽  
Vol 66 (6) ◽  
pp. 505-512
Author(s):  
A. D. Kukharev ◽  
Yu. S. Evstifeev ◽  
V. G. Yakovlev

2020 ◽  
Vol 2020 (10) ◽  
pp. 310-1-310-7
Author(s):  
Khalid Omer ◽  
Luca Caucci ◽  
Meredith Kupinski

This work reports on convolutional neural network (CNN) performance on an image texture classification task as a function of linear image processing and the number of training images. Detection performance of single- and multi-layer CNNs (sCNN/mCNN) is compared to that of optimal observers. Performance is quantified by the area under the receiver operating characteristic (ROC) curve, also known as the AUC: AUC = 1.0 for perfect detection and AUC = 0.5 for guessing. The Ideal Observer (IO) maximizes the AUC but is prohibitive in practice because it depends on high-dimensional image likelihoods. IO performance is invariant to any full-rank, invertible linear image processing. This work demonstrates the existence of full-rank, invertible linear transforms that can degrade both sCNN and mCNN performance even in the limit of large quantities of training data. A subsequent invertible linear transform changes the images' correlation structure again and can improve this AUC. Stationary textures sampled from zero-mean, unequal-covariance Gaussian distributions allow closed-form analytic expressions for the IO and for optimal linear compression. Linear compression is a mitigation technique for high-dimension, low-sample-size (HDLSS) applications; by definition, compression strictly decreases or maintains IO detection performance. For small quantities of training data, linear image compression prior to the sCNN architecture can increase the AUC from 0.56 to 0.93. Results indicate an optimal compression ratio for CNNs based on task difficulty, compression method, and number of training images.
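The AUC figure of merit used throughout has a simple empirical estimator. The helper below is an illustrative sketch (not taken from the paper): it computes the AUC as the Mann-Whitney statistic, i.e. the probability that a randomly chosen signal-present score exceeds a signal-absent score, with ties counted as one half.

```python
import numpy as np

def empirical_auc(signal_scores, noise_scores):
    """AUC via the Mann-Whitney statistic: P(signal score > noise score),
    counting ties as one half. 1.0 = perfect detection, 0.5 = guessing."""
    s = np.asarray(signal_scores, dtype=float)[:, None]
    n = np.asarray(noise_scores, dtype=float)[None, :]
    return float((s > n).mean() + 0.5 * (s == n).mean())

# Perfectly separated score distributions give AUC = 1.0:
# empirical_auc([2.0, 3.0], [0.0, 1.0]) -> 1.0
# Identical scores give the guessing value 0.5:
# empirical_auc([0.0], [0.0]) -> 0.5
```

Because this estimate depends only on the ordering of scores, any strictly monotone transform of a test statistic leaves it unchanged, which is why the IO's AUC is a processing-independent benchmark for the trained CNNs.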


2014 ◽  
Vol 1 (3) ◽  
pp. 23-31
Author(s):  
Basava Raju ◽  
K. Y. Rama Devi ◽  
P. V. Kumar ◽  
...  
