Estimation of Fractal Dimension in Different Color Model

Author(s):  
Sumitra Kisan ◽  
Sarojananda Mishra ◽  
Ajay Chawda ◽  
Sanjay Nayak

This article describes the vital role the term fractal dimension (FD) plays in fractal geometry. FD is a measure that characterizes the complexity and irregularity of fractals, denoting the amount of space they fill. There are many procedures for evaluating the dimension of fractal surfaces, such as the box count, differential box count, and improved differential box count methods. These methods are primarily used for greyscale images. The authors' objective in this article is to estimate the fractal dimension of color images using different color models. They propose a novel method for estimation in the CMY and HSV color spaces. To obtain their results, they performed tests on a number of color images in the RGB color space. The authors present their experimental results and discuss the issues that characterize the approach. Finally, they conclude the article with an analysis of the calculated FDs for images in different color spaces.
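The box-counting idea behind these estimators can be sketched in a few lines: count the occupied boxes at several scales and fit the slope of log N(s) against log(1/s). This is a minimal binary box count, not the differential variant evaluated in the article; the grid and scale choices are illustrative.

```python
import math

def box_count_dimension(grid, sizes=(1, 2, 4, 8)):
    """Estimate the fractal dimension of a binary 2D grid by box counting.

    Fits log N(s) versus log(1/s), where N(s) is the number of s x s
    boxes containing at least one foreground pixel.
    """
    n = len(grid)
    xs, ys = [], []
    for s in sizes:
        count = 0
        for i in range(0, n, s):
            for j in range(0, n, s):
                if any(grid[r][c]
                       for r in range(i, min(i + s, n))
                       for c in range(j, min(j + s, n))):
                    count += 1
        xs.append(math.log(1.0 / s))
        ys.append(math.log(count))
    # least-squares slope of log N(s) against log(1/s)
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# A completely filled 16x16 square should have dimension ~2
filled = [[1] * 16 for _ in range(16)]
print(round(box_count_dimension(filled), 2))  # → 2.0
```

A single filled row of the same grid yields a slope of about 1, matching the intuition that a line is one-dimensional.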

Author(s):  
PEICHUNG SHIH ◽  
CHENGJUN LIU

Content-based face image retrieval is concerned with computer retrieval of face images (of a given subject) based on geometric or statistical features automatically derived from those images. It is well known that color spaces provide powerful information for image indexing and retrieval by means of color invariants, color histograms, color texture, etc. This paper comparatively assesses the performance of content-based face image retrieval in different color spaces using a standard algorithm, Principal Component Analysis (PCA), which has become a popular algorithm in the face recognition community. In particular, we comparatively assess 12 color spaces (RGB, HSV, YUV, YCbCr, XYZ, YIQ, L*a*b*, U*V*W*, L*u*v*, I1I2I3, HSI, and rgb) by evaluating seven color configurations for every color space. A color configuration is defined by an individual color component image or a combination of them. Taking the RGB color space as an example, the possible color configurations are R, G, B, RG, RB, GB, and RGB. Experimental results using 600 FERET color images corresponding to 200 subjects and 456 FRGC (Face Recognition Grand Challenge) color images of 152 subjects show that some color configurations, such as YV in the YUV color space and YI in the YIQ color space, help improve face retrieval performance.
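Forming a color configuration amounts to converting each pixel to the target space and concatenating the selected component images into one feature vector before PCA. A minimal sketch, assuming a full-range BT.601 YUV conversion; the function names and data layout are hypothetical.

```python
def rgb_to_yuv(r, g, b):
    # Full-range RGB -> YUV using BT.601 luma coefficients
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)
    v = 0.877 * (r - y)
    return y, u, v

def configuration(pixels, components):
    """Build a feature vector for a color configuration such as 'YV':
    convert each (r, g, b) pixel to YUV, then concatenate the selected
    component images, ready to be fed to PCA."""
    planes = {"Y": [], "U": [], "V": []}
    for r, g, b in pixels:
        y, u, v = rgb_to_yuv(r, g, b)
        planes["Y"].append(y)
        planes["U"].append(u)
        planes["V"].append(v)
    vec = []
    for c in components:
        vec.extend(planes[c])
    return vec
```

For a one-pixel red image, `configuration([(1, 0, 0)], "YV")` returns the Y value followed by the V value of that pixel.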


Agriculture ◽  
2020 ◽  
Vol 11 (1) ◽  
pp. 6
Author(s):  
Ewa Ropelewska

The aim of this study was to evaluate the usefulness of the texture and geometric parameters of the endocarp (pit) for distinguishing different cultivars of sweet cherries using image analysis. Textures computed from images converted to individual color channels, along with the geometric parameters of the endocarp (pits), were calculated for the sweet cherry cultivars ‘Kordia’, ‘Lapins’, and ‘Büttner’s Red’. For the set combining the selected textures from all color channels, the accuracy reached 100% when comparing ‘Kordia’ vs. ‘Lapins’ and ‘Kordia’ vs. ‘Büttner’s Red’ for all classifiers. The pits of ‘Kordia’ and ‘Lapins’, as well as ‘Kordia’ and ‘Büttner’s Red’, were also 100% correctly discriminated by discriminative models built separately for the RGB, Lab, and XYZ color spaces, the G, L, and Y color channels, and models combining selected textural and geometric features. For discriminating the ‘Lapins’ and ‘Büttner’s Red’ pits, slightly lower accuracies were obtained: up to 93% for models built on textures selected from all color channels, 91% for the RGB color space, 92% for the Lab and XYZ color spaces, 84% for the G and L color channels, 83% for the Y channel, 94% for geometric features, and 96% for combined textural and geometric features.
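Per-channel texture features of the kind combined here with geometric parameters can be illustrated with simple first-order statistics; the study itself used a larger texture feature set, so this sketch is only indicative.

```python
import statistics

def channel_texture_features(channel):
    """First-order texture statistics for one color channel, given as a
    flat list of 0-255 intensities: mean, variance, and histogram
    energy (uniformity). Such per-channel features can be concatenated
    across channels and combined with geometric parameters before
    training a classifier."""
    mean = statistics.fmean(channel)
    var = statistics.pvariance(channel)
    hist = [0] * 256
    for v in channel:
        hist[v] += 1
    n = len(channel)
    energy = sum((h / n) ** 2 for h in hist)  # 1.0 for a uniform region
    return {"mean": mean, "variance": var, "energy": energy}
```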


2021 ◽  
Vol 13 (5) ◽  
pp. 939
Author(s):  
Yongan Xue ◽  
Jinling Zhao ◽  
Mingmei Zhang

To accurately extract cultivated land boundaries from high-resolution remote sensing imagery, an improved watershed segmentation algorithm combining pre- and post-improvement procedures is proposed herein. Image contrast enhancement was used as the pre-improvement, while the color distance of the Commission Internationale de l'Éclairage (CIE) color spaces, including Lab and Luv, was used as the regional similarity measure for region merging as the post-improvement. Furthermore, the area relative error criterion (δA), the pixel quantity error criterion (δP), and the consistency criterion (Khat) were used to evaluate the image segmentation accuracy. Region merging in the Red–Green–Blue (RGB) color space was used as the baseline against which the proposed algorithm was compared when extracting cultivated land boundaries. The validation experiments were performed on a subset of a Chinese Gaofen-2 (GF-2) remote sensing image covering an area of 0.12 km2. The results showed the following: (1) The contrast-enhanced image exhibited an obvious gain in segmentation effect and time efficiency under the improved algorithm. The time efficiency increased by 10.31%, 60.00%, and 40.28% in the RGB, Lab, and Luv color spaces, respectively. (2) The optimal segmentation and merging scale parameters in the RGB, Lab, and Luv color spaces were a minimum area C of 2000, 1900, and 2000, and a color difference D of 1000, 40, and 40, respectively. (3) The algorithm improved the time efficiency of cultivated land boundary extraction in the Lab and Luv color spaces by 35.16% and 29.58%, respectively, compared to the RGB color space. Relative to the RGB color space, the extraction accuracy measured by δA, δP, and Khat improved by 76.92%, 62.01%, and 16.83% in the Lab color space, and by 55.79%, 49.67%, and 13.42% in the Luv color space.
(4) In terms of visual comparison, time efficiency, and segmentation accuracy, the comprehensive extraction effect of the proposed algorithm was clearly better than that of the RGB-based algorithm. The established accuracy evaluation indicators also proved consistent with the visual evaluation. (5) The proposed method showed satisfactory transferability on a wider test area covering 1 km2. In summary, the proposed method applies image contrast enhancement and then performs region merging in the CIE color spaces on the simulated immersion watershed segmentation results. It is a useful refinement of the watershed segmentation algorithm for extracting cultivated land boundaries and provides a reference for enhancing the watershed algorithm.
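The post-improvement step, region merging under a CIE color-difference threshold, can be sketched as follows. This toy version uses the CIE76 Euclidean distance in L*a*b* and ignores region adjacency, which the real watershed-based algorithm tracks; the data layout and greedy one-pass merging are hypothetical simplifications.

```python
import math

def lab_color_distance(lab1, lab2):
    """CIE76 color difference: Euclidean distance in L*a*b* space,
    used as the regional similarity measure."""
    return math.dist(lab1, lab2)

def merge_regions(regions, threshold):
    """Greedy one-pass merge sketch. Each region is (mean_lab, pixel_count);
    any pair of regions closer than the color-difference threshold D is
    merged, with the merged mean weighted by pixel count."""
    merged = []
    for lab, count in regions:
        for m in merged:
            if lab_color_distance(lab, m[0]) < threshold:
                total = m[1] + count
                m[0] = tuple((a * m[1] + b * count) / total
                             for a, b in zip(m[0], lab))
                m[1] = total
                break
        else:
            merged.append([lab, count])
    return merged
```

With threshold D = 5, two near-identical regions collapse into one while a distant third survives, illustrating how the color-difference scale parameter controls merging.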


2014 ◽  
Vol 2014 ◽  
pp. 1-10 ◽  
Author(s):  
Dina Khattab ◽  
Hala Mousher Ebied ◽  
Ashraf Saad Hussein ◽  
Mohamed Fahmy Tolba

This paper presents a comparative study using different color spaces to evaluate the performance of color image segmentation using the automatic GrabCut technique. GrabCut is considered one of the semiautomatic image segmentation techniques, since it requires user interaction to initialize the segmentation process. The automation of the GrabCut technique is proposed as a modification of the original semiautomatic one in order to eliminate the user interaction. The automatic GrabCut utilizes the unsupervised Orchard and Bouman clustering technique for the initialization phase. Comparisons with the original GrabCut show the efficiency of the proposed automatic technique in terms of segmentation quality and accuracy. As no single color space is recommended for every segmentation problem, automatic GrabCut is applied with the RGB, HSV, CMY, XYZ, and YUV color spaces. The comparative study and experimental results using different color images show that the RGB color space is the best color space representation for the set of images used.
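The Orchard and Bouman clustering used for automatic initialization recursively splits the cluster of largest variance along the principal eigenvector of its covariance matrix. The sketch below simplifies that to a split along the coordinate axis of greatest variance, so it is an approximation of the real method, not the paper's implementation.

```python
def split_cluster(colors):
    """One binary split in the spirit of Orchard-Bouman clustering.
    The true method splits along the covariance matrix's principal
    eigenvector; this sketch splits at the mean along the coordinate
    axis of greatest variance."""
    n = len(colors)
    dims = len(colors[0])
    means = [sum(c[d] for c in colors) / n for d in range(dims)]
    variances = [sum((c[d] - means[d]) ** 2 for c in colors) / n
                 for d in range(dims)]
    axis = max(range(dims), key=lambda d: variances[d])
    low = [c for c in colors if c[axis] <= means[axis]]
    high = [c for c in colors if c[axis] > means[axis]]
    return low, high
```

Applied recursively to pixel colors, such splits yield the initial foreground/background clusters that replace the user-drawn rectangle.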


2010 ◽  
Vol 26-28 ◽  
pp. 48-54
Author(s):  
Jin Ling Wei ◽  
Jun Meng ◽  
Wei Song

Analysis of the greyscale images of each feature component in the RGB and HSI color spaces shows that each component carries different information about the color image. Histogram analysis of color images shows that the value range of the hue component H remains largely stable, and experiments confirm that H is the most stable and representative component. Finally, application examples illustrate that recognition and tracking of a moving target robot based on the hue component H is feasible.
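The hue-stability argument can be illustrated by thresholding pixels on hue alone, which makes the match largely insensitive to brightness changes. A minimal sketch using Python's colorsys; the saturation cut-off and tolerance are illustrative values, not taken from the paper.

```python
import colorsys

def hue_mask(pixels, target_hue, tol):
    """Flag pixels whose hue lies within `tol` of the target hue
    (hues in [0, 1) as returned by colorsys), ignoring brightness.
    Near-grey pixels, whose hue is unreliable, are rejected by a
    saturation cut-off."""
    mask = []
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        d = abs(h - target_hue)
        d = min(d, 1.0 - d)                 # hue is circular
        mask.append(d <= tol and s > 0.2)   # reject near-grey pixels
    return mask
```

Note that bright red (1, 0, 0) and dark red (0.5, 0, 0) both match a red target hue, while grey does not; this is the property exploited for tracking under varying illumination.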


2015 ◽  
Vol 743 ◽  
pp. 317-320
Author(s):  
Ravi Subban ◽  
Pasupathi Perumalsamy ◽  
G. Annalakshmi

This paper presents a novel method for skin segmentation in color images using piece-wise linear bound skin detection. Various color schemes are investigated and evaluated to find the effect of color space transformation on skin detection performance. Comprehensive knowledge of the various color spaces helps in evaluating skin color models. The absence of the luminance component increases performance, which also helps in finding the appropriate color space for skin detection. A single color component produces better performance than combined color components and reduces computational complexity.
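Piece-wise linear bound skin detection reduces to a handful of linear inequalities per pixel. As one concrete example (not necessarily the bounds used in this paper), the classic Peer/Kovac daylight rule in RGB:

```python
def is_skin_rgb(r, g, b):
    """Piece-wise linear skin bounds in RGB: the classic Peer/Kovac
    daylight rule, shown as one example of linear-bound skin
    detection. Components are 0-255 integers."""
    return (r > 95 and g > 40 and b > 20
            and max(r, g, b) - min(r, g, b) > 15
            and abs(r - g) > 15 and r > g and r > b)
```

Because each bound is a linear inequality on the color components, the whole test is a few comparisons per pixel, which is what keeps the computational complexity low.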


2020 ◽  
Vol 2020 (28) ◽  
pp. 193-198
Author(s):  
Hoang Le ◽  
Mahmoud Afifi ◽  
Michael S. Brown

Color space conversion is the process of converting the color values in an image from one color space to another. It is challenging because different color spaces have differently sized gamuts. For example, when converting an image encoded in a medium-sized color gamut (e.g., AdobeRGB or Display-P3) to a small color gamut (e.g., sRGB), color values may need to be compressed in a many-to-one manner (i.e., multiple colors in the source gamut map to a single color in the target gamut). If we try to convert this sRGB-encoded image back to a wider-gamut color encoding, it can be challenging to recover the original colors due to the loss of color fidelity. We propose a method to address this problem by embedding wide-gamut metadata inside saved images captured by a camera. Our key insight is that in the camera hardware, a captured image is converted to an intermediate wide-gamut color space (i.e., ProPhoto) as part of the processing pipeline. This wide-gamut image representation is then converted to a display color space and saved in an image format such as JPEG or HEIC. Our method includes a small sub-sampling of the color values from the ProPhoto image state in the camera in the final saved JPEG/HEIC image. We demonstrate that having this additional wide-gamut metadata available during color space conversion greatly assists in constructing a color mapping function to convert between color spaces. Our experiments show that our metadata-assisted color mapping method provides a notable improvement (up to 60% in terms of ΔE) over conventional color space methods using a perceptual rendering intent. In addition, we show how to extend our approach to perform spatially adaptive color space conversion over the image for additional improvements.
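The metadata idea can be caricatured in a few lines: store a sparse sub-sampling of ProPhoto values alongside the saved sRGB image, then use the stored pairs to map saved colors back toward the wide gamut. The nearest-neighbor lookup below is only a stand-in for the paper's learned color mapping function, and all names are hypothetical.

```python
def pack_metadata(prophoto_pixels, stride):
    """Sub-sample wide-gamut (ProPhoto) color values to embed as
    metadata alongside the display-referred JPEG/HEIC. (Sketch of the
    idea; the real sampling and encoding details differ.)"""
    return prophoto_pixels[::stride]

def recover_color(srgb_color, srgb_samples, prophoto_samples):
    """Nearest-neighbor stand-in for the learned mapping: map a saved
    sRGB color back toward wide gamut using the stored sample pairs
    (same index in both lists corresponds to the same scene color)."""
    best = min(range(len(srgb_samples)),
               key=lambda i: sum((a - b) ** 2
                                 for a, b in zip(srgb_samples[i], srgb_color)))
    return prophoto_samples[best]
```

The point of the sub-sampling is that a few hundred such pairs are tiny compared to the image, yet they anchor the inverse mapping that plain gamut-compressed sRGB values cannot recover on their own.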


2020 ◽  
Author(s):  
Dalí Dos Santos ◽  
Adriano Silva ◽  
Paulo De Faria ◽  
Bruno Travençolo ◽  
Marcelo Do Nascimento

Oral epithelial dysplasia is a common precancerous lesion type that can be graded as mild, moderate, or severe. Although not every oral epithelial dysplasia becomes cancerous over time, this premalignant condition has a significant rate of progression to cancer, and early treatment has been shown to be considerably more successful. The diagnosis and the distinctions between mild, moderate, and severe grades are made by pathologists through a complex and time-consuming process in which cytological features, including nuclear shape, are analysed. Computer-aided diagnosis can be applied as a tool to aid and enhance pathologists' decisions. Recently, deep-learning-based methods have been gaining increasing attention and have been successfully applied to nuclei segmentation problems in several scenarios. In this paper, we evaluated the impact of different color space transformations on automated nuclei segmentation in histological images of oral dysplastic tissues using fully convolutional neural networks (CNNs). The CNNs were trained on different color spaces using a dataset of tongue images from mice diagnosed with oral epithelial dysplasia. The CIE L*a*b* color space transformation achieved the best average accuracy over all analyzed color space configurations (88.2%). The results show that the chrominance information, i.e., the color values, does not play the most significant role for nuclei segmentation on this mouse tongue histopathological image dataset.
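Evaluating a color space configuration starts with a per-pixel transform of the training images. As a minimal sketch, the standard sRGB to CIE L*a*b* conversion (D65 white point) used by the best-performing configuration here can be written as:

```python
def _srgb_to_linear(c):
    # Undo the sRGB transfer curve (components in [0, 1])
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def rgb_to_lab(r, g, b):
    """Convert an sRGB color (components in [0, 1]) to CIE L*a*b*
    under the D65 white point -- the per-pixel transform applied to
    images before training in the L*a*b* configuration."""
    r, g, b = map(_srgb_to_linear, (r, g, b))
    # linear RGB -> XYZ (sRGB/D65 matrix)
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    xn, yn, zn = 0.95047, 1.0, 1.08883  # D65 reference white
    def f(t):
        return (t ** (1 / 3) if t > (6 / 29) ** 3
                else t / (3 * (6 / 29) ** 2) + 4 / 29)
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)
```

White maps to L* near 100 with a* and b* near 0, and black maps to L* = 0, which separates the luminance channel L* from the chrominance channels a* and b* that the paper found less decisive for segmentation.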

