Combination of Color and Significant Edge Features in Image Retrieval

2015 ◽  
Vol 734 ◽  
pp. 596-599 ◽  
Author(s):  
Deng Ping Fan ◽  
Juan Wang ◽  
Xue Mei Liang

The Context-Aware Saliency (CA) model is a recent model for saliency detection, but it has a strong limitation: it is very time consuming. This paper improves on that shortcoming with Fast-CA and proposes a novel framework for image retrieval and image representation. The proposed framework derives from Fast-CA and the multi-texton histogram: the mechanisms of visual attention are simulated and used to detect the salient areas of an image, and a very simple threshold method is then adopted to detect the dominant saliency areas. Color, texture, and edge features are extracted to describe the image content within the dominant saliency areas, and these features are integrated into a single representation, called the dominant saliency areas histogram (DSAH), which is used for image retrieval. Experimental results indicate that our algorithm outperforms the multi-texton histogram (MTH) and edge histogram descriptor (EHD) on the Corel dataset of 10,000 natural images.
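The thresholding and masked-histogram steps described above can be sketched as follows; this is a minimal NumPy illustration assuming a precomputed saliency map, and the function names, the 0.5 threshold ratio, and the 8-level color quantization are illustrative choices, not taken from the paper:

```python
import numpy as np

def dominant_saliency_mask(saliency, ratio=0.5):
    """Keep the dominant saliency areas by thresholding the saliency map
    at ratio * max (a simple stand-in for the paper's threshold step)."""
    return saliency >= ratio * saliency.max()

def masked_color_histogram(image, mask, bins=8):
    """Quantize each RGB channel into `bins` levels and histogram only
    the pixels inside the dominant saliency areas."""
    q = (image // (256 // bins)).astype(int)            # per-channel bin index
    codes = (q[..., 0] * bins + q[..., 1]) * bins + q[..., 2]
    hist = np.bincount(codes[mask], minlength=bins ** 3)
    return hist / max(hist.sum(), 1)                    # normalized histogram
```

In the full DSAH, texture and edge histograms computed over the same mask would be concatenated with this color histogram into one feature vector.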


2016 ◽  
Vol 25 (3) ◽  
pp. 441-454 ◽  
Author(s):  
H. Kavitha ◽  
M.V. Sudhamani

In this work, we present a technique for content-based image retrieval (CBIR) that combines an object's edge features with the distribution of its gradient orientations. First, the bidimensional empirical mode decomposition (BEMD) technique is employed to obtain the edge features of an image. Then, gradient-orientation information is obtained with the histogram of oriented gradients (HOG) descriptor. These two features are extracted from the images and stored in the database for later use. When the user submits a query image, its features are extracted in the same way and compared with the features of the dataset images. Based on similarity, the most relevant images are selected as the result set, ranked from higher to lower similarity, and displayed on the user interface. The experiments are carried out on the Columbia Object Image Library (COIL-100) dataset, a collection of 7200 color images of 100 different objects, each captured at 72 different orientations. Our proposed method achieves precision and recall values of 93.00 and 77.70, respectively. Taken individually, the precision and recall values are 82.25 and 68.54 for BEMD and 85.00 and 71.10 for HOG. The experimental results show that the combined method performs better than either individual method. Experiments are also conducted in the presence of noise, and the robustness of the method is verified.
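The gradient-orientation feature and the similarity-ranking step above can be sketched in NumPy; a global unsigned-orientation histogram stands in for the full cell-based HOG descriptor, and the L1 distance is an assumed similarity measure, since the abstract does not specify one:

```python
import numpy as np

def gradient_orientation_histogram(gray, bins=9):
    """Global histogram of unsigned gradient orientations, weighted by
    gradient magnitude -- a simplified, whole-image stand-in for the
    cell-based HOG descriptor."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0        # unsigned orientation
    idx = np.minimum((ang / (180.0 / bins)).astype(int), bins - 1)
    hist = np.bincount(idx.ravel(), weights=mag.ravel(), minlength=bins)
    return hist / max(hist.sum(), 1e-12)

def rank_by_similarity(query_feat, db_feats):
    """Rank database images by L1 distance to the query feature
    (best match first)."""
    dists = [np.abs(query_feat - f).sum() for f in db_feats]
    return np.argsort(dists)
```

In the combined method, the BEMD edge feature and the HOG feature of each image would be concatenated before ranking.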


2018 ◽  
Vol 6 (9) ◽  
pp. 259-273
Author(s):  
Priyanka Saxena ◽  
Shefali

A Content-Based Image Retrieval system automatically retrieves the images most relevant to a query image by extracting visual features rather than keywords. Over the years, considerable research has been conducted in this field, but such systems still face the challenges of the semantic gap and the subjectivity of human perception. This paper proposes extracting low-level visual features using color moments, the Local Binary Pattern, and Canny edge detection for color, texture, and edge features, respectively. The combination of these features is used in conjunction with a Support Vector Machine (SVM) to reduce retrieval time and improve overall precision. The challenge of the semantic gap between low-level and high-level features is addressed by incorporating relevance feedback. An average precision of 0.782 was obtained by combining the color, texture, and edge features; 0.896 by using the combined features with the SVM; and 0.882 by using the combined features with relevance feedback to bridge the semantic gap. Experimental results exhibit improved performance over other state-of-the-art techniques.
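Two of the building blocks mentioned above, the color-moment descriptor and a relevance-feedback update, can be sketched in NumPy. The Rocchio-style update and its weights are an illustrative choice, as the abstract does not give the feedback formula used:

```python
import numpy as np

def color_moments(image):
    """First three color moments (mean, standard deviation, skewness)
    per channel -- a common low-level color descriptor."""
    feats = []
    for c in range(image.shape[-1]):
        ch = image[..., c].astype(float)
        mean = ch.mean()
        std = ch.std()
        skew = np.cbrt(((ch - mean) ** 3).mean())       # signed cube root
        feats.extend([mean, std, skew])
    return np.array(feats)

def rocchio_update(query, relevant, irrelevant,
                   alpha=1.0, beta=0.75, gamma=0.15):
    """One Rocchio-style relevance-feedback step: move the query feature
    toward the mean of relevant results and away from irrelevant ones.
    (The weights are conventional defaults, not taken from the paper.)"""
    q = alpha * np.asarray(query, dtype=float)
    if len(relevant):
        q = q + beta * np.mean(relevant, axis=0)
    if len(irrelevant):
        q = q - gamma * np.mean(irrelevant, axis=0)
    return q
```

In a full system, the color moments would be concatenated with LBP texture and Canny edge features before SVM classification and feedback.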


2013 ◽  
Vol 32 (5) ◽  
pp. 1280-1282
Author(s):  
Fu-min LIU ◽  
Zhi-bin ZHANG
