Review of CBIR Related with Low Level and High Level Features

2016 ◽  
Vol 7 (1) ◽  
pp. 27-40 ◽  
Author(s):  
Tamil Kodi ◽  
G. Rosline Nesa Kumari ◽  
S. Maruthu Perumal

Content-based image retrieval (CBIR) is the process of retrieving images from a large image database and is a popular research area. CBIR enables users to interact with large databases by answering their queries in the form of images. This paper discusses the performance of a CBIR system, which is inherently constrained by the features adopted to represent the images in the database, and surveys a variety of methods for extracting features based on the low-level and high-level features of images given a query image. The main contribution of this work is a comprehensive comparison between low-level and high-level feature approaches to CBIR. To retrieve images effectively, this paper provides a platform for using methods that can focus on both low-level and high-level features, and explains how high-level features can retrieve images more relevant to the given query image.

Author(s):  
Siddhivinayak Kulkarni

Developments in technology and the Internet have led to an increase in the number of digital images and videos. Thousands of images are added to the WWW every day. A content-based image retrieval (CBIR) system typically takes a query example image, given by the user as input, from which low-level image features are extracted. These low-level image features are used to find images in the database that are most similar to the query image, ranked according to their similarity. This chapter evaluates various CBIR techniques based on fuzzy logic and neural networks and proposes a novel fuzzy approach that classifies colour images based on their content, poses queries in terms of natural language, and fuses the queries with neural networks for fast and efficient retrieval. A number of experiments were conducted on sets of images for classification and retrieval, and promising results were obtained.
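The pipeline described above, extracting low-level features from a query image and ranking database images by similarity, can be sketched in a few lines. This is a minimal illustration using a quantised RGB colour histogram and Euclidean distance; the function names and toy 4×4 "images" are invented for the example, not taken from the chapter.

```python
import numpy as np

def colour_histogram(image, bins=8):
    """Quantise each RGB channel into `bins` levels and return a
    normalised joint histogram as the image's low-level feature vector."""
    quantised = (image // (256 // bins)).reshape(-1, 3)
    hist = np.zeros((bins, bins, bins))
    for r, g, b in quantised:
        hist[r, g, b] += 1
    return (hist / hist.sum()).ravel()

def rank_by_similarity(query_feat, db_feats):
    """Rank database images by Euclidean distance to the query feature
    (most similar first)."""
    dists = np.linalg.norm(db_feats - query_feat, axis=1)
    return np.argsort(dists)

# Toy example: three 4x4 "images", the first identical to the query.
rng = np.random.default_rng(0)
query = rng.integers(0, 256, (4, 4, 3))
db = [query.copy(),
      rng.integers(0, 256, (4, 4, 3)),
      rng.integers(0, 256, (4, 4, 3))]
feats = np.stack([colour_histogram(im) for im in db])
ranking = rank_by_similarity(colour_histogram(query), feats)
```

A real system would replace the histogram with the chapter's fuzzy colour features and the distance with the fused neural-network similarity.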


2020 ◽  
Vol 8 (4) ◽  
pp. 1-20
Author(s):  
Girija G. Chiddarwar ◽  
S.Phani Kumar

Since shape is the most important feature for recognizing objects, it must be extracted accurately to enhance a content-based image retrieval system. Extracting the shape features of an object in an image is challenging, however, because typical shape descriptors capture only a limited number of shapes that are not invariant, cannot extract the features of overlapping objects, and suffer from the shape connotation gap between low-level and high-level features. To overcome these problems, this work proposes a Superintend Gross Silhouette Descriptor, which uses pixel coordinates in the spatial domain of the image to find the real shape of the object by means of straight lines, so it can detect overlapped objects as well as polygonal shapes. The extracted features are then trained with a random forest classifier, which classifies the features into a group of classes at maximum convergence to mitigate the shape connotation problem. At retrieval time, the features of the query image are tested against the trained features, and similarity is measured with a correlation coefficient method: a measure of linear correlation that renders the absolute value of the correlation coefficient and thereby preserves the strength of the relationship among features.
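The similarity measure described here, the absolute value of a linear (Pearson) correlation coefficient between feature vectors, can be sketched generically as follows; this is an illustration of the idea, not the authors' implementation.

```python
import numpy as np

def correlation_similarity(f1, f2):
    """Similarity of two feature vectors as the absolute value of the
    Pearson (linear) correlation coefficient, giving a score in [0, 1]
    that reflects the strength of the linear relationship."""
    r = np.corrcoef(f1, f2)[0, 1]
    return abs(r)
```

Taking the absolute value means strongly anti-correlated features score as highly as strongly correlated ones, which matches the abstract's emphasis on relationship strength rather than direction.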


2012 ◽  
Vol 2012 ◽  
pp. 1-19 ◽  
Author(s):  
Chih-Fong Tsai

Content-based image retrieval (CBIR) systems require users to query images by their low-level visual content; this not only makes it hard for users to formulate queries, but can also lead to unsatisfactory retrieval results. To this end, image annotation was proposed. The aim of image annotation is to automatically assign keywords to images, so that image retrieval users can query images by keywords. Image annotation can be regarded as an image classification problem: images are represented by some low-level features, and supervised learning techniques are used to learn the mapping between the low-level features and high-level concepts (i.e., class labels). One of the most widely used feature representation methods is bag-of-words (BoW). This paper reviews related work on improving and/or applying BoW for image annotation. Moreover, many recent works (from 2006 to 2012) are compared in terms of BoW feature generation methodology and experimental design. In addition, several issues in using BoW are discussed, and some important directions for future research are identified.
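The core BoW step, quantising local descriptors against a learned codebook and histogramming the resulting visual words, might look like this minimal sketch. The codebook is assumed to come from an earlier clustering step (e.g., k-means over training descriptors); all names and values here are illustrative.

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """Assign each local descriptor to its nearest visual word (codebook
    row) and return the normalised word-count histogram."""
    # Pairwise distances, shape (n_descriptors, n_words).
    d = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    words = d.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

# Two hypothetical "visual words" and four local descriptors.
codebook = np.array([[0.0, 0.0], [10.0, 10.0]])
descriptors = np.array([[0.1, 0.0], [9.9, 10.0], [10.0, 9.8], [0.0, 0.2]])
bow = bow_histogram(descriptors, codebook)
```

The resulting fixed-length histogram is what a supervised classifier consumes when learning the mapping from low-level features to class labels.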


2017 ◽  
Vol 19 (11) ◽  
pp. 2545-2560 ◽  
Author(s):  
Lei Ma ◽  
Hongliang Li ◽  
Fanman Meng ◽  
Qingbo Wu ◽  
King Ngi Ngan

2021 ◽  
Author(s):  
Kambiz Jarrah

The overall objective of this thesis is to present a methodology for guiding the adaptation of an RBF-based relevance feedback network, embedded in automatic content-based image retrieval (CBIR) systems, through the principle of unsupervised hierarchical clustering. The main focus of this thesis is two-fold: introducing a new member of the Self-Organizing Tree Map (SOTM) family, the Directed Self-Organizing Tree Map (DSOTM), which not only provides partial supervision of cluster generation by forcing divisions away from the query class, but also renders an objective verdict on the resemblance of the input pattern as its tree structure grows; and using a base-10 Genetic Algorithm (GA) approach to accurately determine the contribution of individual feature vectors to a successful retrieval in a so-called "feature weight detection process." The DSOTM is attractive in CBIR because it aims to reduce both user workload and subjectivity. Repetitive user-interaction steps are replaced by a DSOTM module, which adaptively guides relevance feedback, to bridge the gap between low-level image descriptors and high-level semantics. To further reduce this gap and achieve enhanced performance for the automatic CBIR system under study, a GA-based approach is proposed in conjunction with the DSOTM. The resulting framework, referred to as GA-based CBIR (GA-CBIR), aims to import human subjectivity by automatically adjusting the search process toward what the system evolves "to believe" is significant content within the query. In this engine, traditional GA operators work closely with the DSOTM to better attune the apparent discriminative characteristics that a human user would observe in an image.
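The "feature weight detection" idea, evolving a vector of base-10 digit weights so that a weighted distance pulls relevant images closer than irrelevant ones, can be illustrated with a toy genetic algorithm. The fitness function, population sizes, and feature values below are invented for the sketch and are not the thesis's actual engine.

```python
import numpy as np

rng = np.random.default_rng(42)

def weighted_dist(w, a, b):
    """Euclidean distance with per-feature weights."""
    return np.sqrt(np.sum(w * (a - b) ** 2))

# Toy features: the query matches `relevant` on feature 0 and
# `irrelevant` on feature 1, so a good weighting favours feature 0.
query      = np.array([1.0, 5.0])
relevant   = np.array([1.0, 0.0])
irrelevant = np.array([9.0, 5.0])

def fitness(w):
    # Reward weightings that place the relevant image closer to the
    # query than the irrelevant one.
    return weighted_dist(w, query, irrelevant) - weighted_dist(w, query, relevant)

# Base-10 GA: each gene is a digit 0-9 acting as a feature weight.
pop = rng.integers(0, 10, (20, 2))
for _ in range(30):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-10:]]      # keep the fittest half
    p1 = parents[rng.integers(0, 10, 20)]        # pick random parent pairs
    p2 = parents[rng.integers(0, 10, 20)]
    mask = rng.random((20, 2)) < 0.5             # uniform crossover
    kids = np.where(mask, p1, p2)
    mutate = rng.random(kids.shape) < 0.2        # digit-wise mutation
    kids[mutate] = rng.integers(0, 10, mutate.sum())
    pop = kids
best = pop[np.argmax([fitness(w) for w in pop])]
```

After a few generations the surviving weight vectors emphasise the discriminative feature, which is the role the GA plays alongside the DSOTM in the thesis's retrieval engine.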


2006 ◽  
Vol 06 (03) ◽  
pp. 357-375
Author(s):  
ZAHER AL AGHBARI

In the field of content-based image retrieval, there exists a gap between low-level descriptions of image content and the semantic needs of users querying image databases. This paper demonstrates an approach to image retrieval founded on classifying image regions hierarchically based on their semantics (e.g., sky, snow, rocks), which resemble people's perception, rather than on low-level features (e.g., color, texture, shape). In particular, we consider outdoor images and automatically classify their regions based on their semantics using support vector machines (SVMs). The SVM learns the semantics of the specified classes from specific low-level features of the test image regions. Image regions are first segmented using a hill-climbing approach, and those regions are then classified by the SVM. Such semantic classification allows the implementation of an intuitive query interface. As shown in our experiments, the high precision of semantic classification justifies the feasibility of our approach.
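The classification step can be illustrated with a self-contained stand-in: a linear SVM trained by sub-gradient descent on the hinge loss, applied to invented two-dimensional "region features" for a sky-vs-rock toy problem. The feature values and class labels are hypothetical, and the paper's actual features and training procedure differ.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Train a linear SVM (hinge loss + L2 penalty) by sub-gradient
    descent; labels y must be in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) < 1:        # margin violated: push
                w += lr * (yi * xi - lam * w)
                b += lr * yi
            else:                            # only shrink (regularise)
                w -= lr * lam * w
    return w, b

# Invented low-level region features (mean blueness, texture energy)
# for two semantic classes: "sky" (+1) vs "rock" (-1).
X = np.array([[0.9, 0.1], [0.8, 0.2], [0.85, 0.15],
              [0.2, 0.8], [0.1, 0.9], [0.15, 0.85]])
y = np.array([1, 1, 1, -1, -1, -1])
w, b = train_linear_svm(X, y)
pred = np.sign(X @ w + b)
```

Once regions carry semantic labels like these, the query interface can operate on words ("sky", "rocks") instead of raw colour or texture values.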


2021 ◽  
Author(s):  
Rui Zhang

This thesis is primarily focused on combining information at different levels of a statistical pattern classification framework for image annotation and retrieval. Based on previous studies in image annotation and retrieval, it is well recognized that low-level visual features, such as color and texture, and high-level features, such as textual description and context, are distinct yet complementary in terms of their distributions and their discriminative power for machine-based recognition and retrieval tasks. Effective feature combination for image annotation and retrieval has therefore become a desirable and promising direction from which the semantic gap can be further bridged. Motivated by this fact, the combination of the visual and context modalities, and that of different features within the visual domain, are tackled by developing two statistical pattern classification approaches, considering that features within the visual modality and those across different modalities exhibit different degrees of heterogeneity and should thus be treated differently. Regarding cross-modality feature combination, a Bayesian framework is proposed to integrate visual content and context, and has been applied to various image annotation and retrieval frameworks. For the combination of different low-level features in the visual domain, the problem is tackled with a novel method that combines texture and color features via a mixture model of their joint distribution. To evaluate the proposed frameworks, many datasets are employed in the experiments, including the COREL database for image retrieval and the MSRC, LabelMe, PASCAL VOC2009, and a self-collected animal image database for image annotation. Using various evaluation criteria, the first framework is shown to be more effective than methods based purely on low-level features or high-level context.
As for the second, the experimental results demonstrate not only its superior performance over other feature combination methods but also its ability to discover visual clusters using texture and color simultaneously. Moreover, a demo search engine based on the Bayesian framework has been implemented and is available online.
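The cross-modality Bayesian combination can be illustrated at its simplest: if the visual and context modalities are assumed conditionally independent given the class, per-class likelihoods multiply with the prior and are renormalised. The class names and probability values below are hypothetical, and the thesis's full framework is richer than this sketch.

```python
import numpy as np

def fuse_posterior(p_visual, p_context, prior):
    """Combine per-class likelihoods from the visual and context
    modalities under a conditional-independence (naive Bayes) assumption:
    P(c | v, t) is proportional to P(v | c) * P(t | c) * P(c)."""
    joint = p_visual * p_context * prior
    return joint / joint.sum()

# Hypothetical three-class example ("beach", "forest", "city"):
p_visual  = np.array([0.6, 0.3, 0.1])   # from a colour/texture model
p_context = np.array([0.5, 0.1, 0.4])   # from surrounding keywords
prior     = np.array([1/3, 1/3, 1/3])
post = fuse_posterior(p_visual, p_context, prior)
```

Even when neither modality alone is decisive, the fused posterior can commit strongly to one class, which is the practical benefit of combining complementary features.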


Author(s):  
Arun Kulkarni ◽  
Leonard Brown

With advances in computer technology and the World Wide Web, there has been an explosion in the amount and complexity of multimedia data that are generated, stored, transmitted, analyzed, and accessed. To extract useful information from this huge amount of data, many content-based image retrieval (CBIR) systems have been developed in the last decade. A typical CBIR system captures image features that represent image properties such as color, texture, or the shape of objects in the query image and tries to retrieve images from the database with similar features. Recent advances in CBIR systems include interactive systems based on relevance feedback. The main advantage of CBIR systems with relevance feedback is that they take into account the gap between high-level concepts and low-level features and the subjectivity of human perception of visual content. CBIR systems with relevance feedback are more efficient than conventional CBIR systems; however, they depend on human interaction. In this chapter, the authors describe a new approach to image storage and retrieval called association-based image retrieval (ABIR), which tries to mimic human memory: the human brain stores and retrieves images by association. They use a generalized bi-directional associative memory (GBAM) to store associations between feature vectors that represent the images stored in the database. Section I introduces the reader to CBIR systems. In Section II, the authors present the architecture of the ABIR system; Section III deals with preprocessing and feature extraction techniques; Section IV presents various models of GBAM; and in Section V, they present case studies.
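As a rough illustration of storing and recalling associations between pattern pairs, here is a classic Kosko-style bidirectional associative memory over bipolar patterns: the weight matrix is a sum of outer products, and recall is a thresholded matrix product. This is a generic BAM sketch with invented patterns, not the authors' generalized GBAM.

```python
import numpy as np

def bam_weights(pairs):
    """Build a bidirectional associative memory from bipolar (+1/-1)
    pattern pairs: W is the sum over k of outer(x_k, y_k)."""
    return sum(np.outer(x, y) for x, y in pairs)

def bam_recall(W, x):
    """One forward pass: recall the y-pattern associated with x by
    thresholding x @ W at zero."""
    return np.sign(x @ W)

# Two hypothetical (feature-vector, index-pattern) associations.
x1 = np.array([1, -1, 1, -1]); y1 = np.array([1, 1, -1])
x2 = np.array([-1, -1, 1, 1]); y2 = np.array([-1, 1, 1])
W = bam_weights([(x1, y1), (x2, y2)])
```

Presenting a stored feature vector retrieves its associated pattern directly from the weights, which is the "retrieval by association" behaviour the ABIR approach builds on.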

