Mathematical properties of information theoretic image similarity measures

2006 ◽  
Author(s):  
Oskar Škrinjar


Author(s):  
Hilmi Demir

Philosophers have used information theoretic concepts and theorems for philosophical purposes since the publication of Shannon’s seminal work, “A Mathematical Theory of Communication”. The efforts of different philosophers led to the formation of Philosophy of Information as a subfield of philosophy in the late 1990s (Floridi, in press). Although a significant part of those efforts was devoted to the mathematical formalism of information and communication theory, a thorough analysis of the fundamental mathematical properties of information-carrying relations has not yet been done. The point here is that such an analysis will shed light on some important controversies. The overall aim of this chapter is to begin this process of elucidation. It therefore includes a detailed examination of three semantic theories of information: Dretske’s entropy-based framework, Harms’ theory of mutual information, and Cohen and Meskin’s counterfactual theory. These three theories are selected because they represent all lines of reasoning available in the literature regarding the relevance of Shannon’s mathematical theory of information for philosophical purposes. Thus, the immediate goal is to cover the entire landscape of the literature with respect to this criterion. Moreover, this chapter offers a novel analysis of the transitivity of information-carrying relations.


2018 ◽  
Vol 2018 ◽  
pp. 1-18 ◽  
Author(s):  
Mohammed Abdulameer Aljanabi ◽  
Zahir M. Hussain ◽  
Song Feng Lu

Image similarity and image recognition are modern, rapidly growing technologies because of their wide use in digital image processing. The face image of a specific person can be recognized by finding the similarity between images of the same person's face, and this is what we address in detail in this paper. We design two new measures for image similarity and image recognition simultaneously. The proposed measures are based mainly on a combination of information theory and the joint histogram. Information theory has a high capability to predict the relationship between image intensity values, while the joint histogram is built by selecting a set of local pixel features to construct a multidimensional histogram. The proposed approach incorporates the concepts of entropy and a modified 1D version of the 2D joint histogram of the two images under test. Two entropy measures were considered, Shannon and Rényi, giving rise to two joint-histogram-based, information-theoretic similarity measures: SHS and RSM. The proposed methods have been tested against the powerful Zernike-moments approach with Euclidean and Minkowski distance metrics for image recognition, and against well-known statistical approaches for image similarity such as the structural similarity index measure (SSIM), the feature similarity index measure (FSIM), and the feature-based structural measure (FSM). A comparison with a recent information-theoretic measure (ISSIM) has also been considered. A measure of recognition confidence is introduced in this work, based on the similarity distance between the best match and the second-best match in the face database during face recognition. Simulation results using the AT&T and FEI face databases show that the proposed approaches outperform existing image recognition methods in terms of recognition confidence, while results on the TID2008 and IVC image databases show that SHS and RSM outperform existing similarity methods in terms of similarity confidence.
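The exact SHS and RSM formulas are not given in the abstract, but the core ingredients it names — a joint histogram of the two images under test, plus Shannon and Rényi entropies computed over it — can be sketched as follows. This is a minimal illustration, assuming grayscale images and a plain (unmodified) 2D joint histogram; the bin count and the Rényi order are illustrative choices, not the authors' parameters.

```python
import numpy as np

def joint_histogram(img_a, img_b, bins=32):
    """Normalized 2D joint histogram of two equally sized grayscale images."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(),
                                bins=bins, range=[[0, 256], [0, 256]])
    return hist / hist.sum()

def shannon_entropy(p):
    """Shannon entropy (in bits) of a normalized distribution."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def renyi_entropy(p, alpha=2.0):
    """Rényi entropy of order alpha (in bits); alpha=2 is collision entropy."""
    p = p[p > 0]
    return np.log2(np.sum(p ** alpha)) / (1.0 - alpha)

# Identical images concentrate the joint histogram on its diagonal, so the
# joint entropy is low; adding noise spreads mass off the diagonal.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64))
noisy = np.clip(img + rng.normal(0, 10, img.shape), 0, 255).astype(int)

h_same = shannon_entropy(joint_histogram(img, img))
h_noisy = shannon_entropy(joint_histogram(img, noisy))
r_same = renyi_entropy(joint_histogram(img, img))
r_noisy = renyi_entropy(joint_histogram(img, noisy))
print(h_same < h_noisy and r_same < r_noisy)
```

Either entropy of the joint histogram can thus serve as a (dis)similarity score: the more similar the images, the lower the joint entropy.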


2013 ◽  
Vol 427-429 ◽  
pp. 1537-1543 ◽  
Author(s):  
Ya Fen Wang ◽  
Feng Zhen Zhang ◽  
Shan Jian Liu ◽  
Meng Huang

In this paper, we study an information-theoretic approach to image similarity measurement for content-based image retrieval. In this novel scheme, similarities are measured by the amount of information the images contain about one another, i.e., their mutual information (MI). The approach is based on the premise that two similar images should have high mutual information, or equivalently, that the query image should convey a large amount of information about the images similar to it. The method first generates a set of statistically representative visual patterns and uses the distributions of these patterns as image content descriptors. To measure the similarity of two images, we develop a method to compute the mutual information between their content descriptors; two images with larger descriptor mutual information are regarded as more similar. We present experimental results demonstrating that mutual information is a more effective image similarity measure than those previously used in the literature, such as the Kullback-Leibler divergence and the L2 norm.
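The paper's descriptor construction is not detailed in the abstract, but the underlying quantity — mutual information between two discrete distributions, estimated from their joint histogram — is standard. A minimal sketch, using toy symbol sequences in place of the paper's visual-pattern descriptors:

```python
import numpy as np

def mutual_information(joint):
    """MI in bits from a normalized 2D joint distribution: sum over cells of
    p(x,y) * log2( p(x,y) / (p(x)*p(y)) )."""
    px = joint.sum(axis=1, keepdims=True)   # marginal of the first variable
    py = joint.sum(axis=0, keepdims=True)   # marginal of the second variable
    mask = joint > 0
    return np.sum(joint[mask] * np.log2(joint[mask] / (px @ py)[mask]))

def joint_hist(a, b, k=4):
    """Normalized joint histogram of two aligned symbol sequences over {0..k-1}."""
    h = np.zeros((k, k))
    for i, j in zip(a, b):
        h[i, j] += 1
    return h / h.sum()

# An identical copy carries maximal information about the original (MI = H(X));
# an independent sequence carries almost none.
x = np.random.default_rng(1).integers(0, 4, 10000)
y_indep = np.random.default_rng(2).integers(0, 4, 10000)

mi_same = mutual_information(joint_hist(x, x))
mi_indep = mutual_information(joint_hist(x, y_indep))
print(mi_same > mi_indep)
```

For retrieval, the query's descriptor would play the role of `x`, and database images would be ranked by descending MI against it.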


2018 ◽  
Vol 2018 ◽  
pp. 1-16
Author(s):  
Shiwei Yu ◽  
Ting-Zhu Huang

Information measures provide fundamental methodologies for analyzing uncertainty and unveiling the substantive characteristics of random variables. In this paper, we address different types of entropies through q-generalized Kolmogorov-Nagumo averages, which leads us to propose the survival Rényi entropy and the survival Tsallis entropy. We thereby make an inventory of eight types of entropies and classify them into two categories: density entropies, defined on density functions, and survival entropies, defined on survival functions. This study demonstrates that, for each type of density entropy, there exists a corresponding survival entropy. Furthermore, similarity measures and normalized similarity measures are proposed for each type of entropy. Although functionals of different types of information-theoretic metrics are diverse, they also exhibit some unifying features across all their manifestations. We present unifying frameworks for entropies, similarity measures, and normalized similarity measures, which help us treat the available information measures as a whole and move from one functional to another in harmony with various applications.
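The density-side members of this family are the classical Rényi and Tsallis entropies, both of which recover the Shannon entropy in the limit q → 1 (the survival variants replace the density with the survival function, which is not shown here). A minimal numerical check of that limit, in nats:

```python
import numpy as np

def shannon(p):
    """Shannon entropy in nats."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def renyi(p, q):
    """Rényi entropy of order q: log(sum p^q) / (1 - q), q != 1."""
    return np.log(np.sum(p ** q)) / (1.0 - q)

def tsallis(p, q):
    """Tsallis entropy of order q: (1 - sum p^q) / (q - 1), q != 1."""
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

p = np.array([0.5, 0.25, 0.125, 0.125])

# Both generalized entropies approach the Shannon entropy as q -> 1.
print(abs(renyi(p, 1.0001) - shannon(p)) < 1e-3)
print(abs(tsallis(p, 1.0001) - shannon(p)) < 1e-3)
```

The two families differ away from q = 1: Rényi is additive over independent variables while Tsallis is not, which is one reason the paper treats them as distinct entropy types rather than variants of a single functional.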


Author(s):  
Jun Long ◽  
Qunfeng Liu ◽  
Xinpan Yuan ◽  
Chengyuan Zhang ◽  
Junfeng Liu ◽  
...  

Image similarity measures play an important role in nearest neighbor search and duplicate detection for large-scale image datasets. Recently, Minwise Hashing (or MinHash) and related hashing algorithms have achieved strong performance in large-scale image retrieval systems. However, these applications require a large number of comparisons between image pairs, which can consume a lot of computation time and hurt performance. In order to quickly obtain the image pairs whose similarities are higher than a specified threshold T (e.g., 0.5), we propose a dynamic threshold filter of Minwise Hashing for image similarity measures. It greatly reduces the calculation time by terminating unnecessary comparisons in advance. We also find that the filter can be extended to other hashing algorithms whose estimators follow a binomial distribution, such as b-Bit Minwise Hashing and One Permutation Hashing. In this paper, we use the Bag-of-Visual-Words (BoVW) model based on the Scale Invariant Feature Transform (SIFT) to represent image features. Experiments on real image datasets show that the filter is correct and effective.
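The paper's exact filter is not specified in the abstract, but the idea it describes — abandoning a signature comparison as soon as the running match count can no longer reach the threshold T — can be sketched as follows. This is an illustrative implementation with a simple salted-hash MinHash, not the authors' code; the early-exit bound is the obvious optimistic one (assume every remaining hash matches).

```python
import random

def minhash_signature(s, num_hashes=128, seed=42):
    """MinHash signature of a set: the minimum salted hash per hash function."""
    rng = random.Random(seed)
    salts = [rng.getrandbits(32) for _ in range(num_hashes)]
    return [min(hash((salt, x)) for x in s) for salt in salts]

def filtered_similarity(sig_a, sig_b, threshold=0.5):
    """Estimate Jaccard similarity as the fraction of matching signature slots,
    terminating early once the estimate can no longer reach the threshold.
    Returns None for pairs pruned as certainly below the threshold."""
    n = len(sig_a)
    matches = 0
    for i in range(n):
        matches += sig_a[i] == sig_b[i]
        # Optimistic bound: even if every remaining slot matched,
        # could the final estimate still reach the threshold?
        if (matches + (n - 1 - i)) / n < threshold:
            return None
    return matches / n

a = minhash_signature(set(range(0, 100)))
b = minhash_signature(set(range(50, 150)))   # true Jaccard = 50/150 ~ 0.33
print(filtered_similarity(a, b, threshold=0.9))  # pruned -> None
print(filtered_similarity(a, a, threshold=0.5))  # identical -> 1.0
```

Because each slot match is an independent Bernoulli trial with success probability equal to the Jaccard similarity, the match count is binomial — the property the abstract cites as the condition for extending the filter to b-Bit Minwise Hashing and One Permutation Hashing.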


2015 ◽  
Vol 17 (47) ◽  
pp. 32053-32056 ◽  
Author(s):  
Hugo J. Bohórquez

The linear dependence between the density per particle σ and the electron density ρ facilitates the theoretical study of the N-scaling rules for quantum information functionals and their atomic partitions.

