information theoretic methods
Recently Published Documents

TOTAL DOCUMENTS: 54 (five years: 3)
H-INDEX: 8 (five years: 0)

Author(s): Shujian Yu, Luis Sanchez Giraldo, Jose Principe

We present a review of recent advances and emerging opportunities around the theme of analyzing deep neural networks (DNNs) with information-theoretic methods. We first discuss popular information-theoretic quantities and their estimators. We then introduce recent developments in information-theoretic learning principles (e.g., loss functions, regularizers, and objectives) and their parameterization with DNNs. Finally, we briefly review current uses of information-theoretic concepts in a few modern machine learning problems and list several emerging opportunities.
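The quantities and estimators this review begins with can be illustrated with the simplest case: a plug-in (maximum-likelihood) estimate of Shannon entropy and mutual information from discrete samples. This is an illustrative sketch, not code from the review; the function names are ours.

```python
import math
from collections import Counter

def entropy(samples):
    """Plug-in estimate of Shannon entropy H(X) in bits from a list of samples."""
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in Counter(samples).values())

def mutual_information(xs, ys):
    """I(X;Y) = H(X) + H(Y) - H(X,Y), estimated from paired samples."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

x = [0, 0, 1, 1]
print(mutual_information(x, [0, 0, 1, 1]))  # identical variables: I = H(X) = 1.0 bit
print(mutual_information(x, [0, 1, 0, 1]))  # independent pairing: I = 0.0 bits
```

Plug-in estimators like this are biased for small samples, which is one reason the review's discussion of estimators (e.g., for continuous, high-dimensional DNN representations) is nontrivial.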


2021, Vol 11 (1)
Author(s): David H. Foster

Abstract: Small changes in daylight in the environment can produce large changes in reflected light, even over short intervals of time. Do these changes limit the visual recognition of surfaces by their colour? To address this question, information-theoretic methods were used to estimate computationally the maximum number of surfaces in a sample that can be identified as the same after an interval. Scene data were taken from successive hyperspectral radiance images. With no illumination change, the average number of surfaces distinguishable by colour was of the order of 10,000. But with an illumination change, the average number still identifiable declined rapidly with change duration. In one condition, the number after two minutes was around 600, after ten minutes around 200, and after an hour around 70. These limits on identification are much lower than with spectral changes in daylight. No recoding of the colour signal is likely to recover surface identity lost in this uncertain environment.
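The counting principle behind such estimates is that the number of distinguishable categories is roughly 2 raised to the mutual information (in bits) between the signal before and after the change. Below is a toy discrete illustration of that principle, not Foster's hyperspectral pipeline; the data are invented.

```python
import math
from collections import Counter

def entropy(samples):
    """Plug-in Shannon entropy in bits."""
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in Counter(samples).values())

# Eight surfaces viewed twice. With no illumination change the second view
# matches the first, so I(first; second) = log2(8) = 3 bits, and
# 2**3 = 8 surfaces remain distinguishable.
first = list(range(8))
second = list(range(8))
mi = entropy(first) + entropy(second) - entropy(list(zip(first, second)))
print(2 ** mi)  # 8.0

# An illumination change that makes pairs of surfaces indistinguishable
# removes one bit of information and halves the count.
second_changed = [s // 2 * 2 for s in first]
mi2 = entropy(first) + entropy(second_changed) - entropy(list(zip(first, second_changed)))
print(2 ** mi2)  # 4.0
```

The abstract's drop from ~10,000 to ~70 surfaces corresponds, on this logarithmic scale, to a loss of roughly seven bits of identifying information.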


Author(s): Mariam Haroutunian, Tigran Badasyan

Maintaining the security of digital systems that hold huge amounts of data is one of the main concerns of IT specialists today. Anomaly detection is one solution to this challenge. Anomaly detection means finding patterns in a system that are not normal or that deviate from normal behavior. It has various applications in bioinformatics, image processing, cyber security, database security, etc. Many groups of methods are used for anomaly detection, including statistical methods, neural network methods, and information-theoretic methods. In this paper we survey the pros and cons of anomaly detection based on information-theoretic techniques.
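One classic information-theoretic idea in this family (offered here as a generic illustration, not necessarily the specific techniques this survey covers) scores each record by how much its removal lowers the dataset's entropy: rare, surprising events account for a disproportionate share of the entropy and so receive high scores.

```python
import math
from collections import Counter

def entropy(events):
    """Plug-in Shannon entropy in bits of a list of discrete events."""
    n = len(events)
    return -sum((c / n) * math.log2(c / n) for c in Counter(events).values())

def anomaly_scores(events):
    """Score each event by the entropy drop caused by removing it.
    Rare events lower the entropy most when removed, so they score highest."""
    base = entropy(events)
    return [base - entropy(events[:i] + events[i + 1:]) for i in range(len(events))]

# Toy audit log: one rare event stands out against routine logins.
log = ["login", "login", "login", "login", "root_shell", "login", "login"]
scores = anomaly_scores(log)
print(log[scores.index(max(scores))])  # root_shell
```

The leave-one-out loop is O(n²) as written; a practical detector would maintain running counts. This simplicity versus the need for a good choice of event representation is exactly the kind of trade-off a pros-and-cons survey weighs.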


2019
Author(s): Umang Varma, Justin Colacino, Anna Gilbert

Abstract: Single-cell RNA-sequencing (scRNA-seq) technologies have generated an expansive amount of new biological information, revealing new cellular populations and hierarchical relationships. A number of technologies complementary to scRNA-seq rely on the selection of a smaller number of marker genes (or features) to accurately differentiate cell types within a complex mixture of cells. In this paper, we benchmark differential expression methods against information-theoretic feature selection methods to evaluate the ability of these algorithms to identify small and efficient sets of genes that are informative about cell types. Unlike differential methods, which are strictly binary and univariate, information-theoretic methods can be used in any combination of binary or multiclass and univariate or multivariate settings. We show that, for some datasets, information-theoretic methods can reveal genes that are both distinct from those selected by traditional algorithms and at least as informative about the class labels. We also present detailed and principled theoretical analyses of these algorithms. All information-theoretic methods in this paper are implemented in our PicturedRocks Python package, which is compatible with the widely used scanpy package.
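The simplest member of this family, univariate mutual-information feature selection, ranks each (here, binarized) gene by how much information it carries about the cell-type label. The sketch below is a minimal illustration of the idea with invented toy data; it does not use the PicturedRocks API, and `select_markers` is a name of our own.

```python
import math
from collections import Counter

def entropy(samples):
    """Plug-in Shannon entropy in bits."""
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in Counter(samples).values())

def mutual_information(xs, ys):
    """I(X;Y) estimated from paired discrete samples."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

def select_markers(expression, labels, k=1):
    """Rank binarized gene-expression vectors by I(gene; cell type), keep top k."""
    ranked = sorted(expression, key=lambda g: mutual_information(expression[g], labels),
                    reverse=True)
    return ranked[:k]

# Toy data: gene_a tracks the cell-type label perfectly; gene_b is noise.
labels = ["T", "T", "B", "B", "T", "B"]
expression = {
    "gene_a": [1, 1, 0, 0, 1, 0],  # informative about labels
    "gene_b": [1, 0, 1, 0, 1, 0],  # uninformative
}
print(select_markers(expression, labels))  # ['gene_a']
```

Multivariate variants score candidate genes conditioned on genes already selected, which is what lets them avoid picking several redundant markers of the same population.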


Entropy, 2018, Vol 20 (8), pp. 551
Author(s): Hector Zenil, Narsis Kiani, Jesper Tegnér

Information-theoretic measures have been useful in quantifying network complexity. Here we briefly survey and contrast (algorithmic) information-theoretic methods that have been used to characterize graphs and networks. We illustrate the strengths and limitations of Shannon's entropy, lossless compressibility, and algorithmic complexity when used to identify aspects and properties of complex networks. We review the fragility of computable measures on the one hand and the invariant properties of algorithmic measures on the other, demonstrating how current approaches to algorithmic complexity are misguided and suffer from limitations similar to those of traditional statistical approaches such as Shannon entropy. Finally, we review some current definitions of algorithmic complexity that are used in analyzing labelled and unlabelled graphs. This analysis opens up several new opportunities to advance beyond traditional measures.
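A concrete example of the computable measures being contrasted is the Shannon entropy of a graph's degree distribution. The sketch below (our illustration, not the paper's code) also shows the fragility the abstract alludes to: the value depends entirely on which feature of the graph one chooses to take the entropy of.

```python
import math
from collections import Counter

def degree_entropy(edges):
    """Shannon entropy in bits of a graph's degree distribution.
    `edges` is a list of (u, v) pairs for an undirected graph."""
    degrees = Counter()
    for u, v in edges:
        degrees[u] += 1
        degrees[v] += 1
    n = len(degrees)
    degree_counts = Counter(degrees.values())
    # Written as sum of (c/n) * log2(n/c) so a regular graph gives exactly 0.0.
    return sum((c / n) * math.log2(n / c) for c in degree_counts.values())

# A 4-cycle is degree-regular (every node has degree 2): entropy 0.
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(degree_entropy(cycle))  # 0.0

# A star on 4 nodes mixes degree 3 and degree 1: positive entropy.
star = [(0, 1), (0, 2), (0, 3)]
print(degree_entropy(star))   # ~0.81 bits
```

A regular graph can still be structurally complex, yet its degree entropy is zero; this feature-dependence is one of the fragilities that motivate the paper's turn to algorithmic (Kolmogorov-style) measures.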

