Optimal coding
Recently Published Documents

Total documents: 112 (last five years: 3)
H-index: 12 (last five years: 0)

Entropy, 2021, Vol. 23 (10), pp. 1300
Authors: Josef Klafka, Daniel Yurovsky

Optimal coding theories of language predict that speakers keep the amount of information in their utterances relatively uniform, subject to the constraints imposed by their language. But how much do these constraints influence information structure, and how does this influence vary across languages? We present a novel method for characterizing the information structure of sentences across a diverse set of languages. While the structure of English is broadly consistent with the shape predicted by optimal coding, many languages are not consistent with this prediction. We then show that the characteristic information curves of languages are partly related to a variety of typological features, from phonology to word order. These results are an important step toward establishing upper bounds on the extent to which linguistic codes can be optimal for communication.
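
The "information curve" idea can be made concrete with a minimal sketch (not the authors' method): estimate each word's surprisal, -log2 P(word | context), with a simple bigram model and average it by sentence position. The toy corpus and add-one smoothing below are illustrative assumptions.

```python
import math
from collections import Counter, defaultdict

# Toy corpus; a real analysis would use large, language-specific corpora.
corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
    "a cat saw the dog run".split(),
]

# Count unigrams and bigrams, with <s> marking sentence starts.
unigrams, bigrams = Counter(), Counter()
for sent in corpus:
    prev = "<s>"
    unigrams[prev] += 1
    for w in sent:
        unigrams[w] += 1
        bigrams[(prev, w)] += 1
        prev = w

vocab_size = len(unigrams)

def surprisal(prev, w):
    """Surprisal -log2 P(w | prev) under an add-one-smoothed bigram model."""
    p = (bigrams[(prev, w)] + 1) / (unigrams[prev] + vocab_size)
    return -math.log2(p)

# Average surprisal at each sentence position: the information curve.
by_position = defaultdict(list)
for sent in corpus:
    prev = "<s>"
    for i, w in enumerate(sent):
        by_position[i].append(surprisal(prev, w))
        prev = w

for i in sorted(by_position):
    avg = sum(by_position[i]) / len(by_position[i])
    print(f"position {i}: {avg:.2f} bits")
```

Under strictly uniform information density this position-by-position average would be roughly flat; peaks and troughs trace the curve shape the abstract refers to.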


2021
Authors: Kuan Hsieh, Cynthia Rush, Ramji Venkataramanan

Authors: Ramon Ferrer-i-Cancho, Christian Bentz, Caio Seguin

2020
Author: Oren Forkosh

Neural networks seem able to handle almost any task they face. This feat involves coping efficiently with different data types, at multiple scales, and with varying statistical properties. Here, we show that this so-called optimal coding can occur at the single-neuron level and does not require adaptation. Differentiator neurons, i.e., neurons that spike whenever their input stimulus increases, are capable of capturing the statistics and scale of practically any stimulus they encounter. We show this optimality both analytically and in simulations, which demonstrate how an ideal neuron can handle drastically different probability distributions. While the mechanism we present is an oversimplification of "real" neurons and does not necessarily capture all neuron types, this is also its strength, since it can function alongside other neuronal goals such as data manipulation and learning. By illustrating the simplicity of the neural response to complex stimuli, this result may also point to a straightforward way to improve current artificial neural networks.
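
The distribution- and scale-invariance claimed for differentiator neurons can be seen in a few lines. This is a sketch, not the paper's model: for i.i.d. continuous input, a unit that spikes whenever the stimulus increases fires at the same rate regardless of the input's distribution family or scale, since P(x_t > x_{t-1}) = 1/2. The two stimulus distributions below are illustrative assumptions.

```python
import random

def differentiator_spikes(stimulus):
    """Spike (1) whenever the input increases relative to the previous
    sample; stay silent (0) otherwise."""
    return [1 if b > a else 0 for a, b in zip(stimulus, stimulus[1:])]

random.seed(0)

# Two i.i.d. stimuli with drastically different distributions and scales.
gaussian = [random.gauss(0.0, 1.0) for _ in range(100_000)]
heavy_tailed = [random.paretovariate(1.5) * 1e6 for _ in range(100_000)]

for name, stim in [("gaussian", gaussian), ("heavy-tailed", heavy_tailed)]:
    rate = sum(differentiator_spikes(stim)) / (len(stim) - 1)
    print(f"{name:>12}: spike rate = {rate:.3f}")
```

Both streams produce a spike rate near 0.5 despite differing in distribution family and by six orders of magnitude in scale, which is the sense in which the differentiator's response is insensitive to the stimulus statistics.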


2020, Vol. 66 (3), pp. 1920-1933
Authors: Qian Yu, Mohammad Ali Maddah-Ali, A. Salman Avestimehr

Entropy, 2019, Vol. 21 (10), pp. 965
Authors: Marta Zárraga-Rodríguez, Jesús Gutiérrez-Gutiérrez, Xabier Insausti

In this paper, we present a low-complexity coding strategy for encoding (compressing) finite-length data blocks of Gaussian vector sources. We show that for large enough data blocks of a Gaussian asymptotically wide-sense stationary (AWSS) vector source, the rate of the coding strategy tends to the lowest possible rate. Besides being a low-complexity strategy, it does not require knowledge of the correlation matrix of such data blocks. We also show that this coding strategy is appropriate for encoding the most relevant Gaussian vector sources, namely wide-sense stationary (WSS), moving average (MA), autoregressive (AR), and ARMA vector sources.
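
The abstract does not spell out the construction, but the property it relies on, coding without knowing the correlation matrix, can be sketched under one standard assumption (a sketch, not necessarily the paper's strategy): the DFT, a fixed source-independent transform, asymptotically diagonalizes the Toeplitz correlation matrix of a stationary Gaussian source, so the transformed coefficients can be encoded independently. The AR(1) source below is an illustrative assumption, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Blocks of a Gaussian AR(1) source, a simple stationary vector source.
n, blocks, a = 64, 5000, 0.9
x = np.zeros((blocks, n))
x[:, 0] = rng.standard_normal(blocks) / np.sqrt(1 - a**2)  # stationary start
for t in range(1, n):
    x[:, t] = a * x[:, t - 1] + rng.standard_normal(blocks)

# A fixed unitary DFT, chosen without any knowledge of the source's
# Toeplitz correlation matrix, which it asymptotically diagonalizes.
F = np.fft.fft(np.eye(n)) / np.sqrt(n)
y = x @ F                                     # unitary DFT of each block

# Empirical correlation of the transformed coefficients is approximately
# diagonal, so scalar-coding each coefficient is close to optimal.
C = (y.conj().T @ y) / blocks
variances = np.abs(np.diag(C))
off = np.abs(C - np.diag(np.diag(C))).max()
print(f"largest off-diagonal term: {off:.2f}")
print(f"coefficient variances range: {variances.min():.2f} to {variances.max():.2f}")
```

The off-diagonal terms shrink relative to the dominant coefficient variances as the block length n grows, which is what lets a fixed transform approach the optimal rate without estimating the correlation matrix.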

