systematic encoding
Recently Published Documents


TOTAL DOCUMENTS: 20 (five years: 1)

H-INDEX: 4 (five years: 0)

2021
Author(s): Elizabeth Musz, Janice Chen

When we retell our past experiences, we aim to reproduce some version of the original events; this reproduced version is often temporally compressed relative to the original. How does such compression of memories manifest in brain activity? One possibility is that a compressed retrieved memory manifests as a neural pattern that is more dissimilar to the original than a more detailed or vivid memory would be. However, we argue that measuring raw dissimilarity alone is insufficient, as it conflates a variety of interesting and uninteresting changes. To address this problem, we examine brain pattern changes that are consistent across people. We show that temporal compression in individuals' retelling of past events predicts systematic encoding-to-recall transformations in a number of higher associative regions. These findings elucidate how neural representations are not simply reactivated, but can also be transformed due to temporal compression during a universal form of human memory expression: verbal retelling.
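The core measurement behind this abstract can be sketched in a few lines: compare each event's activity pattern at encoding with its pattern at recall. The sketch below uses synthetic data and correlation distance purely for illustration; the variable names, region size, and noise model are assumptions, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical voxel patterns for one brain region: one pattern per event
# at encoding (original experience) and at recall (verbal retelling).
n_events, n_voxels = 10, 50
encoding = rng.standard_normal((n_events, n_voxels))
# Simulate recall as a noisy, partially transformed copy of encoding.
recall = 0.6 * encoding + 0.8 * rng.standard_normal((n_events, n_voxels))

def corr_distance(a, b):
    """1 - Pearson correlation between two activity patterns."""
    return 1.0 - np.corrcoef(a, b)[0, 1]

# Encoding-to-recall dissimilarity for each event; the paper's point is
# that this raw number alone cannot distinguish a systematic
# transformation from mere noise, which is why consistency across
# people must also be measured.
dissim = np.array([corr_distance(encoding[i], recall[i])
                   for i in range(n_events)])
print(dissim.round(2))
```

Per-event dissimilarities like these could then be related to behavioral measures such as the temporal compression of each retelling.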


Entropy, 2020, Vol. 22 (11), pp. 1301
Author(s): Erdal Arıkan

Polarization-adjusted convolutional (PAC) codes are a class of codes that combine channel polarization with convolutional coding. PAC codes are of interest because of their high performance. This paper presents a systematic encoding and shortening method for PAC codes. Systematic encoding is important for lowering the bit-error rate (BER) of PAC codes. Shortening is important for adjusting the block length of PAC codes. It is shown that systematic encoding and shortening of PAC codes can be carried out in a unified framework.
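To make the term concrete: a code is systematically encoded when the message bits appear verbatim inside the codeword, with redundancy appended around them. The sketch below illustrates this for an ordinary (7,4) Hamming code with a generator matrix in standard form G = [I | P]; it is a generic illustration of systematic encoding only, not the PAC construction from the paper, which additionally involves the polar transform and convolutional precoding.

```python
import numpy as np

# Systematic generator matrix G = [I | P] for the (7,4) Hamming code.
# In systematic form, the 4 message bits appear verbatim in the first
# 4 positions of every codeword; the remaining 3 are parity bits.
P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])

def encode_systematic(msg):
    """Encode a length-4 binary message; arithmetic is over GF(2)."""
    return (np.array(msg) @ G) % 2

cw = encode_systematic([1, 0, 1, 1])
print(cw)  # -> [1 0 1 1 0 1 0]; first four bits equal the message
```

Because the message is directly readable from the codeword, bit errors in the systematic positions translate one-to-one into message bit errors, which is the mechanism behind the BER advantage the abstract mentions.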


Author(s): Adrian Wheeldon, Rishad Shafik, Tousif Rahman, Jie Lei, Alex Yakovlev, ...

Energy efficiency continues to be the core design challenge for artificial intelligence (AI) hardware designers. In this paper, we propose a new AI hardware architecture targeting Internet of Things applications. The architecture is founded on the principle of learning automata, defined using propositional logic. This logic-based underpinning enables a low energy footprint as well as high learning accuracy during training and inference, which are crucial requirements for efficient AI with a long operating life. We present the first insights into this new architecture in the form of a custom-designed integrated circuit for pervasive applications. Fundamental to this circuit is the systematic encoding of binarized input data fed into maximally parallel logic blocks. The allocation of these blocks is optimized through a design exploration and automation flow using field-programmable gate array (FPGA)-based fast prototypes and software simulations. The design flow allows for an expedited hyperparameter search to meet the conflicting requirements of energy frugality and high accuracy. Extensive validations on the hardware implementation of the new architecture using single- and multi-class machine learning datasets show potential for significantly lower energy than existing AI hardware architectures. In addition, we demonstrate test accuracy and robustness matching the software implementation, outperforming other state-of-the-art machine learning algorithms. This article is part of the theme issue ‘Advanced electromagnetic non-destructive evaluation and smart monitoring’.
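The "systematic encoding of binarized input data" step can be illustrated with thermometer encoding, a common way to booleanize continuous features for logic-based learners such as learning automata. The thresholds and feature values below are illustrative assumptions, not the paper's design; the point is only how a real-valued input becomes a fixed-width bit pattern suitable for parallel logic blocks.

```python
import numpy as np

def thermometer_encode(x, thresholds):
    """One bit per threshold: 1 if x >= t else 0 (thermometer code)."""
    return np.array([1 if x >= t else 0 for t in thresholds],
                    dtype=np.uint8)

# Illustrative thresholds; a real design would derive these from the
# training data's distribution.
thresholds = [0.25, 0.5, 0.75]
features = [0.1, 0.6, 0.9]

# Each feature expands into 3 bits; concatenation gives the bit vector
# that would be fed into the parallel logic blocks.
bits = np.concatenate([thermometer_encode(x, thresholds)
                       for x in features])
print(bits)  # -> [0 0 0 1 1 0 1 1 1]
```

A fixed, data-independent bit layout like this is what lets the hardware allocate one logic block per bit position and evaluate them all in parallel.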


2019, Vol. 8 (3), pp. 877-880
Author(s): Ruslan Morozov, Peter Trifonov

2017, Vol. E100.D (1), pp. 42-51
Author(s): Minoru Kuribayashi, Masakatu Morii
Keyword(s): QR Code
