scratch pad memory
Recently Published Documents


TOTAL DOCUMENTS: 91 (FIVE YEARS: 0)

H-INDEX: 18 (FIVE YEARS: 0)

Author(s): Hao Wen, Wei Zhang

Leakage energy has become an increasingly large fraction of total energy consumption, making it important to reduce it in order to improve the overall energy efficiency of embedded processors. In this paper, we explore how to reduce cache leakage energy efficiently in a hybrid Scratch-Pad Memory (SPM) and cache architecture. Unlike a stand-alone cache, in the hybrid architecture frequently used data may be allocated to the SPM for rapid retrieval, which reduces the access frequency to the cache. It is therefore possible to place the cache lines of the hybrid SPM-cache into the low-power mode more aggressively than traditional leakage management does for regular caches, saving more leakage energy without significant performance degradation. We also propose a Hybrid Drowsy-Gated VDD (HDG) technique, which adaptively exploits both short and long idle intervals between cache accesses to minimize leakage energy with insignificant performance overhead. In addition, we discuss the impact of cache size on the idle intervals of accesses, which affects the efficiency of leakage management methods that exploit those intervals to reduce leakage energy.
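The HDG idea of demoting a cache line through progressively deeper low-power states as its idle interval grows can be sketched as follows. This is a minimal illustrative model, not the paper's implementation; the threshold values, class names, and mode labels are all assumptions made for the example.

```python
# Hypothetical sketch of a hybrid drowsy/gated-VDD policy: a line that stays
# idle past a short threshold enters drowsy mode (reduced VDD, data retained);
# past a long threshold its VDD is gated (near-zero leakage, data lost).
# Thresholds are illustrative assumptions, not values from the paper.

DROWSY_THRESHOLD = 4    # idle cycles before entering drowsy mode (assumed)
GATED_THRESHOLD = 64    # idle cycles before gating VDD entirely (assumed)

class CacheLine:
    def __init__(self):
        self.idle_cycles = 0
        self.mode = "active"   # active -> drowsy -> gated

    def tick(self):
        """Advance one cycle without an access; demote the mode at thresholds."""
        self.idle_cycles += 1
        if self.idle_cycles >= GATED_THRESHOLD:
            self.mode = "gated"    # VDD gated: minimal leakage, contents lost
        elif self.idle_cycles >= DROWSY_THRESHOLD:
            self.mode = "drowsy"   # reduced VDD: low leakage, contents retained

    def access(self):
        """An access wakes the line; a gated line must be refetched (a miss)."""
        miss = self.mode == "gated"
        self.mode = "active"
        self.idle_cycles = 0
        return miss

line = CacheLine()
for _ in range(10):
    line.tick()
assert line.mode == "drowsy"   # short idle interval -> drowsy mode
for _ in range(60):
    line.tick()
assert line.mode == "gated"    # long idle interval -> gated VDD
assert line.access() is True   # waking a gated line costs a refetch
```

Because SPM absorbs the hottest accesses in the hybrid architecture, cache idle intervals lengthen, so such a policy can demote lines more aggressively than it could for a stand-alone cache.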


Author(s): Chabane Hemdani, Rachida Aoudjit, Mustapha Lalam, Khaled Slimani

<p>This paper proposes a low-cost architecture to improve the management of Scratch Pad Memory (SPM) in dynamic and multitasking modes. Our SPM management strategy, based on a Programmable Automaton implemented in a Xilinx Virtex-5 FPGA, differs entirely from prior research. SPM is generally managed in software (through explicit programming logic or by the compiler), but our Programmable Automaton handles access to the SPM, moving code or data and liberating SPM space. After this step, software takes over content management of the SPM (deciding which parts of code or data should be placed in the SPM, and locating the Heap and Stack spaces). The performance of programs is thus improved by minimizing access latency to DRAM (Dynamic Random Access Memory, i.e., main memory).</p>
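The division of labor described above can be modeled in a few lines: software decides *what* belongs in SPM, while the automaton carries out the moves and liberates space. This is a behavioral sketch under assumed names and sizes, not the paper's hardware design.

```python
# Illustrative model of the software/automaton split: the automaton only moves
# blocks between SPM and DRAM and frees SPM space; software chooses placements.
# SPM_SIZE, block names, and the eviction order are assumptions for the example.

SPM_SIZE = 4  # SPM capacity in blocks (assumed)

class Automaton:
    """Models the programmable automaton: it performs moves, not placement policy."""
    def __init__(self):
        self.spm = {}    # block name -> data, capacity SPM_SIZE
        self.dram = {}

    def move_to_spm(self, name, data):
        if len(self.spm) >= SPM_SIZE:
            # Evict the oldest-placed block back to DRAM to liberate SPM space.
            victim, vdata = next(iter(self.spm.items()))
            del self.spm[victim]
            self.dram[victim] = vdata
        self.spm[name] = data

# Software layer: selects hot blocks (content management), then asks the
# automaton to carry out each move.
auto = Automaton()
for i in range(6):                 # place 6 hot blocks into a 4-block SPM
    auto.move_to_spm(f"block{i}", i)

assert len(auto.spm) == SPM_SIZE   # SPM never exceeds its capacity
assert "block0" in auto.dram       # early blocks were evicted back to DRAM
assert "block5" in auto.spm        # the most recent placement resides in SPM
```

Offloading the move-and-free mechanics to hardware is what lets the software layer focus purely on placement decisions, reducing the DRAM accesses that dominate latency.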


Author(s): Ing-Jer Huang, Chun-Hung Lai, Yun-Chung Yang, Hsu-Kang Dow, Hung-Lun Chen

2015, Vol 8 (0), pp. 100-104
Author(s): Takuya Hatayama, Hideki Takase, Kazuyoshi Takagi, Naofumi Takagi

2014, Vol 6 (4), pp. 69-72
Author(s): Meikang Qiu, Zhi Chen, Meiqin Liu
