Hybrid circuit-switched network for on-chip communication in large-scale chip-multiprocessors

2014 ◽  
Vol 74 (9) ◽  
pp. 2818-2830 ◽  
Author(s):  
Hongyin Luo ◽  
Shaojun Wei ◽  
Deming Chen ◽  
Donghui Guo

2016 ◽  
Vol 65 (5) ◽  
pp. 1656-1662 ◽  
Author(s):  
Pejman Lotfi-Kamran ◽  
Mehdi Modarressi ◽  
Hamid Sarbazi-Azad

2010 ◽  
pp. 3-20 ◽  
Author(s):  
Mark A. Anders ◽  
Himanshu Kaul ◽  
Ram K. Krishnamurthy ◽  
Shekhar Y. Borkar

2014 ◽  
Vol 13 (4) ◽  
pp. 1-36 ◽  
Author(s):  
Luis Angel D. Bathen ◽  
Nikil D. Dutt

Nanophotonics ◽  
2020 ◽  
Vol 9 (13) ◽  
pp. 4193-4198 ◽  
Author(s):  
Midya Parto ◽  
William E. Hayenga ◽  
Alireza Marandi ◽  
Demetrios N. Christodoulides ◽  
Mercedeh Khajavikhan

Finding the solution to a large category of optimization problems, known as the NP-hard class, requires solution times that grow exponentially on conventional computers. Lately, there have been intense efforts to develop alternative computational methods capable of addressing such tasks. In this regard, spin Hamiltonians, which originally arose in describing exchange interactions in magnetic materials, have recently been pursued as a powerful computational tool. Along these lines, it has been shown that solving NP-hard problems can be effectively recast as finding the ground state of certain types of classical spin models. Here, we show that arrays of metallic nanolasers provide an ultra-compact, on-chip platform capable of implementing spin models, including the classical Ising and XY Hamiltonians. Various regimes of behavior, including ferromagnetic and antiferromagnetic ordering as well as geometric frustration, are observed in these structures. Our work paves the way towards nanoscale spin emulators that enable efficient modeling of large-scale complex networks.
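For reference, the two spin models named in this abstract have standard classical forms (textbook notation; the couplings J_{ij} and the interaction graph are chosen per problem instance, and this is not necessarily the paper's specific parameterization):

    H_{\mathrm{Ising}} = -\sum_{\langle i,j \rangle} J_{ij} \, s_i s_j , \qquad s_i \in \{-1, +1\}

    H_{\mathrm{XY}} = -\sum_{\langle i,j \rangle} J_{ij} \cos(\theta_i - \theta_j) , \qquad \theta_i \in [0, 2\pi)

As a concrete instance of the NP-hard mapping: Max-Cut on a graph G corresponds to the Ising model with J_{ij} = -1 on every edge of G, so a ground-state spin assignment partitions the vertices into the two sides of a maximum cut.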


2021 ◽  
Vol 64 (6) ◽  
pp. 107-116 ◽  
Author(s):  
Yakun Sophia Shao ◽  
Jason Clemons ◽  
Rangharajan Venkatesan ◽  
Brian Zimmer ◽  
Matthew Fojtik ◽  
...  

Package-level integration using multi-chip modules (MCMs) is a promising approach for building large-scale systems. Compared to a large monolithic die, an MCM combines many smaller chiplets into a larger system, substantially reducing fabrication and design costs. Current MCMs typically contain only a handful of coarse-grained large chiplets because of the high area, performance, and energy overheads associated with inter-chiplet communication. This work investigates and quantifies the costs and benefits of using MCMs with fine-grained chiplets for deep-learning inference, an application domain with large compute and on-chip storage requirements. To evaluate the approach, we architected, implemented, fabricated, and tested Simba, a 36-chiplet prototype MCM system for deep-learning inference. Each chiplet achieves 4 TOPS peak performance, and the 36-chiplet MCM package achieves up to 128 TOPS and up to 6.1 TOPS/W. The MCM is configurable to support a flexible mapping of DNN layers to the distributed compute and storage units. To mitigate inter-chiplet communication overheads, we introduce three tiling optimizations that improve data locality. These optimizations achieve up to 16% speedup compared to the baseline layer mapping. Our evaluation shows that Simba can process 1988 images/s running ResNet-50 with a batch size of one, delivering an inference latency of 0.50 ms.
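A minimal back-of-the-envelope sketch in Python cross-checking the figures this abstract reports; every input below comes from the abstract itself, and the "implied power" line is an inference from two reported figures, not a stated specification:

    # Sanity-check the Simba numbers quoted in the abstract above.
    chiplets = 36
    tops_per_chiplet = 4.0        # reported peak per chiplet (TOPS)
    package_peak_tops = 128.0     # reported package peak (TOPS)
    efficiency_tops_per_w = 6.1   # reported peak efficiency (TOPS/W)

    # The naive sum over chiplets is 144 TOPS; the reported package peak is
    # lower, suggesting the per-chiplet peaks are not all sustained at once
    # (an inference from the two reported figures).
    naive_sum_tops = chiplets * tops_per_chiplet
    print(f"naive chiplet sum: {naive_sum_tops:.0f} TOPS vs reported {package_peak_tops:.0f} TOPS")

    # At batch size one, throughput and latency are reciprocals:
    images_per_s = 1988
    latency_ms = 1000.0 / images_per_s
    print(f"implied ResNet-50 latency: {latency_ms:.2f} ms")  # ~0.50 ms, matching the abstract

    # Implied power draw at peak efficiency (derived, not stated):
    print(f"implied power at peak: {package_peak_tops / efficiency_tops_per_w:.0f} W")  # ~21 W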

