ChIP-on-chip significance analysis reveals large-scale binding and regulation by human transcription factor oncogenes

2008 ◽  
Vol 106 (1) ◽  
pp. 244-249 ◽  
Author(s):  
A. A. Margolin ◽  
T. Palomero ◽  
P. Sumazin ◽  
A. Califano ◽  
A. A. Ferrando ◽  
...  

2007 ◽  
Vol 8 (S8) ◽  
Author(s):  
Adam A Margolin ◽  
Teresa Palomero ◽  
Adolfo A Ferrando ◽  
Andrea Califano ◽  
Gustavo Stolovitzky

2015 ◽  
Vol 47 (12) ◽  
pp. 1393-1401 ◽  
Author(s):  
Matthew T Maurano ◽  
Eric Haugen ◽  
Richard Sandstrom ◽  
Jeff Vierstra ◽  
Anthony Shafer ◽  
...  

2016 ◽  
Vol 48 (1) ◽  
p. 101 ◽  
Author(s):  
Matthew T Maurano ◽  
Eric Haugen ◽  
Richard Sandstrom ◽  
Jeff Vierstra ◽  
Anthony Shafer ◽  
...  

Nanophotonics ◽  
2020 ◽  
Vol 9 (13) ◽  
pp. 4193-4198 ◽  
Author(s):  
Midya Parto ◽  
William E. Hayenga ◽  
Alireza Marandi ◽  
Demetrios N. Christodoulides ◽  
Mercedeh Khajavikhan

Abstract: Finding the solution to a large category of optimization problems, known as the NP-hard class, requires exponentially increasing solution time on conventional computers. Lately, there have been intense efforts to develop alternative computational methods capable of addressing such tasks. In this regard, spin Hamiltonians, which originally arose in describing exchange interactions in magnetic materials, have recently been pursued as a powerful computational tool. Along these lines, it has been shown that solving NP-hard problems can be effectively mapped onto finding the ground state of certain types of classical spin models. Here, we show that arrays of metallic nanolasers provide an ultra-compact, on-chip platform capable of implementing spin models, including the classical Ising and XY Hamiltonians. Various regimes of behavior, including ferromagnetic, antiferromagnetic, and geometric frustration, are observed in these structures. Our work paves the way towards nanoscale spin emulators that enable efficient modeling of large-scale complex networks.
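The NP-hard-to-Ising mapping mentioned in the abstract can be illustrated with a toy example. The sketch below is not from the paper: it brute-forces the ground state of a small classical Ising Hamiltonian, using the standard Max-Cut correspondence (set J_ij equal to the edge weight; minimizing H = Σ J_ij s_i s_j maximizes the cut). The triangle with all-antiferromagnetic couplings also shows the geometric frustration the abstract refers to: no spin assignment can anti-align all three pairs.

```python
from itertools import product

def ising_ground_state(J, n):
    """Brute-force the ground state of H = sum_{i<j} J[i][j]*s_i*s_j
    over spins s_i in {-1, +1}. Feasible only for small n."""
    best_spins, best_energy = None, float("inf")
    for spins in product((-1, 1), repeat=n):
        energy = sum(J[i][j] * spins[i] * spins[j]
                     for i in range(n) for j in range(i + 1, n))
        if energy < best_energy:
            best_spins, best_energy = spins, energy
    return best_spins, best_energy

# Triangle graph with antiferromagnetic couplings (J_ij = +1):
# geometrically frustrated, so the best achievable energy is -1, not -3.
J = [[0, 1, 1],
     [0, 0, 1],
     [0, 0, 0]]
spins, energy = ising_ground_state(J, 3)
# Max-Cut value recovered from the ground-state energy:
# cut = (total edge weight - H_min) / 2
max_cut = (sum(sum(row) for row in J) - energy) // 2
print(spins, energy, max_cut)  # (-1, -1, 1) -1 2
```

Physical spin emulators such as the nanolaser arrays described here aim to find this minimum-energy configuration without the exponential enumeration used above.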


2021 ◽  
Vol 64 (6) ◽  
pp. 107-116 ◽  
Author(s):  
Yakun Sophia Shao ◽  
Jason Cemons ◽  
Rangharajan Venkatesan ◽  
Brian Zimmer ◽  
Matthew Fojtik ◽  
...  

Package-level integration using multi-chip modules (MCMs) is a promising approach for building large-scale systems. Compared to a large monolithic die, an MCM combines many smaller chiplets into a larger system, substantially reducing fabrication and design costs. Current MCMs typically contain only a handful of coarse-grained, large chiplets due to the high area, performance, and energy overheads associated with inter-chiplet communication. This work investigates and quantifies the costs and benefits of using MCMs with fine-grained chiplets for deep-learning inference, an application domain with large compute and on-chip storage requirements. To evaluate the approach, we architected, implemented, fabricated, and tested Simba, a 36-chiplet prototype MCM system for deep-learning inference. Each chiplet achieves 4 TOPS peak performance, and the 36-chiplet MCM package achieves up to 128 TOPS and up to 6.1 TOPS/W. The MCM is configurable to support a flexible mapping of DNN layers to the distributed compute and storage units. To mitigate inter-chiplet communication overheads, we introduce three tiling optimizations that improve data locality. These optimizations achieve up to 16% speedup compared to the baseline layer mapping. Our evaluation shows that Simba can process 1988 images/s running ResNet-50 with a batch size of one, delivering an inference latency of 0.50 ms.
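The headline numbers in the abstract are internally consistent, which a quick back-of-the-envelope check makes clear. The relationships below are my own sanity check, not the paper's methodology; all figures are taken from the abstract.

```python
# Sanity-check of the Simba figures reported in the abstract.
chiplets = 36
tops_per_chiplet = 4.0
peak_package_tops = chiplets * tops_per_chiplet      # naive peak: 144 TOPS
# The abstract reports "up to 128 TOPS" for the package, i.e. the
# achieved package-level peak is ~89% of 36 x 4 TOPS.
utilization = 128.0 / peak_package_tops

latency_ms = 0.50        # ResNet-50, batch size 1
images_per_s = 1988
# At batch size 1 with no overlap between inferences, throughput is
# bounded by 1/latency; 1988 images/s sits just under that bound.
implied_rate = 1000.0 / latency_ms                    # 2000 images/s
print(peak_package_tops, round(utilization, 2), implied_rate)
```

The small gap between 1988 images/s and the 2000 images/s latency bound suggests the measured throughput is essentially latency-limited at batch size one.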


2008 ◽  
Vol 100 (1) ◽  
pp. 51-61 ◽  
Author(s):  
Caroline Dreuillet ◽  
Maryannick Harper ◽  
Jeanne Tillit ◽  
Michel Kress ◽  
Michèle Ernoult-Lange
