What is driving the 3D TSV technologies business: Market update and technical trends

Author(s):  
Santosh Kumar ◽  
Thibault Buisson

Through-silicon vias (TSVs) have now become the preferred interconnect choice for high-end memory. They are also an enabling technology for the heterogeneous integration of logic circuits with CMOS image sensors (CIS), MEMS, sensors, and RF filters, and in the near future they will enable photonics and LED integration as well. The market for 3D TSV and 2.5D interconnect is expected to reach around 2.1 million wafers in 2021, expanding at an 18% CAGR. Growth is driven by the increased adoption of 3D memory devices in high-end graphics, high-performance computing, networking, and data centers, and by penetration into new areas, including fingerprint and ambient light sensors, RF filters, and LEDs. CIS still commanded more than an 80% share of TSV wafer volume in 2015, although this share will decrease to around 56% by 2021, primarily because of the growth of the other TSV applications, led by 3D memories, RF filters, and fingerprint sensors. However, hybrid stacking technology, which uses direct copper-to-copper bonding rather than TSVs, will penetrate around 38% of CIS production by 2021. The presentation will explain the market's dynamics and give an overview of all segments and key markets of TSV-based 3D/2.5D IC packaging.
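The quoted 2021 volume and CAGR imply a baseline volume that can be back-calculated. A minimal arithmetic sketch, assuming the 18% CAGR runs over 2015–2021 (the 2015 figure is an inference, not a number stated in the abstract):

```python
# Back-calculate the implied 2015 wafer volume from the quoted figures
# (2.1 M wafers in 2021, 18% CAGR). The 2015 baseline is an inference.
wafers_2021 = 2.1e6   # projected 3D TSV / 2.5D wafer starts, 2021
cagr = 0.18
years = 6             # 2015 -> 2021
wafers_2015 = wafers_2021 / (1 + cagr) ** years
print(f"implied 2015 volume: {wafers_2015 / 1e6:.2f} M wafers")
```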

Acta Numerica ◽  
2012 ◽  
Vol 21 ◽  
pp. 379-474 ◽  
Author(s):  
J. J. Dongarra ◽  
A. J. van der Steen

This article describes the current state of the art of high-performance computing systems, and attempts to shed light on near-future developments that might prolong the steady growth in speed of such systems, which has been one of their most remarkable characteristics. We review the different ways devised to speed them up, both with regard to components and their architecture. In addition, we discuss the requirements for software that can take advantage of existing and future architectures.


Author(s):  
M. B. Giles ◽  
I. Reguly

High-performance computing has evolved remarkably over the past 20 years, and that progress is likely to continue. However, in recent years, this progress has been achieved through greatly increased hardware complexity with the rise of multicore and manycore processors, and this is affecting the ability of application developers to achieve the full potential of these systems. This article outlines the key developments on the hardware side, both in the recent past and in the near future, with a focus on two key issues: energy efficiency and the cost of moving data. It then discusses the much slower evolution of system software, and the implications of all of this for application developers.
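The cost of moving data relative to computing on it is commonly captured by the roofline model: a kernel's attainable performance is bounded either by peak compute or by memory bandwidth times its arithmetic intensity. A minimal sketch with illustrative hardware figures (the numbers are assumptions, not taken from the article):

```python
# Toy roofline model illustrating why data movement dominates on modern
# hardware. Peak-FLOP and bandwidth figures are illustrative assumptions.
def attainable_gflops(intensity, peak_gflops=1000.0, bw_gb_s=100.0):
    """Attainable performance (GFLOP/s) at a given arithmetic
    intensity (FLOPs per byte moved from memory)."""
    return min(peak_gflops, bw_gb_s * intensity)

# A memory-bound kernel (e.g. vector add, ~0.125 FLOPs/byte) is limited
# by bandwidth; a high-intensity kernel (e.g. large matmul) is not.
print(attainable_gflops(0.125))  # bandwidth-bound: 12.5 GFLOP/s
print(attainable_gflops(50.0))   # compute-bound: 1000.0 GFLOP/s
```

The gap between the two cases is exactly the issue the article raises: unless application developers raise arithmetic intensity, added hardware FLOPs go unused.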


Photonics ◽  
2021 ◽  
Vol 8 (2) ◽  
pp. 31
Author(s):  
Nikolaos-Panteleimon (Pandelis) Diamantopoulos ◽  
Suguru Yamaoka ◽  
Takuro Fujii ◽  
Hidetaka Nishi ◽  
Koji Takeda ◽  
...  

Near-future upgrades of intra-data-center networks and high-performance computing systems will require optical interconnects capable of operating beyond 100 Gbps/lane. For this evolution to be achieved sustainably, high-speed yet energy-efficient transceivers are needed. Toward this goal, we have previously demonstrated directly modulated lasers (DMLs) capable of operating at 50 Gbps/lane with sub-pJ/bit efficiencies, based on our novel membrane-III-V-on-Si technology. However, there is an inherent tradeoff between modulation speed and power consumption due to the carrier-photon dynamics in DMLs. In this work, we alleviate this tradeoff by introducing photon–photon resonance dynamics into our energy-efficient membrane DML-on-Si design and demonstrate a device with a maximum 3-dB bandwidth of 47.5 GHz, more than twice that of our previous membrane DMLs-on-Si. Moreover, the DML can deliver 60-GBaud PAM-4 signals under Ethernet's KP4-FEC threshold (a net data rate of 113.42 Gbps) over 2 km of standard single-mode fiber. DC energy efficiencies of 0.17 pJ/bit at 25 °C and 0.34 pJ/bit at 50 °C have been achieved for these >100-Gbps signals. Deploying such DMLs in an integrated multichannel transceiver should ensure a smooth evolution toward Terabit-class Ethernet links and on-board optics subsystems.
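The quoted pJ/bit figures translate directly into DC power draw at the stated net data rate. A back-of-envelope sketch (the milliwatt figures are derived here, not reported in the abstract):

```python
# DC power implied by the quoted energy efficiencies at the net data
# rate after KP4 FEC. Power = data rate * energy per bit.
net_rate_bps = 113.42e9          # net data rate, bits per second
for temp_c, pj_per_bit in [(25, 0.17), (50, 0.34)]:
    power_mw = net_rate_bps * pj_per_bit * 1e-12 * 1e3
    print(f"{temp_c} degC: ~{power_mw:.1f} mW DC power")
```

Even at 50 °C the laser draws under 40 mW, which is what makes the multichannel-transceiver scaling argument in the final sentence plausible.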


Author(s):  
Raed AlDhubhani ◽  
Fathy Eassa ◽  
Faisal Saeed

Deadlock detection is one of the main issues in software testing for High Performance Computing (HPC), and it will remain so for exascale computing in the near future. Developing and testing programs for machines with millions of cores is not an easy task. An HPC program consists of thousands (or millions) of parallel processes that need to communicate with each other at runtime. The Message Passing Interface (MPI) is a standard library that provides this communication capability and is frequently used in HPC; exascale programs are expected to be developed with it. For parallel programs, deadlock is one of the expected problems. In this paper, we discuss deadlock detection for exascale MPI-based programs, where scalability and efficiency are critical issues. The proposed method detects and flags, in a scalable and efficient manner, the processes and communication operations that could potentially cause deadlocks. MPI benchmark programs were used to test the proposed method.
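The abstract does not detail the detection algorithm; a common foundation for such detectors is a wait-for graph, in which an edge p → q means process p is blocked waiting on a message from q, and a cycle among blocked processes indicates a deadlock. A minimal sketch of cycle detection over such a graph (names and structure are illustrative, not the authors' method):

```python
# Cycle detection in a wait-for graph: an edge p -> q means process p
# is blocked waiting on process q. A cycle implies a potential deadlock.
def find_deadlock(waits_for):
    """Return a cyclic chain of processes if one exists, else None."""
    WHITE, GRAY, BLACK = 0, 1, 2        # unvisited / on stack / done
    color = {p: WHITE for p in waits_for}

    def dfs(p, path):
        color[p] = GRAY
        path.append(p)
        for q in waits_for.get(p, []):
            if color.get(q, WHITE) == GRAY:       # back edge: cycle found
                return path[path.index(q):] + [q]
            if color.get(q, WHITE) == WHITE:
                cycle = dfs(q, path)
                if cycle:
                    return cycle
        color[p] = BLACK
        path.pop()
        return None

    for p in list(waits_for):
        if color[p] == WHITE:
            cycle = dfs(p, [])
            if cycle:
                return cycle
    return None

# Ranks 0 and 1 each block on a receive from the other; rank 2 is idle.
print(find_deadlock({0: [1], 1: [0], 2: []}))  # prints [0, 1, 0]
```

A production detector must additionally handle MPI-specific semantics (wildcard receives, collectives, non-blocking operations), which is where the scalability and efficiency concerns discussed in the paper arise.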




2020 ◽  
Vol 2020 ◽  
pp. 1-19 ◽  
Author(s):  
Paweł Czarnul ◽  
Jerzy Proficz ◽  
Krzysztof Drypczewski

This paper provides a review of contemporary methodologies and APIs for parallel programming, with representative technologies selected in terms of target system type (shared memory, distributed, and hybrid), communication patterns (one-sided and two-sided), and programming abstraction level. We analyze representatives in terms of many aspects including programming model, languages, supported platforms, license, optimization goals, ease of programming, debugging, deployment, portability, level of parallelism, constructs enabling parallelism and synchronization, features introduced in recent versions indicating trends, support for hybridity in parallel execution, and disadvantages. Such detailed analysis has led us to the identification of trends in high-performance computing and of the challenges to be addressed in the near future. It can help to shape future versions of programming standards, select technologies best matching programmers’ needs, and avoid potential difficulties while using high-performance computing systems.


Author(s):  
Mark H. Ellisman

The increased availability of High Performance Computing and Communications (HPCC) offers scientists and students the potential for effective remote interactive use of centralized, specialized, and expensive instrumentation and computers. The number of instruments that can be usefully controlled from a distance is increasing. Some in current use include telescopes, networks of remote geophysical sensing devices, and, more recently, the intermediate high voltage electron microscope developed at the San Diego Microscopy and Imaging Resource (SDMIR) in La Jolla. In this presentation the imaging capabilities of a specially designed JEOL 4000EX IVEM will be described. This instrument was developed mainly to facilitate the extraction of 3-dimensional information from thick sections. In addition, progress will be described on a project now underway to develop a more advanced version of the Telemicroscopy software we previously demonstrated as a tool for providing remote access to this IVEM (Mercurio et al., 1992; Fan et al., 1992).


MRS Bulletin ◽  
1997 ◽  
Vol 22 (10) ◽  
pp. 5-6
Author(s):  
Horst D. Simon

Recent events in the high-performance computing industry have concerned scientists and the general public regarding a crisis or a lack of leadership in the field. That concern is understandable considering the industry's history from 1993 to 1996. Cray Research, the historic leader in supercomputing technology, was unable to survive financially as an independent company and was acquired by Silicon Graphics. Two ambitious new companies that introduced new technologies in the late 1980s and early 1990s, Thinking Machines and Kendall Square Research, were commercial failures and went out of business. And Intel, which introduced its Paragon supercomputer in 1994, discontinued production only two years later.

During the same time frame, scientists who had finished the laborious task of writing scientific codes to run on vector parallel supercomputers learned that those codes would have to be rewritten if they were to run on the next-generation, highly parallel architecture. Scientists who are not yet involved in high-performance computing are understandably hesitant about committing their time and energy to such an apparently unstable enterprise.

However, beneath the commercial chaos of the last several years, a technological revolution has been occurring. The good news is that the revolution is over, leading to five to ten years of predictable stability, steady improvements in system performance, and increased productivity for scientific applications. It is time for scientists who were sitting on the fence to jump in and reap the benefits of the new technology.

