Accelerators for Artificial Intelligence and High-Performance Computing

Computer ◽  
2020 ◽  
Vol 53 (2) ◽  
pp. 14-22
Author(s):  
Dejan Milojicic

Symmetry ◽  
2020 ◽  
Vol 12 (6) ◽  
pp. 1029
Author(s):  
Anabi Hilary Kelechi ◽  
Mohammed H. Alsharif ◽  
Okpe Jonah Bameyi ◽  
Paul Joan Ezra ◽  
Iorshase Kator Joseph ◽  
...  

Power-consuming entities such as high-performance computing (HPC) sites and large data centers are growing with the advance of information technology. In business, HPC is used to shorten product delivery time, reduce production cost, and decrease the time it takes to develop a new product. Today’s high level of computing power from supercomputers comes at the expense of large amounts of electric power. To minimize the energy utilized by HPC entities, it is necessary to reduce the energy required by the computing systems themselves and by the resources needed to operate them. A database can support system energy efficiency: the power consumption of all components is sampled at regular intervals, and the stored readings then serve as input data for energy-efficiency optimization. Device workload information and other usage metrics are stored in the database as well. There has been strong momentum in artificial intelligence (AI) as a tool for optimization and process automation that leverages already existing information. This paper discusses ideas for improving energy efficiency for HPC using AI.
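The sampling-to-database scheme the abstract describes can be sketched minimally as follows. This is an illustrative assumption, not the paper's implementation: `read_power_watts` stands in for a real telemetry source (e.g. RAPL counters or `nvidia-smi`) and here returns fixed dummy values.

```python
import sqlite3
import time

# Hypothetical telemetry source; a real system would query hardware
# counters. Fixed values keep the sketch self-contained.
def read_power_watts(component):
    return {"cpu": 85.0, "gpu": 250.0, "dram": 12.5}[component]

def sample_to_db(db_path, components, n_samples, interval_s=0.0):
    """Sample per-component power at regular intervals into a database."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS power_log "
        "(ts REAL, component TEXT, watts REAL)"
    )
    for _ in range(n_samples):
        now = time.time()
        for comp in components:
            conn.execute(
                "INSERT INTO power_log VALUES (?, ?, ?)",
                (now, comp, read_power_watts(comp)),
            )
        conn.commit()
        time.sleep(interval_s)
    # Aggregate view: mean power per component, the kind of summary an
    # energy-efficiency optimizer would take as input.
    rows = conn.execute(
        "SELECT component, AVG(watts) FROM power_log GROUP BY component"
    ).fetchall()
    conn.close()
    return dict(rows)

averages = sample_to_db(":memory:", ["cpu", "gpu", "dram"], n_samples=3)
print(averages)
```

The same table could additionally record workload identifiers and usage metrics, as the abstract notes, so that power draw can be correlated with what the machine was doing.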


2020 ◽  
Vol 7 (1) ◽  
Author(s):  
E. A. Huerta ◽  
Asad Khan ◽  
Edward Davis ◽  
Colleen Bushell ◽  
William D. Gropp ◽  
...  

Significant investments to upgrade and construct large-scale scientific facilities demand commensurate investments in R&D to design algorithms and computing approaches that enable scientific and engineering breakthroughs in the big data era. Innovative Artificial Intelligence (AI) applications have powered transformational solutions for big data challenges in industry and technology that now drive a multi-billion dollar industry, and which play an ever-increasing role in shaping human social patterns. As AI continues to evolve into a computing paradigm endowed with statistical and mathematical rigor, it has become apparent that single-GPU solutions for training, validation, and testing are no longer sufficient for computational grand challenges brought about by scientific facilities that produce data at a rate and volume outstripping the computing capabilities of available cyberinfrastructure platforms. This realization has been driving the confluence of AI and high performance computing (HPC) to reduce time-to-insight, and to enable a systematic study of domain-inspired AI architectures and optimization schemes for data-driven discovery. In this article we present a summary of recent developments in this field, and describe specific advances that the authors of this article are spearheading to accelerate and streamline the use of HPC platforms to design and apply accelerated AI algorithms in academia and industry.
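Scaling training beyond a single GPU, as the abstract motivates, typically rests on synchronous data parallelism: each worker computes gradients on its shard of a batch, and an all-reduce averages them before the update. A minimal NumPy sketch of that pattern (illustrative only, not the authors' code; the workers are simulated sequentially):

```python
import numpy as np

def local_gradient(w, X, y):
    # Gradient of the mean squared error 0.5*||Xw - y||^2 / n w.r.t. w.
    n = len(y)
    return X.T @ (X @ w - y) / n

def data_parallel_step(w, X, y, n_workers, lr=0.1):
    """One synchronous data-parallel SGD step over equal batch shards."""
    X_shards = np.array_split(X, n_workers)
    y_shards = np.array_split(y, n_workers)
    grads = [local_gradient(w, Xs, ys)
             for Xs, ys in zip(X_shards, y_shards)]
    g = np.mean(grads, axis=0)  # stands in for the all-reduce step
    return w - lr * g

# Recover a known linear model from synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w
w = np.zeros(3)
for _ in range(200):
    w = data_parallel_step(w, X, y, n_workers=4)
print(np.round(w, 2))
```

With equal shard sizes the averaged shard gradients equal the full-batch gradient, which is why synchronous data parallelism preserves the single-device optimization trajectory while distributing the compute.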


2020 ◽  
Vol 245 ◽  
pp. 09011
Author(s):  
Michael Hildreth ◽  
Kenyi Paolo Hurtado Anampa ◽  
Cody Kankel ◽  
Scott Hampton ◽  
Paul Brenner ◽  
...  

The NSF-funded Scalable CyberInfrastructure for Artificial Intelligence and Likelihood Free Inference (SCAILFIN) project aims to develop and deploy artificial intelligence (AI) and likelihood-free inference (LFI) techniques and software using scalable cyberinfrastructure (CI) built on top of existing CI elements. Specifically, the project has extended the CERN-based REANA framework, a cloud-based data analysis platform deployed on top of Kubernetes clusters that was originally designed to enable analysis reusability and reproducibility. REANA is capable of orchestrating extremely complicated multi-step workflows, and uses Kubernetes clusters both for scheduling and distributing container-based workloads across a cluster of available machines and for instantiating and monitoring the concrete workloads themselves. This work describes the challenges and development efforts involved in extending REANA, and the components that were developed to enable large-scale deployment on High Performance Computing (HPC) resources. Using the Virtual Clusters for Community Computation (VC3) infrastructure as a starting point, we adapted REANA to work with a number of differing workload managers, covering both high-performance and high-throughput computing, while removing REANA’s dependence on Kubernetes support at the worker level.
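For context on the kind of specification REANA orchestrates, a minimal workflow description along the lines of REANA's documented `reana.yaml` format looks roughly like the following; the file names and image tag here are placeholders, and the exact schema depends on the REANA version.

```yaml
version: 0.6.0
inputs:
  files:
    - code/analysis.py      # placeholder analysis script
workflow:
  type: serial              # REANA also supports CWL and Yadage workflows
  specification:
    steps:
      - environment: 'python:3.8-slim'   # container image per step
        commands:
          - python code/analysis.py
outputs:
  files:
    - results/plot.png      # placeholder output artifact
```

Each step runs in its own container, which is what makes the scheduling backend (Kubernetes, or the HPC workload managers this work targets) swappable beneath the same workflow description.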


Author(s):  
J. Charles Victor ◽  
P. Alison Paprica ◽  
Michael Brudno ◽  
Carl Virtanen ◽  
Walter Wodchis ◽  
...  

Introduction: Canadian provincial health systems have a data advantage: longitudinal, population-wide data for publicly funded health services, in many cases going back 20 years or more. With the addition of high performance computing (HPC), these data can serve as the foundation for leading-edge research using machine learning and artificial intelligence. 
Objectives and Approach: The Institute for Clinical Evaluative Sciences (ICES) and HPC4Health are creating the Ontario Data Safe Haven (ODSH), a secure HPC cloud located within the HPC4Health physical environment at the Hospital for Sick Children in Toronto. The ODSH will allow research teams to post, access, and analyze individual datasets over which they have authority, and enable linkage to Ontario administrative and other data. To start, the ODSH is focused on creating a private cloud meeting ICES’ legislated privacy and security requirements to support HPC-intensive analyses of ICES data. The first ODSH projects are partnerships between ICES scientists and machine learning researchers. 
Results: As of March 2018, the technological build of the ODSH was tested and completed, and the privacy and security policy framework and documentation were completed. We will present the structure of the ODSH, including the architectural choices made when designing the environment, and planned future functionality. We will describe the experience to date for the very first analysis done using the ODSH: the automatic mining of clinical terminology in primary care electronic medical records using deep neural networks. We will also present the plans for a high-cost user Risk Dashboard program of research, co-designed by ICES scientists and health faculty from the Vector Institute for artificial intelligence, that will make use of the ODSH beginning May 2018. 
Conclusion/Implications: Through a partnership of ICES, HPC4Health, and the Vector Institute, a secure private cloud ODSH has been created and is starting to be used in leading-edge machine learning research studies that make use of Ontario’s population-wide data assets.


Photoniques ◽  
2020 ◽  
pp. 40-44
Author(s):  
Bicky A. Marquez ◽  
Matthew J. Filipovich ◽  
Emma R. Howard ◽  
Viraj Bangari ◽  
Zhimu Guo ◽  
...  

Artificial intelligence enabled by neural networks has powered applications in many fields (e.g. medicine, finance, autonomous vehicles). Software implementations of neural networks on conventional computers are limited in speed and energy efficiency. Neuromorphic engineering aims to build processors whose hardware mimics the neurons and synapses of the brain for distributed and parallel processing. Neuromorphic engineering enabled by silicon photonics can offer sub-nanosecond latencies, and can extend the domain of artificial intelligence applications to high-performance computing and ultrafast learning. We discuss current progress and the challenges in scaling these demonstrations to practical systems for training and inference.
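The neuron model that photonic hardware accelerates is, at its core, a weighted sum followed by a nonlinearity. A toy NumPy sketch of one such layer (the names and numbers are illustrative; in silicon photonics the matrix product would be carried out optically, e.g. by tunable weight banks, at sub-nanosecond latency):

```python
import numpy as np

def photonic_like_layer(inputs, weights, bias):
    """One neural layer: weighted addition then a saturating nonlinearity.

    The weighted addition is the operation photonic hardware performs
    in the analog domain; here it is emulated as a matrix product.
    """
    z = weights @ inputs + bias
    return np.tanh(z)  # saturating response, e.g. from a modulator

# Two neurons fed by three inputs, with arbitrary example weights.
x = np.array([0.2, -0.5, 0.8])
W = np.array([[0.5, -0.3, 0.1],
              [0.2, 0.7, -0.4]])
b = np.array([0.0, 0.1])
y = photonic_like_layer(x, W, b)
print(y.shape)
```

The appeal of the photonic implementation is that the multiply-accumulate dominating this computation is done in parallel across wavelengths rather than sequentially in digital logic.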

