Large-Scale Integrated Photonics for High-Performance Computer Networks

Author(s):  
R. G. Beausoleil
2019 ◽  
Vol 7 (1) ◽  
pp. 55-70


Author(s):  
Moh. Zikky ◽  
M. Jainal Arifin ◽  
Kholid Fathoni ◽  
Agus Zainal Arifin

High-Performance Computing (HPC) refers to computer systems built to handle heavy computational loads. HPC provides high-performance technology and shortens computing times. It is often used in large-scale industries and in activities that require high-level computing, such as rendering virtual reality. In this research, we present a virtual-reality Tawaf simulation with 1,000 pilgrims and realistic surroundings of Masjidil-Haram as an interactive and immersive simulation, imitating them with 3D models. The main purpose of this study is to measure and understand the processing time of this virtual reality implementation of Tawaf activities on various platforms, such as desktop computers and Android smartphones. The results show that an agent on the outer rotation around the Kaabah mostly consumes the least time, even though it must travel a longer distance than an agent on the inner path. This happens because an agent in the area closer to the Kaabah faces denser crowds; in this case, obstacles have a greater impact than distance.
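The finding above, that crowd obstacles outweigh path length, can be illustrated with a toy cost model. This is not the simulation's actual pathfinding; the distances, densities, and the linear speed degradation are illustrative assumptions:

```python
def traversal_time(distance_m, crowd_density, base_speed=1.4):
    """Seconds to walk a circuit; walking speed degrades linearly with
    local crowd density (a toy model, clamped to a minimum shuffle speed)."""
    speed = base_speed * max(0.1, 1.0 - crowd_density)
    return distance_m / speed

# Inner circuit: shorter, but densely crowded near the Kaabah
inner_time = traversal_time(distance_m=120, crowd_density=0.8)
# Outer circuit: longer, but sparsely crowded
outer_time = traversal_time(distance_m=300, crowd_density=0.2)
```

Under these assumed numbers the outer circuit finishes faster despite being 2.5 times longer, mirroring the reported result.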


Author(s):  
D Y Polukarov ◽  
A P Bogdan

Modelling large-scale networks requires significant computational resources on the computer running the simulation. Moreover, the complexity of the calculations grows nonlinearly with the size of the simulated network. On the other hand, cluster computing has recently gained considerable popularity, so the idea of using cluster computing structures to model computer networks arises naturally. This paper describes the creation of software that combines an interactive mode of operation, including a graphical user interface for the OMNeT++ environment, with the batch mode of operation more natural to the high-performance cluster "Sergey Korolev". The architecture of such a solution is developed, and an example of using this approach is given.
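One way to bridge the interactive GUI workflow and the cluster's batch mode, in the spirit of this approach, is to generate a batch script that replays a configured experiment headlessly with OMNeT++'s Cmdenv runtime. The sketch below targets a SLURM scheduler; the scheduler choice, partition name, and script layout are assumptions, while `opp_run -u Cmdenv -c <config> -r <runs>` is the standard OMNeT++ command line for headless runs:

```python
def make_batch_script(config, runs, ini="omnetpp.ini",
                      partition="compute", ntasks=1, time_limit="01:00:00"):
    """Build a SLURM batch script that replays an OMNeT++ experiment
    headlessly (Cmdenv) on the cluster instead of through the GUI."""
    lines = [
        "#!/bin/bash",
        f"#SBATCH --partition={partition}",
        f"#SBATCH --ntasks={ntasks}",
        f"#SBATCH --time={time_limit}",
        # -u Cmdenv: command-line UI; -c: config section; -r: run filter
        f"opp_run -u Cmdenv -c {config} -r {runs} {ini}",
    ]
    return "\n".join(lines) + "\n"

# Replay runs 0..9 of a hypothetical "LargeNet" config in batch mode
script = make_batch_script("LargeNet", "0..9")
```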


2021 ◽  
Author(s):  
Depeng Zuo ◽  
Guangyuan Kan ◽  
Hongquan Sun ◽  
Hongbin Zhang ◽  
Ke Liang

Abstract. The Generalized Likelihood Uncertainty Estimation (GLUE) method has thrived for decades, and a huge number of applications in the field of hydrological modelling have proved its effectiveness in uncertainty and parameter estimation. However, GLUE's poor computational efficiency has long hampered its further application. A feasible way to solve this problem is to integrate modern CPU-GPU hybrid high-performance computer cluster technology to accelerate the traditional GLUE method. In this study, we developed a CPU-GPU hybrid cluster-based, highly parallel, large-scale GLUE method to improve its computational efficiency. Intel Xeon multi-core CPUs and NVIDIA Tesla many-core GPUs were adopted. The source code was developed using MPICH2, C++ with OpenMP 2.0, and CUDA 6.5. The parallel GLUE method was tested with a widely used hydrological model (the Xinanjiang model) to investigate performance and scalability. Comparison results indicated that the parallel GLUE method outperformed the traditional serial method and has good application prospects on supercomputer clusters such as Summit and Sierra of the TOP500 supercomputers around the world.
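The core of GLUE parallelizes naturally, since each Monte Carlo parameter set is evaluated independently. Below is a minimal single-node sketch of that idea, using thread-level concurrency as a stand-in for the paper's MPI/OpenMP/CUDA stack; the toy linear "model", the parameter ranges, and the threshold are illustrative assumptions:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def run_model(params, forcing):
    # Hypothetical stand-in for a rainfall-runoff model such as Xinanjiang:
    # maps a parameter vector (k, x) to a simulated discharge series.
    k, x = params
    return k * forcing + x

def nse(sim, obs):
    # Nash-Sutcliffe efficiency, a common informal likelihood measure in GLUE
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def parallel_glue(n_samples, forcing, obs, threshold=0.5, workers=4, seed=0):
    rng = np.random.default_rng(seed)
    # Uniform Monte Carlo sampling of the (k, x) parameter space
    samples = rng.uniform([0.5, -1.0], [2.0, 1.0], size=(n_samples, 2))

    def evaluate(p):
        return p, nse(run_model(p, forcing), obs)

    # Each parameter set is independent, so evaluations run concurrently;
    # the paper distributes this same loop over MPI ranks, CPU cores and GPUs.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(evaluate, samples))
    # Retain only the "behavioural" parameter sets above the threshold
    return [(p, like) for p, like in results if like >= threshold]
```

The embarrassingly parallel structure is why the method scales to clusters: only the final behavioural filtering needs any aggregation across workers.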


2014 ◽  
Vol 22 (2) ◽  
pp. 59-74 ◽  
Author(s):  
Alex D. Breslow ◽  
Ananta Tiwari ◽  
Martin Schulz ◽  
Laura Carrington ◽  
Lingjia Tang ◽  
...  

Co-location, where multiple jobs share compute nodes in large-scale HPC systems, has been shown to increase aggregate throughput and energy efficiency by 10–20%. However, system operators disallow co-location due to fair-pricing concerns, i.e., the lack of a pricing mechanism that accounts for performance interference from co-running jobs. In the current pricing model, application execution time determines the price, which results in unfair prices paid by the minority of users whose jobs suffer from co-location. This paper presents POPPA, a runtime system that enables fair pricing by delivering precise online interference detection, facilitating the adoption of co-location on supercomputers. POPPA leverages a novel shutter mechanism – a cyclic, fine-grained interference sampling mechanism that accurately deduces the interference between co-runners – to provide unbiased pricing of jobs that share nodes. POPPA is able to quantify inter-application interference within 4% mean absolute error on a variety of co-located benchmark and real scientific workloads.
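The shutter idea can be reduced to a simple rate comparison: periodically pause the co-runners for a short window, measure the target job's solo progress rate, and compare it with its rate while sharing the node. A minimal sketch of that arithmetic follows; the function names and per-window bookkeeping are illustrative, not POPPA's actual interface:

```python
from statistics import mean

def interference(progress_shuttered, progress_open):
    """Estimate the relative slowdown a job suffers from its co-runners.

    progress_shuttered: work units per window while co-runners are paused
                        (the "shutter" closed around them)
    progress_open:      work units per window with co-runners executing
    """
    solo = mean(progress_shuttered)    # approximates the job running alone
    shared = mean(progress_open)       # observed co-located rate
    return (solo - shared) / solo      # e.g. 0.15 means a 15% slowdown

def fair_price(base_price, progress_shuttered, progress_open):
    # Discount the bill by the measured slowdown, so a victim of
    # interference does not pay for time lost to its co-runners.
    return base_price * (1.0 - interference(progress_shuttered, progress_open))
```

Sampling solo progress in brief cyclic windows, rather than rerunning the job alone, is what makes the estimate available online with low overhead.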


2011 ◽  
Vol 291-294 ◽  
pp. 3044-3049
Author(s):  
Hong Bo Liang ◽  
Yi Ping Yao ◽  
Xiao Dong Mu

High-performance simulation has great application prospects in the field of materials science and engineering. In high-performance simulation, high-performance computers are used to improve simulation performance. As one of the simulation standards, HLA has been widely applied in computer simulation, and in the HLA domain many RTIs have been designed to support simulation in LAN/WAN environments. Because this software relies on general TCP/UDP communication mechanisms, it cannot achieve high simulation performance on high-performance computers. To improve simulation performance, a customized RTI for a hybrid environment of high-performance computers and PCs is designed. By using a partially hierarchical design on a functionally distributed architecture, large-scale simulation can be supported. An adaptive communication mechanism is proposed that automatically switches communication between RTI components among shared memory, InfiniBand, and Ethernet, greatly improving communication performance. In addition, this paper explains the related design of this customized RTI.
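The adaptive mechanism described above amounts to a transport-selection policy driven by where each peer sits relative to the local RTI component. A hypothetical sketch of such a policy follows; the enum, predicate names, and decision order are assumptions, since the real RTI would negotiate this per component pair:

```python
from enum import Enum

class Transport(Enum):
    SHARED_MEMORY = "shared_memory"  # fastest: peer on the same physical node
    INFINIBAND = "infiniband"        # cross-node inside the HPC fabric
    ETHERNET = "ethernet"            # fallback, e.g. PCs outside the cluster

def select_transport(local_node, peer_node, peer_on_fabric):
    # Co-located RTI components exchange events through shared memory
    if local_node == peer_node:
        return Transport.SHARED_MEMORY
    # Cross-node peers inside the cluster use the InfiniBand fabric
    if peer_on_fabric:
        return Transport.INFINIBAND
    # Everything else (ordinary PCs) falls back to TCP/UDP over Ethernet
    return Transport.ETHERNET
```

Keeping the choice automatic is what lets one federation span both the high-performance computer and commodity PCs without per-deployment configuration.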

