Scientific Visualization in Astronomy: Towards the Petascale Astronomy Era

2011 ◽  
Vol 28 (2) ◽  
pp. 150-170 ◽  
Author(s):  
Amr Hassan ◽  
Christopher J. Fluke

Astronomy is entering a new era of discovery, coincident with the establishment of new facilities for observation and simulation that will routinely generate petabytes of data. While an increasing reliance on automated data analysis is anticipated, a critical role will remain for visualization-based knowledge discovery. We have investigated scientific visualization applications in astronomy through an examination of the literature published during the last two decades. We identify the two most active fields for progress (visualization of large-N particle data and spectral data cubes), discuss open areas of research, and introduce a mapping between astronomical sources of data and data representations used in general-purpose visualization tools. We discuss contributions using high-performance computing architectures (e.g. distributed processing and GPUs), collaborative astronomy visualization, the use of workflow systems to store metadata about visualization parameters, and the use of advanced interaction devices. We examine a number of issues that may be limiting the spread of scientific visualization research in astronomy and identify six grand challenges for scientific visualization research in the Petascale Astronomy Era.
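To make the spectral data cube representation concrete, here is a minimal illustrative sketch in Python; the file name "cube.fits" and the choice of astropy and matplotlib are assumptions for the example, not tools named in the paper:

```python
# Illustrative sketch: visualizing a spectral data cube (velocity, y, x).
# Assumes a FITS file "cube.fits" exists locally; astropy and matplotlib
# are common open-source choices, not necessarily those used in the survey.
import numpy as np
from astropy.io import fits
import matplotlib.pyplot as plt

with fits.open("cube.fits") as hdul:
    cube = hdul[0].data                # shape: (n_channels, ny, nx)

channel = cube[cube.shape[0] // 2]     # middle velocity channel
moment0 = np.nansum(cube, axis=0)      # intensity integrated along the spectral axis

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.imshow(channel, origin="lower", cmap="viridis")
ax1.set_title("Single channel map")
ax2.imshow(moment0, origin="lower", cmap="viridis")
ax2.set_title("Moment-0 (integrated) map")
plt.show()
```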

Author(s):  
Domingo Benitez

Many accelerator-based computers have demonstrated that they can be faster and more energy-efficient than traditional high-performance multi-core computers. Two types of programmable accelerators are available in high-performance computing: general-purpose accelerators such as GPUs, and customizable accelerators such as FPGAs, although general-purpose accelerators have received more attention. This chapter reviews the state of the art and current trends of high-performance customizable computers (HPCC) and their use in Computational Science and Engineering (CSE). A top-down approach is used to be more accessible to non-specialists. The "top view" is provided by a taxonomy of customizable computers. This abstract view is accompanied by a performance comparison of common CSE applications on HPCC systems and high-performance microprocessor-based computers. The "down view" examines software development, describing how CSE applications are programmed on HPCC computers. Additionally, a cost analysis and an example illustrate the origin of the benefits. Finally, the future of high-performance customizable computing is analyzed.


2021 ◽  
Author(s):  
Matthias Arzt ◽  
Joran Deschamps ◽  
Christopher Schmied ◽  
Tobias Pietzsch ◽  
Deborah Schmidt ◽  
...  

We present Labkit, a user-friendly Fiji plugin for the segmentation of microscopy image data. It offers easy-to-use manual and automated image segmentation routines that can be rapidly applied to single- and multi-channel images as well as to timelapse movies in 2D or 3D. Labkit is specifically designed to work efficiently on big image data and enables users of consumer laptops to conveniently work with multiple-terabyte images. This efficiency is achieved by using ImgLib2 and BigDataViewer as the foundation of our software. Furthermore, we implement memory-efficient and fast random-forest-based pixel classification inspired by the Waikato Environment for Knowledge Analysis (Weka). Optionally, we harness the power of graphics processing units (GPUs) to gain additional runtime performance. Labkit is easy to install on virtually all laptops and workstations. Additionally, Labkit is compatible with high-performance computing (HPC) clusters for distributed processing of big image data. The ability to use pixel classifiers trained in Labkit via the ImageJ macro language enables our users to integrate this functionality as a processing step in automated image processing workflows. Last but not least, Labkit comes with rich online resources such as tutorials and examples that will help users familiarize themselves with the available features and how to best use Labkit in a number of practical real-world use cases.
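Labkit itself is a Java-based Fiji plugin built on ImgLib2; purely as an illustration of the Weka-style random-forest pixel classification idea, a minimal Python sketch using scikit-learn might look as follows (the feature set, image, and labels are hypothetical stand-ins, not Labkit's API):

```python
# Minimal sketch of random-forest pixel classification in the spirit of
# Weka/Labkit. Python/scikit-learn is used purely for illustration; Labkit
# itself is a Java-based Fiji plugin built on ImgLib2/BigDataViewer.
import numpy as np
from scipy import ndimage
from sklearn.ensemble import RandomForestClassifier

def pixel_features(img):
    """Stack simple per-pixel features: raw intensity plus Gaussian blurs."""
    feats = [img] + [ndimage.gaussian_filter(img, s) for s in (1, 2, 4)]
    return np.stack(feats, axis=-1).reshape(-1, len(feats))

rng = np.random.default_rng(0)
img = rng.random((64, 64))                 # stand-in for a microscopy image
labels = (img > 0.5).astype(int).ravel()   # stand-in for sparse user scribbles

X = pixel_features(img)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)
segmentation = clf.predict(X).reshape(img.shape)
```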


2019 ◽  
pp. 112-115
Author(s):  
M. Z. Benenson

The article discusses the use of graphics processing units for solving large systems of linear algebraic equations (SLAE). A heterogeneous multiprocessor computing platform produced by the NIIVK, whose architecture allows the integration of general-purpose microprocessor modules with graphics processor modules, was used as the hardware for solving SLAEs. A description of the SLAE solution program, developed on the basis of the cuBLAS CUDA library, is given. A method is proposed for increasing the accuracy of calculations for linear systems based on the use of a modified Gauss method. It has been established that the modified Gauss method adds practically no program running time while significantly increasing the accuracy of the calculations. It is concluded that the use of graphics processors for solving SLAEs allows the processing of larger matrices compared to the use of general-purpose microprocessors.
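The article does not specify the modified Gauss method in detail; as a hedged illustration of how accuracy can be improved around a standard Gaussian elimination, the following NumPy/SciPy sketch applies iterative refinement to an LU solve. It is a CPU-side stand-in for the cuBLAS GPU routines, under the assumption that the accuracy improvement works in this residual-correction style:

```python
# Sketch of Gaussian elimination with iterative refinement, a standard way
# to improve SLAE solution accuracy. The article's modified Gauss method is
# not specified in detail; iterative refinement is assumed here purely for
# illustration. NumPy/SciPy stand in for the cuBLAS GPU routines.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def solve_refined(A, b, iters=3):
    lu, piv = lu_factor(A)           # one-time LU factorization (Gauss method)
    x = lu_solve((lu, piv), b)       # initial solution
    for _ in range(iters):
        r = b - A @ x                # residual of the current solution
        x += lu_solve((lu, piv), r)  # cheap correction reusing the factorization
    return x

rng = np.random.default_rng(0)
n = 500
A = rng.random((n, n)) + n * np.eye(n)   # well-conditioned test matrix
b = rng.random(n)
x = solve_refined(A, b)
print(np.linalg.norm(A @ x - b))         # residual norm after refinement
```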


AI Magazine ◽  
2010 ◽  
Vol 31 (1) ◽  
pp. 75 ◽  
Author(s):  
Christopher Barrett ◽  
Keith Bisset ◽  
Jonathan Leidig ◽  
Achla Marathe ◽  
Madhav V. Marathe

We discuss an interaction-based approach to study the coevolution between socio-technical networks, individual behaviors, and contagion processes on these networks. We use epidemics in human populations as an example of this phenomenon. The methods consist of developing synthetic yet realistic national-scale networks using a first-principles approach. Unlike simple random-graph techniques, these methods combine real-world data sources with behavioral and social theories to synthesize detailed social contact (proximity) networks. Individual-based models of within-host disease progression and inter-host transmission are then used to model the contagion process. Finally, models of individual behaviors are composed with disease progression models to develop a realistic representation of the complex system in which individual behaviors and the social network adapt to the contagion. These methods are embodied within Simdemics, a general-purpose modeling environment to support pandemic planning and response. Simdemics is designed specifically to be scalable to networks with 300 million agents; the underlying algorithms and methods in Simdemics are all oriented toward high-performance computing. New advances in network science, machine learning, high-performance computing, data mining, and behavioral modeling were necessary to develop Simdemics. Simdemics is combined with two other environments, Simfrastructure and Didactic, to form an integrated cyber-environment. The integrated cyber-environment provides the end user flexible and seamless Internet-based access to Simdemics. Service-oriented architectures play a critical role in delivering the desired services to the end user. Simdemics, in conjunction with the integrated cyber-environment, has been used in over a dozen user-defined case studies. These case studies were conducted to support specific policy questions that arose in the context of planning the response to pandemics (e.g., H1N1, H5N1) and human-initiated bio-terrorism events. These studies played a crucial role in the continual development and improvement of the cyber-environment.
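Simdemics is a large HPC system whose internals are not shown here; as a toy illustration of the individual-based contagion process on a contact network, consider the following Python sketch. The network model, the use of networkx, and all parameters are illustrative assumptions, not Simdemics internals:

```python
# Toy individual-based SIR simulation on a synthetic contact network.
# Purely illustrative of the contagion-on-network idea; Simdemics uses far
# more detailed synthetic populations and HPC-oriented algorithms.
import random
import networkx as nx

random.seed(0)
G = nx.watts_strogatz_graph(n=1000, k=6, p=0.1)  # stand-in contact network
state = {node: "S" for node in G}                # S, I, or R per individual
for seed in random.sample(list(G), 5):
    state[seed] = "I"                            # initial infections

beta, gamma = 0.05, 0.1   # per-contact infection and recovery probabilities
for day in range(100):
    infected = [n for n in G if state[n] == "I"]
    if not infected:
        break
    for n in infected:
        for nbr in G.neighbors(n):               # transmission along contacts
            if state[nbr] == "S" and random.random() < beta:
                state[nbr] = "I"
        if random.random() < gamma:              # within-host progression
            state[n] = "R"

print(sum(1 for s in state.values() if s != "S"), "individuals were ever infected")
```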


2021 ◽  
Vol 4 (3) ◽  
pp. 40
Author(s):  
Abdul Majeed

During the ongoing pandemic of the novel coronavirus disease 2019 (COVID-19), the latest technologies such as artificial intelligence (AI), blockchain, learning paradigms (machine, deep, smart, few-shot, extreme learning, etc.), high-performance computing (HPC), the Internet of Medical Things (IoMT), and Industry 4.0 have played a vital role. These technologies helped to contain the disease's spread by predicting contaminated people/places, as well as by forecasting future trends. In this article, we provide insights into the applications of machine learning (ML) and high-performance computing (HPC) in the era of COVID-19. We discuss the person-specific data that are being collected to lower the COVID-19 spread and highlight the remarkable opportunities they provide for knowledge extraction leveraging low-cost ML and HPC techniques. We demonstrate the role of ML and HPC in the context of the COVID-19 era through successful implementations or proposals in three contexts: (i) ML and HPC use in the data life cycle, (ii) ML and HPC use in analytics on COVID-19 data, and (iii) general-purpose applications of both techniques in the COVID-19 arena. In addition, we discuss the privacy and security issues and the architecture of a prototype system to demonstrate the proposed research. Finally, we discuss the challenges of the available data and highlight the issues that hinder the applicability of ML and HPC solutions to such data.
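As a small, hedged illustration of the low-cost ML techniques referred to above, the following Python sketch fits a log-linear trend to a synthetic daily case series and forecasts the next week; the data and model choice are illustrative only, not the article's method:

```python
# Illustrative sketch of low-cost ML on epidemic time series: fit a simple
# trend model to daily case counts and forecast the next week. The data are
# synthetic; real pipelines would use curated COVID-19 datasets.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
days = np.arange(60).reshape(-1, 1)
cases = np.exp(0.05 * days.ravel()) * rng.uniform(0.9, 1.1, 60)  # synthetic growth

model = LinearRegression().fit(days, np.log(cases))  # log-linear growth model
future = np.arange(60, 67).reshape(-1, 1)
forecast = np.exp(model.predict(future))              # back to case counts
print(forecast.round(1))
```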


2018 ◽  
Vol 7 (2) ◽  
pp. 70-74
Author(s):  
Dhruv Chander Pant ◽  
O. P. Gupta

The main challenges facing bioinformatics applications today are managing, analyzing, and processing huge volumes of genome data. This type of analysis and processing is very difficult using general-purpose computer systems, hence the need for distributed computing, cloud computing, and high-performance computing in bioinformatics applications. Distributed computers, cloud computers, and multi-core processors are now available at very low cost to handle large volumes of genome data. Alongside these technological developments in distributed computing, many efforts are being made by scientists and bioinformaticians to parallelize and implement algorithms so as to take maximum advantage of the additional computational power. This paper discusses a few bioinformatics algorithms, explains their parallelized implementations, and analyzes the performance of the parallelized versions. It has also been observed that, in parallel implementations of the various bioinformatics algorithms, the impact of the communication subsystem relative to job size should be analyzed, as the sketch below illustrates.
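For instance, a minimal Python sketch of data-parallel sequence analysis (the GC-content task, the sequence data, and the pool parameters are illustrative assumptions): here the chunksize argument governs how work is batched across processes, which is exactly where job size meets communication overhead.

```python
# Minimal sketch of data-parallel sequence analysis: GC content computed
# across a process pool. The chunksize parameter controls how work is
# batched, trading per-message communication overhead against load balance.
import random
from multiprocessing import Pool

def gc_content(seq):
    return (seq.count("G") + seq.count("C")) / len(seq)

if __name__ == "__main__":
    random.seed(0)
    sequences = ["".join(random.choices("ACGT", k=10_000)) for _ in range(1_000)]
    with Pool(processes=4) as pool:
        # Larger chunksize -> fewer, bigger messages between processes.
        results = pool.map(gc_content, sequences, chunksize=50)
    print(sum(results) / len(results))
```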


Author(s):  
Dorian Krause ◽  
Philipp Thörnig

JURECA is a petaflop-scale, general-purpose supercomputer operated by Jülich Supercomputing Centre at Forschungszentrum Jülich. Utilizing a flexible cluster architecture based on T-Platforms V-Class blades and a balanced selection of best-of-its-kind components, the system supports a wide variety of high-performance computing and data analytics workloads and offers a low entry barrier for new users.


2021 ◽  
Vol 43 (5) ◽  
pp. C335-C357
Author(s):  
Giovanni Isotton ◽  
Matteo Frigo ◽  
Nicolò Spiezia ◽  
Carlo Janna
