Towards High-Level Parallel Patterns in OpenCL

Author(s):  
Jiri Dokulil ◽  
Siegfried Benkner
2014 ◽  
Vol 2014 ◽  
pp. 1-12 ◽  
Author(s):  
Claudia Misale ◽  
Giulio Ferrero ◽  
Massimo Torquati ◽  
Marco Aldinucci

In this paper, we advocate a high-level programming methodology for next-generation sequencing (NGS) alignment tools, targeting both productivity and absolute performance. We analyse the problem of parallel alignment and review the parallelisation strategies of the most popular alignment tools, which can all be abstracted to a single parallel paradigm. We compare these tools against their ports onto the FastFlow pattern-based programming framework, which provides programmers with high-level parallel patterns. By using a high-level approach, programmers are freed from the most complex aspects of parallel programming, such as synchronisation protocols and task scheduling, and gain greater opportunity for seamless performance tuning. In this work, we show use cases in which, by parallelising NGS tools with a high-level approach, it is possible to obtain comparable or even better absolute performance on all the datasets used.
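The single parallel paradigm these aligners share is essentially a task farm: a stream of reads is dispatched to a pool of identical workers. A minimal sketch of that paradigm in plain C++ threads (a conceptual illustration only, not FastFlow's actual API; `align` is a toy stand-in for a real alignment kernel):

```cpp
#include <atomic>
#include <string>
#include <thread>
#include <vector>

// Toy "alignment" kernel: the score is just the read length,
// standing in for a real aligner's scoring function.
static int align(const std::string& read) { return static_cast<int>(read.size()); }

// Minimal task farm: a shared atomic counter hands out reads to N workers;
// each worker writes its scores into pre-sized, disjoint slots, so no
// locking is needed on the result vector.
std::vector<int> farm(const std::vector<std::string>& reads, unsigned nworkers) {
    std::vector<int> scores(reads.size());
    std::atomic<std::size_t> next{0};
    std::vector<std::thread> workers;
    for (unsigned w = 0; w < nworkers; ++w)
        workers.emplace_back([&] {
            for (std::size_t i = next++; i < reads.size(); i = next++)
                scores[i] = align(reads[i]);
        });
    for (auto& t : workers) t.join();
    return scores;
}
```

A framework such as FastFlow expresses the same structure declaratively, so the scheduling and synchronisation written out by hand above are handled by the runtime instead.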


Author(s):  
Vladimir Janjic ◽  
Christopher Brown ◽  
Adam D. Barwell

Parallel patterns are a high-level programming paradigm that enables non-experts in parallelism to develop structured parallel programs that are maintainable, adaptive, and portable whilst achieving good performance on a variety of parallel systems. However, there still exists a large base of legacy-parallel code developed using ad-hoc methods and incorporating low-level parallel/concurrency libraries such as pthreads, without any parallel patterns in the fundamental design. This code would benefit from being restructured and rewritten into pattern-based code. However, the process of rewriting the code is laborious and error-prone, because typical concurrency and pthreading code is closely intertwined with the business logic of the program. In this paper, we present a new software restoration methodology that transforms legacy-parallel programs implemented using pthreads into structured farm and pipeline patterned equivalents. We demonstrate our restoration technique on a number of representative benchmarks, introducing patterned farm and pipeline parallelism in the resulting code, and we record improvements in both cyclomatic complexity and speedup.
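The pipeline pattern that the restoration targets can be illustrated with a small sketch (plain C++ threads rather than pthreads, and not the authors' tooling; `Channel` is a hypothetical helper connecting the stages):

```cpp
#include <condition_variable>
#include <mutex>
#include <optional>
#include <queue>
#include <thread>
#include <vector>

// A tiny unbounded channel used to connect pipeline stages.
template <typename T>
class Channel {
    std::queue<T> q_;
    std::mutex m_;
    std::condition_variable cv_;
    bool closed_ = false;
public:
    void push(T v) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(v)); }
        cv_.notify_one();
    }
    void close() {
        { std::lock_guard<std::mutex> lk(m_); closed_ = true; }
        cv_.notify_all();
    }
    // Blocks until an item is available or the channel is closed and drained.
    std::optional<T> pop() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [&] { return !q_.empty() || closed_; });
        if (q_.empty()) return std::nullopt;
        T v = std::move(q_.front());
        q_.pop();
        return v;
    }
};

// Two-stage pipeline: stage 1 squares each input (on the caller's thread),
// stage 2 accumulates the running sum concurrently.
int pipeline(const std::vector<int>& xs) {
    Channel<int> ch;
    int sum = 0;
    std::thread stage2([&] { while (auto v = ch.pop()) sum += *v; });
    for (int x : xs) ch.push(x * x);  // stage 1
    ch.close();
    stage2.join();
    return sum;
}
```

Legacy pthreads code typically scatters this queue, mutex, and shutdown logic through the business logic; the restoration methodology recovers the stage structure so the whole thing collapses into a single pipeline pattern instantiation.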


Computing ◽  
2021 ◽  
Author(s):  
Adriano Vogel ◽  
Gabriele Mencagli ◽  
Dalvan Griebler ◽  
Marco Danelutto ◽  
Luiz Gustavo Fernandes

Several real-world parallel applications are becoming more dynamic and long-running, demanding online (run-time) adaptation. Stream processing is a representative scenario: data items arriving in real time must be computed, and parallel execution is necessary. However, it is challenging for humans to continuously monitor and manually optimize complex, long-running parallel executions. Moreover, although high-level and structured parallel programming aims to facilitate parallelism, several issues still need to be addressed to improve the existing abstractions. In this paper, we extend self-adaptiveness to support autonomous, online changes of parallel pattern compositions. Online self-adaptation is achieved with an online profiler that characterizes the applications, combined with a new self-adaptive strategy and a model for smooth transitions between reconfigurations. The solution provides a new abstraction layer that enables application programmers to define non-functional requirements instead of hand-tuning complex configurations. Hence, we contribute additional abstractions and flexible self-adaptation for responsiveness at run-time. The proposed solution is evaluated with applications having different processing characteristics, workloads, and configurations. The results show that it is possible to provide additional abstraction, flexibility, and responsiveness while achieving performance comparable to the best static configurations.
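At its core, such a self-adaptive strategy is a feedback loop that resizes the pattern at run-time. A toy sketch of one control step (an assumption-laden simplification, not the paper's actual strategy, which adds online profiling and smooth reconfiguration transitions):

```cpp
#include <algorithm>

// Toy control step: each period, compare aggregate service capacity against
// the observed arrival rate and grow or shrink the replica count, bounded by
// [1, max_replicas]. All rates are in items per second.
int adapt(int replicas, double arrival_rate, double per_replica_rate,
          int max_replicas) {
    double capacity = replicas * per_replica_rate;
    if (capacity < arrival_rate)                       // falling behind: scale up
        return std::min(replicas + 1, max_replicas);
    if (capacity - per_replica_rate >= arrival_rate)   // one fewer still keeps up
        return std::max(replicas - 1, 1);              // scale down, save resources
    return replicas;                                   // steady state
}
```

A real strategy must also damp oscillations and account for the cost of the reconfiguration itself, which is what the smooth-transition model in the paper addresses.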


2014 ◽  
Vol 24 (03) ◽  
pp. 1441005 ◽  
Author(s):  
Michel Steuwer ◽  
Michael Haidl ◽  
Stefan Breuer ◽  
Sergei Gorlatch

The implementation of stencil computations on modern, massively parallel systems with GPUs and other accelerators currently relies on manually tuned coding using low-level approaches like OpenCL and CUDA. This makes the development of stencil applications a complex, time-consuming, and error-prone task. We describe how stencil computations can be programmed in our SkelCL approach, which combines high-level programming abstractions with competitive performance on multi-GPU systems. SkelCL extends the OpenCL standard with three high-level features: 1) pre-implemented parallel patterns (a.k.a. skeletons); 2) container data types for vectors and matrices; 3) an automatic data (re)distribution mechanism. We introduce two new SkelCL skeletons that specifically target stencil computations, MapOverlap and Stencil, describe their use in particular application examples, discuss their efficient parallel implementation, and report experimental results on systems with multiple GPUs. Our evaluation of three real-world applications shows that stencil code written with SkelCL is considerably shorter and offers performance competitive with hand-tuned OpenCL code.
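The semantics of a radius-1 MapOverlap on a one-dimensional container can be sketched sequentially (this shows only what such a skeleton computes, not SkelCL's API or its parallel multi-GPU execution; border clamping is one of several possible boundary policies):

```cpp
#include <vector>

// Radius-1 MapOverlap semantics: out[i] = f(in[i-1], in[i], in[i+1]), with
// out-of-range indices clamped to the nearest border element. Here f is a
// fixed 3-point average, the classic smoothing stencil.
std::vector<float> stencil3(const std::vector<float>& in) {
    const int n = static_cast<int>(in.size());
    std::vector<float> out(n);
    for (int i = 0; i < n; ++i) {
        int l = (i > 0) ? i - 1 : 0;          // clamp at the left border
        int r = (i < n - 1) ? i + 1 : n - 1;  // clamp at the right border
        out[i] = (in[l] + in[i] + in[r]) / 3.0f;
    }
    return out;
}
```

With a skeleton, the programmer supplies only the per-element function; the neighbourhood access, boundary handling, and (on GPUs) the halo exchange between devices are generated by the framework.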


Author(s):  
David P. Bazett-Jones ◽  
Mark L. Brown

A multisubunit RNA polymerase enzyme is ultimately responsible for transcription initiation and elongation of RNA, but recognition of the proper start site by the enzyme is regulated by general, temporal and gene-specific trans-factors interacting at promoter and enhancer DNA sequences. To understand the molecular mechanisms which precisely regulate the transcription initiation event, it is crucial to elucidate the structure of the transcription factor/DNA complexes involved. Electron spectroscopic imaging (ESI) provides the opportunity to visualize individual DNA molecules. Enhancement of DNA contrast with ESI is accomplished by imaging with electrons that have interacted with inner shell electrons of phosphorus in the DNA backbone. Phosphorus detection at this intermediately high level of resolution (≈1 nm) permits selective imaging of the DNA, to determine whether the protein factors compact, bend or wrap the DNA. Simultaneously, mass analysis and phosphorus content can be measured quantitatively, using adjacent DNA or tobacco mosaic virus (TMV) as mass and phosphorus standards. These two parameters provide stoichiometric information relating the ratios of protein:DNA content.


Author(s):  
J. S. Wall

The forte of the Scanning Transmission Electron Microscope (STEM) is high resolution imaging with high contrast on thin specimens, as demonstrated by visualization of single heavy atoms. Of equal importance for biology is the efficient utilization of all available signals, permitting low dose imaging of unstained single molecules such as DNA. Our work at Brookhaven has concentrated on: 1) design and construction of instruments optimized for a narrow range of biological applications and 2) use of such instruments in a very active user/collaborator program. Therefore our program is highly interactive, with a strong emphasis on producing results which are interpretable with a high level of confidence. The major challenge we face at the moment is specimen preparation. The resolution of the STEM is better than 2.5 Å, but measurements of resolution vs. dose level off at a resolution of 20 Å at a dose of 10 el/Å² on a well-behaved biological specimen such as TMV (tobacco mosaic virus). To track down this problem we are examining all aspects of specimen preparation: purification of biological material, deposition on the thin film substrate, washing, fast freezing and freeze drying. As we attempt to improve our equipment/technique, we use image analysis of TMV internal controls included in all STEM samples as a monitor sensitive enough to detect even a few percent improvement. For delicate specimens, carbon films can be very harsh, leading to disruption of the sample. Therefore we are developing conducting polymer films as alternative substrates, as described elsewhere in these Proceedings. For specimen preparation studies, we have identified (from our user/collaborator program) a variety of "canary" specimens, each uniquely sensitive to one particular aspect of sample preparation, so we can attempt to separate the variables involved.


2020 ◽  
Vol 29 (4) ◽  
pp. 738-761
Author(s):  
Tess K. Koerner ◽  
Melissa A. Papesh ◽  
Frederick J. Gallun

Purpose A questionnaire survey was conducted to collect information from clinical audiologists about rehabilitation options for adult patients who report significant auditory difficulties despite having normal or near-normal hearing sensitivity. This work aimed to provide more information about what audiologists are currently doing in the clinic to manage auditory difficulties in this patient population and their views on the efficacy of recommended rehabilitation methods. Method A questionnaire survey containing multiple-choice and open-ended questions was developed and disseminated online. Invitations to participate were delivered via e-mail listservs and through business cards provided at annual audiology conferences. All responses were anonymous at the time of data collection. Results Responses were collected from 209 participants. The majority of participants reported seeing at least one normal-hearing patient per month who reported significant communication difficulties. However, few respondents indicated that their location had specific protocols for the treatment of these patients. Counseling was reported as the most frequent rehabilitation method, but results revealed that audiologists across various work settings are also successfully starting to fit patients with mild-gain hearing aids. Responses indicated that patient compliance with computer-based auditory training methods was regarded as low, with patients generally preferring device-based rehabilitation options. Conclusions Results from this questionnaire survey strongly suggest that audiologists frequently see normal-hearing patients who report auditory difficulties, but that few clinicians are equipped with established protocols for diagnosis and management. While many feel that mild-gain hearing aids provide considerable benefit for these patients, very little research has been conducted to date to support the use of hearing aids or other rehabilitation options for this unique patient population. This study reveals the critical need for additional research to establish evidence-based practice guidelines that will empower clinicians to provide a high level of clinical care and effective rehabilitation strategies to these patients.


2006 ◽  
Vol 175 (4S) ◽  
pp. 260-260
Author(s):  
Rile Li ◽  
Hong Dai ◽  
Thomas M. Wheeler ◽  
Anna Frolov ◽  
Gustavo Ayala
