OpenVX Integration Into the Visual Development Environment

Author(s): Alexey Syschikov, Boris Sedov, Konstantin Nedovodeev, Vera Ivanova

The OpenVX standard emerged as the computer vision community's answer to the challenge of accelerating vision applications on embedded heterogeneous platforms. It is designed to exploit the potential of computer vision hardware while providing functional and performance portability. Because VIPE has a powerful model of computation, it can incorporate various other models. This makes it possible to extend a language or framework based on an incorporated model with visual programming support and to give it access to the existing performance analysis and deployment tools. The authors present the integration of OpenVX into the VIPE IDE. VIPE addresses the need to design OpenVX graphs in a natural visual form, with automatic generation of a full-fledged program that shields the programmer from writing boilerplate code. To the best of the authors' knowledge, this is the first use of a graphical notation for OpenVX programming. Developing OpenVX programs in VIPE also enables the use of its performance analysis tools.
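To illustrate the kind of boilerplate host code that visual graph design aims to hide, the following is a minimal sketch of an OpenVX program that builds, verifies and runs a one-node graph (a 3x3 Gaussian filter). The image size and the choice of kernel are illustrative assumptions, not details taken from the article.

    #include <VX/vx.h>

    int main(void)
    {
        /* Every OpenVX program starts by creating a context and a graph. */
        vx_context context = vxCreateContext();
        vx_graph   graph   = vxCreateGraph(context);

        /* Input and output data objects for the graph (sizes are illustrative). */
        vx_image input  = vxCreateImage(context, 640, 480, VX_DF_IMAGE_U8);
        vx_image output = vxCreateImage(context, 640, 480, VX_DF_IMAGE_U8);

        /* A single processing node: 3x3 Gaussian blur from the standard kernel set. */
        vx_node node = vxGaussian3x3Node(graph, input, output);

        /* Verification checks parameter types and lets the implementation optimize the graph. */
        if (vxVerifyGraph(graph) == VX_SUCCESS)
            vxProcessGraph(graph);   /* execute the graph once */

        /* Release objects in reverse order of creation. */
        vxReleaseNode(&node);
        vxReleaseImage(&output);
        vxReleaseImage(&input);
        vxReleaseGraph(&graph);
        vxReleaseContext(&context);
        return 0;
    }

In VIPE, the same graph would be drawn as nodes and edges, and code of this shape would be generated automatically.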

2010, Vol 1 (2), pp. 23-39
Author(s): Mahesh Rajan, Douglas Doerfler, Courtenay T. Vaughan, Marcus Epperson, Jeff Ogden

In a recent acquisition by DOE/NNSA, several large-capacity computing clusters, called TLCC, have been installed at the DOE labs SNL, LANL, and LLNL. The TLCC architecture, with ccNUMA multi-socket, multi-core nodes and an InfiniBand interconnect, is representative of the trend in HPC architectures. This paper examines application performance on TLCC, contrasting it with Red Storm/Cray XT4. TLCC and Red Storm share similar AMD processors and memory DIMMs; Red Storm, however, has single-socket nodes and a custom interconnect. Micro-benchmarks and performance analysis tools help explain the causes of the observed performance differences. Controlling processor and memory affinity on TLCC with the numactl utility is shown to yield significant performance gains and is essential to attenuate the detrimental impact of OS interference and cache-coherency overhead. While previous studies have investigated the impact of affinity control mostly in the context of small SMP systems, the focus of this paper is on highly parallel MPI applications.
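For readers unfamiliar with affinity control, the following is a hypothetical wrapper script of the kind commonly used to run each MPI rank under numactl. The environment variable, the assumed four NUMA nodes per host, and the launch line are illustrative assumptions (here, an Open MPI launcher), not details taken from the paper.

    #!/bin/sh
    # numa_wrap.sh - hypothetical per-rank wrapper (assumes Open MPI, 4 NUMA nodes per host)
    # Bind this rank's CPUs and memory allocations to one NUMA node.
    node=$(( OMPI_COMM_WORLD_LOCAL_RANK % 4 ))
    exec numactl --cpunodebind=$node --membind=$node "$@"

A job would then be launched as, for example, "mpirun -np 64 ./numa_wrap.sh ./app", so that each rank's memory stays local to the socket it runs on.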

