Aho-Corasick String Matching on Shared and Distributed-Memory Parallel Architectures

2012 ◽ Vol 23 (3) ◽ pp. 436-443 ◽ Author(s): Antonino Tumeo, Oreste Villa, Daniel G. Chavarria-Miranda
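The title names the Aho-Corasick automaton. For readers unfamiliar with it, a minimal sequential sketch is given below; this is my illustration of the classic algorithm (trie, failure links, single scan), not the paper's shared- or distributed-memory implementation.

```python
from collections import deque

def build_automaton(patterns):
    # goto[s] maps a character to the next state; out[s] holds the
    # patterns that end at state s.
    goto = [{}]
    out = [set()]
    for pat in patterns:
        s = 0
        for ch in pat:
            if ch not in goto[s]:
                goto.append({})
                out.append(set())
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].add(pat)
    # Breadth-first pass computes failure links (longest proper suffix
    # that is also a root path) and merges output sets along them.
    fail = [0] * len(goto)
    queue = deque(goto[0].values())
    while queue:
        s = queue.popleft()
        for ch, t in goto[s].items():
            queue.append(t)
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[t] = goto[f].get(ch, 0)
            out[t] |= out[fail[t]]
    return goto, fail, out

def search(text, goto, fail, out):
    # One left-to-right pass reports every (start_index, pattern) match.
    matches = []
    s = 0
    for i, ch in enumerate(text):
        while s and ch not in goto[s]:
            s = fail[s]
        s = goto[s].get(ch, 0)
        for pat in out[s]:
            matches.append((i - len(pat) + 1, pat))
    return matches
```

The single-pass scan over `text` is what makes the algorithm attractive for the parallel architectures the paper targets: the text can be split across processors, with only small overlaps at chunk boundaries.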
1991 ◽ Vol 01 (02) ◽ pp. 103-111 ◽ Author(s): Michel Cosnard, Afonso Ferreira

We propose new models of SIMD distributed-memory parallel computers. We define concurrent read/write access for machines other than the PRAM as well. Our goal is to unify the description of abstract models of parallel machines so as to build a complexity theory in which all models can be soundly compared. As an example, we introduce the Hypercube Random Access Machine with concurrent read/write capabilities, and show that it can solve some problems faster than the PRAM.
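The concurrent-write semantics that such models must pin down can be illustrated with the three classic CRCW conflict-resolution policies. The toy simulation below resolves simultaneous writes to one shared cell; the policy names are standard, but the code is my illustration, not the abstract machine defined in the paper.

```python
def crcw_write(cell, writes, policy="priority"):
    """Resolve simultaneous writes (proc_id, value) to one shared cell.

    policy: 'common'    - all writers must agree on the value;
            'arbitrary' - any one write succeeds (here: the first listed);
            'priority'  - the lowest-numbered processor wins.
    """
    if not writes:
        return cell  # no writer this step: the cell keeps its value
    if policy == "common":
        values = {v for _, v in writes}
        if len(values) != 1:
            raise ValueError("COMMON CRCW: conflicting values written")
        return values.pop()
    if policy == "arbitrary":
        return writes[0][1]
    if policy == "priority":
        return min(writes)[1]  # tuple order: smallest proc_id first
    raise ValueError(f"unknown policy: {policy}")
```

The same algorithm can have different running times (or even different correctness arguments) under each policy, which is why a unified complexity theory has to state the policy explicitly.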


1996 ◽ Vol 5 (2) ◽ pp. 147-160 ◽ Author(s): Steven M. Fitzgerald, Rodney R. Oldehoeft

Applicative languages have been proposed for defining algorithms for parallel architectures because they are implicitly parallel and lack side effects. However, straightforward implementations of applicative-language compilers may induce large amounts of copying to preserve program semantics. This unnecessary copying of data can increase both the execution time and the memory requirements of an application. To eliminate it, the Sisal compiler uses both build-in-place and update-in-place analyses, which remove unnecessary array copy operations through compile-time analysis. Both build-in-place and update-in-place are based on hierarchical ragged arrays, i.e., the vector-of-vectors array model. Although this array model is convenient for certain applications, it precludes many optimizations, e.g., vectorization. To compensate for this deficiency, new languages, such as Sisal 2.0, have extended array models that allow both high-level array operations and efficient implementations. In this article, we introduce a new method for update-in-place analysis that is applicable to arrays stored either in hierarchical or in contiguous storage. Consequently, the array model appropriate for an application can be selected without loss of performance. Moreover, our analysis extends more readily to distributed memory and to large software systems.


2003 ◽ Vol 13 (03) ◽ pp. 437-448 ◽ Author(s): Antonio J. Dorta, Jesús A. González, Casiano Rodríguez, Francisco de Sande

The skeletal approach to developing parallel applications has proven to be one of the most successful and has been widely explored in recent years. Its goal is a parallel programming methodology based on a restricted set of parallel constructs. This paper presents llc, a skeletal parallel language, the theoretical model that supports the language, and a prototype implementation of its compiler. The language is directive-based, uses a C-like syntax, and supports the most widely used skeletal constructs. llCoMP is a source-to-source compiler for the language, built on top of MPI. We evaluate the performance of our prototype compiler on four different parallel architectures with three algorithms, and we present results for both shared- and distributed-memory architectures. Our model guarantees the portability of the language to any platform, and its simplicity greatly eases its implementation.
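The restricted-constructs philosophy can be shown with the simplest skeleton, a task farm: the programmer supplies only the sequential worker, and the skeleton owns the parallel structure, so the same code can be retargeted to different backends. This Python sketch (using threads) is my illustration of the general idea, not llc's directive syntax or its MPI backend.

```python
from concurrent.futures import ThreadPoolExecutor

def farm(worker, tasks, workers=4):
    # 'farm' skeleton: apply an independent worker to every task in
    # parallel and return the results in task order. The user writes no
    # parallel code; the skeleton encapsulates the coordination.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(worker, tasks))
```

For example, `farm(lambda x: x * x, range(8))` returns the same list a sequential map would, which is exactly the portability guarantee skeletal models aim for.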

