Logic programming languages, such as Prolog, offer great potential for the
exploitation of implicit parallelism. One of the most noticeable sources of
implicit parallelism in Prolog programs is or-parallelism. Or-parallelism
arises from the simultaneous evaluation of a subgoal call against the clauses
that match that call. Nowadays, multicores and clusters of multicores are
becoming the norm and, although many parallel Prolog systems have been
developed in the past, to the best of our knowledge, none of them was
specifically designed to exploit the combination of shared and distributed
memory architectures. Conceptually, an or-parallel Prolog system consists of
two components: an or-parallel engine (i.e., a set of independent Prolog
engines, which we name a team of workers) and a scheduler. In this work, we
propose a team-based scheduling model to efficiently exploit parallelism
between different or-parallel engines running on top of clusters of
multicores. Our proposal defines a layered approach in which a second-level
scheduler specifies a clean interface for scheduling work between the base
or-parallel engines, thus enabling different scheduling combinations to be
used for distributing work among the workers within a team and among teams.