The process of writing large parallel programs is complicated by the need to specify both the parallel behaviour of the program and the algorithm that is to be used to compute its result. This paper introduces evaluation strategies: lazy higher-order functions that control the parallel evaluation of non-strict functional languages. Using evaluation strategies, it is possible to achieve a clean separation between algorithmic and behavioural code.
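For example, in the following sketch the algorithm is an ordinary map, and the parallel behaviour is introduced separately by a strategy. The sketch is illustrative only: it assumes the using, parList and rseq combinators as they appear in today's Control.Parallel.Strategies library (from the parallel package), rather than the combinators defined in this paper.

    import Control.Parallel.Strategies (parList, rseq, using)

    -- Algorithmic code: an ordinary sequential definition.
    squares :: [Int] -> [Int]
    squares xs = map (^ 2) xs

    -- Behavioural code: the same algorithm, with a strategy that
    -- sparks the evaluation of each list element in parallel.
    parSquares :: [Int] -> [Int]
    parSquares xs = map (^ 2) xs `using` parList rseq

    main :: IO ()
    main = print (sum (parSquares [1 .. 10000]))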
The result is enhanced clarity and shorter parallel programs. Evaluation strategies are a very general concept: this paper shows how they can be used to model a wide range of commonly used programming paradigms, including divide-and-conquer parallelism (sketched below), pipeline parallelism, producer/consumer parallelism, and data-oriented parallelism. Because they are based on unrestricted higher-order functions, they can also capture irregular parallel structures.
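As a further illustration, divide-and-conquer parallelism can be expressed by a strategy that sparks one recursive call while the current thread evaluates the other. The sketch below is again only indicative, written against the rpar, rseq and runEval interface of the current Control.Parallel.Strategies library; the cut-off value and the naive fib are placeholder choices, and the program must be compiled with -threaded and run with +RTS -N to exploit multiple cores.

    import Control.Parallel.Strategies (rpar, rseq, runEval)

    -- Divide-and-conquer: spark one recursive call, evaluate the
    -- other locally, then combine the two results.
    pfib :: Int -> Integer
    pfib n
      | n < 25    = fib n                  -- sequential cut-off
      | otherwise = runEval $ do
          x <- rpar (pfib (n - 1))         -- sparked subproblem
          y <- rseq (pfib (n - 2))         -- evaluated by this thread
          _ <- rseq x                      -- wait for the spark
          return (x + y)

    -- Plain sequential version used below the cut-off.
    fib :: Int -> Integer
    fib n = if n < 2 then toInteger n else fib (n - 1) + fib (n - 2)

    main :: IO ()
    main = print (pfib 35)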
strategies are not
just of theoretical interest: they have evolved out of our experience in
parallelising several
large-scale parallel applications, where they have proved invaluable in
helping to manage the
complexities of parallel behaviour. Some of these applications are described
in detail here.
The largest application we have studied to date, Lolita, is a 40,000-line natural language engineering system. Initial results show that for these programs we can achieve acceptable parallel performance for relatively little programming effort.