
Estimating the overlap between dependent computations for automatic parallelization

Published online by Cambridge University Press: 06 July 2011

PAUL BONE
Affiliation:
Department of Computer Science and Software Engineering, The University of Melbourne, and National ICT Australia (NICTA), Australia (e-mail: [email protected], [email protected])
ZOLTAN SOMOGYI
Affiliation:
Department of Computer Science and Software Engineering, The University of Melbourne, and National ICT Australia (NICTA), Australia (e-mail: [email protected], [email protected])
PETER SCHACHTE
Affiliation:
Department of Computer Science and Software Engineering, The University of Melbourne, Australia (e-mail: [email protected])

Abstract

Researchers working on the automatic parallelization of programs have long known that too much parallelism can be even worse for performance than too little, because spawning a task to be run on another CPU incurs overheads. Autoparallelizing compilers have therefore long tried to use granularity analysis to ensure that they only spawn off computations whose cost will probably exceed the spawn-off cost by a comfortable margin. However, this is not enough to yield good results, because data dependencies may also limit the usefulness of running computations in parallel. If one computation blocks almost immediately and can resume only after another has completed its work, then the cost of parallelization again exceeds the benefit. We present a set of algorithms that pay attention to the second of these issues as well as the first when recognizing places in a program where it is worthwhile to execute two or more computations in parallel. Our system uses profiling information to compute the times at which a procedure call consumes the values of its input arguments and the times at which it produces the values of its output arguments. Given two calls that may be executed in parallel, our system uses the times of production and consumption of the variables they share to determine how much their executions would overlap if they were run in parallel, and therefore whether executing them in parallel is worthwhile. We have implemented this technique for Mercury in the form of a tool that uses profiling data to generate recommendations about what to parallelize, for the Mercury compiler to apply on the next compilation of the program. We present preliminary results that show that this technique can yield useful parallelization speedups, while requiring nothing more from the programmer than representative input data for the profiling run.
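The overlap estimate described in the abstract can be illustrated with a small model. The sketch below is not the authors' implementation: it assumes a simplified cost model in which the spawned call runs on another CPU after a fixed spawn overhead and blocks whenever it reaches a shared variable that the first call has not yet produced. All names (estimate_parallel_runtime, worthwhile, production_time, consumption_time, spawn_cost) are illustrative assumptions, not the paper's API.

```python
# Minimal sketch of overlap estimation between two dependent calls p and q.
# Times are in abstract profiling cost units; all names are hypothetical.

def estimate_parallel_runtime(cost_p, cost_q, shared_vars,
                              production_time, consumption_time,
                              spawn_cost=1000.0):
    """Estimate the runtime of running calls p and q in parallel.

    cost_p, cost_q      -- profiled sequential costs of the two calls
    shared_vars         -- variables produced by p and consumed by q
    production_time[v]  -- time (from p's start) at which p produces v
    consumption_time[v] -- amount of q's own work done before q needs v
    spawn_cost          -- overhead of spawning q on another CPU
    """
    q_clock = spawn_cost   # wall-clock time at which q has done no work yet
    q_done = 0.0           # amount of q's own work completed so far
    for v in sorted(shared_vars, key=lambda var: consumption_time[var]):
        # run q up to the point where it first consumes v
        q_clock += consumption_time[v] - q_done
        q_done = consumption_time[v]
        # if p has not yet produced v, q must block until p does
        q_clock = max(q_clock, production_time[v])
    # finish the remainder of q's work after the last shared variable
    q_clock += cost_q - q_done
    # the parallel conjunction finishes when both calls have finished
    return max(cost_p, q_clock)


def worthwhile(cost_p, cost_q, shared_vars, production_time, consumption_time,
               spawn_cost=1000.0, margin=1.1):
    """Recommend parallelization only if the estimated speedup clears a margin."""
    seq = cost_p + cost_q
    par = estimate_parallel_runtime(cost_p, cost_q, shared_vars,
                                    production_time, consumption_time, spawn_cost)
    return seq > margin * par


if __name__ == "__main__":
    # Example: p produces X after 800 of its 1000 units of work;
    # q needs X after only 100 of its 900 units of work.
    seq = 1000 + 900
    par = estimate_parallel_runtime(
        cost_p=1000, cost_q=900, shared_vars=["X"],
        production_time={"X": 800}, consumption_time={"X": 100},
        spawn_cost=50)
    print(seq, par)   # 1900 vs. 1600: a modest but real overlap
```

Under this simplified model, parallelization is recommended only when the estimated sequential cost exceeds the estimated parallel cost by a comfortable margin, mirroring the combined granularity and dependency criterion described in the abstract.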

Type
Regular Papers
Copyright
Copyright © Cambridge University Press 2011
