
A fully distributed unstructured Navier-Stokes solver for large-scale aeroelasticity computations

Published online by Cambridge University Press:  04 July 2016

G. Barakos, M. Vahdati, A.I. Sayma, C. Bréard and M. Imregun
Affiliation:
Centre for Vibration Engineering, Mechanical Engineering Department, Imperial College of Science, Technology and Medicine, London, UK

Abstract

This paper presents the development and validation of a parallel unsteady flow and aeroelasticity code for large-scale numerical models used in turbomachinery applications. The work is based on an existing unstructured Navier-Stokes solver developed over the past ten years by the Aeroelasticity Research Group at the Imperial College Vibration University Technology Centre. The single-program multiple-data (SPMD) paradigm was adopted for the parallelisation of the solver, and several validation cases were considered. The computational mesh was divided into sub-domains using a domain decomposition technique. The performance and numerical accuracy of the parallel solver were validated across several computer platforms for various problem sizes. In cases where the solution could be obtained on a single CPU, the serial and parallel versions of the code were found to produce identical results. Studies on up to 32 CPUs showed varying levels of parallelisation efficiency, an almost linear speed-up being obtained in some cases. Finally, an industrial configuration, a 17-blade-row turbine with a 47 million-point mesh, was discussed to illustrate the potential of the proposed large-scale modelling methodology.
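The domain decomposition step described in the abstract can be illustrated with a simple recursive coordinate bisection (RCB). In practice, solvers of this kind typically use graph-based partitioners such as METIS to minimise inter-domain communication, so the sketch below is only an illustrative stand-in; the function name `rcb_partition` and the toy 8×8 mesh are hypothetical, not from the paper.

```python
# Minimal sketch: split a 2D point cloud into near-equal sub-domains by
# recursive coordinate bisection, alternating the split axis at each level.
# This mimics, in spirit, dividing a computational mesh into sub-domains
# that can each be assigned to one CPU.

def rcb_partition(points, n_parts, axis=0):
    """Recursively split a list of (x, y) points into n_parts sub-domains
    of near-equal size, cutting at the median along the current axis."""
    if n_parts == 1:
        return [points]
    # Sort along the current axis and cut so the left side gets a share
    # proportional to the number of parts assigned to it.
    ordered = sorted(points, key=lambda p: p[axis])
    cut = len(ordered) * (n_parts // 2) // n_parts
    left, right = ordered[:cut], ordered[cut:]
    next_axis = (axis + 1) % 2
    return (rcb_partition(left, n_parts // 2, next_axis)
            + rcb_partition(right, n_parts - n_parts // 2, next_axis))

# Toy "mesh": an 8x8 grid of points, decomposed into 4 sub-domains.
mesh = [(x, y) for x in range(8) for y in range(8)]
parts = rcb_partition(mesh, 4)
sizes = [len(p) for p in parts]  # → [16, 16, 16, 16]
```

Load balance (equal-sized sub-domains) is only half the problem; a production partitioner also minimises the surface area between sub-domains, since points on those boundaries must be exchanged between CPUs at every iteration.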

Type
Research Article
Copyright
Copyright © Royal Aeronautical Society 2001 

