
A Parallel Implementation of Tight-Binding Molecular Dynamics Based on Reordering of Atoms and the Lanczos Eigen-Solver

Published online by Cambridge University Press:  10 February 2011

Luciano Colombo
Affiliation: INFM and Dipartimento di Fisica, Università di Milano, via Celoria 16, 20133 Milano, Italy
William Sawyer
Affiliation: CSCS-ETH, Swiss Scientific Computing Center, La Galleria, 6928 Manno, Switzerland
Djordje Maric
Affiliation: CSCS-ETH, Swiss Scientific Computing Center, La Galleria, 6928 Manno, Switzerland

Abstract

We introduce an efficient and scalable parallel implementation of tight-binding molecular dynamics (TBMD) which employs reordering of the atoms in order to maximize data locality of the distributed tight-binding (TB) Hamiltonian matrix. Reordering the atom labels allows the new algorithm to scale well on parallel machines, since most of the TB hopping integrals for a given atom are local to the processing element (PE), thereby minimizing communication. The sparse storage format and the distribution of the required eigenvectors reduce the memory requirements per PE, and together with a stabilized parallel Lanczos eigen-solver they allow consideration of large problem sizes relevant to materials science. In addition, the implementation allows the calculation of the full spectrum of individual eigenvalues and eigenvectors of the TB matrix at each time step. This feature is a key issue when the dielectric and optical response must be computed during a TBMD simulation. We present a benchmark of our code and an analysis of its overall efficiency.
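The abstract names two computational ingredients: relabeling the atoms so that interacting atoms receive nearby indices (which clusters the non-zeros of the sparse TB Hamiltonian and keeps most hopping integrals local to a PE) and a Lanczos iteration on that sparse matrix. The listing below is a minimal serial sketch of these ideas only, not the authors' parallel implementation: the single-orbital toy Hamiltonian, the cutoff radius, and all function names (build_toy_hamiltonian, bfs_reorder, lanczos) are illustrative assumptions, and the plain Lanczos with full reorthogonalization stands in for the stabilized parallel solver described in the paper.

# Minimal serial sketch (assumed names and toy model, not the authors' code):
# (i) BFS-based relabeling in the spirit of Cuthill-McKee to shrink the
#     bandwidth of the TB Hamiltonian, (ii) a basic Lanczos iteration.
import numpy as np
from collections import deque

def build_toy_hamiltonian(positions, cutoff=1.1, onsite=0.0, hopping=-1.0):
    """Toy single-orbital TB Hamiltonian: constant hopping inside a cutoff radius."""
    n = len(positions)
    H = np.zeros((n, n))
    for i in range(n):
        H[i, i] = onsite
        for j in range(i + 1, n):
            if np.linalg.norm(positions[i] - positions[j]) < cutoff:
                H[i, j] = H[j, i] = hopping
    return H

def bfs_reorder(H):
    """Breadth-first relabeling of the hopping graph so that interacting atoms
    get adjacent indices, clustering the non-zeros of H near the diagonal."""
    n = H.shape[0]
    adjacency = [np.nonzero(H[i])[0] for i in range(n)]
    visited, order = np.zeros(n, dtype=bool), []
    for start in np.argsort([len(a) for a in adjacency]):  # start at low-degree atoms
        if visited[start]:
            continue
        queue = deque([start]); visited[start] = True
        while queue:
            i = queue.popleft(); order.append(i)
            for j in sorted(adjacency[i], key=lambda k: len(adjacency[k])):
                if not visited[j]:
                    visited[j] = True; queue.append(j)
    perm = np.array(order)
    return H[np.ix_(perm, perm)], perm

def lanczos(H, num_steps=30, seed=0):
    """Plain Lanczos with full reorthogonalization; returns the Ritz values
    approximating eigenvalues of the symmetric matrix H."""
    rng = np.random.default_rng(seed)
    n = H.shape[0]
    m = min(num_steps, n)
    V = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(m)
    v = rng.standard_normal(n); v /= np.linalg.norm(v)
    for k in range(m):
        V[:, k] = v
        w = H @ v
        alpha[k] = v @ w
        w -= V[:, :k + 1] @ (V[:, :k + 1].T @ w)  # full reorthogonalization
        beta[k] = np.linalg.norm(w)
        if beta[k] < 1e-12:
            m = k + 1
            break
        v = w / beta[k]
    T = np.diag(alpha[:m]) + np.diag(beta[:m - 1], 1) + np.diag(beta[:m - 1], -1)
    return np.linalg.eigvalsh(T)

if __name__ == "__main__":
    # Random atoms in a box; compare matrix bandwidth before and after reordering.
    atoms = np.random.default_rng(1).uniform(0.0, 4.0, size=(60, 3))
    H = build_toy_hamiltonian(atoms)
    H_reordered, perm = bfs_reorder(H)
    bandwidth = lambda A: max(abs(i - j) for i, j in zip(*np.nonzero(A)))
    print("bandwidth before/after reordering:", bandwidth(H), bandwidth(H_reordered))
    print("lowest Ritz values:", lanczos(H_reordered)[:5])

In a distributed setting the same relabeling is what keeps the rows of H owned by a PE coupled mostly to locally stored atoms, which is the locality property the abstract relies on; the serial sketch only illustrates the bandwidth reduction that makes this possible.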

Type: Research Article
Copyright: © Materials Research Society 1996

