
1 - Introduction

Published online by Cambridge University Press:  24 November 2021

David Landau, University of Georgia
Kurt Binder, Johannes Gutenberg Universität Mainz, Germany

1.1 What is a Monte Carlo Simulation?

In a Monte Carlo simulation we attempt to follow the ‘time dependence’ of a model for which change, or growth, does not proceed in some rigorously predefined fashion (e.g. according to Newton’s equations of motion) but rather in a stochastic manner which depends on a sequence of random numbers generated during the simulation. With a second, different sequence of random numbers the simulation will not give identical results but will yield values which agree with those obtained from the first sequence to within some ‘statistical error’. A very large number of different problems fall into this category: in percolation an empty lattice is gradually filled with particles by placing a particle on the lattice randomly with each ‘tick of the clock’. Many questions may then be asked about the resulting ‘clusters’ which are formed of neighboring occupied sites. Particular attention has been paid to the determination of the ‘percolation threshold’, i.e. the critical concentration of occupied sites for which an ‘infinite percolating cluster’ first appears. A percolating cluster is one which reaches from one boundary of a (macroscopic) system to the opposite one. The properties of such objects are of interest in the context of diverse physical problems such as the conductivity of random mixtures, flow through porous rocks, the behavior of dilute magnets, etc.

Another example is diffusion limited aggregation (DLA), where a particle executes a random walk in space, taking one step at each time interval, until it encounters a ‘seed’ mass and sticks to it. The growth of this mass may then be studied as many random walkers are turned loose. The ‘fractal’ properties of the resulting object are of real interest, and while there is no accepted analytical theory of DLA to date, computer simulation is the method of choice. In fact, the phenomenon of DLA was first discovered by Monte Carlo simulation.
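
As a concrete illustration (our sketch, not part of the text), the percolation experiment just described can be coded in a few dozen lines: sites of an L × L square lattice are occupied one per ‘tick of the clock’, and after each tick a breadth-first search asks whether an occupied cluster spans from the top boundary to the bottom. All function names are ours, and a production study would use far more efficient union-find bookkeeping; this version simply makes the idea explicit.

```python
import random
from collections import deque

def spans(occupied, L):
    """Breadth-first search: does a cluster of occupied sites
    connect the top row to the bottom row?"""
    seen = set((0, j) for j in range(L) if (0, j) in occupied)
    queue = deque(seen)
    while queue:
        i, j = queue.popleft()
        if i == L - 1:
            return True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (i + di, j + dj)
            if nb in occupied and nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return False

def fill_until_percolation(L=64, seed=0):
    """Occupy random empty sites one 'tick' at a time and return the
    concentration at which a spanning cluster first appears."""
    rng = random.Random(seed)
    sites = [(i, j) for i in range(L) for j in range(L)]
    rng.shuffle(sites)
    occupied = set()
    for n, site in enumerate(sites, start=1):
        occupied.add(site)
        if spans(occupied, L):   # rechecking every tick is wasteful but clear
            return n / (L * L)
    return 1.0

# A single sample fluctuates around the known threshold p_c ≈ 0.5927 for
# site percolation on the square lattice; averaging over many random
# sequences sharpens the estimate.
print(fill_until_percolation())
```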

Considering problems of statistical mechanics, we may be attempting to sample a region of phase space in order to estimate certain properties of the model, although we may not be moving in phase space along the same path which an exact solution to the time dependence of the model would yield. Remember that the task of equilibrium statistical mechanics is to calculate thermal averages of (interacting) many-particle systems: Monte Carlo simulations can do that, taking proper account of statistical fluctuations and their effects in such systems. Many of these models will be discussed in more detail in later chapters, so we shall not provide further details here. Since the accuracy of a Monte Carlo estimate depends upon the thoroughness with which phase space is probed, improvement may be obtained simply by running the calculation a little longer to increase the number of samples. Unlike in the application of many analytic techniques (e.g. perturbation theory, for which the extension to higher order may be prohibitively difficult), the improvement of the accuracy of Monte Carlo results is possible not just in principle but also in practice.
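
To make the last point concrete, the following sketch (ours; the integral is chosen arbitrarily) uses simple-sampling Monte Carlo to estimate ∫₀¹ eˣ dx = e − 1. The standard error of the mean shrinks as 1/√N, so quadrupling the run length roughly halves the statistical error: exactly the kind of ‘run a little longer’ improvement described above.

```python
import math
import random

def mc_integral(n, rng):
    """Simple-sampling estimate of the integral of exp(x) over [0, 1],
    returned together with the standard error of the mean."""
    samples = [math.exp(rng.random()) for _ in range(n)]
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return mean, math.sqrt(var / n)  # statistical error decreases as 1/sqrt(n)

rng = random.Random(42)
exact = math.e - 1.0
for n in (10**2, 10**4, 10**6):
    est, err = mc_integral(n, rng)
    print(f"N={n:>7}: estimate={est:.5f}  error={err:.5f}  deviation={abs(est - exact):.5f}")
```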

1.2 A Comment on the History of Monte Carlo Simulations

The concept of Monte Carlo simulation goes back to the 18th century. The 1733 edition of Histoire de L’Académie Royale des Sciences contains a report by Georges Louis Leclerc, Comte de Buffon, on a technique that could be used to estimate π by throwing needles onto a floor composed of parallel boards and measuring the fraction of the needles that fell only on top of a single board. (The original article spoke of throwing ‘baguettes’, but this term, meaning needles or rods, was not used to describe long French bread until the 20th century!) Serious development of Monte Carlo methods began at Los Alamos National Laboratory in the 1940s as von Neumann and Ulam attempted to model neutron transport. The first computer used for Monte Carlo simulations was an analog model dubbed the ‘Fermiac’. Fig. 1.1 (left) shows Ulam holding the only Fermiac that was ever built, and, for comparison, Fig. 1.1 (right) shows a modern 0.2 exaflop supercomputer used for Monte Carlo simulations today.
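
Buffon’s experiment is easy to reproduce on a computer. The sketch below (our illustration) throws virtual needles of length l onto boards of width d ≥ l; a needle crosses a board edge with probability 2l/(πd), so π can be estimated from the observed crossing fraction. Note the mild circularity: we use π itself to draw a random needle angle, so the sketch illustrates the statistics of the experiment rather than a self-contained computation of π.

```python
import math
import random

def buffon_pi(n_throws, needle=1.0, spacing=1.0, seed=1):
    """Estimate pi from Buffon's needle: P(cross) = 2*needle/(pi*spacing)
    for needle <= spacing."""
    rng = random.Random(seed)
    crossings = 0
    for _ in range(n_throws):
        y = rng.uniform(0.0, spacing / 2.0)      # center-to-nearest-edge distance
        theta = rng.uniform(0.0, math.pi / 2.0)  # acute angle with the board edges
        if y <= (needle / 2.0) * math.sin(theta):
            crossings += 1
    return 2.0 * needle * n_throws / (spacing * crossings)

print(buffon_pi(1_000_000))  # fluctuates around 3.14159...
```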

Fig. 1.1 (left) Stan Ulam holding the Fermiac (Photo courtesy of Los Alamos National Laboratory); (right) Summit Supercomputer at Oak Ridge National Laboratory. (Photo courtesy of Oak Ridge National Laboratory).

1.3 What Problems can We Solve with It?

The range of different physical phenomena which can be explored using Monte Carlo methods is exceedingly broad. Models which either naturally or through approximation can be discretized can be considered. The motion of individual atoms may be examined directly; e.g. in a binary (AB) metallic alloy where one is interested in interdiffusion or unmixing kinetics (if the alloy was prepared in a thermodynamically unstable state), the random hopping of atoms to neighboring sites can be modeled directly. This problem is complicated because the jump rates of the different atoms depend on the locally differing environment. Of course, in this description the quantum mechanics of atoms with potential barriers in the eV range is not explicitly considered, and the sole effect of phonons (lattice vibrations) is to provide a ‘heat bath’ which provides the excitation energy for the jump events. Because of a separation of time scales (the characteristic times between jumps are orders of magnitude larger than atomic vibration periods), this approach provides a very good approximation. The same kind of arguments hold true for growth phenomena involving macroscopic objects, such as the DLA growth of colloidal particles; since their masses are orders of magnitude larger than atomic masses, the motion of colloidal particles in fluids is well described by classical, random Brownian motion. These systems are hence well suited to study by Monte Carlo simulations which use random numbers to realize random walks.

The thermal properties of a fluid may be studied by considering ‘blocks’ of fluid as individual particles, but these blocks will be far larger than individual molecules. As an example, we consider ‘micelle formation’ in lattice models of microemulsions (water–oil–surfactant fluid mixtures) in which each surfactant molecule may be modeled by two ‘dimers’ on the lattice (two occupied nearest neighbor sites on the lattice). Different effective interactions allow one dimer to mimic the hydrophilic group and the other dimer the hydrophobic group of the surfactant molecule. This model then allows the study of the size and shape of the aggregates of surfactant molecules (the micelles) as well as the kinetic aspects of their formation. In reality, this process is quite slow, so a deterministic molecular dynamics simulation (i.e. numerical integration of Newton’s second law) is very difficult, if feasible at all. This example shows that part of the ‘art’ of simulation is the appropriate choice (or invention) of a suitable (coarse-grained) model.

Large collections of interacting classical particles are directly amenable to Monte Carlo simulation, and the behavior of interacting quantized particles is being studied either by transforming the system into a pseudo-classical model or by considering permutation properties directly. These considerations will be discussed in more detail in later chapters. Equilibrium properties of systems of interacting atoms have been extensively studied, as have a wide range of models for simple and complex fluids, magnetic materials, metallic alloys, adsorbed surface layers, etc. More recently, polymer models have been studied with increasing frequency; note that the simplest model of a flexible polymer is a random walk, an object which is well suited for Monte Carlo simulation. Furthermore, some of the most significant advances in understanding the theory of elementary particles have been made using Monte Carlo simulations of lattice gauge models.
A topic which finds increasing applications is the solution of the Schrödinger equation for many interacting quantum particles by Monte Carlo methods.
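
Since the simplest model of a flexible polymer mentioned above is a random walk, a few lines of code (ours, purely illustrative) already reproduce the classic result that the mean-square end-to-end distance of an ideal N-step walk on a lattice grows linearly, ⟨R²⟩ = Na² (with a the lattice spacing).

```python
import random

def mean_square_end_to_end(n_steps, n_walks=10_000, seed=7):
    """<R^2> for unrestricted random walks on the square lattice.
    For the ideal (non-self-avoiding) walk, <R^2> = n_steps."""
    rng = random.Random(seed)
    moves = ((1, 0), (-1, 0), (0, 1), (0, -1))
    total = 0
    for _ in range(n_walks):
        x = y = 0
        for _ in range(n_steps):
            dx, dy = rng.choice(moves)
            x += dx
            y += dy
        total += x * x + y * y
    return total / n_walks

for n in (16, 64, 256):
    print(n, mean_square_end_to_end(n))  # tracks n, up to statistical error
```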

1.4 What Difficulties Will We Encounter?

1.4.1 Limited computer time and memory

Because of limits on computer speed there are some problems which are inherently not suited to computer simulation at this time. A simulation which requires years of CPU time on whatever machine is available is simply impractical. Similarly, a calculation which requires memory far exceeding what is available can be carried out only by using very sophisticated programming techniques which slow down running speeds and greatly increase the probability of errors. It is therefore important that the user first consider the requirements of both memory and CPU time before embarking on a project, to ascertain whether or not there is a realistic possibility of obtaining the resources to simulate the problem properly. Of course, with the rapid advances being made by the computer industry, it may be necessary to wait only a few years for computer facilities to catch up to your needs. Sometimes making a problem tractable requires the invention of a new, more efficient simulation algorithm. Of course, developing new strategies to overcome such difficulties constitutes an exciting field of research by itself.
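
Such a feasibility check can even be scripted. The sketch below is entirely our illustration, with made-up resource limits and a throughput number that would in practice be measured in a short test run; it merely shows the arithmetic of the memory and CPU-time estimate.

```python
def feasible(n_sites, bytes_per_site, updates_per_site, updates_per_sec,
             mem_limit=64 * 2**30, time_limit=30 * 86400):
    """Back-of-the-envelope check: does the simulation fit in memory
    (bytes) and finish within the wall-clock budget (seconds)?"""
    memory = n_sites * bytes_per_site
    seconds = n_sites * updates_per_site / updates_per_sec
    print(f"memory: {memory / 2**30:.1f} GiB, time: {seconds / 86400:.1f} days")
    return memory <= mem_limit and seconds <= time_limit

# Hypothetical example: a 1024^3 spin lattice, 1 byte per spin,
# 10^4 sweeps, and a measured rate of 10^8 spin updates per second.
print(feasible(1024**3, 1, 10**4, 10**8))
```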

1.4.2 Statistical and other errors

Assuming that the project can be done, there are still potential sources of error which must be considered. These difficulties will arise in many different situations with different algorithms, so we wish to mention them briefly here without reference to any specific simulation approach. All computers operate with limited word length and hence limited precision for numerical values of any variable. Truncation and round-off errors may in some cases lead to serious problems. In addition, there are statistical errors which arise as an inherent feature of the simulation algorithm due to the finite number of members in the ‘statistical sample’ which is generated. These errors must be estimated, and then a ‘policy’ decision must be made: should more CPU time be used to reduce the statistical errors, or should the available CPU time be used to study the properties of the system under other conditions? Lastly, there may be systematic errors. In this text we shall not concern ourselves with tracking down errors in computer programming – although the practitioner must make a special effort to eliminate any such errors – but with more fundamental problems. An algorithm may fail to treat a particular situation properly, e.g. due to the finite number of particles which are simulated. These various sources of error will be discussed in more detail in later chapters.
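
The round-off problem is easy to demonstrate. In the sketch below (generic, not tied to any particular simulation) the decimal 0.1, which has no exact binary representation, is accumulated ten million times; the naive running sum drifts away from the compensated summation provided by Python’s math.fsum.

```python
import math
from itertools import repeat

n = 10_000_000
naive = 0.0
for _ in range(n):
    naive += 0.1                 # each addition incurs a small rounding error
compensated = math.fsum(repeat(0.1, n))  # correctly rounded sum of the summands

print(f"naive sum:         {naive!r}")
print(f"compensated sum:   {compensated!r}")
print(f"accumulated drift: {abs(naive - compensated):.3e}")
```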

1.4.3 Knowledge that every practitioner should have

At this point it should be clear that, to use this book for practical work, the reader needs some knowledge of computer programming as well as experience in using computers to solve problems with computational physics methods. Since the early days of computer simulation, this background knowledge has expanded greatly. A rich literature is available to fill any gaps: a useful text for students by Hartmann (2009), for example, provides an introduction to basic concepts of software engineering, general aspects of algorithms and data structures, and gives hints on debugging, basic data analysis, data plotting, data fitting, etc. In the present book we will assume that the reader already has such fundamental knowledge and will further increase the relevant skills in the process of implementing the algorithms described in the following pages.

1.5 What Strategy should We Follow in Approaching a Problem?

Most new simulations face hidden pitfalls and difficulties which may not be apparent in the early phases of the work. It is therefore often advisable to begin with a relatively simple program and to use relatively small system sizes and modest running times. Sometimes there are special values of the parameters for which the answers are already known (either from analytic solutions or from previous, high quality simulations), and these cases can be used to test a new simulation program. By proceeding in this manner one is able to uncover the parameter ranges of interest and the unexpected difficulties which are present. It is then possible to refine the program and to increase the running times. Thus both CPU time and human time can be used most effectively. It makes little sense, of course, to spend a month rewriting a computer program to save only a few minutes of CPU time. If such test runs show that a new problem is not tractable with reasonable effort, it may be desirable to improve the situation by redefining the model or by redirecting the focus of the study. For example, in polymer physics the study of short chains (oligomers) by a given algorithm may still be feasible even though the consideration of huge macromolecules may be impossible.
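
The strategy of testing against a known special case can be made concrete. The sketch below (our illustration; all parameters are arbitrary) runs a Metropolis simulation of the one-dimensional Ising chain with J = 1 and periodic boundaries, for which the energy per spin in the large-N limit is known exactly, E/N → −tanh(βJ); agreement at this point lends confidence before the program is extended to harder problems. (The Metropolis algorithm itself is developed in later chapters.)

```python
import math
import random

def ising_chain_energy(n_spins=1000, beta=0.5, sweeps=2000, seed=3):
    """Metropolis simulation of the 1D Ising chain (J = 1, periodic
    boundaries); returns the average energy per spin."""
    rng = random.Random(seed)
    spins = [rng.choice((-1, 1)) for _ in range(n_spins)]
    energy_sum, n_meas = 0.0, 0
    for sweep in range(sweeps):
        for _ in range(n_spins):
            i = rng.randrange(n_spins)
            # energy change on flipping spin i (index -1 wraps around)
            dE = 2.0 * spins[i] * (spins[i - 1] + spins[(i + 1) % n_spins])
            if dE <= 0.0 or rng.random() < math.exp(-beta * dE):
                spins[i] = -spins[i]
        if sweep >= sweeps // 2:  # discard the first half for equilibration
            e = -sum(spins[i] * spins[(i + 1) % n_spins] for i in range(n_spins))
            energy_sum += e / n_spins
            n_meas += 1
    return energy_sum / n_meas

beta = 0.5
print("simulated:", ising_chain_energy(beta=beta))
print("exact    :", -math.tanh(beta))  # large-N limit
```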

1.6 How do Simulations relate to Theory and Experiment?

In many cases theoretical treatments are available for models for which there is no perfect physical realization (at least at the present time). In this situation the only possible test for an approximate theoretical solution is to compare with ‘data’ generated from a computer simulation. As an example we wish to mention activity in growth models, such as diffusion limited aggregation, for which a very large body of simulation results already existed before corresponding experiments were carried out. It is not an exaggeration to say that interest in this field was created by simulations. Even more dramatic examples are those of reactor meltdown or large scale nuclear war: although we want to know what the results of such events would be, we do not want to carry out the experiments.

There are also real physical systems which are sufficiently complex that they are not presently amenable to theoretical treatment. An example is the problem of understanding the specific behavior of a system with many competing interactions which is undergoing a phase transition. A model Hamiltonian which is believed to contain all the essential features of the physics may be proposed, and its properties may then be determined from simulations. If the simulation (which now plays the role of theory) disagrees with experiment, then a new Hamiltonian must be sought. An important advantage of simulations is that different physical effects which are simultaneously present in real systems may be isolated and, through separate consideration by simulation, may provide a much better understanding.

Consider, for example, the phase behavior of polymer blends – materials which have ubiquitous applications in the plastics industry. The miscibility of different macromolecules is a challenging problem in statistical physics in which there is a subtle interplay between complicated enthalpic contributions (strong covalent bonds compete with weak van der Waals forces, and Coulombic interactions and hydrogen bonds may be present as well) and entropic effects (configurational entropy of flexible macromolecules, entropy of mixing, etc.). Real materials are very difficult to understand because of the various asymmetries between the constituents of such mixtures (e.g. in shape and size, degree of polymerization, flexibility, etc.). Simulations of simplified models can ‘switch off’ or ‘switch on’ these effects and thus determine the particular consequences of each contributing factor.

We wish to emphasize that the aim of simulations is not to provide better ‘curve fitting’ to experimental data than analytic theory does. The goal is to create an understanding of physical properties and processes which is as complete as possible, making use of the perfect control of ‘experimental’ conditions in the ‘computer experiment’ and of the possibility of examining every aspect of the system configurations in detail. The desired result is then the elucidation of the physical mechanisms that are responsible for the observed phenomena. We therefore view the relationship between theory, experiment, and simulation to be like that of the vertices of a triangle, as shown in Fig. 1.2: each is distinct, but each is strongly connected to the other two.

Fig. 1.2 Schematic view of the relationship between theory, experiment, and computer simulation.

1.7 Perspective

The Monte Carlo method has had a considerable history in physics. As far back as 1949 a review of the use of Monte Carlo simulations using ‘modern computing machines’ was presented by Metropolis and Ulam (1949). In addition to giving examples, they also emphasized the advantages of the method. Of course, in the following decades the kinds of problems they discussed could be treated with far greater sophistication than was possible in the first half of the twentieth century, and many such studies will be described in succeeding chapters. Now, Monte Carlo simulations are reaching into areas that are far afield of physics. In succeeding chapters we will also provide the reader with a taste of what is possible with these techniques in other areas of investigation. It is also quite telling that there are now several software products on the market that perform simple Monte Carlo simulations in concert with widely distributed spreadsheet software on PCs.

With the rapidly increasing growth of computer power which we are now seeing, coupled with the steady drop in price, it is clear that computer simulations will be able to rapidly increase in sophistication to allow more subtle comparisons to be made. Even now, the combination of new algorithms and new high performance computing platforms has routinely allowed simulations to be performed for more than 10⁶ (in special cases exceeding 3 × 10¹¹ (Kadau et al., 2006)) particles (spins). As a consequence it is no longer possible to view the system and look for ‘interesting’ phenomena without the use of sophisticated visualization techniques. The sheer volume of data that we are capable of producing has also reached unmanageable proportions. In order to permit further advances in the interpretation of simulations, it is likely that the inclusion of intelligent ‘agents’ (in the computer science sense) for steering and visualization, along with new data structures, will be needed. Such topics are beyond the scope of the text, but the reader should be aware of the need to develop these new strategies.

References

Hartmann, A. (2009), Practical Guide to Computer Simulations (World Scientific, Singapore).
Histoire de l’Académie royale des sciences, ANNÉE M.DCCXXXIII (Académie royale des sciences, France, 1735), https://books.google.de/books?id=GOAEAAAAQAAJ.
Kadau, K., Germann, T. C., and Lomdahl, P. S. (2006), Int. J. Mod. Phys. C 17, 1755.
Metropolis, N. and Ulam, S. (1949), J. Amer. Stat. Assoc. 44, 335.