In this chapter we present computational Monte Carlo methods to sample from probability distributions, including Bayesian posteriors, that do not permit direct sampling. In doing so, we introduce the basis for Monte Carlo and Markov chain Monte Carlo sampling schemes and delve into specific methods. We begin with samplers such as the Metropolis–Hastings algorithm and the Gibbs sampler, and discuss the interpretation of their output, including the concepts of burn-in and sample correlation. We also discuss more advanced sampling schemes, including auxiliary variable samplers, multiplicative random walk samplers, and Hamiltonian Monte Carlo.
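As a concrete illustration of the simplest scheme discussed here, a minimal random-walk Metropolis sampler in Python; the target density, the tuning constants, and the burn-in fraction are hypothetical choices for the sketch, not prescriptions from the chapter.

```python
import numpy as np

def metropolis(log_target, x0, n_samples, step=1.0, seed=0):
    """Random-walk Metropolis: propose x' = x + step*N(0,1) and accept
    with probability min(1, pi(x')/pi(x)). Only an unnormalized
    log-density is needed, which is what makes the method useful for
    Bayesian posteriors."""
    rng = np.random.default_rng(seed)
    x, samples = x0, np.empty(n_samples)
    for i in range(n_samples):
        proposal = x + step * rng.normal()        # symmetric proposal
        log_alpha = log_target(proposal) - log_target(x)
        if np.log(rng.uniform()) < log_alpha:     # accept/reject step
            x = proposal
        samples[i] = x                            # rejected moves repeat x
    return samples

# Toy target: unnormalized standard normal, deliberately poor start point.
draws = metropolis(lambda x: -0.5 * x**2, x0=10.0, n_samples=20_000)
kept = draws[2_000:]   # discard burn-in; successive draws remain correlated
```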
This review of mathematical and statistical concepts covers foundational material such as probability densities, Monte Carlo methods, and Bayes’ rule. The concept reviews supplement the learning in the previous chapters. We aim first to build an intuitive understanding of statistical concepts and then, if the student is interested, to dive deeper into the mathematical derivations. For example, principal component analysis can be taught by deriving the equations and making the link with the eigenvalue decomposition of the covariance matrix. Instead, we start from simple two- and three-dimensional datasets and appeal to the student’s geometric insight: the study of an ellipse, and how we can transform it into a circle. This geometric aspect is explained without equations, using plots and figures that appeal to intuition. In general, it is our experience that students in the geosciences retain much more practical knowledge when the material starts from case studies and intuitive reasoning.
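A minimal sketch of the ellipse-to-circle picture described above, using a hypothetical correlated 2-D dataset; the eigendecomposition of the covariance matrix supplies the axes of the ellipse, and rescaling along those axes turns it into a circle.

```python
import numpy as np

rng = np.random.default_rng(0)
# A 2-D "ellipse-shaped" cloud: correlated Gaussian data (illustrative).
X = rng.multivariate_normal([0, 0], [[3.0, 1.2], [1.2, 1.0]], size=500)

C = np.cov(X, rowvar=False)            # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)   # principal axes of the ellipse

# Rotate onto the principal axes and rescale each axis by 1/sqrt(eigenvalue):
# the ellipse becomes (approximately) a circle -- the whitened data.
Z = (X @ eigvecs) / np.sqrt(eigvals)
print(np.cov(Z, rowvar=False).round(2))   # close to the identity matrix
```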
A simple introduction to the Metropolis Monte Carlo method is given, with a discussion of the main types of boundary conditions. The location of phase transitions and the investigation of pair correlation functions, important for establishing the existence of long-range order in liquid crystal models, are also introduced.
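A sketch of the periodic boundary conditions alluded to above, the most common of the boundary types in lattice simulations: site indices wrap modulo the lattice size, removing surface effects. The planar-spin energy below is an illustrative stand-in, not the text's specific liquid crystal model.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 16                                       # lattice side, illustrative size
spins = rng.uniform(0, np.pi, size=(L, L))   # planar angles (toy model)

def neighbours(i, j):
    """Periodic boundary conditions: indices wrap modulo L, so every site
    has exactly four neighbours and the simulated system has no surfaces."""
    return [((i + 1) % L, j), ((i - 1) % L, j),
            (i, (j + 1) % L), (i, (j - 1) % L)]

def site_energy(i, j):
    # Lebwohl-Lasher-style coupling -P2(cos(dtheta)) between neighbours;
    # the specific interaction is illustrative only.
    return -sum(1.5 * np.cos(spins[i, j] - spins[k, l]) ** 2 - 0.5
                for k, l in neighbours(i, j))

# A Metropolis sweep would propose a new angle at each site and accept with
# probability min(1, exp(-(E_new - E_old) / T)), as introduced above.
```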
This study aimed to evaluate the dosimetric effects of metal prostheses in radiotherapy with a Siemens Primus 15 MV linac. In addition, it proposed a new material that could lead to less dose perturbation.
Materials and methods:
The depth dose distributions of typical hip prostheses were calculated for 15 MV photons using the MCNP-4C code. Five metal prostheses were selected to reveal the correlation between the material type and density of a prosthesis and its dose perturbation. Furthermore, the effects of the location and thickness of the prosthesis on the dose perturbation were also discussed and analysed.
Results:
The results showed that the Co-Cr-Mo alloy prosthesis had the greatest influence on the dose at the metal–tissue interface. The dose increased at the entrance of this prosthesis and decreased as the beam passed through it. Finally, the impact of the new PEEK biomedical polymer was also investigated; based on the obtained results, it introduced the lowest dose perturbations.
Conclusion:
It was found that the mean relative doses before and after the PEEK prosthesis were 99·2% and 97·1%, respectively. Therefore, this new biomedical polymer was proposed as a replacement for current metal implants.
We provide the first generic exact simulation algorithm for multivariate diffusions. Current exact sampling algorithms for diffusions require the existence of a transformation which can be used to reduce the sampling problem to the case of a constant diffusion matrix and a drift which is the gradient of some function. Such a transformation, called the Lamperti transformation, can be applied in general only in one dimension. So, completely different ideas are required for the exact sampling of generic multivariate diffusions. The development of these ideas is the main contribution of this paper. Our strategy combines techniques borrowed from the theory of rough paths, on the one hand, and multilevel Monte Carlo on the other.
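For reference, the one-dimensional Lamperti transformation takes the following standard form (a sketch from the general literature, not quoted from the paper):

```latex
% For dX_t = b(X_t)\,dt + \sigma(X_t)\,dW_t with \sigma > 0, set
Y_t = \eta(X_t), \qquad \eta(x) = \int^{x} \frac{\mathrm{d}u}{\sigma(u)} .
% Ito's formula then gives a unit diffusion coefficient:
\mathrm{d}Y_t
  = \left( \frac{b(X_t)}{\sigma(X_t)} - \frac{1}{2}\,\sigma'(X_t) \right)
    \mathrm{d}t + \mathrm{d}W_t ,
\qquad X_t = \eta^{-1}(Y_t) .
% In dimension d > 1, the analogous change of variables requires
% \sigma(x)^{-1} to be the Jacobian of some map, which generically fails.
```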
This work explores the uncertainty of the inferred maize pollen emission rate using measurements and simulations of pollen dispersion at Grignon in France. Measurements were obtained via deposition of pollen on the ground in a canopy gap; simulations were conducted using the two-dimensional Lagrangian Stochastic Mechanistic mOdel for Pollen dispersion and deposition (SMOP). First, a quantitative evaluation of the model's performance was conducted using a global sensitivity analysis, examining the convergence behaviour of the results and scatter diagrams. Then, a qualitative study was conducted to infer the pollen emission rate and calibrate the methodology against experimental data for several sets of variable values. The analysis showed that predicted and observed values were in good agreement, and the calculated statistical indices were mostly within the range of acceptable model performance. Furthermore, it was revealed that the mean settling velocity and the vertical leaf area index are the main variables affecting pollen deposition in the canopy gap. Finally, an estimated pollen emission rate was obtained under a restricted setting in which the model includes no deposition on leaves and no resuspension, with horizontal pollen fluctuations either taken into account or not. The estimated pollen emission rate was nearly identical to the measured quantity. In conclusion, the findings of the current study show that the described methodology could be an interesting approach for accurate prediction of maize pollen deposition and emission rates and may be appropriate for other pollen types.
Suppose X is a multidimensional diffusion process. Assume that at time zero the state of X is fully observed, but at time $T>0$ only linear combinations of its components are observed. That is, one only observes the vector $LX_T$ for a given matrix L. In this paper we show how samples from the conditioned process can be generated. The main contribution of this paper is to prove that guided proposals, introduced in [35], can be used in a unified way for both uniformly elliptic and hypo-elliptic diffusions, even when L is not the identity matrix. This is illustrated by excellent performance in two challenging cases: a partially observed twice-integrated diffusion with multiple wells and the partially observed FitzHugh–Nagumo model.
Partial differential equations are powerful tools for characterizing various physical systems. In practice, measurement errors are often present, and probability models are employed to account for such uncertainties. In this paper we present a Monte Carlo scheme that yields unbiased estimators for expectations of random elliptic partial differential equations. The algorithm combines the multilevel Monte Carlo method of Giles (2008) with a randomization scheme proposed by Rhee and Glynn (2012), (2013). Furthermore, to obtain an estimator with both finite variance and finite expected computational cost, we employ higher-order approximations.
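A sketch of the debiasing idea in Python, combining coupled discretization levels with the single-term randomization of Rhee and Glynn; the toy SDE, the payoff, and the geometric level distribution are illustrative assumptions, and the truncation at max_level is a practical shortcut (its residual bias is negligible here), not part of the exact scheme.

```python
import numpy as np

rng = np.random.default_rng(1)

def euler_pair(level, T=1.0):
    """One coupled sample of (Y_level, Y_{level-1}) for the toy SDE
    dX = -X dt + dW, X_0 = 1, payoff X_T; the same Brownian increments
    drive the fine and the coarse Euler path."""
    n_fine = 2 ** level
    dt = T / n_fine
    dW = rng.normal(0.0, np.sqrt(dt), n_fine)
    x_f = 1.0
    for w in dW:
        x_f += -x_f * dt + w
    if level == 0:
        return x_f, 0.0
    x_c = 1.0
    for k in range(n_fine // 2):          # coarse path: pairs of increments
        x_c += -x_c * (2 * dt) + dW[2 * k] + dW[2 * k + 1]
    return x_f, x_c

def single_term_estimator(p=0.6, max_level=20):
    """Draw a random level N with P(N = n) = p(1-p)^n and return
    (Y_N - Y_{N-1}) / P(N = n): unbiased for lim_n E[Y_n] under the
    moment conditions of the papers cited above."""
    N = min(rng.geometric(p) - 1, max_level)
    y_f, y_c = euler_pair(N)
    return (y_f - y_c) / (p * (1 - p) ** N)

est = np.mean([single_term_estimator() for _ in range(20_000)])
print(est, np.exp(-1.0))                  # exact E[X_T] = e^{-1}
```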
Shiga toxin-producing Escherichia coli (STEC) is an important cause of gastroenteritis (GE) and haemolytic uraemic syndrome (HUS). Incidence of STEC illness is largely underestimated in notification data, particularly of serogroups other than O157 (‘non-O157’). Using HUS national notification data (2008–2012, excluding 2011), we modelled true annual incidence of STEC illness in Germany separately for O157 and non-O157 STEC, taking into account the groups’ different probabilities of causing bloody diarrhoea and HUS, and the resulting difference in their under-ascertainment. Uncertainty of input parameters was evaluated by stochastic Monte Carlo simulations. Median annual incidence (per 100 000 population) of STEC-associated HUS and STEC-GE was estimated at 0·11 [95% credible interval (CrI) 0·08-0·20], and 35 (95% CrI 12-145), respectively. German notification data underestimated STEC-associated HUS and STEC-GE incidences by factors of 1·8 and 32·3, respectively. Non-O157 STEC accounted for 81% of all STEC-GE, 51% of all bloody STEC-GE and 32% of all STEC-associated HUS cases. Non-O157 serogroups dominate incidence of STEC-GE and contribute significantly to STEC-associated HUS in Germany. Judging from European surveillance data on HUS, this might apply to many other countries. Non-O157 STEC should be considered in parallel with STEC O157 when searching for the aetiology in patients with GE or HUS, and accounted for in modern surveillance systems.
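The uncertainty propagation can be sketched as follows; every distribution and number below is an invented placeholder, not an input from the study.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
# Hypothetical inputs (placeholders): notified HUS cases per year, the
# probability that an STEC illness progresses to HUS, and the fraction of
# HUS cases captured by notification.
notified_hus = rng.poisson(70, n)
p_hus_given_stec = rng.beta(4, 996, n)
ascertainment = rng.uniform(0.7, 0.95, n)

# Back-calculate true STEC illnesses and express per 100,000 population.
true_stec = notified_hus / (ascertainment * p_hus_given_stec)
incidence = true_stec / 80_000_000 * 100_000   # population of ~80 million
print(np.percentile(incidence, [2.5, 50, 97.5]).round(1))  # median and CrI
```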
We present an asymptotically optimal importance sampling scheme for Monte Carlo simulation of the Laplace transform of exponential Brownian functionals, which plays a prominent role in many disciplines. To this end we utilize the theory of large deviations to reduce finding an asymptotically optimal importance sampling measure to solving a calculus of variations problem. Closed-form solutions are obtained. In addition, we present a way to test the regularity of the optimal drift, an issue that arises in implementing the proposed method. The performance of the method is analysed using the Dothan bond pricing model.
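The change-of-measure step can be sketched with a constant drift, which is simpler than the path-dependent optimal drift derived in the paper; the functional, the parameters, and the drift value below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def is_estimate(theta, lam=1.0, T=1.0, n_steps=200, n_paths=50_000):
    """Importance-sampling estimate of E[exp(-lam * int_0^T exp(B_t) dt)],
    a toy exponential Brownian functional. Paths are simulated with an
    added constant drift `theta`; the Girsanov weight
    exp(-theta*B_T + theta^2*T/2) corrects for the change of measure."""
    dt = T / n_steps
    dW = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
    B = np.cumsum(dW + theta * dt, axis=1)        # drifted Brownian paths
    functional = np.exp(B).sum(axis=1) * dt       # int_0^T exp(B_t) dt
    weight = np.exp(-theta * B[:, -1] + 0.5 * theta**2 * T)
    return np.mean(np.exp(-lam * functional) * weight)

print(is_estimate(theta=-0.5))   # theta = 0 recovers plain Monte Carlo
```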
The aim of this study was to analyze the economic viability of producing dairy goat kids fed liquid diets as alternatives to goat milk and slaughtered at two different ages. Forty-eight male newborn Saanen and Alpine kids were selected and allocated to four groups using a completely randomized factorial design: goat milk (GM), cow milk (CM), commercial milk replacer (CMR) and fermented cow colostrum (FC). Each group was then divided into two groups: slaughter at 60 and 90 days of age. The animals received Tifton hay and concentrate ad libitum. Total costs of liquid and solid feed plus labor, income, and average gross margin were calculated. The data were then analyzed using Monte Carlo techniques in the @Risk 5.5 software, with 1000 iterations of the variables being studied through the model. The kids fed GM and CMR generated negative profitability values when slaughtered at 60 days (US$ −16.4 and US$ −2.17, respectively) and also at 90 days (US$ −30.8 and US$ −0.18, respectively). The risk analysis showed that there is a 98% probability that profitability would be negative when GM is used. In this regard, CM and FC presented low risk when the kids were slaughtered at 60 days (8.5% and 21.2%, respectively) and an even lower risk when animals were slaughtered at 90 days (5.2% and 3.8%, respectively). The kids fed CM and slaughtered at 90 days presented the highest average gross income (US$ 67.88) and also average gross margin (US$ 18.43/animal). For the 60-day rearing regime to be economically viable, the CMR cost should not exceed 11.47% of the animal-selling price. This implies that the replacer cannot cost more than US$ 0.39 and 0.43/kg for the 60- and 90-day feeding regimes, respectively. The sensitivity analysis showed that the variables with the greatest impact on the final model’s results were animal selling price, liquid diet cost, final weight at slaughter and labor. In conclusion, the production of male dairy goat kids can be economically viable when the kids’ diet consists mainly of either cow milk or fermented colostrum, especially when kids are slaughtered at 90 days of age.
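The structure of the risk analysis can be sketched generically; the distributions and dollar figures below are placeholders rather than the study's data.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000   # the study used 1000 iterations; the inputs are hypothetical
selling_price = rng.normal(60.0, 8.0, n)        # US$/animal
liquid_diet_cost = rng.normal(35.0, 6.0, n)     # US$/animal
solid_feed_and_labor = rng.normal(12.0, 2.0, n) # US$/animal

# Risk = probability that the gross margin falls below zero.
gross_margin = selling_price - liquid_diet_cost - solid_feed_and_labor
print("P(negative profitability) =", np.mean(gross_margin < 0))
```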
Nonlinear filtering problems arise in many applications, such as communications and signal processing. Commonly used numerical methods include the Kalman filter and the particle filter. In this paper a novel numerical algorithm is constructed based on samples of the current state obtained by solving the state equation implicitly. Numerical experiments demonstrate that our algorithm is more accurate than the Kalman filter and more stable than the particle filter.
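For contrast with the proposed method, a minimal bootstrap particle filter for a hypothetical linear-Gaussian toy model (one where the Kalman filter would in fact be exact); the paper's implicit-sampling algorithm itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(5)

def bootstrap_pf(ys, n_particles=1000, sig_x=0.5, sig_y=0.5):
    """Bootstrap particle filter for the toy state-space model
    x_t = 0.9 x_{t-1} + N(0, sig_x^2),  y_t = x_t + N(0, sig_y^2)."""
    x = rng.normal(0, 1, n_particles)
    means = []
    for y in ys:
        x = 0.9 * x + rng.normal(0, sig_x, n_particles)    # propagate
        logw = -0.5 * ((y - x) / sig_y) ** 2               # likelihood weight
        w = np.exp(logw - logw.max()); w /= w.sum()
        means.append(np.dot(w, x))                         # posterior mean
        x = x[rng.choice(n_particles, n_particles, p=w)]   # resample
    return np.array(means)

# Simulate data from the same toy model and run the filter.
xs = [0.0]
for _ in range(99):
    xs.append(0.9 * xs[-1] + rng.normal(0, 0.5))
ys = np.array(xs) + rng.normal(0, 0.5, 100)
estimates = bootstrap_pf(ys)
```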
Computing the value of a high-dimensional integral can often be reduced to the problem of finding the ratio between the measures of two sets. Monte Carlo methods are often used to approximate this ratio, but one set is frequently exponentially larger than the other, leading to an exponentially large variance. A standard remedy is to interpolate between the sets with a sequence of nested sets in which neighboring sets have relative measures bounded above by a constant. Choosing such a well-balanced sequence can rarely be done without extensive study of the problem. Here a new approach that automatically obtains such sets is presented. These well-balanced sets allow for faster approximation algorithms for integrals and sums using fewer samples, and better tempering and annealing Markov chains for generating random samples. Applications, such as finding the partition function of the Ising model and normalizing constants for posterior distributions in Bayesian methods, are discussed.
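A toy version of the nested-sets idea, with a hand-picked (rather than automatically constructed) well-balanced schedule; each neighbouring ratio stays bounded away from zero, so each factor is cheap to estimate and the product has controlled variance.

```python
import numpy as np

rng = np.random.default_rng(6)

# Estimate |A|/|B| with A = [0, a]^d inside B = [0, 1]^d: the true ratio
# a^d is exponentially small in d, so direct Monte Carlo fails.
d, a, m, n = 20, 0.5, 40, 5000
ts = np.geomspace(a, 1.0, m + 1)      # hand-picked well-balanced schedule;
                                      # the paper constructs one automatically
ratio = 1.0
for t_small, t_big in zip(ts[:-1], ts[1:]):
    x = rng.uniform(0, t_big, (n, d))                # uniform on larger set
    ratio *= np.mean(x.max(axis=1) <= t_small)       # neighbouring-set ratio

print(ratio, a**d)                    # telescoping product approximates a^d
```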
This paper presents a gas-kinetic study and analytical results on high-speed rarefied gas flows from a planar exit. The paper begins by reviewing the results for a planar free jet expanding into a vacuum, followed by an investigation of jet impingement on normally set plates with either a diffuse or a specular surface. The presented results include exact solutions for flowfield and surface properties. Numerical simulations with the direct simulation Monte Carlo method were performed to validate these analytical results, and good agreement is obtained for flows at high Knudsen numbers. These highly rarefied jet and jet impingement results can provide references for real jet and jet impingement flows.
A vertex i of a graph G = (V,E) is said to be controlled by $M \subseteq V$ if the majority of the elements of the neighborhood of i (including itself) belong to M. The set M is a monopoly in G if every vertex $i\in V$ is controlled by M. Given a set $M \subseteq V$ and two graphs $G_1 = (V,E_1)$ and $G_2 = (V,E_2)$ where $E_1\subseteq E_2$, the monopoly verification problem (MVP) consists of deciding whether there exists a sandwich graph G = (V,E) (i.e., a graph where $E_1\subseteq E\subseteq E_2$) such that M is a monopoly in G = (V,E). If the answer to the MVP is no, we then consider the max-controlled set problem (MCSP), whose objective is to find a sandwich graph G = (V,E) such that the number of vertices of G controlled by M is maximized. The MVP can be solved in polynomial time; the MCSP, however, is NP-hard. In this work, we present a deterministic polynomial-time approximation algorithm for the MCSP with ratio $\frac{1}{2} + \frac{1+\sqrt{n}}{2n-2}$, where $n=|V|>4$. (The case $n\leq 4$ is solved exactly by considering the parameterized version of the MCSP.) The algorithm is obtained through the use of randomized rounding and derandomization techniques based on the method of conditional expectations. Additionally, we show how to improve this ratio if good estimates of expectation are obtained in advance.
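The derandomization technique named above can be illustrated on a simpler problem: the sketch below applies randomized rounding and the method of conditional expectations to MAX-CUT rather than to the MCSP itself.

```python
def derandomized_cut(n, edges):
    """Method of conditional expectations on MAX-CUT. A uniformly random
    side for each vertex cuts every edge with probability 1/2, so the
    expected cut is |E|/2. Fixing vertices one by one, always choosing the
    side that keeps the conditional expected cut at least as large, yields
    a deterministic cut of size >= |E|/2."""
    side = {}
    for v in range(n):
        def expected_cut(choice, v=v):
            side[v] = choice
            exp = 0.0
            for u, w in edges:
                if u in side and w in side:
                    exp += side[u] != side[w]   # edge fully determined
                else:
                    exp += 0.5                  # unfixed endpoint: cut w.p. 1/2
            del side[v]
            return exp
        side[v] = max((0, 1), key=expected_cut)
    return side

# A triangle plus a pendant vertex; the produced cut has size >= 2.
print(derandomized_cut(4, [(0, 1), (1, 2), (2, 0), (0, 3)]))
```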
The aim of this paper is to study the stochastic monotonicity and continuity properties of the extinction time of Bellman-Harris branching processes depending on their reproduction laws. Moreover, we show their applications in an epidemiological context, obtaining an optimal criterion for establishing the proportion of susceptible individuals in a given population that must be vaccinated in order to eliminate an infectious disease. To this end, the spread of infection is modelled by a Bellman-Harris branching process. Finally, we provide a simulation-based method to determine the optimal vaccination policies.
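The simulation-based step can be sketched as follows; the Poisson reproduction law and the numbers are illustrative assumptions, and since extinction of a Bellman-Harris process depends only on the embedded generation process, lifetimes are omitted.

```python
import numpy as np

rng = np.random.default_rng(7)

def extinction_prob(r0, v, n_runs=2000, max_gen=200, cap=10_000):
    """Estimate the extinction probability of the embedded Galton-Watson
    process when a proportion v of contacts is vaccinated, so the
    effective offspring mean is r0 * (1 - v). Poisson offspring is an
    illustrative choice."""
    extinct = 0
    for _ in range(n_runs):
        z = 1
        for _ in range(max_gen):
            z = rng.poisson(r0 * (1 - v) * z) if z else 0
            if z == 0 or z > cap:     # died out, or clearly supercritical
                break
        extinct += (z == 0)
    return extinct / n_runs

# Sweep coverage: extinction becomes certain near v = 1 - 1/r0 (here 0.6).
for v in (0.4, 0.55, 0.6, 0.65):
    print(v, extinction_prob(2.5, v))
```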
The parts in a batch, though theoretically identical, cannot in practice have exactly equal dimensions. A dimension is only achievable if a deviation from the ideal is tolerated. This deviation is specified by a pair of quantities: either the bounds of an interval, or the mean and variance of the batch. Most often, maximal and minimal limit states are defined. These limits, called tolerances, must be chosen judiciously. A tolerancing scheme uses syntactic and semantic indications to convey meaning. The objective of this article is to propose and validate a methodology for assisting in the selection and verification of tolerances. This methodology is based on two complementary methods: the worst-case method (intervals) and the Monte Carlo method (statistical). A software implementation demonstrated the feasibility of the tool as well as the major contribution of the proposed methodology to the specification, optimization, and verification of tolerances, taking manufacturing conditions into account.
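A minimal sketch of the two complementary methods on a hypothetical three-part stack; the normal distributions and the sigma = tol/3 process-capability assumption are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical three-part stack: gap = housing - part_a - part_b.
nominal = {"housing": 50.0, "part_a": 30.0, "part_b": 19.6}
tol = {"housing": 0.10, "part_a": 0.08, "part_b": 0.05}  # symmetric +/- tol

# Worst-case (interval) method: tolerances add linearly.
gap_nom = nominal["housing"] - nominal["part_a"] - nominal["part_b"]
print(f"worst case: {gap_nom:.2f} +/- {sum(tol.values()):.2f}")

# Monte Carlo (statistical) method: dimensions drawn from the lot's
# distribution, here normal with sigma = tol/3 (assumed process capability).
n = 100_000
gap = (rng.normal(nominal["housing"], tol["housing"] / 3, n)
       - rng.normal(nominal["part_a"], tol["part_a"] / 3, n)
       - rng.normal(nominal["part_b"], tol["part_b"] / 3, n))
print("MC 99.73% range:", np.percentile(gap, [0.135, 99.865]).round(3))
```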
The paper deals with the asymptotic behavior of the bridge of a Gaussian process conditioned to pass through n fixed points at n fixed past instants. In particular, functional large deviation results are stated for small time. Several examples are considered: fractional Brownian motion (integrated or not) and m-fold integrated Brownian motion. As an application, the asymptotic behavior of the exit probability is studied and used for the practical purpose of numerically computing, via Monte Carlo methods, the hitting probability of the unpinned process up to a given time.
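The Monte Carlo computation referred to at the end can be sketched in the simplest pinned case, a Brownian bridge with a single pin, where an exact formula is available as a check; note that time discretization slightly underestimates the running maximum.

```python
import numpy as np

rng = np.random.default_rng(11)

# Hitting probability of level c for a Brownian bridge from 0 to b on
# [0, 1]. The exact value exp(-2*c*(c - b)), valid for c >= max(0, b),
# serves as a reference.
b, c, n_steps, n_paths = 0.5, 1.5, 1000, 200_000
t = np.linspace(0, 1, n_steps + 1)
dW = rng.normal(0, np.sqrt(1 / n_steps), (n_paths, n_steps))
W = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)], axis=1)
X = W - t * (W[:, -1:] - b)                      # bridge construction
print(np.mean(X.max(axis=1) >= c), np.exp(-2 * c * (c - b)))
```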
The frequency and determinants of abnormal test performance by normal individuals are critically important to clinical inference. Here we compare two approaches to predicting rates of abnormal test performance among healthy individuals with the rates actually shown by 327 neurologically normal adults aged 18–92 years. We counted how many participants produced abnormal scores, defined by three different cutoffs with test batteries of varied length, and the number of abnormal scores they produced. Observed rates generally were closer to predictions based on a series of Monte Carlo simulations than on the binomial model. They increased with the number of tests administered, decreased as more stringent cutoffs were used to identify abnormality, varied with the degree of correlation among test scores, and depended on individual differences in age, education, race, sex, and estimated premorbid IQ. Adjusting scores for demographic variables and premorbid IQ did not reduce rates of abnormal performance. However, it eliminated the contribution of these variables to rates of abnormal test performance. These findings raise fundamental questions about the nature and interpretation of abnormal test performance by normal, healthy adults. (JINS, 2008, 14, 436–445.)
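The contrast between the Monte Carlo and binomial predictions can be sketched with an equicorrelated test battery; the battery size, correlation, and cutoff below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)

# Rate at which a healthy person scores "abnormal" (below the 5th
# percentile) on at least one of k correlated tests, versus the binomial
# prediction, which assumes independent tests.
k, rho, n = 20, 0.4, 100_000
cutoff = -1.645                      # ~5th percentile of N(0, 1)
cov = np.full((k, k), rho)
np.fill_diagonal(cov, 1.0)           # equicorrelated battery of tests
scores = rng.multivariate_normal(np.zeros(k), cov, size=n)
mc_rate = (scores < cutoff).any(axis=1).mean()
binom_rate = 1 - 0.95 ** k           # binomial-model prediction
print(mc_rate, binom_rate)           # correlation lowers the observed rate
```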
An analytical model for determining the shear strength of concrete pile caps under the diagonal-compression failure mode, based on the softened strut-and-tie model, is proposed. The failure probabilities of reinforced concrete pile caps are investigated by the Monte Carlo method. The results indicate that the proposed model can accurately predict the shear strength of pile caps. The distributions of failure probabilities for pile caps designed to ACI 318-02 Appendix A and to the proposed design method are more uniform than for those designed to ACI 318-99. ACI 318-99 is very conservative and cannot provide consistent safety for pile cap design. It is suggested that the procedures in ACI 318-02 Appendix A be moved to the main body of ACI 318-02 and that the proposed design method be incorporated into current reinforced concrete pile cap design methods.
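The failure-probability computation can be sketched with a generic limit state; the distributions and numbers below are placeholders, not values calibrated to pile caps.

```python
import numpy as np

rng = np.random.default_rng(10)

# Generic limit-state Monte Carlo, g = R - S: failure occurs when the
# (random) shear resistance R falls below the load effect S.
n = 1_000_000
R = rng.lognormal(mean=np.log(1500.0), sigma=0.15, size=n)  # resistance, kN
S = rng.normal(1000.0, 120.0, size=n)                       # load effect, kN
print("P_f =", np.mean(R < S))
```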