Researchers in the field of conjoint analysis know that index-of-fit values worsen as the judgmental error of evaluation increases. This simulation study provides guidelines on goodness of fit based on the distribution of index-of-fit values for different conjoint analysis designs. The study design included the following factors: number of profiles, number of attributes, algorithm used and judgmental model used. Critical values are provided for deciding the statistical significance of conjoint analysis results. Using these cumulative distributions, the power of the test used to reject the null hypothesis of random ranking is calculated. The test is found to be quite powerful except in the case of very small residual degrees of freedom.
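The idea of tabulating critical values from a null distribution of fit under random ranking can be sketched as follows. This is an illustrative Python sketch, not the study's actual procedure: the fit index (Kendall's tau between a fixed predicted ranking and a random observed ranking), the number of profiles, and the simulation counts are all assumptions for demonstration.

```python
import random

def tau(r1, r2):
    # Kendall's tau between two rankings (no ties, since both are permutations)
    n = len(r1)
    conc = sum(
        1 if (r1[i] - r1[j]) * (r2[i] - r2[j]) > 0 else -1
        for i in range(n) for j in range(i + 1, n)
    )
    return conc / (n * (n - 1) / 2)

def critical_value(n_profiles=16, n_sims=2000, q=0.95, rng=random.Random(1)):
    # Null distribution of fit: a purely random ranking of the profiles is
    # compared against a fixed predicted ranking, many times over.
    predicted = list(range(n_profiles))
    fits = []
    for _ in range(n_sims):
        observed = predicted[:]
        rng.shuffle(observed)
        fits.append(tau(predicted, observed))
    fits.sort()
    return fits[int(q * n_sims)]  # empirical quantile = critical value
```

An observed fit exceeding `critical_value(...)` would justify rejecting the null hypothesis of random ranking at the corresponding level.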
This paper proposes test statistics based on the likelihood ratio principle for testing equality of proportions in correlated data with additional incomplete samples. Powers of these tests are compared through Monte Carlo simulation with those of tests proposed recently by Ekbohm (based on an unbiased estimator) and Campbell (based on a Pearson chi-squared-type statistic). Even though tests based on the maximum likelihood principle are theoretically expected to be superior to others, at least asymptotically, results from our simulations show that the gain in power could only be slight.
Several neural networks have been proposed in the general literature for pattern recognition and clustering, but little empirical comparison with traditional methods has been done. The results reported here compare neural networks using Kohonen learning with a traditional clustering method (K-means) in an experimental design using simulated data with known cluster solutions. Two types of neural networks were examined, both of which used unsupervised learning to perform the clustering. One used Kohonen learning with a conscience and the other used Kohonen learning without a conscience mechanism. The performance of these nets was examined with respect to changes in the number of attributes, the number of clusters, and the amount of error in the data. Generally, the K-means procedure had fewer points misclassified while the classification accuracy of neural networks worsened as the number of clusters in the data increased from two to five.
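The experimental setup described above can be sketched in miniature: generate data with a known cluster solution, cluster it with K-means and with winner-take-all Kohonen learning, and score recovery of the true structure. All particulars here (cluster centers, error scale, learning rate, the purity score) are illustrative assumptions, not the study's design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data with a known cluster solution: three well-separated
# Gaussian clusters in two dimensions (values chosen for illustration).
centers = np.array([[0.0, 0.0], [6.0, 6.0], [0.0, 6.0]])
truth = rng.integers(0, 3, size=300)
X = centers[truth] + rng.normal(scale=0.5, size=(300, 2))

def kmeans(X, k=3, iters=50):
    # Lloyd's algorithm: alternate nearest-center assignment and center update.
    C = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        a = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        C = np.array([X[a == j].mean(0) if np.any(a == j) else C[j]
                      for j in range(k)])
    return a

def kohonen(X, k=3, epochs=20, lr=0.5):
    # Winner-take-all Kohonen learning; a "conscience" variant would also
    # penalise units that win too often so that all units participate.
    W = X[rng.choice(len(X), k, replace=False)].copy()
    for e in range(epochs):
        eta = lr * (1.0 - e / epochs)          # decaying learning rate
        for x in X[rng.permutation(len(X))]:
            w = int(np.argmin(((W - x) ** 2).sum(-1)))  # winning unit
            W[w] += eta * (x - W[w])                    # move winner toward input
    return np.argmin(((X[:, None] - W[None]) ** 2).sum(-1), axis=1)

def purity(assign, truth, k=3):
    # Fraction of points matching the majority true label of their cluster.
    hit = 0
    for j in range(k):
        members = truth[assign == j]
        if members.size:
            hit += np.bincount(members).max()
    return hit / len(truth)
```

Comparing `purity(kmeans(X), truth)` against `purity(kohonen(X), truth)` across replications mirrors, in spirit, the misclassification comparison reported in the study.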
In this paper, we propose a cluster-MDS model for two-way one-mode continuous rating dissimilarity data. The model aims at partitioning the objects into classes and simultaneously representing the cluster centers in a low-dimensional space. Under the normal distribution assumption, a latent class model is developed in terms of the set of dissimilarities in a maximum likelihood framework. In each iteration, the probability that a dissimilarity belongs to each of the blocks forming a partition of the original dissimilarity matrix is estimated, along with the remaining parameters, in a simulated annealing-based algorithm. A model selection strategy is used to test the number of latent classes and the dimensionality of the problem. Both simulated and classical dissimilarity data are analyzed to illustrate the model.
A method is presented for generalized canonical correlation analysis of two or more matrices with missing rows. The method is a combination of Carroll’s (1968) method and the missing data approach of the OVERALS technique (Van der Burg, 1988). In a simulation study we assess the performance of the method and compare it to an existing procedure called GENCOM, proposed by Green and Carroll (1988). We find that the proposed method outperforms the GENCOM algorithm both with respect to model fit and recovery of the true structure.
Lung cancer ranks high among the causes of mortality in cancer patients, as per the most recent World Health Organization report. Proton therapy offers a precise approach to treating lung cancer by delivering protons with high accuracy to the targeted site. However, inaccuracies in proton delivery can lead to increased toxicity in healthy tissues. This study aims to investigate the correlation between proton beam dose profiles in lung tumours and the scattered gamma particles.
Material and methods:
The study utilised the GATE simulation toolkit to simulate proton beam radiation and an imaging system for prompt gamma imaging during proton therapy. An anthropomorphic Non-uniform rational B-spline (NURBS) cardiac and torso (NCAT) phantom was employed to replicate lung tumours of various sizes. The imaging system comprised a multi-slit collimation system, CsI(Tl) scintillator arrays and a multichannel data acquisition system. Simulations were conducted to explore the relationship between prompt gamma detection and proton range for different tumour sizes.
Results:
Following 60 MeV proton irradiation of the NCAT phantom, the study examined the gamma energy spectrum, identifying peak intensities at energies of 2.31, 3.8, 4.44, 5.27 and 6.13 MeV. Adjustments to the proton beam source tailored to tumour sizes achieved a coverage rate of 98%. Optimal energies ranging from 77 to 91.5 MeV were determined for varying tumour volumes, supported by dose distribution profiles and prompt gamma distribution illustrations.
Discussion:
The study evaluated the viability of utilising 2D gamma imaging with a multi-slit collimator scintillation camera for real-time monitoring of dose delivery during proton therapy for lung cancer. The findings indicated that this method is most suitable for small lung tumours (radius ≤ 12 mm) due to reduced gamma emission from larger tumours.
Conclusion:
While the study demonstrates promising results in range estimation using prompt gamma particles, challenges were encountered in accurately estimating large tumours using this method.
The solid Earth's medium is heterogeneous over a wide range of scales. Seismological observations, including envelope broadening with increasing distance from an earthquake source and the excitation of long-lasting coda waves, provide a means of investigating velocity inhomogeneities in the lithosphere. These phenomena have been studied primarily using radiative transfer theory with random medium modelling. This book presents the mathematical foundations of scalar- and vector-wave scattering in random media, using the Born or Eikonal approximation, which are useful for understanding random inhomogeneity spectra and the scattering characteristics of the solid Earth. A step-by-step Monte Carlo simulation procedure is presented for synthesizing the propagation of energy density for impulsive radiation from a source in random media. Simulation results are then verified by comparison with analytical solutions and finite-difference simulations. Presenting the latest seismological observations and analysis techniques, this is a useful reference for graduate students and researchers in geophysics and physics.
This article introduces a comprehensive framework that effectively combines experience rating and exposure rating approaches in reinsurance for both short-tail and long-tail businesses. The generic framework applies to all nonlife lines of business and products emphasizing nonproportional treaty business. The approach is based on three pillars that enable a coherent usage of all available information. The first pillar comprises an exposure-based generative model that emulates the generative process leading to the observed claims experience. The second pillar encompasses a standardized reduction procedure that maps each high-dimensional claim object to a few weakly coupled reduced random variables. The third pillar comprises calibrating the generative model with retrospective Bayesian inference. The derived calibration parameters are fed back into the generative model, and the reinsurance contracts covering future cover periods are rated by projecting the calibrated generative model to the cover period and applying the future contract terms.
In this manuscript, we address open questions raised by Dieker and Yakir (2014), who proposed a novel method of estimating (discrete) Pickands constants $\mathcal{H}^\delta_\alpha$ using a family of estimators $\xi^\delta_\alpha(T)$, $T>0$, where $\alpha\in(0,2]$ is the Hurst parameter, and $\delta\geq0$ is the step size of the regular discretization grid. We derive an upper bound for the discretization error $\mathcal{H}_\alpha^0 - \mathcal{H}_\alpha^\delta$, whose rate of convergence agrees with Conjecture 1 of Dieker and Yakir (2014) in the case $\alpha\in(0,1]$ and agrees up to logarithmic terms for $\alpha\in(1,2)$. Moreover, we show that all moments of $\xi_\alpha^\delta(T)$ are uniformly bounded and the bias of the estimator decays no slower than $\exp\{-\mathcal{C}T^{\alpha}\}$ as $T$ becomes large.
Qu, Dassios, and Zhao (2021) suggested an exact simulation method for tempered stable Ornstein–Uhlenbeck processes, but their algorithms contain some errors. This short note aims to correct their algorithms and conduct some numerical experiments.
The Istanbul metroplex airspace, home to Atatürk (LTBA), Sabiha Gökçen (LTFJ), and Istanbul (LTFM) international airports, is a critical hub for international travel, trade and commerce between Europe and Asia. The high air traffic volume and the proximity of multiple airports make air traffic management (ATM) a significant challenge. To better manage this complex air traffic, it is necessary to conduct detailed analyses of the capacities of these airports and surrounding airspace. In this study, Monte Carlo simulation is used to determine the ultimate and practical capacities of the airport and surrounding airspace and compare them to identify any differences or limitations. The traffic mix, runway occupancy time and traffic distribution at airspace entry points are randomised variables that directly impact airport and airspace capacities and delays. The study aims to determine the current capacities of the runways and routes in the metroplex airspace and project the future capacities with the addition of new facilities. The results demonstrated that the actual bottleneck could be experienced in airspace, rather than runways, which was the focus of the previous literature. Thus, this study will provide valuable insights for stakeholders in the aviation industry to effectively manage air traffic in the metroplex airspace and meet the growing demand.
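The capacity logic described above can be illustrated with a minimal Monte Carlo sketch: draw a random traffic mix and runway occupancy time for each movement, take the binding constraint as the inter-arrival gap, and convert the mean gap into an hourly ultimate capacity. All numbers (mix shares, wake separations, occupancy range) are hypothetical, not values from the study.

```python
import random
import statistics

def ultimate_capacity(n_ops=2000, rng=random.Random(42)):
    # Hypothetical traffic mix and separation/occupancy values; the study
    # randomises traffic mix, runway occupancy time and the traffic
    # distribution at airspace entry points.
    types = ["heavy", "medium", "light"]
    probs = [0.2, 0.7, 0.1]
    wake_sep = {"heavy": 90.0, "medium": 70.0, "light": 60.0}  # seconds
    gaps = []
    for _ in range(n_ops):
        t = rng.choices(types, probs)[0]
        rot = rng.uniform(45.0, 65.0)       # runway occupancy time (s)
        gaps.append(max(wake_sep[t], rot))  # both constraints must clear
    return 3600.0 / statistics.mean(gaps)   # movements per hour
```

Running the same experiment for airspace entry points instead of the runway, and comparing the two throughputs, is the kind of analysis that reveals whether the bottleneck sits in the airspace rather than on the runways.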
Monte Carlo (MC) simulations of interlayer molecular structure in monolayer hydrates of Na-saturated Wyoming-type montmorillonites and vermiculite were performed. Detailed comparison of the simulation results with experimental diffraction and thermodynamic data for these clay-water systems indicated good semiquantitative to quantitative agreement. The MC simulations revealed that, in the monolayer hydrate, interlayer water molecules tend to increase their occupation of the midplane as layer charge increases. As the percentage of tetrahedral layer charge increases, water molecules are induced to interact with the siloxane surface O atoms through hydrogen bonding and Na+ counter-ions are induced to form inner-sphere surface complexes. These results suggest the need for careful diffraction experiments on a series of monolayer hydrates of montmorillonite whose layer charge and tetrahedral isomorphic substitution charge vary systematically.
Monte Carlo (MC) simulations of molecular structure in the interlayers of 2:1 Na-saturated clay minerals were performed to address several important simulation methodological issues. Investigation was focused on monolayer hydrates of the clay minerals because these systems provide a severe test of the quality and sensitivity of MC interlayer simulations. Comparisons were made between two leading models of the water-water interaction in condensed phases, and the sensitivity of the simulations to the size or shape of the periodically-repeated simulation cell was determined. The results indicated that model potential functions permitting significant deviations from the molecular environment in bulk liquid water are superior to those calibrated to mimic the bulk water structure closely. Increasing the simulation cell size or altering its shape from a rectangular 21.12 Å × 18.28 Å × 6.54 Å cell (about eight clay mineral unit cells) had no significant effect on the calculated interlayer properties.
Chapter 6 demonstrates one way that RIO can be used for exploratory data analysis: identifying statistically significant interaction terms. We show how exploring the relationships among cases offers important insights into the relationships between variables.
The adoption of genomic technologies in the context of hospital-based health technology assessment presents multiple practical and organizational challenges.
Objective
This study aimed to assist the Instituto Português de Oncologia de Lisboa Francisco Gentil (IPO Lisboa) decision makers in analyzing which acute myeloid leukemia (AML) genomic panel contracting strategies had the highest value-for-money.
Methods
A tailored, three-step approach was developed, which included: mapping clinical pathways of AML patients, building a multicriteria value model using the MACBETH approach to evaluate each genomic testing contracting strategy, and estimating the cost of each strategy through Monte Carlo simulation modeling. The value-for-money of three contracting strategies – “Standard of care (S1),” “FoundationOne Heme test (S2),” and “New diagnostic test infrastructure (S3)” – was then analyzed through strategy landscape and value-for-money graphs.
Results
Implementing a larger gene panel (S2) and investing in a new diagnostic test infrastructure (S3) were shown to generate extra value, but also to entail extra costs in comparison with the standard of care, with the extra value being explained by making available additional genetic information that enables more personalized treatment and patient monitoring (S2 and S3), access to a broader range of clinical trials (S2), and more complete databases to potentiate research (S3).
Conclusion
The proposed multimethodology provided IPO Lisboa decision makers with comprehensive and insightful information regarding each strategy’s value-for-money, enabling an informed discussion on whether to move from the current Strategy S1 to other competing strategies.
A theoretically consistent structural model facilitates definition and measurement of use and non-use benefits of ecosystem services. Unlike many previous approaches that utilize multiple stated choice situations, we apply this conceptual framework to a travel cost random utility model and a consequential single referendum contingent valuation research design for simultaneously estimating use and non-use willingness to pay for environmental quality improvement. We employ Monte Carlo generated data to evaluate properties of key parameters and examine the robustness of this method of measuring use and non-use values associated with quality change. The simulation study confirms that the new method, combined with simulated revealed and stated preference data, can generally, though not always, identify use and non-use values of various ecosystems in a consistent manner.
In this chapter, we discuss the use of simulations for clinical trials. Simulation in statistics generally refers to repeated analyses of randomly generated datasets with known properties. Clinical trial simulation is required to explore, compare, and characterise operating characteristics and statistical properties of adaptive and other innovative trials with complex designs. Clinical trial simulation is an important tool that allows for comparison of different design choices during the planning stage to enhance the quality and feasibility of the trial. While simulations are most frequently used in adaptive and other complex trial designs, they can be applied to fixed trial designs.
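The core mechanic, repeated analyses of randomly generated datasets with known properties, can be sketched for the simplest fixed two-arm design. The example below estimates a design's operating characteristics (type I error when the effect is zero, power otherwise); the z-test with known unit variance, sample sizes, and simulation counts are simplifying assumptions for illustration, not a recommended trial analysis.

```python
import math
import random
import statistics

def rejection_rate(effect, n_per_arm=100, n_sims=2000, rng=random.Random(0)):
    # Repeatedly generate two-arm trial data with known properties and apply
    # the planned analysis; the rejection frequency estimates the design's
    # type I error (effect = 0) or power (effect > 0).
    rejections = 0
    for _ in range(n_sims):
        control = [rng.gauss(0.0, 1.0) for _ in range(n_per_arm)]
        treated = [rng.gauss(effect, 1.0) for _ in range(n_per_arm)]
        # Two-sample z-test with variance known to be 1 (a simplification).
        z = (statistics.mean(treated) - statistics.mean(control)) \
            / math.sqrt(2.0 / n_per_arm)
        rejections += abs(z) > 1.96
    return rejections / n_sims
```

For adaptive designs the same loop applies, except each simulated trial also executes the interim decision rules, which is precisely why simulation becomes indispensable: the operating characteristics no longer have simple closed forms.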
Quantifying the multiscale hydraulic heterogeneity in aquifers and their effects on solute transport is the task of this chapter. Using spatial statistics, we explain how to quantify spatial variability of hydraulic properties or parameters in the aquifer using the stochastic or random field concept. In particular, we discuss spatial covariance, variogram, statistical homogeneity, heterogeneity, isotropy, and anisotropy concepts. Field examples complement the discussion. We then present a highly parameterized heterogeneous media (HPHM) approach for simulating flow and solute transport in aquifers with spatially varying hydraulic properties to meet our interest and observation scale. However, our limited ability to collect the needed information for this approach promotes alternatives such as Monte Carlo simulation, zonation, and equivalent homogeneous media (EHM) with macrodispersion approaches. This chapter details the EHM approach with the macrodispersion concept.
True and Error Theory (TET) provides a method to separate the variability of behavior into components due to changing true policy and to random error. TET is a testable theory that can serve as a statistical model, allowing one to evaluate substantive theories as nested, special cases. TET is more accurate descriptively and has theoretical advantages over previous approaches. This paper presents a freely available computer program in R that can be used to fit and evaluate both TET and substantive theories that are special cases of it. The program performs Monte Carlo simulations to generate distributions of test statistics and bootstrapping to provide confidence intervals on parameter estimates. Use of the program is illustrated by a reanalysis of previously published data testing whether what appeared to be violations of Expected Utility (EU) theory (Allais paradoxes) by previous methods might actually be consistent with EU theory.
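The separation TET makes between true policy and random error can be shown concretely for the simplest case, one choice presented twice within a session. The paper's program is in R; the sketch below is an illustrative Python rendering of the same idea, with hypothetical parameter values.

```python
import random

def tet_probs(p, e):
    # True and Error Theory for one choice presented twice:
    # p = probability the true policy prefers option A,
    # e = probability of a random response error on each presentation.
    p_aa = p * (1 - e) ** 2 + (1 - p) * e ** 2   # "A" on both presentations
    p_bb = p * e ** 2 + (1 - p) * (1 - e) ** 2   # "B" on both presentations
    return p_aa, 1.0 - p_aa - p_bb, p_bb         # (AA, inconsistent, BB)

def simulate_responses(p, e, n, rng=random.Random(7)):
    # Monte Carlo generation of response patterns under TET: draw the true
    # policy once per respondent, then flip each response with probability e.
    counts = {"AA": 0, "mixed": 0, "BB": 0}
    for _ in range(n):
        prefers_a = rng.random() < p
        says_a = [prefers_a != (rng.random() < e) for _ in range(2)]
        key = "AA" if all(says_a) else ("BB" if not any(says_a) else "mixed")
        counts[key] += 1
    return counts
```

Note the key testable signature: inconsistent ("mixed") responses arise only from error, so their frequency pins down `e` separately from the true-policy probability `p`, which is what lets TET treat substantive theories as nested special cases.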
Edited by
Myles Lavan, University of St Andrews, Scotland; Daniel Jew, National University of Singapore; Bart Danon, Rijksuniversiteit Groningen, The Netherlands
This short chapter recapitulates the substantive advances made by the individual chapters in this volume before closing remarks on the difference between using probability to represent epistemic uncertainty and modelling variability, two exercises that are easily confused, and on the use of models to answer historical questions.