Who joins extremist movements? Answering this question is beset by methodological challenges as survey techniques are infeasible and selective samples provide no counterfactual. Recruits can be assigned to contextual units, but this is vulnerable to problems of ecological inference. In this article, we elaborate a technique that combines survey and ecological approaches. The Bayesian hierarchical case–control design that we propose allows us to identify individual-level and contextual factors patterning the incidence of recruitment to extremism, while accounting for spatial autocorrelation, rare events, and contamination. We empirically validate our approach by matching a sample of Islamic State (ISIS) fighters from nine MENA countries with representative population surveys enumerated shortly before recruits joined the movement. High-status individuals in their early twenties with college education were more likely to join ISIS. There is more mixed evidence for relative deprivation. The accompanying extremeR package provides functionality for applied researchers to implement our approach.
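The sketch below is a minimal illustration of the case–control idea described above; it is not the authors' Bayesian hierarchical spatial model and not the extremeR API. Cases (recruits) and survey controls enter a plain logistic regression, whose intercept is then corrected for an assumed population prevalence tau of recruitment (the standard prior correction for case–control sampling). All data and parameter values are made up.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_cases, n_controls, p = 200, 2000, 3
X = np.vstack([rng.normal(loc=0.5, size=(n_cases, p)),      # hypothetical covariates
               rng.normal(loc=0.0, size=(n_controls, p))])  # (age, education, status, ...)
y = np.r_[np.ones(n_cases), np.zeros(n_controls)]           # 1 = recruit, 0 = survey control

def neg_loglik(beta, X, y):
    eta = beta[0] + X @ beta[1:]
    return np.sum(np.logaddexp(0.0, eta) - y * eta)          # Bernoulli-logit, numerically stable

fit = minimize(neg_loglik, x0=np.zeros(p + 1), args=(X, y), method="BFGS")
beta = fit.x

tau = 1e-4                    # assumed population prevalence of recruitment
ybar = y.mean()               # case share in the case-control sample
beta[0] -= np.log((1 - tau) / tau * ybar / (1 - ybar))       # intercept offset for case-control sampling
print("corrected intercept:", beta[0])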
Experiments have suggested that decisions from experience differ from decisions from description. In experience-based decisions, decision makers often fail to maximise their payoffs. Previous authors have ascribed this deviation from maximisation to the underweighting of rare outcomes. In this paper, I re-examine the effect and provide further analysis of it, using an experiment that involves a series of simple binary choice gambles. In the current experiment, decisions that bear small consequences are repeated hundreds of times, feedback on the consequence of each decision is provided immediately, and decision outcomes are accumulated. The participants have to learn about the outcome distributions through sampling, as they are not explicitly provided with prior information on the payoff structure. The current results suggest that the “hot stove effect” is stronger than suggested by previous research and is as important as the payoff variability effect and the effect of underweighting of rare outcomes in analysing decisions from experience in which the features of gambles must be learned through a sampling process.
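The following toy simulation is my own illustration of the hot stove effect, not the paper's experimental design: a learner who updates the value of a safe and an equally attractive risky option from its own payoffs ends up choosing the risky option far less than half the time, because unlucky early draws suppress further sampling of it. The learning rate and payoff spread are arbitrary.

import numpy as np

rng = np.random.default_rng(1)

def run(trials=500, alpha=0.2, payoff_sd=3.0):
    q = np.zeros(2)                    # learned values: index 0 = safe, 1 = risky
    risky_share = 0
    for _ in range(trials):
        # greedy choice on learned values, random tie-break
        choice = int(q[1] > q[0]) if q[0] != q[1] else int(rng.integers(2))
        payoff = 0.0 if choice == 0 else rng.normal(0.0, payoff_sd)   # both options have mean 0
        q[choice] += alpha * (payoff - q[choice])                     # recency-weighted update
        risky_share += choice
    return risky_share / trials

shares = [run() for _ in range(200)]
print("mean share of risky choices:", np.mean(shares))   # well below 0.5: the hot stove effect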
Previous research demonstrates overestimation of rare events in judgment tasks and underweighting of rare events in decisions from experience. The current paper presents three laboratory experiments and a field study that explore this pattern. The results suggest that overestimation and underweighting can emerge in parallel. Part of the difference between the two tendencies can be explained as a product of a contingent recency effect: although the estimations reflect negative recency, choice behavior reflects positive recency. A similar pattern is observed in the field study: immediately following an aversive rare event (i.e., a suicide bombing), people believe the risk decreases (negative recency) but at the same time exhibit more cautious behavior (positive recency). The rest of the difference is consistent with two well-established mechanisms: judgment error and the use of small samples in choice. Implications for the two-stage choice model are discussed.
We study convergence of return- and hitting-time distributions of small sets $E_k$ with $\mu(E_k)\rightarrow 0$ in recurrent ergodic dynamical systems preserving an infinite measure $\mu$. Some properties which are easy in finite measure situations break down in this null-recurrent set-up. However, in the presence of a uniform set $Y$ with wandering rate regularly varying of index $1-\alpha$ with $\alpha\in(0,1]$, there is a scaling function suitable for all subsets of $Y$. In this case, we show that return distributions for the $E_k$ converge if and only if the corresponding hitting-time distributions do, and we derive an explicit relation between the two limit laws. Some consequences of this result are discussed. In particular, this leads to improved sufficient conditions for convergence to $\mathcal{E}^{1/\alpha}\mathcal{G}_\alpha$, where $\mathcal{E}$ and $\mathcal{G}_\alpha$ are independent random variables, with $\mathcal{E}$ exponentially distributed and $\mathcal{G}_\alpha$ following the one-sided stable law of order $\alpha$ (and $\mathcal{G}_1:=1$). The same principle also reveals the limit laws (different from the above) which occur at hyperbolic periodic points of prototypical null-recurrent interval maps. We also derive similar results for the barely recurrent $\alpha=0$ case.
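For reference, the limit variable above can be written out using the standard characterization of the one-sided stable law; this is a textbook recap, not quoted from the paper, and the normalizing constant depends on the convention chosen:

\[
  \mathbb{E}\bigl[e^{-s\,\mathcal{G}_\alpha}\bigr] = e^{-c\,s^{\alpha}},
  \qquad s \ge 0,\ \alpha \in (0,1),\ c > 0 \text{ (often taken as } c=1\text{)},
\]
while $\mathcal{G}_1 := 1$, so the limit variable $\mathcal{E}^{1/\alpha}\mathcal{G}_\alpha$, with $\mathcal{E}\sim\mathrm{Exp}(1)$ independent of $\mathcal{G}_\alpha$, reduces to a plain exponential law when $\alpha = 1$.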
Provides some of the details of numerical optimization as applied to likelihood functions and discusses possible problems, both computational and data-generated.
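As a generic illustration of the kind of computation these caveats concern (the model and data here are arbitrary, not taken from the text), the snippet below maximizes a log-likelihood numerically with a quasi-Newton optimizer; a log-parameterization keeps the parameters in their admissible region, one of the practical issues such optimizations raise.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import gamma

rng = np.random.default_rng(42)
data = rng.gamma(shape=2.5, scale=1.7, size=500)       # synthetic sample

def neg_loglik(params):
    shape, scale = np.exp(params)                      # log-parameterization keeps both positive
    return -np.sum(gamma.logpdf(data, a=shape, scale=scale))

fit = minimize(neg_loglik, x0=np.zeros(2), method="BFGS")
print("MLE (shape, scale):", np.exp(fit.x), "converged:", fit.success)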
The temperature programmed molecular dynamics (TPMD) method is a recent addition to the list of rare-event simulation techniques for materials. TPMD makes it possible to study thermally activated events that are rare on molecular dynamics (MD) timescales by employing a temperature program that raises the temperature in stages to a point where transitions happen frequently. Analysis of the observed waiting-time distribution yields a wealth of information, including the kinetic mechanisms in the material, their rate constants, and Arrhenius parameters. The first part of this review covers the foundations of the TPMD method. Recent applications of TPMD are discussed to highlight its main advantages, which offer the possibility of rapidly constructing kinetic Monte Carlo (KMC) models of a chosen accuracy using TPMD. In this regard, the second part focuses on the latest developments in uncertainty measures for KMC models. The third part focuses on current challenges for the TPMD method and ways of resolving them.
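The following simplified sketch is not the TPMD estimator itself: it replaces the temperature ramp with a few constant-temperature stages and only illustrates the final Arrhenius-analysis step, recovering an assumed barrier Ea_true and prefactor nu_true from simulated waiting times.

import numpy as np

rng = np.random.default_rng(7)
kB = 8.617e-5                          # Boltzmann constant, eV/K
nu_true, Ea_true = 1e13, 0.8           # assumed attempt frequency (1/s) and barrier (eV)

temps = np.array([500., 550., 600., 650., 700.])
mean_wait = []
for T in temps:
    k = nu_true * np.exp(-Ea_true / (kB * T))
    waits = rng.exponential(1.0 / k, size=2000)        # exponential first-passage times at this T
    mean_wait.append(waits.mean())

k_est = 1.0 / np.array(mean_wait)                      # rate = 1 / mean waiting time
slope, intercept = np.polyfit(1.0 / temps, np.log(k_est), 1)   # ln k = ln nu - Ea/(kB T)
print("Ea (eV):", -slope * kB, "  nu (1/s):", np.exp(intercept))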
In this work, we develop a minimum action method (MAM) with optimal linear time scaling, called tMAM for short. The main idea is to relax the integration time as a functional of the transition path through optimal linear time scaling, so that a direct optimization of the integration time is not required. The Freidlin–Wentzell action functional is discretized by finite elements, on the basis of which h-type adaptivity is introduced to tMAM. The adaptive tMAM does not require reparametrization of the transition path. It can be applied to compute the quasi-potential: 1) when the minimal action path is subject to an infinite integration time due to critical points, tMAM with a uniform mesh converges algebraically at a lower rate than the optimal one, whereas the adaptive tMAM recovers the optimal convergence rate; 2) when the minimal action path is subject to a finite integration time, tMAM with a uniform mesh converges at the optimal rate since the problem is not singular, and the optimal integration time can be obtained directly from the minimal action path. Numerical experiments are presented for both SODE and SPDE examples.
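For orientation, the standard Freidlin–Wentzell setting referred to above is the following (textbook definitions, not quoted from the paper): for the diffusion dX_t = b(X_t) dt + sqrt(eps) dW_t,

\[
  S_T(\varphi) \;=\; \tfrac12 \int_0^T \bigl\lVert \dot{\varphi}(t) - b(\varphi(t)) \bigr\rVert^2 \, dt ,
  \qquad
  V(x_1, x_2) \;=\; \inf_{T>0} \;\inf_{\varphi(0)=x_1,\ \varphi(T)=x_2} S_T(\varphi),
\]

so the optimal linear time scaling mentioned above removes the need to optimize the integration time T as a separate variable.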
We present an efficient algorithm for calculating the minimum energy path (MEP) and energy barriers between local minima on a multidimensional potential energy surface (PES). Such paths play a central role in understanding transition pathways between metastable states. Our method relies on the original formulation of the string method [Phys. Rev. B, 66, 052301 (2002)], i.e., evolving a smooth curve along a direction normal to the curve. The algorithm works by performing minimization steps on hyperplanes normal to the curve, so the problem of finding the MEP on the PES is recast as a set of constrained minimization problems. This provides the flexibility to use minimization algorithms faster than the steepest descent method used in the simplified string method [J. Chem. Phys., 126(16), 164103 (2007)]. At the same time, it provides a more direct analog of the finite temperature string method. The applicability of the algorithm is demonstrated using various examples.
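For contrast, here is a compact sketch of the simplified string method that the paragraph above compares against (a steepest-descent step followed by reparametrization to equal arc length), applied to a made-up 2D double-well potential; it is not the hyperplane-constrained algorithm proposed in the paper.

import numpy as np

def grad_V(pts):                       # V(x, y) = (x^2 - 1)^2 + 2*y^2, minima at (-1, 0) and (1, 0)
    x, y = pts[:, 0], pts[:, 1]
    return np.column_stack([4 * x * (x**2 - 1), 4 * y])

n_images, dt = 25, 5e-3
s = np.linspace(0.0, 1.0, n_images)
string = np.column_stack([np.linspace(-1, 1, n_images),
                          0.5 * np.sin(np.pi * s)])            # bowed initial guess

for _ in range(2000):
    string -= dt * grad_V(string)                              # steepest-descent step on each image
    seg = np.linalg.norm(np.diff(string, axis=0), axis=1)      # reparametrize to equal arc length
    arc = np.r_[0.0, np.cumsum(seg)] / np.sum(seg)
    string = np.column_stack([np.interp(s, arc, string[:, 0]),
                              np.interp(s, arc, string[:, 1])])

print("max |y| along converged string:", np.abs(string[:, 1]).max())   # ~0: the straight MEP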
Parallel replica dynamics is a method for accelerating the computation of processes characterized by a sequence of infrequent events. In this work, the processes are governed by the overdamped Langevin equation. Such processes spend much of their time about the minima of the underlying potential, occasionally transitioning into different basins of attraction. The essential idea of parallel replica dynamics is that the exit distribution from a given well for a single process can be approximated by the distribution of the first exit of N independent identical processes, each run for only 1/N-th the amount of time. While promising, this leads to a series of numerical analysis questions about the accuracy of the exit distributions. Building upon the recent work in [C. Le Bris, T. Lelièvre, M. Luskin and D. Perez, Monte Carlo Methods Appl. 18 (2012) 119–146], we prove a unified error estimate on the exit distributions of the algorithm against an unaccelerated process. Furthermore, we study a dephasing mechanism, and prove that it will successfully complete.
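The snippet below is a bare-bones illustration of the accelerated-clock idea only, with the dephasing and decorrelation stages of parallel replica dynamics omitted and all parameters invented for the demo: N replicas of an overdamped Langevin trajectory in a 1D double well are run until the first one exits, and N times that first-exit time is compared with unaccelerated exit times.

import numpy as np

rng = np.random.default_rng(3)
beta, dt = 2.5, 1e-3                                    # inverse temperature, Euler-Maruyama step
drift = lambda x: -4.0 * x * (x**2 - 1.0)               # -V'(x) for V(x) = (x^2 - 1)^2

def parrep_exit_time(N, x0=-1.0, barrier=0.0):
    """Run N replicas in the left well until the first crosses the barrier;
    return the accelerated exit time N * t."""
    x, t = np.full(N, x0), 0.0
    while np.all(x < barrier):
        x += drift(x) * dt + np.sqrt(2.0 * dt / beta) * rng.normal(size=N)
        t += dt
    return N * t

direct = [parrep_exit_time(N=1) for _ in range(100)]    # unaccelerated reference
parrep = [parrep_exit_time(N=8) for _ in range(100)]
print("mean exit time, direct:", np.mean(direct))
print("mean exit time, ParRep:", np.mean(parrep))       # roughly comparable if exits are ~exponential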
An occupancy problem with an infinite number of bins and a random probability vector for the locations of the balls is considered. The respective sizes of the bins are related to the split times of a Yule process. The asymptotic behavior of the landscape of the first empty bins, i.e. the set of corresponding indices represented by point processes, is analyzed and convergences in distribution to mixed Poisson processes are established. Additionally, the influence of the random environment, the random probability vector, is analyzed. It is represented by two main components: an independent, identically distributed sequence and a fixed random variable. Each of these components has a specific impact on the qualitative behavior of the stochastic model. It is shown in particular that, for some values of the parameters, some rare events, which are identified, determine the asymptotic behavior of the average values of the number of empty bins in some regions.
This paper examines a problem of importance to the telecommunications industry. In the design of modern ATM switches, it is necessary to use simulation to estimate the probability that a queue within the switch exceeds a given large value. Since these are extremely small probabilities, importance sampling methods must be used. Here we obtain a change of measure for a broad class of models with direct applicability to ATM switches.
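As a textbook-level illustration of the change-of-measure idea (the embedded random walk of a simple M/M/1-style queue, not the ATM switch models treated in the paper; rates and the overflow level are made up), the snippet below estimates the small probability of reaching a high level before emptying by simulating under swapped arrival and service rates and reweighting with the likelihood ratio.

import numpy as np

rng = np.random.default_rng(11)
lam, mu, b = 0.3, 1.0, 15              # arrival rate, service rate, overflow level
p  = lam / (lam + mu)                  # P(up-step) under the original measure
pt = mu  / (lam + mu)                  # P(up-step) under the tilted (swapped) measure

def is_estimate(n_paths=20000):
    total = 0.0
    for _ in range(n_paths):
        x, logw = 1, 0.0
        while 0 < x < b:
            up = rng.random() < pt                                   # simulate under the tilted measure
            logw += np.log(p / pt) if up else np.log((1 - p) / (1 - pt))
            x += 1 if up else -1
        if x == b:                                                   # overflow happened
            total += np.exp(logw)                                    # likelihood-ratio weight
    return total / n_paths

print("IS estimate of P(reach b before 0):", is_estimate())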
We consider a model with A independent sources of cells where each source is modeled by a Markov renewal point process with batch arrivals. We do not assume the sources are necessarily identically distributed, nor that batch sizes are independent of the state of the Markov process. These arrivals join a queue served by multiple independent servers, each with service times also modeled as a Markov renewal process. We only discuss a time-slotted system. The queue is viewed as the additive component of a Markov additive chain subject to the constraint that the additive component remains non-negative. We apply the theory in McDonald (1999) to obtain the asymptotics of the tail of the distribution of the queue size in steady state plus the asymptotics of the mean time between large deviations of the queue size.
We consider the classical risk model with subexponential claim size distribution. Three methods are presented to simulate the probability of ultimate ruin and we investigate their asymptotic efficiency. One, based upon a conditional Monte Carlo idea involving the order statistics, is shown to be asymptotically efficient in a certain sense. We use the simulation methods to study the accuracy of the standard Embrechts-Veraverbeke [16] approximation for the ruin probability and also suggest a new one based upon ideas of Hogan [21].
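For comparison, the snippet below is a crude Monte Carlo baseline built on the Pollaczeck–Khinchine representation, not one of the three estimators studied in the paper: with Pareto claims (an assumed example), the ruin probability equals the probability that a geometric number of integrated-tail claims exceeds the initial reserve, and the poor relative error of this naive estimator for subexponential claims is exactly what the more refined methods address.

import numpy as np

rng = np.random.default_rng(5)
a, rho, u = 2.5, 0.7, 200.0            # Pareto index, traffic intensity, initial reserve (made up)

def crude_psi(n=100_000):
    hits = 0
    for _ in range(n):
        K = rng.geometric(1.0 - rho) - 1                             # number of ladder heights
        if K == 0:
            continue
        # integrated tail of Bbar(y) = (1 + y)^(-a) is Pareto with exponent a - 1
        Y = (1.0 - rng.random(K)) ** (-1.0 / (a - 1.0)) - 1.0
        hits += Y.sum() > u
    return hits / n

print("crude MC estimate of psi(u):", crude_psi())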
We discuss the limits of point processes which are generated by a triangular array of rare events. Such point processes are motivated by the exceedances of a high boundary by a random sequence, since exceedances are rare events in this case. This application relates the problem to extreme value theory, whose methods are used to treat the asymptotic approximation of these point processes. The general approach presented here extends, unifies and clarifies some of the various conditions used in extreme value theory.
Let ψ(u) be the ruin probability in a risk process with initial reserve u, Poisson arrival rate β, claim size distribution B and premium rate p(x) at level x of the reserve. Let γ(x) be the non-zero solution of the local Lundberg equation β(B̂[γ(x)] − 1) = γ(x)p(x). It is shown that I(u) = ∫₀ᵘ γ(x) dx is non-decreasing and that log ψ(u) ≈ –I(u) in a slow Markov walk limit. Though the results and conditions are of large deviations type, the proofs are elementary and utilize piecewise comparisons with standard risk processes with a constant p. Also simulation via importance sampling using local exponential change of measure defined in terms of the γ(x) is discussed and some numerical results are presented.
The theory of robust non-linear filtering in Clark (1978) and Davis (1980), (1982) is used to evaluate the limiting conditional distribution of a diffusion, given an observation of a ‘rare-event’ sample-path of the diffusion, as the signal-to-noise ratio and the diffusion noise-intensity converge to infinity and zero respectively. Under mild conditions it is shown that the limiting conditional distribution is a Dirac measure concentrated at a trajectory which solves a variational problem parametrised by the sample-path of the observed signal.
To treat the transient behavior of a system modeled by a stationary Markov process in continuous time, the state space is partitioned into good and bad states. The distribution of sojourn times on the good set and that of exit times from this set have a simple renewal theoretic relationship. The latter permits useful bounds on the exit time survival function obtainable from the ergodic distribution of the process. Applications to reliability theory and communication nets are given.