This paper considers the family of invariant measures of Markovian mean-field interacting particle systems on a countably infinite state space and studies its large deviation asymptotics. The Freidlin–Wentzell quasipotential is the usual candidate rate function for the sequence of invariant measures indexed by the number of particles. The paper provides two counterexamples where the quasipotential is not the rate function. The quasipotential arises from finite-horizon considerations. However, there are certain barriers that cannot be surmounted easily in any finite time horizon, but these barriers can be crossed in the stationary regime. Consequently, the quasipotential is infinite at some points where the rate function is finite. After highlighting this phenomenon, the paper studies some sufficient conditions on a class of interacting particle systems under which one can continue to assert that the Freidlin–Wentzell quasipotential is indeed the rate function.
Two ensembles are frequently used to model random graphs subject to constraints: the microcanonical ensemble (= hard constraint) and the canonical ensemble (= soft constraint). It is said that breaking of ensemble equivalence (BEE) occurs when the specific relative entropy of the two ensembles does not vanish as the size of the graph tends to infinity. Various examples have been analysed in the literature. It was found that BEE is the rule rather than the exception for two classes of constraints: sparse random graphs when the number of constraints is of the order of the number of vertices, and dense random graphs when there are two or more constraints that are frustrated. We establish BEE for a third class: dense random graphs with a single constraint on the density of a given simple graph. We show that BEE occurs in a certain range of choices for the density and the number of edges of the simple graph, which we refer to as the BEE-phase. We also show that, in part of the BEE-phase, there is a gap between the scaling limits of the averages of the maximal eigenvalue of the adjacency matrix of the random graph under the two ensembles, a property that is referred to as the spectral signature of BEE. We further show that in the replica symmetric region of the BEE-phase, BEE is due to the coexistence of two densities in the canonical ensemble.
This chapter generalizes Van Evera’s typology of process tracing tests to a fully probabilistic context, introducing measures of anticipated test strength that will also be used in Chapter 11.
We give an introduction to the concept of relative entropy (also called Kullback–Leibler divergence). We interpret relative entropy in terms of both coding and diversity, and sketch some connections with other subjects: Riemannian geometry (where relative entropy is infinitesimally a squared distance), measure theory, and statistics. We prove that relative entropy is uniquely characterized by a short list of properties.
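As a concrete illustration (not taken from the chapter itself), relative entropy between two finite distributions can be computed directly. The sketch below uses the natural logarithm and the standard conventions 0 log 0 = 0 and D(p‖q) = ∞ when q assigns zero mass where p does not; the distributions p and q are arbitrary examples:

```python
import math

def relative_entropy(p, q):
    """D(p || q) = sum_i p_i * log(p_i / q_i), in nats.

    Conventions: terms with p_i = 0 contribute 0; the divergence is
    infinite if q_i = 0 while p_i > 0 (absolute continuity fails).
    """
    d = 0.0
    for pi, qi in zip(p, q):
        if pi == 0.0:
            continue
        if qi == 0.0:
            return math.inf
        d += pi * math.log(pi / qi)
    return d

p = [0.5, 0.3, 0.2]
q = [1 / 3, 1 / 3, 1 / 3]

print(relative_entropy(p, p))  # 0: D(p || p) vanishes
print(relative_entropy(p, q))  # strictly positive (Gibbs' inequality)
print(relative_entropy(q, p))  # differs from D(p || q): not symmetric
```

Note that relative entropy fails both symmetry and the triangle inequality, which is why it is a divergence rather than a metric, even though it behaves infinitesimally like a squared distance.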
Value is a generalization of diversity. In ecological terms, it allows a community to be measured when its species are assigned worths that need not be related to how distinctive they are. Mathematically, the characterization theorem for value measures that we prove here allows us to prove a characterization theorem for the Hill numbers (hence, in principle, the Rényi entropies).
We give a short introduction to some classical information-theoretic quantities: joint entropy, conditional entropy and mutual information. We then interpret their exponentials ecologically, as meaningful measures of subcommunities of a larger metacommunity. These subcommunity and metacommunity measures have excellent logical properties, as we establish. We also show how all these quantities can be presented in terms of relative entropy and the value measures of the previous chapter.
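For concreteness (this example is illustrative, not from the chapter), the three classical quantities can be computed from a joint distribution, and mutual information recovered as a relative entropy against the product of the marginals; the joint distribution `pxy` below is an arbitrary example:

```python
import math

def H(dist):
    """Shannon entropy (in nats) of a probability dictionary."""
    return -sum(p * math.log(p) for p in dist.values() if p > 0)

# A hypothetical joint distribution p(x, y) on {0, 1} x {0, 1}.
pxy = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
px = {x: sum(p for (a, _), p in pxy.items() if a == x) for x in (0, 1)}
py = {y: sum(p for (_, b), p in pxy.items() if b == y) for y in (0, 1)}

joint = H(pxy)              # joint entropy H(X, Y)
cond = joint - H(px)        # chain rule: H(Y | X) = H(X, Y) - H(X)
mi = H(px) + H(py) - joint  # mutual information I(X; Y)

# I(X; Y) equals the relative entropy D(p_XY || p_X (tensor) p_Y).
prod = {(x, y): px[x] * py[y] for x in (0, 1) for y in (0, 1)}
kl = sum(p * math.log(p / prod[k]) for k, p in pxy.items() if p > 0)
assert abs(mi - kl) < 1e-12
```

The exponentials of such quantities, rather than the quantities themselves, are what acquire the ecological interpretation as effective numbers of species in subcommunities and metacommunities.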
The study of high-dimensional distributions is of interest in probability theory, statistics, and asymptotic convex geometry, where the object of interest is the uniform distribution on a convex set in high dimensions. The ℓp-spaces and norms are of particular interest in this setting. In this paper we establish a limit theorem for distributions on ℓp-spheres, conditioned on a rare event, in a high-dimensional geometric setting. As part of our proof, we establish a certain large deviation principle that is also relevant to the study of the tail behavior of random projections of ℓp-balls in a high-dimensional Euclidean space.
For a stationary Markov process the detailed balance condition is equivalent to the time-reversibility of the process. For stochastic differential equations (SDEs), the time discretization of numerical schemes usually destroys the time-reversibility property. Despite an extensive literature on the numerical analysis for SDEs, their stability properties, strong and/or weak error estimates, large deviations and infinite-time estimates, no quantitative results are known on the lack of reversibility of discrete-time approximation processes. In this paper we provide such quantitative estimates by using the concept of entropy production rate, inspired by ideas from non-equilibrium statistical mechanics. The entropy production rate for a stochastic process is defined as the relative entropy (per unit time) of the path measure of the process with respect to the path measure of the time-reversed process. By construction the entropy production rate is nonnegative, and it vanishes if and only if the process is reversible. Crucially, from a numerical point of view, the entropy production rate is an a posteriori quantity, hence it can be computed in the course of a simulation as the ergodic average of a certain functional of the process (the so-called Gallavotti–Cohen (GC) action functional). We compute the entropy production for various numerical schemes, such as the explicit Euler–Maruyama and explicit Milstein schemes, for reversible SDEs with additive or multiplicative noise. In addition we analyze the entropy production for the BBK integrator for the Langevin equation. The order (in the time-discretization step Δt) of the entropy production rate provides a tool to classify numerical schemes in terms of their (discretization-induced) irreversibility. Our results show that the type of the noise critically affects the behavior of the entropy production rate.
As a striking example of our results, we show that the Euler scheme for multiplicative noise is not an adequate scheme from a reversibility point of view, since its entropy production rate does not decrease with Δt.
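To make the ergodic-average idea concrete, here is a minimal stand-in sketch (not the paper's computation): an explicit Euler–Maruyama discretization of the reversible SDE dX = −X³ dt + √2 dW, with the entropy production estimated by averaging the per-step log-ratio of the forward and backward Gaussian transition densities, a GC-type functional. The drift, step size, and the names `b`, `gc_term`, `ep_rate` are all choices made for this illustration:

```python
import math
import random

random.seed(0)

dt, sigma2, n_steps = 0.01, 2.0, 100_000

def b(x):
    # Drift of the reversible SDE dX = -X^3 dt + sqrt(2) dW.
    return -x ** 3

def gc_term(x, y):
    """log p(x -> y) - log p(y -> x) for the Gaussian transition density
    of the explicit Euler-Maruyama step with additive noise."""
    fwd = (y - x - b(x) * dt) ** 2
    bwd = (x - y - b(y) * dt) ** 2
    return (bwd - fwd) / (2 * sigma2 * dt)

x, total = 0.0, 0.0
for _ in range(n_steps):
    y = x + b(x) * dt + math.sqrt(sigma2 * dt) * random.gauss(0.0, 1.0)
    total += gc_term(x, y)
    x = y

ep_rate = total / (n_steps * dt)  # ergodic estimate, per unit time
print(ep_rate)
```

Because the scheme here has additive noise, the estimated rate should be small; no such decay holds for the Euler scheme with multiplicative noise, which is the point of the example above.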
In this article we investigate the minimal entropy martingale measure for continuous-time Markov chains. The conditions for absence of arbitrage and existence of the minimal entropy martingale measure are discussed. Under this measure, expressions for the transition intensities are obtained. Differential equations for the arbitrage-free price are derived.
Much of uncertainty quantification to date has focused on determining the effect of variables modeled probabilistically, and with a known distribution, on some physical or engineering system. We develop methods to obtain information on the system when the distributions of some variables are known exactly, others are known only approximately, and perhaps others are not modeled as random variables at all. The main tool used is the duality between risk-sensitive integrals and relative entropy, and we obtain explicit bounds on standard performance measures (variances, exceedance probabilities) over families of distributions whose distance from a nominal distribution is measured by relative entropy. The evaluation of the risk-sensitive expectations is based on polynomial chaos expansions, which help keep the computational aspects tractable.
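The duality invoked here is, presumably, the classical Gibbs (Donsker–Varadhan) variational principle; a minimal sketch of the resulting bound, with $Q$ the nominal model, $f$ a performance functional, and $R(P\,\|\,Q)$ the relative entropy, reads
$$\mathbb{E}_P[f] \;\le\; \inf_{c>0}\, \frac{1}{c}\left(\log \mathbb{E}_Q\!\left[e^{c f}\right] + R(P\,\|\,Q)\right),$$
so that over the family $\{P : R(P\,\|\,Q) \le \eta\}$ the worst-case performance is bounded by $\inf_{c>0} \frac{1}{c}\bigl(\log \mathbb{E}_Q[e^{c f}] + \eta\bigr)$, a risk-sensitive expectation under the nominal model alone.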
We prove asymptotic equipartition properties for simple hierarchical structures (modelled as multitype Galton–Watson trees) and networked structures (modelled as randomly coloured random graphs). For example, for large n, a networked data structure consisting of n units connected by an average number of links of order n/ log n can be coded by about H × n bits, where H is an explicitly defined entropy. The main tools in our proofs are large deviation principles for suitably defined empirical measures.
We investigate the dissipativity properties of a class of scalar second-order parabolic partial differential equations with time-dependent coefficients. We provide an explicit condition on the drift term which ensures that the relative entropy of one particular orbit with respect to some other one decreases to zero. The decay rate is obtained explicitly by the use of a logarithmic Sobolev inequality for the associated semigroup, which is derived by an adaptation of Bakry's Γ-calculus. As a byproduct, the systematic method for constructing entropies which we propose here also yields the well-known intermediate asymptotics for the heat equation in a very quick way, and without having to rescale the original equation.
The primary objective of this work is to develop coarse-graining schemes for stochastic many-body microscopic models and quantify their effectiveness in terms of a priori and a posteriori error analysis. In this paper we focus on stochastic lattice systems of interacting particles at equilibrium. The proposed algorithms are derived from an initial coarse-grained approximation that is directly computable by Monte Carlo simulations, and the corresponding numerical error is calculated using the specific relative entropy between the exact and approximate coarse-grained equilibrium measures. Subsequently we carry out a cluster expansion around this first – and often inadequate – approximation and obtain more accurate coarse-graining schemes. The cluster expansions also yield sharp a posteriori error estimates for the coarse-grained approximations that can be used for the construction of adaptive coarse-graining methods. We present a number of numerical examples that demonstrate that the coarse-graining schemes developed here allow for accurate predictions of critical behavior and hysteresis in systems with intermediate and long-range interactions. We also present examples where they substantially improve predictions of earlier coarse-graining schemes for short-range interactions.
The simulation of distributions of financial assets is an important issue for financial institutions. If risk measures are evaluated for a simulated distribution instead of the model-implied distribution, the errors in the risk measurements need to be analyzed. For distribution-invariant risk measures which are continuous on compacts, we employ the theory of large deviations to study the probability of large errors. If the approximate risk measurements are based on the empirical distribution of independent samples, then the rate function equals the minimal relative entropy under a risk measure constraint. We solve this minimization problem explicitly for shortfall risk and average value at risk.
A self-consistent model for charged particles, accounting for quantum confinement, diffusive transport and electrostatic interaction is considered. The electrostatic potential is a solution of a three-dimensional Poisson equation with the particle density as the source term. This density is the product of a two-dimensional surface density and that of a one-dimensional mixed quantum state. The surface density is the solution of a drift–diffusion equation with an effective surface potential deduced from the fully three-dimensional one and which involves the diagonalization of a one-dimensional Schrödinger operator. The overall problem is viewed as a two-dimensional drift–diffusion equation coupled to a Schrödinger–Poisson system. The latter is proven to be well posed by a convex minimization technique. A relative entropy and an a priori $L^2$ estimate provide sufficient bounds to prove existence and uniqueness of a global-in-time solution. In the case of thermodynamic equilibrium boundary data, a unique stationary solution is proven to exist. The relative entropy allows us to prove the convergence of the transient solution towards it as time grows to infinity. Finally, the low-order approximation of the relative entropy is used to prove that this convergence is exponential in time.
We study the mathematical properties of a general model of cell division structured with several internal variables. We begin with a simpler and specific model with two variables; we solve the eigenvalue problem under strong or weak assumptions, and deduce from it the long-time convergence. The main difficulty comes from the natural degeneracy of the birth terms, which we overcome with a regularization technique. We then extend the results to the case with several parameters and recall the link between this simplified model and the one presented in [6]; an application to the non-linear problem is also given, leading to robust subpolynomial growth of the total population.
The aim of this paper is to define the entropy of a finite semi-Markov process. We define the entropy of the finite distributions of the process, and obtain explicitly its entropy rate by extending the Shannon–McMillan–Breiman theorem to this class of nonstationary continuous-time processes. The particular cases of pure jump Markov processes and renewal processes are considered. The relative entropy rate between two semi-Markov processes is also defined.
We study a classical stochastic control problem arising in financial economics: to maximize expected logarithmic utility from terminal wealth and/or consumption. The novel feature of our work is that the portfolio is allowed to anticipate the future, i.e. the terminal values of the prices, or of the driving Brownian motion, are known to the investor, either exactly or with some uncertainty. Results on the finiteness of the value of the control problem are obtained in various setups, using techniques from the so-called enlargement of filtrations. When the value of the problem is finite, we compute it explicitly and exhibit an optimal portfolio in closed form.
In the Bayesian estimation of higher-order Markov transition functions on finite state spaces, a prior distribution may assign positive probability to arbitrarily high orders. If there are n observations available, we show (for natural priors) that, with probability one, as n → ∞ the Bayesian posterior distribution ‘discriminates accurately’ for orders up to β log n, if β is smaller than an explicitly determined β0. This means that the ‘large deviations’ of the posterior are controlled by the relative entropies of the true transition function with respect to all others, much as the large deviations of the empirical distributions are governed by their relative entropies with respect to the true transition function. An example shows that the result can fail even for orders β log n if β is large.