Educational assessment concerns inference about students' knowledge, skills, and accomplishments. Because data are never so comprehensive and unequivocal as to ensure certitude, test theory evolved in part to address questions of weight, coverage, and import of data. The resulting concepts and techniques can be viewed as applications of more general principles for inference in the presence of uncertainty. Issues of evidence and inference in educational assessment are discussed from this perspective.
This is a revision of John Trimmer’s English translation of Schrödinger’s famous ‘cat paper’, originally published in three parts in Naturwissenschaften in 1935.
This is a reprinting of Schrödinger’s famous pair of papers delivered at the Cambridge Philosophical Society in late 1935 and 1936, wherein he first coins the term ‘entanglement’ to describe interacting quantum systems. The first paper (1935) is given here in full; section 4 of the second paper (1936) is reprinted as an appendix.
In this Element, the authors introduce Bayesian probability and inference for social science students and practitioners starting from the absolute beginning and walk readers steadily through the Element. No previous knowledge is required other than that in a basic statistics course. At the end of the process, readers will understand the core tenets of Bayesian theory and practice in a way that enables them to specify, implement, and understand models using practical social science data. Chapters will cover theoretical principles and real-world applications that provide motivation and intuition. Because Bayesian methods are intricately tied to software, code in both R and Python is provided throughout.
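As a flavour of the kind of model such an introduction typically opens with, the sketch below shows a conjugate beta-binomial update in Python. It is an illustration only, not code from the Element; the survey counts and the uniform Beta(1, 1) prior are invented for the example.

# Illustrative only: a conjugate beta-binomial update with invented data.
from scipy import stats

successes, trials = 62, 100   # hypothetical survey: 62 of 100 respondents say "yes"
prior_a, prior_b = 1, 1       # Beta(1, 1) prior, i.e. uniform on [0, 1]

# Conjugacy gives the posterior in closed form: Beta(a + k, b + n - k).
posterior = stats.beta(prior_a + successes, prior_b + trials - successes)

print(f"posterior mean: {posterior.mean():.3f}")
print(f"95% credible interval: {posterior.interval(0.95)}")

Real applications typically replace the closed-form posterior with sampling-based inference, but the prior-plus-data-gives-posterior logic is the same.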
In this paper, we study a connection between disintegration of measures and geometric properties of probability spaces. We prove a disintegration theorem, addressing disintegration from the perspective of an optimal transport problem. We look at the disintegration of transport plans, which are used to define and study disintegration maps. Using these objects, we study the regularity and absolute continuity of disintegration of measures. In particular, we exhibit conditions under which the disintegration map is weakly continuous and one can obtain a path of measures given by this map. We show a rigidity condition under which the disintegration of measures is given by absolutely continuous measures.
Probability-based estimates of the future suicide of psychiatric patients are of little assistance in clinical practice. This article proposes strategic management of the interaction between the clinician and the patient in the assessment of potentially suicidal patients, using principles derived from game theory, to achieve a therapeutic outcome that minimises the likelihood of suicide. Further developments in the applications of large language models could allow us to quantify the basis for clinical decisions in individual patients. Documenting the basis of those decisions would help to demonstrate an adequate standard of care in every interaction.
Discusses statistical methods, covering random variables and variates, sample and population, frequency distributions, moments and moment measures, probability and stochastic processes, discrete and continuous probability distributions, return periods and quantiles, probability density functions, parameter estimation, hypothesis testing, confidence intervals, covariance, regression and correlation analysis, time-series analysis.
Forecasting elections is a high-risk, high-reward endeavor. Today’s polling rock star is tomorrow’s has-been. It is a high-pressure gig. Public opinion polls have been a staple of election forecasting for almost ninety years. But single source predictions are an imperfect means of forecasting, as we detailed in the preceding chapter. One of the most telling examples of this in recent years is the 2016 US presidential election. In this chapter, we will examine public opinion as an election forecast input. We organize election prediction into three broad buckets: (1) heuristics models, (2) poll-based models, and (3) fundamentals models.
Network science is a broadly interdisciplinary field, pulling from computer science, mathematics, statistics, and more. The data scientist working with networks thus needs a broad base of knowledge, as network data calls for—and is analyzed with—many computational and mathematical tools. One needs a good working knowledge of programming, including data structures and algorithms, to analyze networks effectively. In addition to graph theory, probability theory is the foundation for any statistical modeling and data analysis. Linear algebra provides another foundation for network analysis and modeling because matrices are often the most natural way to represent graphs. Although this book assumes that readers are familiar with the basics of these topics, here we review the computational and mathematical concepts and notation that will be used throughout the book. You can use this chapter as a starting point for catching up on the basics, or as a reference while delving into the book.
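To make the linear-algebra point concrete, here is a minimal Python sketch (the toy graph is invented for illustration, not taken from the book) of building an adjacency matrix from an edge list and reading off node degrees and triangle counts from it.

# Illustrative only: representing a small undirected graph as a matrix.
import numpy as np

edges = [(0, 1), (0, 2), (1, 2), (2, 3)]   # toy edge list on 4 nodes
n = 4

A = np.zeros((n, n), dtype=int)            # adjacency matrix
for i, j in edges:
    A[i, j] = A[j, i] = 1                  # symmetric: the graph is undirected

degrees = A.sum(axis=1)                                   # row sums = node degrees
triangles = np.trace(np.linalg.matrix_power(A, 3)) // 6   # trace(A^3)/6 counts triangles

print(A)
print("degrees:", degrees)      # [2 2 3 1]
print("triangles:", triangles)  # 1 (nodes 0, 1, 2)

This matrix view is what lets spectral and other linear-algebraic tools apply directly to graphs.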
This Element looks at two projects that relate logic and information: the project of using logic to integrate, manipulate and interpret information and the project of using the notion of information to provide interpretations of logical systems. The Element defines 'information' in a manner that includes misinformation and disinformation and uses this general concept of information to provide an interpretation of various paraconsistent and relevant logics. It also integrates these logics into contemporary theories of informational updating, probability theory and (rather informally) some ideas from the theory of the complexity of proofs. The Element assumes some prior knowledge of modal logic and its possible world semantics, but all the other necessary background is provided.
We investigate here the behaviour of a large typical meandric system, proving a central limit theorem for the number of components of a given shape. Our main tool is a theorem of Gao and Wormald that allows us to deduce a central limit theorem from the asymptotics of large moments of our quantities of interest.
Time for a break! Chapter 7 takes you for a guided walk through a tiny part of mathematical wonderland. We will encounter several mathematical personalities. An important one is Andrew Wiles, who proved Fermat’s Last Theorem. The story about how he finally obtained a proof is a must-read. We learn about the Fields Medal, the equivalent of a (non-existent) Nobel Prize in mathematics. We also tell you about the four-yearly International Congresses of Mathematicians and their influence on the field. There will be a first step on the ladder towards a theory of randomness; key names here are Jacob Bernoulli and Andrei Nikolaevich Kolmogorov. Randomness also comes to us through the famous discussion between Niels Bohr and Albert Einstein on “God throwing dice”. Of course, we include Leonhard Euler and his most beautiful formula of mathematics.
Explore the concept of risk through numerous examples and their statistical modeling, traveling from a historical perspective all the way to an up-to-date technical analysis. Written with a wide readership in mind, this book begins with accounts of a selection of major historical disasters, such as the North Sea flood of 1953 and the L'Aquila earthquake. These tales serve to set the scene and to motivate the second part of the book, which describes the mathematical tools required to analyze these events, and how to use them. The focus is on the basic understanding of the mathematical modeling of risk and what types of questions the methods allow one to answer. The text offers a bridge between the world of science and that of everyday experience. It is written to be accessible to readers with only a basic background in mathematics and statistics. Even the more technical discussions are interspersed with historical comments and plentiful examples.
The Coda sketches how the distinctive tradition of uncertainty in nineteenth-century literature and culture changes with the rise of literary modernism. Uncertainty remains of vital interest to writers like Henry James, D. H. Lawrence, James Joyce, Virginia Woolf, and E. M. Forster. Yet a more self-conscious embrace of chance, contingency, and randomness, alongside a more thoroughgoing skepticism, disengages this writing from the earlier literature’s concerns. Further valences acquired by the concept of uncertainty in the early twentieth century – as radical indeterminacy in physics and contrast class to risk in economics – both intensify cultural interest in the topic and disarticulate its nineteenth-century framework. In a reading of Joseph Conrad’s novel Chance (1914), I argue that his emphasis on the value of momentary judgments, on knowledge as mercurial and provisional, and on the role of accident in literary plots all reprise Victorian tactics.
The Victorian era is often seen as solidifying modern law’s idealization of number, rule, and definition. Yet Wilkie Collins thwarts the trend toward “trial by mathematics” and “actuarial justice” by adopting an antinumerical example as the basis for a literary experiment. The bizarre third verdict (“not proven”) of Scots law, which falls between “guilty” and “not guilty” and acts as an acquittal that nonetheless imputes a lack of evidence for conviction, structures his detective novel The Law and the Lady (1875). Revealing Collins’s sources in trial reports and legal treatises, this chapter shows how uncertainty inflects judicial reasoning and models of reading. The verdict of “not proven” undercuts the truth claims of binary judgment at law, subverts normative categories, and allows for more flexible visions of social judgment. Collins makes visible a counter-trend to certainty and closure in legal institutions and Victorian novels about the law. The chapter briefly treats Anthony Trollope’s Orley Farm (1862) and Mary Braddon’s An Open Verdict (1878), which also promote types of inference and models of critical judgment that value the tentative, hesitant, and processual, evading the calculative pressures of nineteenth-century law and life.
This chapter studies two contrasting models for predictive thinking and representation in Thomas Hardy. In The Return of the Native (1878), Hardy’s depiction of repetitive phenomena evokes one renovated account of logico-mathematical probability, John Venn’s empirical theory about how we judge from series of instances. In the novel’s palpably antiquated rural setting – where characters intuit more than they see, gamble by the light of glowworms, and infer human plots from long-run traces in the material world – the abstractions of Victorian logic acquire concrete form. In The Mayor of Casterbridge (1886), by contrast, serial iterations are compressed into images. Hardy designs literary equivalents of Francis Galton’s “composite photographs,” used to model statistical data and mental processes. Characters think in overlays, detecting a parent’s face playing over that of a child, designing a future self by laying transparencies over the present, and imagining human plots as grids from overhead. Serial and composite thinking extend to Hardy’s “approximative” theory of fiction. He uses these tropes as an implicit riposte to critics, and advocates for a novelistic realism tolerant of repetition, coincidence, and improbability.
The Introduction provides an overview of the book’s argument about how novels in nineteenth-century Britain (by George Eliot, Wilkie Collins, William Thackeray, and Thomas Hardy) represented modes of thinking, judging, and acting in the face of uncertainty. It also offers a synopsis of key intellectual contexts: (1) the history of probability in logic and mathematics into the Victorian era, the parallel rise of statistics, and the novelistic importance of probability as a dual concept, geared to both the aleatory and the epistemic, to objective frequencies and subjective degrees of belief; (2) the school of thought known as associationism, which was related to mathematical probability and remained influential in the nineteenth century, underwriting the embodied account of mental function and volition in physiological psychology, and representations of deliberation and action in novels; (3) the place of uncertainty in treatises of rhetoric, law, and grammar, where considerations of evidence were inflected by probability’s epistemological transformation; and (4) the resultant shifts in literary probability (and related concepts like mimesis and verisimilitude) from Victorian novel theory to structuralist narratology, where understandings of probability as a dual concept were tacitly incorporated.
The Victorian novel developed unique forms of reasoning under uncertainty – of thinking, judging, and acting in the face of partial knowledge and unclear outcomes. George Eliot, Wilkie Collins, William Thackeray, Thomas Hardy, and later Joseph Conrad drew on science, mathematics, philosophy, and the law to articulate a phenomenology of uncertainty against emergent models of prediction and decision-making. In imaginative explorations of unsure reasoning, hesitant judgment, and makeshift action, these novelists cultivated distinctive responses to uncertainty as intellectual concern and cultural disposition, participating in the knowledge work of an era shaped by numerical approaches to the future. Reading for uncertainty yields a rich account of the dynamics of thinking and acting, a fresh understanding of realism as a genre of the probable, and a vision of literary-critical judgment as provisional and open-ended. Daniel Williams spotlights the value of literary art in a present marked by models and technologies of prediction.
The Skidegate dialect of the Haida, an Indigenous Nation on the West Coast of Canada, has a phrase for “staying curious.” Gina gii Giixan aanagung means “to look around with curiosity and intent.” This Haida concept holds more than curiosity; it conveys the idea of staying observant with the world on purpose. It suggests an active stance. Staying curious by asking questions, paying attention, and learning new things takes energy and action.
The Stay Curious and Adjust Decision-Maker Move is about decision makers being in a learning relationship with their choices, actively seeking to uncover and learn from new information based on their own lives and experiences as well as the conversations they have with others. It’s a recognition that many choices are repeated (with minor changes), so there are ample opportunities for self-learning and making adjustments.
This chapter explains problems associated with planning infrastructure systems in order to improve resilience. Understanding the concept and basic methods for planning infrastructure investments is an important aspect of studying resilience because planning is a key process that contributes to resilience preparedness and adaptation attributes. Initially, the chapter discusses the fundamental problems and issues found when making decisions about investment allocations amid uncertain conditions. Then, probabilistic risk assessment (PRA), which is still the main tool used in industry for planning processes, is explained. Because characterizing the intensity and other relevant attributes of disruptive events is an important component of planning processes for enhancing resilience, this chapter continues by exploring how these events – and especially hurricanes – can be characterized in order to obtain information that can be used as input for the planning process. Finally, the chapter concludes by discussing economic concepts and tools related to infrastructure resilience enhancement planning processes.