This paper presents an overview of an approach to the quantitative analysis of qualitative data with theoretical and methodological explanations of the two cornerstones of the approach, Alternating Least Squares and Optimal Scaling. Using these two principles, my colleagues and I have extended a variety of analysis procedures originally proposed for quantitative (interval or ratio) data to qualitative (nominal or ordinal) data, including additivity analysis and analysis of variance; multiple and canonical regression; principal components; common factor and three-mode factor analysis; and multidimensional scaling. The approach has two advantages: (a) if a least squares procedure is known for analyzing quantitative data, it can be extended to qualitative data; and (b) the resulting algorithm will be convergent. Three completely worked-through examples of the additivity analysis procedure and the steps involved in the regression procedures are presented.
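A minimal sketch of the alternating least squares / optimal scaling idea for a regression with an ordinal response, assuming numpy and scikit-learn are available; the function and variable names are illustrative and not the authors' implementation.

```python
# Minimal ALSOS-style sketch for regression with an ordinal response, assuming
# numpy and scikit-learn; names are illustrative, not the authors' code.
import numpy as np
from sklearn.isotonic import IsotonicRegression

def alsos_regression(X, y_ordinal, n_iter=100, tol=1e-8):
    """Alternate a least squares fit with an optimal (monotone) rescaling of y."""
    y = np.asarray(y_ordinal, dtype=float)
    z = (y - y.mean()) / y.std()                 # initial quantification of the ordinal data
    prev_loss = np.inf
    for _ in range(n_iter):
        # Model estimation step: ordinary least squares for the current scaling z.
        b, *_ = np.linalg.lstsq(X, z, rcond=None)
        z_hat = X @ b
        # Optimal scaling step: least squares monotone transformation of the data.
        z = IsotonicRegression().fit_transform(y, z_hat)
        z = (z - z.mean()) / z.std()             # standardize to avoid a degenerate solution
        loss = np.mean((z - z_hat) ** 2)
        if abs(prev_loss - loss) < tol:          # stop once the loss stabilizes
            break
        prev_loss = loss
    return b, z
```

Each half-step is itself a least squares problem, which is why the overall loss is non-increasing and the algorithm converges, as claimed in (b) above.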
This paper studies the problem of scaling ordinal categorical data observed over two or more sets of categories measuring a single characteristic. Scaling is obtained by solving a constrained entropy model which finds the most probable values of the scales given the data. A Kullback-Leibler statistic is generated which operationalizes a measure for the strength of consistency among the sets of categories. A variety of data of two and three sets of categories are analyzed using the entropy approach.
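For reference (standard notation, not reproduced from the paper), a Kullback-Leibler statistic comparing a fitted categorical distribution $p$ with a reference distribution $q$ has the general form

$$D_{\mathrm{KL}}(p \,\|\, q) = \sum_{i} p_i \log \frac{p_i}{q_i},$$

which is zero when the two distributions coincide and grows as they diverge; a statistic of this type underlies the consistency measure described above.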
Synthetic data are used to examine how well axiomatic and numerical conjoint measurement methods, individually and comparatively, recover simple polynomial generators in three dimensions. The study illustrates extensions of numerical conjoint measurement (NCM) to identify and model distributive and dual-distributive, in addition to the usual additive, data structures. It was found that although minimum STRESS was the criterion of fit, another statistic, predictive capability, provided a better diagnosis of the known generating model. That NCM methods were able to better identify generating models conflicts with Krantz and Tversky's assertion that, in general, the direct axiom tests provide a more powerful diagnostic test between alternative composition rules than does evaluation of numerical correspondence. For all methods, dual-distributive models are the most difficult to recover, while, consistent with past studies, the additive model is the most robust of the fitted models.
When the three-parameter logistic model is applied to tests covering a broad range of difficulty, there frequently is an increase in mean item discrimination and a decrease in variance of item difficulties and traits as the tests become more difficult. To examine the hypothesis that this unexpected scale shrinkage effect occurs because the items increase in complexity as they increase in difficulty, an approximate relationship is derived between the unidimensional model used in data analysis and a multidimensional model hypothesized to be generating the item responses. Scale shrinkage is successfully predicted for several sets of simulated data.
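For reference, the three-parameter logistic model mentioned above has the standard item response function (conventional notation, not reproduced from the paper)

$$P_i(\theta) = c_i + (1 - c_i)\,\frac{1}{1 + \exp\!\left[-D a_i(\theta - b_i)\right]},$$

where $a_i$ is the item discrimination, $b_i$ the item difficulty, $c_i$ the lower asymptote (pseudo-guessing parameter), $\theta$ the latent trait, and $D$ a scaling constant (often set to 1.7).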
A Monte Carlo study was conducted to investigate the ability of three estimation criteria to recover the parameters of Case V and Case III models from comparative judgment data. Significant differences in recovery are shown to exist.
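As background, in standard Thurstonian notation (not taken from the abstract): Case V assumes equal, uncorrelated discriminal dispersions, so that, up to a choice of unit, the predicted choice proportion is $p_{ij} = \Phi(\mu_i - \mu_j)$; Case III relaxes the equal-variance assumption, giving

$$p_{ij} = \Phi\!\left(\frac{\mu_i - \mu_j}{\sqrt{\sigma_i^{2} + \sigma_j^{2}}}\right),$$

where $\Phi$ is the standard normal distribution function and $\mu_i$, $\sigma_i^{2}$ are the mean and variance of the discriminal process for stimulus $i$.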
A new family of indices was introduced earlier as a link between two approaches: one based on item response theory and the other on sample statistics. In this study, the statistical properties of these indices are investigated, and their relationships to Guttman scales and to item and person response curves are discussed. Further, these indices are standardized, and an example of their potential usefulness for diagnosing students' misconceptions is shown.
A modified version of a coordinate adjustment technique which permits the analysis of comparisons of psychological intervals for an unknown ordering of stimuli is described and compared to the original version and to TORSCA. For configurations with a large number of points, knowledge of the rank order of the stimuli does not improve the solution. For configurations with a small number of points, the performance of the new algorithm with an unknown ordering is equivalent to TORSCA.
A procedure for ordering object (stimulus) pairs based on individual preference ratings is described. The basic assumption is that individual responses are consistent with a nonmetric multidimensional unfolding model. The method requires data where a numerical response is independently generated for each individual-object pair. In conjunction with a nonmetric multidimensional scaling procedure, it provides a vehicle for recovering meaningful object configurations.
Two least squares procedures for symmetrization of a conditional proximity matrix are derived. The solutions provide multiplicative constants for scaling the rows or columns of the matrix to maximize symmetry. It is suggested that the symmetrization is applicable for the elimination of bias effects like response bias, or constraints on the marginal frequencies imposed by the experimental design, as in confusion matrices.
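A generic numerical stand-in (not the closed-form least squares solutions derived in the paper), assuming numpy and scipy; it illustrates the idea of choosing multiplicative row constants to maximize symmetry.

```python
# Generic numerical stand-in (not the paper's closed-form solution): find positive
# row-scaling constants that make diag(c) @ P as symmetric as possible in the
# least squares sense. numpy and scipy are assumed; names are illustrative.
import numpy as np
from scipy.optimize import minimize

def symmetrize_rows(P):
    """Multiplicative row constants minimizing the asymmetry of the scaled matrix."""
    n = P.shape[0]

    def asymmetry(log_c_rest):
        # Fix the first constant to 1 so the trivial all-zero scaling is excluded.
        c = np.exp(np.concatenate(([0.0], log_c_rest)))
        S = np.diag(c) @ P                       # row-scaled proximity matrix
        return np.sum((S - S.T) ** 2)            # squared departure from symmetry

    res = minimize(asymmetry, x0=np.zeros(n - 1))
    return np.exp(np.concatenate(([0.0], res.x)))
```

Scaling columns instead of rows works the same way with `P @ np.diag(c)` in place of `np.diag(c) @ P`.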
I develop a survey method for estimating social influence over individual political expression by combining the content-richness of document scaling with the flexibility of survey research. I introduce the “What Would You Say?” question, which measures self-reported usage of political catchphrases in a hypothetical social context that I manipulate in a between-subjects experiment. Using Wordsticks, an ordinal item response theory model inspired by Wordfish, I estimate each respondent’s lexical ideology and outspokenness, scaling their political lexicon in a two-dimensional space. I then identify self-censorship and preference falsification as causal effects of social context on respondents’ outspokenness and lexical ideology, respectively. This improves upon existing survey measures of political expression: it avoids conflating expressive behavior with populist attitudes, it defines preference falsification in terms of code-switching, and it moves beyond trait measures of self-censorship to characterize relative shifts in the content of expression between different contexts. I validate the method and present experiments demonstrating its application to contemporary concerns about self-censorship and polarization, and I conclude by discussing its interpretation and future uses.
Viewed from the perspective of public policy, behavioural public policy (BPP) faces challenges in four main areas: Systems, Impatience, Nudging, and Scaling. To address these challenges, several suggestions are proposed. First, understanding how BPP interventions unfold in complex systems requires better diagnostics and the development of predictive and generative models of human behaviour. Second, the rapid pace of policy processes necessitates a shift towards generating timely and fit-for-purpose evidence. Third, maximising the opportunities presented by BPP, beyond merely ‘nudging’, demands the early and proactive application of behavioural science in the policy cycle. Fourth, achieving widespread impact in BPP initiatives means considering scale-up from the start. Lastly, the consistent and comprehensive integration of behavioural science into standard policymaking practices would support sustainable progress in addressing these challenges.
This chapter shows how, throughout millennia, philanthropy has served as a catalyst for change and as a vehicle for community transformation. While COVID-19 has forced philanthropists worldwide to take immediate action and mobilise billions of dollars to save lives, African philanthropy and the culture of ‘giving’ are not new phenomena but are ingrained in the fabric of African societies. Before the arrival of colonialism, aid agencies, and development partners, grassroots philanthropists and associations mobilised resources to address development issues. Within this context, the chapter focuses on the role of multi-sector partnerships in Africa and how they arose out of the crisis of the pandemic to drive the efficiency of vital collaborations between the African Union (AU), local governments, and the private sector. It shows how these partnerships helped the continent curb the pandemic and prevent the massive spread of infections. This chapter highlights the uniqueness and significance of these partnerships at the local and continental levels and identifies some of the core values underpinning them. The chapter also explores the importance and the impact of the AU’s strategic leadership and multi-sectoral partnerships in advancing the continent’s health and economic agenda while deconstructing some of the inherent challenges that were faced when trying to scale these alliances in Africa.
Having examined the production, consumption, and valuation of information and data, we can start to design business models for information goods. We examine the fundamental characteristics of information and data goods that we need to consider. It is critical for a digital innovator to design mechanisms that allow users to discover their valuation and preferences, and that allow the innovator to discover users’ willingness to pay. Ideally, such mechanisms account for cognitive tendencies to prefer intuitive, familiar, simple, and quick solutions to our data and information needs.
The components or functions derived from an eigenanalysis are linear combinations of the original variables. Principal components analysis (PCA) is a very common method that uses these components to examine patterns among the objects, often in a plot termed an ordination, and to identify which variables drive those patterns. Correspondence analysis (CA) is a related method used when the variables represent counts or abundances. Redundancy analysis and canonical CA are constrained versions of PCA and CA, respectively, where the components are derived after taking into account the relationships with additional explanatory variables. Finally, we introduce linear discriminant function analysis as a way of identifying and predicting membership of objects in predefined groups.
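A minimal PCA sketch via eigenanalysis of the correlation matrix, assuming numpy; the data matrix and function names are illustrative.

```python
# Minimal PCA via eigenanalysis of the correlation matrix, assuming numpy.
import numpy as np

def pca_scores(X, n_components=2):
    """Object scores (for an ordination plot) and variable loadings from eigenanalysis."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)   # standardize the variables
    R = np.corrcoef(Z, rowvar=False)                   # correlation matrix of the variables
    eigvals, eigvecs = np.linalg.eigh(R)               # eigenvalues/eigenvectors (ascending)
    order = np.argsort(eigvals)[::-1]                  # largest variance first
    vecs = eigvecs[:, order[:n_components]]
    scores = Z @ vecs                                  # components: linear combinations of the variables
    return scores, vecs, eigvals[order[:n_components]]
```

The returned scores can be plotted as an ordination of the objects, and the loadings (eigenvectors) indicate which variables drive each component.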
The analysis of ice response to stress using finite elements is described, using multiaxial constitutive relationships, including damage, in a viscoelastic framework. The U-shaped relationship of compliance with pressure is part of this formulation. The results show that the layer of damaged ice adjacent to the indentor arises naturally through the formulation, giving rise to a peak load and subsequent decline. This shows that there can be “layer failure” in addition to failure due to fractures and spalling. Tests on extrusion of crushed ice are described together with a formulation of constitutive relationships based on special triaxial tests of crushed ice. The ice temperature measured during field indentation tests showed a drop in temperature during the upswings in load. This was attributed to localized pressure melting. Small scale indentor tests are described, which show clearly the difference between layer failure and spalling, as found using high-speed video and pressure-sensitive film. The question of scaling, as used in ice tanks, is addressed. Flexural failure can be scaled to some extent; scaling of high-pressure zones lies in the mechanics as developed in the book.
In this chapter, a concept known as scaling is introduced. Scaling (also known as nondimensionalization) is essentially a form of dimensional analysis. Dimensional analysis is a general term used to describe a means of analyzing a system based on the units of the problem (e.g. kilogram for mass, kelvin for temperature, meter for length, coulomb for electric charge, etc.). The concepts of this chapter, while not entirely about the fluid equations per se, are arguably the most useful for understanding the various concepts of fluid mechanics. In addition, the concepts discussed within this chapter can be extended to other areas of physics, particularly areas that are heavily reliant on differential equations (which is most of physics and engineering).
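A standard illustration, not drawn from the chapter: nondimensionalizing the incompressible momentum equation with a characteristic length $L$ and velocity $U$, using $\mathbf{x}^{*} = \mathbf{x}/L$, $\mathbf{u}^{*} = \mathbf{u}/U$, $t^{*} = tU/L$, and $p^{*} = p/(\rho U^{2})$, collapses the dimensional parameters into a single dimensionless group, the Reynolds number:

$$\frac{\partial \mathbf{u}^{*}}{\partial t^{*}} + (\mathbf{u}^{*} \cdot \nabla^{*})\,\mathbf{u}^{*} = -\nabla^{*} p^{*} + \frac{1}{Re}\,\nabla^{*2}\mathbf{u}^{*}, \qquad Re = \frac{\rho U L}{\mu}.$$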
The knowledge economy represents a new domination by a longstanding factor of production. New insights and technological innovation have always shaped economic activity, but the rate of technological change and the proportion of knowledge as a factor of production and as a product have grown greatly in recent decades. This chapter describes the knowledge economy and explains how it makes it more likely that producers will have positive returns to scale – in other words, that profits will increase as the level of production grows. These features have profound implications for the international dimensions of the knowledge economy, as illustrated by branding and supply chains.
Empirical equations of downstream hydraulic geometry, entailing width, depth, velocity, and bed slope, can be derived using the scaling theory. The theory employs the momentum equation, a flow resistance formula, and the continuity equation for gradually varied open channel flow. The scaling equations are expressed as power functions of water discharge and bed sediment size, and are applicable to alluvial, ice, and bedrock channels. These equations are valid for any value of water discharge, as opposed to just the mean or bank-full values used in empirical equations. This chapter discusses the use of scaling theory for the derivation of downstream hydraulic geometry. The scaling theory-based hydraulic geometry equations are also compared with those derived using the regime theory, threshold theory, and stability index theory, and the equations are found to be consistent.
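For context, downstream hydraulic geometry relations are conventionally written as power functions of discharge (the classic empirical form; the bed sediment size dependence of the scaling equations is omitted here):

$$w = a\,Q^{b}, \qquad d = c\,Q^{f}, \qquad v = k\,Q^{m},$$

with bed slope likewise expressed as a power of discharge; continuity ($Q = w\,d\,v$) requires $a\,c\,k = 1$ and $b + f + m = 1$.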
This chapter discusses the ways in which natural selection has acted on the animal and primate brain, demonstrating that the human brain is better at some tasks, whereas other animals are better at certain others (e.g. spatial memory in chimpanzees). Human brains are the result of selection for very specific tasks, largely relating to social information. It also discusses the role of metabolism in brain evolution, reviewing the ‘expensive tissue hypothesis’. It summarizes brain anatomy and shows that, anatomically, the human brain is essentially a scaled-up primate brain. Finally, it discusses the idea of consciousness, the ways we evaluate it in other animals, and how it may have arisen.
AM-meso structures offer high potential for adapted properties combined with lightweight design. To exploit this potential, a purposeful design of the meso structures is required. This contribution therefore presents an approach for modelling their properties as functions of design parameters via scaling relationships. The relationships are investigated based on grey-box and axiomatic models of elementary cells. As an example, the pressure stiffness is determined using FEM and compared with an analytical approximation. The comparison reveals effects and influences occurring within the elementary cell.