In this chapter we investigate how sociolinguistic theory overlaps with selected areas of applied linguistics. We revisit the question of how discrimination operates in the language ideology of Standard English and find out how this may create serious impediments in domains such as education and health advice. We look at how anthropological and ethnographic issues give rise to cultural misunderstandings, how insights from variation and change can be used to help improve children’s reading and writing skills, and how sociolinguists have become involved in dialect maintenance and revival. There are special sections on forensic sociolinguistics and legal aspects of language usage, and we present hands-on cases of real-life issues where sociolinguistics is relevant, particularly the 2013 court case that followed the killing of Trayvon Martin.
This part focuses on the interpretative methodologies and principles employed by international human rights organs in applying and developing human rights norms. It explores the role of various interpreters, including international, regional, and national courts, in shaping the meaning and scope of human rights. The sections examine the methods of interpretation used by human rights bodies, such as textual, contextual, purposive, and evolutionary approaches, and the challenges in ensuring consistency and coherence across different jurisdictions. It also discusses the purposes of interpretation, including the protection of human rights, the development of international human rights law, and the promotion of judicial dialogue and coherence. This part delves into the international legal regime governing human rights and freedoms, covering states’ general obligations, the conditions for engaging state responsibility, and the regime for the enjoyment and exercise of rights and freedoms. By analyzing the interpretative practices and legal obligations, this part aims to provide a deeper understanding of the dynamics of human rights interpretation and the factors influencing the application of human rights norms in diverse legal and cultural contexts.
The preface paradox is often taken to show that beliefs can be individually rational but jointly inconsistent. However, this received conflict between rationality and consistency is unfounded: this paper seeks to show that no rational beliefs are actually inconsistent in the preface paradox.
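For orientation, a minimal formal rendering of the standard setup (not drawn from the paper itself): an author rationally believes each of the claims $p_1, \dots, p_n$ asserted in the body of a book and, in the preface, also rationally believes that at least one of them is false. The resulting belief set is
\[
\{\, p_1,\ p_2,\ \dots,\ p_n,\ \neg (p_1 \wedge p_2 \wedge \dots \wedge p_n) \,\},
\]
which is logically inconsistent even though each member may seem individually rational to believe; the paper disputes that the beliefs of a rational agent ever stand in this relation.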
This paper critically assesses Rizzo and Whitman’s theory of inclusive rationality in light of the ongoing cross-disciplinary debate about rationality, welfare analyses and policy evaluation. The paper aims to provide three main contributions to this debate. First, it explicates the relation between the consistency conditions presupposed by standard axiomatic conceptions of rationality and the standards of rationality presupposed by Rizzo and Whitman’s theory of inclusive rationality. Second, it provides a qualified defence of the consistency conditions presupposed by standard axiomatic conceptions of rationality against the main criticisms put forward by Rizzo and Whitman. And third, it identifies and discusses specific strengths and weaknesses of Rizzo and Whitman’s theory of inclusive rationality in the context of welfare analyses and policy evaluation.
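For readers outside the axiomatic tradition, the consistency conditions at issue are conditions on a preference relation $\succeq$ over a choice set $X$; a representative pair, stated here for orientation rather than quoted from the paper, is
\[
\text{completeness: } \forall x, y \in X,\ x \succeq y \ \text{or}\ y \succeq x; \qquad
\text{transitivity: } x \succeq y \ \text{and}\ y \succeq z \ \Rightarrow\ x \succeq z.
\]
Rizzo and Whitman's inclusive rationality questions the normative force of such conditions, and that disagreement is what the paper examines in the context of welfare analysis and policy evaluation.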
We begin with the canonical status of the reals: they are unique up to isomorphism as a complete Archimedean ordered field, but this canonicity does not extend to questions of cardinality. We discuss four ‘elephants in the room’ here (an elephant in the room is something obviously there but which no one wants to mention). The first elephant (from Gödel’s incompleteness theorem and the Continuum Hypothesis, CH): one cannot properly speak of the real line, but only of which real line one chooses to work with. The second is ‘which sets of reals can one use?’ (it depends on which axioms of set theory one assumes – in particular, on the role of the Axiom of Choice, AC). The third is that there are sentences that are neither provable nor disprovable, and that no non-trivial axiom system is capable of proving its own consistency. Thus we do not – cannot – know that mathematics itself is consistent. The fourth elephant is that even to define cardinals, the concept of cardinality needs AC.
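For reference, the Continuum Hypothesis mentioned under the first elephant can be stated as the claim that no set has cardinality strictly between that of the naturals and that of the reals,
\[
\neg\, \exists S \ \bigl( \aleph_0 < |S| < 2^{\aleph_0} \bigr),
\]
and by the combined work of Gödel and Cohen neither CH nor its negation is provable from the ZFC axioms (assuming ZFC is consistent), which is why ‘which real line one chooses to work with’ becomes a substantive choice.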
Consider the class of two parameter marginal logistic (Rasch) models, for a test of m True-False items, where the latent ability is assumed to be bounded. Using results of Karlin and Studden, we show that this class of nonparametric marginal logistic (NML) models is equivalent to the class of marginal logistic models where the latent ability assumes at most (m + 2)/2 values. This equivalence has two implications. First, estimation for the NML model is accomplished by estimating the parameters of a discrete marginal logistic model. Second, consistency for the maximum likelihood estimates of the NML model can be shown (when m is odd) using the results of Kiefer and Wolfowitz. An example is presented which demonstrates the estimation strategy and contrasts the NML model with a normal marginal logistic model.
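A sketch of the marginal formulation in standard Rasch notation (the paper's exact parameterization may differ): with item difficulties $b_j$, the conditional probability of a correct response and the marginal probability of a response pattern $x = (x_1, \dots, x_m)$ are
\[
P(X_j = 1 \mid \theta) = \frac{\exp(\theta - b_j)}{1 + \exp(\theta - b_j)}, \qquad
P(X = x) = \int \prod_{j=1}^{m} P(X_j = 1 \mid \theta)^{x_j} \bigl[ 1 - P(X_j = 1 \mid \theta) \bigr]^{1 - x_j} \, dF(\theta),
\]
where $F$ is an otherwise unspecified ability distribution supported on a bounded interval. The equivalence result says that $F$ can be replaced by a discrete distribution with at most $(m + 2)/2$ support points without changing the manifest probabilities.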
Diagnostic classification models are confirmatory in the sense that the relationship between the latent attributes and responses to items is specified or parameterized. Such models are readily interpretable, with each component of the model usually having a practical meaning. However, parameterized diagnostic classification models are sometimes too simple to capture all the data patterns, resulting in significant model lack of fit. In this paper, we attempt to obtain a compromise between interpretability and goodness of fit by regularizing a latent class model. Our approach starts with minimal assumptions on the data structure, followed by suitable regularization to reduce complexity, so that a readily interpretable yet flexible model is obtained. An expectation–maximization-type algorithm is developed for efficient computation. It is shown that the proposed approach enjoys good theoretical properties. Results from simulation studies and a real application are presented.
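As a point of reference (the paper's own parameterization and penalty are not reproduced here), the unpenalized EM iteration for a latent class model with $K$ classes, class weights $\nu_k$, item-response probabilities $\theta_{jk}$ and binary responses $x_{ij}$ computes posterior class memberships in the E-step,
\[
\pi_{ik} \;\propto\; \nu_k \prod_{j} \theta_{jk}^{x_{ij}} (1 - \theta_{jk})^{1 - x_{ij}},
\]
and updates the parameters in the M-step,
\[
\nu_k = \frac{1}{n} \sum_{i=1}^{n} \pi_{ik}, \qquad
\theta_{jk} = \frac{\sum_{i} \pi_{ik}\, x_{ij}}{\sum_{i} \pi_{ik}}.
\]
The regularized approach described above instead maximizes a penalized version of the likelihood, so that unnecessary complexity is shrunk away and an interpretable structure remains.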
This paper traces the consequences of viewing test responses as simply providing dichotomous data concerning ordinal relations. It begins by proposing that the score matrix is best considered to be items-plus-persons by items-plus-persons, recording the wrongs as well as the rights. This shows how an underlying order is defined, and it was used to provide the basis for a tailored testing procedure. It was also used to define a number of measures of test consistency. Test items provide person dominance relations, and the relations provided by one item can stand in one of three relations to those provided by a second item: redundant, contradictory, or unique. Summary statistics concerning the number of relations of each kind are easy to obtain and provide useful information about the test, information which is related to but different from the usual statistics. These concepts can be extended to form the basis of a test theory which is based on ordinal statistics and frequency counts and which invokes the concept of true scores only in a limited sense.
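As an illustration of this bookkeeping, the following Python sketch counts, for one ordered pair of dichotomously scored items, how many of the person-dominance relations supplied by the first item are redundant with, contradicted by, or unique relative to the second. The function name and the exact operationalization are illustrative assumptions based on the abstract, not the paper's own procedure.

import numpy as np

def dominance_relations(responses, item_a, item_b):
    """Classify the person-dominance relations of item_a against item_b.

    responses: (n_persons, n_items) array of 0/1 scores.
    Item a is taken to order person p above person q when p answers it
    correctly and q does not (an assumed reading of the abstract).
    """
    a, b = responses[:, item_a], responses[:, item_b]
    counts = {"redundant": 0, "contradictory": 0, "unique": 0}
    n_persons = len(a)
    for p in range(n_persons):
        for q in range(n_persons):
            if p == q or not (a[p] == 1 and a[q] == 0):
                continue  # item_a supplies no relation for this pair
            if b[p] == 1 and b[q] == 0:
                counts["redundant"] += 1       # same ordering from item_b
            elif b[p] == 0 and b[q] == 1:
                counts["contradictory"] += 1   # reversed ordering from item_b
            else:
                counts["unique"] += 1          # item_b does not order the pair
    return counts

Summing such counts over all item pairs gives the kind of summary statistics the abstract describes, related to but distinct from the usual item statistics.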
A closed-form estimator of the uniqueness (unique variance) in factor analysis is proposed. It has analytically desirable properties: consistency, asymptotic normality, and scale invariance. The estimation procedure is illustrated through application to two data sets, Emmett's data and Holzinger and Swineford's data. The new estimator is shown to lead to values rather close to those of the maximum likelihood estimator.
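In the single-factor case the logic behind such non-iterative estimators can be illustrated as follows (a sketch under a one-factor model, not necessarily the paper's exact formula): with loadings $\lambda_i$ and unique variances $\psi_i$, the off-diagonal covariances satisfy $\sigma_{ij} = \lambda_i \lambda_j$, so for distinct items $i$, $j$, $k$
\[
\lambda_i^2 = \frac{\sigma_{ij}\, \sigma_{ik}}{\sigma_{jk}}, \qquad
\hat{\psi}_i = s_{ii} - \frac{s_{ij}\, s_{ik}}{s_{jk}},
\]
a closed-form function of sample covariances that inherits consistency and asymptotic normality from them (by the continuous mapping theorem and the delta method) whenever $\sigma_{jk} \neq 0$.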
Previous studies have found some puzzling power anomalies related to testing the indirect effect of a mediator. The power for the indirect effect stagnates and even declines as the size of the indirect effect increases. Furthermore, the power for the indirect effect can be much higher than the power for the total effect in a model where there is no direct effect and therefore the indirect effect is of the same magnitude as the total effect. In the presence of a direct effect, the power for the indirect effect is often much higher than the power for the direct effect even when these two effects are of the same magnitude. In this study, the limiting distributions of related statistics and their non-centralities are derived. Computer simulations are conducted to demonstrate their validity. These theoretical results are used to explain the observed anomalies.
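For context, in the standard single-mediator model (intercepts omitted; standard notation, not taken from the paper) the indirect, direct, and total effects are related by
\[
M = a X + e_M, \qquad Y = c' X + b M + e_Y, \qquad c = c' + a b,
\]
and the indirect effect $ab$ is commonly tested with the first-order (Sobel) statistic
\[
z = \frac{\hat{a}\, \hat{b}}{\sqrt{\hat{a}^2\, \mathrm{SE}(\hat{b})^2 + \hat{b}^2\, \mathrm{SE}(\hat{a})^2}};
\]
the limiting distributions and non-centralities derived in the paper concern statistics of this kind.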
This commentary concerns the theoretical properties of the estimation procedure in “A General Method of Empirical Q-matrix Validation” by Jimmy de la Torre and Chia-Yi Chiu. It raises the consistency issue of the estimator, proposes some modifications to it, and also makes some conjectures.
The paper obtains consistent standard errors (SE) and biases of order O(1/n) for the sample standardized regression coefficients with both random and given predictors. Analytical results indicate that the formulas for SEs given in popular textbooks are consistent only when the population value of the regression coefficient is zero. The sample standardized regression coefficients are also biased in general, although this should not be a concern in practice when the sample size is not too small. Monte Carlo results imply that, for both standardized and unstandardized sample regression coefficients, SE estimates based on asymptotics tend to under-predict the empirical ones at smaller sample sizes.
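The quantity at issue is the sample standardized coefficient, which rescales the unstandardized slope by sample standard deviations,
\[
\hat{\beta}^{*}_{j} = \hat{\beta}_{j}\, \frac{s_{x_j}}{s_{y}},
\]
so its sampling variability reflects the randomness of $s_{x_j}$ and $s_y$ as well as that of $\hat{\beta}_j$; textbook formulas that ignore this extra variability are, as noted above, consistent only when the population coefficient is zero.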
Chang and Stout (1993) presented a derivation of the asymptotic posterior normality of the latent trait given examinee responses under nonrestrictive nonparametric assumptions for dichotomous IRT models. This paper presents an extension of their results to polytomous IRT models in a fairly straightforward manner. In addition, a global information function is defined, and the relationship between the global information function and the currently used information functions is discussed. An information index that combines both the global and local information is proposed for adaptive testing applications.
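For orientation, the local (Fisher) item and test information functions for dichotomous items are the standard
\[
I_j(\theta) = \frac{\bigl[ P_j'(\theta) \bigr]^2}{P_j(\theta)\, \bigl[ 1 - P_j(\theta) \bigr]}, \qquad
I(\theta) = \sum_{j} I_j(\theta),
\]
where $P_j(\theta)$ is the item response function; the global information function discussed above is related to, but distinct from, this pointwise notion.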
When there exist omitted effects, measurement error, and/or simultaneity in multilevel models, explanatory variables may be correlated with random components, and standard estimation methods do not provide consistent estimates of model parameters. This paper introduces estimators that are consistent under such conditions. By employing generalized method of moments (GMM) estimation techniques in multilevel modeling, the authors present a series of estimators along a robust-to-efficient continuum. This continuum depends on the assumptions that the analyst makes regarding the extent of the correlated effects. It is shown that the GMM approach provides an overarching framework that encompasses well-known estimators such as fixed and random effects estimators and also provides more options. These GMM estimators can be expressed as instrumental variable (IV) estimators, which enhances their interpretability. Moreover, by exploiting the hierarchical structure of the data, the current technique does not require additional variables, unlike traditional IV methods. Further, statistical tests are developed to compare the different estimators. A simulation study examines the finite sample properties of the estimators and tests and confirms the theoretical order of the estimators with respect to their robustness and efficiency. It further shows that not only are regression coefficients biased, but variance components may be severely underestimated in the presence of correlated effects. Empirical standard errors are employed, as they are less sensitive to correlated effects when compared to model-based standard errors. An example using student achievement data shows that GMM estimators can be effectively used in a search for the most efficient among unbiased estimators.
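As a reminder of the machinery involved (generic GMM notation, not the authors' specific moment conditions), a GMM estimator minimizes a quadratic form in sample moment conditions,
\[
\hat{\theta}_{\mathrm{GMM}} = \arg\min_{\theta}\ \bar{g}_n(\theta)^{\top} W_n\, \bar{g}_n(\theta),
\qquad \bar{g}_n(\theta) = \frac{1}{n} \sum_{i=1}^{n} g(z_i, \theta),
\]
and the choice of which moment (instrument) conditions to impose, together with the weight matrix $W_n$, determines where a particular estimator sits on the robust-to-efficient continuum described above.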
The asymptotic posterior normality (APN) of the latent variable vector in an item response theory (IRT) model is a crucial argument in IRT modeling approaches. In the case of a single latent trait and under general assumptions, Chang and Stout (Psychometrika, 58(1):37–52, 1993) proved the APN for a broad class of latent trait models for binary items. Under the same setup, they also showed the consistency of the latent trait’s maximum likelihood estimator (MLE). Since then, several modeling approaches have been developed that consider multivariate latent traits and assume their APN, a conjecture which has not been proved so far. We fill this theoretical gap by extending the results of Chang and Stout to multivariate latent traits. Further, we discuss the existence and consistency of MLEs, maximum a-posteriori and expected a-posteriori estimators for the latent traits under the same broad class of latent trait models.
The asymptotic classification theory of cognitive diagnosis (ACTCD) provided the theoretical foundation for using clustering methods that do not rely on a parametric statistical model for assigning examinees to proficiency classes. Like general diagnostic classification models, clustering methods can be useful in situations where the true diagnostic classification model (DCM) underlying the data is unknown and possibly misspecified, or the items of a test conform to a mix of multiple DCMs. Clustering methods can also be an option when fitting advanced and complex DCMs encounters computational difficulties. These can range from the use of excessive CPU times to plain computational infeasibility. However, the propositions of the ACTCD have only been proven for the Deterministic Input Noisy Output “AND” gate (DINA) model and the Deterministic Input Noisy Output “OR” gate (DINO) model. For other DCMs, there does not exist a theoretical justification to use clustering for assigning examinees to proficiency classes. But if clustering is to be used legitimately, then the ACTCD must cover a larger number of DCMs than just the DINA model and the DINO model. Thus, the purpose of this article is to prove the theoretical propositions of the ACTCD for two other important DCMs, the Reduced Reparameterized Unified Model and the General Diagnostic Model.
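For reference, the DINA model named above specifies, for examinee $i$ with attribute vector $\alpha_i$, item $j$ with Q-matrix row $q_j$, slipping parameter $s_j$ and guessing parameter $g_j$,
\[
\eta_{ij} = \prod_{k} \alpha_{ik}^{\, q_{jk}}, \qquad
P(X_{ij} = 1 \mid \alpha_i) = (1 - s_j)^{\eta_{ij}}\, g_j^{\, 1 - \eta_{ij}},
\]
with the DINO model replacing the conjunctive condition $\eta_{ij}$ by a disjunctive one; the article's contribution is to extend the ACTCD's propositions beyond these two models.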
The paper derives sufficient conditions for the consistency and asymptotic normality of the least squares estimator of a trilinear decomposition model for multiway data analysis.
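In standard notation (the paper's precise assumptions are not restated here), the trilinear decomposition models a three-way array elementwise as a sum of $R$ rank-one terms plus error,
\[
x_{ijk} = \sum_{r=1}^{R} a_{ir}\, b_{jr}\, c_{kr} + e_{ijk},
\]
and the least squares estimator minimizes $\sum_{i,j,k} \bigl( x_{ijk} - \sum_{r} a_{ir} b_{jr} c_{kr} \bigr)^2$ over the component matrices $A$, $B$, and $C$.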
Moderation analysis is widely used in social and behavioral research. The most commonly used model for moderation analysis is moderated multiple regression (MMR) in which the explanatory variables of the regression model include product terms, and the model is typically estimated by least squares (LS). This paper argues for a two-level regression model in which the regression coefficients of a criterion variable on predictors are further regressed on moderator variables. An algorithm for estimating the parameters of the two-level model by normal-distribution-based maximum likelihood (NML) is developed. Formulas for the standard errors (SEs) of the parameter estimates are provided and studied. Results indicate that, when heteroscedasticity exists, NML with the two-level model gives more efficient and more accurate parameter estimates than the LS analysis of the MMR model. When error variances are homoscedastic, NML with the two-level model leads to essentially the same results as LS with the MMR model. Most importantly, the two-level regression model permits estimating the percentage of variance of each regression coefficient that is due to moderator variables. When applied to data from General Social Surveys 1991, NML with the two-level model identified a significant moderation effect of race on the regression of job prestige on years of education while LS with the MMR model did not. An R package is also developed and documented to facilitate the application of the two-level model.
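The contrast can be sketched in illustrative notation with a single predictor $x$ and a single moderator $z$ (a simplification of the models above): MMR includes a product term, whereas the two-level model lets the coefficient itself depend on the moderator,
\[
\text{MMR:}\quad y = b_0 + b_1 x + b_2 z + b_3 x z + e; \qquad
\text{two-level:}\quad y = \beta_0 + \beta_1 x + e, \quad \beta_1 = \gamma_0 + \gamma_1 z + u_1,
\]
and it is the decomposition of $\beta_1$ into a moderator-explained part and a residual $u_1$ that permits estimating the percentage of each coefficient's variance due to moderators.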
We consider latent variable models for an infinite sequence (or universe) of manifest (observable) variables that may be discrete, continuous or some combination of these. The main theorem is a general characterization by empirical conditions of when it is possible to construct latent variable models that satisfy unidimensionality, monotonicity, conditional independence, and tail-measurability. Tail-measurability means that the latent variable can be estimated consistently from the sequence of manifest variables even though an arbitrary finite subsequence has been removed. The characterizing, necessary and sufficient, conditions that the manifest variables must satisfy for these models are conditional association and vanishing conditional dependence (as one conditions upon successively more other manifest variables). Our main theorem considerably generalizes and sharpens earlier results of Ellis and van den Wollenberg (1993), Holland and Rosenbaum (1986), and Junker (1993). It is also related to the work of Stout (1990).
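Of the two characterizing conditions, conditional association is the more familiar; roughly in the form used by Holland and Rosenbaum (1986), it requires that for any split of the manifest variables into $(X, Y)$, any coordinatewise nondecreasing functions $f$ and $g$, and any function $h$,
\[
\operatorname{Cov}\bigl( f(X),\, g(X) \mid h(Y) = c \bigr) \;\ge\; 0
\]
for every value $c$, while vanishing conditional dependence requires, as stated above, that the dependence between manifest variables disappears as one conditions on successively more of the other manifest variables.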
The main theorem is preceded by many results for latent variable models in general—not necessarily unidimensional and monotone. They pertain to the uniqueness of latent variables and are connected with the conditional independence theorem of Suppes and Zanotti (1981). We discuss new definitions of the concepts of “true-score” and “subpopulation,” which generalize these notions from the “stochastic subject,” “random sampling,” and “domain sampling” formulations of latent variable models (e.g., Holland, 1990; Lord & Novick, 1968). These definitions do not require the a priori specification of a latent variable model.