This paper traces the social history of the household registration system (koseki seido) in Japan from its beginnings to the present day. The paper argues that the koseki has been an essential tool of social control, used at various stages in history to serve the political needs and priorities of the ruling elite by constructing and policing the boundaries of the Japanese self. This self has been mediated through the principles of family as defined by the state, creating diverse marginalised and excluded others. The study examines the social unrest and agency of these others to further understanding of the role of the koseki in Japanese society. The paper also contributes to the understanding of nationality and citizenship in contemporary Japan in relation to the koseki.
Data has become central to various activities during armed conflict, including the identification of deceased persons. While the use of data-based methods can significantly improve the efficiency of efforts to identify the dead and inform their families about their fate, data can equally enable harm. This article analyzes the obligations that arise for States regarding the processing of data related to the identification of deceased persons. Despite being drafted long before the “age of data”, several international humanitarian law (IHL) provisions can be considered to give rise to obligations which protect those whose data is used to identify the dead from certain data-based harms. However, some of these protections rest on a data protection-friendly interpretation of more general obligations, and many apply only in international armed conflict. Against this background, it is suggested that further analysis would be desirable on how international human rights law and domestic or regional data protection law could help to strengthen the case for data protection where IHL does not contain specific duties to protect data.
The posthumous identification of Argentine soldiers killed in action during the international armed conflict in the Falkland Islands/Islas Malvinas was the result of the Humanitarian Project Plan, an unprecedented humanitarian forensic operation carried out by the International Committee of the Red Cross at the request of Argentina and the United Kingdom. As a result of respect for international humanitarian law obligations to the dead and the novel use of humanitarian forensic action to help make this possible, tombstones that once read “Argentine soldier known only to God” now bear a name, thereby assuring their families of the fundamental right to know the final fate of their loved ones. This project was originally requested by relatives of unidentified soldiers and some veterans of the war, and was agreed to and supported by the parties to the past armed conflict. Although the unidentified soldiers were not missing (it was known that they had died on the battlefield and that they were buried with dignity, though without identification, in a military cemetery), it was essential for their relatives to have their names restored and to be able to honour them in their respective graves. While this was a logistically challenging and complex forensic operation, the main challenges were not exclusively logistical and scientific, but political and diplomatic as well. At the completion of two Humanitarian Project Plan missions, 121 Argentine soldiers originally buried without a name had been identified, one identity had been corroborated, and the remains of one body that had been buried in two different graves were reassociated. All families were informed.
The key requirement for GMO authorisation is the submission of analytical methods for detection, identification and quantification (DIQ), which has proven challenging in the case of New Genomic Techniques (NGTs). Currently available non-analytical approaches, such as blockchain traceability and probabilistic analysis, while potentially useful for monitoring, are insufficient for authorisation purposes. The lack of reliable DIQ methods hinders the authorisation of NGT products and raises concerns for both organic and conventional agriculture, where the presence of NGT products may go undetected. Therefore, the existing GMO regulatory framework requires re-evaluation to address the challenges posed by NGTs while ensuring compliance with the broader EU food law framework.
I introduce the basic notion of a statistical model, its identification and partial identification. I discuss axiomatic characterizations of the random utility model.
For simple prospects routinely used for certainty equivalent elicitation, random expected utility preferences imply a conditional expectation function that can mimic deterministic rank-dependent preferences. That is, a subject with random expected utility preferences can have expected certainty equivalents exactly like those predicted by rank-dependent probability weighting functions of the inverse-s shape discussed by Quiggin (J Econ Behav Organ 3:323–343, 1982) and advocated by Tversky and Kahneman (J Risk Uncertainty 5:297–323, 1992), Prelec (Econometrica 66:497–527, 1998) and other scholars. Certainty equivalents may not nonparametrically identify preferences: Their conditional expectation (and critically, their interpretation) depends on assumptions concerning the source of their variability.
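As a point of reference for the inverse-s shape discussed above, here is a minimal sketch of Prelec's one-parameter probability weighting function; the parameter value 0.65 is purely illustrative, not taken from any of the cited papers.

```python
import math

def prelec_weight(p, alpha=0.65):
    """Prelec (1998) one-parameter probability weighting function.

    For alpha < 1 it has the inverse-s shape: small probabilities are
    overweighted and large probabilities underweighted, with a fixed
    point at p = 1/e. The value alpha = 0.65 is purely illustrative.
    """
    if p <= 0.0:
        return 0.0
    if p >= 1.0:
        return 1.0
    return math.exp(-((-math.log(p)) ** alpha))

# Inverse-s shape: w(p) > p for small p, w(p) < p for large p
print(prelec_weight(0.05))  # overweighted relative to 0.05
print(prelec_weight(0.95))  # underweighted relative to 0.95
```

The point of the abstract is that expected certainty equivalents generated by random expected utility preferences can trace out a curve of exactly this shape, so observing it does not by itself identify rank-dependent preferences.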
Section 1.1 calls attention to the prevalent research practice that studies planning with incredible certitude. Section 1.2 contrasts the conceptions of uncertainty in consequentialist and axiomatic decision theory. Section 1.3 presents the formal structure of consequentialist theory, which is used throughout the book. Section 1.4 explains the prevalent econometric characterization of uncertainty, which distinguishes identification problems and statistical imprecision. Section 1.5 discusses the distinct perspectives on social welfare expressed in various strands of research on planning.
In this note, we prove that the three-parameter logistic (3PL) model with fixed-effect abilities is identified only up to a linear transformation of the ability scale under mild regularity conditions, contrary to the claims in Theorem 2 of San Martín et al. (Psychometrika, 80(2):450–467, 2015a).
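To make the scale indeterminacy concrete, the standard parameterization of the 3PL model can be written as (notation is illustrative and may differ from the paper's):

```latex
P(X_{ij}=1 \mid \theta_i) \;=\; c_j + (1-c_j)\,\frac{1}{1+\exp\{-a_j(\theta_i-b_j)\}} .
```

For any $\alpha > 0$ and $\beta$, the reparameterization $\theta_i^* = \alpha\theta_i + \beta$, $a_j^* = a_j/\alpha$, $b_j^* = \alpha b_j + \beta$ gives $a_j^*(\theta_i^* - b_j^*) = a_j(\theta_i - b_j)$, leaving every response probability, and hence the likelihood, unchanged. This is the sense in which the model is identified only up to a linear transformation of the ability scale.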
The notion of scale freeness does not seem to have been well understood in the factor analytic literature. It has been believed that if the loss function that is minimized to obtain estimates of the parameters in the factor model is scale invariant, then the estimates are scale free. It is shown that scale invariance of the loss function is neither a necessary nor a sufficient condition for scale freeness. A theorem that ensures scale freeness in the orthogonal factor model is given in this paper.
By reference to nominated attributes, a genus, being a population of objects of one specified kind, may be partitioned into species, being subpopulations of different kinds. A prototype is an object representative of its species within the genus. Using this framework, the paper describes how objects can be relatively differentiated with respect to attributes, and how attributes can be relatively differentiating with respect to objects. Methods and rationale for such differential ordering of objects and attributes are presented by example, formal development, and application.
For a genus Ω comprising n species of object there is a subset P of n distinct prototypes. With respect to m nominated attributes, each object in Ω has an m-element characterization. Together these determine an n × m objects × attributes matrix, the rows of which are the characterizations of the prototypical objects. Over the n species in Ω, an associated relative frequency vector gives the distribution of objects (and of their characterizations). The matrix and vector associate the objects in Ω with points in a metric space (P, δ); and it is with respect to various sums of distances in this attribute space that one can differentially order objects and attributes.
The definition of the distance function δ is generalized across kinds of difference, types of characterization, scale-types of measurement, Minkowski index ≧ 1, and any form of distribution of objects over species. Explanatory and taxonomic applications in psychology and other fields are discussed, with focus on classification, identification, recognition, and search. The Braille code and the identification of its characters provide illustration.
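A minimal sketch of the kind of attribute-space distance described above, assuming binary characterizations such as those of Braille characters; the data and parameter values are illustrative, not taken from the paper.

```python
def minkowski_distance(x, y, r=1.0):
    """Minkowski distance of index r >= 1 between two m-element
    attribute characterizations; r = 1 is the city-block metric,
    r = 2 the Euclidean. Inputs here are illustrative."""
    if r < 1.0:
        raise ValueError("Minkowski index must be >= 1")
    return sum(abs(a - b) ** r for a, b in zip(x, y)) ** (1.0 / r)

# Two hypothetical prototype characterizations over m = 3 binary attributes
p1, p2 = (1, 0, 1), (0, 1, 1)
print(minkowski_distance(p1, p2, r=1))  # city-block: 2.0
print(minkowski_distance(p1, p2, r=2))  # Euclidean: sqrt(2)
```

Summing such pairwise distances over the prototypes in P (weighted by the relative frequency vector) is what supports the differential ordering of objects and attributes described above.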
Conditions for removing the indeterminacy due to rotation are given for both the oblique and orthogonal factor analysis models. The conditions indicate why published counterexamples to the conditions discussed by Jöreskog are not identifiable.
Probabilistic models of same-different and identification judgments are compared (within each paradigm) with regard to their sensitivity to perceptual dependence or the degree to which the underlying psychological dimensions are correlated. Three same-different judgment models are compared. One is a step function or decision bound model and the other two are probabilistic variants of a similarity model proposed by Shepard. Three types of identification models are compared: decision bound models, a probabilistic multidimensional scaling model, and probabilistic models based on the Shepard-Luce choice rule. The decision bound models were found to be most sensitive to perceptual dependence, especially when there is considerable distributional overlap. The same-different model based on the city-block metric and an exponential decay similarity function, and the corresponding identification model were found to be particularly insensitive to perceptual dependence. These results suggest that if a Shepard-type similarity function accurately describes behavior, then under typical experimental conditions it should be difficult to see the effects of perceptual dependence. This result provides strong support for a perceptual independence assumption when using these models. These theoretical results may also play an important role in studying different decision rules employed at different stages of identification training.
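A minimal sketch of identification probabilities under the Shepard-Luce choice rule with an exponential decay similarity function, as referenced above; all numbers are illustrative.

```python
import math

def shepard_luce_probs(distances, biases=None):
    """Identification probabilities under the Shepard-Luce choice rule:
    similarity decays exponentially with psychological distance
    (Shepard), and the response probability is bias-weighted similarity
    normalized over the response alternatives (Luce)."""
    n = len(distances)
    biases = biases or [1.0] * n
    sims = [b * math.exp(-d) for d, b in zip(distances, biases)]
    total = sum(sims)
    return [s / total for s in sims]

# Illustrative distances from a presented stimulus to 3 response prototypes
probs = shepard_luce_probs([0.0, 1.0, 2.0])
print(probs)  # closest prototype gets the highest probability
```

Under the city-block metric, the distances fed into the exponential decay depend only on sums of coordinate-wise differences, which is one intuition for why this model family turns out to be insensitive to perceptual dependence.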
This paper presents some results on identification in multitrait-multimethod (MTMM) confirmatory factor analysis (CFA) models. Some MTMM models are not identified when the (factorial-patterned) loadings matrix is of deficient column rank. For at least one other MTMM model, identification does exist despite such deficiency. It is also shown that for some MTMM CFA models, Howe's (1955) conditions sufficient for rotational uniqueness can fail, yet the model may well be identified and rotationally unique. Implications of these results for CFA models in general are discussed.
We provide a framework for motivating and diagnosing the functional form in the structural part of nonlinear or linear structural equation models when the measurement model is a correctly specified linear confirmatory factor model. A mathematical population-based analysis provides asymptotic identification results for conditional expectations of a coordinate of an endogenous latent variable given exogenous and possibly other endogenous latent variables, and theoretically well-founded estimates of this conditional expectation are suggested. Simulation studies show that these estimators behave well compared to presently available alternatives. Practically, we recommend the estimator using Bartlett factor scores as input to classical non-parametric regression methods.
It is shown that problems of rotational equivalence of restricted factor loading matrices in orthogonal factor analysis are equivalent to problems of identification in simultaneous equations systems with covariance restrictions. A necessary (under a regularity assumption) and sufficient condition for local uniqueness is given and a counterexample is provided to a theorem by J. Algina concerning necessary and sufficient conditions for global uniqueness.
Multitrait-multimethod (MTMM) matrices are often analyzed by means of confirmatory factor analysis (CFA). However, fitting MTMM models often leads to improper solutions or non-convergence. Various alternative CFA models have been proposed to overcome these problems, but none of them eliminates improper solutions completely. In the present paper, an approach is proposed in which improper solutions are ruled out altogether and convergence is guaranteed. The approach is based on constrained variants of components analysis (CA). Besides yielding no improper solutions, these methods have the advantage of providing component scores, which can later be used to relate the components to external variables. The new methods are illustrated by means of simulated data as well as empirical data sets.
Defining equivalent models as those that reproduce the same set of covariance matrices, necessary and sufficient conditions are stated for the local equivalence of two expanded identified models M1 and M2 when fitting the more restricted model M0. Assuming several regularity conditions, the rank deficiency of the Jacobian matrix, composed of derivatives of the covariance elements with respect to the union of the free parameters of M1 and M2 (which characterizes model M12), is a necessary and sufficient condition for the local equivalence of M1 and M2. In practice, this condition is satisfied when the analysis dealing with the fitting of M0 predicts that the decreases in the chi-square goodness-of-fit statistic for the fitting of M1, M2, or M12 are all equal for any set of sample data, except for differences due to rounding errors.
This research concerns a mediation model, where the mediator model is linear and the outcome model is also linear but with a treatment–mediator interaction term and a residual correlated with the residual of the mediator model. Assuming the treatment is randomly assigned, parameters in this mediation model are shown to be partially identifiable. Under the normality assumption on the residuals of the mediator and the outcome, explicit full-information maximum likelihood estimates (FIMLE) of model parameters are introduced given the correlation between the residual for the mediator and the residual for the outcome. A consistent variance matrix of these estimates is derived. Currently, the coefficients of this mediation model are estimated using the iterative feasible generalized least squares (IFGLS) method originally developed for seemingly unrelated regressions (SURs). We argue that this mediation model is not a system of SURs. While the IFGLS estimates are consistent, their variance matrix is not. Theoretical comparisons of the FIMLE and IFGLS variance matrices are conducted. Our results are demonstrated by simulation studies and an empirical study. The FIMLE method has been implemented in the freely available R package iMediate.
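A common way to write the mediation model described above is (notation illustrative):

```latex
\begin{aligned}
M_i &= \alpha_0 + \alpha_1 T_i + \varepsilon_{Mi}, \\
Y_i &= \beta_0 + \beta_1 T_i + \beta_2 M_i + \beta_3 T_i M_i + \varepsilon_{Yi},
\end{aligned}
```

where $T_i$ is the randomized treatment and the residuals are allowed to correlate, $\operatorname{Corr}(\varepsilon_{Mi}, \varepsilon_{Yi}) = \rho$. Because $M_i$ appears as a regressor in the outcome equation, a nonzero $\rho$ makes it endogenous there, which is why the system is not a set of seemingly unrelated regressions.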
Recently, there has been a renewed interest in the four-parameter item response theory model as a way to capture guessing and slipping behaviors in responses. Research has shown, however, that the nested three-parameter model suffers from issues of unidentifiability (San Martín et al. in Psychometrika 80:450–467, 2015), which places concern on the identifiability of the four-parameter model. Borrowing from recent advances in the identification of cognitive diagnostic models, in particular, the DINA model (Gu and Xu in Stat Sin https://doi.org/10.5705/ss.202018.0420, 2019), a new model is proposed with restrictions inspired by this new literature to help with the identification issue. Specifically, we show conditions under which the four-parameter model is strictly and generically identified. These conditions inform the presentation of a new exploratory model, which we call the dyad four-parameter normal ogive (Dyad-4PNO) model. This model is developed by placing a hierarchical structure on the DINA model and imposing equality constraints on a priori unknown dyads of items. We present a Bayesian formulation of this model, and show that model parameters can be accurately recovered. Finally, we apply the model to a real dataset.
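For orientation, a standard four-parameter normal ogive item response function adds an upper asymptote for slipping to the lower asymptote for guessing; the notation below is illustrative and may differ from the Dyad-4PNO parameterization in the paper:

```latex
P(X_{ij}=1 \mid \theta_i) \;=\; c_j + (d_j - c_j)\,\Phi(a_j\theta_i - b_j),
```

where $\Phi$ is the standard normal distribution function, $c_j$ is the guessing (lower asymptote) parameter, and $d_j \le 1$ is the upper asymptote, so $1 - d_j$ is the slipping probability. Setting $d_j = 1$ recovers the three-parameter model whose identification problems motivated the restrictions discussed above.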
In this article, we present a general theorem and proof for the global identification of composed CFA models. They consist of identified submodels that are related only through covariances between their respective latent factors. Composed CFA models are frequently used in the analysis of multimethod data, longitudinal data, or multidimensional psychometric data. Firstly, our theorem enables researchers to reduce the problem of identifying the composed model to the problem of identifying the submodels and verifying the conditions given by our theorem. Secondly, we show that composed CFA models are globally identified if the primary models are reduced models such as the CT-C(M − 1) model or similar types of models. In contrast, composed CFA models that include non-reduced primary models can be globally underidentified for certain types of cross-model covariance assumptions. We discuss necessary and sufficient conditions for the global identification of arbitrary composed CFA models and provide a Python code to check the identification status for an illustrative example. The code we provide can be easily adapted to more complex models.