A penalized likelihood (PL) method for structural equation modeling (SEM) was proposed as a methodology for exploring the underlying relations among both observed and latent variables. Compared to the usual likelihood method, PL includes a penalty term to control the complexity of the hypothesized model. When the penalty level is appropriately chosen, the PL can yield an SEM model that balances model goodness-of-fit and model complexity. In addition, the PL results in a sparse estimate that enhances the interpretability of the final model. The proposed method is especially useful when limited substantive knowledge is available for model specifications. The PL method can also be understood as a methodology that links traditional SEM to exploratory SEM (Asparouhov & Muthén in Struct Equ Model Multidiscipl J 16:397–438, 2009). An expectation-conditional maximization algorithm was developed to maximize the PL criterion. The asymptotic properties of the proposed PL were also derived. The performance of PL was evaluated through a numerical experiment, and two real data illustrations were presented to demonstrate its utility in psychological research.
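As a sketch of the general idea, in generic notation rather than the authors' exact formulation, a penalized-likelihood criterion adds a sparsity-inducing term, here a lasso-type penalty with level λ, to the negative log-likelihood:

```latex
% Illustrative PL criterion: \ell(\theta) is the model log-likelihood,
% \lambda \ge 0 the penalty level, and the sum runs over the parameters
% subject to selection (e.g., cross-loadings); larger \lambda yields a
% sparser, more parsimonious model.
F_{\mathrm{PL}}(\theta) = -\,\ell(\theta) + \lambda \sum_{j} |\theta_j|
```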
Diagnostic classification models (DCMs) have seen wide applications in educational and psychological measurement, especially in formative assessment. DCMs in the presence of testlets have been studied in recent literature. A key ingredient in the statistical modeling and analysis of testlet-based DCMs is the superposition of two latent structures, the attribute profile and the testlet effect. This paper extends the standard testlet DINA (T-DINA) model to accommodate the potential correlation between the two latent structures. Model identifiability is studied and a set of sufficient conditions are proposed. As a byproduct, the identifiability of the standard T-DINA is also established. The proposed model is applied to a dataset from the 2015 Programme for International Student Assessment. Comparisons are made with DINA and T-DINA, showing that there is substantial improvement in terms of the goodness of fit. Simulations are conducted to assess the performance of the new method under various settings.
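For orientation, the DINA ingredients that the testlet extension builds on are sketched below in standard notation; the exact T-DINA parameterization of the testlet effect, and the proposed correlation between the two latent structures, are as given in the paper.

```latex
% DINA building blocks (standard notation; the paper's T-DINA adds a
% testlet effect and models its correlation with the attribute profile):
\eta_{ij} = \prod_{k} \alpha_{ik}^{\,q_{jk}}
% ideal response: 1 iff person i holds every attribute item j requires
P(X_{ij} = 1 \mid \alpha_i) = (1 - s_j)^{\eta_{ij}} \, g_j^{\,1 - \eta_{ij}}
% s_j = slip probability, g_j = guessing probability
```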
Researchers have widely used exploratory factor analysis (EFA) to learn the latent structure underlying multivariate data. Rotation and regularised estimation are two classes of methods in EFA that researchers often use to find interpretable loading matrices. In this paper, we propose a new family of oblique rotations based on component-wise $L^p$ loss functions $(0 < p \le 1)$ that is closely related to an $L^p$ regularised estimator. We develop model selection and post-selection inference procedures based on the proposed rotation method. When the true loading matrix is sparse, the proposed method tends to outperform traditional rotation and regularised estimation methods in terms of statistical accuracy and computational cost. Since the proposed loss functions are nonsmooth, we develop an iteratively reweighted gradient projection algorithm for solving the optimisation problem. We also develop theoretical results that establish the statistical consistency of the estimation, model selection, and post-selection inference. We evaluate the proposed method and compare it with regularised estimation and traditional rotation methods via simulation studies. We further illustrate it using an application to the Big Five personality assessment.
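A minimal sketch of a component-wise $L^p$ rotation criterion, in generic notation and assuming the loss is applied entry-wise to the rotated loading matrix:

```latex
% Component-wise L^p rotation criterion over oblique rotations T
% (generic notation): \lambda_{ij}(T) are entries of the rotated
% loading matrix; 0 < p \le 1 drives rotated loadings toward sparsity.
\min_{T \in \mathcal{T}} \; \sum_{i,j} \bigl| \lambda_{ij}(T) \bigr|^{p}
```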
A review of model-selection criteria is presented, with a view toward showing their similarities. It is suggested that some problems treated by sequences of hypothesis tests may be more expeditiously treated by the application of model-selection criteria. Consideration is given to application of model-selection criteria to some problems of multivariate analysis, especially the clustering of variables, factor analysis and, more generally, describing a complex of variables.
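A unifying observation behind such comparisons, stated here in generic notation as an illustration rather than the review's own formula, is that many criteria share a penalized-fit form and differ only in the penalty weight:

```latex
% Generic model-selection criterion: fit term plus complexity penalty.
% k = number of free parameters, n = sample size.
C = -2 \log \hat{L} + a_n k
% a_n = 2       -> AIC
% a_n = \log n  -> BIC
```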
A new model selection statistical test is proposed for testing the null hypothesis that two probability models fit the underlying data generating process (DGP) equally well. The new model selection test, called the Discrepancy Risk Model Selection Test (DRMST), extends previous work (see Vuong, 1989) on this problem in four distinct ways. First, generalized goodness-of-fit measures (which include log-likelihood functions) can be used. Second, unlike the classical likelihood ratio test, the models are not required to be fully nested, where the nesting concept is defined for generalized goodness-of-fit measures. Third, the DRMST differs from the likelihood ratio test by not requiring that either competing model provide a completely accurate representation of the DGP. And, fourth, the DRMST may be used to compare competing time-series models using correlated observations as well as data consisting of independent and identically distributed observations.
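For intuition, a simplified Vuong (1989)-style statistic, which the DRMST generalizes, can be computed from per-observation fit differences between two non-nested models. This sketch is illustrative and is not the DRMST itself:

```python
import numpy as np

def vuong_style_z(loglik_a, loglik_b):
    """Simplified Vuong-style z statistic from per-observation
    log-likelihood contributions of two competing models.
    Under the null of equal fit, z is asymptotically N(0, 1)
    (i.i.d. case; the DRMST also handles correlated observations)."""
    d = np.asarray(loglik_a) - np.asarray(loglik_b)  # per-obs differences
    n = d.size
    return np.sqrt(n) * d.mean() / d.std(ddof=1)

# Example with simulated contributions: model A fits slightly better.
rng = np.random.default_rng(0)
z = vuong_style_z(rng.normal(-1.0, 0.5, 500), rng.normal(-1.1, 0.5, 500))
print(z)  # compare to +/- 1.96 for a two-sided 5% test
```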
Model selection from a set of candidate models plays an important role in many structural equation modelling applications. However, traditional model selection methods introduce extra randomness that is not accounted for by post-model selection inference. In the current study, we propose a model averaging technique within the frequentist statistical framework. Instead of selecting an optimal model, the contributions of all candidate models are acknowledged. Valid confidence intervals and a $\chi^2$ test statistic are proposed. A simulation study shows that the proposed method is able to produce a robust mean-squared error, a better coverage probability, and a better goodness-of-fit test compared to model selection. It is an interesting compromise between model selection and the full model.
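One common frequentist averaging scheme, Akaike weights, illustrates the general idea of weighting all candidates rather than selecting one; the paper's actual weighting and inference procedure may differ:

```python
import numpy as np

def akaike_weights(aic_values):
    """Akaike weights: each candidate model's share of evidence.
    Illustrative of frequentist model averaging in general; not
    necessarily the weighting scheme proposed in the paper."""
    aic = np.asarray(aic_values, dtype=float)
    delta = aic - aic.min()          # AIC differences from the best model
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Averaging a parameter estimate reported by each candidate model:
aics = [1012.3, 1010.1, 1015.8]
estimates = [0.42, 0.45, 0.38]
w = akaike_weights(aics)
print(float(np.dot(w, estimates)))   # model-averaged estimate
```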
The strategy frequency estimation method (Dal Bó and Fréchette in Am Econ Rev 101(1):411-429, 2011; Fudenberg et al. in Am Econ Rev 102(2):720-749, 2012) allows us to estimate the fraction of subjects playing each of a list of strategies in an infinitely repeated game. Currently, this method assumes that subjects tremble with the same probability. This paper extends the method so that subjects’ trembles can be heterogeneous. Out of 60 ex ante plausible specifications, the selected model uses the six strategies described in Dal Bó and Fréchette (2018) and allows the distribution of trembles to vary by strategy.
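In outline, and in generic notation that may differ from the paper's, the likelihood is a finite mixture over strategies; the extension lets the tremble probability depend on the strategy:

```latex
% Illustrative mixture likelihood for an infinitely repeated game with a
% binary action set: \phi_k = fraction playing strategy k, s_k(h_{it}) =
% the action strategy k prescribes at history h_{it}. Strategy-specific
% trembles \gamma_k (the paper's extension) replace a common \gamma.
L = \prod_{i} \sum_{k} \phi_k \prod_{t}
    (1 - \gamma_k)^{\mathbb{1}\{a_{it} = s_k(h_{it})\}}
    \, \gamma_k^{\,\mathbb{1}\{a_{it} \neq s_k(h_{it})\}}
```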
During the last fifteen years, Akaike's entropy-based Information Criterion (AIC) has had a fundamental impact in statistical model evaluation problems. This paper studies the general theory of the AIC procedure and provides its analytical extensions in two ways without violating Akaike's main principles. These extensions make AIC asymptotically consistent and penalize overparameterization more stringently to pick only the simplest of the “true” models. These selection criteria are called CAIC and CAICF. Asymptotic properties of AIC and its extensions are investigated, and empirical performances of these criteria are studied in choosing the correct degree of a polynomial model in two different Monte Carlo experiments under different conditions.
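For reference, AIC and Bozdogan's consistent variant CAIC are shown below in standard notation; CAICF additionally involves the estimated Fisher information, and its exact form is given in the paper.

```latex
\mathrm{AIC}  = -2 \log \hat{L} + 2k
\mathrm{CAIC} = -2 \log \hat{L} + k(\log n + 1)
% k = number of estimated parameters, n = sample size; the (\log n + 1)
% penalty grows with n, which yields the asymptotic consistency noted above.
```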
A maximum likelihood estimation procedure is developed for multidimensional scaling when (dis)similarity measures are taken by ranking procedures such as the method of conditional rank orders or the method of triadic combinations. The central feature of these procedures may be termed directionality of ranking processes. That is, rank orderings are performed in a prescribed order by successive first choices. Such data have conventionally been analyzed by Shepard-Kruskal-type nonmetric multidimensional scaling procedures. We propose, as a more appropriate alternative, a maximum likelihood method specifically designed for this type of data. A broader perspective on the present approach is given, which encompasses a wide variety of experimental methods for collecting dissimilarity data, including pair comparison methods (such as the method of tetrads) and the pick-M method of similarities. An example is given to illustrate various advantages of nonmetric maximum likelihood multidimensional scaling as a statistical method. At the moment the approach is limited to the case of one-mode two-way proximity data, but it could be extended in a relatively straightforward way to two-mode two-way, two-mode three-way, or even three-mode three-way data, under the assumption of such models as INDSCAL or the two- or three-way unfolding models.
Model selection is a popular strategy in structural equation modeling (SEM). To select an “optimal” model, many selection criteria have been proposed. In this study, we derive the asymptotics of several popular selection procedures in SEM, including AIC, BIC, the RMSEA, and a two-stage rule for the RMSEA (RMSEA-2S). All of the results are derived under weak distributional assumptions and can be applied to a wide class of discrepancy functions. The results show that both AIC and BIC asymptotically select a model with the smallest population minimum discrepancy function (MDF) value regardless of nested or non-nested selection, but only BIC could consistently choose the most parsimonious one under nested model selection. When there are many non-nested models attaining the smallest MDF value, the consistency of BIC for the most parsimonious one fails. On the other hand, the RMSEA asymptotically selects a model that attains the smallest population RMSEA value, and the RMSEA-2S chooses the most parsimonious model from all models with the population RMSEA smaller than the pre-specified cutoff. The empirical behavior of the considered criteria is also illustrated via four numerical examples.
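For orientation, one standard sample estimator of the RMSEA is shown below; the paper's asymptotic results work with population analogues under general discrepancy functions.

```latex
% Sample RMSEA: \chi^2 is the model test statistic with df degrees of
% freedom and sample size N; the population RMSEA replaces the
% numerator with the population MDF value.
\widehat{\mathrm{RMSEA}} = \sqrt{\max\!\left( \frac{\chi^2 - df}{df\,(N-1)},\; 0 \right)}
```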
Several hierarchical classes models can be considered for the modeling of three-way three-mode binary data, including the INDCLAS model (Leenen, Van Mechelen, De Boeck, and Rosenberg, 1999), the Tucker3-HICLAS model (Ceulemans, Van Mechelen, and Leenen, 2003), the Tucker2-HICLAS model (Ceulemans and Van Mechelen, 2004), and the Tucker1-HICLAS model that is introduced in this paper. Two questions then may be raised: (1) how are these models interrelated, and (2) given a specific data set, which of these models should be selected, and in which rank? In the present paper, we deal with these questions by (1) showing that the distinct hierarchical classes models for three-way three-mode binary data can be organized into a partially ordered hierarchy, and (2) by presenting model selection strategies based on extensions of the well-known scree test and on the Akaike information criterion. The latter strategies are evaluated by means of an extensive simulation study and are illustrated with an application to interpersonal emotion data. Finally, the presented hierarchy and model selection strategies are related to corresponding work by Kiers (1991) for principal component models for three-way three-mode real-valued data.
This chapter introduces the class of autoregressive moving average models and discusses their properties in special cases and in general. We provide alternative methods for the estimation of unknown parameters and describe the properties of the estimators. We discuss key issues like hypothesis testing and model selection.
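The defining equation of the ARMA(p, q) class discussed in the chapter, in standard notation:

```latex
% ARMA(p, q): the series depends on its own past values and on current
% and past white-noise shocks \varepsilon_t with variance \sigma^2.
X_t = c + \sum_{i=1}^{p} \phi_i X_{t-i} + \varepsilon_t + \sum_{j=1}^{q} \theta_j \varepsilon_{t-j},
\qquad \varepsilon_t \sim \mathrm{WN}(0, \sigma^2)
```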
We first discuss a phenomenon called data mining. This can involve multiple tests on which variables or correlations are relevant. If used improperly, data mining may be associated with scientific misconduct. Next, we discuss one way to arrive at a single final model, involving stepwise methods. We see that various stepwise methods lead to different final models. Next, we see that various configurations in test situations, here illustrated for testing for cointegration, lead to different outcomes. It may be possible to see which configurations make the most sense and can be used for empirical analysis. However, we suggest that it is better to keep various models and somehow combine inferences. This is illustrated by an analysis of the losses in airline revenues in the United States owing to 9/11. We see that, out of four different models, three estimate a similar loss, while the fourth model suggests only 10 percent of that figure. We argue that it is better to maintain various models, that is, models that withstand various diagnostic tests, for inference and for forecasting, and to combine what can be learned from them.
Chapter 17 covers two-way interactions in multiple regression and includes the following specific topics, among others: two-way interaction, first-order effects, main effects, interaction effects, model selection, AIC, BIC, and probing interactions.
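The basic model underlying these topics is the regression with a product term, in which the first-order coefficients become conditional effects:

```latex
% Two-way interaction in multiple regression:
Y = b_0 + b_1 X + b_2 Z + b_3 XZ + e
% With the product term included, b_1 is the simple (conditional) effect
% of X at Z = 0; probing the interaction examines the simple slope
% b_1 + b_3 Z at chosen values of Z.
```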
Modelling a neural system involves selecting the mathematical form of the model’s components, such as neurons, synapses and ion channels, and assigning values to the model’s parameters. These choices may be guided by the known biology, by fitting a suitable function to data, or by computational simplicity. Only a few parameter values may be available through existing experimental measurements or computational models. It will then be necessary to estimate parameters from experimental data or through optimisation of model output. Here we outline the many mathematical techniques available. We discuss how to specify suitable criteria against which a model can be optimised. For many models, ranges of parameter values may provide equally good outcomes against performance criteria. Exploring the parameter space can lead to valuable insights into how particular model components contribute to particular patterns of neuronal activity. It is important to establish the sensitivity of the model to particular parameter values.
We can easily find ourselves with lots of predictors. This situation has been common in ecology and environmental science but has spread to other biological disciplines as genomics, proteomics, metabolomics, etc., become widespread. Models can become very complex, and with many predictors, collinearity is more likely. Fitting the models is tricky, particularly if we’re looking for the “best” model, and the way we approach the task depends on how we’ll use the model results. This chapter describes different model selection approaches for multiple regression models and discusses ways of measuring the importance of specific predictors. It covers stepwise procedures, all subsets, information criteria, model averaging and validation, and introduces regression trees, including boosted trees.
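A minimal sketch of one approach the chapter covers, all-subsets selection by AIC for a linear model, assuming Gaussian errors; a real analysis would also consider validation and model averaging as described:

```python
import itertools
import numpy as np

def aic_linear(X, y):
    """Gaussian AIC (up to a constant) for OLS with design matrix X."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    return n * np.log(rss / n) + 2 * (k + 1)   # +1 for the error variance

def best_subset(X, y, names):
    """Enumerate all predictor subsets and return the AIC-best one."""
    n = len(y)
    best = (np.inf, ())
    for r in range(len(names) + 1):
        for subset in itertools.combinations(range(len(names)), r):
            design = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
            best = min(best, (aic_linear(design, y), subset))
    return best[0], [names[j] for j in best[1]]

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(size=200)
print(best_subset(X, y, ["x1", "x2", "x3", "x4"]))
```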
In this chapter we introduce Bayesian inference and use it to extend the frequentist models of the previous chapters. To do this, we describe the concept of model priors, informative priors, uninformative priors, and conjugate prior-likelihood pairs. We then discuss Bayesian updating rules for using priors and likelihoods to obtain posteriors. Building upon priors and posteriors, we then describe more advanced concepts including predictive distributions, Bayes factors, expectation maximization to obtain maximum posterior estimators, and model selection. Finally, we present hierarchical Bayesian models, Markov blankets, and graphical representations. We conclude with a case study on change point detection.
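The updating rule at the heart of the chapter, shown here with the standard Beta-Binomial conjugate pair as a concrete instance:

```latex
% Bayes' rule: the posterior is proportional to likelihood times prior.
p(\theta \mid x) = \frac{p(x \mid \theta)\, p(\theta)}{p(x)}
% Conjugate pair example: a Beta(a, b) prior on a success probability,
% updated after s successes in n trials, remains in the Beta family:
\theta \sim \mathrm{Beta}(a, b) \;\Rightarrow\; \theta \mid x \sim \mathrm{Beta}(a + s,\; b + n - s)
```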
The “No Miracle Argument” for scientific realism contends that the only plausible explanation for the predictive success of scientific theories is their truthlikeness, but doesn’t specify what ‘truthlikeness’ means. I argue that if we understand ‘truthlikeness’ in terms of Kullback-Leibler (KL) divergence, the resulting realist thesis (RKL) is a plausible explanation for science’s success. Still, RKL probably falls short of the realist’s ideal. I argue, however, that the strongest version of realism that the argument can plausibly establish is RKL. The realist needs another argument for establishing a stronger realist thesis.
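For readers unfamiliar with the measure, the KL divergence of a theory's implied distribution q from the true distribution p, the sense of “distance from truth” at issue here, is defined as:

```latex
% KL divergence of q from p (discrete case): non-negative, and zero
% iff q = p; under RKL, smaller divergence means greater truthlikeness.
D_{\mathrm{KL}}(p \,\|\, q) = \sum_{x} p(x) \log \frac{p(x)}{q(x)}
```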
Progress in the computational cognitive sciences depends critically on model evaluation. This chapter provides an accessible description of key considerations and methods important in model evaluation, with special emphasis on evaluation in the forms of validation, comparison, and selection. Major sub-topics include qualitative and quantitative validation, parameter estimation, cross-validation, goodness of fit, and model mimicry. The chapter includes definitions of an assortment of key concepts, relevant equations, and descriptions of best practices and important considerations in the use of these model evaluation methods. The chapter concludes with important high-level considerations regarding emerging directions and opportunities for continuing improvement in model evaluation.
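As a concrete instance of the cross-validation procedure the chapter covers, a minimal model-agnostic k-fold sketch; `fit` and `loss` are hypothetical placeholder callables, not any particular library's API:

```python
import numpy as np

def k_fold_cv(X, y, fit, loss, k=5, seed=0):
    """Estimate out-of-sample loss by k-fold cross-validation.
    `fit(X_train, y_train)` returns a predictor f; `loss(f(X_val), y_val)`
    scores it. Both are placeholders for any model and error measure."""
    n = len(y)
    idx = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(idx, k)
    scores = []
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        f = fit(X[train], y[train])              # train on k-1 folds
        scores.append(loss(f(X[val]), y[val]))   # score on held-out fold
    return float(np.mean(scores))
```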