
A Spectral Method for Identifiable Grade of Membership Analysis with Binary Responses

Published online by Cambridge University Press:  27 December 2024

Ling Chen
Affiliation:
Columbia University
Yuqi Gu*
Affiliation:
Columbia University
*
Correspondence should be made to Yuqi Gu, Department of Statistics, Columbia University, New York, NY 10027, USA. Email: [email protected]

Abstract

Grade of membership (GoM) models are popular individual-level mixture models for multivariate categorical data. GoM allows each subject to have mixed memberships in multiple extreme latent profiles. Therefore, GoM models have a richer modeling capacity than latent class models that restrict each subject to belong to a single profile. The flexibility of GoM comes at the cost of more challenging identifiability and estimation problems. In this work, we propose a singular value decomposition (SVD)-based spectral approach to GoM analysis with multivariate binary responses. Our approach hinges on the observation that the expectation of the data matrix has a low-rank decomposition under a GoM model. For identifiability, we develop sufficient and almost necessary conditions for a notion of expectation identifiability. For estimation, we extract only a few leading singular vectors of the observed data matrix and exploit the simplex geometry of these vectors to estimate the mixed membership scores and other parameters. We also establish the consistency of our estimator in the double-asymptotic regime where both the number of subjects and the number of items grow to infinity. Our spectral method has a huge computational advantage over Bayesian or likelihood-based methods and is scalable to large-scale and high-dimensional data. Extensive simulation studies demonstrate the superior efficiency and accuracy of our method. We also illustrate our method by applying it to a personality test dataset.

Type
Theory & Methods
Copyright
Copyright © 2024 The Author(s), under exclusive licence to The Psychometric Society

Multivariate categorical data are routinely collected in various social and behavioral sciences, such as psychological tests (Chen et al., Reference Chen, Li and Zhang2019), educational assessments (Shang et al., Reference Shang, Erosheva and Xu2021), and political surveys (Chen et al., Reference Chen, Ying and Zhang2021b). In these applications, it is often of great interest to use latent variables to model the unobserved constructs such as personalities, abilities, political ideologies, etc. Popular latent variable models for multivariate categorical data include the item response theory models (IRT; Embretson & Reise Reference Embretson and Reise2013) and latent class models (LCM; Hagenaars & McCutcheon Reference Hagenaars and McCutcheon2002), which employ continuous and discrete latent variables, respectively. Different from these modeling approaches, the grade of membership (GoM) models (Woodbury et al., Reference Woodbury, Clive and Garson1978; Erosheva, Reference Erosheva2002; Erosheva, Reference Erosheva2005) allow each observation to have mixed memberships in multiple extreme latent profiles. GoM models assume that each observation has a latent membership vector with K continuous membership scores that sum up to one. Each membership score quantifies the extent to which this observation belongs to each of K extreme profiles. So GoM can be viewed as incorporating both the continuous aspect (via the membership scores) and discrete aspect (via the K extreme latent profiles) of latent variables. More generally, GoM belongs to the broad family of mixed membership models for individual-level mixtures (Airoldi et al., Reference Airoldi, Blei, Erosheva and Fienberg2014). 
Thanks to their nice interpretability and rich expressive power, variants of mixed membership models including GoM are widely used in many applications such as survey data modeling (Erosheva et al., Reference Erosheva, Fienberg and Joutard2007), response time modeling (Pokropek, Reference Pokropek2016), topic modeling (Blei et al., Reference Blei, Ng and Jordan2003), social networks (Airoldi et al., Reference Airoldi, Blei, Fienberg and Xing2008), and data privacy (Manrique-Vallier & Reiter, Reference Manrique-Vallier and Reiter2012).

The flexibility of GoM models comes at the cost of more challenging identifiability and estimation problems. In the existing literature on GoM model estimation, Bayesian inference using Markov chain Monte Carlo (MCMC) is perhaps the most prevailing approach (Erosheva, Reference Erosheva2002; Erosheva et al., Reference Erosheva, Fienberg and Joutard2007; Gormley & Murphy, Reference Gormley and Murphy2009; Gu et al., Reference Gu, Erosheva, Xu and Dunson2023). However, the posterior distributions of the GoM model parameters are complicated after integrating out the individual-level membership scores. Many studies have developed advanced MCMC algorithms for approximate posterior computation, yet MCMC sampling is time-consuming and typically not computationally efficient. On the other hand, the frequentist estimation approach of marginal maximum likelihood (MML) brings a similar challenge, because the marginal likelihood still involves intractable integrals over the latent membership scores. Indeed, Borsboom et al. (Reference Borsboom, Rhemtulla, Cramer, van der Maas, Scheffer and Dolan2016) pointed out that GoM models are very useful for identifying meaningful profiles in applications including depression and personality disorders, but that they are currently not widely used in psychometrics due to the lack of readily accessible and efficient statistical software.

Recently, the R package sirt (Robitzsch & Robitzsch, Reference Robitzsch and Robitzsch2022) has provided a joint maximum likelihood (JML) algorithm for GoM models based on the iterative estimation method proposed in Erosheva (Reference Erosheva2002). In contrast with MML, the JML approach treats the subjects' latent membership scores as fixed unknown parameters rather than random quantities, and hence circumvents the need to evaluate the intractable integrals during estimation. JML is currently considered the most efficient tool for estimating GoM models. However, due to its iterative nature, JML's efficiency is still unsatisfactory when applied to very large-scale data with many observations and many items. Therefore, it is desirable to develop more scalable and non-iterative estimation methods to aid psychometric researchers and practitioners in performing GoM analysis of modern item response data.

In addition to the difficulty of estimation, model identifiability is also a challenging issue for GoM models. A model is identifiable if the model parameters can be reliably recovered from the observed data. Identifiability is crucial to ensuring valid statistical estimation as well as meaningful interpretation of the inferred latent structures. The handbook of Airoldi et al. (Reference Airoldi, Blei, Erosheva and Fienberg2014) emphasizes theoretical difficulties of identifiability in mixed membership models, including GoM models. Recently, recognizing the difficulty of establishing identifiability of GoM models, Gu et al. (Reference Gu, Erosheva, Xu and Dunson2023) proposed to incorporate a dimension-grouping modeling component into GoM and established population identifiability for this new model. However, their identifiability results do not apply to the original GoM. In addition, their identifiability notion only concerns the population parameters in the model and excludes the individual-level latent membership scores.

To address the aforementioned issues, we propose a novel singular value decomposition (SVD)-based spectral approach to GoM analysis with multivariate binary data. Our approach hinges on the observation that the expectation of the response matrix admits a low-rank decomposition under GoM. Our contributions are three-fold. First, we consider a notion of expectation identifiability and establish identifiability for GoM models with binary responses. Under this new notion, the identifiable quantities include not only the population parameters, but also the individual membership scores that indicate the grades of memberships. Specifically, we derive sufficient conditions that are almost necessary for identifiability. Second, based on our new identifiability results, we propose an SVD-based spectral estimation method scalable to large-scale and high-dimensional data. Third, we establish the consistency of our spectral estimator in the double-asymptotic regime where both the number of subjects N and the number of items J grow to infinity. Both the population parameters and the individual membership scores can be consistently estimated on average. In the simulation studies, we empirically verify the identifiability results and also demonstrate the superior efficiency and accuracy of our algorithm. A real data example also illustrates that meaningful interpretation can be drawn after applying our proposed method.

The rest of the paper is structured as follows. Section 1 introduces the model setup and lays out the motivation for this work. Section 2 presents the identifiability results. Section 3 proposes a spectral estimation algorithm and establishes its consistency. Section 4 conducts simulation studies to assess the performance of the proposed method and empirically verify the identifiability results. Section 5 illustrates the proposed method using a real data example from a psychological test. Finally, Sect. 6 concludes the paper and discusses future research directions. The proofs of the identifiability results are included in the Appendix.

1. Model Setup and Motivation

GoM models can be used to model multivariate categorical data with a mixed membership structure. In this work, we focus on multivariate binary responses, which are very commonly encountered in social, behavioral, and biomedical applications, including yes/no responses in social science surveys (Erosheva et al., Reference Erosheva, Fienberg and Joutard2007; Chen et al., Reference Chen, Ying and Zhang2021b), correct/wrong answers in educational assessments (Shang et al., Reference Shang, Erosheva and Xu2021), and presence/absence of symptoms in medical diagnosis (Woodbury et al., Reference Woodbury, Clive and Garson1978). We point out that our identifiability results and spectral estimation method in the binary case will illuminate the key structure of GoM and pave the way for generalizing to the general categorical response case. We will briefly discuss the possibility of such extensions in Sect. 6. In our binary response setting, denote the number of items by $J$. For a random subject $i$, denote his or her observed response to the $j$-th item by $R_{ij}\in \{0,1\}$ for $j=1,\dots, J$.

A GoM model is characterized by two levels of modeling: the population level and the individual level. On the population level, $K$ extreme latent profiles are defined to capture a finite number of prototypical response patterns. For $k \in \{1, \dots, K\}$, the $k$-th extreme latent profile is characterized by the item parameter vector $\boldsymbol{\theta}_{k}=(\theta_{1k},\dots, \theta_{Jk})$ with $\theta_{jk}\in [0,1]$, collecting the Bernoulli parameters of conditional response probabilities. Specifically,

(1) $$\theta_{jk} = \mathbb{P}(R_{ij}=1 \mid \text{subject } i \text{ solely belongs to the } k\text{-th extreme latent profile}).$$

We collect all the item-level Bernoulli parameters in a $J\times K$ matrix $\boldsymbol{\Theta}=(\theta_{jk})\in \mathbb{R}^{J\times K}$. On the individual level, each subject $i$ has a latent membership vector $\boldsymbol{\pi}_i=(\pi_{i1},\dots,\pi_{iK})$, satisfying $\pi_{ik}\ge 0$ and $\sum_{k=1}^K \pi_{ik}=1$. Here $\pi_{ik}$ indicates the extent to which subject $i$ partially belongs to the $k$-th extreme profile; the $\pi_{ik}$ are called membership scores.
It is now instructive to compare the assumption of the GoM model with that of the latent class model (LCM; Goodman Reference Goodman1974; Hagenaars & McCutcheon Reference Hagenaars and McCutcheon2002). Both GoM and LCM share a similar formulation of the item parameters $\boldsymbol{\Theta}$ as defined in (1). However, under an LCM, each subject $i$ is associated with a categorical variable $z_i\in [K]$ instead of a membership vector. This means the LCM restricts each subject to belong solely to a single profile, as opposed to partially belonging to multiple profiles in the GoM. Further, the fundamental representation theorem in Erosheva et al. (Reference Erosheva, Fienberg and Joutard2007) shows that a GoM model can be reformulated as an LCM with $K^J$ latent classes instead of $K$ latent classes. In summary, GoM is a more general and flexible tool than LCM for modeling multivariate data, but it also exhibits a more complicated model structure. It is therefore more challenging to establish identifiability and perform estimation under the GoM setting.

Given the membership score vector $\boldsymbol{\pi}_i$ and the item parameters $\boldsymbol{\Theta}$, the conditional probability of the $i$-th subject providing a positive response to the $j$-th item is

(2) $$\mathbb{P}(R_{ij}=1\mid \boldsymbol{\pi}_i, \boldsymbol{\Theta}) = \sum_{k=1}^K \pi_{ik}\theta_{jk}.$$

In other words, the positive response probability for a subject to an item is a convex combination of the extreme profile response probabilities $\theta_{jk}$ weighted by the subject's membership scores $\pi_{ik}$. The GoM model assumes that a subject's responses $R_{i1},\dots, R_{iJ}$ are conditionally independent given the membership scores $\boldsymbol{\pi}_i$. We consider a sample of $N$ i.i.d. subjects and collect all the membership scores in an $N\times K$ matrix $\boldsymbol{\Pi}=(\pi_{ik})\in \mathbb{R}^{N\times K}$.
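The data-generating process above can be sketched in a few lines of numpy. This is an illustrative simulation only; the dimensions `N`, `J`, `K` and the uniform prior on the item parameters are hypothetical choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration.
N, J, K = 500, 20, 3

# Individual-level membership scores: each row of Pi lies in the simplex.
Pi = rng.dirichlet(alpha=np.ones(K), size=N)   # N x K

# Item-level Bernoulli parameters for the K extreme latent profiles.
Theta = rng.uniform(0.1, 0.9, size=(J, K))     # J x K

# Conditional positive-response probabilities, Eq. (2): P = Pi Theta^T.
P = Pi @ Theta.T                               # N x J

# Binary responses, conditionally independent given the memberships.
R = rng.binomial(1, P)                         # N x J, entries in {0,1}
```

Drawing the rows of `Pi` from a symmetric Dirichlet distribution matches the common random-effect choice discussed below, but any distribution on the simplex would do for this sketch.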

In the GoM modeling literature (e.g., Erosheva Reference Erosheva2002), there are two perspectives on treating the membership scores $\boldsymbol{\pi}_i$: the random-effect and the fixed-effect perspectives. The random-effect perspective treats the $\boldsymbol{\pi}_i$ as random and assumes that they follow some distribution parameterized by $\boldsymbol{\alpha}$: $\boldsymbol{\pi}_i \sim D_{\boldsymbol{\alpha}}(\cdot)$. Note that $\boldsymbol{\pi}_i\in \Delta_{K-1} = \{\mathbf{x}=(x_1,\dots, x_K): x_k\ge 0,\ \sum_{k=1}^K x_k=1\}$, where $\Delta_{K-1}$ denotes the probability simplex. A common choice of the distribution $D_{\boldsymbol{\alpha}}$ for $\boldsymbol{\pi}_i$ is the Dirichlet distribution (Blei et al., Reference Blei, Ng and Jordan2003).

From the above random-effect perspective, the marginal likelihood function for a GoM model is

(3) $$L(\boldsymbol{\Theta}, \boldsymbol{\alpha}\mid \mathbf{R}) = \prod_{i=1}^N \int_{\Delta_{K-1}} \prod_{j=1}^J \left( \sum_{k=1}^K\pi_{ik}\theta_{jk}\right)^{R_{ij}} \left( 1-\sum_{k=1}^K\pi_{ik}\theta_{jk}\right)^{1-R_{ij}} \, d D_{\boldsymbol{\alpha}}(\boldsymbol{\pi}_i),$$

where the membership scores $\boldsymbol{\pi}_i$ are marginalized out with respect to their distribution $D_{\boldsymbol{\alpha}}$. The integration of products of sums in (3) poses challenges for establishing identifiability. This difficulty motivated Gu et al. (Reference Gu, Erosheva, Xu and Dunson2023) to introduce a dimension-grouping component to simplify the integrals and then prove identifiability for that new model. In terms of estimation, the marginal maximum likelihood (MML) approach maximizes (3) to estimate the population parameters $(\boldsymbol{\Theta},\boldsymbol{\alpha})$ rather than the individual membership scores $\boldsymbol{\Pi}$. Bayesian inference with MCMC is also often used for estimation (Erosheva et al., Reference Erosheva, Fienberg and Joutard2007; Manrique-Vallier & Reiter, Reference Manrique-Vallier and Reiter2012; Gu et al., Reference Gu, Erosheva, Xu and Dunson2023), where inferring parameters such as $\boldsymbol{\alpha}$ in the Dirichlet distribution $D_{\boldsymbol{\alpha}}(\cdot)$ typically requires Metropolis–Hastings sampling.
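To make the intractability of (3) concrete, here is a minimal sketch of a plain Monte Carlo approximation of the marginal log-likelihood under the Dirichlet choice of $D_{\boldsymbol{\alpha}}$. The function name and the number of draws `n_mc` are hypothetical; this crude averaging is only workable for small $J$ and is not the paper's estimation method.

```python
import numpy as np

def marginal_loglik_mc(R, Theta, alpha, n_mc=2000, seed=0):
    """Plain Monte Carlo approximation of the marginal log-likelihood (3):
    for each subject i, average the conditional likelihood over draws
    pi ~ Dirichlet(alpha) and sum the logs across subjects."""
    rng = np.random.default_rng(seed)
    N, J = R.shape
    Pi_mc = rng.dirichlet(alpha, size=n_mc)   # n_mc x K draws from the simplex
    P_mc = Pi_mc @ Theta.T                    # n_mc x J success probabilities
    total = 0.0
    for i in range(N):
        # Conditional likelihood of subject i's responses under each draw;
        # products over J underflow quickly, so this only suits small J.
        lik = np.prod(np.where(R[i] == 1, P_mc, 1.0 - P_mc), axis=1)
        total += np.log(lik.mean())
    return total
```

The integral over $\Delta_{K-1}$ must be approximated separately for every one of the $N$ subjects, which illustrates why MML and MCMC for GoM become expensive as $N$ and $J$ grow.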

On the other hand, the fixed-effect perspective of GoM treats the membership scores $\boldsymbol{\pi}_i$ as fixed unknown parameters and aims to directly estimate them. This approach does not model the distribution of the membership scores and hence circumvents the need to evaluate intractable integrals during estimation. In this case, still adopting the likelihood framework, the joint likelihood function of both $\boldsymbol{\Theta}$ and $\boldsymbol{\Pi}$ for a GoM model is

(4) $$L(\boldsymbol{\Pi}, \boldsymbol{\Theta}\mid \mathbf{R}) = \prod_{i=1}^N\prod_{j=1}^J \left( \sum_{k=1}^K\pi_{ik}\theta_{jk}\right)^{R_{ij}} \left( 1-\sum_{k=1}^K\pi_{ik}\theta_{jk}\right)^{1-R_{ij}}.$$

The joint maximum likelihood (JML) approach maximizes (4) to estimate $\boldsymbol{\Theta}$ and $\boldsymbol{\Pi}$. Based on an iterative algorithm proposed by Erosheva (Reference Erosheva2002), the R package sirt (Robitzsch & Robitzsch, Reference Robitzsch and Robitzsch2022) provides a JML routine to solve this optimization problem under GoM. JML estimators are typically inconsistent for many traditional models (Neyman & Scott, Reference Neyman and Scott1948) when the sample size goes to infinity (large $N$) but the number of observed variables is finite (fixed $J$). Nonetheless, in modern large-scale assessments or surveys where data collection is unprecedentedly big and high-dimensional, both $N$ and $J$ can be quite large. JML is currently considered the most efficient tool for estimating GoM models. However, due to its iterative nature, JML's efficiency remains unsatisfactory when applied to modern big datasets with many observations and many items. Therefore, it is desirable to develop more scalable and non-iterative estimation methods to aid psychometric researchers and practitioners in performing GoM analysis of item response data.
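For contrast with (3), the joint log-likelihood implied by (4) has no integral and can be evaluated in closed form. The following sketch (function name and clipping constant are our own choices, not from the paper) computes it as a sum of Bernoulli log-densities with success probabilities $(\boldsymbol{\Pi}\boldsymbol{\Theta}^{\top})_{ij}$:

```python
import numpy as np

def joint_loglik(R, Pi, Theta, eps=1e-12):
    """Joint log-likelihood, the log of Eq. (4), treating the membership
    scores Pi as fixed parameters: sum over all (i, j) of the Bernoulli
    log-density with success probability (Pi Theta^T)_{ij}."""
    # Clip to avoid log(0) when a fitted probability hits the boundary.
    P = np.clip(Pi @ Theta.T, eps, 1.0 - eps)   # N x J
    return np.sum(R * np.log(P) + (1 - R) * np.log(1.0 - P))
```

JML alternates updates of `Pi` and `Theta` to increase this objective; each evaluation is cheap, but the iteration must touch all $NK + JK$ parameters repeatedly, which is what the non-iterative spectral method below avoids.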

To address the above issues in GoM analysis, in this work we propose a novel singular value decomposition (SVD)-based spectral approach. Our approach hinges on the observation that the expectation of the response matrix under a GoM model admits a low-rank decomposition. To see this, it is useful to summarize (2) in matrix form:

(5) $$\mathbf{R}_0 := \mathbb{E}[\mathbf{R}] = \underbrace{\boldsymbol{\Pi}}_{N\times K}\, \underbrace{\boldsymbol{\Theta}^{\top}}_{K\times J},$$

where R=(Rij){0,1}N×J \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\textbf{R}=(R_{ij})\in \{0,1\}^{N\times J}$$\end{document} denotes the binary response data matrix, and R0=(R0,ij)RN×J \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\textbf{R}_0=(R_{0,ij})\in {\mathbb {R}}^{N\times J}$$\end{document} is the element-wise expectation of R \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\textbf{R}$$\end{document} . Note that the factorization in (5) implies that the N×J \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$N\times J$$\end{document} matrix R0 \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\textbf{R}_0$$\end{document} has rank at most K, which is the number of extreme latent profiles. Since K is typically (much) smaller than N and J, the decomposition (5) exhibits a low-rank structure. 
Therefore, we can consider the singular value decomposition (SVD) of $\mathbf{R}_0$:

(6) $\mathbf{R}_0=\mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^{\top},$

where $\boldsymbol{\Sigma}$ is a $K\times K$ diagonal matrix collecting the $K$ singular values of $\mathbf{R}_0$; denote these singular values by $\sigma_1\ge \cdots \ge \sigma_K\ge 0$ and write $\boldsymbol{\Sigma}=\operatorname{diag}(\sigma_1,\dots,\sigma_K)$.
Matrices $\mathbf{U}_{N\times K}$ and $\mathbf{V}_{J\times K}$ collect the corresponding left and right singular vectors and satisfy $\mathbf{U}^{\top}\mathbf{U}=\mathbf{V}^{\top}\mathbf{V}=\mathbf{I}_K$. Our high-level idea is to utilize the top $K$ left singular vectors of the data matrix $\mathbf{R}$ to identify and estimate $\boldsymbol{\Pi}$ and subsequently $\boldsymbol{\Theta}$.
In the following two sections, we will present new identifiability results and develop a spectral estimation algorithm for GoM models based on the SVD in (6).
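As a quick numerical sanity check (a sketch with illustrative sizes and parameter distributions, not part of the formal development), the rank deficiency of $\mathbf{R}_0$ in (5) is easy to verify with a few lines of NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
N, J, K = 200, 50, 3

# Membership scores: each row of Pi lies on the simplex (nonnegative, sums to 1).
Pi = rng.dirichlet(np.ones(K), size=N)
# Item parameters: Bernoulli probabilities in [0, 1].
Theta = rng.uniform(0.1, 0.9, size=(J, K))

# Low-rank expectation R0 = Pi Theta^T, as in (5).
R0 = Pi @ Theta.T

# R0 has rank at most K: only K singular values are numerically nonzero.
s = np.linalg.svd(R0, compute_uv=False)
print(np.linalg.matrix_rank(R0))  # 3
print(np.sum(s > 1e-8))           # 3
```

Even though $\mathbf{R}_0$ is $200\times 50$, its effective dimension is governed entirely by $K=3$, which is what the spectral approach exploits.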

2. Identifiability Results

The study of identifiability in statistics dates back to Koopmans & Reiersol (1950). A model is identifiable if the model parameters can be reliably recovered from the observed data. Identifiability is a crucial property of a statistical model, as it is a prerequisite for valid and reproducible statistical inference. In latent variable modeling, identifiability is especially essential since it is a foundation for meaningful interpretation of the latent constructs.

Traditionally, identifiability of a statistical model means that the population parameters can be uniquely determined from the marginal distribution of the observed variables (Koopmans & Reiersol, 1950; Goodman, 1974). In the context of GoM models, this notion of population identifiability amounts to identifying the parameters $(\boldsymbol{\Theta}, \boldsymbol{\alpha})$ from the marginal distribution in (3). The complicated integrals in (3) make it difficult to establish population identifiability, which motivated Gu et al. (2023) to propose a dimension-grouping modeling component that simplifies GoM and to prove identifiability for that new model. However, it remains unknown whether the original GoM model is identifiable.

In this work, we consider a new notion of identifiability, which we term expectation identifiability. This notion concerns not only the item parameters but also the individual membership scores. Similar identifiability notions are widely adopted and studied in the network modeling and topic modeling literature; see, e.g., Jin et al. (2023), Ke & Jin (2023), Mao et al. (2021), and Ke & Wang (2022). Specifically, recall from (5) that the expectation of the data matrix $\mathbf{R}$ has a low-rank decomposition $\mathbf{R}_0 = \boldsymbol{\Pi}\boldsymbol{\Theta}^{\top}$; we seek to understand under what conditions this decomposition is unique. Note that both the Bernoulli probabilities $\boldsymbol{\Theta}$ and the membership scores $\boldsymbol{\Pi}$ are treated as parameters to be identified.
We call a parameter set $(\boldsymbol{\Pi},\boldsymbol{\Theta})$ valid if $\boldsymbol{\pi}_i\in \Delta_{K-1}$ and $\theta_{jk}\in [0,1]$ for all $i$, $j$, and $k$. We formally define expectation identifiability below.

Definition 1

(Expectation identifiability) A GoM model with parameter set $(\boldsymbol{\Pi}, \boldsymbol{\Theta})$ is said to be identifiable if, for any other valid parameter set $(\widetilde{\boldsymbol{\Pi}}, \widetilde{\boldsymbol{\Theta}})$, the equality $\widetilde{\boldsymbol{\Pi}}\widetilde{\boldsymbol{\Theta}}^{\top}=\boldsymbol{\Pi}\boldsymbol{\Theta}^{\top}$ holds if and only if $(\boldsymbol{\Pi}, \boldsymbol{\Theta})$ and $(\widetilde{\boldsymbol{\Pi}}, \widetilde{\boldsymbol{\Theta}})$ are identical up to a permutation of the $K$ extreme profiles.
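To see why some condition on the parameters is needed at all, here is a small hypothetical NumPy illustration (with $K=2$ and illustrative parameter ranges chosen by us, not taken from the paper): two valid parameter sets that are not permutations of each other can yield exactly the same expectation matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
N, J, K = 100, 20, 2

Pi = rng.dirichlet(np.ones(K), size=N)      # valid membership scores
Theta = rng.uniform(0.3, 0.7, size=(J, K))  # valid item parameters

# Mix the two profiles with an invertible matrix whose rows sum to one.
A = np.array([[0.9, 0.1],
              [0.1, 0.9]])
Pi_tilde = Pi @ A                              # rows still on the simplex
Theta_tilde = Theta @ np.linalg.inv(A).T       # entries still in [0, 1] here

# Both pairs are valid, differ by more than a profile permutation,
# yet produce exactly the same expectation matrix Pi Theta^T.
print(np.allclose(Pi @ Theta.T, Pi_tilde @ Theta_tilde.T))  # True
```

The mixing works here because the entries of $\boldsymbol{\Theta}$ are bounded away from 0 and 1 and no subject is pure, which is precisely the kind of degeneracy that the conditions below rule out.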

One might ask whether identifying $\boldsymbol{\Pi}$ and $\boldsymbol{\Theta}$ from the expectation $\mathbf{R}_0$ has any implications for identifying $\boldsymbol{\Pi}$ and $\boldsymbol{\Theta}$ from the observed data $\mathbf{R}$.
In fact, when both $N$ and $J$ are large with respect to $K$, it is known that the difference between the low-rank decompositions of $\mathbf{R}_0$ and $\mathbf{R}$ is small in a certain sense (Chen et al., 2021a). We will revisit and elaborate on this subtlety when describing our estimation method and presenting the simulation results. Briefly speaking, studying the expectation identifiability problem from $\mathbf{R}_0$ is very meaningful in modern large-scale and high-dimensional data settings with large $N$ and $J$.
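To give a rough sense of this closeness, the simulation sketch below (illustrative sizes and distributions of our choosing, not the paper's experiments) compares the top-$K$ left singular subspaces of $\mathbf{R}_0$ and of one binary realization $\mathbf{R}$ via the sine of the largest principal angle:

```python
import numpy as np

def subspace_sin_theta(N, J, K=3, seed=0):
    """Sine of the largest principal angle between the top-K left
    singular subspaces of R0 = Pi Theta^T and of a binary draw R."""
    rng = np.random.default_rng(seed)
    Pi = rng.dirichlet(np.ones(K), size=N)      # membership scores
    Theta = rng.uniform(0.1, 0.9, size=(J, K))  # item parameters
    R0 = Pi @ Theta.T
    R = (rng.random((N, J)) < R0).astype(float)  # independent Bernoulli entries
    U0 = np.linalg.svd(R0, full_matrices=False)[0][:, :K]
    U = np.linalg.svd(R, full_matrices=False)[0][:, :K]
    cos = np.linalg.svd(U0.T @ U, compute_uv=False)  # cosines of principal angles
    return np.sqrt(max(0.0, 1.0 - cos[-1] ** 2))

# The empirical subspace error shrinks as both N and J grow.
print(subspace_sin_theta(500, 100))   # larger error
print(subspace_sin_theta(3000, 600))  # smaller error
```

The shrinking subspace error is what makes the empirical singular vectors of $\mathbf{R}$ a usable surrogate for those of $\mathbf{R}_0$ in the double-asymptotic regime.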

We next present our new identifiability conditions for GoM models. We first define the important concept of pure subjects.

Definition 2

(Pure subject) Subject $i$ is a pure subject for extreme profile $k$ if the only positive entry of $\boldsymbol{\pi}_i$ is located at index $k$, that is,

$\boldsymbol{\pi}_i = (0, \dots, 0, \underbrace{1}_{k\text{-th entry}}, 0, \dots, 0).$

In words, subject $i$ is a pure subject for profile $k$ if it belongs solely to this profile and has no membership in any other profile. We consider the following condition on $\boldsymbol{\Pi}$.

Condition 1

$\boldsymbol{\Pi}$ is such that every extreme latent profile has at least one pure subject.

Condition 1 is a quite mild assumption on $\boldsymbol{\Pi}$, because it only requires that each of the $K$ extreme profiles has at least one representative subject among all $N$ subjects. Intuitively, this condition is reasonable because the existence of such representative subjects helps pinpoint the meaning and interpretation of the extreme profiles. In real data applications of the GoM model, each pure subject is characterized by a prototypical response pattern indicating a particular classification or diagnosis. Specifically, each column of the $J\times K$ item parameter matrix $\boldsymbol{\Theta}$ is examined to determine the interpretation of the corresponding extreme profile.
So, the existence of a pure subject for the $k$th ($1\le k\le K$) extreme profile means that there indeed exists a prototypical subject characterized by the parameters in the $k$th column of the $\boldsymbol{\Theta}$ matrix. As a concrete applied example, Woodbury et al. (1978) fitted the GoM model to a clinical dataset and estimated the item parameters for four extreme latent profiles. Each extreme profile was then interpreted according to the response characteristics revealed by its item parameters; the four profiles were labeled "Asymptomatic," "Moderate," "Acyanotic Severe," and "Cyanotic Severe" in Woodbury et al. (1978). In this context, having a pure subject for each of these extreme profiles is a practically meaningful assumption, because it simply means that in a sample with a large number $N$ of subjects, there exist an "Asymptomatic" subject, a "Moderate" subject, an "Acyanotic Severe" subject, and a "Cyanotic Severe" subject.

Under Condition 1, $\boldsymbol{\Pi}$ contains an identity submatrix $\mathbf{I}_K$ after some row permutation. For any matrix $\mathbf{A}$, we use $\mathbf{A}_{\mathbf{S},:}$ to denote the submatrix formed by the rows with indices in $\mathbf{S}$.
Under Condition 1, denote by $\mathbf{S}=(S_1,\dots,S_K)$ the index vector of one set of $K$ pure subjects such that $\boldsymbol{\Pi}_{\mathbf{S},:}=\mathbf{I}_K$. So $S_1,\dots,S_K$ are distinct integers in $\{1,\dots,N\}$.
For example, if the first $K$ rows of $\boldsymbol{\Pi}$ equal $\mathbf{I}_K$, then $\mathbf{S}=(1,2,\dots,K)$. Recall that $\mathbf{R}_0=\mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^{\top}$ is the SVD of $\mathbf{R}_0$, where $\mathbf{U}$ is an $N\times K$ matrix collecting the $K$ left singular vectors as columns. Interestingly, Condition 1 induces a simplex geometry on the row vectors of $\mathbf{U}$. We have the following important proposition, which serves as a foundation for both our identifiability results and our estimation procedure.

Proposition 1

Under Condition 1, the left singular matrix $\mathbf{U}$ satisfies

(7) $\mathbf{U}=\boldsymbol{\Pi}\,\mathbf{U}_{\mathbf{S},:}.$

Furthermore, $\boldsymbol{\Pi}$ and $\boldsymbol{\Theta}$ can be written as

(8) $\boldsymbol{\Pi} = \mathbf{U}\mathbf{U}_{\mathbf{S},:}^{-1},$
(9) $\boldsymbol{\Theta} = \mathbf{V}\boldsymbol{\Sigma}\mathbf{U}^{\top}\boldsymbol{\Pi}(\boldsymbol{\Pi}^{\top}\boldsymbol{\Pi})^{-1} = \mathbf{V}\boldsymbol{\Sigma}\mathbf{U}_{\mathbf{S},:}^{\top}.$

We elaborate more on Proposition 1. Equation (7) implies that the left singular matrix $\mathbf{U}$ and the membership score matrix $\boldsymbol{\Pi}$ differ by a linear transformation, namely the $K\times K$ matrix $\mathbf{U}_{\mathbf{S},:}$. Condition 1 and the properties of the singular value decomposition (such as the columns of $\mathbf{V}$ being mutually orthogonal and $\boldsymbol{\Sigma}$ being invertible) are used to prove (7).
Equations (8) and (9) imply that if an index set $\mathbf{S}$ of pure subjects is known, then the parameters of interest $\boldsymbol{\Pi}$ and $\boldsymbol{\Theta}$ can be written in closed form in terms of the SVD and $\mathbf{S}$. More specifically, (7) is equivalent to

(10) $\mathbf{U}_{i,:} = \sum_{k=1}^K \pi_{ik}\,\mathbf{U}_{S_k,:}, \quad i=1,\dots,N.$

Each $\mathbf{U}_{i,:}$ is the embedding of the $i$th subject into the top-$K$ left singular subspace of $\mathbf{R}_0$, and all the rows $\mathbf{U}_{1,:},\ldots,\mathbf{U}_{N,:}$ can be plotted as points in the $K$-dimensional Euclidean space $\mathbb{R}^K$.
Geometrically, since $\sum_{k=1}^K \pi_{ik}=1$ with $\pi_{ik}\ge 0$, each $\mathbf{U}_{i,:}$ is a convex combination of $\mathbf{U}_{S_1,:},\dots,\mathbf{U}_{S_K,:}$, which are the embeddings of the $K$ types of pure subjects. This means that in $\mathbb{R}^K$, all the subjects lie in a simplex (i.e., the generalization of a triangle or tetrahedron to higher dimensions) whose vertices are these $K$ types of pure subjects.
Note that $\mathbf{U}_{i,:}$ and $\mathbf{U}_{i',:}$ overlap if the corresponding subjects have the same membership score vector, $\boldsymbol{\pi}_i = \boldsymbol{\pi}_{i'}$. Figure 1 illustrates this simplex geometry for embeddings in $\mathbb{R}^3$ with $K=3$.

A similar simplex structure in the spectral domain was first discovered and used for estimation under the degree-corrected mixed membership network model (Jin et al., 2023, first posted on arXiv in 2017). Spectral approaches that exploit the simplex structure to estimate mixed memberships were later used for related network models (Mao et al., 2021) and topic models (Ke & Wang, 2022). Compared to network models, where the data matrix is symmetric, the GoM model has an $N \times J$ asymmetric data matrix. Compared to topic models, the entries of the perturbation matrix $\mathbf{R} - \mathbf{R}_0$ in the GoM model independently follow Bernoulli distributions, whereas in topic models the entries of the perturbation matrix follow multinomial distributions.

Figure 1 Illustration of the simplex geometry of the $N \times K$ left singular matrix $\mathbf{U}$ with $K = 3$. The solid dots represent the row vectors of $\mathbf{U}$ in $\mathbb{R}^3$, and the three simplex vertices (i.e., vertices of the triangle) correspond to the three types of pure subjects. All the dots lie in this triangle.

It is worth noting that the expectation identifiability in Definition 1 is closely related to the uniqueness of non-negative matrix factorization (NMF; Donoho & Stodden, 2003; Hoyer, 2004; Berry et al., 2007). NMF seeks to decompose a non-negative matrix $\mathbf{M} \in \mathbb{R}^{m \times n}$ into $\mathbf{M} = \mathbf{W}\mathbf{H}$, where both $\mathbf{W} \in \mathbb{R}^{m \times r}$ and $\mathbf{H} \in \mathbb{R}^{r \times n}$ are non-negative matrices.
An NMF is called separable if each column of $\mathbf{W}$ appears as a column of $\mathbf{M}$ (Donoho & Stodden, 2003). If we write $\mathbf{M} = \mathbf{R}_0^{\top}$, $\mathbf{W} = \boldsymbol{\Theta}$, $\mathbf{H} = \boldsymbol{\Pi}^{\top}$, then the separability condition for this NMF aligns with Condition 1. It is also generally assumed that $\mathbf{W}$ has full rank; otherwise $\mathbf{H}$ typically cannot be uniquely determined (Gillis & Vavasis, 2013).
Theorem 2 shows that when Condition 1 holds, $\boldsymbol{\Theta}$ being full-rank suffices for GoM model identifiability. We will also show that model identifiability still holds under certain relaxations of the rank condition on $\boldsymbol{\Theta}$. On another note, estimating an NMF usually involves direct manipulation of the original data matrix, which can be computationally inefficient for large datasets. In our approach, we apply an NMF algorithm from Gillis & Vavasis (2013) to the SVD of the data matrix, rather than to the data matrix itself, to estimate the GoM parameters. Since our procedure operates on the singular subspace, whose dimension is significantly lower than that of the original data space, it incurs lower computational cost than conventional NMF procedures.
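To make the separability correspondence concrete, here is a small sketch (our own toy setup, assuming NumPy) checking that when pure subjects are present, every column of $\mathbf{W} = \boldsymbol{\Theta}$ appears verbatim as a column of $\mathbf{M} = \mathbf{R}_0^{\top}$, namely at the pure subjects' positions:

```python
import numpy as np

rng = np.random.default_rng(1)
J, N, K = 30, 200, 3

Theta = rng.uniform(0.1, 0.9, size=(J, K))   # W = Theta, non-negative
Pi = rng.dirichlet(np.ones(K), size=N)
Pi[:K] = np.eye(K)                           # pure subjects -> separability
M = Theta @ Pi.T                             # M = R0^T = W H with H = Pi^T

# Separability: column k of W equals the column of M indexed by the
# k-th pure subject (here subject k itself).
for k in range(K):
    assert np.allclose(M[:, k], Theta[:, k])
```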

We next present our identifiability results for GoM models. We first show that the pure-subject Condition 1 is almost necessary for the identifiability of a GoM model.

Theorem 1

Suppose $\theta_{jk} \in (0,1)$ for all $j = 1, \dots, J$ and $k = 1, \dots, K$. If there is one extreme profile that does not have any pure subject, then the GoM model is not identifiable.

The proofs of the theorems are all deferred to the Appendix. Theorem 1 reveals the importance of Condition 1 for identifiability. In fact, later we will use this condition as a foundation for our estimation algorithm. Our next theorem presents sufficient and almost necessary conditions for GoM models to be identifiable.

Theorem 2

Suppose $\boldsymbol{\Pi}$ satisfies Condition 1.

  (a) If $\mathrm{rank}(\boldsymbol{\Theta}) = K$, then the GoM model is identifiable.

  (b) If $\mathrm{rank}(\boldsymbol{\Theta}) = K - 1$ and no column of $\boldsymbol{\Theta}$ is an affine combination of the other columns of $\boldsymbol{\Theta}$, then the GoM model is identifiable. (An affine combination of vectors $\mathbf{x}_1, \dots, \mathbf{x}_n$ is defined as $\sum_{i=1}^n a_i \mathbf{x}_i$ with $\sum_{i=1}^n a_i = 1$.)

  (c) In any other case, if there exists a subject $i$ such that $\pi_{ik} > 0$ for every $k = 1, \dots, K$, then the GoM model is not identifiable.

The high-level proof idea of Theorem 2 shares a similar spirit with Theorem 2.1 in Mao et al. (2021). We next explain and interpret the three settings in Theorem 2. According to part (a), if the $K$ item parameter vectors $\boldsymbol{\theta}_1, \dots, \boldsymbol{\theta}_K$ (i.e., the $K$ columns of $\boldsymbol{\Theta}$) are linearly independent, then the GoM model is identifiable under Condition 1. In part (b), for identifiability to hold when $\mathrm{rank}(\boldsymbol{\Theta}) = K - 1$, no item parameter vector $\boldsymbol{\theta}_k$ can be written as an affine combination of the remaining vectors $\{\boldsymbol{\theta}_{k'} : k' \ne k\}$.
This is a weaker requirement on $\boldsymbol{\Theta}$ than in part (a). Part (c) states that if the conditions in parts (a) and (b) do not hold and there exists a completely mixed subject that partially belongs to all profiles (i.e., $\pi_{ik} > 0$ for all $k$), then the model is not identifiable. Part (c) also shows that the sufficient identifiability conditions in (a) and (b) are close to being necessary, because the existence of a completely mixed subject is a very mild assumption. We next give three toy examples with $K = 3$ and $J = 4$ to illustrate the conditions in Theorem 2.

Example 1

Consider

$$\boldsymbol{\Theta} = \begin{pmatrix} 0.2 & 0.8 & 0.8\\ 0.2 & 0.8 & 0.2\\ 0.8 & 0.2 & 0.8\\ 0.8 & 0.2 & 0.2 \end{pmatrix}.$$

It is easy to verify that $\mathrm{rank}(\boldsymbol{\Theta}) = K = 3$. This case falls into scenario (a) in Theorem 2, so a GoM model parameterized by $(\boldsymbol{\Pi}, \boldsymbol{\Theta})$ is identifiable if $\boldsymbol{\Pi}$ satisfies Condition 1.
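As a quick numerical check (a sketch assuming NumPy), the rank condition of part (a) can be verified directly:

```python
import numpy as np

# Item parameter matrix from Example 1 (J = 4, K = 3).
Theta = np.array([[0.2, 0.8, 0.8],
                  [0.2, 0.8, 0.2],
                  [0.8, 0.2, 0.8],
                  [0.8, 0.2, 0.2]])

print(np.linalg.matrix_rank(Theta))  # 3, so scenario (a) applies
```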

Example 2

Consider

$$\boldsymbol{\Theta} = \begin{pmatrix} 0.2 & 0.8 & 0.8\\ 0.2 & 0.8 & 0.8\\ 0.8 & 0.2 & 0.8\\ 0.8 & 0.2 & 0.8 \end{pmatrix}.$$

Now $\mathrm{rank}(\boldsymbol{\Theta}) = K - 1 = 2$, since the third column of $\boldsymbol{\Theta}$ is a linear combination of the first two columns. However, no column of $\boldsymbol{\Theta}$ is an affine combination of the other columns of $\boldsymbol{\Theta}$. This case falls into scenario (b) in Theorem 2, so a GoM model parameterized by $(\boldsymbol{\Pi}, \boldsymbol{\Theta})$ is identifiable if $\boldsymbol{\Pi}$ satisfies Condition 1.

Example 3

Consider

$$\boldsymbol{\Theta} = \begin{pmatrix} 0.2 & 0.8 & 0.5\\ 0.2 & 0.8 & 0.5\\ 0.8 & 0.2 & 0.5\\ 0.8 & 0.2 & 0.5 \end{pmatrix}.$$

In this case, $\mathrm{rank}(\boldsymbol{\Theta}) = K - 1 = 2$, and the third column of $\boldsymbol{\Theta}$ is an affine combination of the first two columns. This case falls into scenario (c) in Theorem 2, so if there exists a subject that partially belongs to all $K$ profiles, then the GoM model is not identifiable.
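The affine-combination conditions distinguishing Examples 2 and 3 can also be checked numerically. The sketch below (assuming NumPy, with a helper function of our own) uses least squares; here the exact solution is unique because the remaining two columns are linearly independent:

```python
import numpy as np

def is_affine_combo(Theta, j):
    """Check whether column j of Theta equals an affine combination
    (coefficients summing to one) of the remaining columns."""
    A = np.delete(Theta, j, axis=1)
    b = Theta[:, j]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    return bool(np.allclose(A @ a, b) and np.isclose(a.sum(), 1.0))

# Example 2: column 3 = 0.8*(col 1) + 0.8*(col 2), linear but not affine.
Theta2 = np.array([[0.2, 0.8, 0.8],
                   [0.2, 0.8, 0.8],
                   [0.8, 0.2, 0.8],
                   [0.8, 0.2, 0.8]])

# Example 3: column 3 = 0.5*(col 1) + 0.5*(col 2), an affine combination.
Theta3 = np.array([[0.2, 0.8, 0.5],
                   [0.2, 0.8, 0.5],
                   [0.8, 0.2, 0.5],
                   [0.8, 0.2, 0.5]])

print(any(is_affine_combo(Theta2, j) for j in range(3)))  # False -> scenario (b)
print(any(is_affine_combo(Theta3, j) for j in range(3)))  # True  -> scenario (c)
```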

3. SVD-Based Spectral Estimation Method and Its Consistency

3.1. Estimation Algorithm

In the literature on GoM model estimation, the most prevalent approaches are perhaps Bayesian inferences using Markov chain Monte Carlo (MCMC) algorithms such as Gibbs and Metropolis–Hastings sampling (Erosheva, 2002; Erosheva et al., 2007; Gu et al., 2023). However, MCMC is time-consuming and typically not computationally efficient. As Borsboom et al. (2016) point out, despite their usefulness, GoM models are somewhat underrepresented in psychometric applications due to the lack of readily accessible statistical software. Recently, the R package sirt (Robitzsch & Robitzsch, 2022) has provided a joint maximum likelihood (JML) algorithm to fit GoM models. This algorithm implements the Lagrange multiplier method proposed in Erosheva (2002) and solves the optimization problem in a gradient-descent fashion. Although this JML algorithm is computationally more efficient than MCMC algorithms, it is still not scalable to very large-scale response data due to its iterative nature. It is therefore of interest to develop a non-iterative estimation method suitable for analyzing modern datasets with large numbers of items and subjects.

We next propose a fast SVD-based spectral method to estimate GoM models. Recall that Proposition 1 establishes the expressions for $\boldsymbol{\Pi}, \boldsymbol{\Theta}$ in (8) and (9). In practice, since $\mathbf{S}$ is not known, we propose to estimate it using a vertex-hunting technique called the successive projection algorithm (SPA; Araújo et al., 2001; Gillis & Vavasis, 2013). As stated in Proposition 1, Condition 1 induces a simplex geometry on the row vectors of $\mathbf{U}$, and the simplex vertices correspond to the pure subjects $\mathbf{S}$.
To locate the $K$ vertices for any input matrix $\mathbf{U} \in \mathbb{R}^{N \times K}$ with such a simplex structure, SPA first finds the subject with the maximum row norm in $\mathbf{U}$. That is, the first vertex index is

$$\widehat{S}_1 = \mathop{\mathrm{argmax}}_{1 \le i \le N} \Vert \mathbf{U}_{i,:}\Vert _2.$$

Hereafter we use $\Vert \mathbf{x}\Vert _2 = \sqrt{\mathbf{x}^{\top}\mathbf{x}}$ to denote the $\ell_2$ norm of any vector $\mathbf{x}$. Since the $\ell_2$ norm of any convex combination of the vertices is at most the maximum $\ell_2$ norm of the vertices, this step is guaranteed to return one of the vertices of the simplex.
SPA then projects all the remaining row vectors $\{\mathbf{U}_{i,:} : i \ne \widehat{S}_1\}$ onto the subspace orthogonal to $\mathbf{U}_{\widehat{S}_1,:}$. Mathematically, denote by $\mathbf{v}_1 := \mathbf{U}_{\widehat{S}_1,:}/\Vert \mathbf{U}_{\widehat{S}_1,:}\Vert _2$ the scaled vector with unit norm; then the projected vectors are the rows of the matrix $\mathbf{U}(\mathbf{I}_K - \mathbf{v}_1\mathbf{v}_1^{\top})$, where $\mathbf{I}_K - \mathbf{v}_1\mathbf{v}_1^{\top}$ is a $K \times K$ projection matrix. In the second step, SPA finds the second vertex index as the one with the maximum norm among the projected row vectors,

$$\widehat{S}_2 = \mathop{\mathrm{argmax}}_{i \ne \widehat{S}_1} \Vert \mathbf{U}_{i,:}(\mathbf{I}_K - \mathbf{v}_1\mathbf{v}_1^{\top})\Vert _2.$$

The above procedure of finding the row with the maximum projected norm and then projecting the remaining rows onto the orthogonal subspace is repeated until all $K$ vertex indices are found. Sequentially, for each $k = 1, \dots, K-1$, define a unit-norm vector $\mathbf{v}_k = \mathbf{U}_{\widehat{S}_k,:}/\Vert \mathbf{U}_{\widehat{S}_k,:}\Vert _2$; then the $(k+1)$-th vertex index is estimated as

$$\widehat{S}_{k+1} = \mathop{\mathrm{argmax}}_{i \notin \{\widehat{S}_1, \dots, \widehat{S}_k\}} \Vert \mathbf{U}_{i,:}(\mathbf{I}_K - \mathbf{v}_1\mathbf{v}_1^{\top}) \cdots (\mathbf{I}_K - \mathbf{v}_k\mathbf{v}_k^{\top})\Vert _2.$$

Here the projection matrices (IK-v1v1) \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$(\textbf{I}_K - \textbf{v}_1\textbf{v}_1^{\top })$$\end{document} , \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\dots $$\end{document} , (IK-vkvk) \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$(\textbf{I}_K - \textbf{v}_k\textbf{v}_k^{\top })$$\end{document} sequentially project the rows of U \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\textbf{U}$$\end{document} to the orthogonal spaces of those already found vertices US^1,:,,US^k,: \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\textbf{U}_{\widehat{S}_1,:},\ldots , \textbf{U}_{\widehat{S}_k,:}$$\end{document} . This SPA procedure can be intuitively understood by visually inspecting the toy example in Fig. 1. 
Since $\textbf{U}_{1,:}, \ldots, \textbf{U}_{N,:}$ lie in a triangle in Fig. 1, the vector with the largest norm must be one of the three vertices $\textbf{U}_{S_1,:}, \textbf{U}_{S_2,:}, \textbf{U}_{S_3,:}$, say $\textbf{U}_{S_3,:}$.
Furthermore, after projecting the remaining $\textbf{U}_{i,:}$ onto the space orthogonal to $\textbf{U}_{S_3,:}$, the maximum norm among the projected vectors is attained at $i = S_1$ or $i = S_2$. This observation intuitively justifies that SPA finds the correct set of pure subjects given a simplex structure.
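The SPA iteration above can be sketched in a few lines of NumPy (a hypothetical illustration following the notation in the text, not the authors' implementation):

```python
import numpy as np

def spa(U, K):
    """Successive projection algorithm: greedily select K rows of U
    that approximately form the vertices of the simplex."""
    P = np.eye(U.shape[1])   # projector onto the orthogonal complement
                             # of the span of the vertices found so far
    vertices = []
    for _ in range(K):
        proj = U @ P                          # residuals after projection
        norms = np.linalg.norm(proj, axis=1)
        norms[vertices] = -np.inf             # exclude rows already selected
        s = int(np.argmax(norms))             # row with the largest residual norm
        vertices.append(s)
        v = proj[s] / np.linalg.norm(proj[s])   # unit vector toward the new vertex
        P = P @ (np.eye(U.shape[1]) - np.outer(v, v))
    return vertices
```

Each loop iteration applies one additional factor $(\textbf{I}_K - \textbf{v}_k\textbf{v}_k^{\top})$ of the product in the display above, accumulated in the matrix `P`.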
With the estimated pure subjects $\widehat{\textbf{S}}$, the parameters $\boldsymbol{\Pi}$ and $\boldsymbol{\Theta}$ can subsequently be obtained via (8) and (9).

The above estimation procedure assumes that $\textbf{R}_0$ is known. In practice, we only have access to the binary random data matrix $\textbf{R}$ whose expectation is $\textbf{R}_0$. Fortunately, for a large-dimensional random matrix with a low-rank expectation, the top-$K$ SVD of the random matrix is close to the SVD of its expectation (e.g., see Chen et al., Reference Chen, Chi, Fan and Ma2021a). This nontrivial theoretical result is our key insight and motivates using the top-$K$ SVD of $\textbf{R}$ as a surrogate for that of $\textbf{R}_0$:

(11) $$\textbf{R} \approx \widehat{\textbf{U}}\widehat{\boldsymbol{\Sigma}}\widehat{\textbf{V}}^{\top},$$

where $\widehat{\boldsymbol{\Sigma}}$ is a $K \times K$ diagonal matrix collecting the $K$ largest singular values of $\textbf{R}$, and $\widehat{\textbf{U}}_{N\times K}$, $\widehat{\textbf{V}}_{J\times K}$ collect the corresponding left and right singular vectors with $\widehat{\textbf{U}}^{\top}\widehat{\textbf{U}} = \widehat{\textbf{V}}^{\top}\widehat{\textbf{V}} = \textbf{I}_K$. Specifically, Chen et al. (Reference Chen, Chi, Fan and Ma2021a) proved that the difference between $\textbf{U}$ and $\widehat{\textbf{U}}$, up to a rotation, is small when $N$ and $J$ are large relative to $K$. Since Proposition 1 shows that the population row vectors $\textbf{U}_{1,:}, \ldots, \textbf{U}_{N,:}$ form a simplex structure, the empirical row vectors $\{\widehat{\textbf{U}}_{i,:}\}_{i=1}^N$ are therefore expected to form a noisy point cloud distributed around the population simplex. We call this noisy cloud the empirical simplex. For an illustration of the population and empirical simplices, see Fig. 2 in Sect. 4.
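For concreteness, the top-$K$ SVD in (11) can be obtained by truncating a full SVD (a minimal sketch; for large sparse data a truncated solver such as `scipy.sparse.linalg.svds` would be preferable):

```python
import numpy as np

def top_k_svd(R, K):
    """Rank-K truncated SVD: returns (U_hat, Sigma_hat, V_hat) with
    U_hat (N x K), Sigma_hat (K x K diagonal), V_hat (J x K)."""
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    return U[:, :K], np.diag(s[:K]), Vt[:K, :].T
```

When $\textbf{R}$ is replaced by a matrix of exact rank $K$ such as $\textbf{R}_0$, this factorization reproduces it exactly.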

To recover the population simplex from the empirical one, we use a pruning step similar to that of Mao et al. (Reference Mao, Sarkar and Chakrabarti2021) to reduce noise; this procedure is summarized in Algorithm 1. The high-level idea behind pruning is that if $\widehat{\textbf{U}}_{i,:}$ has a large norm but very few close neighbors, then it is likely to lie outside the population simplex and hence should be pruned (i.e., removed) before performing SPA, yielding higher accuracy in vertex hunting. More specifically, Algorithm 1 first calculates the norm of each row of $\widehat{\textbf{U}}$ (lines 1-3) and identifies the vectors with norms in the upper $q$-quantile (line 4); the larger $q$ is, the more such points are found. Then, for each vector found, its average distance $x_i$ to its $r$ nearest neighbors is calculated (lines 5-8).
Finally, the subjects to be pruned are those whose $x_i$ belongs to the upper $e$-quantile of all the $x_i$'s (line 9); a larger value of $e$ means that a larger proportion of the points are pruned. In our preliminary simulations, the estimation results were not very sensitive to the tuning parameters $r$, $q$, and $e$. After pruning, we use SPA to hunt for the $K$ vertices of the pruned empirical simplex, obtain $\widehat{\textbf{S}}$, and then estimate $\boldsymbol{\Pi}$ and $\boldsymbol{\Theta}$.
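A minimal sketch of this pruning step (a hypothetical reading of Algorithm 1; the exact quantile conventions for $q$ and $e$ may differ from the authors' implementation):

```python
import numpy as np

def prune(U_hat, r=10, q=0.4, e=0.2):
    """Return indices of rows to prune: large-norm rows with few close
    neighbors, which likely lie outside the population simplex."""
    norms = np.linalg.norm(U_hat, axis=1)
    # candidates: rows whose norms fall in the upper q-quantile
    candidates = np.where(norms >= np.quantile(norms, 1 - q))[0]
    x = np.empty(len(candidates))
    for j, i in enumerate(candidates):
        d = np.linalg.norm(U_hat - U_hat[i], axis=1)
        d[i] = np.inf                    # exclude the point itself
        x[j] = np.sort(d)[:r].mean()     # average distance to r nearest neighbors
    # prune candidates whose x_i lies in the upper e-quantile of the x_i's
    return candidates[x >= np.quantile(x, 1 - e)]
```

An isolated point far from the cloud has both a large norm and a large average neighbor distance, so it is flagged by both filters and removed.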

Algorithm 1 Prune

Algorithm 2 GoM Estimation by Successive Projection Algorithm with Pruning

Algorithm 2 summarizes our proposed method for estimating the parameters $(\boldsymbol{\Pi}, \boldsymbol{\Theta})$ based on SPA with the pruning step. We first introduce some notation. The index of the maximum element of a vector $\textbf{x}$ is denoted by $\textrm{argmax}(\textbf{x})$. For any matrix $\textbf{A} = (a_{ij})$, denote by $\textbf{A}_+ = (\max\{a_{ij}, 0\})$ the matrix that retains the nonnegative entries of $\textbf{A}$ and sets all negative entries to zero.
Denote by $\textrm{diag}(\textbf{x})$ the diagonal matrix whose diagonal entries are those of the vector $\textbf{x}$. If $\textbf{S}_2$ is a subvector of $\textbf{S}_1$, denote by $\textbf{S}_1 \backslash \textbf{S}_2$ the complement of $\textbf{S}_2$ in $\textbf{S}_1$. For a positive integer $M$, denote $[M] = \{1, \ldots, M\}$. After obtaining from Algorithm 1 the index vector $\widehat{\textbf{P}}$ of the pruned subjects (a subvector of $(1, 2, \ldots, N)$; line 2), we apply SPA to the pruned matrix $\widehat{\textbf{U}}_{[N]\setminus\widehat{\textbf{P}},:}$ to obtain the estimated pure subject index vector $\widehat{\textbf{S}}$ (lines 3-8).
Once this is achieved, $\boldsymbol{\Pi}$ and $\boldsymbol{\Theta}$ can be estimated by modifying (8) and (9) in Proposition 1. We first calculate

(12) $$\widetilde{\boldsymbol{\Pi}} = \widehat{\textbf{U}} \left(\widehat{\textbf{U}}_{\widehat{\textbf{S}},:}\right)^{-1}$$

based on (8). The matrix $\widetilde{\boldsymbol{\Pi}}$ obtained above does not necessarily fall in the parameter domain for $\boldsymbol{\Pi}$. Therefore, we first truncate all entries of $\widetilde{\boldsymbol{\Pi}}$ to be nonnegative and then renormalize each row to sum to one (line 10). Based on the resulting $\widehat{\boldsymbol{\Pi}}$, we can also estimate $\boldsymbol{\Theta}$ by

(13) $$\widetilde{\boldsymbol{\Theta}} = \widehat{\textbf{V}}\widehat{\boldsymbol{\Sigma}}\widehat{\textbf{U}}^{\top}\widehat{\boldsymbol{\Pi}}\left(\widehat{\boldsymbol{\Pi}}^{\top}\widehat{\boldsymbol{\Pi}}\right)^{-1}$$

according to (9). Our proposed method can be viewed as a method of moments: Equations (12) and (13) are based on the first moment of the response matrix $\textbf{R}$, where we equate the low-rank structure of the population first-moment matrix $\textbf{R}_0$ with the observed first-moment matrix $\textbf{R}$. Lastly, we truncate the entries of $\widetilde{\boldsymbol{\Theta}}$ to lie in $[\epsilon, 1-\epsilon]$ and obtain the final estimator $\widehat{\boldsymbol{\Theta}}$ (line 12).
When $\epsilon = 0$, this truncation ensures that the entries of $\widehat{\boldsymbol{\Theta}}$ lie in the parameter domain $[0, 1]$. In our numerical studies, we set $\epsilon = 0.001$ to be consistent and comparable with the default setting of the JML function in the R package sirt.
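Putting (12), (13), and the truncation steps together, the final stage of Algorithm 2 can be sketched as follows (a hypothetical NumPy implementation, assuming the pure-subject indices `S_hat` have already been found by SPA):

```python
import numpy as np

def estimate_pi_theta(U_hat, Sigma_hat, V_hat, S_hat, eps=0.001):
    """Estimate (Pi, Theta) from the top-K SVD and pure-subject indices."""
    # (12): express every row of U_hat in the basis of the pure-subject rows
    Pi_tilde = U_hat @ np.linalg.inv(U_hat[S_hat, :])
    # truncate negative entries to zero, then renormalize rows to sum to one
    Pi_hat = np.maximum(Pi_tilde, 0.0)
    Pi_hat /= Pi_hat.sum(axis=1, keepdims=True)
    # (13): least-squares estimate of Theta using all N subjects
    Theta_tilde = (V_hat @ Sigma_hat @ U_hat.T @ Pi_hat
                   @ np.linalg.inv(Pi_hat.T @ Pi_hat))
    # truncate entries into [eps, 1 - eps]
    return Pi_hat, np.clip(Theta_tilde, eps, 1.0 - eps)
```

In the noiseless case where the SVD of $\textbf{R}_0$ itself is supplied, this recovers $(\boldsymbol{\Pi}, \boldsymbol{\Theta})$ exactly.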

Remark 1

There are two possible ways to estimate $\boldsymbol{\Theta}$ according to (9). The first defines $\widehat{\boldsymbol{\Theta}}$ as the truncated version of

$$\widetilde{\boldsymbol{\Theta}} = \widehat{\textbf{V}}\widehat{\boldsymbol{\Sigma}}\widehat{\textbf{U}}_{\widehat{\textbf{S}},:}^{\top},$$

which uses only the rows of $\widehat{\textbf{U}}$ corresponding to the $K$ pure subjects indexed by $\widehat{\textbf{S}}$. The second estimates $\boldsymbol{\Theta}$ via (13), which uses information from all $N$ subjects and is therefore expected to give more stable estimates. Our preliminary simulations also confirm that (13) yields higher estimation accuracy, so we adopt this approach.

3.2. Estimation Consistency

In this subsection, we prove that our spectral method consistently estimates both the individual membership scores $\boldsymbol{\Pi}$ and the item parameters $\boldsymbol{\Theta}$. We consider the double-asymptotic regime where the number of subjects $N$ and the number of items $J$ both grow to infinity while $K$ is fixed. At a high level, since our estimators are functions of the empirical SVD, we prove consistency by leveraging singular subspace perturbation theory (Chen et al., Reference Chen, Chi, Fan and Ma2021a), which quantifies the discrepancy between the SVD of $\textbf{R}$ and that of its low-rank expectation $\textbf{R}_0$.

Before stating the theorem, we introduce some notation. Write $f(n) \lesssim g(n)$ if there exists a constant $c > 0$ such that $|f(n)| \le c|g(n)|$ for all sufficiently large $n$, and write $f(n) \succsim g(n)$ if there exists a constant $c > 0$ such that $|f(n)| \ge c|g(n)|$ for all sufficiently large $n$.
For any matrix $\textbf{A}$, denote its $k$th largest singular value by $\sigma_k(\textbf{A})$, and define its condition number $\kappa(\textbf{A})$ as the ratio of its largest to its smallest singular value. For any $m \times n$ matrix $\textbf{A} = (a_{ij}) \in \mathbb{R}^{m\times n}$, denote its Frobenius norm by $\Vert\textbf{A}\Vert_F = \sqrt{\sum_{i=1}^m \sum_{j=1}^n a_{ij}^2}$.

Condition 2

$\kappa(\boldsymbol{\Pi})\lesssim 1$, $\kappa(\boldsymbol{\Theta})\lesssim 1$, $\sigma_K(\boldsymbol{\Pi})\succsim\sqrt{N}$, and $\sigma_K(\boldsymbol{\Theta})\succsim\sqrt{J}$.

Condition 2 is a reasonable and mild assumption on the parameter matrices. As a simple example, it is not hard to verify that if $\boldsymbol{\Pi}$ and $\boldsymbol{\Theta}$ both consist of many copies of the identity matrix $\mathbf{I}_K$ stacked vertically, then Condition 2 is satisfied. The following theorem establishes the consistency of our spectral estimator for both $\boldsymbol{\Pi}$ and $\boldsymbol{\Theta}$.
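The stacked-identity example can be checked numerically. In the sketch below (an illustration, not part of the paper's code), $\boldsymbol{\Pi}$ consists of $m$ vertically stacked copies of $\mathbf{I}_K$, so $\boldsymbol{\Pi}^{\top}\boldsymbol{\Pi}=m\mathbf{I}_K$ and every singular value equals $\sqrt{m}=\sqrt{N/K}$, giving $\kappa(\boldsymbol{\Pi})=1$ and $\sigma_K(\boldsymbol{\Pi})$ of order $\sqrt{N}$:

```python
import numpy as np

# Stacked-identity example for Condition 2: Pi = [I_K; ...; I_K] with m
# copies, so N = mK, Pi^T Pi = m * I_K, and all K singular values equal
# sqrt(m) = sqrt(N/K). Hence kappa(Pi) = 1 and sigma_K(Pi) ~ sqrt(N).
K, m = 3, 50
N = m * K
Pi = np.vstack([np.eye(K)] * m)

sigma = np.linalg.svd(Pi, compute_uv=False)
kappa = sigma[0] / sigma[-1]   # exactly 1 in this example
```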

Theorem 3

Consider $\widetilde{\boldsymbol{\Pi}}=\widehat{\mathbf{U}}\,\widehat{\mathbf{U}}_{\widehat{\mathbf{S}},:}^{-1}$ and $\widetilde{\boldsymbol{\Theta}}=\widehat{\mathbf{V}}\widehat{\boldsymbol{\Sigma}}\widehat{\mathbf{U}}_{\widehat{\mathbf{S}},:}^{\top}$. Assume that Conditions 1 and 2 hold. If $N,J\rightarrow\infty$ with $N/J^2\rightarrow 0$ and $J/N^2\rightarrow 0$, then we have

(14) $$\frac{1}{\sqrt{NK}}\,\Vert\widetilde{\boldsymbol{\Pi}}-\boldsymbol{\Pi}\mathbf{P}\Vert_F \xrightarrow{P} 0, \qquad \frac{1}{\sqrt{JK}}\,\Vert\widetilde{\boldsymbol{\Theta}}\mathbf{P}-\boldsymbol{\Theta}\Vert_F \xrightarrow{P} 0,$$

where the notation $\xrightarrow{P}$ means convergence in probability, and $\mathbf{P}$ is a $K\times K$ permutation matrix, which has exactly one entry of "1" in each row and each column and zero entries elsewhere.

Theorem 3 implies that

$$\frac{1}{\sqrt{NK}}\,\Vert\widetilde{\boldsymbol{\Pi}}-\boldsymbol{\Pi}\mathbf{P}\Vert_F = \sqrt{\frac{1}{NK}\sum_{i=1}^N\sum_{k=1}^K \big(\widetilde{\pi}_{i,k}-\pi_{i,\phi(k)}\big)^2} \xrightarrow{P} 0; \qquad \frac{1}{\sqrt{JK}}\,\Vert\widetilde{\boldsymbol{\Theta}}\mathbf{P}-\boldsymbol{\Theta}\Vert_F = \sqrt{\frac{1}{JK}\sum_{j=1}^J\sum_{k=1}^K \big(\widetilde{\theta}_{j,\phi(k)}-\theta_{j,k}\big)^2} \xrightarrow{P} 0,$$

where $\phi:\{1,\ldots,K\}\rightarrow\{1,\ldots,K\}$ is the permutation map determined by the $K\times K$ permutation matrix $\mathbf{P}$. These results mean that as $N,J\rightarrow\infty$, the average squared estimation error across all entries of the mixed membership score matrix $\boldsymbol{\Pi}$, and that across all entries of the item parameter matrix $\boldsymbol{\Theta}$, both converge to zero in probability.
This double-asymptotic regime, with both $N$ and $J$ going to infinity, and this notion of consistency in the scaled Frobenius norm are similar to those considered in the joint MLE approach to item factor analysis in Chen et al. (2019) and Chen et al. (2020). To the best of our knowledge, these are the first consistency results established for GoM models in this modern regime.

4. Simulation Studies

4.1. Evaluating the Proposed Method

We carry out simulation studies to evaluate the accuracy and computational efficiency of our new method. We consider $K\in\{3,8\}$, $N\in\{200, 1000, 2000, 3000, 4000, 5000\}$, and $J=N/5$, which correspond to large-scale and high-dimensional data scenarios. These simulation regimes share a similar spirit with those in Zhang et al. (2020), which proposed and evaluated an SVD-based approach for item factor analysis. In each simulation setting, we generate 100 independent replicates.
Within each setting and each replicate, the rows of $\boldsymbol{\Pi}$ are independently simulated from the Dirichlet$(\boldsymbol{\alpha})$ distribution with $\boldsymbol{\alpha}$ equal to the $K$-dimensional all-one vector, and the first $K$ rows of $\boldsymbol{\Pi}$ are set to the identity matrix $\mathbf{I}_K$ in order to satisfy the pure-subject Condition 1. The entries of $\boldsymbol{\Theta}$ are independently simulated from the uniform distribution on $[0,1]$.
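This data-generating process can be sketched in a few lines of Python (a minimal illustration of the design above; the variable names and the particular seed are ours):

```python
import numpy as np

# Simulation design: rows of Pi drawn from Dirichlet(1, ..., 1), with
# the first K rows replaced by I_K (pure subjects, Condition 1); Theta
# drawn uniformly on [0, 1]; binary responses R_ij ~ Bernoulli((Pi Theta^T)_ij).
rng = np.random.default_rng(123)
N, K = 1000, 3
J = N // 5

Pi = rng.dirichlet(np.ones(K), size=N)
Pi[:K, :] = np.eye(K)                 # pure subjects
Theta = rng.uniform(size=(J, K))

R0 = Pi @ Theta.T                     # E[R], a rank-K matrix
R = rng.binomial(1, R0)               # observed N x J binary data matrix
```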
Our preliminary simulations suggest setting the tuning parameters in the pruning Algorithm 1 to $r=10$, $q=0.4$, and $e=0.2$, with $8\%$ of the subjects removed before SPA, so that the pruned empirical simplex is adequately close to the population simplex. We use this setting throughout all simulations and the real data analysis unless otherwise specified. The performance of our method turns out to be robust to these tuning parameter values, so little tuning is needed in practice.

To illustrate the simplex geometry of the population and empirical left singular matrices $\mathbf{U}$ (from the SVD of $\mathbf{R}_0$) and $\widehat{\mathbf{U}}$ (from the top-$K$ SVD of $\mathbf{R}$), we plot $\mathbf{U}_{i,:}$ and $\widehat{\mathbf{U}}_{i,:}$ with $N=2000$ and $K=3$ in Fig. 2. All vectors are projected to two dimensions for better visualization of the simplex (i.e., triangle) structure. In Fig. 2, the red-shaded area corresponds to the population simplex, the green crosses are the removed subjects selected by the pruning Algorithm 1, and the blue dots form the empirical simplex obtained after pruning. As we can see, the empirical row vectors $\widehat{\mathbf{U}}_{i,:}$ approximately form a simplex cloud around the population simplex. After pruning out the noisy vectors, the resulting empirical simplex is close to the population one. This fact not only illustrates the effectiveness of the pruning procedure in Algorithm 1, but also confirms the usefulness of our notion of expectation identifiability by showing the close proximity of $\mathbf{U}$ and $\widehat{\mathbf{U}}$.

The resemblance between the scatter plots of the rows of $\mathbf{U}$ and $\widehat{\mathbf{U}}$ implies that the simplex geometry of $\mathbf{U}$ holds approximately for $\widehat{\mathbf{U}}$, and hence justifies applying SPA to $\widehat{\mathbf{U}}$ to estimate $\boldsymbol{\Pi}$ and $\boldsymbol{\Theta}$.
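To make the pipeline concrete, the sketch below implements a plain version of the spectral estimator of Theorem 3: top-$K$ SVD, SPA on the rows of $\widehat{\mathbf{U}}$ to locate the pure-subject rows $\widehat{\mathbf{S}}$, then $\widetilde{\boldsymbol{\Pi}}=\widehat{\mathbf{U}}\widehat{\mathbf{U}}_{\widehat{\mathbf{S}},:}^{-1}$ and $\widetilde{\boldsymbol{\Theta}}=\widehat{\mathbf{V}}\widehat{\boldsymbol{\Sigma}}\widehat{\mathbf{U}}_{\widehat{\mathbf{S}},:}^{\top}$. It omits the pruning step of Algorithm 1 and any post-processing of the estimates, so it is an illustration rather than the paper's exact implementation:

```python
import numpy as np

def spa(U, K):
    """Successive projection algorithm (SPA): greedily select K rows of
    U that approximately sit at the vertices of the simplex formed by
    the rows of U. A standard textbook sketch; the paper additionally
    prunes noisy rows before this step (omitted here)."""
    X = U.copy()
    vertices = []
    for _ in range(K):
        i = int(np.argmax(np.linalg.norm(X, axis=1)))  # farthest row
        vertices.append(i)
        v = X[i] / np.linalg.norm(X[i])
        X = X - np.outer(X @ v, v)      # project out that direction
    return vertices

def spectral_gom(R, K):
    """Spectral GoM estimator sketched from the text: top-K SVD of the
    data matrix, SPA to find pure-subject rows S, then
    Pi = U U_{S,:}^{-1} and Theta = V Sigma U_{S,:}^T.
    Raw estimates are returned; post-processing is omitted."""
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    U, s, Vt = U[:, :K], s[:K], Vt[:K, :]
    S = spa(U, K)
    Pi_hat = U @ np.linalg.inv(U[S, :])
    Theta_hat = Vt.T @ np.diag(s) @ U[S, :].T
    return Pi_hat, Theta_hat, S

# Noiseless sanity check: applied to R0 = Pi Theta^T (exactly rank K),
# the estimator recovers Pi up to a permutation of its columns.
rng = np.random.default_rng(7)
N, J, K = 60, 20, 3
Pi0 = rng.dirichlet(np.ones(K), size=N)
Pi0[:K] = np.eye(K)                     # pure subjects (Condition 1)
Theta0 = rng.uniform(0.1, 0.9, size=(J, K))
Pi_t, Theta_t, S = spectral_gom(Pi0 @ Theta0.T, K)
recovery_err = max(
    min(np.abs(Pi_t[:, k] - Pi0[:, j]).max() for j in range(K))
    for k in range(K)
)
```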

Figure 2. Row vectors of $\mathbf{U}$ and $\widehat{\mathbf{U}}$ projected to $\mathbb{R}^2$ in the simulation setting with $N=2000$ and $K=3$. The red-shaded area is the population simplex, the green crosses are the subjects removed in pruning, and the blue dots form the empirical simplex retained after pruning.

We compare the performance of our proposed method with the joint maximum likelihood (JML) algorithm in the R package sirt, which is currently considered the most efficient estimation method for GoM models. We follow the default settings of JML: the maximum number of iterations is 600, the global parameter convergence criterion is 0.001, the maximum change in relative deviance is 0.001, and the minimum value of $\pi_{ik}$ and $\theta_{jk}$ is 0.001. We measure the parameter estimation error by the mean absolute error (MAE); that is, the error between the estimate $\widehat{\boldsymbol{\Pi}}$ and the ground truth $\boldsymbol{\Pi}$ for each replicate is quantified by the mean absolute bias (and similarly for $\boldsymbol{\Theta}$):

$$l(\boldsymbol{\Pi},\widehat{\boldsymbol{\Pi}}) := \frac{1}{NK}\sum_{i=1}^N\sum_{k=1}^K \big|\pi_{ik}-\widehat{\pi}_{ik}\big|, \qquad l(\boldsymbol{\Theta},\widehat{\boldsymbol{\Theta}}) := \frac{1}{JK}\sum_{j=1}^J\sum_{k=1}^K \big|\theta_{jk}-\widehat{\theta}_{jk}\big|.$$
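In practice, the MAE must be computed after aligning the estimated profiles with the ground truth, since the profiles are only identified up to the permutation $\mathbf{P}$ of Theorem 3. A minimal sketch of such a helper (our own, minimizing over all $K!$ column permutations, which is feasible for small $K$):

```python
import itertools
import numpy as np

def mae_aligned(T_hat, T):
    """Mean absolute error l(T, T_hat), minimized over column
    permutations because GoM profiles are identified only up to
    relabeling. Brute force over K! permutations (fine for small K)."""
    K = T.shape[1]
    return min(
        np.abs(T_hat[:, list(perm)] - T).mean()
        for perm in itertools.permutations(range(K))
    )

# Hypothetical check: an estimate that equals the truth after
# relabeling the columns, plus small Gaussian noise.
rng = np.random.default_rng(1)
T = rng.uniform(size=(100, 3))
T_hat = T[:, [1, 2, 0]] + rng.normal(scale=0.01, size=T.shape)
err = mae_aligned(T_hat, T)   # close to E|N(0, 0.01^2)|, about 0.008
```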

We present the comparisons between our spectral method and JML in terms of computation time and estimation error in Table 1 and Figs. 3, 4, and 5. As shown in Fig. 3, the computation time of both methods increases with the sample size $N$ and the number of items $J$. Notably, JML takes significantly more time than our proposed method, especially when $N$ and $J$ are large. For example, when $N=5000$, $J=1000$, and $K=8$, JML takes about 3 hours per replicate to reach the maximum number of iterations, while our proposed method takes less than 40 seconds on average. Table 1 further records the mean computation time in seconds across replications for JML and the proposed method.

Moreover, we observe that JML fails to converge in almost all replications when $K=8$, even when the sample size is as small as $N=200$. In summary, when the number of extreme latent profiles $K$ is large enough, JML not only takes a long time to run but also fails to reach convergence under the default convergence criterion.

Figure 3. Computation time for $K=3$ (left) and $K=8$ (right) in simulations. For each simulation setting, we show the median, 25% quantile, and 75% quantile of the computation time over the 100 replications.

Table 1. Average computation time in seconds across replications for JML and the proposed spectral method, for $K=3$ and $K=8$.

The huge computational advantage of the proposed method does not come at the cost of degraded estimation accuracy. Figures 4 and 5 show that the estimation error of both methods decreases as the sample size $N$ and the number of items $J$ increase. For $K=3$, when the sample size is large enough ($N\ge 2000$), our proposed method gives more accurate estimates on average than JML. For $K=8$, our proposed method yields higher estimation accuracy for $\boldsymbol{\Theta}$ on average across all sample sizes.

When $N\ge 2000$, our estimation accuracy for $\boldsymbol{\Pi}$ is slightly worse than, but comparable to, that of JML. We point out that due to their non-iterative nature, SVD- or eigendecomposition-based methods typically give somewhat worse estimates than iterative methods that aim to find the MLE; for example, see the comparison of an SVD-based method and a joint MLE method for item factor analysis in Zhang et al. (2020). However, it turns out that given a fixed computational budget (i.e., the default maximum number of iterations in the sirt package), the iterative JML method can give worse estimation accuracy than our spectral method.

Figure 4. Simulation results of estimation error for $K=3$. The boxplots show the mean absolute error for $\boldsymbol{\Theta}$ (left) and $\boldsymbol{\Pi}$ (right) versus the sample size $N$.

Figure 5. Simulation results of estimation error for $K=8$. The boxplots show the mean absolute error for $\boldsymbol{\Theta}$ (left) and $\boldsymbol{\Pi}$ (right) versus the sample size $N$.

We also compare our spectral method with a Gibbs sampling method for the GoM model. The parameters in the Gibbs sampler are initialized from their prior distributions; the number of burn-in samples is set to 5000, and we take the average of the 2000 samples after the burn-in phase as the Gibbs sampling estimates. Since Gibbs sampling is an MCMC algorithm, its computation can take a long time when the sample size $N$ and the number of items $J$ are large, so we only consider $N\in\{200, 1000, 2000\}$ and $K=3$. We compare the computation time and estimation accuracy of our proposed method, JML, and Gibbs sampling; the results are summarized in Tables 2 and 3. Compared to Gibbs sampling, our proposed method is approximately 10,000 times faster when $N=2000$ and $K=3$. In terms of estimation accuracy, Gibbs sampling is more accurate when the sample size is small, while for large enough sample sizes the three methods are comparable. These simulation results confirm that our method is well suited to large-scale and high-dimensional datasets.

Table 2 Average computation time in seconds for each method and sample size in simulations when K=3.

Table 3 Average mean absolute error for Θ and Π for each method and sample size in simulations when K=3.

We also note that without the pure-subject Condition 1, one can still obtain estimates from JML or the Gibbs sampler by directly running their estimation algorithms. However, naively running those algorithms on data generated under a GoM model that is not identifiable may give misleading results, because identifiability is a prerequisite for any valid statistical inference. In contrast, for our proposed spectral estimator, under the pure-subject condition we have established both identifiability (Theorem 1) and estimation consistency (Theorem 3) for both the mixed membership scores Π and the item parameters Θ.

We also compare the simulation results of our proposed method with and without the pruning step when K=3 to examine the effectiveness of pruning. The comparison of estimation accuracy for the two approaches is summarized in Fig. 6. For the estimation of Θ, estimation with pruning is consistently better across all sample sizes. For the estimation of Π, the two approaches give comparable results, with pruning performing slightly better when the sample size is large.

Figure 6 Comparison of estimation results with and without the pruning procedure when K=3.

To summarize, the above simulation results demonstrate the efficiency and accuracy of our proposed method for estimating GoM models: it provides estimation results comparable to JML and Gibbs sampling while being far more scalable to large datasets with many subjects and many items.
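To make the comparisons above concrete, the core SVD-plus-vertex-hunting pipeline can be sketched in a few lines of NumPy. This is a minimal, noiseless illustration only: it uses the classical successive projection algorithm (SPA) for vertex hunting and omits the pruning step, so it is a simplified sketch rather than our actual MATLAB implementation (see the Code Availability section).

```python
import numpy as np

def spa(U, K):
    # Successive projection algorithm: greedily pick K rows of U that are
    # (approximate) vertices of the simplex spanned by the pure subjects.
    X, S = U.copy(), []
    for _ in range(K):
        i = int(np.argmax(np.sum(X ** 2, axis=1)))
        S.append(i)
        v = X[i] / np.linalg.norm(X[i])
        X = X - np.outer(X @ v, v)  # project out the chosen vertex direction
    return S

def spectral_gom(R, K):
    # Rank-K truncated SVD of the response matrix, vertex hunting on the
    # left singular vectors, then recovery of Pi and Theta.
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    U, s, Vt = U[:, :K], s[:K], Vt[:K]
    S = spa(U, K)                        # indices of (near-)pure subjects
    Pi = U @ np.linalg.inv(U[S])         # U = Pi U_{S,:}  =>  Pi = U U_{S,:}^{-1}
    Pi = np.clip(Pi, 0, None)
    Pi = Pi / Pi.sum(axis=1, keepdims=True)  # project rows back onto the simplex
    Theta = (Vt.T * s) @ U[S].T          # Theta = V Sigma U_{S,:}^T
    return Pi, np.clip(Theta, 0.0, 1.0)
```

In the noiseless case R = ΠΘ⊤ with at least one pure subject per profile, this sketch recovers Π and Θ exactly, up to a common permutation of the K extreme profiles.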

4.2. Verifying Identifiability

We also conduct another simulation study to verify the identifiability results. We consider three different cases with K=3, N ∈ {200, 1000, 2000, 3000}, and J = N/5. In Case 1, we set the ground truth Θ by vertically concatenating copies of the identifiable 4×3 Θ-matrix in Example 1, and the generation mechanism for Π remains the same as in Sect. 4.1. In Case 2, we use the same Θ as in Case 1, while for Π, after generating the rows of Π′ = (π′_{ik}) from the Dirichlet(1) distribution, we truncate each π′_{ik} to be no less than 1/3 and then renormalize each row; after this operation, the minimum possible entry in the resulting Π is 0.2. Such a generated Π = (π_{ik}) does not satisfy the pure-subject Condition 1. In Case 3, we generate Π using the same mechanism as in Sect. 4.1, but generate Θ whose rows are replicates of the vector (0.8, 0.5, 0.2), so that rank(Θ) = 1. Case 1 falls into part (a) of Theorem 2 and is identifiable, whereas Cases 2 and 3 correspond to part (c) and are not identifiable. Figure 7 shows that the estimation errors in Cases 2 and 3 are significantly larger than those in Case 1. These results empirically verify our identifiability conclusions in Theorem 2.
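Case 2's membership generation is easy to reproduce; the short NumPy sketch below (with an illustrative sample size) also makes the 0.2 lower bound explicit.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 2000, 3
Pi_raw = rng.dirichlet(np.ones(K), size=N)          # rows ~ Dirichlet(1, 1, 1)
Pi_clip = np.maximum(Pi_raw, 1.0 / 3.0)             # truncate entries below 1/3
Pi = Pi_clip / Pi_clip.sum(axis=1, keepdims=True)   # renormalize each row

# After truncation every entry is at least 1/3 and every row sum is at most
# 1/3 + 1/3 + 1 = 5/3, so each membership score is at least (1/3)/(5/3) = 0.2.
# Hence no subject is pure and Condition 1 fails.
assert Pi.min() >= 0.2 - 1e-12
```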

Figure 7 A simulation study verifying identifiability. Estimation errors for three different cases; see the concrete settings of Cases 1, 2, and 3 in the main text. The boxplots represent the mean absolute error for Θ (left) and Π (right) versus the sample size N.

5. Real Data Example

We illustrate our proposed method by applying it to a real-world personality test dataset, the Woodworth Psychoneurotic Inventory (WPI) dataset. The WPI was designed by the United States Army during World War I to identify soldiers at risk of shell shock and is often credited as the first personality test. The WPI dataset can be downloaded from the Open Psychometrics Project website: http://openpsychometrics.org/_rawdata/. The dataset consists of binary yes/no responses to J=116 items from 6019 subjects. We remove subjects with missing responses and only keep subjects who are at least ten years old. This screening process leaves us with N=3842 subjects.

We apply both the JML method and our new spectral method to the WPI dataset to compare computation time. Figure 8 shows the computation time in seconds versus the number of extreme profiles K for the two methods. For K≥4, JML reaches the default maximum number of iterations in the sirt package and does not converge. As in the simulations, our spectral method takes significantly less computation time than JML. This observation again confirms that the proposed method is scalable to real-world datasets with a large sample size and a relatively large number of items.

Figure 8 Computation time for the WPI dataset. The lines indicate the run time in seconds versus K for JML and our spectral method. Note that for K≥4, JML reaches the default maximum number of iterations.

Choosing the number of extreme latent profiles K is a nontrivial problem for GoM models in practice. Available model selection techniques include the Akaike information criterion (AIC; Akaike, 1998) and the Bayesian information criterion (BIC; Schwarz, 1978) for likelihood-based methods, and the deviance information criterion (DIC; Spiegelhalter et al., 2002) for Bayesian MCMC methods. For factor analysis and principal component analysis, parallel analysis (Horn, 1965; Dobriban & Owen, 2019) is a popular eigenvalue-based method for selecting the latent dimension. For the WPI dataset, when choosing K=3, we observe that the estimated Θ̂ matrix has three well-separated column vectors, which imply a meaningful interpretation of the extreme profiles. When K is increased to 4, the columns of the estimated Θ̂ are no longer as well separated and interpretable. Moreover, choosing K=3 produces the smallest reconstruction error compared to K=2 or 4; that is, K=3 leads to the smallest mean absolute error between Π̂Θ̂⊤ and the observed data matrix R. Since the goal of the current data analysis is mainly to illustrate the proposed spectral method, we next present and discuss the estimation results for K=3.
The important problem of how to select K in GoM models in a principled manner is left as a future direction.
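The reconstruction-error criterion used above takes only a few lines to state in code. Here `fit` denotes a generic GoM estimator returning (Π̂, Θ̂); this interface is hypothetical and serves only to illustrate the criterion, not a function from our implementation.

```python
import numpy as np

def reconstruction_mae(R, Pi_hat, Theta_hat):
    # Mean absolute error between the model reconstruction Pi_hat @ Theta_hat.T
    # and the observed binary data matrix R.
    return float(np.abs(Pi_hat @ Theta_hat.T - R).mean())

def select_K(R, fit, candidates=(2, 3, 4)):
    # `fit(R, K)` is any estimator returning (Pi_hat, Theta_hat); choose the K
    # whose fitted reconstruction has the smallest mean absolute error.
    return min(candidates, key=lambda K: reconstruction_mae(R, *fit(R, K)))
```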

Figure 9 Heatmap of Θ̂ for a subset of 30 WPI items. The values are the estimated probabilities of responding "yes" to each item given each extreme profile.

We next take a closer look at the estimation results given by the proposed method for K=3. To interpret each extreme latent profile, we present a heatmap of part of the estimated item parameter matrix Θ̂ in Fig. 9. Since the number of items J=116 is large, we display only a subset of the items for better visualization. More specifically, the 30 items shown in Fig. 9 are those with the largest variability in (θ̂_{j,1}, θ̂_{j,2}, θ̂_{j,3}). Based on Fig. 9, we interpret profile 1 as people who are physically unhealthy, since they have higher probabilities of reporting fainting, dyspepsia, and asthma or hay fever. People belonging to profile 2 tend to be socially passive, since they worry, do not find their way easily, and get tired of things or people easily. Profile 3, on the other hand, is identified as the healthy group.

Figure 10 shows the ternary diagram of the estimated membership scores Π̂, made with the R package ggtern. The WPI dataset comes with the age of each subject, and we color-code the subjects in Fig. 10 by age: darker dots represent older people and lighter dots represent younger people. Each dot represents a subject, and its location in the equilateral triangle depicts the subject's three membership scores; dots with a large membership score on a profile lie closer to that profile's vertex. One can see that the pure-subject Condition 1 is satisfied here, since there are dots located almost exactly at each of the three vertices in Fig. 10. The figure also reveals that darker dots gather around the vertex of profile 1, meaning that older people are more likely to belong to the extreme profile identified as physically unhealthy; correspondingly, younger people are closer to profiles 2 and 3. Recalling our earlier interpretation of the extreme profiles from Fig. 9, these results are intuitively meaningful, since older people tend to be less healthy than younger people. It is worth emphasizing that the age information is not used in our estimation of the GoM model, yet our method generates interpretable results with respect to age.

Figure 10 Barycentric plot of the estimated membership scores Π̂ for the WPI data, color-coded by the age covariate.

6. Discussion

In this paper, we have adopted a spectral approach to GoM analysis of multivariate binary responses. Under the notion of expectation identifiability, we have proposed sufficient conditions that are close to being necessary for GoM models to be identifiable. For estimation, we have proposed an efficient SVD-based spectral algorithm to estimate the subject-level and population-level parameters in the GoM model. Our spectral method has a huge computational advantage over Bayesian or likelihood-based methods and is scalable to large-scale and high-dimensional data. Simulation results demonstrate the superior efficiency and accuracy of our method, and also empirically corroborate our identifiability conclusions. We hope this work provides a useful tool for psychometric researchers and practitioners by making GoM analysis less computationally daunting and statistically mysterious.

The expectation identifiability considered in this work is a suitable identifiability notion for large-scale and high-dimensional GoM models. A recent paper, Gu et al. (2023), studied the population identifiability of a variant of the GoM model called the dimension-grouped mixed membership model. Generally speaking, population identifiability is the traditional notion of identifiability, which aims at identifying the population parameters in the model but not the individual latent variables. In the context of the dimension-grouped GoM model, population identifiability in Gu et al. (2023) means identifying the item parameters Θ and the distribution parameters for Π (i.e., the Dirichlet parameters (α_1, …, α_K), where each row of Π is assumed to follow Dirichlet(α_1, …, α_K)), but not identifying the entries of Π directly. Such a traditional identifiability notion is more suitable for the low-dimensional case with small J and large N, as considered in Gu et al. (2023). In contrast, in this work we are motivated by the large-scale and high-dimensional setting with both N and J going to infinity. In this setting, expectation identifiability, which concerns both Θ and Π, is the suitable notion to study, and we have also established the corresponding consistency result for Θ and Π when the pure-subject condition for expectation identifiability is satisfied.

The pure-subject Condition 1 is a mild condition that is crucial to both our identifiability result and our estimation procedure. It may be of interest to test whether this condition holds in practice. Testing Condition 1 is equivalent to testing whether a data cloud in a general K-dimensional space has a simplex structure, which is a nontrivial problem. When K=3, a visual inspection is plausible: one can plot the row vectors of the left singular matrix and check whether the point cloud forms a triangle in the 3-dimensional space. However, when K is larger than 3, visual inspection becomes infeasible. Recently, a formal statistical procedure has been proposed in a working paper (Freyaldenhoven et al., 2023) to test the anchor-word assumption for topic models, which are another type of mixed membership model. The anchor-word assumption requires that there exists at least one anchor word for each topic, which is analogous to our pure-subject Condition 1. Freyaldenhoven et al. (2023) consider testing the hypothesis that anchor words exist. They first show that for a matrix P that admits a low-rank factorization under the anchor-word assumption, one can write P = CP for some matrix C belonging to a certain set 𝒞_K. Based on this property, they construct a test statistic T = inf_{C ∈ 𝒞_K} ‖P − CP‖ for the null hypothesis that the anchor-word assumption holds. To achieve a level-α test, the null hypothesis is rejected if T is larger than the (1−α) quantile of the distribution of the test statistic under the null. We conjecture that it might be possible to generalize this procedure to the GoM setting and leave this direction as future work.

There are several additional research directions worth exploring in the future. First, this work has focused on binary responses, while in practice it is also of interest to perform GoM analysis of polytomous responses, such as Likert-scale responses in psychological questionnaires (see, e.g., Gu et al., 2023). It would be desirable to extend our method to such multivariate polytomous data. Under a GoM model for polytomous data, a low-rank structure similar to (5) should still exist for each response category. Since our method is built upon the low-rank structure and the pure-subject condition, we conjecture that exploiting such structures in polytomous data could still lead to effective spectral algorithms. Second, it is worth developing a model that directly incorporates additional covariates into the GoM analysis. Our current method does not use additional covariates, such as the age information in the WPI dataset. Proposing a covariate-assisted GoM model and a corresponding spectral method may be methodologically interesting and practically useful.

Acknowledgements

This work is partially supported by National Science Foundation Grant DMS-2210796. The authors thank the editor Prof. Matthias von Davier, an associate editor, and three anonymous reviewers for their helpful and constructive comments.

Declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Code Availability

The MATLAB code implementing the proposed method is available at this link: https://github.com/lscientific/spectral_GoM.

Appendix A: Proofs of the Identifiability Results

Proof of Proposition 1

If we take the rows corresponding to ${\textbf{S}}$ on both sides of the SVD (6) and use the fact that $\varvec{\Pi }_{{\textbf{S}},:}=\textbf{I}_K$, then

(15) $$\textbf{U}_{{\textbf{S}},:}\varvec{\Sigma }\textbf{V}^{\top }=[\textbf{R}_0]_{{\textbf{S}},:} =\varvec{\Pi }_{{\textbf{S}},:} \varvec{\Theta }^{\top }=\varvec{\Theta }^{\top }.$$

This gives an expression for $\varvec{\Theta }$:

(16) $$\varvec{\Theta }= \textbf{V}\varvec{\Sigma } \textbf{U}^{\top }_{{\textbf{S}},:}.$$

Further note that

(17) $$\textbf{U}=\textbf{R}_0\textbf{V}\varvec{\Sigma }^{-1}=\varvec{\Pi }\varvec{\Theta }^{\top }\textbf{V}\varvec{\Sigma }^{-1}.$$

If we plug (15) into (17) and note that $\textbf{V}$ has orthonormal columns, we have

(18) $$\textbf{U}=\varvec{\Pi }\textbf{U}_{{\textbf{S}},:}\varvec{\Sigma }\textbf{V}^{\top }\textbf{V}\varvec{\Sigma }^{-1}=\varvec{\Pi }\textbf{U}_{{\textbf{S}},:}.$$

Equation (18) also shows that $\textbf{U}_{{\textbf{S}},:}$ must be full rank, since both $\textbf{U}$ and $\varvec{\Pi }$ have rank $K$. Therefore, we have an expression for $\varvec{\Pi }$:

$$\varvec{\Pi }=\textbf{U}(\textbf{U}_{{\textbf{S}},:})^{-1}.$$

On the other hand, based on the singular value decomposition $\textbf{U}\varvec{\Sigma }\textbf{V}^{\top } = \varvec{\Pi } \varvec{\Theta }^{\top }$, we can left-multiply both sides by $(\varvec{\Pi }^{\top }\varvec{\Pi })^{-1}\varvec{\Pi }^{\top }$ to obtain

(19) $$(\varvec{\Pi }^{\top }\varvec{\Pi })^{-1}\varvec{\Pi }^{\top } \textbf{U}\varvec{\Sigma }\textbf{V}^{\top } =\varvec{\Theta }^{\top } \quad \Longrightarrow \quad \varvec{\Theta }=\textbf{V}\varvec{\Sigma }\textbf{U}^{\top }\varvec{\Pi }(\varvec{\Pi }^{\top }\varvec{\Pi })^{-1}.$$

We next show that (16) and (19) are equivalent. Since $\textbf{U}^{\top }\textbf{U}=\textbf{I}_K$, (18) leads to

$$\textbf{U}_{{\textbf{S}},:}^{\top }\varvec{\Pi }^{\top }\varvec{\Pi }\textbf{U}_{{\textbf{S}},:}=\textbf{I}_K,$$

which yields

$$(\varvec{\Pi }^{\top }\varvec{\Pi })^{-1}=\textbf{U}_{{\textbf{S}},:}\textbf{U}^{\top }_{{\textbf{S}},:}.$$

Plugging this equation into (19), we have

$$\begin{aligned} \varvec{\Theta }&= \textbf{V}\varvec{\Sigma }\textbf{U}^{\top }\varvec{\Pi }\textbf{U}_{{\textbf{S}},:}\textbf{U}^{\top }_{{\textbf{S}},:}\\ &= \textbf{V}\varvec{\Sigma }\textbf{U}^{\top }\textbf{U}(\textbf{U}_{{\textbf{S}},:})^{-1}\textbf{U}_{{\textbf{S}},:} \textbf{U}^{\top }_{{\textbf{S}},:} \\ &= \textbf{V}\varvec{\Sigma }\textbf{U}^{\top }_{{\textbf{S}},:}. \end{aligned}$$

This shows the equivalence of (16) and (19) and completes the proof of the proposition. $\square$
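As a sanity check (not part of the formal argument), the two closed-form expressions above, $\varvec{\Theta }=\textbf{V}\varvec{\Sigma }\textbf{U}^{\top }_{{\textbf{S}},:}$ and $\varvec{\Pi }=\textbf{U}(\textbf{U}_{{\textbf{S}},:})^{-1}$, can be verified numerically. The following Python/NumPy sketch is illustrative only; the dimensions, random seed, and parameter draws are arbitrary choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, J, K = 30, 10, 3

# Membership matrix with pure subjects in the first K rows, so Pi_{S,:} = I_K
Pi = np.vstack([np.eye(K), rng.dirichlet(np.ones(K), size=N - K)])
Theta = rng.uniform(0.1, 0.9, size=(J, K))   # item parameters; rank K almost surely
R0 = Pi @ Theta.T                            # expectation of the data matrix, rank K

# Top-K SVD of R0
U, s, Vt = np.linalg.svd(R0, full_matrices=False)
U, Sigma, V = U[:, :K], np.diag(s[:K]), Vt[:K].T

S = np.arange(K)                             # indices of the pure subjects
Theta_hat = V @ Sigma @ U[S].T               # Eq. (16): Theta = V Sigma U_{S,:}^T
Pi_hat = U @ np.linalg.inv(U[S])             # Pi = U (U_{S,:})^{-1}

print(np.allclose(Theta_hat, Theta), np.allclose(Pi_hat, Pi))
```

Both checks return True up to floating-point error. The recovery is exact here because $\textbf{R}_0$ is noiseless; with observed data one only has a noisy version of $\textbf{R}_0$, which is why the consistency analysis in the main text is needed.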

Proof of Theorem 1

Without loss of generality, assume that the first extreme latent profile does not have a pure subject. Then $\pi _{i1}\le 1-\delta$ for all $i=1,\dots , N$ and some $\delta >0$. For each $0<\epsilon <\delta$, define a $K\times K$ matrix

$$\textbf{M}_{\epsilon } = \begin{bmatrix} 1+(K-1)\epsilon ^2 & -\epsilon ^2\textbf{1}_{K-1}^{\top } \\ \textbf{0}_{K-1} & \epsilon \textbf{1}_{K-1}\textbf{1}_{K-1}^{\top } +(1-(K-1)\epsilon )\textbf{I}_{K-1} \end{bmatrix}.$$

We will show that $\widetilde{\varvec{\Pi }}_{\epsilon }=\varvec{\Pi }\textbf{M}_{\epsilon }$ and $\widetilde{\varvec{\Theta }}_{\epsilon }=\varvec{\Theta }(\textbf{M}_{\epsilon }^{-1})^{\top }$ form a valid parameter set that yields the same $\textbf{R}_0=\widetilde{\varvec{\Pi }}_{\epsilon }\widetilde{\varvec{\Theta }}_{\epsilon }^{\top }$. That is, each element of $\widetilde{\varvec{\Pi }}_{\epsilon }$ and $\widetilde{\varvec{\Theta }}_{\epsilon }$ lies in [0, 1] and the rows of $\widetilde{\varvec{\Pi }}_{\epsilon }$ sum to one.
Since $\textbf{M}_0=\textbf{I}_K$ and the matrix determinant is continuous, $\textbf{M}_{\epsilon }$ is full rank when $\epsilon$ is small enough. Also notice that $\textbf{M}_{\epsilon }\textbf{1}_K=\textbf{1}_K$. Therefore, $\widetilde{\varvec{\Pi }}_{\epsilon }\textbf{1}_K=\varvec{\Pi }\textbf{M}_{\epsilon }\textbf{1}_K=\varvec{\Pi }\textbf{1}_K=\textbf{1}_N$.
For each $i=1,\dots , N$, $\widetilde{\pi }_{i1}=\pi _{i1}(1+(K-1)\epsilon ^2)\ge 0$. For any fixed $k=2,\dots ,K$, $(\textbf{M}_{\epsilon })_{kk} = 1 - (K-2)\epsilon$ and $(\textbf{M}_{\epsilon })_{mk}=\epsilon$ for $m=2,\dots ,K$ with $m\ne k$.
Thus when $\epsilon \le 1/(K-1)$, we have $(\textbf{M}_{\epsilon })_{mk}\ge \epsilon$ for any $m=2,\dots , K$. Therefore, the following inequalities hold for each $i=1,\dots , N$ and $k=2,\dots , K$:

$$\begin{aligned} \widetilde{\pi }_{ik}&=-\epsilon ^2\pi _{i1}+\sum _{m=2}^K \pi _{im}(\textbf{M}_{\epsilon })_{mk} \ge -\epsilon ^2\pi _{i1} +\epsilon \sum _{m=2}^K\pi _{im}\\ &\ge -\epsilon ^2(1-\delta )+\epsilon (1-\pi _{i1}) \ge -\epsilon ^2(1-\delta ) + \epsilon \delta \ge \epsilon \delta ^2 >0. \end{aligned}$$

Here we also used $\sum _{k=1}^K\pi _{ik}=1$, $\pi _{i1}\le 1-\delta$, and $\epsilon <\delta$.

Further notice that

$$\textbf{M}_{\epsilon }-\textbf{I}_K= \begin{bmatrix} \epsilon ^2(K-1) & -\epsilon ^2\textbf{1}_{K-1}^{\top } \\ \textbf{0}_{K-1} & \epsilon \textbf{1}_{K-1}\textbf{1}_{K-1}^{\top } -\epsilon (K-1)\textbf{I}_{K-1} \end{bmatrix},$$

which leads to $\Vert \textbf{M}_{\epsilon }-\textbf{I}_K\Vert _F {\mathop {\longrightarrow }\limits ^{\epsilon \rightarrow 0}} 0$. Here $\Vert \textbf{A}\Vert _F=\sqrt{\sum _{i=1}^m\sum _{j=1}^n a_{ij}^2}$ is the Frobenius norm of any matrix $\textbf{A}=(a_{ij})\in {\mathbb {R}}^{m\times n}$. By the continuity of the matrix inverse and the Frobenius norm,

$$\Vert \textbf{M}_{\epsilon }^{-1}-\textbf{I}_K\Vert _F {\mathop {\longrightarrow }\limits ^{\epsilon \rightarrow 0}} 0.$$

Therefore,

$$\Vert \widetilde{\varvec{\Theta }}_{\epsilon }-\varvec{\Theta }\Vert _F \le \Vert \varvec{\Theta }\Vert _2\Vert \textbf{I}_K -\textbf{M}_{\epsilon }^{-1}\Vert _F {\mathop {\longrightarrow }\limits ^{\epsilon \rightarrow 0}} 0.$$

Since all the elements of $\varvec{\Theta }$ are strictly in (0, 1), the elements of $\widetilde{\varvec{\Theta }}_{\epsilon }$ must be in [0, 1] when $\epsilon$ is small enough. Also note that $\textbf{M}_{\epsilon }$ is not a permutation matrix when $\epsilon >0$; thus, the GoM model is not identifiable up to a permutation. This completes the proof of the theorem. $\square$
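The construction above can also be checked numerically. The following Python/NumPy sketch (illustrative only; $K$, $\delta$, $\epsilon$, and all random draws are arbitrary choices satisfying $\epsilon <\delta$ and $\epsilon \le 1/(K-1)$) builds $\textbf{M}_{\epsilon }$, generates $\varvec{\Pi }$ with $\pi _{i1}\le 1-\delta$ for every subject, and confirms that $\widetilde{\varvec{\Pi }}_{\epsilon }=\varvec{\Pi }\textbf{M}_{\epsilon }$ is a valid membership matrix producing the same expected data matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
N, J, K = 200, 10, 3
delta, eps = 0.2, 0.05                # requires eps < delta and eps <= 1/(K-1)

# The matrix M_eps from the proof
M = np.zeros((K, K))
M[0, 0] = 1 + (K - 1) * eps**2
M[0, 1:] = -(eps**2)
M[1:, 1:] = eps * np.ones((K - 1, K - 1)) + (1 - (K - 1) * eps) * np.eye(K - 1)

# Memberships with no pure subject for profile 1: pi_{i1} <= 1 - delta
p1 = rng.uniform(0, 1 - delta, size=N)
Pi = np.column_stack([p1, rng.dirichlet(np.ones(K - 1), size=N) * (1 - p1)[:, None]])
Theta = rng.uniform(0.1, 0.9, size=(J, K))

Pi_t = Pi @ M                         # alternative memberships
Theta_t = Theta @ np.linalg.inv(M).T  # chosen so that Pi_t @ Theta_t.T = Pi @ Theta.T

print(np.allclose(M @ np.ones(K), np.ones(K)))             # rows of M_eps sum to one
print(Pi_t.min() >= 0, np.allclose(Pi_t.sum(axis=1), 1))   # Pi_t is a valid membership matrix
print(np.allclose(Pi_t @ Theta_t.T, Pi @ Theta.T))         # identical expected data matrix
```

All printed checks are True, yet $\textbf{M}_{\epsilon }$ is not a permutation matrix, so the two parameter sets are genuinely different, illustrating the claimed non-identifiability when a profile lacks pure subjects.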

Proof of Theorem 2

Suppose $\textrm{rank}(\varvec{\Theta })=r\le K$. Now, consider the SVD $\textbf{R}_0=\textbf{U}\varvec{\Sigma }\textbf{V}^{\top }$ with $\textbf{U}\in {\mathbb {R}}^{N\times r}$, $\textbf{V}\in {\mathbb {R}}^{J\times r}$, $\varvec{\Sigma }\in {\mathbb {R}}^{r\times r}$. For simplicity, we continue to use the same notation $\textbf{U},\varvec{\Sigma },\textbf{V}$ here even though the matrix dimensions have changed. Without loss of generality, we can reorder the subjects and memberships so that $\varvec{\Pi }_{1:K,:}=\textbf{I}_K$ from Assumption 1. According to Proposition 1,

(20) $$\textbf{U}=\varvec{\Pi }\textbf{U}_{1:K,:}.$$

Since $\textrm{rank}(\varvec{\Pi })=K$ and $\textrm{rank}(\textbf{U})=r$, we must have $\textrm{rank}(\textbf{U}_{1:K,:})=\textrm{rank}(\textbf{U})=r$.

Suppose another set of parameters $(\widetilde{\varvec{\Pi }},\widetilde{\varvec{\Theta }})$ yields the same $\textbf{R}_0$, and denote its corresponding pure subject index vector by $\widetilde{{\textbf{S}}}$, so that $\widetilde{\varvec{\Pi }}_{\widetilde{{\textbf{S}}},:}=\textbf{I}_K$. Similarly, we have

(21) $$\textbf{U}=\widetilde{\varvec{\Pi }}\textbf{U}_{\widetilde{{\textbf{S}}},:}.$$

Taking the $\widetilde{{\textbf{S}}}$ rows of both sides of (20) and the first $K$ rows of both sides of (21) yields

$$\varvec{\Pi }_{\widetilde{{\textbf{S}}},:}\textbf{U}_{1:K,:} =\textbf{U}_{\widetilde{{\textbf{S}}},:},\qquad \textbf{U}_{1:K,:} =\widetilde{\varvec{\Pi }}_{1:K,:}\textbf{U}_{\widetilde{{\textbf{S}}},:}.$$

The above equations show that each row of $\textbf{U}_{\widetilde{{\textbf{S}}},:}$ lies in the convex hull of the rows of $\textbf{U}_{1:K,:}$, and each row of $\textbf{U}_{1:K,:}$ lies in the convex hull of the rows of $\textbf{U}_{\widetilde{{\textbf{S}}},:}$. Therefore, there must exist a permutation matrix $\textbf{P}$ such that $\textbf{U}_{\widetilde{{\textbf{S}}},:}=\textbf{P}\textbf{U}_{1:K,:}$. Combining this fact with (20) and (21) leads to

(22) $$(\varvec{\Pi }- \widetilde{\varvec{\Pi }}\textbf{P}) \textbf{U}_{1:K,:} = \textbf{0}.$$

Proof of part (a). For part (a), $r=K$ and $\textbf{U}_{1:K,:}$ is full rank according to (20). In this case, (22) directly leads to $\varvec{\Pi }=\widetilde{\varvec{\Pi }}\textbf{P}$ and thus $\widetilde{\varvec{\Theta }}=\varvec{\Theta }\textbf{P}^{\top }$.

Now generally consider $r<K$. By permuting the rows and columns of $\varvec{\Theta }$, we can write

$$\varvec{\Theta }= \begin{bmatrix} \textbf{C} & \textbf{C}\textbf{W}_1 \\ \textbf{W}_2^{\top }\textbf{C} & \textbf{W}_2^{\top }\textbf{C}\textbf{W}_1 \end{bmatrix}, \tag{23}$$

where $\textbf{C}\in {\mathbb {R}}^{r\times r}$ is full rank, $\textbf{W}_1\in {\mathbb {R}}^{r\times (K-r)}$, and $\textbf{W}_2\in {\mathbb {R}}^{r\times (J-r)}$. Now comparing the block columns of (23) with $\varvec{\Theta }=\textbf{V}\varvec{\Sigma }(\textbf{U}_{1:K,:})^{\top }$ gives

$$\begin{aligned} \begin{bmatrix} \textbf{I}_r \\ \textbf{W}_2^{\top } \end{bmatrix}\textbf{C}&= \textbf{V}\varvec{\Sigma }(\textbf{U}_{1:r,:})^{\top },\\ \begin{bmatrix} \textbf{I}_r \\ \textbf{W}_2^{\top } \end{bmatrix}\textbf{C}\textbf{W}_1&= \textbf{V}\varvec{\Sigma }(\textbf{U}_{(r+1):K,:})^{\top }. \end{aligned}\tag{24}$$

Since $\textbf{C}$ is full rank, $\textbf{U}_{1:r,:}$ must also be full rank, and (24) can be translated into

$$\textbf{U}_{(r+1):K,:}=\textbf{W}_1^{\top } \textbf{U}_{1:r,:}.$$

Therefore,

$$\textbf{U}_{1:K,:}=\begin{bmatrix} \textbf{I}_r \\ \textbf{W}_1^{\top } \end{bmatrix}\textbf{U}_{1:r,:}. \tag{25}$$
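The block structure in (25) can be checked numerically. The following sketch (toy dimensions, names `b`, `w1` are illustrative and not objects from the paper) builds a rank-$r$ matrix of the form $[\textbf{B}, \textbf{B}\textbf{W}_1]$ and verifies that rows $r+1,\dots ,K$ of the right singular factor equal $\textbf{W}_1^{\top }$ times rows $1,\dots ,r$:

```python
import numpy as np

rng = np.random.default_rng(4)
J, K, r = 12, 4, 2
# Toy Theta of rank r: last K-r columns are linear combinations b @ w1
b = rng.uniform(0.1, 0.9, size=(J, r))
w1 = rng.uniform(-0.5, 0.5, size=(r, K - r))
theta = np.column_stack([b, b @ w1])

# Thin SVD of Theta^T, keeping the r nonzero singular values
u, s, vt = np.linalg.svd(theta.T, full_matrices=False)
u, s, vt = u[:, :r], s[:r], vt[:r, :]

# Rows r+1..K of the left factor equal W1^T times rows 1..r, matching (25)
assert np.allclose(u[r:, :], w1.T @ u[:r, :])
```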

Plugging (25) into (22) and again using the fact that $\textbf{U}_{1:r,:}$ is full rank, we have

$$(\varvec{\Pi }-\widetilde{\varvec{\Pi }}\textbf{P}) \begin{bmatrix} \textbf{I}_r \\ \textbf{W}_1^{\top } \end{bmatrix} = \textbf{0}. \tag{26}$$

Proof of part (b). Denote $\textbf{A}:=\varvec{\Pi }-\widetilde{\varvec{\Pi }}\textbf{P}$. If $r=K-1$, then $\textbf{W}_1 = (W_{1,1}, \dots , W_{1,K-1})^{\top }$ is a $(K-1)$-dimensional vector and (26) gives us

$$\textbf{A}_{:,j} + W_{1,j}\,\textbf{A}_{:,K}=\textbf{0}_N, \quad \forall j=1,\dots , K-1. \tag{27}$$

Denote by $\textbf{1}_r$ the $r$-dimensional column vector with all entries equal to one. Right multiplying both sides of (26) by $\textbf{1}_r$ yields

$$\textbf{A}\begin{bmatrix} \textbf{1}_r \\ \textbf{W}_1^{\top }\textbf{1}_r \end{bmatrix} = \textbf{0}.$$

Also, note that both $\varvec{\Pi }$ and $\widetilde{\varvec{\Pi }}\textbf{P}$ have row sums of 1. Hence,

$$\begin{aligned}&\sum _{j=1}^{K}\textbf{A}_{:,j} =\textbf{0}_N,\\&\sum _{j=1}^{K-1}\textbf{A}_{:,j} + \textbf{W}_1^{\top } \textbf{1}_r\,\textbf{A}_{:,K} =\textbf{0}_N. \end{aligned}$$

Taking the difference of the two equations above gives $(1-\textbf{W}_1^{\top }\textbf{1}_r)\, \textbf{A}_{:,K} = \textbf{0}_N$. If $\textbf{W}_1^{\top }\textbf{1}_r\ne 1$, then $\textbf{A}_{:,K}$ must equal $\textbf{0}_N$, which by (27) implies $\textbf{A}_{:,j}=\textbf{0}_N$ for all $j=1,\dots , K-1$. Therefore, $\textbf{A}=\varvec{\Pi }-\widetilde{\varvec{\Pi }}\textbf{P}=\textbf{0}$, which leads to $\widetilde{\varvec{\Theta }}=\varvec{\Theta }\textbf{P}^{\top }$.

Note that using (25) leads to

$$\varvec{\Theta }^{\top }=\textbf{U}_{1:K,:}\varvec{\Sigma }\textbf{V}^{\top }=\begin{bmatrix} \textbf{I}_{K-1} \\ \textbf{W}_1^{\top } \end{bmatrix}\textbf{U}_{1:(K-1),:}\varvec{\Sigma }\textbf{V}^{\top } = \begin{bmatrix} \textbf{I}_{K-1} \\ \textbf{W}_1^{\top } \end{bmatrix}(\varvec{\Theta }_{:,1:(K-1)})^{\top }.$$

Hence $\varvec{\Theta }_{:,K}=\varvec{\Theta }_{:,1:(K-1)}\textbf{W}_1$. Therefore, the condition $\textbf{W}_1^{\top }\textbf{1}_r\ne 1$ is equivalent to the $K$-th column of $\varvec{\Theta }$ not being an affine combination of the other columns.
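The equivalence can be illustrated with a small numerical example. In this hedged sketch (toy sizes; `base` and the weight vectors are illustrative), the $K$-th column of $\varvec{\Theta }$ is a linear combination of the other columns, and whether the weights sum to one is exactly whether that combination is affine:

```python
import numpy as np

rng = np.random.default_rng(0)
J, K = 10, 4
# Toy rank-(K-1) Theta: last column is base @ w1 for some weight vector w1
base = rng.uniform(0.1, 0.9, size=(J, K - 1))

w1 = np.array([0.5, 0.2, 0.1])          # weights sum to 0.8 != 1
theta = np.column_stack([base, base @ w1])
assert np.linalg.matrix_rank(theta) == K - 1
# W1^T 1 != 1: column K is NOT an affine combination of the others
assert not np.isclose(w1.sum(), 1.0)

w1_aff = np.array([0.5, 0.3, 0.2])      # weights sum to 1
theta_aff = np.column_stack([base, base @ w1_aff])
# W1^T 1 = 1: column K IS an affine combination (part (c) applies instead)
assert np.isclose(w1_aff.sum(), 1.0)
```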

Proof of part (c). Now consider the case of either $r=K-1$ with $\textbf{W}_1^{\top }\textbf{1}_r=1$, or $r<K-1$. Assume subject $m$ is completely mixed, so that $\pi _{m,k}>0$ for all $k=1,\dots , K$. Define

$$\widetilde{\varvec{\pi }}_i^{\top }= \begin{cases} \varvec{\pi }_i^{\top } & \text {if } i\ne m,\\ \varvec{\pi }_m^{\top } + \epsilon \varvec{\beta }^{\top } [-\textbf{W}_1^{\top } , ~\textbf{I}_{K-r}] & \text {if } i=m, \end{cases}$$

where $\epsilon >0$ is small enough so that all entries of $\widetilde{\varvec{\pi }}_m$ lie in $(0,1)$, and $\varvec{\beta }\in {\mathbb {R}}^{K-r}$ satisfies $\varvec{\beta }^{\top }(\textbf{1}_{K-r}-\textbf{W}_1^{\top } \textbf{1}_{r})=0$.
Such a $\varvec{\beta } \ne \textbf{0}$ always exists under the assumption in part (c): if $r=K-1$ with $\textbf{W}_1^{\top }\textbf{1}_r=1$, then $\varvec{\beta }^{\top }(\textbf{1}_{K-r}-\textbf{W}_1^{\top } \textbf{1}_{r}) = \beta (1-1)=0$ holds for any $\beta \in {\mathbb {R}}$; if $r<K-1$, then $K-r\ge 2$, so $\varvec{\beta }$ has dimension at least two and the inner product equation $\varvec{\beta }^{\top }(\textbf{1}_{K-r}-\textbf{W}_1^{\top } \textbf{1}_{r})=0$ must have a nonzero solution. The constructed $\widetilde{\varvec{\Pi }}$ has row sums of 1 by the construction of $\varvec{\beta }$, since $\varvec{\beta }^{\top }[-\textbf{W}_1^{\top }, ~\textbf{I}_{K-r}]\textbf{1}_K = \varvec{\beta }^{\top }(\textbf{1}_{K-r}-\textbf{W}_1^{\top }\textbf{1}_r)=0$.
Furthermore, $\widetilde{\varvec{\Pi }}\textbf{U}_{1:K,:}$ and $\varvec{\Pi }\textbf{U}_{1:K,:}$ can differ only in the $m$-th row, and

$$\widetilde{\varvec{\pi }}_m^{\top } \textbf{U}_{1:K,:}=\varvec{\pi }^{\top }_m\textbf{U}_{1:K,:} + \epsilon \varvec{\beta }^{\top }[-\textbf{W}_1^{\top }, ~ \textbf{I}_{K-r}] \begin{bmatrix} \textbf{I}_r \\ \textbf{W}_1^{\top } \end{bmatrix}\textbf{U}_{1:r,:}=\varvec{\pi }_m^{\top }\textbf{U}_{1:K,:}.$$

Hence, $\widetilde{\varvec{\Pi }}\textbf{U}_{1:K,:}=\varvec{\Pi }\textbf{U}_{1:K,:}$. This gives us

$$\varvec{\Pi }\varvec{\Theta }^{\top } = \varvec{\Pi }\textbf{U}_{1:K,:}\varvec{\Sigma }\textbf{V}^{\top } = \widetilde{\varvec{\Pi }} \textbf{U}_{1:K,:}\varvec{\Sigma }\textbf{V}^{\top }=\widetilde{\varvec{\Pi }}\varvec{\Theta }^{\top }.$$

We can see that $(\varvec{\Pi },\varvec{\Theta })$ and $(\widetilde{\varvec{\Pi }}, \varvec{\Theta })$ yield the same model but $\varvec{\Pi }\ne \widetilde{\varvec{\Pi }}$. This completes the proof of part (c). $\square $
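The non-identifiability construction of part (c) can be reproduced numerically. The sketch below (toy sizes; all variable names are illustrative, not objects from the paper) takes $r=K-1$ with $\textbf{W}_1^{\top }\textbf{1}_r=1$, perturbs one membership row along $\varvec{\beta }^{\top }[-\textbf{W}_1^{\top }, \textbf{I}_{K-r}]$, and checks that row sums and the product $\varvec{\Pi }\varvec{\Theta }^{\top }$ are unchanged:

```python
import numpy as np

rng = np.random.default_rng(1)
N, J, K = 6, 8, 4
r = K - 1
# W1 with entries summing to 1: last column of Theta is an affine combination
w1 = np.array([0.6, 0.3, 0.1])
base = rng.uniform(0.1, 0.9, size=(J, r))
theta = np.column_stack([base, base @ w1])

pi = rng.dirichlet(5 * np.ones(K), size=N)   # completely mixed membership rows
# Perturbation direction beta^T [-W1^T, I_{K-r}] with beta = 1 (scalar here)
direction = np.concatenate([-w1, [1.0]])
eps = 1e-3
pi_tilde = pi.copy()
pi_tilde[0] += eps * direction

assert np.allclose(pi_tilde.sum(axis=1), 1.0)       # row sums preserved
assert np.allclose(pi @ theta.T, pi_tilde @ theta.T)  # same model...
assert not np.allclose(pi, pi_tilde)                  # ...different Pi
```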

Appendix B: Proof of the Consistency Theorem 3

For any matrix $\textbf{A}$ with SVD $\textbf{A}=\textbf{U}_{\textbf{A}}\varvec{\Sigma }_{\textbf{A}}\textbf{V}_{\textbf{A}}^{\top }$, define

$$\text {sgn}(\textbf{A}) := \textbf{U}_{\textbf{A}}\textbf{V}_{\textbf{A}}^{\top }.$$

According to Remark 4.1 in Chen et al. (2021a), for any two matrices $\textbf{A}, \textbf{B}\in {\mathbb {R}}^{n\times r}$ with $r\le n$:

$$\text {sgn}(\textbf{A}^{\top }\textbf{B}) = \mathop {\arg \min }_{\textbf{O}\in {\mathcal {O}}^{r\times r}} \Vert \textbf{A}\textbf{O}- \textbf{B}\Vert ,$$

where ${\mathcal {O}}^{r\times r}$ is the set of all orthonormal matrices of size $r\times r$. The 2-to-$\infty $ norm of a matrix $\textbf{A}$ is defined as the maximum row $l_2$ norm, i.e., $\Vert \textbf{A}\Vert _{2,\infty }=\max _i \Vert \textbf{e}_i^{\top } \textbf{A}\Vert $. Define

$$r=\frac{\max \{N,J\}}{\min \{N,J\}}.$$
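The alignment property of $\text {sgn}(\textbf{A}^{\top }\textbf{B})$ above is the classical orthogonal Procrustes solution, and it is easy to check numerically. In this hedged sketch (toy data; `a`, `b`, `q` are illustrative), the candidate rotation $\textbf{U}_{\textbf{A}}\textbf{V}_{\textbf{A}}^{\top }$ of $\textbf{A}^{\top }\textbf{B}$ beats a batch of random orthonormal matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
n, r = 50, 3
a = np.linalg.qr(rng.normal(size=(n, r)))[0]   # orthonormal columns
q = np.linalg.qr(rng.normal(size=(r, r)))[0]   # random orthonormal matrix
b = a @ q + 0.01 * rng.normal(size=(n, r))     # rotated copy plus noise

ua, _, vat = np.linalg.svd(a.T @ b)
sgn_ab = ua @ vat                              # sgn(A^T B) = U_A V_A^T

assert np.allclose(sgn_ab.T @ sgn_ab, np.eye(r))   # orthonormal
best = np.linalg.norm(a @ sgn_ab - b)              # Frobenius misalignment
for _ in range(200):                               # vs. random competitors
    o = np.linalg.qr(rng.normal(size=(r, r)))[0]
    assert best <= np.linalg.norm(a @ o - b) + 1e-12
```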

Under Condition 2, we have $\kappa (\textbf{R}_0) = \frac{\sigma _1(\varvec{\Pi }\varvec{\Theta }^{\top })}{\sigma _K(\varvec{\Pi }\varvec{\Theta }^{\top })} \le \frac{\sigma _1(\varvec{\Pi })\sigma _1(\varvec{\Theta })}{\sigma _K(\varvec{\Pi })\sigma _K(\varvec{\Theta })} = \kappa (\varvec{\Pi })\kappa (\varvec{\Theta })\lesssim 1$ and $\sigma _K(\textbf{R}_0) \ge \sigma _K(\varvec{\Pi }) \sigma _K(\varvec{\Theta }) \gtrsim \sqrt{NJ}$.
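These two singular value inequalities hold deterministically for any full-rank factors, which a quick numerical sketch can confirm (toy random $\varvec{\Pi }$ and $\varvec{\Theta }$; not the paper's data or conditions):

```python
import numpy as np

rng = np.random.default_rng(3)
N, J, K = 40, 30, 3
pi = rng.dirichlet(np.ones(K), size=N)         # rows on the simplex
theta = rng.uniform(0.1, 0.9, size=(J, K))     # item parameters in (0,1)
r0 = pi @ theta.T                              # expected response matrix

def sigma(m, k):
    """k-th largest singular value of m."""
    return np.linalg.svd(m, compute_uv=False)[k - 1]

kappa = lambda m: sigma(m, 1) / sigma(m, K)
# kappa(R0) <= kappa(Pi) kappa(Theta); sigma_K(R0) >= sigma_K(Pi) sigma_K(Theta)
assert kappa(r0) <= kappa(pi) * kappa(theta) + 1e-9
assert sigma(r0, K) >= sigma(pi, K) * sigma(theta, K) - 1e-9
```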

Lemma 1

Under Condition 2, if $N/J^2\rightarrow 0$ and $J/N^2\rightarrow 0$, then with probability at least $1-O((N+J)^{-5})$, one has

$$\Vert \widehat{\textbf{U}} -\textbf{U}\cdot \text {sgn}(\textbf{U}^{\top }\widehat{\textbf{U}}) \Vert _{2,\infty } \lesssim \frac{\sqrt{r} +\sqrt{\log (N+J)}}{\sqrt{NJ}}, \tag{28}$$
$$\Vert \widehat{\textbf{U}}\widehat{\varvec{\Sigma }}\widehat{\textbf{V}}^{\top } -\textbf{U}\varvec{\Sigma } \textbf{V}^{\top }\Vert _{\infty } \lesssim \sqrt{\frac{r\log (N+J)}{\min \{N,J\}}}. \tag{29}$$

Here the infinity norm $\Vert \textbf{A}\Vert _{\infty }$ of any matrix $\textbf{A}$ is defined as its maximum absolute entry value. We write the RHS of (28) as $\varepsilon $ and the RHS of (29) as $\eta $.

Proof of Lemma 1

We will use Theorem 4.4 in Chen et al. (2021a) to prove the lemma, after verifying that the conditions of that theorem are satisfied.

Define the incoherence parameter $\mu := \max \left\{ \frac{N\Vert \textbf{U}\Vert _{2,\infty }^2}{K}, \frac{J\Vert \textbf{V}\Vert _{2,\infty }^2}{K}\right\} $. Note that

$$\Vert \textbf{U}\Vert _{2,\infty } \le \Vert \textbf{U}_{{\textbf{S}},:}\Vert _{2,\infty } \le \Vert \textbf{U}_{{\textbf{S}},:}\Vert = \frac{1}{\sigma _K(\varvec{\Pi })} \lesssim \frac{1}{\sqrt{N}},$$

since all rows of $\textbf{U}$ are convex combinations of the rows of $\textbf{U}_{{\textbf{S}},:}$. On the other hand,

$$\begin{aligned} \Vert \mathbf{V}\Vert_{2,\infty} &= \Vert \boldsymbol{\Theta}\mathbf{U}^{-\top}_{\mathbf{S},:}\boldsymbol{\Sigma}^{-1}\Vert_{2,\infty} \le \Vert \boldsymbol{\Theta}\Vert_{2,\infty}\,\Vert \mathbf{U}^{-\top}_{\mathbf{S},:}\boldsymbol{\Sigma}^{-1}\Vert \\ &\le \Vert \boldsymbol{\Theta}\Vert_{2,\infty}\,\Vert \mathbf{U}^{-1}_{\mathbf{S},:}\Vert \cdot \frac{1}{\sigma_K(\boldsymbol{\Pi}\boldsymbol{\Theta}^{\top})} = \frac{\Vert \boldsymbol{\Theta}\Vert_{2,\infty}\,\sigma_1(\boldsymbol{\Pi})}{\sigma_K(\boldsymbol{\Pi}\boldsymbol{\Theta}^{\top})} \\ &\le \frac{\Vert \boldsymbol{\Theta}\Vert_{2,\infty}\,\kappa(\boldsymbol{\Pi})}{\sigma_K(\boldsymbol{\Theta})} \le \frac{\sqrt{K}\,\kappa(\boldsymbol{\Pi})}{\sigma_K(\boldsymbol{\Theta})} \lesssim \frac{1}{\sqrt{J}}. \end{aligned}$$

Therefore, $\mu \lesssim 1$.

On the other hand, we show that $\sqrt{\log(N+J)/\min\{N,J\}} \lesssim 1$. By the symmetry between $N$ and $J$, assume $J \le N$ without loss of generality; in our asymptotic regime $N/J^2 \rightarrow 0$, so $N + J \lesssim J^2 + J$. Thus,

$$\begin{aligned} \sqrt{\frac{\log(N+J)}{\min\{N,J\}}} = \sqrt{\frac{\log(N+J)}{J}} \lesssim \sqrt{\frac{\log(J^2+J)}{J}} \rightarrow 0. \end{aligned}$$

Therefore, Assumption 4.2 in Chen et al. (2021a) holds, and (28) and (29) follow directly from Theorem 4.4 in Chen et al. (2021a). $\square$
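As a numerical aside (not part of the proof), the incoherence parameter $\mu$ defined above can be computed directly from the top-$K$ singular subspaces of a low-rank mean matrix. The sketch below uses hypothetical Dirichlet membership scores and uniform item parameters purely for illustration; these distributional choices are assumptions of the sketch, not of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, J, K = 500, 300, 3

# Hypothetical GoM-style mean matrix R0 = Pi @ Theta^T:
# rows of Pi lie on the simplex, entries of Theta lie in (0, 1).
Pi = rng.dirichlet(np.ones(K), size=N)        # N x K membership scores
Theta = rng.uniform(0.2, 0.8, size=(J, K))    # J x K item parameters
R0 = Pi @ Theta.T

# Top-K singular subspaces of R0.
Ufull, S, Vt = np.linalg.svd(R0, full_matrices=False)
U, V = Ufull[:, :K], Vt[:K, :].T

# Incoherence mu = max{ N ||U||_{2,inf}^2 / K, J ||V||_{2,inf}^2 / K }.
max_row_U = np.max(np.sum(U**2, axis=1))
max_row_V = np.max(np.sum(V**2, axis=1))
mu = max(N * max_row_U / K, J * max_row_V / K)
print(f"incoherence mu = {mu:.2f}")
```

Since the squared row norms of $\mathbf{U}$ sum to $K$, we always have $\mu \ge 1$; well-spread memberships keep $\mu = O(1)$, which is what the proof establishes.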

Lemma 2

Let Conditions 1 and 2 hold. Then there exists a permutation matrix $\mathbf{P}$ such that, with probability at least $1 - O((N+J)^{-5})$,

(30) $$\begin{aligned} \Vert \widehat{\mathbf{U}}_{\widehat{\mathbf{S}},:} - \mathbf{P}\mathbf{U}_{\mathbf{S},:}\cdot \operatorname{sgn}(\mathbf{U}^{\top}\widehat{\mathbf{U}})\Vert \lesssim \varepsilon. \end{aligned}$$

Proof of Lemma 2

Using Proposition 1, we apply Theorem 4 in Klopp et al. (2023) with $\widetilde{\mathbf{G}} = \widehat{\mathbf{U}}$, $\mathbf{G} = \mathbf{U}\cdot \operatorname{sgn}(\mathbf{U}^{\top}\widehat{\mathbf{U}})$, $\mathbf{W} = \boldsymbol{\Pi}$, $\mathbf{Q} = \mathbf{U}_{\mathbf{S},:}\cdot \operatorname{sgn}(\mathbf{U}^{\top}\widehat{\mathbf{U}})$, and $\mathbf{N} = \widehat{\mathbf{U}} - \mathbf{U}\cdot \operatorname{sgn}(\mathbf{U}^{\top}\widehat{\mathbf{U}})$. According to Lemma 1, $\Vert \mathbf{e}_i^{\top}\mathbf{N}\Vert \le \varepsilon$ and $\varepsilon \lesssim \frac{\sqrt{r}+\sqrt{\log(N+J)}}{\sqrt{NJ}}$. On the other hand, $\sigma_K(\mathbf{Q}) = \sigma_K(\mathbf{U}_{\mathbf{S},:}) = \frac{1}{\sigma_1(\boldsymbol{\Pi})} \ge \frac{1}{\sqrt{N}}$, since $\mathbf{U} = \boldsymbol{\Pi}\mathbf{U}_{\mathbf{S},:}$ and $\sigma_1(\boldsymbol{\Pi}) \le \Vert \boldsymbol{\Pi}\Vert_F \le \sqrt{N}\max_i \Vert \mathbf{e}_i^{\top}\boldsymbol{\Pi}\Vert_2 \le \sqrt{N}$.
Therefore, $\varepsilon \le C_* \frac{\sigma_K(\mathbf{U}_{\mathbf{S},:})}{K\sqrt{K}}$ for some sufficiently small $C_* > 0$. Theorem 4 in Klopp et al. (2023) then gives

$$\begin{aligned} \Vert \widehat{\mathbf{U}}_{\widehat{\mathbf{S}},:} - \mathbf{P}\mathbf{U}_{\mathbf{S},:}\cdot \operatorname{sgn}(\mathbf{U}^{\top}\widehat{\mathbf{U}})\Vert &\le C_0\sqrt{K}\,\kappa(\mathbf{U}_{\mathbf{S},:}\cdot \operatorname{sgn}(\mathbf{U}^{\top}\widehat{\mathbf{U}}))\,\varepsilon \\ &= C_0\sqrt{K}\,\kappa(\mathbf{U}_{\mathbf{S},:})\,\varepsilon \overset{(i)}{=} C_0\sqrt{K}\,\kappa(\boldsymbol{\Pi})\,\varepsilon \\ &\lesssim \varepsilon \quad \text{with probability at least } 1-O((N+J)^{-5}). \end{aligned}$$

Here $(i)$ holds because $\mathbf{U} = \boldsymbol{\Pi}\mathbf{U}_{\mathbf{S},:}$. $\square$
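The alignment matrix $\operatorname{sgn}(\mathbf{U}^{\top}\widehat{\mathbf{U}})$ used throughout is the orthogonal factor in the polar decomposition of $\mathbf{U}^{\top}\widehat{\mathbf{U}}$, i.e., the orthogonal Procrustes solution aligning $\mathbf{U}$ to $\widehat{\mathbf{U}}$. The following sketch computes it from a small synthetic perturbation; the subspace construction here is illustrative, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 200, 3

# A population basis U and a slightly perturbed estimate U_hat (both
# obtained by orthonormalizing a random N x K matrix; illustrative only).
A = rng.standard_normal((N, K))
U, _ = np.linalg.qr(A)
U_hat, _ = np.linalg.qr(A + 0.01 * rng.standard_normal((N, K)))

# sgn(U^T U_hat): take the SVD  U^T U_hat = L diag(s) Rt  and drop the
# singular values; L @ Rt is the best orthogonal alignment of U to U_hat.
L, _, Rt = np.linalg.svd(U.T @ U_hat)
sgn = L @ Rt

aligned_err = np.linalg.norm(U_hat - U @ sgn)   # Frobenius error after alignment
raw_err = np.linalg.norm(U_hat - U)             # error without alignment
print(aligned_err, raw_err)
```

Because the Procrustes solution minimizes $\Vert \widehat{\mathbf{U}} - \mathbf{U}\mathbf{O}\Vert_F$ over all orthogonal $\mathbf{O}$, the aligned error can never exceed the raw error.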

Proof of Theorem 3

We first show that $\widehat{\mathbf{U}}_{\widehat{\mathbf{S}},:}$ is not degenerate. By Weyl's inequality and Lemma 2, with probability at least $1 - O((N+J)^{-5})$, we have

$$\begin{aligned} \sigma_K(\widehat{\mathbf{U}}_{\widehat{\mathbf{S}},:}) &\ge \sigma_K(\mathbf{P}\mathbf{U}_{\mathbf{S},:}\cdot\operatorname{sgn}(\mathbf{U}^{\top}\widehat{\mathbf{U}})) - \Vert \widehat{\mathbf{U}}_{\widehat{\mathbf{S}},:} - \mathbf{P}\mathbf{U}_{\mathbf{S},:}\cdot \operatorname{sgn}(\mathbf{U}^{\top}\widehat{\mathbf{U}})\Vert \\ &\ge \sigma_K(\mathbf{U}_{\mathbf{S},:}) - \Vert \widehat{\mathbf{U}}_{\widehat{\mathbf{S}},:} - \mathbf{P}\mathbf{U}_{\mathbf{S},:}\cdot \operatorname{sgn}(\mathbf{U}^{\top}\widehat{\mathbf{U}})\Vert_F \\ &\succsim \frac{1}{\sigma_1(\boldsymbol{\Pi})} - \varepsilon \\ &\succsim \frac{1}{\sqrt{N}} - \frac{\sqrt{r}+\sqrt{\log(N+J)}}{\sqrt{NJ}} \\ &\succsim \frac{1}{\sqrt{N}} \end{aligned}$$

when $N$ and $J$ are large enough and $\frac{N}{J^2}$ converges to zero. Therefore, $\widehat{\mathbf{U}}_{\widehat{\mathbf{S}},:}$ is invertible.

For the estimation of $\boldsymbol{\Pi}$,

$$\begin{aligned} \Vert \widetilde{\boldsymbol{\Pi}} - \boldsymbol{\Pi}\mathbf{P}\Vert_F &= \Vert \widehat{\mathbf{U}}\widehat{\mathbf{U}}_{\widehat{\mathbf{S}},:}^{-1} - \mathbf{U}\mathbf{U}_{\mathbf{S},:}^{-1}\mathbf{P}\Vert_F \\ &\le \underbrace{\Vert \widehat{\mathbf{U}}(\widehat{\mathbf{U}}_{\widehat{\mathbf{S}},:}^{-1} - \operatorname{sgn}(\mathbf{U}^{\top}\widehat{\mathbf{U}})^{\top}\mathbf{U}_{\mathbf{S},:}^{-1}\mathbf{P})\Vert_F}_{I_1} + \underbrace{\Vert (\widehat{\mathbf{U}} - \mathbf{U}\operatorname{sgn}(\mathbf{U}^{\top}\widehat{\mathbf{U}}))[\mathbf{P}^{-1}\mathbf{U}_{\mathbf{S},:}\operatorname{sgn}(\mathbf{U}^{\top}\widehat{\mathbf{U}})]^{-1}\Vert_F}_{I_2} \\ &=: I_1 + I_2. \end{aligned}$$

We bound $I_1$ and $I_2$ separately:

$$\begin{aligned} I_1 &= \Vert \widehat{\mathbf{U}}(\widehat{\mathbf{U}}_{\widehat{\mathbf{S}},:}^{-1} - \operatorname{sgn}(\mathbf{U}^{\top}\widehat{\mathbf{U}})^{\top}\mathbf{U}_{\mathbf{S},:}^{-1}\mathbf{P}^{\top})\Vert_F \\ &\le \Vert \widehat{\mathbf{U}}_{\widehat{\mathbf{S}},:}^{-1} - \operatorname{sgn}(\mathbf{U}^{\top}\widehat{\mathbf{U}})^{\top}\mathbf{U}_{\mathbf{S},:}^{-1}\mathbf{P}^{\top}\Vert_F \\ &\le \Vert \widehat{\mathbf{U}}_{\widehat{\mathbf{S}},:}^{-1}\Vert\,\Vert \operatorname{sgn}(\mathbf{U}^{\top}\widehat{\mathbf{U}})^{\top}\mathbf{U}_{\mathbf{S},:}^{-1}\mathbf{P}^{\top}\Vert\,\Vert \widehat{\mathbf{U}}_{\widehat{\mathbf{S}},:} - \mathbf{P}\mathbf{U}_{\mathbf{S},:}\operatorname{sgn}(\mathbf{U}^{\top}\widehat{\mathbf{U}})\Vert_F \\ &\lesssim \sqrt{N}\cdot\sigma_1(\boldsymbol{\Pi})\cdot\varepsilon \\ &\lesssim \sqrt{N}\cdot \frac{\sqrt{r}+\sqrt{\log(N+J)}}{\sqrt{J}} \quad \text{with probability at least } 1-O((N+J)^{-5}); \end{aligned}$$

and

$$\begin{aligned} I_2 &= \Vert (\widehat{\mathbf{U}} - \mathbf{U}\operatorname{sgn}(\mathbf{U}^{\top}\widehat{\mathbf{U}}))[\mathbf{P}^{-1}\mathbf{U}_{\mathbf{S},:}\operatorname{sgn}(\mathbf{U}^{\top}\widehat{\mathbf{U}})]^{-1}\Vert_F \\ &\le \Vert \widehat{\mathbf{U}} - \mathbf{U}\operatorname{sgn}(\mathbf{U}^{\top}\widehat{\mathbf{U}})\Vert_F\,\Vert \mathbf{U}_{\mathbf{S},:}^{-1}\Vert \\ &\le \sqrt{N}\cdot \varepsilon\cdot \sigma_1(\boldsymbol{\Pi}) \\ &\lesssim \sqrt{N}\cdot \frac{\sqrt{r}+\sqrt{\log(N+J)}}{\sqrt{J}} \quad \text{with probability at least } 1-O((N+J)^{-5}). \end{aligned}$$

Therefore, with probability at least $1-O((N+J)^{-5})$,

$$\begin{aligned} \frac{1}{\sqrt{NK}}\Vert \widetilde{\boldsymbol{\Pi}} - \boldsymbol{\Pi}\mathbf{P}\Vert_F &\lesssim \frac{\sqrt{r}+\sqrt{\log(N+J)}}{\sqrt{J}} \\ &= {\left\{ \begin{array}{ll} \frac{\sqrt{N}}{J} + \frac{\sqrt{\log(N+J)}}{\sqrt{J}} & \text{if } N > J, \\ \frac{1}{\sqrt{N}} + \frac{\sqrt{\log(N+J)}}{\sqrt{J}} & \text{if } N \le J. \end{array}\right.} \end{aligned}$$

Therefore, $\frac{1}{\sqrt{NK}}\Vert \widetilde{\boldsymbol{\Pi}} - \boldsymbol{\Pi}\mathbf{P}\Vert_F$ converges to zero in probability as $N, J \rightarrow \infty$ and $\frac{N}{J^2} \rightarrow 0$.
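As an informal numerical illustration of this consistency phenomenon (and not the full estimator, which additionally requires locating the pure-subject rows indexed by $\mathbf{S}$), the sketch below measures the sign-aligned error between the top-$K$ left singular subspace of a Bernoulli data matrix $\mathbf{R}$ and that of its mean $\mathbf{R}_0 = \boldsymbol{\Pi}\boldsymbol{\Theta}^{\top}$. The Dirichlet memberships and uniform item parameters are hypothetical choices made for the sketch.

```python
import numpy as np

def subspace_err(N, J, K=3, seed=0):
    """Sign-aligned Frobenius error between the top-K left singular
    subspaces of a binary response matrix R and of its low-rank mean R0
    (simplified sketch; the paper's estimator also finds the set S)."""
    rng = np.random.default_rng(seed)
    Pi = rng.dirichlet(0.3 * np.ones(K), size=N)   # hypothetical memberships
    Theta = rng.uniform(0.05, 0.95, size=(J, K))   # hypothetical item params
    R0 = Pi @ Theta.T                              # low-rank mean matrix
    R = rng.binomial(1, R0).astype(float)          # observed binary responses
    U0 = np.linalg.svd(R0, full_matrices=False)[0][:, :K]
    U1 = np.linalg.svd(R, full_matrices=False)[0][:, :K]
    L, _, Rt = np.linalg.svd(U0.T @ U1)            # sgn(U0^T U1) = L @ Rt
    return np.linalg.norm(U1 - U0 @ (L @ Rt))

small = subspace_err(200, 100)
large = subspace_err(2000, 1000)
print(small, large)
```

Consistent with the bound above, the aligned subspace error shrinks as both $N$ and $J$ grow.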

For the estimation of $\boldsymbol{\Theta}$,

$$\begin{aligned} \Vert \widetilde{\boldsymbol{\Theta}}\mathbf{P} - \boldsymbol{\Theta}\Vert_F &= \Vert \mathbf{P}^{\top}\widehat{\mathbf{U}}_{\widehat{\mathbf{S}},:}\widehat{\boldsymbol{\Sigma}}\widehat{\mathbf{V}}^{\top} - \mathbf{U}_{\mathbf{S},:}\boldsymbol{\Sigma}\mathbf{V}^{\top}\Vert_F \\ &\le \Vert (\mathbf{P}^{\top}\widehat{\mathbf{U}}_{\widehat{\mathbf{S}},:} - \mathbf{U}_{\mathbf{S},:}\operatorname{sgn}(\mathbf{U}^{\top}\widehat{\mathbf{U}}))\widehat{\boldsymbol{\Sigma}}\widehat{\mathbf{V}}^{\top}\Vert_F + \Vert (\mathbf{U}_{\mathbf{S},:}\operatorname{sgn}(\mathbf{U}^{\top}\widehat{\mathbf{U}}) - \widehat{\mathbf{U}}_{\mathbf{S},:})\widehat{\boldsymbol{\Sigma}}\widehat{\mathbf{V}}^{\top}\Vert_F \\ &\quad + \Vert \widehat{\mathbf{U}}_{\mathbf{S},:}\widehat{\boldsymbol{\Sigma}}\widehat{\mathbf{V}}^{\top} - \mathbf{U}_{\mathbf{S},:}\boldsymbol{\Sigma}\mathbf{V}^{\top}\Vert_F \\ &\le \Vert \mathbf{P}^{\top}\widehat{\mathbf{U}}_{\widehat{\mathbf{S}},:} - \mathbf{U}_{\mathbf{S},:}\operatorname{sgn}(\mathbf{U}^{\top}\widehat{\mathbf{U}})\Vert_F\cdot\sigma_1(\mathbf{R})\cdot\Vert \widehat{\mathbf{V}}\Vert + \Vert \mathbf{U}_{\mathbf{S},:}\operatorname{sgn}(\mathbf{U}^{\top}\widehat{\mathbf{U}}) - \widehat{\mathbf{U}}_{\mathbf{S},:}\Vert\cdot\sigma_1(\mathbf{R})\cdot\Vert \widehat{\mathbf{V}}\Vert \\ &\quad + \sqrt{KJ}\,\Vert \widehat{\mathbf{U}}_{\mathbf{S},:}\widehat{\boldsymbol{\Sigma}}\widehat{\mathbf{V}}^{\top} - \mathbf{U}_{\mathbf{S},:}\boldsymbol{\Sigma}\mathbf{V}^{\top}\Vert_{\infty} \\ &\overset{(ii)}{\lesssim} \varepsilon\cdot\sigma_1(\mathbf{R}_0) + \varepsilon\cdot(\sigma_1(\mathbf{R}) - \sigma_1(\mathbf{R}_0)) + \sqrt{KJ}\cdot\eta \quad \text{with probability at least } 1-O((N+J)^{-5}), \end{aligned}$$

where $(ii)$ follows from Lemma 2. By Weyl's inequality, $|\sigma_1(\mathbf{R}) - \sigma_1(\mathbf{R}_0)| \le \Vert \mathbf{R} - \mathbf{R}_0\Vert$, where $\mathbf{R} - \mathbf{R}_0$ is a mean-zero Bernoulli noise matrix. According to Eq. (3.9) in Chen et al. (2021a), with probability at least $1-(N+J)^{-8}$,

$$\begin{aligned} \Vert \mathbf{R} - \mathbf{R}_0\Vert \lesssim \sqrt{N+J} + \sqrt{\log(N+J)}. \end{aligned}$$

Furthermore, $\sigma_1(\mathbf{R}_0) \ge \sigma_K(\mathbf{R}_0) \succsim \sqrt{NJ}$ by Condition 2; thus $\sigma_1(\mathbf{R}_0) \succsim |\sigma_1(\mathbf{R}) - \sigma_1(\mathbf{R}_0)|$ with probability at least $1-(N+J)^{-8}$. Therefore, with probability at least $1-O((N+J)^{-5})$,

$$\begin{aligned} \Vert \widehat{\boldsymbol{\Theta}}\mathbf{P} - \boldsymbol{\Theta}\Vert_F &\lesssim \varepsilon\cdot\sigma_1(\mathbf{R}_0) + \sqrt{KJ}\cdot\eta \\ &\lesssim \frac{\sqrt{r}+\sqrt{\log(N+J)}}{\sqrt{NJ}}\cdot\sqrt{N}\cdot\sqrt{J} + \sqrt{J}\,\sqrt{\frac{r\log(N+J)}{\min\{N,J\}}} \\ &= \sqrt{r} + \sqrt{\log(N+J)} + \sqrt{J}\,\sqrt{\frac{r\log(N+J)}{\min\{N,J\}}}. \end{aligned}$$

Thus,

$$\begin{aligned} \frac{1}{\sqrt{JK}}\Vert \widehat{\boldsymbol{\Theta}}\mathbf{P} - \boldsymbol{\Theta}\Vert_F &\lesssim \frac{\sqrt{r}+\sqrt{\log(N+J)}}{\sqrt{J}} + \sqrt{\frac{r\log(N+J)}{\min\{N,J\}}} \\ &= {\left\{ \begin{array}{ll} \frac{\sqrt{N}}{J} + \frac{\sqrt{\log(N+J)}}{\sqrt{J}} + \frac{\sqrt{N\log(N+J)}}{J} & \text{if } N > J, \\ \frac{1}{\sqrt{N}} + \frac{\sqrt{\log(N+J)}}{\sqrt{J}} + \frac{\sqrt{J\log(N+J)}}{N} & \text{if } N \le J. \end{array}\right.} \end{aligned}$$

Therefore, $\frac{1}{\sqrt{JK}}\Vert \widehat{\boldsymbol{\Theta}}\mathbf{P} - \boldsymbol{\Theta}\Vert_F$ converges to zero in probability as $N, J \rightarrow \infty$ and $\frac{N}{J^2}, \frac{J}{N^2} \rightarrow 0$. $\square$
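The case split above implicitly corresponds to $r = \max\{N,J\}/\min\{N,J\}$. As a quick arithmetic check (an illustrative sketch assuming that value of $r$, not part of the proof), the bound indeed vanishes along sequences satisfying the stated regime, e.g. $J = N$:

```python
import numpy as np

def theta_rate(N, J):
    """Evaluate the rate bound (sqrt(r)+sqrt(log(N+J)))/sqrt(J)
    + sqrt(r log(N+J)/min(N,J)), assuming r = max(N,J)/min(N,J)."""
    r = max(N, J) / min(N, J)
    return (np.sqrt(r) + np.sqrt(np.log(N + J))) / np.sqrt(J) \
        + np.sqrt(r * np.log(N + J) / min(N, J))

# Along J = N (so both N/J^2 and J/N^2 vanish), the bound decreases to 0.
rates = [theta_rate(n, n) for n in (10**2, 10**4, 10**6)]
print(rates)
```

The three evaluated bounds decrease monotonically, matching the claimed convergence in probability.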

Footnotes

Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

References

Airoldi, E. M., Blei, D., Erosheva, E. A., & Fienberg, S. E. (2014). Handbook of mixed membership models and their applications. Boca Raton: CRC Press.
Airoldi, E. M., Blei, D. M., Fienberg, S. E., & Xing, E. P. (2008). Mixed membership stochastic blockmodels. Journal of Machine Learning Research, 9, 1981–2014.
Akaike, H. (1998). Information theory and an extension of the maximum likelihood principle. In Selected papers of Hirotugu Akaike (pp. 199–213).
Araújo, M. C. U., Saldanha, T. C. B., Galvao, R. K. H., Yoneyama, T., Chame, H. C., & Visani, V. (2001). The successive projections algorithm for variable selection in spectroscopic multicomponent analysis. Chemometrics and Intelligent Laboratory Systems, 57(2), 65–73.
Berry, M. W., Browne, M., Langville, A. N., Pauca, V. P., & Plemmons, R. J. (2007). Algorithms and applications for approximate nonnegative matrix factorization. Computational Statistics & Data Analysis, 52(1), 155–173.
Blei, D. M., Ng, A. Y., & Jordan, M. I. (2003). Latent Dirichlet allocation. Journal of Machine Learning Research, 3(Jan), 993–1022.
Borsboom, D., Rhemtulla, M., Cramer, A. O., van der Maas, H. L., Scheffer, M., & Dolan, C. V. (2016). Kinds versus continua: A review of psychometric approaches to uncover the structure of psychiatric constructs. Psychological Medicine, 46(8), 1567–1579.
Chen, Y., Chi, Y., Fan, J., & Ma, C. (2021a). Spectral methods for data science: A statistical perspective. Foundations and Trends® in Machine Learning, 14(5), 566–806.
Chen, Y., Li, X., & Zhang, S. (2019). Joint maximum likelihood estimation for high-dimensional exploratory item factor analysis. Psychometrika, 84, 124–146.
Chen, Y., Li, X., & Zhang, S. (2020). Structured latent factor analysis for large-scale data: Identifiability, estimability, and their implications. Journal of the American Statistical Association, 115(532), 1756–1770.
Chen, Y., Ying, Z., & Zhang, H. (2021b). Unfolding-model-based visualization: Theory, method and applications. Journal of Machine Learning Research, 22, 11.
Dobriban, E., & Owen, A. B. (2019). Deterministic parallel analysis: An improved method for selecting factors and principal components. Journal of the Royal Statistical Society Series B: Statistical Methodology, 81(1), 163–183.
Donoho, D., & Stodden, V. (2003). When does non-negative matrix factorization give a correct decomposition into parts? Advances in Neural Information Processing Systems, 16.
Embretson, S. E., & Reise, S. P. (2013). Item response theory. New York: Psychology Press.
Erosheva, E. A. (2002). Grade of membership and latent structure models with application to disability survey data. PhD thesis, Carnegie Mellon University.
Erosheva, E. A. (2005). Comparing latent structures of the grade of membership, Rasch, and latent class models. Psychometrika, 70(4), 619–628.
Erosheva, E. A., Fienberg, S. E., & Joutard, C. (2007). Describing disability through individual-level mixture models for multivariate binary data. Annals of Applied Statistics, 1(2), 346.
Freyaldenhoven, S., Ke, S., Li, D., & Olea, J. L. M. (2023). On the testability of the anchor words assumption in topic models. Technical report, Cornell University.
Gillis, N., & Vavasis, S. A. (2013). Fast and robust recursive algorithms for separable nonnegative matrix factorization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(4), 698–714.
Goodman, L. A. (1974). Exploratory latent structure analysis using both identifiable and unidentifiable models. Biometrika, 61(2), 215–231.
Gormley, I. C., & Murphy, T. B. (2009). A grade of membership model for rank data. Bayesian Analysis, 4(2), 265–295.
Gu, Y., Erosheva, E. E., Xu, G., & Dunson, D. B. (2023). Dimension-grouped mixed membership models for multivariate categorical data. Journal of Machine Learning Research, 24(88), 1–49.
Hagenaars, J. A., & McCutcheon, A. L. (2002). Applied latent class analysis. Cambridge: Cambridge University Press.
Horn, J. L. (1965). A rationale and test for the number of factors in factor analysis. Psychometrika, 30, 179–185.
Hoyer, P. O. (2004). Non-negative matrix factorization with sparseness constraints. Journal of Machine Learning Research, 5(9), 1457–1469.
Jin, J., Ke, Z. T., & Luo, S. (2023). Mixed membership estimation for social networks. Journal of Econometrics.
Ke, Z. T., & Jin, J. (2023). Special invited paper: The SCORE normalization, especially for heterogeneous network and text data. Stat, 12(1), e545.
Ke, Z. T., & Wang, M. (2022). Using SVD for topic modeling. Journal of the American Statistical Association, 1–16.
Klopp, O., Panov, M., Sigalla, S., & Tsybakov, A. (2023). Assigning topics to documents by successive projections. Annals of Statistics (to appear).
Koopmans, T. C., & Reiersol, O. (1950). The identification of structural characteristics. The Annals of Mathematical Statistics, 21(2), 165–181.
Manrique-Vallier, D., & Reiter, J. P. (2012). Estimating identification disclosure risk using mixed membership models. Journal of the American Statistical Association, 107(500), 1385–1394.
Mao, X., Sarkar, P., & Chakrabarti, D. (2021). Estimating mixed memberships with sharp eigenvector deviations. Journal of the American Statistical Association, 116(536), 1928–1940.
Neyman, J., & Scott, E. L. (1948). Consistent estimates based on partially consistent observations. Econometrica, 16, 1–32.
Pokropek, A. (2016). Grade of membership response time model for detecting guessing behaviors. Journal of Educational and Behavioral Statistics, 41(3), 300–325.
Robitzsch, A., & Robitzsch, M. A. (2022). Package 'sirt': Supplementary item response theory models.
Schwarz, G. (1978). Estimating the dimension of a model. The Annals of Statistics, 6, 461–464.
Shang, Z., Erosheva, E. A., & Xu, G. (2021). Partial-mastery cognitive diagnosis models. Annals of Applied Statistics, 15(3), 1529–1555.
Spiegelhalter, D. J., Best, N. G., Carlin, B. P., & Van Der Linde, A. (2002). Bayesian measures of model complexity and fit. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 64(4), 583–639.
Woodbury, M. A., Clive, J., Garson, A. Jr. (1978). Mathematical typology: A grade of membership technique for obtaining disease definition. Computers and Biomedical Research, 11(3), 277298.CrossRefGoogle ScholarPubMed
Zhang, H., Chen, Y., Li, X.. (2020). A note on exploratory item factor analysis by singular value decomposition. Psychometrika, 85, 358372.CrossRefGoogle ScholarPubMed
Figure 1 Illustration of the simplex geometry of the $N\times K$ left singular matrix $\textbf{U}$ with $K=3$. The solid dots represent the row vectors of $\textbf{U}$ in $\mathbb{R}^3$, and the three simplex vertices (i.e., vertices of the triangle) correspond to the three types of pure subjects. All the dots lie in this triangle.
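The low-rank geometry in this figure can be checked numerically: under a GoM model the expected data matrix $\Pi\Theta^\top$ has rank $K$, and each row of its top-$K$ left singular matrix is exactly the same convex combination of the pure-subject rows as the corresponding row of $\Pi$. A minimal sketch on simulated data (variable names are illustrative, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(1)
N, J, K = 500, 40, 3

# Mixed membership scores: each row of Pi lies in the probability simplex.
Pi = rng.dirichlet(np.ones(K), size=N)
Pi[:K] = np.eye(K)                          # plant one pure subject per profile
Theta = rng.uniform(0.1, 0.9, size=(J, K))  # item-by-profile response probabilities

# The expected data matrix has rank at most K.
M = Pi @ Theta.T
U, s, Vt = np.linalg.svd(M, full_matrices=False)
U_K = U[:, :K]                              # top-K left singular vectors

# Every row of U_K is the Pi-mixture of the K pure-subject rows, so the
# N points lie in a simplex whose vertices are the pure subjects.
assert np.allclose(U_K, Pi @ U_K[:K])
```

The final identity holds because $u_k = Mv_k/s_k$, so the singular-vector rows inherit the membership weights of $\Pi$; with noisy binary data the rows only lie near this simplex, which is what motivates the pruning and vertex-hunting steps.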

Algorithm 1 Prune
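The pruning step protects the subsequent vertex search from outlying rows of the empirical singular matrix. The sketch below is not the paper's exact Algorithm 1 but a generic nearest-neighbour outlier filter in the same spirit (the function name and the defaults `r`, `q` are illustrative choices):

```python
import numpy as np

def prune_rows(U, r=10, q=0.95):
    """Return indices of rows of U kept by a nearest-neighbour outlier filter.

    A row is dropped when its average distance to its r nearest neighbours
    exceeds the q-quantile of that statistic across all rows.
    Uses an O(N^2) pairwise-distance matrix; fine for a sketch.
    """
    D = np.linalg.norm(U[:, None, :] - U[None, :, :], axis=2)
    D.sort(axis=1)                       # column 0 is the zero self-distance
    avg_nn = D[:, 1:r + 1].mean(axis=1)  # mean distance to r nearest neighbours
    return np.where(avg_nn <= np.quantile(avg_nn, q))[0]
```

Rows far from the bulk of the point cloud, such as subjects with extreme noise realizations, have large nearest-neighbour distances and are removed before vertex hunting.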

Algorithm 2 GoM Estimation by Successive Projection Algorithm with Pruning
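The core of Algorithm 2 is vertex hunting via the successive projection algorithm (SPA) from the separable nonnegative matrix factorization literature: repeatedly select the row of largest Euclidean norm as a vertex estimate, then project all rows onto the orthogonal complement of the selected direction. A minimal sketch, with the follow-up step of reading off membership scores as barycentric coordinates (illustrative code for idealized noiseless input, not the authors' implementation):

```python
import numpy as np

def successive_projection(U, K):
    """Indices of K rows of U that approximate the simplex vertices (SPA)."""
    R = U.astype(float).copy()
    vertices = []
    for _ in range(K):
        j = int(np.argmax(np.linalg.norm(R, axis=1)))  # farthest remaining row
        vertices.append(j)
        u = R[j] / np.linalg.norm(R[j])
        R -= np.outer(R @ u, u)  # project out the chosen vertex direction
    return vertices

def estimate_memberships(U, vertices):
    """Barycentric coordinates of each row w.r.t. the selected vertex rows."""
    Pi = U @ np.linalg.inv(U[vertices])
    Pi = np.clip(Pi, 0, None)            # noise can push coordinates below 0
    return Pi / Pi.sum(axis=1, keepdims=True)
```

With a noisy $\widehat{\textbf{U}}$, SPA is applied to the rows retained after pruning, and the clipping-and-renormalization step absorbs small perturbations that push barycentric coordinates slightly outside the simplex.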

Figure 2 Row vectors of $\textbf{U}$ and $\widehat{\textbf{U}}$ projected to $\mathbb{R}^2$ in the simulation setting with $N=2000$ and $K=3$. The red-shaded area is the population simplex, the green crosses are the subjects removed by pruning, and the blue dots form the empirical simplex retained after pruning.

Figure 3 Computation time for $K=3$ (left) and $K=8$ (right) in simulations. For each simulation setting, we show the median, 25% quantile, and 75% quantile of the computation time across the 100 replications.

Table 1 Average computation time in seconds across replications for JML and the proposed spectral method for $K=3$ and $K=8$

Figure 4 Simulation results of estimation error for $K=3$. The boxplots represent the mean absolute error for $\varvec{\Theta}$ (left) and $\varvec{\Pi}$ (right) versus the sample size $N$.

Figure 5 Simulation results of estimation error for $K=8$. The boxplots represent the mean absolute error for $\varvec{\Theta}$ (left) and $\varvec{\Pi}$ (right) versus the sample size $N$.

Table 2 Average computation time in seconds for each method and sample size in simulations when $K=3$

Table 3 Average mean absolute error for $\varvec{\Theta}$ and $\varvec{\Pi}$ for each method and sample size in simulations when $K=3$

Figure 6 Comparison of estimation results with and without the pruning procedure when $K=3$.

Figure 7 A simulation study verifying identifiability. Estimation errors for three different cases; see the concrete settings of Cases 1, 2, and 3 in the main text. The boxplots represent the mean absolute error for $\varvec{\Theta}$ (left) and $\varvec{\Pi}$ (right) versus the sample size $N$.

Figure 8 Computation time for the WPI dataset. The lines indicate the run time in seconds versus $K$ for JML and our spectral method. Note that for $K\ge 4$, the number of iterations in JML reaches the default maximum iteration number.

Figure 9 Heatmap of $\widehat{\varvec{\Theta}}$ for a subset of 30 WPI items. The values are the estimated probabilities of responding "yes" to each item given each extreme profile.

Figure 10 Barycentric plot of the estimated membership scores $\widehat{\varvec{\Pi}}$ for the WPI data, color-coded by the age covariate.