
Factor analysis models via I-divergence optimization

Published online by Cambridge University Press:  01 January 2025

Lorenzo Finesso
Affiliation:
IEIIT - CNR
Peter Spreij*
Affiliation:
Universiteit van Amsterdam
Correspondence should be made to Peter Spreij, Korteweg-de Vries Institute for Mathematics, Universiteit van Amsterdam, PO Box 94248, 1090 GE Amsterdam, The Netherlands. Email: [email protected]

Abstract

Given a positive definite covariance matrix $\widehat{\Sigma}$ of dimension n, we approximate it with a covariance of the form $HH^\top + D$, where H has a prescribed number $k<n$ of columns and $D>0$ is diagonal. The quality of the approximation is gauged by the I-divergence between the zero mean normal laws with covariances $\widehat{\Sigma}$ and $HH^\top + D$, respectively.
To determine a pair (H, D) that minimizes the I-divergence we construct, by lifting the minimization into a larger space, an iterative alternating minimization algorithm (AML) à la Csiszár–Tusnády. As it turns out, the proper choice of the enlarged space is crucial for optimization. The convergence of the algorithm is studied, with special attention given to the case where D is singular. The theoretical properties of the AML are compared to those of the popular EM algorithm for exploratory factor analysis. Inspired by the ECME (a Newton–Raphson variation on EM), we develop a similar variant of AML, called ACML, and in a few numerical experiments, we compare the performances of the four algorithms.

Type
Original Paper
Creative Commons
This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Copyright
Copyright © 2015 The Author(s). This article is published with open access at Springerlink.com

1. Introduction

Let Y be a given zero mean normal vector of dimension n and covariance ${\mathbb{C}}\mathrm{ov}(Y)=\widehat{\Sigma}$. A standard Factor Analysis (FA) model for Y is a linear model

$$Y = HX + \varepsilon, \tag{1.1}$$

where H is a deterministic matrix, X is a standard normal vector of dimension $k<n$, i.e., with zero mean and ${\mathbb{C}}\mathrm{ov}(X)=I_k$ (the k-dimensional identity), and $\varepsilon$ is a zero mean normal vector, independent of X, with ${\mathbb{C}}\mathrm{ov}(\varepsilon)=D$ diagonal. The model (1.1) explains the n components of Y as linear combinations of the $k<n$ components of X, perturbed by the independent noise $\varepsilon$.
The FA model's built-in linear structure and data reduction mechanism make it very attractive in applied research.
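As an illustrative sketch (not taken from the paper; the dimensions and the particular H and D below are arbitrary choices), the generative mechanism (1.1) can be simulated to confirm that the covariance of Y is $HH^\top + D$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, N = 5, 2, 500_000          # dimensions and sample size (arbitrary)
H = rng.standard_normal((n, k))  # deterministic loading matrix
d = rng.uniform(0.5, 1.5, n)     # diagonal of D > 0
D = np.diag(d)

# Draw N samples of Y = H X + eps, with X ~ N(0, I_k) and eps ~ N(0, D)
X = rng.standard_normal((N, k))
eps = rng.standard_normal((N, n)) * np.sqrt(d)
Y = X @ H.T + eps

Sigma = H @ H.T + D              # model covariance, Eq. (2.2)
Sigma_hat = Y.T @ Y / N          # empirical covariance (zero mean data)
```

As N grows, the sample covariance `Sigma_hat` approaches the structured covariance `Sigma`.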

It is not always possible to describe the given Y with an FA model. Indeed, as a consequence of the hypotheses on X and $\varepsilon$,

$$\widehat{\Sigma} = HH^\top + D, \tag{1.2}$$

a relation which imposes strong structural constraints on the covariance $\widehat{\Sigma}$. Determining whether the given Y admits an FA model (1.1) requires the solution of an algebraic problem: given $\widehat{\Sigma}$, find, if they exist, pairs (H, D) such that (1.2) holds. The structural constraints impose that H must be a tall matrix, and D a diagonal matrix. For a given $\widehat{\Sigma}$, the existence and uniqueness of a pair (H, D) are not guaranteed. Generically, the Ledermann bound (Anderson & Rubin, 1956; Ledermann, 1937) gives necessary conditions for the existence of an FA model in terms of k and n.

As it turns out, for the data reduction case of this paper, the right tools to deal with the existence and the construction of an FA model are geometric in nature and come from the theory of stochastic realization; see Finesso and Picci (1984) for an early contribution on the subject.

In the present paper we address the problem of constructing an approximate FA model of the given Y. Since in general the relation (1.2) does not hold for any (H, D), one has to find ways to gauge the closeness of $\widehat{\Sigma}$ to the FA model covariance $HH^\top + D$. One possibility is to use a form of least squares as a loss criterion. Here we adopt the I-divergence ${\mathcal{I}}(\widehat{\Sigma}\,||\,HH^\top + D)$, also known as the Kullback-Leibler distance, between the corresponding (multivariate) normal laws. Throughout the paper $\widehat{\Sigma}$ is given and is assumed to be non-singular.

In statistical inference it is well known, and reviewed in Section 2, that the I-divergence is, up to constants independent of H and D (the parameters yielding the covariance $HH^\top + D$), the opposite of the normal log-likelihood. One has the identity

$$-{\mathcal{I}}(\widehat{\Sigma}\,||\,HH^\top + D) = \frac{n}{2} - \frac{1}{2}\log\frac{|HH^\top + D|}{|\widehat{\Sigma}|} - \frac{1}{2}\operatorname{tr}\big((HH^\top + D)^{-1}\widehat{\Sigma}\big), \tag{1.3}$$

where $\widehat{\Sigma}$ is now the empirical covariance matrix, used as an estimator of the true covariance $HH^\top + D$. In the empirical context a non-singular $\widehat{\Sigma}$ is usually the case if the number of variables is smaller than the number of observations. A completely different situation, singular $\widehat{\Sigma}$, arises when the number of variables is larger than the number of observations; see e.g., Bai and Li (2012), Trendafilov and Unkel (2011) for recent results.

The choice of the best (H, D) pair can then be posed as a maximum-likelihood problem, as first proposed by Lawley (1940). Lacking a closed form solution, the maximization of (1.3) has to be attacked numerically, and several optimization algorithms have been either adapted or custom-tailored for it. Among the former, the EM method, introduced in the context of FA estimation by Rubin and Thayer (1982), and still mutating and evolving (Adachi, 2013; Zhao, Yu, & Jiang, 2008; Zhao & Shi, 2014), takes full advantage of the structure of the likelihood in order to guarantee descent at each iteration, although at the expense of a less than ideal convergence rate, which can be slow and sensitive to the initial conditions.

It has long been known, see Csiszár and Tusnády (1984), that any EM algorithm can be reformulated by embedding the problem in a properly chosen larger parameter space and then performing alternating partial minimizations of the I-divergence over properly defined subspaces. This setup has previously been followed for various problems, e.g., mixture decomposition (Csiszár & Tusnády, 1984), non-negative matrix factorization (Finesso & Spreij, 2006), and approximation of non-negative impulse response functions (Finesso & Spreij, 2015). The advantage afforded by the embedding procedure is that both partial minimizations have closed form solutions; moreover, a necessary and sufficient condition of optimality of a geometric flavor, a Pythagoras rule, see Csiszár (1975), is available to check optimality for both partial minimizations. As it turns out, and we prove this assertion in Section 5, the EM method proposed in Rubin and Thayer (1982) corresponds to a suboptimal embedding, as one of the Pythagoras rules fails. The main result of this paper is an iterative algorithm, called AML, for the construction of an (H, D) pair minimizing the I-divergence from $\widehat{\Sigma}$ using an optimal embedding, for which both Pythagoras rules hold. We also study the behavior of the algorithm in the singular case, i.e., D not of full rank, which is well known to be problematic for FA modeling (Jöreskog, 1967). These theoretical considerations make up the bulk of the paper.
We emphasize that the present paper is not about the numerical subtleties and (often very clever) improvements established in the literature to accelerate the convergence of EM-type algorithms. Rather, the central feature is the systematic methodology to derive an algorithm by a constructive procedure. Nevertheless, we make a brief foray into the numerical aspects, developing a version of AML, which we call ACML, in the spirit of ECME (a Newton–Raphson variation on EM; Liu & Rubin, 1994).

The remainder of the paper is organized as follows. In Section 2 the approximation problem is posed and discussed, as well as its estimation problem counterpart. Section 3 recasts the problem as a double minimization in a larger space, making it amenable to a solution in terms of alternating minimization. In Section 4, we present the alternating minimization algorithm, provide alternative versions of it, and study its asymptotics. We also point out, in Section 5, the similarities and the differences between our algorithm and the EM algorithm. Section 6 is dedicated to a constrained version of the optimization problem (the singular D case) and the pertinent alternating minimization algorithm. The study of the singular case also sheds light on the boundary limit points of the algorithm presented in Section 4. The last Section 7 is devoted to numerical illustrations, where we compare the performance of the AML, EM, ACML, and ECME algorithms. The Appendix contains the proofs of most of the technical results, and also decomposition results on the I-divergence, which are interesting in their own right, beyond application to Factor Analysis.

2. Problem Statement

In the present section, we pose the approximation problem and discuss its statistical counterpart, the closely related estimation problem.

2.1. Approximation Problem

Consider independent normal, zero mean, random vectors X and $\varepsilon$, of respective dimensions k and n, where $k<n$, and with ${\mathbb{C}}\mathrm{ov}(X)=I_k$ and ${\mathbb{C}}\mathrm{ov}(\varepsilon)=D$, a diagonal matrix. For any deterministic conformable matrix H, the n dimensional vector Y given by

$$Y = HX + \varepsilon \tag{2.1}$$

is called a standard FA model. The matrices (H, D) are the parameters that identify the model. As a consequence of the hypotheses,

$${\mathbb{C}}\mathrm{ov}(Y) = HH^\top + D. \tag{2.2}$$

Given an n-dimensional covariance matrix $\widehat{\Sigma}$, one can pose the problem of approximating it with the covariance of a standard FA model, i.e., of finding (H, D) such that

$$\widehat{\Sigma} \approx HH^\top + D. \tag{2.3}$$

In this paper, we pose and solve the problem of finding an optimal approximation (2.3) when the criterion of closeness is the I-divergence (also known as the Kullback-Leibler distance) between normal laws. Recall [see e.g., Theorem 1.8.2 in Ihara (1993)] that if $\nu_1$ and $\nu_2$ are two zero mean normal distributions on ${\mathbb{R}}^m$, whose covariance matrices, $\Sigma_1$ and $\Sigma_2$, respectively, are both non-singular, the I-divergence ${\mathcal{I}}(\nu_1||\nu_2)$ takes the explicit form ($|\cdot|$ denotes determinant)

$${\mathcal{I}}(\nu_1||\nu_2) = \frac{1}{2}\log\frac{|\Sigma_2|}{|\Sigma_1|} - \frac{m}{2} + \frac{1}{2}\operatorname{tr}(\Sigma_2^{-1}\Sigma_1). \tag{2.4}$$

Since, because of the zero means, the I-divergence depends only on the covariance matrices, we usually write ${\mathcal{I}}(\Sigma_1||\Sigma_2)$ instead of ${\mathcal{I}}(\nu_1||\nu_2)$. The approximate FA model problem can then be cast as follows.

Problem 2.1

Given the covariance matrix $\widehat{\Sigma}>0$, of size n, and an integer $k<n$, minimize

$${\mathcal{I}}(\widehat{\Sigma}\,||\,HH^\top + D) = \frac{1}{2}\log\frac{|HH^\top + D|}{|\widehat{\Sigma}|} - \frac{n}{2} + \frac{1}{2}\operatorname{tr}\big((HH^\top + D)^{-1}\widehat{\Sigma}\big), \tag{2.5}$$

where the minimization is taken over all diagonal matrices $D\ge 0$ and all $H\in{\mathbb{R}}^{n\times k}$.
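The objective (2.5) is straightforward to transcribe; the sketch below is only an illustration in NumPy (the function name is my own), using `slogdet` rather than `log(det(.))` for numerical stability:

```python
import numpy as np

def i_divergence(Sigma_hat, H, D):
    """I-divergence (2.5) between N(0, Sigma_hat) and N(0, HH' + D)."""
    n = Sigma_hat.shape[0]
    M = H @ H.T + D
    _, logdet_M = np.linalg.slogdet(M)
    _, logdet_S = np.linalg.slogdet(Sigma_hat)
    # solve(M, Sigma_hat) computes M^{-1} Sigma_hat without forming M^{-1}
    return 0.5 * (logdet_M - logdet_S) - n / 2 \
        + 0.5 * np.trace(np.linalg.solve(M, Sigma_hat))

# Sanity check: the divergence vanishes when Sigma_hat = HH' + D exactly
rng = np.random.default_rng(1)
H = rng.standard_normal((4, 2))
D = np.diag(rng.uniform(0.5, 1.5, 4))
Sigma_hat = H @ H.T + D
print(i_divergence(Sigma_hat, H, D))  # ≈ 0.0 up to rounding
```

For any other positive definite argument the value is strictly positive, consistent with (2.4) being a divergence.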

The first result is that in Problem 2.1, a minimum always exists.

Proposition 2.2

There exist a matrix $H^*\in{\mathbb{R}}^{n\times k}$ and a non-negative diagonal matrix $D^*\in{\mathbb{R}}^{n\times n}$ that minimize the I-divergence in Problem 2.1.

Proof

The proof can be found in Finesso and Spreij (2007). $\square$

Finesso and Spreij (2006) studied an approximate non-negative matrix factorization (NMF) problem where the objective function was also of I-divergence type. In that case, using a relaxation technique, the original minimization was lifted to a double minimization in a higher dimensional space, leading naturally to an alternating minimization algorithm. The core of the present paper consists in following the same approach, in the completely different context of covariance matrices, and solving Problem 2.1 with an alternating minimization algorithm.

As a side remark, note that ${\mathcal{I}}(\Sigma_1||\Sigma_2)$, computed as in (2.4), can be considered as an I-divergence between two positive definite matrices, without referring to normal distributions. Hence the approximation Problem 2.1 is meaningful even without assuming normality.

2.2. Estimation Problem

In a statistical setup, the approximation Problem 2.1 has an equivalent formulation as an estimation problem. Let indeed $Y_1,\ldots,Y_N$ be a sequence of i.i.d. observations, whose distribution is modeled according to (2.1), where the matrices H and D are the unknown parameters. This is the context of Exploratory Factor Analysis, where no constraints are imposed on the matrix H. Let $\widehat{\Sigma}$ denote the sample covariance matrix of the data. If the data have strictly positive covariance, for large enough N, the sample covariance will be strictly positive almost surely. The normal log-likelihood $\ell(H,D)$ yields

$$\frac{1}{N}\ell(H,D) = -\frac{n}{2}\log(2\pi) - \frac{1}{2}\log|HH^\top + D| - \frac{1}{2}\operatorname{tr}\big((HH^\top + D)^{-1}\widehat{\Sigma}\big). \tag{2.6}$$

One immediately sees that $\ell(H,D)$ is, up to constants not depending on H and D, equal to $-{\mathcal{I}}(\widehat{\Sigma}\,||\,HH^\top + D)$. Hence, maximum-likelihood estimation parallels I-divergence minimization in Problem 2.1; only the interpretation is different.
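Comparing (2.5) and (2.6) term by term, the sum $\frac{1}{N}\ell(H,D) + {\mathcal{I}}(\widehat{\Sigma}\,||\,HH^\top+D)$ equals $-\frac{n}{2}\log(2\pi) - \frac{n}{2} - \frac{1}{2}\log|\widehat{\Sigma}|$, which indeed does not depend on (H, D). This is easy to confirm numerically; the sketch below (arbitrary test matrices, function names my own) evaluates the sum for two different parameter pairs:

```python
import numpy as np

def loglik_per_obs(Sigma_hat, H, D):
    """Normal log-likelihood per observation, Eq. (2.6)."""
    n = Sigma_hat.shape[0]
    M = H @ H.T + D
    _, logdet_M = np.linalg.slogdet(M)
    return -n / 2 * np.log(2 * np.pi) - 0.5 * logdet_M \
        - 0.5 * np.trace(np.linalg.solve(M, Sigma_hat))

def i_div(Sigma_hat, H, D):
    """I-divergence, Eq. (2.5)."""
    n = Sigma_hat.shape[0]
    M = H @ H.T + D
    _, logdet_M = np.linalg.slogdet(M)
    _, logdet_S = np.linalg.slogdet(Sigma_hat)
    return 0.5 * (logdet_M - logdet_S) - n / 2 \
        + 0.5 * np.trace(np.linalg.solve(M, Sigma_hat))

rng = np.random.default_rng(2)
n, k = 4, 2
A = rng.standard_normal((n, n))
Sigma_hat = A @ A.T + n * np.eye(n)   # an arbitrary positive definite matrix

# l/N + I evaluates to the same constant for any parameter pair
consts = []
for _ in range(2):
    H = rng.standard_normal((n, k))
    D = np.diag(rng.uniform(0.5, 1.5, n))
    consts.append(loglik_per_obs(Sigma_hat, H, D) + i_div(Sigma_hat, H, D))
```

Both entries of `consts` agree, so maximizing $\ell$ and minimizing $\mathcal{I}$ are the same problem.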

The equations for the maximum-likelihood estimators can be found e.g., in Section 14.3.1 of Anderson (1984). In terms of the unknown parameters H and D, with D assumed to be non-singular, they are

$$H = (\widehat{\Sigma} - HH^\top)D^{-1}H, \tag{2.7}$$
$$D = \Delta(\widehat{\Sigma} - HH^\top), \tag{2.8}$$

where $\Delta (M)$, defined for any square matrix $M$, coincides with $M$ on the diagonal and is zero elsewhere. Note that the matrix $HH^\top +D$ obtained by maximum-likelihood estimation is automatically invertible. It can then be verified that Equation (2.7) is equivalent to

(2.9) $$H=\widehat{\Sigma }(HH^\top +D)^{-1}H,$$

which remains meaningful even when $D$ is not invertible.

It is clear that the system of Equations (2.7)–(2.8) has no explicit solution. For this reason, several numerical algorithms have been devised, among them a version of the EM algorithm; see Rubin and Thayer (1982). In the present paper we consider an alternative approach, which we will compare with the EM algorithm and some of the others.

3. Lifted Version of the Problem

In this section, Problem 2.1 is recast in a higher-dimensional space, making it amenable to solution via two partial minimizations, which will lead, in Section 4.1, to an alternating minimization algorithm. The original $n$-dimensional FA model (2.1) is embedded in a larger $(n+k)$-dimensional linear model as follows:

(3.1) $$V=\begin{pmatrix} Y \\ Z\end{pmatrix} = \begin{pmatrix}H & I_n\\ Q^\top & 0\end{pmatrix}\begin{pmatrix} X \\ \varepsilon \end{pmatrix},$$

where the deterministic matrix $Q^\top $ is square of size $k$, and for the terms $H$, $X$, and $\varepsilon $ the hypotheses leading to model (2.1) still hold. The vector $V$, as well as its subvector $Z$, is therefore zero mean normal, with

(3.2) $${\mathbb {C}}\mathrm {ov}(V)=\begin{pmatrix} HH^\top + D & HQ \\ (HQ)^\top & Q^\top Q \end{pmatrix}.$$

Remark 3.1

The embedding (3.1) has a simple interpretation as a convenient reparametrization of the following alternative version of the standard FA model (2.1),

(3.3) $$Y=LZ +\varepsilon ,$$

where $Z$ and $\varepsilon $ are zero mean normal vectors of sizes $k$ and $n$, respectively, with ${\mathbb {C}}\mathrm {ov}(Z)=P>0$ and ${\mathbb {C}}\mathrm {ov}(\varepsilon )=I_n$. Letting $P=Q^\top Q$ and $X=Q^{-\top }Z$, where $Q$ is any $k\times k$ square root of $P$, one easily recognizes that (3.1) is a reparametrization of (3.3).

In this paper, all vectors are zero mean and normal, with law completely specified by the covariance matrix. The set of all covariance matrices of size $n+k$ will be denoted $\boldsymbol{\Sigma }$. An element $\Sigma \in \boldsymbol{\Sigma }$ can always be decomposed as

(3.4) $$\Sigma =\begin{pmatrix} \Sigma _{11} & \Sigma _{12} \\ \Sigma _{21} & \Sigma _{22} \end{pmatrix},$$

where $\Sigma _{11}$ and $\Sigma _{22}$ are square, of respective sizes $n$ and $k$.

Two subsets of $\boldsymbol{\Sigma }$, comprising covariance matrices with special structure, will play a major role in what follows. The subset $\boldsymbol{\Sigma }_0 \subset \boldsymbol{\Sigma }$ contains all covariances (3.4) with $\Sigma _{11}=\widehat{\Sigma }$, a given matrix, i.e.,

$$\boldsymbol{\Sigma }_0=\{\Sigma \in \boldsymbol{\Sigma }: \Sigma _{11}=\widehat{\Sigma }\}.$$

The generic element of $\boldsymbol{\Sigma }_0$ will often be denoted $\Sigma _0$. Also of interest is the subset $\boldsymbol{\Sigma }_1\subset \boldsymbol{\Sigma }$, containing all covariances (3.4) of the special form (3.2), i.e.,

$$\boldsymbol{\Sigma }_1=\left\{ \Sigma \in \boldsymbol{\Sigma }: \Sigma _{11}=HH^\top + D,\ \Sigma _{12}=HQ,\ \Sigma _{22}=Q^\top Q \text{ with } H, D, Q \text{ as in (3.2)}\right\} .$$

The generic element of $\boldsymbol{\Sigma }_1$ will often be denoted $\Sigma _1$, or $\Sigma (H,D,Q)$ when the parameters are relevant.

We are now ready to pose the following double minimization problem.

Problem 3.2

Find

$$\min _{\Sigma _0\in \boldsymbol{\Sigma }_0,\,\Sigma _1\in \boldsymbol{\Sigma }_1}{\mathcal {I}}(\Sigma _0||\Sigma _1).$$

Problems 3.2 and 2.1 are related by the following proposition.

Proposition 3.3

Let $\widehat{\Sigma }$ be given. It holds that

$$\min _{H,D}\, {\mathcal {I}}\left( \widehat{\Sigma }\,||\,HH^\top + D\right) =\min _{\Sigma _0\in \boldsymbol{\Sigma }_0,\,\Sigma _1\in \boldsymbol{\Sigma }_1}{\mathcal {I}}\left( \Sigma _0||\Sigma _1\right) .$$

Proof

The proof can be found in Finesso and Spreij (2007). $\square $

3.1. Partial Minimization Problems

The first partial minimization, required for the solution of Problem 3.2, is as follows.

Problem 3.4

Given a strictly positive definite covariance matrix $\Sigma \in \boldsymbol{\Sigma }$, find

$$\min _{\Sigma _0 \in \boldsymbol{\Sigma }_0} \, {\mathcal {I}}(\Sigma _0||\Sigma ).$$

The unique solution to this problem can be computed analytically and is given below.

Proposition 3.5

The unique minimizer $\Sigma _0^*$ of Problem 3.4 is given by

$$\Sigma _0^*=\begin{pmatrix} \widehat{\Sigma } & \widehat{\Sigma }\Sigma _{11}^{-1}\Sigma _{12}\\ \Sigma _{21}\Sigma _{11}^{-1}\widehat{\Sigma } & \Sigma _{22}-\Sigma _{21}\Sigma _{11}^{-1} (\Sigma _{11}-\widehat{\Sigma })\Sigma _{11}^{-1}\Sigma _{12} \end{pmatrix} >0.$$

Moreover,

(3.5) $${\mathcal {I}}(\Sigma _0^*||\Sigma )={\mathcal {I}}(\widehat{\Sigma }||\Sigma _{11}),$$

and the Pythagorean rule

$${\mathcal {I}}(\Sigma _0||\Sigma )={\mathcal {I}}(\Sigma _0||\Sigma _0^*)+{\mathcal {I}}(\Sigma _0^*||\Sigma )$$

holds for any strictly positive $\Sigma _0\in \boldsymbol{\Sigma }_0$.
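Proposition 3.5 lends itself to a numerical sanity check. The sketch below (assuming the standard Gaussian Kullback–Leibler formula for the I-divergence; the sizes $n=4$, $k=2$ and the random matrices are arbitrary choices) builds $\Sigma_0^*$ from its closed form and verifies both (3.5) and the Pythagorean rule:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 4, 2

def idiv(S1, S2):
    # I(S1||S2): KL divergence between N(0, S1) and N(0, S2)
    d = S1.shape[0]
    M = np.linalg.solve(S2, S1)
    return 0.5 * (np.trace(M) - np.log(np.linalg.det(M)) - d)

def rand_spd(d):
    A = rng.standard_normal((d, d))
    return A @ A.T + d * np.eye(d)      # strictly positive definite

Sigma = rand_spd(n + k)
Shat = rand_spd(n)
S11, S12 = Sigma[:n, :n], Sigma[:n, n:]
S21, S22 = Sigma[n:, :n], Sigma[n:, n:]
S11i = np.linalg.inv(S11)

# closed-form minimizer of Proposition 3.5
Sigma0_star = np.block([
    [Shat,              Shat @ S11i @ S12],
    [S21 @ S11i @ Shat, S22 - S21 @ S11i @ (S11 - Shat) @ S11i @ S12],
])

# value at the minimum, Eq. (3.5)
assert np.isclose(idiv(Sigma0_star, Sigma), idiv(Shat, S11))

# Pythagorean rule for another strictly positive element of Sigma_0
C = 0.1 * rng.standard_normal((n, k))   # weak coupling keeps Sigma0 positive definite
Sigma0 = np.block([[Shat, C], [C.T, rand_spd(k)]])
lhs = idiv(Sigma0, Sigma)
rhs = idiv(Sigma0, Sigma0_star) + idiv(Sigma0_star, Sigma)
assert np.isclose(lhs, rhs)
```

Intuitively, $\Sigma_0^*$ keeps the conditional law of $Z$ given $Y$ from $\Sigma$ while pinning the marginal of $Y$ to $\widehat{\Sigma}$, which is why the minimal divergence reduces to the marginal term (3.5).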

Proof

See Appendix 2. $\square $

Next we turn to the second partial minimization.

Problem 3.6

Given a strictly positive definite covariance matrix $\Sigma \in \boldsymbol{\Sigma }$, find

$$\min _{\Sigma _1 \in \boldsymbol{\Sigma }_1} \, {\mathcal {I}}(\Sigma ||\Sigma _1).$$

The proposition below gives the explicit solution to this problem.

Proposition 3.7

A minimizer $\Sigma _1^*=\Sigma (H^*,D^*,Q^*)$ of Problem 3.6 is given by

$$Q^*=\Sigma _{22}^{1/2},\qquad H^*= \Sigma _{12}\Sigma _{22}^{-1/2},\qquad D^*= \Delta (\widetilde{\Sigma }_{11}),$$

where

$$\widetilde{\Sigma }_{11}=\Sigma _{11}-\Sigma _{12}\Sigma _{22}^{-1}\Sigma _{21}.$$

The corresponding minimizing matrix is

(3.6) $$\Sigma _1^*=\Sigma (H^*,D^*,Q^*)= \begin{pmatrix} \Sigma _{12}\Sigma _{22}^{-1}\Sigma _{21}+\Delta (\widetilde{\Sigma }_{11}) & \Sigma _{12} \\ \Sigma _{21} & \Sigma _{22} \end{pmatrix}.$$

Moreover, ${\mathcal {I}}(\Sigma ||\Sigma _1^*)= {\mathcal {I}}(\widetilde{\Sigma }_{11}||\Delta (\widetilde{\Sigma }_{11}))$ and the Pythagorean rule

(3.7) $${\mathcal {I}}(\Sigma ||\Sigma _1)={\mathcal {I}}(\Sigma ||\Sigma _1^*)+{\mathcal {I}}(\Sigma _1^*||\Sigma _1)$$

holds for any $\Sigma _1=\Sigma (H,D,Q)\in \boldsymbol{\Sigma }_1$.
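The closed forms of Proposition 3.7 can be checked numerically in the same spirit (again assuming the Gaussian Kullback–Leibler formula for the I-divergence; sizes and random parameters are arbitrary). Only the blocks of (3.6) are needed, so no matrix square root is computed:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 4, 2

def idiv(S1, S2):
    # I(S1||S2): KL divergence between N(0, S1) and N(0, S2)
    d = S1.shape[0]
    M = np.linalg.solve(S2, S1)
    return 0.5 * (np.trace(M) - np.log(np.linalg.det(M)) - d)

def rand_spd(d):
    A = rng.standard_normal((d, d))
    return A @ A.T + d * np.eye(d)

Sigma = rand_spd(n + k)
S11, S12 = Sigma[:n, :n], Sigma[:n, n:]
S21, S22 = Sigma[n:, :n], Sigma[n:, n:]

Stilde = S11 - S12 @ np.linalg.solve(S22, S21)   # Schur complement
Dstar = np.diag(np.diag(Stilde))                 # Delta(Stilde)

# minimizing matrix (3.6)
Sigma1_star = np.block([
    [S12 @ np.linalg.solve(S22, S21) + Dstar, S12],
    [S21,                                     S22],
])

# value at the minimum
assert np.isclose(idiv(Sigma, Sigma1_star), idiv(Stilde, Dstar))

# Pythagorean rule (3.7) for an arbitrary Sigma(H, D, Q)
H = rng.standard_normal((n, k))
D = np.diag(rng.uniform(0.5, 2.0, size=n))
Q = rng.standard_normal((k, k)) + 3 * np.eye(k)  # invertible for this seed
Sigma1 = np.block([[H @ H.T + D, H @ Q], [(H @ Q).T, Q.T @ Q]])
lhs = idiv(Sigma, Sigma1)
rhs = idiv(Sigma, Sigma1_star) + idiv(Sigma1_star, Sigma1)
assert np.isclose(lhs, rhs)
```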

Proof

See Appendix 2. $\square $

Note that Problem 3.6 cannot have a unique solution in terms of the matrices $H$ and $Q$. Indeed, if $U$ is a unitary $k\times k$ matrix and $H'=HU$, $Q'=U^\top Q$, then $H'H'^\top =HH^\top $, $Q'^\top Q'=Q^\top Q$, and $H'Q'=HQ$. Nevertheless, the optimal matrices $HH^\top $, $HQ$, and $Q^\top Q$ are unique, as can easily be checked using the expressions in Proposition 3.7.

Remark 3.8

Note that, since $\Sigma $ is assumed to be strictly positive, $\widetilde{\Sigma }_{11}:=\Sigma _{11}-\Sigma _{12}\Sigma _{22}^{-1}\Sigma _{21}$ is strictly positive too. It follows that $D^*=\Delta (\widetilde{\Sigma }_{11})$ is strictly positive.

We close this section by considering a constrained version of the second partial minimization Problem 3.6, to which we will return in Section 5, when we discuss the connection with the EM algorithm. The constraint that we impose is $Q=Q_0$ fixed, whereas $H$ and $D$ remain free. The set over which the optimization is carried out is $\boldsymbol{\Sigma }_{10}\subset \boldsymbol{\Sigma }_1$, defined as

$$\boldsymbol{\Sigma }_{10}=\left\{ \Sigma \in \boldsymbol{\Sigma }: \Sigma _{11}=HH^\top + D,\ \Sigma _{12}=HQ_0,\ \Sigma _{22}=Q_0^\top Q_0 \text{ with } H, D \text{ as in (3.2)}\right\} .$$

We pose the following constrained optimization problem.

Problem 3.9

Given a strictly positive covariance $\Sigma \in \boldsymbol{\Sigma }$, find

$$\min _{\Sigma _{10} \in \boldsymbol{\Sigma }_{10}} \, {\mathcal {I}}(\Sigma ||\Sigma _{10}).$$

The solution is given in the next proposition.

Proposition 3.10

A solution $\Sigma _{10}^*$ of Problem 3.9 is obtained for $H^*=\Sigma _{12}\Sigma _{22}^{-1}Q_0^\top $ and $D^*$ as in Proposition 3.7, resulting in

(3.8) $$\Sigma _{10}^*=\begin{pmatrix} \Sigma _{12}\Sigma _{22}^{-1}Q_0^\top Q_0\Sigma _{22}^{-1}\Sigma _{21} + \Delta (\widetilde{\Sigma }_{11}) & \Sigma _{12}\Sigma _{22}^{-1}Q_0^\top Q_0 \\ Q_0^\top Q_0\Sigma _{22}^{-1}\Sigma _{21} & Q_0^\top Q_0 \end{pmatrix}.$$

Proof

See Appendix 2. $\square $

For the constrained problem, the Pythagorean rule does not hold. Intuitively, since $\boldsymbol{\Sigma }_{10}\subset \boldsymbol{\Sigma }_1$, the optimal value of the constrained Problem 3.9 is in general higher than the optimal value of the free Problem 3.6. To compute the extra cost incurred, notice that $\Sigma _{10}^* \in \boldsymbol{\Sigma }_1$; therefore the Pythagorean rule (3.7) gives

(3.9) $${\mathcal {I}}(\Sigma ||\Sigma _{10}^*)= {\mathcal {I}}(\Sigma ||\Sigma _1^*)+{\mathcal {I}}(\Sigma _1^*||\Sigma _{10}^*),$$

hence ${\mathcal {I}}(\Sigma ||\Sigma _{10}^*)\ge {\mathcal {I}}(\Sigma ||\Sigma _1^*)$, where $\Sigma _1^*$ is as in Proposition 3.7. The quantity ${\mathcal {I}}(\Sigma _1^*||\Sigma _{10}^*)$ represents the extra cost. An elementary computation gives

(3.10) $${\mathcal {I}}(\Sigma _1^*||\Sigma _{10}^*)={\mathcal {I}}(\Sigma _{22}||Q_0^\top Q_0),$$

i.e., the optimizing matrices $\Sigma _1^*$ and $\Sigma _{10}^*$, see (3.6) and (3.8), coincide iff $Q_0^\top Q_0=\Sigma _{22}$.

Summarizing the comments on the constrained problem: (i) the optimal value at the minimum is higher, since $\boldsymbol{\Sigma}_{10}\subset \boldsymbol{\Sigma}_1$; (ii) the extra cost is given explicitly by (3.10) as $\mathcal{I}(\Sigma_{22}||Q_0^\top Q_0)$; and (iii) there is no analog of the Pythagorean rule (3.7). The conclusion is that the solution $\Sigma_{10}^*$ of the constrained Problem 3.9 is suboptimal for the free Problem 3.6. The consequences of this suboptimality are discussed further in Section 5.

4. Alternating Minimization Algorithm

In this section, the core of the paper, the two partial minimizations of Section 3 are combined into an alternating minimization algorithm for the solution of Problem 2.1. Several equivalent formulations of the update equations are presented, and their properties are discussed.

4.1. The Algorithm

We suppose that the given covariance matrix $\widehat{\Sigma}$ is strictly positive definite. To set up the iterative minimization algorithm, assign initial values $H_0, D_0, Q_0$ to the parameters, with $D_0$ diagonal, $Q_0$ invertible and $H_0H_0^\top + D_0$ invertible. The updating rules are constructed as follows.
Let $H_t, D_t, Q_t$ be the parameters at the $t$-th iteration, and $\Sigma_{1,t} = \Sigma(H_t,D_t,Q_t)$ the corresponding covariance, defined as in (3.2). Now solve the two partial minimizations in cascade.

Here $\Sigma_{0,t}$ denotes the solution of the first partial minimization with input $\Sigma_{1,t}$. To express the resulting update equations in compact form, define

(4.1) $$R_t = I - H_t^\top (H_tH_t^\top + D_t)^{-1}H_t + H_t^\top (H_tH_t^\top + D_t)^{-1}\,\widehat{\Sigma}\,(H_tH_t^\top + D_t)^{-1}H_t.$$

Note that, by Remark 3.8, $H_tH_t^\top + D_t$ is invertible for all $t$, since both $H_0H_0^\top + D_0$ and $Q_0$ have been chosen invertible. It is easy to show that $I - H_t^\top (H_tH_t^\top + D_t)^{-1}H_t$, and consequently $R_t$, are strictly positive definite and therefore invertible. The update equations resulting from the cascade of the two minimizations are

(4.2) $$Q_{t+1} = \big(Q_t^\top R_t Q_t\big)^{1/2},$$
(4.3) $$H_{t+1} = \widehat{\Sigma}(H_tH_t^\top + D_t)^{-1}H_t Q_t Q_{t+1}^{-1},$$
(4.4) $$D_{t+1} = \Delta(\widehat{\Sigma} - H_{t+1}H_{t+1}^\top).$$

Properly choosing the square root in Equation (4.2) makes $Q_t$ disappear from the update equations. This is an attractive feature since, at the $t$-th step of the algorithm, only $H_t$ and $D_t$ are needed to construct the approximation $H_tH_t^\top + D_t$. The choice that accomplishes this is $(Q_t^\top R_t Q_t)^{1/2} = R_t^{1/2}Q_t$, where $R_t^{1/2}$ is a symmetric root of $R_t$, resulting in $Q_{t+1} = R_t^{1/2}Q_t$. Upon substituting this $Q_{t+1}$ in Equation (4.3), one obtains the AML algorithm.

Algorithm 4.1

(AML) Given $H_t$, $D_t$ from the $t$-th step, and $R_t$ as in (4.1), the update equations for an I-divergence minimizing algorithm are

(4.5) $$H_{t+1} = \widehat{\Sigma}(H_tH_t^\top + D_t)^{-1}H_t R_t^{-1/2},$$
(4.6) $$D_{t+1} = \Delta(\widehat{\Sigma} - H_{t+1}H_{t+1}^\top).$$

Since $R_t$ only depends on $H_t$ and $D_t$, see (4.1), the parameter $Q_t$ has been effectively removed from the update equations, although its presence was essential for the derivation.

Remark 4.2

Algorithm 4.1 has one immediately attractive feature: at each step it matches the diagonal of $\widehat{\Sigma}$. Indeed, if we let $\Sigma_t = H_tH_t^\top + D_t$, then it follows from Equation (4.6) that $\Delta(\Sigma_t)=\Delta(\widehat{\Sigma})$.
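To make the iteration concrete, here is a minimal numpy sketch of one AML sweep, combining (4.1), (4.5) and (4.6). This is illustrative code, not the authors' implementation; the helper name `aml_step`, the storage of $D_t$ as a vector, and the eigendecomposition route to the symmetric root of $R_t$ are our own choices.

```python
import numpy as np

def aml_step(H, D, Sigma_hat):
    """One AML sweep, Eqs. (4.5)-(4.6); illustrative sketch, D stored as a vector."""
    k = H.shape[1]
    S_inv = np.linalg.inv(H @ H.T + np.diag(D))    # (H_t H_t^T + D_t)^{-1}
    M = H.T @ S_inv                                # k x n auxiliary matrix
    R = np.eye(k) - M @ H + M @ Sigma_hat @ M.T    # R_t as in (4.1), symmetric positive definite
    # symmetric inverse square root R_t^{-1/2} via an eigendecomposition
    w, V = np.linalg.eigh(R)
    R_inv_sqrt = (V * w ** -0.5) @ V.T
    H_new = Sigma_hat @ S_inv @ H @ R_inv_sqrt     # (4.5)
    D_new = np.diag(Sigma_hat - H_new @ H_new.T)   # (4.6): Delta(.) keeps the diagonal
    return H_new, D_new
```

On a strictly positive definite input, repeated calls produce a monotonically decreasing I-divergence and, as in Remark 4.2, iterates whose diagonal matches that of $\widehat{\Sigma}$.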

Algorithm 4.1 potentially has two drawbacks making its implementation computationally less attractive. To update $H_t$ via Equation (4.5), one has to compute, at each step, the square root of the $k\times k$ matrix $R_t$ and the inverse of the $n\times n$ matrix $H_tH_t^\top + D_t$. The latter problem may in principle be addressed via the matrix inversion lemma, but this requires an invertible $D_t$, which can be problematic in practice when $D_t$ is nearly singular. An alternative that avoids computing square roots at each iteration is to update $\mathcal{H}_t:=H_tH_t^\top$ and $D_t$ as before.

Proposition 4.3

Let $H_t$ be as in Algorithm 4.1. Pick $\mathcal{H}_0=H_0H_0^\top$, and $D_0$ such that $\mathcal{H}_0+D_0$ is invertible. The update equation for $\mathcal{H}_t$ becomes

(4.7) $$\mathcal{H}_{t+1} = \widehat{\Sigma}(\mathcal{H}_t+D_t)^{-1}\mathcal{H}_t\big(D_t+\widehat{\Sigma}(\mathcal{H}_t+D_t)^{-1}\mathcal{H}_t\big)^{-1}\widehat{\Sigma}.$$

Proof

See Appendix 2. $\square$

One can run the update Equation (4.7) for any number $T$ of steps, and then switch back to $H_T$ by taking any $n\times k$ factor of $\mathcal{H}_T$, i.e., solving $\mathcal{H}_T = H_TH_T^\top$. Since Equation (4.7) transforms $\mathcal{H}_t$ into $\mathcal{H}_{t+1}$ preserving the rank, this factorization is always possible.
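The square-root-free recursion (4.7) can be sketched directly on $\mathcal{H}_t$ as follows. Again this is our own illustrative code under the stated assumptions ($\widehat{\Sigma}$ strictly positive definite, $\mathcal{H}_0+D_0$ invertible); the name `aml_step_calH` is hypothetical and $D_t$ is stored as a vector.

```python
import numpy as np

def aml_step_calH(Hcal, D, Sigma_hat):
    """One sweep of (4.7) on Hcal = H H^T, with the D-update of (4.6); a sketch."""
    T = Sigma_hat @ np.linalg.inv(Hcal + np.diag(D)) @ Hcal   # Sigma_hat (Hcal + D)^{-1} Hcal
    Hcal_new = T @ np.linalg.inv(np.diag(D) + T) @ Sigma_hat  # (4.7)
    D_new = np.diag(Sigma_hat - Hcal_new)   # matches (4.6), since Hcal_new = H_{t+1} H_{t+1}^T
    return Hcal_new, D_new
```

Symmetry and rank of $\mathcal{H}_t$ are preserved along the iteration, so a factor $H_T$ with $\mathcal{H}_T=H_TH_T^\top$ can be recovered at any time, e.g. from an eigendecomposition of $\mathcal{H}_T$.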

4.2. Asymptotic Properties

In Proposition 4.4 below, we collect the asymptotic properties of Algorithm 4.1, also quantifying the I-divergence decrease at each step.

Proposition 4.4

For Algorithm 4.1, the following statements hold.

  (a) $H_tH_t^\top \le \widehat{\Sigma}$ for all $t\ge 1$.

  (b) If $D_0>0$ and $\Delta(\widehat{\Sigma} - D_0)>0$, then $D_t>0$ for all $t\ge 1$.

  (c) The matrices $R_t$ are invertible for all $t\ge 1$.

  (d) If $H_tH_t^\top + D_t = \widehat{\Sigma}$, then $H_{t+1}=H_t$, $D_{t+1}=D_t$.

  (e) Decrease of the objective function:
    $$\mathcal{I}(\widehat{\Sigma}||\widehat{\Sigma}_t) - \mathcal{I}(\widehat{\Sigma}||\widehat{\Sigma}_{t+1}) = \mathcal{I}(\Sigma_{1,t+1}||\Sigma_{1,t}) + \mathcal{I}(\Sigma_{0,t}||\Sigma_{0,t+1}),$$
    where $\widehat{\Sigma}_t = H_tH_t^\top + D_t$ is the $t$-th approximation of $\widehat{\Sigma}$, and $\Sigma_{0,t}, \Sigma_{1,t}$ were defined in Section 4.1.
  (f) The interior limit points $(H, D)$ of the algorithm satisfy

    (4.8) $$H = (\widehat{\Sigma}-HH^\top)D^{-1}H, \qquad D = \Delta(\widehat{\Sigma}-HH^\top),$$
    which are the ML Equations (2.7) and (2.8). If $(H, D)$ is a solution to these equations, then $(HU, D)$ is also a solution, for any unitary matrix $U \in \mathbb{R}^{k\times k}$.
  (g) Limit points $(\mathcal{H}, D)$ satisfy
    $$\mathcal{H}=\widehat{\Sigma}(\mathcal{H}+D)^{-1}\mathcal{H}, \qquad D=\Delta(\widehat{\Sigma}-\mathcal{H}).$$

Proof

  (a) This follows from Remark 3.8 and the construction of the algorithm as a combination of the two partial minimizations.

  (b) This similarly follows from Remark 3.8.

  (c) Use the identity $I-H_t^\top (H_tH_t^\top + D_t)^{-1}H_t=(I+H_t^\top D_t^{-1}H_t)^{-1}$ and the fact that $\widehat{\Sigma}$ is nonnegative definite.

  (d) In this case, Equation (4.1) shows that $R_t=I$; substituting this into the update equations yields the conclusion.

  (e) Since the algorithm is the superposition of the two partial minimization problems, the decrease can be expressed as a sum of two I-divergences. The result follows by concatenating Propositions 3.5 and 3.7.

  (f) Assume that all variables converge. Then, from (4.3), it follows that Equation (2.9) holds in the limit. This gives the first of the desired relations; the rest is trivial.

  (g) This follows by inserting the result of (f).

$\square$

In part (f) of Proposition 4.4, we assumed that the limit points $(H, D)$ are interior points. This does not always hold, as it may happen that $D$ contains zeros on the diagonal. See also Section 6.2.

Remark 4.5

Assertions (b) and (c) of Proposition 4.4 agree with the recent results of Adachi (2013) (Lemma 1 and Theorem 1) for the closely related EM algorithm with a strictly positive definite empirical covariance matrix $\widehat{\Sigma}$. We note that assertions (b) and (c) are automatic consequences of our setup: they follow from casting the problem as a double divergence minimization. Indeed, the solutions to the ensuing partial minimization problems are automatically strictly positive definite matrices, as otherwise the minimum divergences would be infinite, which is impossible.

5. Comparison with the EM Algorithm

In Rubin and Thayer (1982), a version of the EM algorithm (see Dempster, Laird, & Rubin, 1977) was put forward in the context of estimation for FA models. The algorithm is as follows, with $R_t$ as in (4.1).

Algorithm 5.1

(EM)

$$\begin{aligned} H_{t+1}&= \widehat{\Sigma}(H_tH_t^\top + D_t)^{-1}H_t R_t^{-1}, \\ D_{t+1}&= \Delta(\widehat{\Sigma}- H_{t+1} R_t H_{t+1}^\top). \end{aligned}$$
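For comparison, one EM sweep of Algorithm 5.1 can be sketched in numpy as below (our own illustrative code, not from the paper; $D_t$ stored as a vector). Note the use of $R_t^{-1}$ rather than $R_t^{-1/2}$ in the $H$-update, and the extra factor $R_t$ in the $D$-update.

```python
import numpy as np

def em_step(H, D, Sigma_hat):
    """One EM sweep (Algorithm 5.1, Rubin-Thayer form); illustrative sketch."""
    k = H.shape[1]
    S_inv = np.linalg.inv(H @ H.T + np.diag(D))    # (H_t H_t^T + D_t)^{-1}
    M = H.T @ S_inv
    R = np.eye(k) - M @ H + M @ Sigma_hat @ M.T    # R_t as in (4.1)
    H_new = Sigma_hat @ S_inv @ H @ np.linalg.inv(R)   # uses R_t^{-1}, not R_t^{-1/2}
    D_new = np.diag(Sigma_hat - H_new @ R @ H_new.T)   # note the extra R_t factor
    return H_new, D_new
```

Being an EM iteration for the Gaussian likelihood, each sweep also decreases $\mathcal{I}(\widehat{\Sigma}\,||\,H_tH_t^\top+D_t)$, but, unlike the AML iterates, the iterates need not match the diagonal of $\widehat{\Sigma}$ exactly at every step.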

The EM Algorithm 5.1 differs in both equations from our AML Algorithm 4.1. It is well known that EM algorithms can be derived as alternating minimizations, see Csiszár and Tusnády (1984); it is therefore interesting to investigate how Algorithm 5.1 can be derived within our framework. Thereto one considers the first partial minimization problem together with the constrained second partial minimization Problem 3.9, the constraint being $Q=Q_0$ for some $Q_0$. Later on we will see that the particular choice of $Q_0$, as long as it is invertible, is irrelevant. The concatenation of these two problems results in the EM Algorithm 5.1, as is detailed below.

Starting at $(H_t, D_t, Q_0)$, one performs the first partial minimization, which results in the matrix

$$\begin{pmatrix} \widehat{\Sigma} & \widehat{\Sigma}(H_tH_t^\top+D_t)^{-1}H_tQ_0 \\ Q_0^\top H_t^\top (H_tH_t^\top+D_t)^{-1}\widehat{\Sigma} & Q_0^\top R_t Q_0 \end{pmatrix}.$$

Performing now the constrained second minimization, according to the results of Proposition 3.10, one obtains

(5.1) $$H_{t+1} = \widehat{\Sigma}(H_tH_t^\top+D_t)^{-1}H_t R_t^{-1}$$
(5.2) $$D_{t+1} = \Delta\big(\widehat{\Sigma}-\widehat{\Sigma}(H_tH_t^\top+D_t)^{-1}H_t R_t^{-1} H_t^\top (H_tH_t^\top+D_t)^{-1}\widehat{\Sigma}\big).$$

Substitution of (5.1) into (5.2) yields

(5.3) $$D_{t+1}=\Delta(\widehat{\Sigma}-H_{t+1}R_tH_{t+1}^\top).$$

One sees that the matrix $Q_0$ does not appear in the recursion, just as the matrices $Q_t$ do not occur in Algorithm 4.1. However, owing to the imposed constraint $Q=Q_0$, the second optimality property present in the construction of Algorithm 4.1 is lost. Moreover, the EM algorithm does not enjoy the diagonal preservation property mentioned in Remark 4.2 for Algorithm 4.1, due to the presence of $R_t$ in Equation (5.3).

Summarizing, both Algorithms 4.1 and 5.1 result from two partial minimization problems. The latter algorithm differs from ours in that its second partial minimization is constrained. In view of the extra cost incurred by the suboptimal constrained minimization, see Equation (3.9), our Algorithm 4.1 yields better performance. We illustrate these considerations with some numerical examples in Section 7.

6. Singular D

It has long been known, see e.g., Jöreskog (1967), that numerical solutions of the ML Equations (2.7), (2.8) often produce a nearly singular matrix D. This motivates the analysis of the minimization Problem 2.1 when D is constrained, at the outset, to be singular (Section 6.1), and the investigation of the consequences for the minimization algorithm of Proposition 4.3 (Section 6.2).

6.1. Approximation with Singular D

In this section, we consider the approximation Problem 2.1 under the constraint $D_2=0$, where

(6.1) $$D=\begin{pmatrix} D_1 & 0 \\ 0 & D_2 \end{pmatrix}=\begin{pmatrix} \widetilde{D} & 0 \\ 0 & 0 \end{pmatrix},$$

with $D_1=\widetilde{D}>0$ of size $n_1$ and $D_2=0$ of size $n_2$. The constrained minimization problem can be formulated as follows.

Problem 6.1

Given $\widehat{\Sigma}>0$ of size $n\times n$ and integers $n_2$ and k, with $n_2 \le k < n$, minimize

$$\mathcal{I}(\widehat{\Sigma}\,||\,HH^\top+D),$$

over (H, D) with D satisfying (6.1).

Remark 6.2

Alternatively, in Jöreskog (1967), the solution of the likelihood Equations (2.8) and (2.9) has been investigated under zero constraints on D. In this section, we work directly on the objective function of Problem 6.1.

To reduce the complexity of Problem 6.1, we now decompose the objective function, choosing a convenient representation of the matrix $H=\begin{pmatrix} H_1 \\ H_2 \end{pmatrix}$, where $H_i$ has $n_i$ rows. Inspired by the parametrization in Jöreskog (1967), we make the following observation. Given any orthogonal matrix Q, define $H' = HQ$; then clearly $H'H'^\top + D = HH^\top + D$.
Let $H_2 = U(0\;\;\Lambda)V^\top$ be the singular value decomposition of $H_2$, with $\Lambda$ a positive definite diagonal matrix of size $n_2\times n_2$, and U and V orthogonal of sizes $n_2\times n_2$ and $k\times k$, respectively. Let

$$H'= HV.$$

The blocks of $H'$ are $H_1' = H_1V$ and $H_2'=(H_{21}'\;\; H_{22}'):=(0\;\; U\Lambda)$, with $H_{21}'\in\mathbb{R}^{n_2\times(k-n_2)}$ and $H_{22}'\in\mathbb{R}^{n_2\times n_2}$. Hence, without loss of generality, in the remainder of this section we assume that

(6.2) $$H=\begin{pmatrix} H_1 \\ H_2 \end{pmatrix} = \begin{pmatrix} H_{11} & H_{12} \\ 0 & H_{22} \end{pmatrix}, \qquad H_{22}\ \text{invertible}.$$
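The reduction to the form (6.2) is constructive. The following sketch (with synthetic sizes and data) rotates a loading matrix by the right singular vectors of $H_2$, permuted so that the zero block comes first, matching the convention $H_2V=(0\;\;U\Lambda)$:

```python
import numpy as np

rng = np.random.default_rng(1)
n1, n2, k = 4, 2, 3                   # example sizes with n2 <= k
H1 = rng.standard_normal((n1, k))
H2 = rng.standard_normal((n2, k))     # full row rank with probability one
H = np.vstack([H1, H2])

# NumPy's SVD returns H2 = U @ S @ Vh with the singular values in the
# leading n2 columns of S; permuting the columns of V moves them to the
# back, so that H2 @ V equals (0  U Lambda) as in the text.
U, s, Vh = np.linalg.svd(H2, full_matrices=True)
V = np.hstack([Vh.T[:, n2:], Vh.T[:, :n2]])   # column permutation, still orthogonal

H_prime = H @ V
H22 = H_prime[n1:, k - n2:]                   # = U @ diag(s), invertible
```

By construction `H_prime` has the block-triangular form (6.2): its lower left $n_2\times(k-n_2)$ block vanishes, $H'H'^\top = HH^\top$, and the lower right block is invertible whenever $H_2$ has full row rank.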

Finally, let

$$K=\widehat{\Sigma}_{12}\widehat{\Sigma}_{22}^{-1}-H_1H_2^\top(H_2H_2^\top)^{-1},$$

which, under (6.2), is equivalent to

$$K=\widehat{\Sigma}_{12}\widehat{\Sigma}_{22}^{-1}-H_{12}H_{22}^{-1}.$$

Here is the announced decomposition of the objective function.

Lemma 6.3

Let D be as in Equation (6.1). The following I-divergence decomposition holds.

(6.3) $$\begin{aligned} \mathcal{I}(\widehat{\Sigma}||HH^{\top}+D) =&\; \mathcal{I}(\widetilde{\Sigma}_{11}||H_{11}H_{11}^{\top}+\widetilde{D})+\mathcal{I}(\widehat{\Sigma}_{22}||H_{22}H_{22}^{\top}) \\ &+\frac{1}{2}\operatorname{tr}\big(\widehat{\Sigma}_{22}K^\top (H_{11}H_{11}^{\top}+\widetilde{D})^{-1}K\big). \end{aligned}$$

Proof

The proof follows from Lemma 9.1. $\square$
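The decomposition can be checked numerically. The sketch below makes two assumptions about notation defined earlier in the paper: that $\mathcal{I}$ is the I-divergence between zero-mean normal laws, $\mathcal{I}(\Sigma_1||\Sigma_2)=\tfrac12\big(\operatorname{tr}(\Sigma_2^{-1}\Sigma_1)-\log\det(\Sigma_2^{-1}\Sigma_1)-n\big)$, and that $\widetilde{\Sigma}_{11}$ denotes the Schur complement $\widehat{\Sigma}_{11}-\widehat{\Sigma}_{12}\widehat{\Sigma}_{22}^{-1}\widehat{\Sigma}_{21}$.

```python
import numpy as np

rng = np.random.default_rng(2)
n1, n2, k = 3, 2, 3
n = n1 + n2

def idiv(S1, S2):
    # I-divergence between N(0, S1) and N(0, S2).
    M = np.linalg.solve(S2, S1)
    return 0.5 * (np.trace(M) - np.linalg.slogdet(M)[1] - S1.shape[0])

A = rng.standard_normal((n, n))
S = A @ A.T + n * np.eye(n)                         # Sigma_hat > 0
S11, S12, S22 = S[:n1, :n1], S[:n1, n1:], S[n1:, n1:]
St11 = S11 - S12 @ np.linalg.solve(S22, S12.T)      # assumed Sigma_tilde_11

# H in the block-triangular form (6.2), D as in (6.1).
H11 = rng.standard_normal((n1, k - n2))
H12 = rng.standard_normal((n1, n2))
H22 = rng.standard_normal((n2, n2))                 # invertible a.s.
H = np.block([[H11, H12], [np.zeros((n2, k - n2)), H22]])
Dt = np.diag(rng.uniform(0.5, 1.5, n1))             # D tilde > 0
D = np.zeros((n, n)); D[:n1, :n1] = Dt

K = S12 @ np.linalg.inv(S22) - H12 @ np.linalg.inv(H22)
C = H11 @ H11.T + Dt
lhs = idiv(S, H @ H.T + D)
rhs = (idiv(St11, C) + idiv(S22, H22 @ H22.T)
       + 0.5 * np.trace(S22 @ K.T @ np.linalg.solve(C, K)))
```

Under these assumptions, (6.3) is exactly the chain rule for the Gaussian I-divergence (marginal on the second block plus expected divergence of the conditionals), so `lhs` and `rhs` agree to machine precision.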

We are now ready to characterize the solution of Problem 6.1. Observe first that the second and third terms on the right-hand side of (6.3) are non-negative and can be made zero. To this end, it is enough to select $H_{22}$ such that $H_{22}H_{22}^{\top}=\widehat{\Sigma}_{22}$ and then $H_{12}=\widehat{\Sigma}_{12}\widehat{\Sigma}_{22}^{-1}H_{22}$. The remaining blocks, $H_{11}$ and $\widetilde{D}$, are determined by minimizing the first term. We have thus proved the following proposition.

Proposition 6.4

Any pair (H, D), as in (6.1) and (6.2), solving Problem 6.1 satisfies

(i) $\mathcal{I}(\widetilde{\Sigma}_{11}||H_{11}H_{11}^{\top}+\widetilde{D})$ is minimized,

(ii) $\mathcal{I}(\widehat{\Sigma}||HH^\top + D) = \mathcal{I}(\widetilde{\Sigma}_{11}||H_{11}H_{11}^{\top}+\widetilde{D})$,

(iii) $H_{22}H_{22}^{\top} = \widehat{\Sigma}_{22}$,

(iv) $H_{12} = \widehat{\Sigma}_{12}\widehat{\Sigma}_{22}^{-1}H_{22}$.

Remark 6.5

In the special case $n_2=k$, the matrices $H_{11}$ and $H_{21}$ are empty, $H_{12}=H_1$, and $H_{22}=H_2$.
From Proposition 6.4, at the minimum, $H_2H_2^{\top}=\widehat{\Sigma}_{22}$, $H_1H_2^\top=\widehat{\Sigma}_{12}$, and $\widetilde{D}$ minimizes $\mathcal{I}(\widetilde{\Sigma}_{11}||\widetilde{D})$. The latter problem has solution $\widetilde{D}=\Delta(\widetilde{\Sigma}_{11})$. It is remarkable that in this case the minimization problem has an explicit solution.
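For $n_2=k$ the explicit solution can be written out directly. The sketch below uses synthetic data, reads $\mathcal{I}$ as the zero-mean Gaussian I-divergence, and reads $\widetilde{\Sigma}_{11}$ as the Schur complement $\widehat{\Sigma}_{11}-\widehat{\Sigma}_{12}\widehat{\Sigma}_{22}^{-1}\widehat{\Sigma}_{21}$ (both assumptions, as the definitions appear earlier in the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
n1, k = 4, 2
n2 = k
n = n1 + n2

def idiv(S1, S2):
    # I-divergence between N(0, S1) and N(0, S2).
    M = np.linalg.solve(S2, S1)
    return 0.5 * (np.trace(M) - np.linalg.slogdet(M)[1] - S1.shape[0])

A = rng.standard_normal((n, n))
S = A @ A.T + n * np.eye(n)                         # Sigma_hat > 0
S11, S12, S22 = S[:n1, :n1], S[:n1, n1:], S[n1:, n1:]
St11 = S11 - S12 @ np.linalg.solve(S22, S12.T)      # assumed Sigma_tilde_11

H2 = np.linalg.cholesky(S22)                        # any square root of S22 works
H1 = S12 @ np.linalg.solve(S22, H2)                 # = S12 S22^{-1} H2
Dt = np.diag(np.diag(St11))                         # D tilde = Delta(Sigma_tilde_11)

H = np.vstack([H1, H2])
D = np.zeros((n, n)); D[:n1, :n1] = Dt
min_div = idiv(S, H @ H.T + D)                      # minimal divergence
```

By Proposition 6.4(ii), the attained divergence equals the divergence of the reduced problem, $\mathcal{I}(\widetilde{\Sigma}_{11}||\Delta(\widetilde{\Sigma}_{11}))$.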

6.2. Algorithm When a Part of D has Zero Diagonal

In Section 6.1, we posed the minimization problem under the additional constraint that the matrix D contains a number of zeros on the diagonal. In the present section, we investigate how this constraint affects the alternating minimization algorithm. For simplicity, we give a detailed account only for the recursion (4.7) for $\mathcal{H}_t$. Initialize the algorithm at $(H_0, D_0)$ with

(6.4) $$D_0 = \begin{pmatrix} \widetilde{D}_0 & 0 \\ 0 & 0 \end{pmatrix},$$

where $\widetilde{D}_0>0$, and

(6.5) $$H_0=\begin{pmatrix} H_{10} \\ H_{20} \end{pmatrix},$$

where $H_{20}\in\mathbb{R}^{n_2\times k}$ is assumed to have full row rank, so that $n_2\le k$. Clearly, $H_0H_0^\top + D_0$ is invertible. For $H_0$ as in Equation (6.5), put

(6.6) $$\widetilde{\mathcal{H}}_0= H_{10}\big(I-H_{20}^\top(H_{20}H_{20}^\top)^{-1}H_{20}\big)H_{10}^\top.$$

Proposition 6.6

Consider the update Equation (4.7). The upper left block $\mathcal{H}^{11}_t$ of $\mathcal{H}_t$ can be computed by running a recursion for $\widetilde{\mathcal{H}}_t := \mathcal{H}^{11}_t-\widehat{\Sigma}_{12}\widehat{\Sigma}_{22}^{-1}\widehat{\Sigma}_{21}$, with initial condition $\widetilde{\mathcal{H}}_0$,

$$\widetilde{\mathcal{H}}_{t+1}= \widetilde{\Sigma}_{11}(\widetilde{\mathcal{H}}_t +\widetilde{D}_t)^{-1}\widetilde{\mathcal{H}}_t\big(\widetilde{D}_t+ \widetilde{\Sigma}_{11}(\widetilde{\mathcal{H}}_t +\widetilde{D}_t)^{-1}\widetilde{\mathcal{H}}_t\big)^{-1}\widetilde{\Sigma}_{11},$$

whereas the blocks on the border of $\mathcal{H}_t$ remain constant. The iterates $D_t$ all have a lower right block of zeros, while the upper left $n_1\times n_1$ block $\widetilde{D}_t$ satisfies

$$\widetilde{D}_t = \Delta(\widetilde{\Sigma}_{11} - \widetilde{\mathcal{H}}_t).$$

Limit points $(\widetilde{\mathcal{H}},\widetilde{D})$ with $\widetilde{D}>0$ satisfy $\widetilde{\mathcal{H}}=\widetilde{\Sigma}_{11}(\widetilde{\mathcal{H}}+\widetilde{D})^{-1}\widetilde{\mathcal{H}}$ and $\widetilde{D}=\Delta(\widetilde{\Sigma}_{11}-\widetilde{\mathcal{H}})$.

Proof

See Appendix 2. $\square$

Note that the recursions of Proposition 6.6 are exactly those that follow from the optimization Problem 6.1. Comparison with (4.7) shows that, while the algorithm for the unconstrained case updates $\mathcal{H}_t$ of size $n\times n$, one now only needs to update $\widetilde{\mathcal{H}}_t$, which is of the smaller size $n_1\times n_1$.

In the special case $n_2=k$, the matrix $\widetilde{\mathcal{H}}_0$ of (6.6) is equal to zero. Therefore, $\widetilde{\mathcal{H}}^{11}_1=\widehat{\Sigma}_{12}\widehat{\Sigma}_{22}^{-1}\widehat{\Sigma}_{21}$, which proves the following corollary.

Corollary 6.7

Let the initial value $D_0$ be as in Equation (6.4) with $n_2=k$. Then, for any initial value $\mathcal{H}_0$, the algorithm converges in one step: the first iterates $D_1$ and $\mathcal{H}_1$, which are equal to the terminal values, are given by

$$\begin{aligned} D_1 &= \begin{pmatrix} \Delta(\widetilde{\Sigma}_{11}) & 0 \\ 0 & 0 \end{pmatrix}, \\ \mathcal{H}_1 &= \begin{pmatrix} \widehat{\Sigma}_{12}\widehat{\Sigma}_{22}^{-1}\widehat{\Sigma}_{21} & \widehat{\Sigma}_{12} \\ \widehat{\Sigma}_{21} & \widehat{\Sigma}_{22} \end{pmatrix}. \end{aligned}$$

It is remarkable that in this case the algorithm reaches in one step the optimal values, which are computed explicitly in Remark 6.5.
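The one-step formulas are easy to verify numerically. In the sketch below we read $\widetilde{\Sigma}_{11}$ as the Schur complement $\widehat{\Sigma}_{11}-\widehat{\Sigma}_{12}\widehat{\Sigma}_{22}^{-1}\widehat{\Sigma}_{21}$ (an assumption about notation introduced earlier in the paper); the numerical values and variable names are ours.

```python
import numpy as np

# Check of Corollary 6.7 on a generic SPD matrix, with n = 6, n1 = 4, n2 = k = 2.
rng = np.random.default_rng(1)
n, k = 6, 2
n1 = n - k
A = rng.standard_normal((n, n))
Sigma = A @ A.T + n * np.eye(n)            # plays the role of "Sigma hat"
S11, S12 = Sigma[:n1, :n1], Sigma[:n1, n1:]
S21, S22 = Sigma[n1:, :n1], Sigma[n1:, n1:]

# Schur complement of S22 (our reading of Sigma_tilde_11)
Schur = S11 - S12 @ np.linalg.solve(S22, S21)

# Terminal values of Corollary 6.7
D1 = np.diag(np.concatenate([np.diag(Schur), np.zeros(k)]))
H1 = np.block([[S12 @ np.linalg.solve(S22, S21), S12],
               [S21, S22]])
```

Under this reading one checks that $\mathcal{H}_1$ is symmetric positive semidefinite of rank $k=n_2$, that its off-diagonal blocks and lower right block coincide with those of $\widehat{\Sigma}$, and that $\mathcal{H}_1+D_1$ matches $\widehat{\Sigma}$ on the diagonal.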

7. Numerical Comparisons with Other Algorithms

We briefly investigate the numerical performance of our AML Algorithm 4.1 and compare it against the performance of other algorithms. The natural competitor of AML is the EM Algorithm 5.1. After the publication of Rubin and Thayer (1982), the EM algorithm has evolved into a cohort of improved alternatives (Liu & Rubin, 1994, 1998, and more recently Zhao et al., 2008), differing from the original EM mainly in aspects of the numerical implementation. Most notably, in the ECME variant (Liu & Rubin, 1998), $H_t$ is updated as in the EM algorithm, but $D_t$ is updated by direct maximization of the likelihood (equivalently, minimization of the I-divergence) with respect to $D$, keeping $H$ fixed at the value $H_{t+1}$. This step cannot be done analytically and is realized by taking a few Newton–Raphson iterations; Liu and Rubin (1998) suggest that one or two iterations are usually sufficient.
The resulting $D_{t+1}$ does not necessarily increase the likelihood with respect to $D_t$; therefore, a check has to be performed, and possibly the iteration has to be repeated with an adjusted step size. The rationale behind ECME is that the advantage in speed afforded by the direct (along the $D$ parameter) maximization of the likelihood outweighs the drawback of having to check each iteration for actual improvement. In the same spirit, we have derived a variant of AML that retains the $H_t$ update Equation (4.5) and replaces the $D_t$ update Equation (4.6) with the same Newton–Raphson iterations as in ECME. We named the resulting algorithm ACML. All numerical experiments in this section should be read as comparisons of the performances of AML and ACML versus EM and ECME.
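For concreteness, the EM baseline can be sketched as follows. This is the classical EM update for the FA model $x = Hz + e$, $z\sim N(0,I_k)$, fitted to a covariance $S$, in the standard form going back to Rubin and Thayer; it is not a transcription of Algorithm 5.1, the function name is ours, and $D$ is stored as a vector of diagonal entries.

```python
import numpy as np

def em_step(S, H, d):
    """One EM update for the approximation S ~ H H^T + D, with D = diag(d).

    Standard exploratory-FA EM recursions: beta = H^T Sigma^{-1} is the
    regression matrix of the factors on the data, M collects the conditional
    second moments of z, and the M-step solves the resulting normal equations.
    """
    n, k = H.shape
    Sigma = H @ H.T + np.diag(d)
    B = np.linalg.solve(Sigma, H).T              # beta = H^T Sigma^{-1}, k x n
    M = np.eye(k) - B @ H + B @ S @ B.T          # E-step second-moment statistic
    H_new = S @ B.T @ np.linalg.inv(M)
    d_new = np.diag(S - H_new @ B @ S).copy()
    return H_new, d_new
```

Each such step cannot decrease the likelihood of $N(0,HH^\top+D)$ against $S$, i.e., cannot increase the I-divergence, which is the monotonicity property shared by AML.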

7.1. Findings

To run the numerical comparisons, we have selected from the published literature five examples of correlation matrices, some of which are well known for being problematic for FA modeling. We have also constructed a sixth data set as an exact FA covariance $\widehat{\Sigma} = HH^\top + D$, selecting the entries of $H$ and $D$ randomly, see below. For each of the six data sets we ran the four algorithms in parallel (sharing the same initial conditions) several times, changing the initial conditions at each run. The figures at the end of the paper are plots of the I-divergence vs. iterations and have been selected to show the typical behaviors of the four algorithms for each data set. The data sets are the following correlation matrices $\widehat{\Sigma}$ of size $n\times n$.

  • Figure 1: Rubin–Thayer, $n=9$, taken from Rubin and Thayer (1982).

  • Figure 2: Maxwell, $n=9$, Table 4 in Maxwell (1961), also analyzed as data set 2 in Jöreskog (1967).

  • Figure 3: Rao, $n=9$, taken from Rao (1955).

  • Figure 4: Harman, $n=8$, Table 5.3 in Harman (1967).

  • Figure 5: Emmett, $n=9$, Table I in Emmett (1949), also analyzed as data set 1 in Jöreskog (1967).

  • Figure 6: The data set is a randomly generated covariance of the standard FA model type, i.e., $\widehat{\Sigma} = HH^\top + \gamma D$, with $n=20$. The elements of $H\in\mathbb{R}^{20\times 4}$ and of $D\in\mathbb{R}^{20\times 20}$ are samples of a uniform distribution on $[1,10]$. For the choice of $\gamma\in\mathbb{R}_+$ see below under (c2).
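The sixth data set can be generated as in the following sketch; the function name is ours, and we read the uniform sampling as applying to the diagonal entries of the diagonal matrix $D$.

```python
import numpy as np

def make_fa_covariance(n=20, k=4, gamma=1.0, rng=None):
    """Randomly generated covariance of the exact FA type: H H^T + gamma * D,
    with the entries of H and the diagonal of D uniform on [1, 10]."""
    rng = np.random.default_rng(rng)
    H = rng.uniform(1.0, 10.0, size=(n, k))
    D = np.diag(rng.uniform(1.0, 10.0, size=n))
    return H @ H.T + gamma * D
```

By construction the result is symmetric positive definite for every $\gamma>0$, and the theoretical minimum divergence at $k=4$ factors is zero.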

Figure 1. Rubin–Thayer.

Figure 2. Maxwell.

Figure 3. Rao.

Figure 4. Harman.

Figure 5. Emmett.

Figure 6. True FA model.

In all numerical experiments, the number of factors has been fixed at $k=4$. Initially it was found that, for a number of runs with different data sets and initial conditions, the ECME algorithm produced negative values for the diagonal matrices $D_t$, caused by a routine application of the Newton–Raphson (NR) algorithm. The NR routine has afterward been improved by implementing the restricted step version of the NR algorithm for both ECME and ACML. In all ECME and ACML experiments, we have consistently taken 2 steps of the NR algorithm at each iteration. To present the findings, we have grouped the data sets into three groups (a.), (b.), and (c.), within which we observed similar behaviors. Different behaviors are ranked according to their limit divergence and speed of convergence, with priority given to the former.
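A restricted step NR iteration of the kind mentioned above can be illustrated generically as follows. This is a schematic safeguard (step-halving until the diagonal iterate stays positive and the objective decreases) for a separable objective with elementwise gradient and Hessian; it is our illustration, not the implementation used in the experiments.

```python
import numpy as np

def restricted_newton_step(f, grad, hess, d, shrink=0.5, max_halvings=20):
    """One Newton step on f at the positive vector d, with the step length
    halved until the new iterate is componentwise positive and f decreases."""
    step = -grad(d) / hess(d)          # elementwise Newton direction
    t = 1.0
    for _ in range(max_halvings):
        d_new = d + t * step
        if np.all(d_new > 0) and f(d_new) <= f(d):
            return d_new
        t *= shrink
    return d                            # no acceptable step found; keep d
```

The positivity check is exactly what prevents the negative diagonal values observed with the unrestricted routine.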

(a1)

Rubin–Thayer data (Figure 1). The graphs of the EM/ECME pair are very similar to those of Liu and Rubin (1998), and we observe that the AML/ACML pair outperforms EM/ECME. The typical ranking for this data set was ACML best, followed by ECME, AML, and EM, in that order. On a few occasions we observed AML best, followed by EM, ACML, and ECME. The ECME was the most sensitive to initial conditions.

(a2)

Maxwell data (Figure 2). The typical ranking for this data set is as above: ACML, ECME, AML, EM, in decreasing order. For both ECME and ACML we have been able to reproduce Table 5 of Jöreskog for the elements of the $D$-matrix, and we also identified the eighth element of the $D$-matrix as $D_8=0$. On a few occasions ACML and ECME displayed very close behaviors, significantly outperforming AML and EM, whose behaviors were also close to each other.

(b1)

Rao data (Figure 3). The typical ranking for this data set was ACML, AML, EM, and ECME. Sometimes it took more than 1500 iterations before a significant drop in the divergence of the best performing algorithm could be seen. The element $D_1$ should be estimated close to zero (Jennrich & Robinson, 1969), which was usually the case for ACML, and for AML and EM with slower convergence. ECME displayed different behaviors (sometimes very good), depending on the initial conditions.

(b2)

Harman data (Figure 4). The typical ranking for this data set was as above: ACML, AML, EM, ECME. In our runs, ACML performed consistently best, whereas ECME consistently exhibited much larger divergences. For this data set, $D_2$ is known to be zero (Jennrich & Robinson, 1969). All runs of ACML have quickly produced $D_2=0$, and sometimes ECME did too, although large deviations have been seen as well. AML and EM exhibited much slower convergence; often 5000 iterations were not enough.

(c1)

Emmett data (Figure 5). The behavior of the four algorithms on this data set is exceptional. Very often AML and EM converged faster, and to smaller values, than ACML and ECME.

(c2)

True FA covariance matrix (Figure 6). Since $\widehat{\Sigma} = HH^\top + \gamma D$, where $H\in\mathbb{R}^{20\times 4}$, and the selected number of factors is $k=4$, this is a perfect modeling situation, with vanishing theoretical minimum divergence. AML is the best performer, reaching null divergence extremely fast, while the ranking of the other algorithms is sensitive to the value of $\gamma$. Figure 6, for $\gamma=10$, shows AML and EM. The pair ACML and ECME behaves much worse and cannot be plotted on the same graph.
For $\gamma=0.1$, the ranking of behaviors is different. AML is still the best, immediately followed by ACML, whereas ECME and EM behave erratically and do not converge to zero. We omitted the figure.

8. Conclusions

Given a positive definite covariance matrix $\widehat{\Sigma}$ of dimension $n$, which may be an empirical covariance, we considered the problem of approximating it with a covariance of the form $HH^\top + D$, where $H$ has a prescribed small number of columns and $D>0$ is diagonal. We have chosen to gauge the quality of the approximation by the I-divergence between the zero mean normal laws with covariances $\widehat{\Sigma}$ and $HH^\top + D$, respectively. By lifting the minimization problem into a larger space, we have been able to develop, from first principles, an optimal procedure to determine a pair $(H,D)$ that minimizes the I-divergence.
As a consequence, the procedure also yields an iterative alternating minimization algorithm (AML) à la Csiszár–Tusnády. As it turns out, the proper choice of the enlarged space is crucial for the optimization. We have obtained a number of theoretical results that are of independent interest. The convergence of the algorithm has also been studied, with special attention given to the case where $D$ is singular. The theoretical properties of AML have been compared to those of the popular EM algorithm for exploratory factor analysis. Inspired by ECME (a Newton–Raphson variation on EM), we also developed a similar variant of AML, called ACML, and in a few numerical experiments we compared the performances of the four algorithms. We have seen that usually the ACML algorithm performed best, in particular better than ECME. In some specific experiments, AML was best, and it always outperformed the EM algorithm.

Acknowledgments

We are indebted to an Associate Editor and three Reviewers for their careful reading of the original manuscript and for their most helpful suggestions to improve the content and the presentation of the paper.

Appendix 1: Decompositions of the I-Divergence

Recall that, for given probability measures $\mathbb{P}_1$ and $\mathbb{P}_2$, defined on the same measurable space and such that $\mathbb{P}_1\ll\mathbb{P}_2$, the I-divergence is defined as

(9.1) $$\mathcal{I}(\mathbb{P}_1||\mathbb{P}_2)=\mathbb{E}_{\mathbb{P}_1}\log\frac{\mathrm{d}\mathbb{P}_1}{\mathrm{d}\mathbb{P}_2},$$

where $\frac{\mathrm{d}\mathbb{P}_1}{\mathrm{d}\mathbb{P}_2}$ denotes the Radon–Nikodym derivative of $\mathbb{P}_1$ w.r.t. $\mathbb{P}_2$. If $\mathbb{P}_1$ and $\mathbb{P}_2$ are distributions on $\mathbb{R}^m$ of a random vector $X$ with corresponding densities $f_1$ and $f_2$ that are everywhere positive, the Radon–Nikodym derivative $\frac{\mathrm{d}\mathbb{P}_1}{\mathrm{d}\mathbb{P}_2}$ becomes the likelihood ratio $\frac{f_1(X)}{f_2(X)}$ and (9.1) reduces to the integral

(9.2) $$\mathcal{I}(\mathbb{P}_1||\mathbb{P}_2)=\int_{\mathbb{R}^m} f_1(x)\log\frac{f_1(x)}{f_2(x)}\,\mathrm{d}x.$$

We note that all subsequent expressions for the I-divergence have similar counterparts in terms of densities. In this section we derive a number of decomposition results for the I-divergence between two probability measures. Similar results are derived in Cramer (Reference Cramer2000), see also Finesso and Spreij (Reference Finesso and Spreij2006) for the discrete case. These decompositions yield the core arguments for the proofs of the propositions in Sections 3.1 and 6.1.
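For the zero mean normal laws used throughout the paper, the integral (9.2) admits a well-known closed form, which is the quantity all the algorithms above minimize. A minimal sketch (the function name is ours):

```python
import numpy as np

def gaussian_i_divergence(Sigma1, Sigma2):
    """I-divergence between N(0, Sigma1) and N(0, Sigma2), both SPD of size m:
    0.5 * (tr(Sigma2^{-1} Sigma1) - m + log det Sigma2 - log det Sigma1)."""
    m = Sigma1.shape[0]
    _, ld1 = np.linalg.slogdet(Sigma1)
    _, ld2 = np.linalg.slogdet(Sigma2)
    return 0.5 * (np.trace(np.linalg.solve(Sigma2, Sigma1)) - m + ld2 - ld1)
```

The divergence vanishes exactly when the two covariances coincide and is positive otherwise; note that it is not symmetric in its arguments.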

Lemma 9.1

Let $\mathbb{P}_{XY}$ and $\mathbb{Q}_{XY}$ be given probability distributions of a Euclidean random vector $(X,Y)$ and denote by $\mathbb{P}_{X|Y}$ and $\mathbb{Q}_{X|Y}$ the corresponding regular conditional distributions of $X$ given $Y$. Assume that $\mathbb{P}_{XY}\ll\mathbb{Q}_{XY}$. Then

(9.3) $$\mathcal{I}(\mathbb{P}_{XY}||\mathbb{Q}_{XY})=\mathcal{I}(\mathbb{P}_Y||\mathbb{Q}_Y)+\mathbb{E}_{\mathbb{P}_Y}\mathcal{I}(\mathbb{P}_{X|Y}||\mathbb{Q}_{X|Y}).$$

Proof

It is easy to see that we also have $\mathbb{P}_Y \ll \mathbb{Q}_Y$. Moreover, the conditional laws are absolutely continuous as well, in the sense that if $0$ is a version of the conditional probability $\mathbb{Q}(X\in B|Y)$, then it is also a version of $\mathbb{P}(X\in B|Y)$. One can show that a conditional version of the Radon-Nikodym theorem applies and that a conditional Radon-Nikodym derivative $\frac{\mathrm{d}\mathbb{P}_{X|Y}}{\mathrm{d}\mathbb{Q}_{X|Y}}$ exists $\mathbb{Q}_Y$-almost surely.
Moreover, one has the $\mathbb{Q}_{XY}$-a.s. factorization

$$\frac{\mathrm{d}\mathbb{P}_{XY}}{\mathrm{d}\mathbb{Q}_{XY}}=\frac{\mathrm{d}\mathbb{P}_{X|Y}}{\mathrm{d}\mathbb{Q}_{X|Y}}\,\frac{\mathrm{d}\mathbb{P}_{Y}}{\mathrm{d}\mathbb{Q}_{Y}}.$$

Taking logarithms on both sides, and expectation under $\mathbb{P}_{XY}$, yields

$$\mathbb{E}_{\mathbb{P}_{XY}}\log\frac{\mathrm{d}\mathbb{P}_{XY}}{\mathrm{d}\mathbb{Q}_{XY}}=\mathbb{E}_{\mathbb{P}_{XY}}\log\frac{\mathrm{d}\mathbb{P}_{X|Y}}{\mathrm{d}\mathbb{Q}_{X|Y}}+\mathbb{E}_{\mathbb{P}_{XY}}\log\frac{\mathrm{d}\mathbb{P}_{Y}}{\mathrm{d}\mathbb{Q}_{Y}}.$$

Writing the first term on the right-hand side as $\mathbb{E}_{\mathbb{P}_{XY}}\{\mathbb{E}_{\mathbb{P}_{XY}}[\log\frac{\mathrm{d}\mathbb{P}_{X|Y}}{\mathrm{d}\mathbb{Q}_{X|Y}}\,|\,Y]\}$, we obtain $\mathbb{E}_{\mathbb{P}_{Y}}\{\mathbb{E}_{\mathbb{P}_{X|Y}}[\log\frac{\mathrm{d}\mathbb{P}_{X|Y}}{\mathrm{d}\mathbb{Q}_{X|Y}}\,|\,Y]\}$. The result follows. $\square$

The decomposition of Lemma 9.1 is useful when solving I-divergence minimization problems with marginal constraints, like the one considered below.
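For finite distributions, the decomposition (9.3) can also be checked numerically. The sketch below is purely illustrative: the function name `kl` and the probability tables `P`, `Q` are our own choices, not part of the paper.

```python
import math

def kl(p, q):
    """I-divergence (Kullback-Leibler divergence) between two finite distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Joint laws P and Q on a 2x3 grid: rows index X, columns index Y.
P = [[0.10, 0.20, 0.05],
     [0.15, 0.30, 0.20]]
Q = [[0.20, 0.10, 0.10],
     [0.25, 0.15, 0.20]]

def marginal_y(J):
    return [sum(J[x][y] for x in range(len(J))) for y in range(len(J[0]))]

def conditional_x_given_y(J, y):
    py = sum(J[x][y] for x in range(len(J)))
    return [J[x][y] / py for x in range(len(J))]

# Left-hand side of (9.3): divergence between the joint laws.
lhs = kl([p for row in P for p in row], [q for row in Q for q in row])

# Right-hand side: marginal term plus the P_Y-averaged conditional term.
Py, Qy = marginal_y(P), marginal_y(Q)
rhs = kl(Py, Qy) + sum(
    Py[y] * kl(conditional_x_given_y(P, y), conditional_x_given_y(Q, y))
    for y in range(3)
)

assert abs(lhs - rhs) < 1e-12
```

The identity is algebraic, so the two sides agree up to floating-point rounding for any choice of the tables with `Q` strictly positive.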

Proposition 9.2

Let $\mathbb{Q}_{XY}$ and $\mathbb{P}^0_Y$ be given probability distributions of a Euclidean random vector $(X,Y)$ and of its subvector $Y$, respectively. Consider the I-divergence minimization problem

$$\min_{\mathbb{P}_{XY}\in\mathcal{P}}\,\mathcal{I}(\mathbb{P}_{XY}||\mathbb{Q}_{XY}),$$

where

$$\mathcal{P}:=\Big\{\mathbb{P}_{XY}\,\Big|\,\int\mathbb{P}_{XY}(\mathrm{d}x,Y)=\mathbb{P}^0_Y\Big\}.$$

If the marginal $\mathbb{P}^0_Y \ll \mathbb{Q}_Y$, then the I-divergence is minimized by $\mathbb{P}^*_{XY}$ specified by the Radon-Nikodym derivative

(9.4) $$\frac{\mathrm{d}\mathbb{P}^*_{XY}}{\mathrm{d}\mathbb{Q}_{XY}}=\frac{\mathrm{d}\mathbb{P}^0_Y}{\mathrm{d}\mathbb{Q}_Y}.$$

Moreover, the Pythagorean rule holds, i.e., for any other distribution $\mathbb{P}_{XY}\in\mathcal{P}$,

(9.5) $$\mathcal{I}(\mathbb{P}_{XY}||\mathbb{Q}_{XY})=\mathcal{I}(\mathbb{P}_{XY}||\mathbb{P}^*_{XY})+\mathcal{I}(\mathbb{P}^*_{XY}||\mathbb{Q}_{XY}),$$

and one also has

(9.6) $$\mathcal{I}(\mathbb{P}^*_{XY}||\mathbb{Q}_{XY})=\mathcal{I}(\mathbb{P}^0_Y||\mathbb{Q}_Y).$$

Proof

The starting point is Equation (9.3), which now takes the form

(9.7) $$\mathcal{I}(\mathbb{P}_{XY}||\mathbb{Q}_{XY})=\mathcal{I}(\mathbb{P}^0_Y||\mathbb{Q}_Y)+\mathbb{E}_{\mathbb{P}_Y}\mathcal{I}(\mathbb{P}_{X|Y}||\mathbb{Q}_{X|Y}).$$

Since the first term on the right-hand side is fixed, the minimizing $\mathbb{P}^*_{XY}$ must satisfy $\mathbb{P}^*_{X|Y}=\mathbb{Q}_{X|Y}$. It follows that $\mathbb{P}^*_{XY}=\mathbb{P}^*_{X|Y}\mathbb{P}^0_Y=\mathbb{Q}_{X|Y}\mathbb{P}^0_Y$, thus verifying (9.4) and (9.6). We finally show that (9.5) holds:

$$\begin{aligned}\mathcal{I}(\mathbb{P}_{XY}||\mathbb{Q}_{XY})&=\mathbb{E}_{\mathbb{P}_{XY}}\log\frac{\mathrm{d}\mathbb{P}_{XY}}{\mathrm{d}\mathbb{P}^*_{XY}}+\mathbb{E}_{\mathbb{P}_{XY}}\log\frac{\mathrm{d}\mathbb{P}^*_{XY}}{\mathrm{d}\mathbb{Q}_{XY}}\\&=\mathcal{I}(\mathbb{P}_{XY}||\mathbb{P}^*_{XY})+\mathbb{E}_{\mathbb{P}_Y}\log\frac{\mathrm{d}\mathbb{P}^0_Y}{\mathrm{d}\mathbb{Q}_Y}\\&=\mathcal{I}(\mathbb{P}_{XY}||\mathbb{P}^*_{XY})+\mathbb{E}_{\mathbb{P}^0_Y}\log\frac{\mathrm{d}\mathbb{P}^0_Y}{\mathrm{d}\mathbb{Q}_Y},\end{aligned}$$

where we used that any $\mathbb{P}_{XY}\in\mathcal{P}$ has $Y$-marginal distribution $\mathbb{P}^0_Y$. $\square$
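Proposition 9.2 is easy to illustrate on finite distributions: build $\mathbb{P}^*$ from (9.4) and check (9.5) and (9.6) directly. In the sketch below, the joint law `Q`, the target marginal `P0y`, the competitor `P`, and all identifiers are our own hypothetical choices.

```python
import math

def kl(p, q):
    """I-divergence between two finite distributions given as flat lists."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Reference joint Q on a 2x2 grid (rows: X, cols: Y) and target Y-marginal P0_Y.
Q = [[0.1, 0.3],
     [0.2, 0.4]]
P0y = [0.5, 0.5]

Qy = [Q[0][0] + Q[1][0], Q[0][1] + Q[1][1]]  # Y-marginal of Q: [0.3, 0.7]

# Minimizer from (9.4): dP*/dQ = dP0_Y/dQ_Y, i.e. P*_{XY} = Q_{X|Y} P0_Y.
Pstar = [[Q[x][y] * P0y[y] / Qy[y] for y in range(2)] for x in range(2)]

# An arbitrary element of the constraint set (same Y-marginal, other conditional).
P = [[0.3, 0.1],
     [0.2, 0.4]]
assert abs((P[0][0] + P[1][0]) - P0y[0]) < 1e-12  # P has Y-marginal P0_Y

flat = lambda J: [v for row in J for v in row]

# (9.6): the minimum equals the divergence between the Y-marginals.
assert abs(kl(flat(Pstar), flat(Q)) - kl(P0y, Qy)) < 1e-12

# (9.5): the Pythagorean rule.
lhs = kl(flat(P), flat(Q))
rhs = kl(flat(P), flat(Pstar)) + kl(flat(Pstar), flat(Q))
assert abs(lhs - rhs) < 1e-12
```

Both assertions hold exactly (up to rounding) because `Pstar` copies the conditional law of `Q` and only replaces its $Y$-marginal, exactly as the proof prescribes.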

The results above can be extended to the case of a random vector $(X,Y):=(X,Y_1,\dots,Y_m)$, i.e., $Y$ consists of $m$ random subvectors $Y_i$. For any probability distribution $\mathbb{P}_{XY}$ on $(X,Y)$, consider the conditional distributions $\mathbb{P}_{Y_i|X}$ and define the probability distribution $\widetilde{\mathbb{P}}_{XY}$ on $(X,Y)$:

$$\widetilde{\mathbb{P}}_{XY}=\prod_i\mathbb{P}_{Y_i|X}\,\mathbb{P}_X.$$

Note that, under $\widetilde{\mathbb{P}}_{XY}$, the $Y_i$ are conditionally independent given $X$. The following lemma sharpens Lemma 9.1.

Lemma 9.3

Let $\mathbb{P}_{XY}$ and $\mathbb{Q}_{XY}$ be given probability distributions of a Euclidean random vector $(X,Y):=(X,Y_1,\dots,Y_m)$. Assume that $\mathbb{P}_{XY}\ll\mathbb{Q}_{XY}$ and that, under $\mathbb{Q}_{XY}$, the subvectors $Y_i$ of $Y$ are conditionally independent given $X$. Then

$$\mathcal{I}(\mathbb{P}_{XY}||\mathbb{Q}_{XY})=\mathcal{I}(\mathbb{P}_{XY}||\widetilde{\mathbb{P}}_{XY})+\sum_i\mathbb{E}_{\mathbb{P}_X}\mathcal{I}(\mathbb{P}_{Y_i|X}||\mathbb{Q}_{Y_i|X})+\mathcal{I}(\mathbb{P}_X||\mathbb{Q}_X).$$

Proof

The proof runs along the same lines as the proof of Lemma 9.1. $\square$

The decomposition of Lemma 9.3 is useful when solving I-divergence minimization problems with conditional independence constraints, like the one considered below.

Proposition 9.4

Let $\mathbb{P}_{XY}$ be a given probability distribution of a Euclidean random vector $(X,Y):=(X,Y_1,\dots,Y_m)$. Consider the I-divergence minimization problem

$$\min_{\mathbb{Q}_{XY}\in\mathcal{Q}}\,\mathcal{I}(\mathbb{P}_{XY}||\mathbb{Q}_{XY}),$$

where

$$\mathcal{Q}:=\Big\{\mathbb{Q}_{XY}\,\Big|\,\mathbb{Q}_{Y_1,\dots,Y_m|X}=\prod_i\mathbb{Q}_{Y_i|X}\Big\}.$$

If $\mathbb{P}_{XY}\ll\mathbb{Q}_{XY}$ for some $\mathbb{Q}_{XY}\in\mathcal{Q}$, then the I-divergence is minimized by

$$\mathbb{Q}^*_{XY}=\widetilde{\mathbb{P}}_{XY}.$$

Moreover, the Pythagorean rule holds, i.e., for any $\mathbb{Q}_{XY}\in\mathcal{Q}$,

$$\mathcal{I}(\mathbb{P}_{XY}||\mathbb{Q}_{XY})=\mathcal{I}(\mathbb{P}_{XY}||\mathbb{Q}^*_{XY})+\mathcal{I}(\mathbb{Q}^*_{XY}||\mathbb{Q}_{XY}).$$

Proof

From the right-hand side of the identity in Lemma 9.3, we see that the first I-divergence is not involved in the minimization, whereas the other two can be made equal to zero by selecting $\mathbb{Q}_{Y_i|X}=\mathbb{P}_{Y_i|X}$ and $\mathbb{Q}_X=\mathbb{P}_X$. This shows that the minimizing $\mathbb{Q}^*_{XY}$ is equal to $\widetilde{\mathbb{P}}_{XY}$. To prove the Pythagorean rule, we first observe that trivially

(9.8) $$\mathcal{I}(\mathbb{P}_{XY}||\mathbb{Q}^*_{XY})=\mathcal{I}(\mathbb{P}_{XY}||\widetilde{\mathbb{P}}_{XY}).$$

Next we apply the identity in Lemma 9.3 with $\mathbb{Q}^*_{XY}$ replacing $\mathbb{P}_{XY}$. In this case the corresponding $\widetilde{\mathbb{Q}}^*_{XY}$ obviously equals $\mathbb{Q}^*_{XY}$ itself. Hence the identity reads

(9.9) $$\begin{aligned}\mathcal{I}(\mathbb{Q}^*_{XY}||\mathbb{Q}_{XY})&=\sum_i\mathbb{E}_{\mathbb{Q}^*_X}\mathcal{I}(\mathbb{Q}^*_{Y_i|X}||\mathbb{Q}_{Y_i|X})+\mathcal{I}(\mathbb{Q}^*_X||\mathbb{Q}_X)\\&=\sum_i\mathbb{E}_{\mathbb{P}_X}\mathcal{I}(\mathbb{P}_{Y_i|X}||\mathbb{Q}_{Y_i|X})+\mathcal{I}(\mathbb{P}_X||\mathbb{Q}_X),\end{aligned}$$

by definition of $\mathbb{Q}^*_{XY}$. Adding up Equations (9.8) and (9.9) gives the result. $\square$
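Proposition 9.4 can likewise be checked numerically: with binary $X$, $Y_1$, $Y_2$, construct $\widetilde{\mathbb{P}}_{XY}$ from the conditionals of a joint law and verify the Pythagorean rule against an arbitrary member of $\mathcal{Q}$. All distributions and names below are hypothetical illustrations.

```python
import math
from itertools import product

def kl(p, q):
    """I-divergence between distributions given as dicts over the same keys."""
    return sum(p[k] * math.log(p[k] / q[k]) for k in p if p[k] > 0)

# A joint law P on (X, Y1, Y2), all binary; keys are (x, y1, y2).
P = {(0,0,0): 0.05, (0,0,1): 0.10, (0,1,0): 0.15, (0,1,1): 0.10,
     (1,0,0): 0.20, (1,0,1): 0.05, (1,1,0): 0.10, (1,1,1): 0.25}

Px  = {x: sum(P[x,a,b] for a in (0,1) for b in (0,1)) for x in (0,1)}
Py1 = {(a,x): sum(P[x,a,b] for b in (0,1)) / Px[x] for x in (0,1) for a in (0,1)}
Py2 = {(b,x): sum(P[x,a,b] for a in (0,1)) / Px[x] for x in (0,1) for b in (0,1)}

# Minimizer Q* = P~: product of the P-conditionals of Y1, Y2 given X, times P_X.
Qstar = {(x,a,b): Py1[a,x] * Py2[b,x] * Px[x]
         for x, a, b in product((0,1), repeat=3)}

# An arbitrary element of Q (Y1 and Y2 conditionally independent given X).
qx = {0: 0.5, 1: 0.5}
q1 = {(0,0): 0.7, (1,0): 0.3, (0,1): 0.4, (1,1): 0.6}  # keys (y1, x)
q2 = {(0,0): 0.2, (1,0): 0.8, (0,1): 0.5, (1,1): 0.5}  # keys (y2, x)
Q = {(x,a,b): qx[x] * q1[a,x] * q2[b,x] for x, a, b in product((0,1), repeat=3)}

# Pythagorean rule of Proposition 9.4.
lhs = kl(P, Q)
rhs = kl(P, Qstar) + kl(Qstar, Q)
assert abs(lhs - rhs) < 1e-12
```

The check succeeds for any strictly positive choices because, per Lemma 9.3, both sides decompose into the same three terms.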

Appendix 2: Proofs of the Technical Results

Proof of Proposition 3.5

(First partial minimization). Consider the setup and the notation of Proposition 9.2. Identify $\mathbb{Q}$ with the normal $N(0,\Sigma)$, and $\mathbb{P}$ with $N(0,\Sigma_0)$. By virtue of (9.4), the optimal $\mathbb{P}^*$ is a zero mean normal whose covariance matrix can be computed using the properties of conditional normal distributions. In particular,

$$\begin{aligned}\Sigma^*_{21}=\mathbb{E}_{\mathbb{P}^*}XY^\top&=\mathbb{E}_{\mathbb{P}^*}(\mathbb{E}_{\mathbb{P}^*}[X|Y]Y^\top)\\&=\mathbb{E}_{\mathbb{P}^*}(\mathbb{E}_{\mathbb{Q}}[X|Y]Y^\top)\\&=\mathbb{E}_{\mathbb{P}^*}(\Sigma_{21}\Sigma_{11}^{-1}YY^\top)\\&=\Sigma_{21}\Sigma_{11}^{-1}\mathbb{E}_{\mathbb{P}^0}YY^\top\\&=\Sigma_{21}\Sigma_{11}^{-1}\widehat{\Sigma}.\end{aligned}$$

Likewise

$$\Sigma^*_{22}=\Sigma_{22}-\Sigma_{21}\Sigma_{11}^{-1}\Sigma_{12}+\Sigma_{21}\Sigma_{11}^{-1}\widehat{\Sigma}\,\Sigma_{11}^{-1}\Sigma_{12}.$$

To prove that $\Sigma_0^*$ is strictly positive, note first that $\Sigma^*_{11}=\widehat{\Sigma}>0$ by assumption. To conclude, since $\Sigma>0$, it is enough to note that

$$\Sigma^*_{22}-\Sigma^*_{21}(\Sigma^*_{11})^{-1}\Sigma^*_{12}=\Sigma_{22}-\Sigma_{21}\Sigma_{11}^{-1}\Sigma_{12}.$$

Finally, the relation $\mathcal{I}(\Sigma_0^*||\Sigma)=\mathcal{I}(\widehat{\Sigma}||\Sigma_{11})$ is Equation (9.6) adapted to the present situation. The Pythagorean rule follows from this relation and Equation (9.7). $\square$
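The covariance formulas above, together with the relation $\mathcal{I}(\Sigma_0^*||\Sigma)=\mathcal{I}(\widehat{\Sigma}||\Sigma_{11})$, can be verified numerically using the closed form of the I-divergence between zero-mean normals. In the sketch below, the block sizes, the random seed, and all identifiers are our own choices.

```python
import numpy as np

def gauss_kl(Sa, Sb):
    """I-divergence between the zero-mean normals N(0, Sa) and N(0, Sb)."""
    n = Sa.shape[0]
    M = np.linalg.solve(Sb, Sa)
    return 0.5 * (np.trace(M) - n - np.log(np.linalg.det(M)))

rng = np.random.default_rng(0)

# Joint covariance Sigma > 0 of (Y, X): block 1 is Y (dim 3), block 2 is X (dim 2),
# matching the block indexing used in the proof.
A = rng.standard_normal((5, 5))
Sigma = A @ A.T + 5 * np.eye(5)
S11, S12 = Sigma[:3, :3], Sigma[:3, 3:]
S21, S22 = Sigma[3:, :3], Sigma[3:, 3:]

# Prescribed Y-marginal Sigma_hat > 0.
B = rng.standard_normal((3, 3))
Shat = B @ B.T + np.eye(3)

# Optimal covariance Sigma_0^* assembled from the formulas in the proof.
S21s = S21 @ np.linalg.solve(S11, Shat)
S22s = (S22 - S21 @ np.linalg.solve(S11, S12)
        + S21 @ np.linalg.solve(S11, Shat) @ np.linalg.solve(S11, S12))
Sigma0s = np.block([[Shat, S21s.T], [S21s, S22s]])

# Schur complement identity used to prove Sigma_0^* > 0.
assert np.allclose(S22s - S21s @ np.linalg.solve(Shat, S21s.T),
                   S22 - S21 @ np.linalg.solve(S11, S12))

# The minimum value: I(Sigma_0^* || Sigma) = I(Sigma_hat || Sigma_11).
assert abs(gauss_kl(Sigma0s, Sigma) - gauss_kl(Shat, S11)) < 1e-8
```

The last assertion reflects that $\mathbb{P}^*$ copies the conditional law $\mathbb{Q}_{X|Y}$, so the divergence reduces to that between the $Y$-marginals.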

Proof of Proposition 3.7

(Second partial minimization). We adhere to the setting and the notation of Proposition 9.4. Identify $\mathbb{P}=\mathbb{P}_{XY}$ with the normal distribution $N(0,\Sigma)$ and $\mathbb{Q}=\mathbb{Q}_{XY}$ with the normal $N(0,\Sigma_1)$, where $\Sigma_1\in\varvec{\Sigma}_1$. The optimal $\mathbb{Q}^*=\mathbb{Q}^*_{XY}$ is again normal and specified by its (conditional) mean and covariance matrix.
Since $\mathbb{Q}^*_{Y_i|X}=\mathbb{P}_{Y_i|X}$ for all $i$, we have $\mathbb{E}_{\mathbb{Q}^*}[Y|X]=\mathbb{E}_{\mathbb{P}}[Y|X]=\Sigma_{12}\Sigma_{22}^{-1}X$; moreover, $\mathbb{Q}^*_X=\mathbb{P}_X$. Hence we find

$$\Sigma^*_{12} = \mathbb{E}_{\mathbb{Q}^*} YX^\top = \mathbb{E}_{\mathbb{Q}^*}\mathbb{E}_{\mathbb{Q}^*}[Y|X]\,X^\top = \mathbb{E}_{\mathbb{P}}\mathbb{E}_{\mathbb{P}}[Y|X]\,X^\top = \Sigma_{12}.$$

Furthermore, under $\mathbb{Q}^*$, the $Y_i$ are conditionally independent given $X$. Then $\mathbb{C}\mathrm{ov}_{\mathbb{Q}^*}(Y_i,Y_j|X)=0$ for $i\ne j$, whereas $\mathbb{V}\mathrm{ar}_{\mathbb{Q}^*}(Y_i|X)=\mathbb{V}\mathrm{ar}_{\mathbb{P}}(Y_i|X)$, which is the $ii$-element of $\widetilde{\Sigma}_{11}:=\Sigma_{11}-\Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}$, from which it follows that $\mathbb{C}\mathrm{ov}_{\mathbb{Q}^*}(Y|X)=\Delta(\widetilde{\Sigma}_{11})$. We can now evaluate

$$\begin{aligned} \Sigma^*_{11} = \mathbb{C}\mathrm{ov}_{\mathbb{Q}^*}(Y) &= \mathbb{E}_{\mathbb{Q}^*}YY^\top \\ &= \mathbb{E}_{\mathbb{Q}^*}\Big(\mathbb{E}_{\mathbb{Q}^*}[Y|X]\,\mathbb{E}_{\mathbb{Q}^*}[Y|X]^\top + \mathbb{C}\mathrm{ov}_{\mathbb{Q}^*}(Y|X)\Big) \\ &= \mathbb{E}_{\mathbb{Q}^*}\Big(\Sigma_{12}\Sigma_{22}^{-1}XX^\top\Sigma_{22}^{-1}\Sigma_{21} + \Delta(\widetilde{\Sigma}_{11})\Big) \\ &= \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21} + \Delta(\widetilde{\Sigma}_{11}). \end{aligned}$$

The Pythagorean rule follows from the general result of Proposition 9.4. $\square$
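The covariance structure of the optimal $\mathbb{Q}^*$ is easy to check numerically. Below is a minimal sketch in Python/NumPy (variable names are ours, not from the paper; $Y$ is taken as the first block of the joint vector and $X$ as the second). It builds a random positive definite joint covariance, forms $\Sigma^*_{11}=\Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}+\Delta(\widetilde{\Sigma}_{11})$, and confirms that under $\mathbb{Q}^*$ the conditional covariance of $Y$ given $X$ is the diagonal matrix $\Delta(\widetilde{\Sigma}_{11})$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Joint covariance of (Y, X): Y is n-dimensional, X is k-dimensional.
n, k = 4, 2
A = rng.standard_normal((n + k, n + k))
Sigma = A @ A.T + (n + k) * np.eye(n + k)   # positive definite by construction
S11, S12 = Sigma[:n, :n], Sigma[:n, n:]
S21, S22 = Sigma[n:, :n], Sigma[n:, n:]

# Conditional covariance of Y given X, and its diagonal part Delta(.)
S11_tilde = S11 - S12 @ np.linalg.solve(S22, S21)
Delta = np.diag(np.diag(S11_tilde))

# Covariance of Y under the optimal Q*: cross block unchanged, Y-block as in the proof
S11_star = S12 @ np.linalg.solve(S22, S21) + Delta

# Under Q*, the conditional covariance of Y given X is Delta(S11_tilde): diagonal,
# with the same diagonal as under P
cond_star = S11_star - S12 @ np.linalg.solve(S22, S21)
assert np.allclose(cond_star, Delta)
assert np.allclose(np.diag(cond_star), np.diag(S11_tilde))
print("Q* covariance structure verified")
```

The off-diagonal entries of the conditional covariance vanish under $\mathbb{Q}^*$, exactly reflecting the conditional independence of the $Y_i$ given $X$.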

Proof of Proposition 3.10

(Constrained second partial minimization). We can still apply Lemma 9.3 and Proposition 9.4, with the proviso that the marginal distribution of $X$ is fixed at some $\mathbb{Q}^0_X$. The optimal distribution $\mathbb{Q}^*_{XY}$ will therefore take the form $\mathbb{Q}^*_{XY}=\prod_i \mathbb{P}_{Y_i|X}\,\mathbb{Q}^0_X$.

Turning to the explicit computation of the optimal normal law, inspection of the proof of Proposition 3.7 reveals that under $\mathbb{Q}^*$ we have $\mathbb{E}_{\mathbb{Q}^*} YX^\top = \Sigma_{12}\Sigma_{22}^{-1}Q_0^\top Q_0$ and

$$\mathbb{C}\mathrm{ov}_{\mathbb{Q}^*}(Y)=\Delta(\widetilde{\Sigma}_{11})+\Sigma_{12}\Sigma_{22}^{-1}Q_0^\top Q_0\Sigma_{22}^{-1}\Sigma_{21},$$

thus completing the proof. $\square$

Proof of Proposition 4.3

(Update rule for $\mathcal{H}_t$). From Equation (4.5) one immediately gets

$$\mathcal{H}_{t+1}= H_{t+1}H_{t+1}^\top = \widehat{\Sigma}(\mathcal{H}_t+D_t)^{-1}H_tR_t^{-1}H_t^\top(\mathcal{H}_t+D_t)^{-1}\widehat{\Sigma}. \tag{10.1}$$

The key step in the proof is an application of the elementary identity (see, e.g., Exercise 16(h) of Chapter 5 in Searle, 1982)

$$(I+H^\top P H)^{-1}H^\top = H^\top(I+PHH^\top)^{-1},$$

valid for all $H$ and $P$ of appropriate dimensions for which both inverses exist. We have already seen that $R_t$ is invertible and of the type $I+H^\top PH$. Following this recipe, we compute

$$\begin{aligned} R_t^{-1}H_t^\top &= H_t^\top\big(I-(\mathcal{H}_t+D_t)^{-1}\mathcal{H}_t+(\mathcal{H}_t+D_t)^{-1}\widehat{\Sigma}(\mathcal{H}_t+D_t)^{-1}\mathcal{H}_t\big)^{-1}\\ &= H_t^\top\big((\mathcal{H}_t+D_t)^{-1}D_t+(\mathcal{H}_t+D_t)^{-1}\widehat{\Sigma}(\mathcal{H}_t+D_t)^{-1}\mathcal{H}_t\big)^{-1}\\ &= H_t^\top\big(D_t+\widehat{\Sigma}(\mathcal{H}_t+D_t)^{-1}\mathcal{H}_t\big)^{-1}(\mathcal{H}_t+D_t). \end{aligned}$$

Insertion of this result into (10.1) yields (4.7). $\square$
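The push-through identity quoted from Searle (1982) is easy to confirm numerically; here is a minimal NumPy sketch (dimensions and the random draws are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 5, 2

# Random H (n x k) and symmetric P (n x n); for generic draws both
# I + H^T P H and I + P H H^T are invertible.
H = rng.standard_normal((n, k))
P = rng.standard_normal((n, n))
P = (P + P.T) / 2

# (I + H^T P H)^{-1} H^T  versus  H^T (I + P H H^T)^{-1}
lhs = np.linalg.solve(np.eye(k) + H.T @ P @ H, H.T)
rhs = H.T @ np.linalg.inv(np.eye(n) + P @ H @ H.T)
assert np.allclose(lhs, rhs)
print("push-through identity verified")
```

Note that the identity moves the inverse from a $k\times k$ matrix to an $n\times n$ one (or vice versa), which is precisely what allows the $k\times k$ matrix $R_t$ to be eliminated from the update.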

Proof of Proposition 6.6

(Update rule for $\mathcal{H}_t$, singular case). It is sufficient to show this for one iteration. We start from Equation (4.7) with $t=0$ and compute the value of $\mathcal{H}_1$. To that end we first obtain, under the present assumption, an expression for the matrix $(\mathcal{H}_0+D_0)^{-1}\mathcal{H}_0$. Let $P=I-H_{20}^\top(H_{20}H_{20}^\top)^{-1}H_{20}$. It holds that

$$(\mathcal{H}_0+D_0)^{-1}\mathcal{H}_0= \begin{pmatrix} (\widetilde{D}_0+H_{10}PH_{10}^\top)^{-1}H_{10}PH_{10}^\top & 0 \\ (H_{20}H_{20}^\top)^{-1}H_{20}H_{10}^\top(\widetilde{D}_0+H_{10}PH_{10}^\top)^{-1}\widetilde{D}_0 & I \end{pmatrix}, \tag{10.2}$$

as one can easily verify by multiplying this equation by $\mathcal{H}_0+D_0$. We also need the inverse of $D_0+\widehat{\Sigma}(\mathcal{H}_0+D_0)^{-1}\mathcal{H}_0$, postmultiplied by $\widehat{\Sigma}$. Introduce $U=\widetilde{D}_0+\widetilde{\Sigma}_{11}(H_{10}PH_{10}^\top+\widetilde{D}_0)^{-1}H_{10}PH_{10}^\top$ and

$$V=\widehat{\Sigma}_{22}^{-1}\widehat{\Sigma}_{21}(H_{10}PH_{10}^\top+\widetilde{D}_0)^{-1}+(H_{20}H_{20}^\top)^{-1}H_{20}H_{10}^\top(H_{20}H_{20}^\top)^{-1}\widetilde{D}_0.$$

It follows that

$$\big(D_0+\widehat{\Sigma}(\mathcal{H}_0+D_0)^{-1}\mathcal{H}_0\big)^{-1}\widehat{\Sigma}= \begin{pmatrix} U^{-1}\widetilde{\Sigma}_{11} & 0 \\ -VU^{-1}\widetilde{\Sigma}_{11}+\widehat{\Sigma}_{22}^{-1}\widehat{\Sigma}_{21} & I \end{pmatrix}. \tag{10.3}$$

Insertion of the expressions (10.2) and (10.3) into (4.7) yields the result. The equations for the stationary points follow as before. $\square$
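The block formula (10.2) can be checked exactly as the proof suggests: multiply the claimed right-hand side by $\mathcal{H}_0+D_0$ and confirm that $\mathcal{H}_0$ is recovered. A minimal NumPy sketch (block sizes and the structure $D_0=\operatorname{diag}(\widetilde{D}_0,0)$ are our reading of the singular case; $H_{20}$ is taken with full row rank so that $H_{20}H_{20}^\top$ is invertible):

```python
import numpy as np

rng = np.random.default_rng(2)
m, q, k = 4, 2, 3   # H10 is m x k, H20 is q x k (q <= k, full row rank)

H10 = rng.standard_normal((m, k))
H20 = rng.standard_normal((q, k))
H0 = np.vstack([H10, H20])
calH0 = H0 @ H0.T

# Singular case: D0 vanishes on the second block
D_tilde = np.diag(rng.uniform(0.5, 2.0, size=m))
D0 = np.block([[D_tilde, np.zeros((m, q))],
               [np.zeros((q, m)), np.zeros((q, q))]])

# P = I - H20^T (H20 H20^T)^{-1} H20: orthogonal projector onto ker(H20)
P = np.eye(k) - H20.T @ np.linalg.solve(H20 @ H20.T, H20)

# The claimed block expression (10.2)
W = D_tilde + H10 @ P @ H10.T
M = np.block([
    [np.linalg.solve(W, H10 @ P @ H10.T), np.zeros((m, q))],
    [np.linalg.solve(H20 @ H20.T, H20 @ H10.T) @ np.linalg.solve(W, D_tilde),
     np.eye(q)],
])

# Multiplying by calH0 + D0 must return calH0
assert np.allclose((calH0 + D0) @ M, calH0)
print("formula (10.2) verified")
```

The check works block by block: the projector $P$ kills the $H_{20}$ component, and the decomposition $H_{10}H_{10}^\top = H_{10}PH_{10}^\top + H_{10}(I-P)H_{10}^\top$ does the rest.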

Footnotes

1. Note that, since $\widehat{\Sigma}>0$, the I-divergence $\mathcal{I}(\widehat{\Sigma}\,\|\,HH^\top+D)$ is finite if and only if $HH^\top+D$ is invertible. This condition will always be assumed, without real loss of generality, since the problem is to minimize the I-divergence.

2. For any non-negative definite matrix $M\ge 0$, a square root is any matrix $N$ such that $M=NN^\top$. In general the square root is not unique, but if $M>0$ the symmetric square root is unique. A common notation for a square root of $M$ is $M^{1/2}$.
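Both claims in this footnote can be illustrated in a few lines of NumPy (the rotation angle and dimensions are arbitrary choices of ours): right-multiplying any square root by an orthogonal matrix gives another square root, while the symmetric square root of a positive definite $M$ is obtained from its eigendecomposition.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3

# Any M = N N^T admits other square roots N R with R orthogonal
N = rng.standard_normal((n, n))
M = N @ N.T

theta = 0.7  # arbitrary rotation in the first two coordinates
R = np.eye(n)
R[:2, :2] = [[np.cos(theta), -np.sin(theta)],
             [np.sin(theta),  np.cos(theta)]]
N2 = N @ R
assert np.allclose(N2 @ N2.T, M)   # a different square root of the same M

# For M > 0 the symmetric square root: M^{1/2} = V diag(sqrt(w)) V^T
w, V = np.linalg.eigh(M)
M_half = V @ np.diag(np.sqrt(w)) @ V.T
assert np.allclose(M_half, M_half.T)
assert np.allclose(M_half @ M_half, M)
print("square roots verified")
```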

References

Adachi, K. (2013). Factor analysis with EM algorithm never gives improper solutions when sample covariance and initial parameter matrices are proper. Psychometrika, 78, 380–394.
Anderson, T. W. (1984). An introduction to multivariate statistical analysis. New York: Wiley.
Anderson, T. W., & Rubin, H. (1956). Statistical inference in factor analysis. In Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, 1954–1955, Vol. V (pp. 111–150). Berkeley and Los Angeles: University of California Press.
Bai, J., & Li, K. (2012). Statistical analysis of factor models of high dimension. Annals of Statistics, 40, 436–465.
Cramer, E. (2000). Probability measures with given marginals and conditionals: I-projections and conditional iterative proportional fitting. Statistics and Decisions, 18, 311–329.
Csiszár, I. (1975). I-divergence geometry of probability distributions and minimization problems. Annals of Probability, 3, 146–158.
Csiszár, I., & Tusnády, G. (1984). Information geometry and alternating minimization procedures. Statistics and Decisions, Supplement Issue 1, 205–237.
Dempster, A. P., Laird, N. M., & Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39, 1–38.
Emmett, W. G. (1949). Factor analysis by Lawley's method of maximum likelihood. British Journal of Statistical Psychology, 2, 90–97.
Finesso, L., & Picci, G. (1984). Linear statistical models and stochastic realization theory. In A. Bensoussan & J. L. Lions (Eds.), Analysis and optimization of systems (pp. 445–470). Berlin: Springer.
Finesso, L., & Spreij, P. (2006). Nonnegative matrix factorization and I-divergence alternating minimization. Linear Algebra and its Applications, 416, 270–287.
Finesso, L. (2007). Factor analysis and alternating minimization. In A. Chiuso, A. Ferrante, & S. Pinzoni (Eds.), Modeling, estimation, and control: Festschrift in honor of Giorgio Picci (pp. 85–96). Berlin: Springer.
Finesso, L., & Spreij, P. (2015). Approximation of nonnegative systems by finite impulse response convolutions. IEEE Transactions on Information Theory, 61, 4399–4409.
Harman, H. H. (1967). Modern factor analysis (2nd ed.). Chicago, IL: The University of Chicago Press.
Ihara, S. (1993). Information theory for continuous systems. Singapore: World Scientific.
Jennrich, R. I., & Robinson, S. M. (1969). A Newton-Raphson algorithm for maximum likelihood factor analysis. Psychometrika, 34, 111–123.
Jöreskog, K. G. (1967). Some contributions to maximum likelihood factor analysis. Psychometrika, 32, 443–482.
Lawley, D. N. (1940). The estimation of factor loadings by the method of maximum likelihood. Proceedings of the Royal Society of Edinburgh, 60, 64–82.
Ledermann, W. (1937). On the rank of the reduced correlation matrix in multiple-factor analysis. Psychometrika, 2, 85–93.
Liu, C., & Rubin, D. B. (1994). The ECME algorithm: A simple extension of EM and ECM with faster monotone convergence. Biometrika, 81, 633–648.
Liu, C., & Rubin, D. B. (1998). Maximum likelihood estimation of factor analysis using the ECME algorithm with complete and incomplete data. Statistica Sinica, 8, 729–747.
Maxwell, A. E. (1961). Recent trends in factor analysis. Journal of the Royal Statistical Society, Series A, 124, 49–59.
Rao, C. R. (1955). Estimation and tests of significance in factor analysis. Psychometrika, 20, 93–111.
Rubin, D. B., & Thayer, D. T. (1982). EM algorithms for ML factor analysis. Psychometrika, 47, 69–76.
Searle, S. R. (1982). Matrix algebra useful for statistics. New York: Wiley.
Trendafilov, N. T., & Unkel, S. (2011). Exploratory factor analysis of data matrices with more variables than observations. Journal of Computational and Graphical Statistics, 20, 874–891.
Zhao, J.-H., Yu, P. L. H., & Jiang, Q. (2008). ML estimation for factor analysis: EM or non-EM? Statistics and Computing, 18, 109–123.
Zhao, J., & Shi, L. (2014). Automated learning of factor analysis with complete and incomplete data. Computational Statistics & Data Analysis, 72, 205–218.
Figure 1. Rubin–Thayer.

Figure 2. Maxwell.

Figure 3. Rao.

Figure 4. Harman.

Figure 5. Emmett.

Figure 6. True FA model.