
Eisenstein metrics

Published online by Cambridge University Press:  03 December 2021

Cameron Franc*
Affiliation:
Department of Mathematics and Statistics, McMaster University, Hamilton, ON, Canada

Abstract

We study families of metrics on automorphic vector bundles associated with representations of the modular group. These metrics are defined using an Eisenstein series construction. We show that in certain cases, the residue of these Eisenstein metrics at their rightmost pole is a harmonic metric for the underlying representation of the modular group. The last section of the paper considers the case of a family of representations that are indecomposable but not irreducible. The analysis of the corresponding Eisenstein metrics, and the location of their rightmost pole, is an open question whose resolution depends on the asymptotics of matrix-valued Kloosterman sums.

Type
Article
Copyright
© The Author(s), 2021. Published by Cambridge University Press on behalf of The Canadian Mathematical Society

1 Introduction

This paper concerns results on functions on the complex upper-half plane that take values in Hermitian matrices and which satisfy automorphic transformation laws. These functions are uniformizations of metrics on automorphic vector bundles, and the motivation for their study stems from modern Hodge theory. Nonabelian Hodge correspondences describe equivalences between categories of stable connections and categories of Higgs bundles on an underlying base manifold. Such correspondences have found use throughout geometry, topology, physics and even in number theory, beginning primarily with Ngo’s proof of the fundamental lemma [Reference Ngô26] using properties of the Hitchin integrable system [Reference Hitchin13, Reference Hitchin14] associated with moduli spaces of Higgs bundles. In this paper, we introduce Hermitian matrix-valued Eisenstein series and study to what extent harmonic metrics realizing the nonabelian Hodge correspondence could be described as residues of such Eisenstein series. This program is carried out fully for the two-dimensional inclusion representation of the modular group, and some general results and difficulties are studied when the monodromy at the cusp is unitary. Before getting to the details, we shall provide background and motivation, as well as a summary of our results.

The nonabelian Hodge correspondence traces back to work of Narasimhan and Seshadri [Reference Narasimhan and Seshadri25] in the compact case, and Mehta and Seshadri [Reference Mehta and Seshadri23] in the noncompact setting. These works established correspondences between categories of unitary representations of fundamental groups and categories of holomorphic vector bundles on a base curve. Later authors, beginning with work of Hitchin [Reference Hitchin13, Reference Hitchin14], Donaldson [Reference Donaldson8], Corlette [Reference Corlette4], and Simpson [Reference Simpson27, Reference Simpson28], extended this to encompass all irreducible representations of the fundamental group, by enhancing vector bundles $E/X$ with the additional structure of a Higgs field, which is an $\mathcal {O}_{X}$ -linear map

$$ \begin{align*} \theta \colon E \to E\otimes \Omega^{1}_{X}, \end{align*} $$

satisfying $\theta ^{2}=0$ , a condition which is automatic for curves.

One reason why nonabelian Hodge correspondences have proved so useful is that they can be used to replace nonlinear objects—regular connections—with $\mathcal {O}_{X}$ -linear Higgs fields. For example, recently in joint work with Rayan, we used this strategy in [Reference Franc and Rayan10] to establish new instances of inequalities governing the multiplicities among the line bundles that arise in decompositions of vector bundles associated with vector-valued modular forms. In past joint work with Mason [Reference Franc and Mason9], we established instances of such inequalities by proving the existence of semicanonical forms for the modular derivative $D_{k} = q\frac {d}{dq} -\tfrac {k}{12}E_{2}$ acting on spaces of modular forms of weight k. The nonlinear nature of these operators proves to be a nontrivial obstacle in such arguments.

Unfortunately, at the heart of establishing nonabelian Hodge correspondences lies the problem of solving a nonlinear partial differential equation, the solution of which yields the existence of harmonic metrics (cf. Definition 3.1) on vector bundles that can be used to associate Higgs bundles with regular connections, and vice versa. For an overview of how such correspondences work, see [Reference Franc and Rayan10], [Reference García-Raboso and Rayan11], or [Reference Goldman and Xia12] for the rank-one case. Existence proofs for harmonic metrics can be executed using a heat-flow argument as in [Reference Donaldson8], but writing down explicit examples of harmonic metrics can be difficult except in special circumstances.

If the base manifold X is the compactification of some quotient $\Gamma \backslash \operatorname {\mathrm {\mathcal {H}}}$ where $\operatorname {\mathrm {\mathcal {H}}}$ is the complex upper-half plane, and $\Gamma $ is a Fuchsian group, then vector bundles on X are pulled back to trivializable bundles on $\operatorname {\mathrm {\mathcal {H}}}$ . Attendant structures on such vector bundles, such as connections, Higgs fields, or metrics, can then be described as automorphic objects on $\operatorname {\mathrm {\mathcal {H}}}$ , typically vector- or matrix-valued, satisfying a prescribed transformation law under $\Gamma $ . In this paper, we explore the use of Eisenstein series for constructing metrics on automorphic vector bundles, focusing on the case of $\Gamma = \operatorname {\mathrm {SL}}_{2}(\mathbf {Z})$ , so that $X = Y \cup \{\infty \}$ where $Y = \Gamma \backslash \operatorname {\mathrm {\mathcal {H}}}$ is the moduli space of elliptic curves. We are primarily interested in representations that are not unitary, so that the corresponding harmonic metric is different from the Petersson inner product. See [Reference Deitmar5–Reference Deitmar and Monheim7, Reference Müller24] for recent work on analysis of automorphic forms transforming under nonunitary representations.

In Section 2, we associate Eisenstein metrics $H(\tau ,s)$ with representations of $\Gamma $ , generalizing constructions of [Reference Knopp and Mason18–Reference Knopp and Mason20], and prove their convergence when the real part of s is large enough. We state our main convergence result (Proposition 2.15) in a form that is flexible enough to work for complex analytic families of representations of $\Gamma $ . Such families of Eisenstein series are higher-dimensional analogues of families studied in [Reference Bruggeman1]. Following this, Section 3 briefly recalls the definition of a harmonic metric.

In the standard theory of Eisenstein series, one starts with a simple function satisfying a linear differential equation and averages it over a group to obtain a more interesting function with a larger set of invariance properties. By linearity, the averaged function satisfies the same linear differential equation. No such principle is available for nonlinear differential equations: an average of solutions need not again be a solution. As such, the following result may be somewhat surprising.

Theorem 1.1 Let $\rho $ be the inclusion representation of $\operatorname {\mathrm {SL}}_{2}(\mathbf {Z})$ , and let $H(\tau ,s)$ be the corresponding Eisenstein metric. Then, $H(\tau ,s)$ admits meromorphic continuation to $\operatorname {Re}(s)> \frac 32$ with a simple pole at $s=2$ whose residue is a tame harmonic metric for $\rho $ .

See Theorem 4.1 for a more precise statement of this result, which is proved by computing the Fourier coefficients of $H(\tau ,s)$ using standard techniques from the theory of Eisenstein series.

In Section 5, we consider $H(\tau ,s)$ for representations where $\rho (T)$ is unitary, where $T =\left (\begin {smallmatrix}1&1\\0&1\end {smallmatrix}\right )$ is the cuspidal stabilizer, and we work out an expression for the Fourier coefficients of $H(\tau ,s)$ . When $\rho $ is itself unitary, this expression shows that $H(\tau ,s)$ has a simple pole at $s=1$ of residue equal to a scalar multiple of the identity matrix, which is the Petersson inner product for unitary representations. Thus, the analogue of Theorem 1.1 is true when $\rho (T)$ is unitary, except that the pole shifts left to $s=1$ . The difference between the unitary case and the case of Theorem 1.1 seems to be the nontrivial $(2\times 2)$ -Jordan block in $\rho (T)$ from Theorem 1.1, whereas $\rho (T)$ is diagonalizable for unitary $\rho $ .

It is unclear to this author to what extent one might expect to recover harmonic metrics as residues of Eisenstein metrics, and so to probe this question in Section 6, we consider a family of nonunitarizable representations $\rho $ such that $\rho (T)$ is of finite order (hence unitarizable). Unfortunately, the difficulty in using the Fourier coefficient computations of Section 5 in general rests in understanding some matrix-valued nonabelian Kloosterman sums and associated generating series. To describe these sums, if $\rho $ is a representation of $\Gamma $ , L is a matrix satisfying $\rho (T) = {\mathbf {e}}(L) := e^{2\pi i L}$ , and h is a Hermitian positive-definite matrix satisfying

(1.1) $$ \begin{align} \rho(\pm T)^{t}h\overline{\rho(\pm T)} = h, \end{align} $$

then the associated Kloosterman sums are defined as

$$ \begin{align*} \operatorname{\mathrm{Kl}}(\rho,L, c) = \sum_{\substack{d=1\\\gcd(c,d)=1}}^{c} {\mathbf{e}}(-L\tfrac dc)\rho\left(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right)^{t}h \overline{\rho\left(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right)} \,{\mathbf{e}}(L\tfrac dc), \end{align*} $$

where $a,b \in \mathbf {Z}$ are chosen so that $ad-bc=1$ . The invariance property of equation (1.1) ensures that $\operatorname {\mathrm {Kl}}(\rho ,L,c)$ is well-defined independent of this choice, and the exponentials in the definition of $\operatorname {\mathrm {Kl}}(\rho ,L,c)$ ensure that the summands only depend on the value of d modulo c.

As is typical in the theory of Eisenstein series, understanding the analytic continuation of Eisenstein metrics $H(\tau ,s)$ rests in coming to grips with the analytic properties of Kloosterman sum generating series

$$ \begin{align*} D(s) = \sum_{c\geq 1} \frac{\operatorname{\mathrm{Kl}}(\rho,L,c)}{c^{s}}. \end{align*} $$

In Section 6, we analyze these sums for a family of representations and exponents $(\rho ,L)$ arising from a group cocycle obtained by integrating $\eta ^{4}$ , where $\eta $ is the Dedekind eta function. This family of two-dimensional representations, studied in [Reference Marks and Mason21], contains a one-dimensional subrepresentation that does not split off as a direct summand for generic specializations of the family. We show in Proposition 6.1 that the family of Kloosterman sums $\operatorname {\mathrm {Kl}}(\rho ,L,c)$ admits a sort of second-order Taylor expansion in the deformation parameters about the specialization where the family becomes decomposable. The second-order term in this Taylor expansion contains a sequence $a_{c}$ of positive integers, whose values are shown in Table 1 and plotted in Figure 1. Determining the rightmost pole of $H(\tau ,s)$ for this family of representations comes down to understanding the growth of this sequence $a_{c}$ . For example, if one could prove that $a_{c} = O(\phi (c)\log (c))$ , where $\phi $ is the Euler phi function, then the rightmost pole of $H(\tau ,s)$ would occur at $s=1$ and the corresponding residue would be positive definite. If instead $a_{c} = O(\phi (c) c^{\varepsilon })$ for some $\varepsilon> 0$ , then the rightmost pole of $H(\tau ,s)$ would occur to the right of $s=1$ and the residue would not be positive definite, hence not a harmonic metric. As we have only computed the terms $a_{c}$ for $c\leq 5,000$ , we are not prepared to make a conjecture as to the expected growth of sequences such as $a_{c}$ . It is natural to ask whether the $\ell $ -adic machinery developed in [Reference Katz16] and subsequent work could be employed to study such asymptotic questions, and we plan to return to this in future work.
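To illustrate how coefficient growth governs the location of the rightmost pole, consider the toy scalar case where the coefficients are exactly $\phi (c)$ : then $\sum _{c\geq 1}\phi (c)c^{-s} = \zeta (s-1)/\zeta (s)$ , with rightmost pole at $s=2$ . The following sketch is a side computation, not part of the paper's argument; it checks this classical identity numerically at $s=4$ .

```python
from math import gcd, pi

def phi(n):
    # Euler phi by direct count; adequate for small n
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def zeta(s, terms=10**5):
    # crude partial sum of the Riemann zeta function
    return sum(k ** -s for k in range(1, terms + 1))

s = 4.0
partial = sum(phi(c) / c ** s for c in range(1, 2001))
target = zeta(3.0) / (pi ** 4 / 90)  # zeta(4) = pi^4 / 90
print(partial, target)  # agree to better than 1e-5
```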

Remark 1.2 The Kloosterman sums above are special cases of more general matrix-valued Kloosterman sums

$$ \begin{align*} \sum_{\substack{d=1\\ \gcd(c,d)=1}}^{c} {\mathbf{e}}(-L\tfrac ac)\rho\left(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right){\mathbf{e}}(-L\tfrac dc) \end{align*} $$

that appeared in [Reference Knopp and Mason18–Reference Knopp and Mason20]. Notice that if $\rho $ is trivial, then this yields classical Kloosterman sums. In the definition of $\operatorname {\mathrm {Kl}}(\rho ,L,c)$ above, the representation $\rho $ is replaced by its induced action on the space $\operatorname {\mathrm {Herm}}_{d}$ of $(d\times d)$ -Hermitian matrices.
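For instance, in the scalar case ($d=1$ , $\rho $ trivial, $L=-n$ for an integer n), the display above becomes the classical Kloosterman sum $S(n,n;c)$ . A minimal sketch of that specialization (illustrative only):

```python
import cmath
from math import gcd

def kloosterman(m, n, c):
    # classical S(m, n; c): sum over d mod c with gcd(d, c) = 1
    # of e((m*dbar + n*d)/c), where dbar is the inverse of d mod c
    total = 0.0 + 0.0j
    for d in range(1, c + 1):
        if gcd(d, c) == 1:
            dbar = pow(d, -1, c)  # modular inverse (Python >= 3.8)
            total += cmath.exp(2j * cmath.pi * (m * dbar + n * d) / c)
    return total

print(kloosterman(1, 1, 2))  # ≈ 1
print(kloosterman(1, 1, 3))  # ≈ -1
```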

Table 1: Values of the sequences $a_{c}$ and $\left \lvert b_{c}\right \rvert $ for small values of c.

Figure 1: Values of the sequences $\frac {a_{c}}{\phi (c)}$ and $\frac {\left \lvert b_{c}\right \rvert }{\phi (c)}$ in blue and red, respectively.

1.1 Notation and conventions

  • $\Gamma = \operatorname {\mathrm {SL}}_{2}(\mathbf {Z})$ .

  • $T =\left (\begin {smallmatrix}1&1\\0&1\end {smallmatrix}\right )$ , $S = \left (\begin {smallmatrix}0&-1\\1&0\end {smallmatrix}\right )$ .

  • $\operatorname {\mathrm {\mathcal {H}}} = \{x+iy\in \mathbf {C} \mid y> 0\}$ is the complex upper-half plane.

  • ${\mathbf {e}}(M) = e^{2\pi i M}$ for complex matrices M.

  • In this note, all metrics are Hermitian, and $\operatorname {\mathrm {Herm}}_{d}$ denotes the space of $(d\times d)$ Hermitian matrices.

  • Representations are complex and finite-dimensional.

2 Eisenstein metrics

Let $\rho \colon \Gamma \to \operatorname {\mathrm {GL}}_{d}(\mathbf {C})$ be a representation of $\Gamma = \operatorname {\mathrm {SL}}_{2}(\mathbf {Z})$ . Let $\operatorname {\mathrm {Herm}}_{d}$ denote the real vector space of $d\times d$ Hermitian matrices, so that $M\in \operatorname {\mathrm {Herm}}_{d}$ means that $\bar M = M^{t}$ .

Definition 2.1 A metric for $\rho $ is a smooth function $H\colon \operatorname {\mathrm {\mathcal {H}}} \to \operatorname {\mathrm {Herm}}_{d}$ such that $H(\tau )$ is positive definite for all $\tau \in \operatorname {\mathrm {\mathcal {H}}}$ , and such that

(2.1) $$ \begin{align} \rho(\gamma)^{t} H(\gamma \tau)\overline{\rho(\gamma)} = H(\tau) \end{align} $$

holds for all $\gamma \in \Gamma $ .

Such functions can be used to define analogues of Petersson inner products on spaces of vector-valued modular forms associated to $\rho $ . For example, if F satisfies $F(\gamma \tau )=\rho (\gamma )F(\tau )$ for all $\gamma \in \Gamma $ , and similarly for G, then we can define an invariant scalar-valued form by the rule

$$ \begin{align*} \langle F,G\rangle_{\tau} := F(\tau)^{t}H(\tau)\overline{G(\tau)}. \end{align*} $$

Equation (2.1) implies that $\langle F,G\rangle _{\gamma \tau } = \langle F,G\rangle _{\tau }$ for all $\gamma \in \Gamma $ .
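Indeed, combining the transformation laws of F and G with equation (2.1), for any $\gamma \in \Gamma $ one computes

$$ \begin{align*} \langle F,G\rangle_{\gamma\tau} = F(\gamma\tau)^{t}H(\gamma\tau)\overline{G(\gamma\tau)} = F(\tau)^{t}\rho(\gamma)^{t}H(\gamma\tau)\overline{\rho(\gamma)}\,\overline{G(\tau)} = F(\tau)^{t}H(\tau)\overline{G(\tau)} = \langle F,G\rangle_{\tau}. \end{align*} $$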

Example 2.2 If $\rho $ is unitary, then $H(\tau ) = I_{d}$ defines the usual Petersson metric. More generally, if $M \in \operatorname {\mathrm {Herm}}_{d}$ is a constant positive definite matrix that satisfies $M\cdot \gamma = M$ with respect to the right action $M\cdot \gamma = \rho (\gamma )^{t}M\overline {\rho (\gamma )}$ of $\Gamma $ , then $H(\tau ) = M$ is a metric for $\rho $ .

Let $h\colon \operatorname {\mathrm {\mathcal {H}}} \to \operatorname {\mathrm {Herm}}_{d}$ be a positive definite and smooth function that satisfies equation (2.1) for $\gamma =\pm T$ . More precisely, we assume that

(2.2) $$ \begin{align} \rho(T)^{t}h(\tau+1)\overline{\rho(T)} =& h(\tau), \end{align} $$
(2.3) $$ \begin{align} \rho(-I)^{t}h(\tau)\overline{\rho(-I)} =& h(\tau). \end{align} $$

If $\rho (-I)$ is a scalar matrix, then we must have $\rho (-I) = \pm I_{d}$ , and equation (2.3) is satisfied. This occurs, for example, if $\rho $ is irreducible.

Given an h as in the preceding paragraph, the corresponding Poincaré series is defined as usual by the formula

(2.4) $$ \begin{align} P(\rho,h,\tau) := \sum_{\gamma \in \langle \pm T\rangle\backslash \Gamma} \rho(\gamma)^{t}h(\gamma \tau)\overline{\rho(\gamma)}. \end{align} $$

This is well defined thanks to equations (2.2) and (2.3). Furthermore, if P converges absolutely, then we have

$$ \begin{align*} P(\rho,h,\alpha \tau) =&\sum_{\gamma \in \langle \pm T\rangle\backslash \Gamma} \rho(\gamma)^{t}h(\gamma \alpha\tau)\overline{\rho(\gamma)}\\ =&\sum_{\gamma \in \langle \pm T\rangle\backslash \Gamma} \rho(\gamma\alpha^{-1})^{t}h(\gamma\tau)\overline{\rho(\gamma\alpha^{-1})}\\ =& \rho(\alpha)^{-t}P(h,\tau)\overline{\rho(\alpha)}^{-1}, \end{align*} $$

for all $\alpha \in \Gamma $ . Thus, after moving the $\rho $ -terms to the other side, one sees that $P(h,\tau )$ satisfies equation (2.1) as a function of $\tau $ . If h is chosen so that $P(\rho ,h,\tau )$ converges absolutely to a smooth function, then the series $P(\rho ,h,\tau )$ defines a metric for $\rho $ .

Let $g \in \operatorname {\mathrm {GL}}_{d}(\mathbf {C})$ . Notice that if h satisfies equations (2.2) and (2.3) for $\rho $ , then $g^{-t}h\overline {g}^{-1}$ satisfies equations (2.2) and (2.3) for $g\rho g^{-1}$ .

Lemma 2.3 Assuming that both Poincaré series converge absolutely, one has

$$ \begin{align*} P(g\rho g^{-1}, g^{-t}h\bar g^{-1},\tau) = g^{-t}P(\rho,h,\tau)\bar g^{-1}. \end{align*} $$

Proof The proof is a direct computation:

$$ \begin{align*} P(g\rho g^{-1}, g^{-t}h\bar g^{-1},\tau) =&\sum_{\gamma \in \langle \pm T\rangle\backslash \Gamma} (g\rho(\gamma)g^{-1})^{t}g^{-t}h(\gamma \tau)\bar g^{-1}\bar g\overline{\rho(\gamma)}\bar g^{-1}\\ =& g^{-t}P(\rho,h,\tau)\bar g^{-1}. \\[-32pt]\end{align*} $$

Our next goal is to describe convenient choices of functions h satisfying equations (2.2) and (2.3). To this end, we introduce the notion of exponents. Define ${\mathbf {e}}(M) = e^{2\pi i M}$ for matrices M.

Definition 2.4 A choice of exponents for $\rho $ is a matrix L such that $\rho (T) = {\mathbf {e}}(L)$ .

Since the matrix exponential is surjective, exponents always exist. They can be described explicitly in terms of a Jordan canonical form for $\rho (T)$ (cf. Theorem 3.7 of [Reference Candelori and Franc2]), and this description shows that if X commutes with $\rho (T)$ , then X also commutes with a choice of exponents.

Let L be a choice of exponents for $\rho $ . Since the matrix exponential satisfies ${\mathbf {e}}(X+Y) = {\mathbf {e}}(X)\,{\mathbf {e}}(Y)$ provided that $XY=YX$ , it follows that

$$ \begin{align*}{\mathbf{e}}(L(\tau+1)) = \rho(T)\,{\mathbf{e}}(L\tau)={\mathbf{e}}(L\tau)\rho(T).\end{align*} $$

Therefore, for any $h\in \operatorname {\mathrm {Herm}}_{d}$ , the function

(2.5) $$ \begin{align} h(\tau) = {\mathbf{e}}(-L\tau)^{t}h\overline{{\mathbf{e}}(-L\tau)} \end{align} $$

is Hermitian and satisfies equation (2.2). However, $h(\tau )$ need not satisfy equation (2.3), and it need not be positive definite in general. Therefore, we introduce a set of admissible choices for h.
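The invariance (2.2) of the function in equation (2.5) can be checked numerically, even for a nondiagonalizable $\rho (T)$ . The sketch below (with an arbitrary sample exponent matrix and base point) uses the closed form $e^{aI+bN} = e^{a}(I+bN)$ for the nilpotent $N = \left (\begin {smallmatrix}0&1\\0&0\end {smallmatrix}\right )$ :

```python
import numpy as np

def e_jordan(a, b):
    # e(a*I + b*N) = exp(2*pi*i*(a*I + b*N)) = e^{2*pi*i*a} (I + 2*pi*i*b*N)
    return np.exp(2j * np.pi * a) * np.array([[1, 2j * np.pi * b], [0, 1]], dtype=complex)

lam = 0.2                           # sample eigenvalue of the exponent matrix L
h0 = np.array([[2, 1j], [-1j, 2]])  # positive definite Hermitian matrix
rhoT = e_jordan(lam, 1)             # rho(T) = e(L) with L = [[lam, 1], [0, lam]]

def h(tau):
    # h(tau) = e(-L*tau)^t h0 conj(e(-L*tau)), cf. equation (2.5)
    E = e_jordan(-lam * tau, -tau)
    return E.T @ h0 @ np.conj(E)

tau = 0.3 + 0.7j
err = np.abs(rhoT.T @ h(tau + 1) @ np.conj(rhoT) - h(tau)).max()
print(err)  # numerically zero
```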

Definition 2.5 Given a representation $\rho \colon \Gamma \to \operatorname {\mathrm {GL}}_{d}(\mathbf {C})$ , define

$$ \begin{align*} \operatorname{\mathrm{Pos}}(\rho) := \{h\in \operatorname{\mathrm{Herm}}_{d} \mid h \textrm{ is positive definite and } \rho(-I)^{t}h\overline{\rho(-I)}=h\}. \end{align*} $$

Remark 2.6 In many cases, the condition that $\rho (-I)^{t}h\overline {\rho (-I)}=h$ in Definition 2.5 holds automatically for all $h \in \operatorname {\mathrm {Herm}}_{d}$ . This is so, for example, if $\rho $ is irreducible, for then one has $\rho (-I) = \pm I$ . In such cases, $\operatorname {\mathrm {Pos}}(\rho )$ is the set of all positive definite matrices in $\operatorname {\mathrm {Herm}}_{d}$ .

Proposition 2.7 Suppose that $\rho (-I)$ is unitary. Then, the set $\operatorname {\mathrm {Pos}}(\rho )$ contains I, and it is closed under addition and under multiplication by positive real numbers. Furthermore, if $h\in \operatorname {\mathrm {Pos}}(\rho )$ , then $g^{t}h\bar g$ is positive definite for all $g \in \operatorname {\mathrm {GL}}_{d}(\mathbf {C})$ .

Proof Note first that for any invertible matrix g, the product $g^{t}\bar g$ is Hermitian and positive definite. The condition that $\rho (-I)$ is unitary says exactly that I satisfies the second defining condition of $\operatorname {\mathrm {Pos}}(\rho )$ , so that $I \in \operatorname {\mathrm {Pos}}(\rho )$ when $\rho (-I)$ is unitary. Closure of $\operatorname {\mathrm {Pos}}(\rho )$ under addition and positive real rescalings is clear. Finally, if z is a nonzero complex column vector, then $z^{t}g^{t}h\overline {gz} = w^{t}h\bar w$ where $w = gz$ . Since g is invertible, w is nonzero, and so $w^{t}h\bar w> 0$ . This concludes the proof.▪

Definition 2.8 Let $\rho $ denote a representation of $\Gamma $ , and let L denote a choice of exponents for $\rho $ . Then, the associated Eisenstein metric is the infinite series

$$ \begin{align*} H(\rho,L,h,\tau,s) := \sum_{\gamma \in \langle \pm T\rangle\backslash \Gamma} \rho(\gamma)^{t}{\mathbf{e}}(-L^{t}(\gamma \cdot \tau))h\overline{{\mathbf{e}}(-L(\gamma \cdot \tau))}\overline{\rho(\gamma)}\operatorname{Im}(\gamma\cdot\tau)^{s}, \end{align*} $$

where $h \in \operatorname {\mathrm {Pos}}(\rho )$ and $\tau \in \operatorname {\mathrm {\mathcal {H}}}$ .

In the definition above, the dot in $\gamma \cdot \tau $ denotes the action of $\Gamma $ on $\operatorname {\mathrm {\mathcal {H}}}$ , not matrix multiplication. All other products in the expression defining Eisenstein metrics are ordinary matrix products. When $\rho $ is trivial, $L=0$ , and $h=1$ , this definition gives the usual real-analytic Eisenstein series.
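In that classical specialization ($\rho $ trivial, $L=0$ , $h=1$ ), the series can be truncated and tested numerically. Cosets of $\langle \pm T\rangle $ in $\Gamma $ correspond to $(c,d)=(0,1)$ together with coprime pairs $(c,d)$ with $c>0$ , so the sketch below (with an ad hoc truncation) approximates the series and checks invariance under $\tau \mapsto -1/\tau $ :

```python
from math import gcd

def eisenstein(tau, s, N=150):
    # truncated real-analytic Eisenstein series: sum of Im(gamma.tau)^s over
    # cosets of <±T>\Gamma, represented by (0, 1) and coprime (c, d), c > 0
    y = tau.imag
    total = y ** s  # the (c, d) = (0, 1) term
    for c in range(1, N + 1):
        for d in range(-N, N + 1):
            if gcd(c, abs(d)) == 1:
                total += y ** s / abs(c * tau + d) ** (2 * s)
    return total

a = eisenstein(2j, 3.0)
b = eisenstein(-1 / (2j), 3.0)  # S acts by tau -> -1/tau, and -1/(2i) = i/2
print(a, b)  # equal up to truncation error
```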

The formation of $H(\rho ,L,h,\tau ,s)$ is linear in h. We will often write $H(h,\tau ,s)$ for these Eisenstein metrics when the dependence on $\rho $ and L is clear. Our next goal is to prove that $H(h,\tau ,s)$ converges absolutely if the real part of s is large enough. The proof is analogous to the proof of Theorem 3.2 in [Reference Knopp and Mason18]. We will show that one has convergence when $\operatorname {Re}(s)$ is large even if $(\rho ,L)$ varies in a family, as in the following definitions.

Definition 2.9 Let $U\subseteq \mathbf {C}^{m}$ denote an open subset. A family of representations for $\Gamma $ varying analytically over U consists of an analytic map

$$ \begin{align*} \rho \colon U \to \operatorname{\mathrm{Hom}}(\Gamma,\operatorname{\mathrm{GL}}_{n}(\mathbf{C})). \end{align*} $$

If $\rho $ is a family of representations, then we usually write $\rho _{u}$ for the value of $\rho $ at $u \in U$ . For each $\gamma \in \Gamma $ , we obtain a function $\rho (\gamma ) \colon U \to \operatorname {\mathrm {GL}}_{n}(\mathbf {C})$ , and the analyticity of $\rho $ consists of the analyticity of these maps. It suffices to test analyticity on a set of generators for $\Gamma $ . If $\mathcal {O}(U)$ denotes the ring of analytic functions on U, then a family of representations on U is the same thing as a homomorphism

$$ \begin{align*} \rho \colon \Gamma \to \operatorname{\mathrm{GL}}_{n}(\mathcal{O}(U)). \end{align*} $$

Definition 2.10 A choice of exponents for a family of representations $\rho $ on $U\subseteq \mathbf {C}^{m}$ consists of a holomorphic map $L \colon U \to M_{n}(\mathbf {C})$ such that $\rho (T) = e^{2\pi i L}$ .

As above, we will sometimes write $L_{u}$ for the exponents evaluated at a point $u \in U$ .

Example 2.11 There exists an analytic family of representations on $\mathbf {C}^{\times }$ determined by

$$ \begin{align*} \rho(T) &= \left(\begin{matrix}u&u\\0&u^{-1}\end{matrix}\right), & \rho(S) &= \left(\begin{matrix}0&-u\\u^{-1}&0\end{matrix}\right). \end{align*} $$

The specializations $\rho _{u}$ are irreducible as long as $u^{4}-u^{2}+1 \neq 0$ . Since $\operatorname {\mathrm {Tr}}(\rho (T)) = u+u^{-1}$ , one sees that this family is nontrivial, in the sense that not all fibers are isomorphic representations, although one does have $\rho _{u} \cong \rho _{u^{-1}}$ . Modulo this identification, this describes one component of the universal family of irreducible representations of $\Gamma $ of rank two (cf. [Reference Mason22, Reference Tuba and Wenzl29]).

Let $\log $ denote the branch of the complex logarithm such that the imaginary parts of $\log (z)$ are contained in $[0,2\pi )$ . Then, a choice of exponents defined on $\mathbf {C} \setminus \mathbf {R}_{\geq 0}$ is

$$ \begin{align*} L = \frac{1}{2\pi i}\left(\begin{matrix}\log(u)&\frac{(\log(u)-\log(u^{-1}))u^{2}}{u^{2}-1}\\0&\log(u^{-1})\end{matrix}\right). \end{align*} $$

The apparent singularity at $u=-1$ is removable, whereas the singularity on the branch cut at $u=1$ is not. These values of u correspond to the specializations of $\rho $ where $\rho (T)$ is not diagonalizable.
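The assertions of Example 2.11 can be spot-checked numerically at a parameter off the branch cut. The sketch below (sample value $u = \tfrac 12 + \tfrac 12 i$ ) verifies that $\rho (S)^{2} = (\rho (S)\rho (T))^{3} = -I$ , so the defining relations of $\Gamma $ hold, and that ${\mathbf {e}}(L) = \rho (T)$ for the stated branch of $\log $ :

```python
import numpy as np

u = 0.5 + 0.5j  # off the branch cut, and u^4 - u^2 + 1 != 0
rhoT = np.array([[u, u], [0, 1 / u]])
rhoS = np.array([[0, -u], [1 / u, 0]])

def log_branch(z):
    # branch of log with imaginary part in [0, 2*pi)
    return np.log(abs(z)) + 1j * (np.angle(z) % (2 * np.pi))

a, d = log_branch(u), log_branch(1 / u)
M = np.array([[a, (a - d) * u ** 2 / (u ** 2 - 1)], [0, d]])  # M = 2*pi*i*L

def expm_tri(A):
    # exponential of an upper triangular 2x2 matrix with distinct diagonal entries
    ea, ed = np.exp(A[0, 0]), np.exp(A[1, 1])
    return np.array([[ea, A[0, 1] * (ea - ed) / (A[0, 0] - A[1, 1])], [0, ed]])

err_exp = np.abs(expm_tri(M) - rhoT).max()     # e(L) = rho(T)
err_S = np.abs(rhoS @ rhoS + np.eye(2)).max()  # rho(S)^2 = -I
err_ST = np.abs(np.linalg.matrix_power(rhoS @ rhoT, 3) + np.eye(2)).max()  # (rho(S)rho(T))^3 = -I
print(err_exp, err_S, err_ST)  # all numerically zero
```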

Definition 2.12 Let $(\rho ,L)$ denote a family of representations and a choice of exponents on some open subset $U\subseteq \mathbf {C}^{m}$ . Then, an analytic function $P\colon U \to \operatorname {\mathrm {GL}}_{d}(\mathbf {C})$ is said to put L into Jordan canonical form provided that $PLP^{-1}$ is in Jordan canonical form for all $u \in U$ .

Note that while each fiber $L_{u}$ of a choice of exponents can be put into Jordan canonical form, the existence of an analytic choice of change of basis matrix P putting L simultaneously into Jordan canonical form at all points of U is not guaranteed.

Definition 2.13 Let $\rho $ be a family of representations of $\Gamma $ on a set U. Define

$$ \begin{align*} \operatorname{\mathrm{Pos}}(\rho) := \bigcap_{u\in U} \operatorname{\mathrm{Pos}}(\rho_{u}). \end{align*} $$

Remark 2.14 It is frequently the case that $\operatorname {\mathrm {Pos}}(\rho )$ is nonempty for nontrivial families $\rho $ . For example, if U is connected and $\rho $ is irreducible, then $\rho (-I) = \pm I$ is constant on U, and so $\operatorname {\mathrm {Pos}}(\rho )$ is the set of all positive definite matrices in $\operatorname {\mathrm {Herm}}_{d}$ .

Proposition 2.15 Let $U \subseteq \mathbf {C}^{m}$ be open, let $\rho $ be a family of representations of $\Gamma $ on U, let L be a corresponding choice of exponents, and suppose that P puts L into Jordan canonical form. Then, for each compact subset $K\subseteq U$ , there exists a real number A depending on K, $\rho |_{K}$ , $L|_{K}$ , and $P|_{K}$ , such that the following hold:

  (1) For each $u \in K$ , the series $H(\rho _{u},L_{u},h,\tau ,s)$ converges uniformly and absolutely to a smooth function of $\tau $ for all $s \in \mathbf {C}$ with $\operatorname {Re}(s)> A$ , and for all $h \in \operatorname {\mathrm {Pos}}(\rho )$ .

  (2) This function is holomorphic as a function of s and u, and real analytic as a function of h.

Proof The proof is similar to the proof of Theorem 3.3 of [Reference Knopp and Mason17], although there are enough differences in the statements of these results that we repeat some details. If M is a matrix, then let $\left \lVert M\right \rVert $ denote the supremum norm on its entries. Then, we have

$$ \begin{align*} \left\lVert H(h,\tau,s)\right\rVert \leq 1+\left\lVert h\right\rVert\sum_{c=1}^{\infty}\sum_{\substack{d\in\mathbf{Z}\\ \gcd(c,d)=1}} \left\lVert \rho(\gamma)\right\rVert^{2}\left\lVert {\mathbf{e}}(-L(\gamma \cdot \tau))\right\rVert^{2}\frac{y^{s}}{\left\lvert c\tau+d\right\rvert^{2s}}, \end{align*} $$

where $\gamma = \left (\begin {smallmatrix}a&b\\c&d\end {smallmatrix}\right )$ for some choice of $a,b\in \mathbf {Z}$ . After possibly replacing $\gamma $ by $T^{n}\gamma $ , we may assume that $0\leq \operatorname {Re}(\gamma \cdot \tau ) < 1$ . We then have the basic estimate

$$ \begin{align*}\left\lvert \gamma \cdot \tau\right\rvert^{2} \leq 1+\operatorname{Im}(\gamma \cdot \tau)^{2}=1+\frac{y^{2}}{((cx+d)^{2}+c^{2}y^{2})^{2}}\leq 1+\frac{1}{c^{4}y^{2}}.\end{align*} $$

Use the existence of the Jordan canonical form on U to write $-L$ in the form $-L=P(D+N)P^{-1}$ where D is diagonal, $DN=ND$ , and $N^{d}=0$ , where $d=\dim \rho $ . We obtain the estimate

$$ \begin{align*} \left\lVert {\mathbf{e}}(-L(\gamma \cdot \tau))\right\rVert^{2} \leq &\left\lVert P\right\rVert^{2}\left\lVert P^{-1}\right\rVert^{2}\left\lVert {\mathbf{e}}(D(\gamma \cdot \tau))\right\rVert^{2}\left\lVert {\mathbf{e}}(N(\gamma \cdot \tau))\right\rVert^{2}\\ \leq &\left\lVert P\right\rVert^{2}\left\lVert P^{-1}\right\rVert^{2}\left\lVert {\mathbf{e}}(D(\gamma \cdot \tau))\right\rVert^{2}\sum_{k=0}^{d-1}\frac{(2\pi)^{k}}{k!}\left\lVert N\right\rVert^{2k}\left\lvert \gamma \cdot \tau\right\rvert^{2k}. \end{align*} $$

Thus, if we set $C_{1} = e\max (1,\left \lVert P\right \rVert ^{2},\left \lVert P^{-1}\right \rVert ^{2}, \left \lVert N\right \rVert ,\left \lVert N^{2}\right \rVert ,\ldots , \left \lVert N^{d-1}\right \rVert )$ , then we deduce that

$$ \begin{align*} \left\lVert {\mathbf{e}}(-L(\gamma \cdot \tau))\right\rVert^{2} \leq C_{1}\left\lVert {\mathbf{e}}(D(\gamma \cdot \tau))\right\rVert^{2}e^{\frac{2\pi}{c^{4}y^{2}}}. \end{align*} $$

To continue, write $D = U+iV$ where U and V are real diagonal matrices, so that in particular $UV=VU$ . Then,

$$ \begin{align*} \left\lVert {\mathbf{e}}(D(\gamma \cdot \tau))\right\rVert^{2} \leq &\left\lVert {\mathbf{e}}(U(\gamma \cdot \tau))\right\rVert^{2}\left\lVert {\mathbf{e}}(iV(\gamma \cdot \tau))\right\rVert^{2}\\ =&\left\lVert {\mathbf{e}}(iU\operatorname{Im}(\gamma \cdot \tau))\right\rVert^{2}\left\lVert {\mathbf{e}}(iV\operatorname{Re}(\gamma \cdot \tau))\right\rVert^{2}\\ =&\left\lVert e^{-\frac{2\pi yU}{\left\lvert c\tau+d\right\rvert^{2}}}\right\rVert^{2}\left\lVert e^{-2\pi V\operatorname{Re}(\gamma \cdot \tau)}\right\rVert^{2}\\ \leq & C_{2}\left\lVert e^{-\frac{2\pi yU}{\left\lvert c\tau+d\right\rvert^{2}}}\right\rVert^{2}, \end{align*} $$

where $C_{2} = \max (1,\left \lVert e^{-2\pi V}\right \rVert ^{2})$ . Putting these estimates together, we have shown that if we normalize our representatives $\gamma $ for cosets in $\langle \pm T\rangle \backslash \Gamma $ such that $0\leq \operatorname {Re}(\gamma \tau ) < 1$ , then

(2.6) $$ \begin{align} \left\lVert H(h,\tau,s)\right\rVert \leq 1+C_{1}C_{2}\left\lVert h\right\rVert y^{s}\sum_{c=1}^{\infty}\sum_{\substack{d\in\mathbf{Z}\\ \gcd(c,d)=1}} \left\lVert \rho(\gamma)\right\rVert^{2}\left\lVert e^{-\frac{2\pi yU}{\left\lvert c\tau+d\right\rvert^{2}}}\right\rVert^{2}\frac{e^{\frac{2\pi}{c^{4}y^{2}}}}{\left\lvert c\tau+d\right\rvert^{2s}}. \end{align} $$

It remains to estimate $\left \lVert \rho (\gamma )\right \rVert $ . For this, one can use Corollary 3.5 of [Reference Knopp and Mason20] to obtain an estimate for this term that is polynomial in $c^{2}+d^{2}$ , with constants and degree that depend only on $\rho $ . Since the exponential factors in equation (2.6) converge to $1$ as c grows, this estimate is sufficient to prove part (1) of the proposition. All constants in these various estimates can be chosen uniformly on K, so that (2) follows as well.▪

Remark 2.16 The case of a single representation can be deduced from Proposition 2.15 by considering a constant family of representations on $\mathbf {C}$ . One can always put a constant family of exponents into Jordan canonical form, so the hypothesis that such a Jordan canonical form exists can be ignored when considering individual representations.

3 Tame harmonic metrics

Nonabelian Hodge theory describes a correspondence between categories of representations of fundamental groups, and categories of Higgs bundles on the underlying base manifold. A key ingredient for establishing such correspondences lies in proving the existence of metrics satisfying a nonlinear differential equation as in the following definition.

Definition 3.1 A Hermitian positive definite metric $H \colon \operatorname {\mathrm {\mathcal {H}}} \to M_{d}(\mathbf {C})$ for a representation $\rho $ is said to be harmonic provided that

$$ \begin{align*} \partial \bar \partial \log(H) = \tfrac 12 [\bar\partial \log(H),\partial \log(H)], \end{align*} $$

where $\partial \log (H) = H^{-1}\partial (H)$ , $\bar \partial \log (H) = H^{-1}\bar \partial (H)$ .

Example 3.2 If $\rho $ is a one-dimensional character (necessarily unitary in the case $\Gamma = \operatorname {\mathrm {SL}}_{2}(\mathbf {Z})$ ), then the commutator in Definition 3.1 vanishes, and the harmonicity condition simplifies to $\log (H)$ being harmonic in the usual sense. If, more generally, $\rho $ is unitary, then the constant map $H=I$ satisfies the necessary invariance property, and it defines a harmonic metric. This is the usual Petersson inner product for unitary representations.

Lemma 3.3 Let H be a harmonic metric for $\rho $ , and let $g \in \operatorname {\mathrm {GL}}_{d}(\mathbf {C})$ . Then, $g^{t}H\bar g$ is a harmonic metric for $g^{-1}\rho g$ .

Proof First, since $\rho (\gamma )^{t}H(\gamma \tau )\overline {\rho (\gamma )} = H(\tau )$ , we find that

$$ \begin{align*} g^{t}H(\tau)\bar g &= g^{t}\rho(\gamma)^{t}H(\gamma \tau)\overline{\rho(\gamma)g}\\ &= (g^{-1}\rho(\gamma)g)^{t}g^{t}H(\gamma\tau)\bar g (\overline{g^{-1}\rho(\gamma)g}). \end{align*} $$

Next, notice that

$$ \begin{align*} \partial\log(g^{t}H\bar g) =(g^{t}H\bar g)^{-1}\partial(g^{t}H\bar g)=\bar g^{-1}H^{-1}g^{-t}g^{t}\partial(H)\bar g = \bar g^{-1}\partial \log(H) \bar g, \end{align*} $$

and similarly for $\bar \partial \log (g^{t}H\bar g)$ . Therefore,

$$ \begin{align*} \partial\bar\partial\log(g^{t}H\bar g ) &=\bar g^{-1}\partial\bar\partial \log(H) \bar g\\ &= \tfrac 12\bar g^{-1}[\bar\partial \log(H),\partial \log(H)] \bar g\\ &= \tfrac 12[\bar g^{-1}\bar\partial \log(H)\bar g,\bar g^{-1}\partial \log(H) \bar g]\\ &=\tfrac 12[\bar\partial \log(g^{t}H\bar g),\partial \log(g^{t}H\bar g)]. \end{align*} $$

Definition 3.4 A metric $H \colon \operatorname {\mathrm {\mathcal {H}}} \to M_{d}(\mathbf {C})$ for a representation $\rho $ is said to be tame, or of slow growth, for a choice of exponents L provided that there exist constants C, N such that

$$ \begin{align*} \left\lVert {\mathbf{e}}(L^{t}\tau)H(\tau)\overline{{\mathbf{e}}(L\tau)}\right\rVert \leq Cy^{N}. \end{align*} $$

Tameness arises naturally in [Reference Simpson27] when considering correspondences between stable connections and stable Higgs bundles. In general, it can be difficult to write down explicit examples of harmonic metrics. Our interest in Eisenstein metrics is that they provide analytic families of metrics with appropriate invariance properties under the action of $\Gamma $ . The question is whether some specialization, residue, or some other metric derived from $H(\tau ,h,s)$ could satisfy the harmonicity and tameness conditions. Moreover, since the formation of Eisenstein metrics is well adapted to working with families of representations, one might obtain universal families of harmonic metrics living over moduli spaces of Higgs bundles. At present, no general results in this direction are known, although we discuss some preliminary examples below.

4 The inclusion representation

Suppose that $\rho \colon \Gamma \hookrightarrow \operatorname {\mathrm {GL}}_{2}(\mathbf {C})$ is the inclusion representation. In this case, a harmonic metric is known to be

(4.1) $$ \begin{align} K(\tau) = \frac{1}{y}\left(\begin{matrix}1&-x\\-x&x^{2}+y^{2}\end{matrix}\right), \end{align} $$

where $\tau = x+iy$ . This case is rather special, since $\rho $ extends to the ambient Lie group $\operatorname {\mathrm {SL}}_{2}(\mathbf {R})$ . In fact, this metric is an example of a totally geodesic metric as in Example 4.4 of [Reference Franc and Rayan10], or Example 14.1.2 of [Reference Carlson, Müller-Stach and Peters3]. In this section, we show that this metric $K(\tau )$ arises as a residue of Eisenstein metrics.
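Both properties of $K(\tau )$ , namely the invariance $\rho (\gamma )^{t}K(\gamma \tau )\overline {\rho (\gamma )} = K(\tau )$ and the harmonicity equation of Definition 3.1, are easy to confirm numerically. The sketch below is not part of the argument; the test point, finite-difference step, and tolerances are ad hoc choices, and the Wirtinger operators are $\partial = \tfrac 12(\partial _{x} - i\partial _{y})$ and $\bar \partial = \tfrac 12(\partial _{x} + i\partial _{y})$ :

```python
def mul(A, B):  # 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def inv(A):  # 2x2 matrix inverse
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / det, -A[0][1] / det], [-A[1][0] / det, A[0][0] / det]]

def lincomb(a, A, b, B):  # a*A + b*B, entrywise
    return [[a * A[i][j] + b * B[i][j] for j in range(2)] for i in range(2)]

def K(x, y):  # K(tau) = (1/y) [[1, -x], [-x, x^2 + y^2]]
    return [[1 / y, -x / y], [-x / y, (x * x + y * y) / y]]

def dKdx(x, y):  # hand-computed partial derivatives of K
    return [[0.0, -1 / y], [-1 / y, 2 * x / y]]

def dKdy(x, y):
    return [[-1 / y ** 2, x / y ** 2], [x / y ** 2, (y * y - x * x) / y ** 2]]

def dlog(x, y, bar):
    # K^{-1} dK with d = (1/2)(d/dx -+ i d/dy); bar=True gives dbar log K
    s = 1j if bar else -1j
    return mul(inv(K(x, y)), lincomb(0.5, dKdx(x, y), 0.5 * s, dKdy(x, y)))

x, y, h = 0.3, 0.7, 1e-5
# left side: partial(dbar log K), via central differences applied to dbar log K
Dx = lincomb(1 / (2 * h), dlog(x + h, y, True), -1 / (2 * h), dlog(x - h, y, True))
Dy = lincomb(1 / (2 * h), dlog(x, y + h, True), -1 / (2 * h), dlog(x, y - h, True))
lhs = lincomb(0.5, Dx, -0.5j, Dy)
# right side: (1/2)[dbar log K, partial log K]
A, B = dlog(x, y, False), dlog(x, y, True)
rhs = lincomb(0.5, mul(B, A), -0.5, mul(A, B))
err_harm = max(abs(lhs[i][j] - rhs[i][j]) for i in range(2) for j in range(2))

# invariance under gamma = S = [[0, -1], [1, 0]]: S^t K(S.tau) S = K(tau)
tau = complex(x, y)
w = -1 / tau
left = mul([[0, 1], [-1, 0]], mul(K(w.real, w.imag), [[0, -1], [1, 0]]))
err_inv = max(abs(left[i][j] - K(x, y)[i][j]) for i in range(2) for j in range(2))
print(err_harm, err_inv)
```

Both errors come out at roundoff or finite-difference scale, consistent with the theorem.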

Notice that since in this case $\rho (-I) = -I$ , the set $\operatorname {\mathrm {Pos}}(\rho )$ is the set of all positive definite matrices in $\operatorname {\mathrm {Herm}}_{2}$ . The possible exponent choices L take the form $2\pi i L = \left (\begin {smallmatrix}2\pi i n&1\\0&2\pi i n\end {smallmatrix}\right )$ for integers $n\in \mathbf {Z}$ . Fix $n \in \mathbf {Z}$ and observe that ${\mathbf {e}}(L\tau )=q^{n}\left (\begin {smallmatrix}1&\tau \\0&1\end {smallmatrix}\right )$ , where $q = {\mathbf {e}}(\tau )$ . Thus, if for our choice of $h\in \operatorname {\mathrm {Pos}}(\rho )$ we take $h=I$ , then

$$ \begin{align*} h(\tau) = \left\lvert q\right\rvert^{2n}\left(\begin{matrix}1&0\\-\tau&1\end{matrix}\right)\left(\begin{matrix}1&-\overline\tau\\0&1\end{matrix}\right)=e^{-4\pi ny}\left(\begin{matrix}1&-\bar\tau\\-\tau&1+\left\lvert \tau\right\rvert^{2}\end{matrix}\right). \end{align*} $$
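As a quick check, the identity ${\mathbf {e}}(L\tau )=q^{n}\left (\begin {smallmatrix}1&\tau \\0&1\end {smallmatrix}\right )$ and the simplification of $h(\tau )$ above can be verified numerically by computing the matrix exponential $e^{2\pi i L\tau }$ as a truncated power series (the test values of n and $\tau $ are arbitrary):

```python
import cmath, math

def mul(A, B):  # 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def expm(A, terms=80):  # matrix exponential via truncated power series
    out = [[1, 0], [0, 1]]
    term = [[1, 0], [0, 1]]
    for k in range(1, terms):
        term = [[t / k for t in row] for row in term]
        term = mul(term, A)
        out = [[out[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return out

n, tau = 1, complex(0.3, 0.7)
q = cmath.exp(2j * math.pi * tau)
# 2 pi i L tau for L = [[n, (2 pi i)^{-1}], [0, n]]
eLtau = expm([[2j * math.pi * n * tau, tau], [0, 2j * math.pi * n * tau]])
expected = [[q ** n, q ** n * tau], [0, q ** n]]
err1 = max(abs(eLtau[i][j] - expected[i][j]) for i in range(2) for j in range(2))

# |q|^{2n} [[1,0],[-tau,1]] [[1,-conj(tau)],[0,1]] = e^{-4 pi n y} [[1,-conj(tau)],[-tau,1+|tau|^2]]
tb = tau.conjugate()
prod = mul([[1, 0], [-tau, 1]], [[1, -tb], [0, 1]])
lhs = [[abs(q) ** (2 * n) * prod[i][j] for j in range(2)] for i in range(2)]
scale = math.exp(-4 * math.pi * n * tau.imag)
rhs = [[scale, -scale * tb], [-scale * tau, scale * (1 + abs(tau) ** 2)]]
err2 = max(abs(lhs[i][j] - rhs[i][j]) for i in range(2) for j in range(2))
print(err1, err2)
```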

Write $H(\tau ,s) = H(\rho ,L,I,\tau ,s)$ and compute

$$ \begin{align*} H(\tau,s) &= y^{s}h(\tau)+\sum_{c\geq 1}\sum_{\gcd(c,d) =1} \left(\begin{matrix}a&c\\b&d\end{matrix}\right)\left(\begin{matrix}1&0\\-\gamma \tau&1\end{matrix}\right)\left(\begin{matrix}1&-\gamma \bar\tau\\0&1\end{matrix}\right)\left(\begin{matrix}a&b\\c&d\end{matrix}\right)\\ &\quad \times e^{-4\pi n \frac{y}{\left\lvert c\tau+d\right\rvert^{2}}}\frac{y^{s}}{\left\lvert c\tau+d\right\rvert^{2s}}\\ &= y^{s}h(\tau)+\sum_{c\geq 1}\sum_{\gcd(c,d) =1} \left(\begin{matrix}a-c\gamma \tau&c\\b-d\gamma\tau&d\end{matrix}\right)\left(\begin{matrix}a-c\gamma\bar\tau&b-d\gamma\bar\tau\\c&d\end{matrix}\right)\\&\quad \times e^{-4\pi n \frac{y}{\left\lvert c\tau+d\right\rvert^{2}}}\frac{y^{s}}{\left\lvert c\tau+d\right\rvert^{2s}}\\ &= y^{s}h(\tau)+\sum_{c\geq 1}\sum_{\gcd(c,d) =1} \left(\begin{matrix}1&c\\-\tau&d\end{matrix}\right)\left(\begin{matrix}\left\lvert c\tau+d\right\rvert^{-2}&0\\0&1\end{matrix}\right)\left(\begin{matrix}1&-\bar\tau\\c&d\end{matrix}\right)\\&\quad \times e^{-4\pi n \frac{y}{\left\lvert c\tau+d\right\rvert^{2}}}\frac{y^{s}}{\left\lvert c\tau+d\right\rvert^{2s}}\\ &= y^{s}h(\tau)+\sum_{c\geq 1}\sum_{\gcd(c,d) =1} \left(\begin{matrix}\left\lvert c\tau+d\right\rvert^{-2}&c\\-\tfrac{\tau}{\left\lvert c\tau+d\right\rvert^{2}}&d\end{matrix}\right)\left(\begin{matrix}1&-\bar\tau\\c&d\end{matrix}\right)e^{-4\pi n \frac{y}{\left\lvert c\tau+d\right\rvert^{2}}}\frac{y^{s}}{\left\lvert c\tau+d\right\rvert^{2s}}\\ &= y^{s}h(\tau)+\sum_{c\geq 1}\sum_{\gcd(c,d) =1} \left(\begin{matrix}c^{2}+\frac{1}{\left\lvert c\tau+d\right\rvert^{2}}&cd-\tfrac{\bar\tau}{\left\lvert c\tau+d\right\rvert^{2}}\\cd-\tfrac{\tau}{\left\lvert c\tau+d\right\rvert^{2}}&d^{2}+\left\lvert \tfrac{\tau}{c\tau+d}\right\rvert^{2}\end{matrix}\right)e^{-4\pi n \frac{y}{\left\lvert c\tau+d\right\rvert^{2}}}\frac{y^{s}}{\left\lvert c\tau+d\right\rvert^{2s}}\\ &= y^{s}h(\tau)+\sum_{k\geq 0}\frac{(-4\pi n)^{k}}{k!}\sum_{c\geq 1}\sum_{\gcd(c,d) =1} 
\left(\begin{matrix}c^{2}+\frac{1}{\left\lvert c\tau+d\right\rvert^{2}}&cd-\tfrac{\bar\tau}{\left\lvert c\tau+d\right\rvert^{2}}\\cd-\tfrac{\tau}{\left\lvert c\tau+d\right\rvert^{2}}&d^{2}+\left\lvert \tfrac{\tau}{c\tau+d}\right\rvert^{2}\end{matrix}\right)\\&\quad \times \frac{y^{s+k}}{\left\lvert c\tau+d\right\rvert^{2(s+k)}}. \end{align*} $$
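The chain of matrix identities in this computation can be spot-checked numerically for a particular unimodular $\left (\begin {smallmatrix}a&b\\c&d\end {smallmatrix}\right )$ ; the sketch below compares the first and last forms of the summand for an arbitrary choice of matrix and of $\tau $ :

```python
def mul(A, B):  # 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

a, b, c, d = 2, 1, 3, 2                 # ad - bc = 1
tau = complex(0.3, 0.7)
gt = (a * tau + b) / (c * tau + d)      # gamma tau
gtb = gt.conjugate()
# first summand form: gamma^t [[1,0],[-gamma tau,1]] [[1,-conj(gamma tau)],[0,1]] gamma
first = mul([[a, c], [b, d]],
            mul([[1, 0], [-gt, 1]], mul([[1, -gtb], [0, 1]], [[a, b], [c, d]])))
# last summand form
w = abs(c * tau + d) ** 2
tb = tau.conjugate()
last = [[c * c + 1 / w, c * d - tb / w],
        [c * d - tau / w, d * d + abs(tau) ** 2 / w]]
err = max(abs(first[i][j] - last[i][j]) for i in range(2) for j in range(2))
print(err)
```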

Notice that with $E(\tau ,s) = y^{s}+\sum _{c\geq 1}\sum _{\gcd (c,d)=1}\frac {y^{s}}{\left \lvert c\tau +d\right \rvert ^{2s}}$ equal to the usual real-analytic Eisenstein series, we have

(4.2) $$ \begin{align} \nonumber H(\tau,s) =& e^{-4\pi ny}y^{s}\left(\begin{matrix}0&0\\0&1\end{matrix}\right)+y^{-1}\sum_{k\geq 0}\frac{(-4\pi n)^{k}}{k!}E(\tau,s+k+1)\left(\begin{matrix}1&-\bar\tau\\-\tau&\left\lvert \tau\right\rvert^{2}\end{matrix}\right)+\\ &\sum_{k\geq 0}\frac{(-4\pi n)^{k}}{k!}\sum_{c\geq 1}\sum_{\gcd(c,d) =1} \left(\begin{matrix}c^{2}&cd\\cd&d^{2}\end{matrix}\right)\frac{y^{s+k}}{\left\lvert c\tau+d\right\rvert^{2(s+k)}}. \end{align} $$

We now focus on the final terms using a standard approach:

$$ \begin{align*} &\sum_{c\geq 1}\sum_{\gcd(c,d) =1} \left(\begin{matrix}c^{2}&cd\\cd&d^{2}\end{matrix}\right)\frac{y^{s}}{\left\lvert c\tau+d\right\rvert^{2s}}\\ =&\sum_{c\geq 1}c^{2-2s}\sum_{\substack{d=1\\ \gcd(c,d)=1}}^{c}\sum_{u\in\mathbf{Z}}\left(\begin{matrix}1&\tfrac dc + u\\\tfrac dc + u&(\tfrac dc+u)^{2}\end{matrix}\right)\frac{y^{s}}{\left\lvert \tau+\tfrac dc+u\right\rvert^{2s}}\\ =&\left(\begin{matrix}1&0\\-\tau&1\end{matrix}\right)\left(\sum_{c\geq 1}c^{2-2s}\sum_{\substack{d=1\\ \gcd(c,d)=1}}^{c}\sum_{u\in\mathbf{Z}} \left(\begin{matrix}1&\bar\tau+\tfrac dc + u\\\tau+\tfrac dc + u&\left\lvert \tau+\tfrac dc+u\right\rvert^{2}\end{matrix}\right)\frac{y^{s}}{\left\lvert \tau+\tfrac dc+u\right\rvert^{2s}}\!\right)\!\left(\begin{matrix}1&-\bar\tau\\0&1\end{matrix}\right). \end{align*} $$

If $f(\tau )$ denotes the sum over u in the previous line, then $f(\tau +1)=f(\tau )$ and the Poisson summation formula may be used. We must first compute the Fourier transform of the summand terms: if ${\mathbf {e}}(z) := e^{2\pi i z}$ , then the Fourier transform is

$$ \begin{align*} &y^{s}\int_{-\infty}^{\infty} \left(\begin{matrix}1&x-iy+\tfrac dc + t\\x+iy+\tfrac dc + t&\left\lvert x+iy+\tfrac dc+t\right\rvert^{2}\end{matrix}\right)\frac{{\mathbf{e}}(-mt)}{\left\lvert x+iy+\tfrac dc+t\right\rvert^{2s}}dt\\ =&y^{s}{\mathbf{e}}(mx + m\tfrac dc)\int_{-\infty}^{\infty} \left(\begin{matrix}1&r-iy\\r+iy&r^{2}+y^{2}\end{matrix}\right)\frac{{\mathbf{e}}(-mr)}{(r^{2}+y^{2})^{s}}dr. \end{align*} $$

Let $g(m,y,s) = y^{s}\int _{-\infty }^{\infty } \left (\begin {smallmatrix}1&r-iy\\r+iy&r^{2}+y^{2}\end {smallmatrix}\right )\tfrac {{\mathbf {e}}(-mr)}{(r^{2}+y^{2})^{s}}dr$ . Then, putting everything together, we have shown that

(4.3) $$ \begin{align} \nonumber H(\tau,s) &= y^{s}e^{-4\pi ny}\left(\begin{matrix}0&0\\0&1\end{matrix}\right)+y^{-1}\sum_{k\geq 0}\frac{(-4\pi n)^{k}}{k!}E(\tau,s+k+1)\left(\begin{matrix}1&-\bar\tau\\-\tau&\left\lvert \tau\right\rvert^{2}\end{matrix}\right)+\left(\begin{matrix}1&0\\-\tau&1\end{matrix}\right)\\ & \quad \times \left(\sum_{k\geq 0}\frac{(-4\pi n)^{k}}{k!}\sum_{m\in\mathbf{Z}}{\mathbf{e}}(mx)g(m,y,s+k)\sum_{c\geq 1}c^{2-2(s+k)}\sum_{\substack{d=1\\ \gcd(c,d)=1}}^{c} {\mathbf{e}}(m\tfrac dc)\right)\\&\quad \times \left(\begin{matrix}1&-\bar\tau\\0&1\end{matrix}\right). \nonumber\end{align} $$

Recall that we have the following evaluation of the Ramanujan sum:

$$ \begin{align*} \sum_{\substack{d=1\\ \gcd(c,d)=1}}^{c} {\mathbf{e}}(m\tfrac dc)=\begin{cases} \sum_{g\mid \gcd(c,m)}\mu(\tfrac cg)g, & m\neq 0,\\ \phi(c), & m=0, \end{cases} \end{align*} $$

and therefore

$$ \begin{align*} \sum_{c\geq 1}c^{2-2(s+k)}\sum_{\substack{d=1\\ \gcd(c,d)=1}}^{c} {\mathbf{e}}(m\tfrac dc)=\begin{cases} \frac{\sigma_{3-2(s+k)}(\left\lvert m\right\rvert)}{\zeta(2(s+k)-2)}, & m\neq 0,\\ \frac{\zeta(2(s+k)-3)}{\zeta(2(s+k)-2)},&m=0. \end{cases} \end{align*} $$
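Both displayed identities are classical, and partial sums make them easy to spot-check; the sketch below tests the Ramanujan-sum evaluation for small arguments and the Dirichlet series identity at $s+k=4$ with $m=6$ (all cutoffs are ad hoc):

```python
import cmath, math

def ramanujan(c, m):  # c_c(m) = sum over 1 <= d <= c coprime to c of e(m d / c)
    return sum(cmath.exp(2j * math.pi * m * d / c)
               for d in range(1, c + 1) if math.gcd(c, d) == 1).real

def mobius(n):
    res, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            res = -res
        p += 1
    return -res if n > 1 else res

# Ramanujan sum evaluation: c_c(m) = sum_{g | gcd(c, m)} mu(c/g) g for m != 0
for c in range(1, 30):
    for m in (1, 4, 6, 12):
        g0 = math.gcd(c, m)
        closed = sum(mobius(c // g) * g for g in range(1, g0 + 1) if g0 % g == 0)
        assert abs(ramanujan(c, m) - closed) < 1e-9

# Dirichlet series at s + k = 4, m = 6:
# sum_c c^{2-2(s+k)} c_c(m) = sigma_{3-2(s+k)}(|m|) / zeta(2(s+k) - 2)
zeta6 = math.pi ** 6 / 945
lhsD = sum(ramanujan(c, 6) / c ** 6 for c in range(1, 400))
rhsD = sum(d ** -5 for d in (1, 2, 3, 6)) / zeta6   # sigma_{-5}(6) / zeta(6)
print(abs(lhsD - rhsD))
```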

That is, we have shown the following: $H(\tau ,s) = \left (\begin {smallmatrix}1&0\\-\tau &1\end {smallmatrix}\right ) \tilde H(\tau ,s) \left (\begin {smallmatrix}1&-\bar \tau \\0&1\end {smallmatrix}\right )$ where

(4.4) $$ \begin{align} \nonumber \tilde H(\tau,s) &= y^{s}e^{-4\pi ny}\left(\begin{matrix}0&0\\0&1\end{matrix}\right)+y^{-1}\sum_{k\geq 0}\frac{(-4\pi n)^{k}}{k!}E(\tau,s+k+1)\left(\begin{matrix}1&0\\0&0\end{matrix}\right)\\ &\quad + \sum_{k\geq 0}\frac{(-4\pi n)^{k}}{k!}\left(\frac{\zeta(2(s+k)-3)}{\zeta(2(s+k)-2)}g(0,y,s+k)\right.\nonumber\\&\quad + \left. \sum_{\substack{m\in\mathbf{Z}\\ m \neq 0}}\frac{{\mathbf{e}}(mx)\sigma_{3-2(s+k)}(\left\lvert m\right\rvert)g(m,y,s+k)}{\zeta(2(s+k)-2)}\right). \end{align} $$

To conclude the computation of the Fourier coefficients of $H(\tau ,s)$ , it now remains to give a more concrete expression for $g(m,y,s)$ . Equations (3.18) and (3.19) of [Reference Iwaniec15] yield

(4.5) $$ \begin{align} \int_{-\infty}^{\infty} \frac{{\mathbf{e}}(-mr)}{(r^{2}+y^{2})^{s}}dr = \begin{cases} \pi^{\frac 12}\frac{\Gamma(s-\frac 12)}{\Gamma(s)}y^{1-2s}, & m=0,\\ 2\pi^{s}\Gamma(s)^{-1}\left\lvert m\right\rvert^{s-\frac 12}y^{-s+\frac 12}K_{s-\frac 12}(2\pi \left\lvert m\right\rvert y), & m\neq 0. \end{cases} \end{align} $$
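Equation (4.5) can be confirmed by direct quadrature; the sketch below uses composite Simpson's rule and evaluates $K_{\nu }$ from the integral representation $K_{\nu }(z)=\int _{0}^{\infty }e^{-z\cosh t}\cosh (\nu t)\,dt$ (the quadrature windows, step sizes, and test values $s=2$ , $y=0.8$ , $m=1$ are ad hoc):

```python
import math

def simpson(f, a, b, n):  # composite Simpson's rule, n even
    h = (b - a) / n
    return (f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))) * h / 3

def kbessel(nu, z):  # K_nu(z) = int_0^infty exp(-z cosh t) cosh(nu t) dt
    return simpson(lambda t: math.exp(-z * math.cosh(t)) * math.cosh(nu * t), 0.0, 8.0, 4000)

s, y, m = 2.0, 0.8, 1
# m = 0 case of (4.5)
lhs0 = simpson(lambda r: (r * r + y * y) ** -s, -400.0, 400.0, 200000)
rhs0 = math.sqrt(math.pi) * math.gamma(s - 0.5) / math.gamma(s) * y ** (1 - 2 * s)
# m != 0 case (the imaginary part of the integral vanishes by symmetry)
lhs1 = simpson(lambda r: math.cos(2 * math.pi * m * r) * (r * r + y * y) ** -s,
               -50.0, 50.0, 40000)
rhs1 = (2 * math.pi ** s / math.gamma(s) * m ** (s - 0.5) * y ** (0.5 - s)
        * kbessel(s - 0.5, 2 * math.pi * m * y))
print(abs(lhs0 - rhs0), abs(lhs1 - rhs1))
```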

This allows us to evaluate the diagonal terms in $g(m,y,s)$ . It remains to treat the antidiagonal terms, and we first suppose $m\neq 0$ . Integrating by parts, and noting that the boundary term vanishes since $\operatorname {Re}(s) \gg 0$ ,

$$ \begin{align*} & \int_{-\infty}^{\infty} \frac{r{\mathbf{e}}(-mr)}{(r^{2}+y^{2})^{s}}dr \\ =&-\frac{1}{2\pi im}\lim_{N \to \infty}\left. \frac{r{\mathbf{e}}(-mr)}{(r^{2}+y^{2})^{s}}\right|^{N}_{-N}\!+\frac{1}{2\pi im}\!\int_{-\infty}^{\infty}\!\frac{((r^{2}+y^{2})^{s}-2sr^{2}(r^{2}+y^{2})^{s-1}){\mathbf{e}}(-mr)}{(r^{2}+y^{2})^{2s}}dr\\ =&\frac{1}{2\pi i m}\int_{-\infty}^{\infty}\frac{{\mathbf{e}}(-mr)}{(r^{2}+y^{2})^{s}}dr-\frac{2s}{2\pi i m}\int_{-\infty}^{\infty}\frac{{\mathbf{e}}(-mr)}{(r^{2}+y^{2})^{s}}dr+\frac{2sy^{2}}{2\pi i m}\int_{-\infty}^{\infty}\frac{{\mathbf{e}}(-mr)}{(r^{2}+y^{2})^{s+1}}dr. \end{align*} $$

That is, for $m\neq 0$ , we have shown that

$$ \begin{align*} &g(m,y,s)\\ &= y^{s}\!\!\!\!\int_{-\infty}^{\infty} \!\!\left(\!\begin{matrix}\frac{{\mathbf{e}}(-mr)}{(r^{2}+y^{2})^{s}}&\!\!\left(\!\frac{1-2s}{2\pi i m}-iy\!\right)\frac{{\mathbf{e}}(-mr)}{(r^{2}+y^{2})^{s}}+\frac{2sy^{2}}{2\pi i m}\frac{{\mathbf{e}}(-mr)}{(r^{2}+y^{2})^{s+1}}\\\left(\!\frac{1-2s}{2\pi i m}+iy\!\right)\frac{{\mathbf{e}}(-mr)}{(r^{2}+y^{2})^{s}}+\frac{2sy^{2}}{2\pi i m}\frac{{\mathbf{e}}(-mr)}{(r^{2}+y^{2})^{s+1}}&\!\!\frac{{\mathbf{e}}(-mr)}{(r^{2}+y^{2})^{s-1}}\end{matrix}\!\!\right)dr\\ &= y^{s}\left(\begin{matrix}1&\frac{1-2s}{2\pi i m}-iy\\\frac{1-2s}{2\pi i m}+iy&0\end{matrix}\right)2\pi^{s}\Gamma(s)^{-1}\left\lvert m\right\rvert^{s-\frac 12}y^{-s+\frac 12}K_{s-\frac 12}(2\pi \left\lvert m\right\rvert y)\\ &\quad + y^{s}\left(\begin{matrix}0&\tfrac{2sy^{2}}{2\pi i m}\\\tfrac{2sy^{2}}{2\pi i m}&0\end{matrix}\right)2\pi^{s+1}\Gamma(s+1)^{-1}\left\lvert m\right\rvert^{s+\frac 12}y^{-s-\frac 12}K_{s+\frac 12}(2\pi \left\lvert m\right\rvert y)\\ & \quad + y^{s}\left(\begin{matrix}0&0\\0&1\end{matrix}\right)2\pi^{s-1}\Gamma(s-1)^{-1}\left\lvert m\right\rvert^{s-\frac 32}y^{-s+\frac 32}K_{s-\frac 32}(2\pi \left\lvert m\right\rvert y). \end{align*} $$

This can be cleaned up somewhat using the functional equation $\Gamma (s+1)=s\Gamma (s)$ for the gamma function:

$$ \begin{align*} g(m,y,s) &=\left(\begin{matrix}1&\frac{1-2s}{2\pi i m}-iy\\\frac{1-2s}{2\pi im}+iy&0\end{matrix}\right)2\pi^{s}\Gamma(s)^{-1}\left\lvert m\right\rvert^{s-\frac 12}y^{\frac 12}K_{s-\frac 12}(2\pi \left\lvert m\right\rvert y)\\ & \quad + \left(\begin{matrix}0&\tfrac{2y^{2}}{2\pi im}\\\tfrac{2y^{2}}{2\pi i m}&0\end{matrix}\right)2\pi^{s+1}\Gamma(s)^{-1}\left\lvert m\right\rvert^{s+\frac 12}y^{-\frac 12}K_{s+\frac 12}(2\pi \left\lvert m\right\rvert y)\\ & \quad + \left(\begin{matrix}0&0\\0&1\end{matrix}\right)2\pi^{s-1}(s-1)\Gamma(s)^{-1}\left\lvert m\right\rvert^{s-\frac 32}y^{\frac 32}K_{s-\frac 32}(2\pi \left\lvert m\right\rvert y). \end{align*} $$

In the interest of pairing terms corresponding to $\pm m$ , notice that for $m> 0$

$$ \begin{align*} &{\mathbf{e}}(mx)g(m,y,s)+{\mathbf{e}}(-mx)g(-m,y,s) \\ &=\cos(2\pi mx)(g(m,y,s)+g(-m,y,s))+i\sin(2\pi mx)(g(m,y,s)-g(-m,y,s))\\ &= \frac{4y^{\frac 12}\pi^{s}m^{s-\frac 12}}{\Gamma(s)}\left(\cos(2\pi mx)\left(\begin{matrix}K_{s-\frac 12}(2\pi my)&-iyK_{s-\frac 12}(2\pi my)\\iyK_{s-\frac 12}(2\pi my)&\frac{(s-1)y}{\pi m}K_{s-\frac 32}(2\pi my)\end{matrix}\right) \right.\\ & \quad + \left.\sin(2\pi mx)\left(\frac{1-2s}{2\pi m}K_{s-\frac 12}(2\pi my)+yK_{s+\frac 12}(2\pi my)\right)\left(\begin{matrix}0&1\\1&0\end{matrix}\right)\right). \end{align*} $$

Finally, to evaluate $g(0,y,s)$ , it remains to observe that since $\tfrac {r}{(r^{2}+y^{2})^{s}}$ is an odd function of r, $\int _{-\infty }^{\infty } \frac {r}{(r^{2}+y^{2})^{s}}dr=0$ . Therefore, we find that

$$ \begin{align*} g(0,y,s) &= y^{s}\int_{-\infty}^{\infty} \left(\begin{smallmatrix}1&r-iy\\r+iy&r^{2}+y^{2}\end{smallmatrix}\right)\tfrac{1}{(r^{2}+y^{2})^{s}}dr\\ &=y^{s}\int_{-\infty}^{\infty} \left(\begin{smallmatrix}1&-iy\\iy&r^{2}+y^{2}\end{smallmatrix}\right)\tfrac{1}{(r^{2}+y^{2})^{s}}dr\\ &=y^{s}\left(\begin{matrix}\pi^{\frac 12}\frac{\Gamma(s-\frac 12)}{\Gamma(s)}y^{1-2s}&-iy\pi^{\frac 12}\frac{\Gamma(s-\frac 12)}{\Gamma(s)}y^{1-2s}\\iy\pi^{\frac 12}\frac{\Gamma(s-\frac 12)}{\Gamma(s)}y^{1-2s}&\pi^{\frac 12}\frac{\Gamma(s-\frac 32)}{\Gamma(s-1)}y^{3-2s}\end{matrix}\right), \end{align*} $$

so that

$$ \begin{align*} g(0,y,s) = \frac{\pi^{\frac 12}y^{1-s}\Gamma(s-\frac 12)}{\Gamma(s)}\left(\begin{matrix}1&-iy\\iy&\frac{2s-2}{2s-3}y^{2}\end{matrix}\right). \end{align*} $$

Finally, recall that the Fourier expansion for $E(\tau ,s+1)$ is

$$ \begin{align*} E(\tau,s+1) &= y^{s+1}+\frac{\pi^{2s+1}\Gamma(-s)\zeta(-2s)}{\Gamma(s+1)\zeta(2s+2)}y^{-s}\\ & \quad + \frac{4\pi^{s+1}y^{\frac 12}}{\Gamma(s+1)\zeta(2s+2)}\sum_{m=1}^{\infty} m^{-s-\frac 12}\sigma_{2s+1}(m)\cos(2\pi m x)K_{s+\frac 12}(2\pi m y). \end{align*} $$
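As a consistency check on this expansion, one can compare a truncation of the defining series for $E(\tau ,s+1)$ against a truncation of the Fourier expansion; the sketch below takes $s=\tfrac 32$ , uses the known special value $\zeta (-3)=\tfrac {1}{120}$ , and approximates $K_{\nu }$ by quadrature (all cutoffs and the test point are ad hoc):

```python
import math

def simpson(f, a, b, n):  # composite Simpson's rule, n even
    h = (b - a) / n
    return (f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))) * h / 3

def kbessel(nu, z):  # K_nu(z) = int_0^infty exp(-z cosh t) cosh(nu t) dt
    return simpson(lambda t: math.exp(-z * math.cosh(t)) * math.cosh(nu * t), 0.0, 8.0, 4000)

def zeta(w, N=4000):  # partial sum of the zeta series, w > 1
    return sum(k ** -w for k in range(1, N))

s = 1.5                      # the expansion below is for E(tau, s+1), with s + 1 = 2.5
x, y = 0.23, 0.9
tau = complex(x, y)

# defining series, truncated
direct = y ** 2.5 + sum(y ** 2.5 / abs(c * tau + d) ** 5
                        for c in range(1, 120)
                        for d in range(-400, 401) if math.gcd(c, abs(d)) == 1)

# Fourier expansion, truncated; zeta(-2s) = zeta(-3) = 1/120 is the known special value
const = (y ** (s + 1) + math.pi ** (2 * s + 1) * math.gamma(-s) * (1 / 120)
         / (math.gamma(s + 1) * zeta(2 * s + 2)) * y ** -s)
cusp = (4 * math.pi ** (s + 1) * math.sqrt(y) / (math.gamma(s + 1) * zeta(2 * s + 2))
        * sum(m ** (-s - 0.5) * sum(d ** (2 * s + 1) for d in range(1, m + 1) if m % d == 0)
              * math.cos(2 * math.pi * m * x) * kbessel(s + 0.5, 2 * math.pi * m * y)
              for m in range(1, 7)))
rel_err = abs(direct - (const + cusp)) / direct
print(rel_err)
```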

Putting these computations together allows us to prove the following.

Theorem 4.1 Let $\rho $ denote the inclusion representation of $\Gamma $ , let $L = \left (\begin {smallmatrix}n&(2\pi i )^{-1}\\0&n\end {smallmatrix}\right )$ for $n \in \mathbf {Z}$ , and set $H(\tau ,s) := H(\rho ,L,I,\tau ,s)$ . Then, the Eisenstein metric $H(\tau ,s)$ has a Fourier expansion of the form

$$ \begin{align*}H(\tau,s) = \left(\begin{matrix}1&0\\-\tau&1\end{matrix}\right)\left(\sum_{m\geq 0} H_{m}(\tau,s)\right)\left(\begin{matrix}1&-\bar\tau\\0&1\end{matrix}\right),\end{align*} $$

where

$$ \begin{align*} H_{0}(\tau,s) &= y^{s}e^{-4\pi ny}\left(\begin{matrix}1&0\\0&1\end{matrix}\right)\\ & \quad + \sum_{k\geq 0}\frac{(-4\pi n)^{k}}{k!}\frac{\pi^{2s+2k+1}\Gamma(-s-k)\zeta(-2s-2k)}{\Gamma(s+k+1)\zeta(2s+2k+2)}y^{-s-k-1}\left(\begin{matrix}1&0\\0&0\end{matrix}\right)\\ & \quad + \sum_{k\geq 0}\frac{(-4\pi n)^{k}}{k!}\frac{\pi^{\frac 12}y^{1-s-k}\Gamma(s+k-\frac 12)\zeta(2s+2k-3)}{\Gamma(s+k)\zeta(2s+2k-2)}\left(\begin{matrix}1&-iy\\iy&\frac{2s+2k-2}{2s+2k-3}y^{2}\end{matrix}\right), \end{align*} $$

and for $m\geq 1$ ,

$$ \begin{align*} & H_{m}(\tau,s) =4\pi^{s}\sum_{k\geq 0}\frac{(-4\pi^{2} n)^{k}}{k!}\left(\frac{\pi\sigma_{2s+2k+1}(m)\cos(2\pi m x)K_{s+k+\frac 12}(2\pi m y)}{m^{s+k+\frac 12}\Gamma(s+k+1)\zeta(2s+2k+2)y^{\frac 12}}\left(\begin{matrix}1&0\\0&0\end{matrix}\right) \right.\\ & + \frac{y^{\frac 12}m^{s+k-\frac 12}\sigma_{3-2s-2k}(m)}{\Gamma(s+k)\zeta(2s+2k-2)}\cos(2\pi mx)\left(\begin{matrix}K_{s+k-\frac 12}(2\pi my)&\!\!-iyK_{s+k-\frac 12}(2\pi my)\\iyK_{s+k-\frac 12}(2\pi my)&\!\!\frac{(s+k-1)y}{\pi m}K_{s+k-\frac 32}(2\pi my)\end{matrix}\right)\\ & + \left.\frac{y^{\frac 12}m^{s+k-\frac 12}\sigma_{3-2s-2k}(m)}{\Gamma(s+k)\zeta(2s+2k-2)}\sin(2\pi mx) \right. \\& \times \left. \left(\frac{1-2s-2k}{2\pi m}K_{s+k-\frac 12}(2\pi my)+yK_{s+k+\frac 12}(2\pi my)\right) \left(\begin{matrix}0&1\\1&0\end{matrix}\right)\right). \end{align*} $$

Moreover, $H(\tau ,s)$ admits meromorphic continuation to the region $\operatorname {Re}(s)> \tfrac 32$ , and its only pole in this region is simple and located at $s=2$ . This pole comes from the constant term $H_{0}(\tau ,s)$ , and the residue at $s=2$ is a tame harmonic metric for the inclusion representation

$$ \begin{align*} \operatorname{\mathrm{Res}}_{s=2}H(\tau,s) = \frac{3}{2\pi y}\left(\begin{matrix}1&-x\\-x&x^{2}+y^{2}\end{matrix}\right) = \frac{3}{2\pi}K(\tau). \end{align*} $$

In particular, the residue does not depend on the choice of exponent matrix L.

Proof The computation of the Fourier coefficients is accomplished by substituting our expressions for the Fourier transforms $g(m,y,s)$ into equation (4.4) and simplifying. For the meromorphic continuation, recall that $\Gamma (s/2)\zeta (s)$ has two simple poles, at $s=0$ and $s=1$ , but it is otherwise holomorphic. Furthermore, it is nonvanishing in a neighborhood of $s=1$ and for $\operatorname {Re}(s) \geq 1$ .

The first term in the expression for $H_{0}(\tau ,s)$ is holomorphic in s. For each summand in the second line of the expression for $H_{0}(\tau ,s)$ , consider the ratio

$$ \begin{align*} \frac{\Gamma(-s-k)\zeta(-2s-2k)}{\Gamma(s+k+1)\zeta(2s+2k+2)}, \end{align*} $$

which is $O(1)$ as a function of k. If $\operatorname {Re}(s)> \frac 32$ , then $\operatorname {Re}(-2s-2k) < -3-2k \leq -3$ , so that $\zeta (-2s-2k)$ is holomorphic in this region; moreover, the poles of $\Gamma (-s-k)$ , which occur when $s+k$ is a nonnegative integer, are cancelled by the trivial zeros of $\zeta (-2s-2k)$ . Therefore, the numerator of the displayed formula above is holomorphic in this region. Likewise, $\operatorname {Re}(2s+2k+2)> 5+2k$ , so that the denominator is holomorphic and nonvanishing in this region. Therefore, the second line in the description of $H_{0}(\tau ,s)$ converges to a holomorphic function when $\operatorname {Re}(s)> \frac 32$ .

For the final set of terms in $H_{0}(\tau ,s)$ , we consider the expressions

$$ \begin{align*} \frac{\Gamma(s+k-\frac 12)\zeta(2s+2k-3)}{\Gamma(s+k)\zeta(2s+2k-2)}. \end{align*} $$

Here, if $\operatorname {Re}(s)> \tfrac 32$ , then $\operatorname {Re}(2s+2k-3)> 2k > 0$ , so that the only possible pole can come from solutions to $2s+2k-3=1$ , which is $s=2-k$ . In the region $\operatorname {Re}(s)>\tfrac 32$ , this can only occur if $k=0$ , in which case a pole occurs at $s=2$ . For the denominators, we have that $\operatorname {Re}(2s+2k-2)> 1+2k > 1$ , so that the denominator is holomorphic and nonvanishing. It follows that when $\operatorname {Re}(s)> \tfrac 32$ , the constant term $H_{0}(\tau ,s)$ is holomorphic save for a simple pole arising from the $k=0$ term in the last sum of its expression in Theorem 4.1.

The higher Fourier coefficients $H_{m}(\tau ,s)$ are holomorphic in the region $\operatorname {Re}(s)> \tfrac 32$ , and the proof is similar to that for the constant term. A new feature is that one must estimate sums such as

$$ \begin{align*}\sum_{k\geq 0}\frac{(4\pi^{2}n)^{k}\sigma_{2s+2k+1}(m)K_{s+k+\frac 12}(2\pi my)}{m^{k}k!\Gamma(s+k+1)\zeta(2s+2k+2)}.\end{align*} $$

When $\left \lvert \nu \right \rvert $ is large relative to x, one can estimate $K_{\nu }(x)$ via the first few terms of its series expansion about $x=0$ (see Appendix B above (B.35) of [Reference Iwaniec15]). More precisely, in the tail of the sum where $\left \lvert 2\pi m y\right \rvert \ll 1+\left \lvert s+k+\frac 12\right \rvert ^{1/2}$ , one can approximate

(4.6) $$ \begin{align} K_{s+k+\frac 12}(2\pi m y) \approx\tfrac{1}{2}\Gamma(s+k+\tfrac 12)(\pi m y)^{-s-k-\frac 12}. \end{align} $$

In this way, one can show that for large k, the Bessel terms are mollified by the $\Gamma $ -terms in the denominators. Standard and more elementary estimates for the remaining factors appearing in $H_{m}(\tau ,s)$ then allow one to deduce sufficiently fast convergence to show that these sums indeed yield a holomorphic function in the region $\operatorname {Re}(s)> \tfrac 32$ .
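The approximation (4.6) is the leading term of the small-argument expansion $K_{\nu }(x)\sim \tfrac 12\Gamma (\nu )(x/2)^{-\nu }$ , whose relative error is of order $(x/2)^{2}/(\nu -1)$ ; a quick numerical comparison (test values ad hoc, $K_{\nu }$ computed from its integral representation):

```python
import math

def simpson(f, a, b, n):  # composite Simpson's rule, n even
    h = (b - a) / n
    return (f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))) * h / 3

def kbessel(nu, z):  # K_nu(z) = int_0^infty exp(-z cosh t) cosh(nu t) dt
    return simpson(lambda t: math.exp(-z * math.cosh(t)) * math.cosh(nu * t), 0.0, 12.0, 12000)

nu, xs = 8.5, 0.5                          # order large relative to the argument
exact = kbessel(nu, xs)
approx = 0.5 * math.gamma(nu) * (xs / 2) ** -nu
rel = abs(exact - approx) / exact          # expected to be of order (xs/2)^2/(nu-1)
print(rel)
```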

It remains to compute the residue at $s=2$ , and by the preceding analysis, we have

$$ \begin{align*} \operatorname{\mathrm{Res}}_{s=2} H(\tau,s) =& \left(\begin{matrix}1&0\\-\tau&1\end{matrix}\right)\operatorname{\mathrm{Res}}_{s=2}H_{0}(\tau,s)\left(\begin{matrix}1&-\bar\tau\\0&1\end{matrix}\right)\\ =&\left(\begin{matrix}1&0\\-\tau&1\end{matrix}\right)\operatorname{\mathrm{Res}}_{s=2}\frac{\pi^{\frac 12}y^{1-s}\Gamma(s-\frac 12)\zeta(2s-3)}{\Gamma(s)\zeta(2s-2)}\left(\begin{matrix}1&-iy\\iy&\frac{2s-2}{2s-3}y^{2}\end{matrix}\right)\left(\begin{matrix}1&-\bar\tau\\0&1\end{matrix}\right)\\ =&\frac{\pi^{\frac 12}\Gamma(\frac 32)\operatorname{\mathrm{Res}}_{s=2}\zeta(2s-3)}{y\Gamma(2)\zeta(2)}\left(\begin{matrix}1&0\\-\tau&1\end{matrix}\right)\left(\begin{matrix}1&-iy\\iy&2y^{2}\end{matrix}\right)\left(\begin{matrix}1&-\bar\tau\\0&1\end{matrix}\right)\\ =&\frac{3}{2\pi y}\left(\begin{matrix}1&-x\\-x&x^{2}+y^{2}\end{matrix}\right). \end{align*} $$

This concludes the proof of Theorem 4.1.▪
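The scalar appearing in the residue combines $\Gamma (\tfrac 32)=\tfrac {\sqrt {\pi }}{2}$ , $\operatorname {\mathrm {Res}}_{s=2}\zeta (2s-3)=\tfrac 12$ , and $\zeta (2)=\tfrac {\pi ^{2}}{6}$ ; both this constant and the matrix identity in the last step of the proof are easy to confirm numerically (arbitrary test point):

```python
import math

# scalar: sqrt(pi) * Gamma(3/2) * Res_{s=2} zeta(2s-3) / (Gamma(2) * zeta(2)) = 3/(2 pi)
scalar = math.sqrt(math.pi) * math.gamma(1.5) * 0.5 / (math.gamma(2) * (math.pi ** 2 / 6))
err_scalar = abs(scalar - 3 / (2 * math.pi))

def mul(A, B):  # 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

x, y = 0.4, 1.3
tau = complex(x, y)
tb = tau.conjugate()
# (1/y) [[1,0],[-tau,1]] [[1,-iy],[iy,2y^2]] [[1,-conj(tau)],[0,1]] = (1/y) [[1,-x],[-x,x^2+y^2]]
P = mul([[1, 0], [-tau, 1]],
        mul([[1, -1j * y], [1j * y, 2 * y * y]], [[1, -tb], [0, 1]]))
lhs = [[v / y for v in row] for row in P]
rhs = [[1 / y, -x / y], [-x / y, (x * x + y * y) / y]]
err_mat = max(abs(lhs[i][j] - rhs[i][j]) for i in range(2) for j in range(2))
print(err_scalar, err_mat)
```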

5 Unitary monodromy at the cusp

Now, let $\rho $ be a representation with $\rho (T)$ unitary. Then, we can write $\rho (T) = e^{2\pi i L}$ , where L is Hermitian (so that its eigenvalues are real). In fact, we may and shall suppose that $\rho (T)$ and L are both diagonal.

Lemma 5.1 Suppose that $\rho (T)$ and L are both diagonal. If $h\in \operatorname {\mathrm {Pos}}(\rho )$ , then h satisfies

$$ \begin{align*} {\mathbf{e}}(-Lz)h{\mathbf{e}}(Lz)= h, \end{align*} $$

for all $z\in \mathbf {C}$ .

Proof The lemma holds for $z=1$ by the hypothesis $h \in \operatorname {\mathrm {Pos}}(\rho )$ . Without loss of generality, we can suppose that the distinct eigenvalues of L are $r_{1},\ldots , r_{m}$ each with corresponding multiplicity $\mu _{j}$ for $j=1,\ldots , m$ . The commutator algebra of $\rho (T) = {\mathbf {e}}(L)$ consists of the Levi subalgebra of block diagonal matrices with blocks of size $(\mu _{1},\ldots ,\mu _{m})$ . The commutant of ${\mathbf {e}}(Lz)$ for $z\in \mathbf {C}$ can only possibly increase in size (e.g., if $z=0$ ). This proves the lemma.▪

Write $\tau =x+iy$ , so that for $h\in \operatorname {\mathrm {Pos}}(\rho )$ , the preceding lemma implies that

$$ \begin{align*} h(\tau) = e^{-2\pi i L(x+iy)}h e^{2\pi i L(x-iy)} = e^{4\pi Ly}h. \end{align*} $$

By abuse of notation below, $a,b$ denote integers chosen so that the corresponding matrix is unimodular; in particular, b changes in the last line of the following computation, but the result is independent of this choice, which justifies our abuse of notation. With this point made, we compute

$$ \begin{align*} & H(h,\tau,s)\\ &= e^{4\pi Ly}y^{s}h + \sum_{c\geq 1}\sum_{\gcd(c,d)=1} \rho\left(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right)^{t} e^{4\pi L \frac{y}{\left\lvert c\tau+d\right\rvert^{2}}}h\overline{\rho\left(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right)}\frac{y^{s}}{\left\lvert c\tau+d\right\rvert^{2s}}\\ &= e^{4\pi Ly}y^{s}h + \sum_{c\geq 1}\sum_{\substack{d=1\\\gcd(c,d)=1}}^{c}\sum_{r\in\mathbf{Z}} \rho\left(\begin{smallmatrix}a&b\\c&cr+d\end{smallmatrix}\right)^{t}e^{4\pi L \frac{y}{\left\lvert c(\tau+r)+d\right\rvert^{2}}}h\overline{\rho\left(\begin{smallmatrix}a&b\\c&cr+d\end{smallmatrix}\right)}\frac{y^{s}}{\left\lvert c(\tau+r)+d\right\rvert^{2s}}\\ &=e^{4\pi Ly}y^{s}h + \sum_{c\geq 1}\sum_{\substack{d=1\\\gcd(c,d)=1}}^{c}\sum_{r\in\mathbf{Z}}\sum_{k\geq 0} \frac{(4\pi)^{k}}{k!}\rho\left(\begin{smallmatrix}a&b\\c&cr+d\end{smallmatrix}\right)^{t} L^{k}h\overline{\rho\left(\begin{smallmatrix}a&b\\c&cr+d\end{smallmatrix}\right)}\frac{y^{s+k}}{\left\lvert c(\tau+r)+d\right\rvert^{2(s+k)}}\\ &=e^{4\pi Ly}y^{s}h + y^{s}\sum_{k\geq 0} \frac{(4\pi y)^{k}}{k!}\sum_{c\geq 1}\frac{1}{c^{2(s+k)}}\sum_{\substack{d=1\\\gcd(c,d)=1}}^{c}\\&\quad \times \sum_{r\in\mathbf{Z}}\frac{\rho(T^{r})\rho\left(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right)^{t} L^{k}h\overline{\rho\left(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right)}\rho(T^{-r})}{\left\lvert \tau+r+\frac{d}{c}\right\rvert^{2(s+k)}}, \end{align*} $$

where in the last line we have used that $\rho (T)$ is diagonal to drop the transpose. Likewise, since L is real, $\rho (T)$ is unitary, and we have used the identity $\overline {\rho (T^{r})} = \rho (T^{-r})$ .

Let $G(\tau )$ denote the sum over r above. Notice that $G(\tau +1) = \rho (T)^{-1}G(\tau )\rho (T)$ , so that if we write

$$ \begin{align*} \tilde G(\tau) = {\mathbf{e}}(Lx)G(\tau)\,{\mathbf{e}}(-Lx), \end{align*} $$

then $\tilde G(\tau +1)=\tilde G(\tau )$ . For simplicity, to study $\tilde G$ , we momentarily write

$$ \begin{align*} M =& \rho\left(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right)^{t} L^{k}h\overline{\rho\left(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right)},\\ L =& \operatorname{\mathrm{diag}}(e_{1},\ldots, e_{n}), \end{align*} $$

for real exponents $e_{j}$ . Then, the $(i,j)$ -entry of $\tilde G$ is

$$ \begin{align*} \tilde G_{ij} =& M_{ij}\sum_{r\in\mathbf{Z}}{\mathbf{e}}((e_{i}-e_{j})(x+r))\frac{1}{((x+r+\frac dc)^{2}+y^{2})^{s+k}}\\ =& {\mathbf{e}}\left((e_{j}-e_{i})\frac dc\right) M_{ij}\sum_{r\in\mathbf{Z}}{\mathbf{e}}((e_{i}-e_{j})(X+r))\frac{1}{((X+r)^{2}+y^{2})^{s+k}}, \end{align*} $$

where $X = x+\frac dc$ . We can evaluate this last sum using Poisson summation: with the Fourier transform,

$$ \begin{align*} f(u) =& \int_{-\infty}^{\infty} {\mathbf{e}}((e_{i}-e_{j})(X+r))\frac{1}{((X+r)^{2}+y^{2})^{s+k}}{\mathbf{e}}(-ur)dr\\ =&{\mathbf{e}}(Xu)\int_{-\infty}^{\infty} {\mathbf{e}}((e_{i}-e_{j}-u)r)\frac{1}{(r^{2}+y^{2})^{s+k}}dr. \end{align*} $$

Poisson summation gives

$$ \begin{align*} \tilde G_{ij} ={\mathbf{e}}\left((e_{j}-e_{i})\frac dc\right) M_{ij}\sum_{u\in\mathbf{Z}} f(u). \end{align*} $$

Formulas (3.18) and (3.19) of [Reference Iwaniec15] yield expressions

(5.1) $$ \begin{align} f(u) = \begin{cases} \pi^{\frac 12}{\mathbf{e}}((x+\frac dc)u)\frac{\Gamma(s+k-\frac 12)}{\Gamma(s+k)}y^{1-2(s+k)}, & e_{i}-e_{j}=u,\\ 2\frac{\pi^{s+k}{\mathbf{e}}((x+\frac dc)u)\left\lvert u+e_{j}-e_{i}\right\rvert^{s+k-\frac 12}}{y^{s+k-\frac 12}\Gamma(s+k)}K_{s+k-\frac 12}(2\pi \left\lvert u+e_{j}-e_{i}\right\rvert y),&e_{i}-e_{j} \neq u. \end{cases} \end{align} $$
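Equation (5.1) and the preceding Poisson-summation step can be spot-checked numerically for a non-integral difference $e_{i}-e_{j}$ ; the sketch below (ad hoc cutoffs and test values, $K_{\nu }$ again computed from its integral representation) compares the direct sum over r against $\sum _{u} f(u)$ :

```python
import cmath, math

def simpson(f, a, b, n):  # composite Simpson's rule, n even
    h = (b - a) / n
    return (f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))) * h / 3

def kbessel(nu, z):  # K_nu(z) = int_0^infty exp(-z cosh t) cosh(nu t) dt
    return simpson(lambda t: math.exp(-z * math.cosh(t)) * math.cosh(nu * t), 0.0, 8.0, 4000)

def e(z):
    return cmath.exp(2j * math.pi * z)

alpha, X, sig, y = 0.3, 0.37, 2.0, 0.8   # alpha = e_i - e_j (non-integral), sig = s + k

# direct sum over r
lhs = sum(e(alpha * (X + r)) / ((X + r) ** 2 + y * y) ** sig for r in range(-2500, 2501))

def f(u):  # transform from (5.1); note |u + e_j - e_i| = |u - alpha|
    beta = abs(u - alpha)
    return (e(X * u) * 2 * math.pi ** sig / math.gamma(sig)
            * beta ** (sig - 0.5) * y ** (0.5 - sig)
            * kbessel(sig - 0.5, 2 * math.pi * beta * y))

rhs = sum(f(u) for u in range(-8, 9))
err = abs(lhs - rhs)
print(err)
```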

Thus, if

$$ \begin{align*} z_{ij} = \begin{cases} 1, & e_{i}-e_{j} \in \mathbf{Z},\\ 0, & e_{i}-e_{j} \not \in \mathbf{Z}, \end{cases} \end{align*} $$

then we deduce that

$$ \begin{align*} \tilde G_{ij} &= \pi^{\frac 12}{\mathbf{e}}((e_{i}-e_{j})x)\frac{\Gamma(s+k-\frac 12)}{\Gamma(s+k)}y^{1-2(s+k)}M_{ij}z_{ij}\\ &\quad +\frac{2\pi^{s+k}{\mathbf{e}}\left((e_{j}-e_{i})\frac dc\right) M_{ij}}{y^{s+k-\frac 12}\Gamma(s+k)}\\&\quad \times \sum_{\substack{u\in\mathbf{Z}\\ u\neq e_{i}-e_{j}}} {\mathbf{e}}((x+\tfrac dc)u)\left\lvert u+e_{j}-e_{i}\right\rvert^{s+k-\frac 12}K_{s+k-\frac 12}(2\pi \left\lvert u+e_{j}-e_{i}\right\rvert y). \end{align*} $$

Notice that

$$ \begin{align*} G_{ij} = ({\mathbf{e}}(-Lx)\tilde G {\mathbf{e}}(Lx))_{ij} ={\mathbf{e}}((e_{j}-e_{i})x)\tilde G_{ij}. \end{align*} $$

Therefore, putting all of this together, we have shown that

$$ \begin{align*} H_{ij} &= e^{4\pi e_{i}y}y^{s}h_{ij}+\pi^{\frac 12}y^{1-s}\sum_{k\geq 0} \frac{(4\pi)^{k}}{k!}\sum_{c\geq 1}\frac{1}{c^{2(s+k)}}\sum_{\substack{d=1\\\gcd(c,d)=1}}^{c}\frac{\Gamma(s+k-\frac 12)}{y^{k}\Gamma(s+k)}M_{ij}z_{ij}\\ & \quad + y^{\frac 12}\sum_{k\geq 0} \frac{(4\pi)^{k}}{k!}\sum_{c\geq 1}\frac{1}{c^{2(s+k)}}\sum_{\substack{d=1\\\gcd(c,d)=1}}^{c}\frac{2\pi^{s+k} M_{ij}}{\Gamma(s+k)} \\ & \quad \times \sum_{\substack{u\in\mathbf{Z}\\ u\neq e_{i}-e_{j}}} {\mathbf{e}}((x+\tfrac dc)(u+e_{j}-e_{i}))\left\lvert u+e_{j}-e_{i}\right\rvert^{s+k-\frac 12}K_{s+k-\frac 12}(2\pi \left\lvert u+e_{j}-e_{i}\right\rvert y). \end{align*} $$

Rearranging terms yields the following Fourier expansion for the $(i,j)$ -entry of $H(h,\tau ,s)$ :

(5.2) $$ \begin{align} \nonumber H_{ij} &= e^{4\pi e_{i}y}y^{s}h_{ij}+\pi^{\frac 12}y^{1-s}\frac{\Gamma(s-\frac 12)}{\Gamma(s)}\nonumber\\&\quad \sum_{c\geq 1}\frac{1}{c^{2s}}\left(\sum_{\substack{d=1\\\gcd(c,d)=1}}^{c}\rho\left(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right)^{t}{}_{1}F_{1}(s-\tfrac 12,s,\tfrac{4\pi L}{c^{2}y})h\overline{\rho\left(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right)}\right)_{ij}z_{ij}\nonumber\\ &\quad +2y^{\frac 12}\pi^{s}\sum_{\substack{u\in\mathbf{Z}\\ u\neq e_{i}-e_{j}}}\left\lvert u+e_{j}-e_{i}\right\rvert^{s-\frac 12}{\mathbf{e}}((u+e_{j}-e_{i})x)\nonumber\\&\quad \times \sum_{c\geq 1}\frac{1}{c^{2s}}\sum_{\substack{d=1\\\gcd(c,d)=1}}^{c}{\mathbf{e}}((u+e_{j}-e_{i})\tfrac dc)\\ \nonumber & \left(\rho\left(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right)^{t} \sum_{k\geq 0}\left(\frac{K_{s+k-\frac 12}(2\pi \left\lvert u+e_{j}-e_{i}\right\rvert y)}{k!\Gamma(s+k)}\left(\frac{4\pi^{2}\left\lvert u+e_{j}-e_{i}\right\rvert L}{c^{2}}\right)^{k}\right)h\overline{\rho\left(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right)}\right)_{ij}. \end{align} $$

Note that the convergence of the sum on k is deduced as in equation (4.6) in the proof of Theorem 4.1, which uses the expansion of $K_{s+k-\frac 12}$ near zero when k is large.

The first line in equation (5.2) gives the constant term of $H(\tau ,s)$ , which in particular is independent of x, unlike for the inclusion representation in Section 4. Inspired by the discussion in Section 4, it is natural to consider the rightmost pole of this constant term (if such a pole exists!) and its corresponding residue. We restrict to the case where $e_{i}-e_{j}\not \in \mathbf {Z}$ unless $i=j$ , a familiar condition from the study of ordinary differential equations. This condition implies that $z_{ij} = \delta _{ij}$ . Since the term $e^{4\pi Ly}y^{s}h$ is entire as a function of s, we are interested in the diagonal terms of the matrix-valued function

$$ \begin{align*} C(\tau,s)=\pi^{\frac 12}y^{1-s}\frac{\Gamma(s-\frac 12)}{\Gamma(s)}\sum_{c\geq 1}\frac{1}{c^{2s}}\left(\sum_{\substack{d=1\\\gcd(c,d)=1}}^{c}\rho\left(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right)^{t}{}_{1}F_{1}(s-\tfrac 12,s,\tfrac{4\pi L}{c^{2}y})h\overline{\rho\left(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right)}\right). \end{align*} $$

Unfortunately, the Kloosterman sums appearing above are somewhat unwieldy to handle via a direct approach in general. However, basic estimates show that the rightmost pole arises from the constant term of ${}_{1}F_{1}(s-\tfrac 12,s,\tfrac {4\pi L}{c^{2}y})$ in its Taylor expansion in $4\pi L/c^{2}y$ , so that one is really interested in the analytic properties of the diagonal terms of

$$ \begin{align*} C_{0}(\tau,s) = \pi^{\frac 12}y^{1-s}\frac{\Gamma(s-\frac 12)}{\Gamma(s)}\sum_{c\geq 1}\frac{1}{c^{2s}}\left(\sum_{\substack{d=1\\\gcd(c,d)=1}}^{c}\rho\left(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right)^{t}h\overline{\rho\left(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right)}\right). \end{align*} $$

This expression is a little more manageable. For example, suppose that $h=I_{d}$ and $\rho $ is unitary, so that $\rho (\gamma )^{t}\overline {\rho (\gamma )}=I_{d}$ . In this case,

$$ \begin{align*} C_{0}(\tau,s) = \pi^{\frac 12}y^{1-s}\frac{\Gamma(s-\frac 12)}{\Gamma(s)}\sum_{c\geq 1}\frac{\phi(c)}{c^{2s}}I_{d}=\pi^{\frac 12}y^{1-s}\frac{\Gamma(s-\frac 12)\zeta(2s-1)}{\Gamma(s)\zeta(2s)}I_{d}. \end{align*} $$

It follows that the rightmost pole occurs at $s=1$ , and the residue is a multiple of the identity matrix. This shows that when $\rho $ is unitary and $h=I_{d}$ , the rightmost pole of $H(\tau ,s)$ occurs at $s=1$ , and the residue is a multiple of the Petersson inner product, which is a harmonic metric for trivial reasons. It is unclear how general this phenomenon is. In the next section, we discuss an example where $\rho (T)$ is unitary but $\rho $ is not unitarizable, to indicate some of the difficulties of analyzing expressions like $C_{0}(\tau ,s)$ in more general circumstances.
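The Euler-product identity $\sum _{c\geq 1}\phi (c)c^{-2s}=\zeta (2s-1)/\zeta (2s)$ used in the display above is classical, and it can be checked numerically. The following sketch (illustrative only; all names are ours) compares a partial sum at $s=2$ against truncated Dirichlet series for the zeta values:

```python
from math import gcd

def totient(n):
    # Euler phi by direct count; adequate for the small range used here.
    return sum(1 for d in range(1, n + 1) if gcd(n, d) == 1)

def zeta(s, terms=100000):
    # Truncated Dirichlet series for the Riemann zeta function.
    return sum(n ** (-s) for n in range(1, terms + 1))

s = 2
# Partial sum of phi(c)/c^{2s}; the tail beyond c = 2000 is O(1/c^3), so
# the truncation error is well below the tolerance used below.
lhs = sum(totient(c) / c ** (2 * s) for c in range(1, 2001))
rhs = zeta(2 * s - 1) / zeta(2 * s)   # zeta(3)/zeta(4)
assert abs(lhs - rhs) < 1e-5
```

The same identity holds for any $\operatorname {Re}(s)>1$; the choice $s=2$ is merely a convenient test point.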

Remark 5.2 Since L is diagonal, the two matrix sums

$$ \begin{align*} &\sum_{\substack{d=1\\\gcd(c,d)=1}}^{c}\rho\left(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right)^{t}h\overline{\rho\left(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right)},&& \sum_{\substack{d=1\\\gcd(c,d)=1}}^{c}{\mathbf{e}}(-L\tfrac dc)\rho\left(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right)^{t}h\overline{\rho\left(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right)}{\mathbf{e}}(L\tfrac dc) \end{align*} $$

have the same diagonal entries. The matrix on the right, however, depends only on $d \mod {c}$ (this observation uses that L is real), whereas the matrix expression on the left undergoes a monodromy transformation when d is changed within its residue class mod c. Thus, the right-hand sum involving ${\mathbf {e}}(\pm L\tfrac dc)$ could be used in the definition of $C_{0}(\tau ,s)$ , giving a more natural expression. It would also give an expression for the constant term of $H(\tau ,s)$ that is more uniform with the higher Fourier coefficients, which already incorporate such exponential factors. In the following section, we consistently work with Kloosterman sums that include these additional exponential factors.

6 An indecomposable family

Let $\chi $ be the character of the modular form $\eta ^{4}$ , where $\eta $ is the Dedekind eta function. For $z \in \operatorname {\mathrm {\mathcal {H}}}$ and $\alpha \in \mathbf {C}$ , define a $\mathbf {C}$ -valued group cocycle on $\Gamma $ by the integral

$$ \begin{align*} \kappa(\gamma) = \int_{z}^{\gamma z}\alpha \eta^{4}(\tau)d\tau. \end{align*} $$

This satisfies the cocycle identity

$$ \begin{align*} \kappa(\gamma_{1}\gamma_{2}) = \chi(\gamma_{1})\kappa(\gamma_{2})+\kappa(\gamma_{1}), \end{align*} $$

for all $\gamma _{1},\gamma _{2} \in \Gamma $ . Changing z adjusts $\kappa $ by a coboundary. We can use this cocycle to define a representation of $\Gamma $ that contains a nontrivial subrepresentation but is not completely reducible, that is, it does not decompose as a direct sum of irreducible representations:

$$ \begin{align*} \rho(\gamma) = \left(\begin{matrix}\chi(\gamma)&\kappa(\gamma)\\0&1\end{matrix}\right). \end{align*} $$
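Indeed, the cocycle identity is precisely the statement that $\rho $ is a homomorphism:

$$ \begin{align*} \rho(\gamma_{1})\rho(\gamma_{2}) = \left(\begin{matrix}\chi(\gamma_{1})\chi(\gamma_{2})&\chi(\gamma_{1})\kappa(\gamma_{2})+\kappa(\gamma_{1})\\0&1\end{matrix}\right) = \rho(\gamma_{1}\gamma_{2}). \end{align*} $$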

This defines a family of representations in the parameters z and $\alpha $ defining $\kappa $ . If $\zeta = e^{2\pi i/6}$ , then

$$ \begin{align*} \rho(T) &= \left(\begin{matrix}\zeta&\kappa(T)\\0&1\end{matrix}\right), & \rho(S) &= \left(\begin{matrix}-1&\kappa(S)\\0&1\end{matrix}\right), & \rho(-1) &= \left(\begin{matrix}1&0\\0&1\end{matrix}\right). \end{align*} $$

Observe that $\rho (T)^{6}=I$ , so that $\rho (T)$ is diagonalizable. However, for generic values of $\alpha $ and z, it is not possible to diagonalize $\rho (T)$ and $\rho (S)$ simultaneously. From this, one sees that $\rho $ contains a nontrivial subrepresentation but is not completely reducible for generic values of $\alpha $ and z.

Consider the representation of $\Gamma $ on the real vector space $\operatorname {\mathrm {Herm}}_{2}$ defined by

$$ \begin{align*} M\cdot \gamma = \rho(\gamma)^{t}M\overline{\rho(\gamma)}. \end{align*} $$

Let $U=\operatorname {\mathrm {Herm}}_{2}^{\Gamma }$ denote the invariants for this action. A simple computation shows that generically U is spanned by $\left (\begin {smallmatrix}0&0\\0&1\end {smallmatrix}\right )$ . In particular, U does not contain any positive definite matrices, so that there does not exist a harmonic metric for $\rho $ that is constant as a function of $\tau $ for generic choices of $\kappa $ .
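The computation of U can be sketched symbolically: impose invariance under $\rho (T)$ and $\rho (S)$ and solve the resulting real-linear system. The sample values of $\kappa (T)$ and $\kappa (S)$ below are hypothetical stand-ins for a generic specialization (they are our choices, not values from the paper); the nullspace then comes out spanned by $E_{22}$, matching the claim above.

```python
import sympy as sp

zeta = sp.Rational(1, 2) + sp.sqrt(3) * sp.I / 2    # e^{2 pi i/6}
# Hypothetical generic sample values of kappa(T) and kappa(S):
kT = sp.Rational(1, 3) + 2 * sp.I
kS = sp.Rational(7, 5) - sp.I

rho_T = sp.Matrix([[zeta, kT], [0, 1]])
rho_S = sp.Matrix([[-1, kS], [0, 1]])

# A Hermitian matrix with four real unknowns p, q, r, s.
p, q, r, s = sp.symbols('p q r s', real=True)
M = sp.Matrix([[p, q + sp.I * r], [q - sp.I * r, s]])

# Invariance M.gamma = M under the two generators, split into real equations.
eqs = []
for g in (rho_T, rho_S):
    D = sp.expand(g.T * M * g.conjugate() - M)
    eqs += [sp.re(e) for e in D] + [sp.im(e) for e in D]

Amat, _ = sp.linear_eq_to_matrix(eqs, [p, q, r, s])
basis = Amat.nullspace()
# The invariants force p = q = r = 0, leaving s free: U is spanned by E_22.
assert len(basis) == 1 and basis[0] == sp.Matrix([0, 0, 0, 1])
```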

To apply the material of Section 5 in our study of metrics for this representation, it will be necessary to change basis so that the T-matrix is diagonal. If we set $P=\left (\begin {smallmatrix}0&\zeta \\1&-\zeta \kappa (T)\end {smallmatrix}\right )$ and $\psi = P\rho P^{-1}$ , then one checks that

$$ \begin{align*} \psi(T) &= \left(\begin{matrix}1&0\\0&\zeta\end{matrix}\right), &\psi(S) &= \left(\begin{matrix}1&0\\ (1-\zeta)\kappa(S)-2\kappa(T)&-1\end{matrix}\right), \end{align*} $$

and more generally

$$ \begin{align*} \psi(\gamma) = \left(\begin{matrix}1&0\\ (1-\zeta)\kappa(\gamma)+(\chi(\gamma)-1)\kappa(T)&\chi(\gamma)\end{matrix}\right). \end{align*} $$
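One can check symbolically that these formulas define a representation for every value of the lower-left parameter, using the presentation $\operatorname {PSL}_{2}(\mathbf {Z}) = \langle S, T \mid S^{2}=(ST)^{3}=1\rangle $. The following sketch is illustrative (the symbol $\mu $ is our name for the lower-left entry of $\psi (S)$):

```python
import sympy as sp

mu = sp.symbols('mu')  # lower-left entry of psi(S), an arbitrary parameter
zeta = sp.Rational(1, 2) + sp.sqrt(3) * sp.I / 2   # e^{2 pi i/6}

S = sp.Matrix([[1, 0], [mu, -1]])
T = sp.Matrix([[1, 0], [0, zeta]])

# The defining relations of PSL(2,Z) hold identically in mu, because
# 1 - zeta + zeta^2 = 0 kills the lower-left entry of (ST)^3:
assert sp.simplify(S ** 2) == sp.eye(2)
assert sp.expand((S * T) ** 3) == sp.eye(2)
```

In particular, every choice of $\mu $ (equivalently, of $\kappa $) yields a genuine two-dimensional representation of the modular group.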

The identity

(6.1) $$ \begin{align} \frac{d\kappa}{dz}(\gamma) = (\chi(\gamma)-1)\alpha\eta^{4}(z) \end{align} $$

implies that $\frac {d\psi }{dz}=0$ , so that the change of basis has made $\psi $ independent of z. It is thus a one-parameter family of representations determined by the choice of $\alpha \in \mathbf {C}$ in the definition of $\kappa $ . The specializations of this family are not completely reducible unless $\alpha =0$ .
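To make the vanishing explicit: since $\chi (T)=\zeta $, equation (6.1) gives $\frac {d\kappa }{dz}(T)=(\zeta -1)\alpha \eta ^{4}(z)$, and so for the lower-left entry of $\psi (\gamma )$,

$$ \begin{align*} \frac{d}{dz}\left[(1-\zeta)\kappa(\gamma)+(\chi(\gamma)-1)\kappa(T)\right] = (1-\zeta)(\chi(\gamma)-1)\alpha\eta^{4}(z)+(\chi(\gamma)-1)(\zeta-1)\alpha\eta^{4}(z) = 0. \end{align*} $$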

Turning to the associated Eisenstein metrics, we take for our exponent matrix $L = \left (\begin {smallmatrix}0&0\\0&\frac 16\end {smallmatrix}\right )$ . Observe that since $\psi (T)$ is diagonal with distinct eigenvalues, $\operatorname {\mathrm {Herm}}_{2}^{\psi (T)}$ consists of real diagonal matrices. Therefore, $\operatorname {\mathrm {Pos}}(\psi )$ consists of positive real diagonal matrices, and since the construction depends on the choice of $h \in \operatorname {\mathrm {Pos}}(\psi )$ only up to scaling, we can write

$$ \begin{align*}h = \left(\begin{matrix}1&0\\0&A\end{matrix}\right),\end{align*} $$

for $A \in \mathbf {R}_{>0}$ . With these choices of parameters, we write $H(\tau ,s) = H(\psi ,L,h,\tau ,s)$ , whose Fourier expansion is given by equation (5.2).

As usual, much of the difficulty in studying the Fourier expansion of $H(\tau ,s)$ lies in understanding the Kloosterman sums (see Footnote 1) and their associated generating series:

$$ \begin{align*} \operatorname{\mathrm{Kl}}(c) :=& \sum_{\substack{d=1\\\gcd(c,d)=1}}^{c}{\mathbf{e}}(-L\tfrac{d}{c})\psi\left(\begin{matrix}a&b\\c&d\end{matrix}\right)^{t}\left(\begin{matrix}1&0\\0&A\end{matrix}\right)\overline{\psi\left(\begin{matrix}a&b\\c&d\end{matrix}\right)}{\mathbf{e}}(L\tfrac{d}{c}),\\ D(s) :=& \sum_{c\geq 1}\frac{\operatorname{\mathrm{Kl}}(c)}{c^{s}}. \end{align*} $$

These families of Kloosterman sums admit a second-order Taylor expansion centered at the reducible specializations of $\psi $ , which satisfy $\kappa (S)=2\zeta \kappa (T)$ (a condition we have seen is equivalent to $\alpha =0$ , that is, to $\kappa = 0$ ).

Proposition 6.1 There exist sequences $a_{c}\in \mathbf {Z}_{\geq 0}$ and $b_{c}\in \mathbf {Z}[e^{2\pi i/6c}]$ , independent of $\kappa $ , such that

$$ \begin{align*} \operatorname{\mathrm{Kl}}(c) = \left(\begin{matrix}\kappa(S)-2\zeta\kappa(T)&0\\0&1\end{matrix}\right)\left(\begin{matrix}a_{c}&b_{c}\\\overline{b_{c}}&0\end{matrix}\right)\left(\begin{matrix}\overline{\kappa(S)-2\zeta\kappa(T)}&0\\0&1\end{matrix}\right)A+\phi(c)\left(\begin{matrix}1&0\\0&A\end{matrix}\right), \end{align*} $$

for all $c\geq 1$ .

Proof Write $\gamma = \left (\begin {smallmatrix}a&b\\c&d\end {smallmatrix}\right ) \in \Gamma $ , and observe that the lower-triangular forms of $\psi (T)$ and $\psi (S)$ show that after writing $\gamma $ as a word in S and T, we have

$$ \begin{align*} \psi(\gamma) = \left(\begin{matrix}1&0\\\lambda a(\gamma)&\chi(\gamma)\end{matrix}\right), \end{align*} $$

where $\lambda = \kappa (S)-2\zeta \kappa (T)$ and $a(\gamma ) \in \mathbf {Z}[\zeta ]$ . Furthermore, $a(\gamma )$ is independent of $\kappa $ , as it depends only on how one writes $\gamma $ as a word in S and T. Equivalently, this independence can be seen by writing $a(\gamma )$ as a ratio of linear combinations of values of $\kappa $ : differentiating via equation (6.1) shows that $a(\gamma )$ is independent of z, and any dependence on $\alpha $ cancels in the ratio defining $a(\gamma )$ , so that $a(\gamma )$ is indeed entirely independent of the choice of $\kappa $ .

If we write $\omega = e^{2\pi i /6c}$ , so that conjugation by ${\mathbf {e}}(\pm L\tfrac dc)$ scales the off-diagonal entries by $\omega ^{\pm d}$ , then the general term in the sum defining $\operatorname {\mathrm {Kl}}(c)$ takes the form

$$ \begin{align*} &{\mathbf{e}}(-L\tfrac{d}{c})\psi\left(\begin{matrix}a&b\\c&d\end{matrix}\right)^{t}\left(\begin{matrix}1&0\\0&A\end{matrix}\right)\overline{\psi\left(\begin{matrix}a&b\\c&d\end{matrix}\right)}{\mathbf{e}}(L\tfrac{d}{c})\\ =&\left(\begin{matrix}1&0\\0&\omega^{-d}\end{matrix}\right)\left(\begin{matrix}1&\lambda a(\gamma)\\0&\chi(\gamma)\end{matrix}\right)\left(\begin{matrix}1&0\\0&A\end{matrix}\right)\left(\begin{matrix}1&0\\\overline{\lambda a(\gamma)}&\overline{\chi(\gamma)}\end{matrix}\right)\left(\begin{matrix}1&0\\0&\omega^{d}\end{matrix}\right)\\ =&\left(\begin{matrix}1&A\lambda a(\gamma)\\0&A\omega^{-d}\chi(\gamma)\end{matrix}\right)\left(\begin{matrix}1&0\\\overline{\lambda a(\gamma)}&\overline{\chi(\gamma)}\omega^{d}\end{matrix}\right)\\ =&\left(\begin{matrix}1+A\left\lvert \lambda a(\gamma)\right\rvert^{2}&A\lambda a(\gamma)\overline{\chi(\gamma)}\omega^{d}\\A\overline{\lambda a(\gamma)}\chi(\gamma)\omega^{-d}&A\end{matrix}\right). \end{align*} $$

From this expression, we see that the Proposition holds with

$$ \begin{align*} a_{c} &= \sum_{\substack{d=1\\\gcd(c,d)=1}}^{c} \left\lvert a(\gamma)\right\rvert^{2}, & b_{c}&= \sum_{\substack{d=1\\\gcd(c,d)=1}}^{c}a(\gamma)\overline{\chi(\gamma)}\omega^{d}. \end{align*} $$
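To make the sequences $a_{c}$ and $b_{c}$ concrete, the following sketch (an illustration under our own conventions, not the computation used for Table 1) evaluates $\psi $ numerically by decomposing matrices into words in S and T via the Euclidean algorithm, normalizing $\lambda = \kappa (S)-2\zeta \kappa (T)$ to 1 so that the lower-left entry of $\psi (\gamma )$ is $a(\gamma )$ itself:

```python
import cmath
from math import gcd

zeta = cmath.exp(2j * cmath.pi / 6)

# psi in the diagonalized basis, with lambda = kappa(S) - 2*zeta*kappa(T)
# normalized to 1; the lower-left entry of psi(S) is then (1-zeta)*lambda.
PSI_S = [[1, 0], [1 - zeta, -1]]

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def psi(a, b, c, d):
    """Evaluate psi on [[a,b],[c,d]] in SL(2,Z) via a word in S and T."""
    qs = []
    while c != 0:                          # reduce by T^{-q}, then S^{-1}
        q = a // c
        a, b = a - q * c, b - q * d
        a, b, c, d = c, d, -a, -b
        qs.append(q)
    out = [[1, 0], [0, zeta ** (a * b)]]   # remaining matrix is +/- T^(a*b)
    for q in reversed(qs):                 # gamma = T^{q1} S ... T^{qm} S T^n
        out = mat_mul(mat_mul([[1, 0], [0, zeta ** q]], PSI_S), out)
    return out

def kloosterman(c, A=1.0):
    """The numbers a_c and b_c of Proposition 6.1, with omega = e^{2 pi i/6c}."""
    omega = cmath.exp(2j * cmath.pi / (6 * c))
    a_c, b_c = 0.0, 0.0
    for d in range(1, c + 1):
        if gcd(c, d) == 1:
            a = pow(d, -1, c) if c > 1 else 0   # choose a with a*d = 1 (mod c)
            P = psi(a, (a * d - 1) // c, c, d)
            a_gam, chi = P[1][0], P[1][1]       # a(gamma) and chi(gamma)
            a_c += abs(a_gam) ** 2
            b_c += a_gam * chi.conjugate() * omega ** d
    return a_c, b_c
```

For instance, $a_{1}=1$: the single term comes from $\gamma = \left (\begin {smallmatrix}1&0\\1&1\end {smallmatrix}\right ) = TST$, for which $a(\gamma ) = \zeta -\zeta ^{2} = 1$. A sketch like this should reproduce the ratios $a_{c}/\phi (c)$ and $\left \lvert b_{c}\right \rvert /\phi (c)$ plotted in Figure 1, up to floating-point error.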

Values of $a_{c}$ and $b_{c}$ from Proposition 6.1 are listed in Table 1, and plots of the values $a_{c}/\phi (c)$ and $\left \lvert b_{c}\right \rvert /\phi (c)$ are contained in Figure 1. Polynomial growth estimates can be obtained for both $a_{c}$ and $b_{c}$ using Eichler length estimates as in Corollary 3.5 of [20], although establishing an exact abscissa of convergence for $D(s)$ , and the computation of the corresponding residue, would require a more detailed analysis. Completion of such an analysis would likely enable one to establish the analytic continuation of $H(\tau ,s)$ around its rightmost pole and to compute the corresponding residue. The analysis of Section 5 shows that it is really the diagonal terms of $D(s)$ that intervene in this residue computation, so that the sequence $a_{c}$ is the most important one for this analysis. We leave this computation, and the question of whether the resulting residue is a harmonic metric for these nonunitary representations, as an open problem for future investigation.
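The chain of matrix identities in the proof of Proposition 6.1 can also be verified symbolically. The sketch below is ours (the unit-modulus parametrization of $\chi (\gamma )$ and $\omega ^{d}$ is an assumption made for convenience):

```python
import sympy as sp

A, t, u = sp.symbols('A t u', real=True)
x, y = sp.symbols('x y', real=True)
la = x + sp.I * y                # lambda * a(gamma), a general complex number
chi = sp.exp(sp.I * t)           # chi(gamma), of modulus one
w = sp.exp(sp.I * u)             # omega^d, of modulus one

psi_g = sp.Matrix([[1, 0], [la, chi]])
h = sp.Matrix([[1, 0], [0, A]])
E = sp.Matrix([[1, 0], [0, w]])  # e(L d/c) acting on the off-diagonal entries

# The general term of Kl(c), conjugated by the exponential factors:
lhs = E.inv() * psi_g.T * h * psi_g.conjugate() * E
# The closed form obtained at the end of the displayed computation:
rhs = sp.Matrix([[1 + A * (x**2 + y**2), A * la * sp.conjugate(chi) * w],
                 [A * sp.conjugate(la) * chi / w, A]])
assert sp.simplify(lhs - rhs) == sp.zeros(2)
```

In particular, the (2,2)-entry is always A, which is why the diagonal of $\operatorname {Kl}(c)$ contributes only through $a_{c}$ and $\phi (c)$.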

Footnotes

1 For simplicity, we focus on the constant term $u=0$ of the Fourier expansion; otherwise, we would need to incorporate an additional exponential factor into the Kloosterman sum.

References

[1] Bruggeman, R. W., Families of automorphic forms. Modern Birkhäuser Classics, Birkhäuser, Basel, 2010. Reprint of the 1994 edition.
[2] Candelori, L. and Franc, C., Vector-valued modular forms and the modular orbifold of elliptic curves. Int. J. Number Theory 13(2017), no. 1, 39–63.
[3] Carlson, J., Müller-Stach, S., and Peters, C., Period mappings and period domains. Cambridge Studies in Advanced Mathematics, 168, Cambridge University Press, Cambridge, 2017. Second edition of MR2012297.
[4] Corlette, K., Flat $G$ -bundles with canonical metrics. J. Differential Geom. 28(1988), no. 3, 361–382.
[5] Deitmar, A., Spectral theory for non-unitary twists. Hiroshima Math. J. 49(2019), no. 2, 235–249. arXiv:1703.03709
[6] Deitmar, A. and Monheim, F., A trace formula for non-unitary representations of a uniform lattice. Math. Z. 284(2016), nos. 3–4, 1199–1210.
[7] Deitmar, A. and Monheim, F., Eisenstein series with non-unitary twists. J. Korean Math. Soc. 55(2018), no. 3, 507–530.
[8] Donaldson, S. K., Twisted harmonic maps and the self-duality equations. Proc. Lond. Math. Soc. (3) 55(1987), no. 1, 127–131.
[9] Franc, C. and Mason, G., On the structure of modules of vector-valued modular forms. Ramanujan J. 47(2018), no. 1, 117–139.
[10] Franc, C. and Rayan, S., Nonabelian Hodge theory and vector valued modular forms. In: Vertex operator algebras, number theory and related topics, Contemporary Mathematics, 753, American Mathematical Society, Providence, RI, 2020, pp. 95–118.
[11] García-Raboso, A. and Rayan, S., Introduction to nonabelian Hodge theory: flat connections, Higgs bundles and complex variations of Hodge structure. In: Calabi–Yau varieties: arithmetic, geometry and physics, Fields Institute Monographs, 34, Fields Institute for Research in Mathematical Sciences, Toronto, ON, 2015, pp. 131–171.
[12] Goldman, W. M. and Xia, E. Z., Rank one Higgs bundles and representations of fundamental groups of Riemann surfaces. Mem. Amer. Math. Soc. 193(2008), no. 904, viii+69.
[13] Hitchin, N., Stable bundles and integrable systems. Duke Math. J. 54(1987), no. 1, 91–114.
[14] Hitchin, N. J., The self-duality equations on a Riemann surface. Proc. Lond. Math. Soc. (3) 55(1987), no. 1, 59–126.
[15] Iwaniec, H., Spectral methods of automorphic forms. 2nd ed., Graduate Studies in Mathematics, 53, American Mathematical Society, Providence, RI, 2002.
[16] Katz, N. M., Gauss sums, Kloosterman sums, and monodromy groups. Annals of Mathematics Studies, 116, Princeton University Press, Princeton, NJ, 1988.
[17] Knopp, M. and Mason, G., On vector-valued modular forms and their Fourier coefficients. Acta Arith. 110(2003), no. 2, 117–124.
[18] Knopp, M. and Mason, G., Vector-valued modular forms and Poincaré series. Illinois J. Math. 48(2004), no. 4, 1345–1366.
[19] Knopp, M. and Mason, G., Logarithmic vector-valued modular forms. Acta Arith. 147(2011), no. 3, 261–262.
[20] Knopp, M. and Mason, G., Logarithmic vector-valued modular forms and polynomial-growth estimates of their Fourier coefficients. Ramanujan J. 29(2012), nos. 1–3, 213–223.
[21] Marks, C. and Mason, G., Structure of the module of vector-valued modular forms. J. Lond. Math. Soc. (2) 82(2010), no. 1, 32–48.
[22] Mason, G., On the Fourier coefficients of 2-dimensional vector-valued modular forms. Proc. Amer. Math. Soc. 140(2012), no. 6, 1921–1930.
[23] Mehta, V. B. and Seshadri, C. S., Moduli of vector bundles on curves with parabolic structures. Math. Ann. 248(1980), no. 3, 205–239.
[24] Müller, W., A Selberg trace formula for non-unitary twists. Int. Math. Res. Not. IMRN 9(2011), 2068–2109.
[25] Narasimhan, M. S. and Seshadri, C. S., Holomorphic vector bundles on a compact Riemann surface. Math. Ann. 155(1964), 69–80.
[26] Ngô, B. C., Le lemme fondamental pour les algèbres de Lie. Publ. Math. Inst. Hautes Études Sci. 111(2010), 1–169.
[27] Simpson, C. T., Harmonic bundles on noncompact curves. J. Amer. Math. Soc. 3(1990), no. 3, 713–770.
[28] Simpson, C. T., Higgs bundles and local systems. Publ. Math. Inst. Hautes Études Sci. 75(1992), 5–95.
[29] Tuba, I. and Wenzl, H., Representations of the braid group $B_3$ and of $SL(2,\mathbf{Z})$ . Pacific J. Math. 197(2001), no. 2, 491–510.

Table 1: Values of the sequences $a_{c}$ and $\left \lvert b_{c}\right \rvert $ for small values of c.


Figure 1: Values of the sequences $\frac {a_{c}}{\phi (c)}$ and $\frac {\left \lvert b_{c}\right \rvert }{\phi (c)}$ in blue and red, respectively.