
On possible limit functions on a Fatou component in non-autonomous iteration

Published online by Cambridge University Press:  28 October 2024

MARK COMERFORD*
Affiliation:
Department of Mathematics, University of Rhode Island, 5 Lippitt Road, Room 102F, Kingston, RI 02881, USA
CHRISTOPHER STANISZEWSKI
Affiliation:
Department of Mathematics, Framingham State University, 100 State Street, Framingham, MA 01701, USA (e-mail: [email protected])

Abstract

The possibilities for limit functions on a Fatou component for the iteration of a single polynomial or rational function are well understood and quite restricted. In non-autonomous iteration, where one considers compositions of arbitrary polynomials with suitably bounded degrees and coefficients, one should observe a far greater range of behavior. We show this is indeed the case and we exhibit a bounded sequence of quadratic polynomials which has a bounded Fatou component on which one obtains as limit functions every member of the classical Schlicht family of normalized univalent functions on the unit disc. The proof is based on quasiconformal surgery and the use of high iterates of a quadratic polynomial with a Siegel disc which closely approximate the identity on compact subsets. Careful bookkeeping using the hyperbolic metric is required to control the errors in approximating the desired limit functions and ensure that these errors ultimately tend to zero.

Type
Original Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press

1. Introduction

This work is concerned with non-autonomous iteration of bounded sequences of polynomials, a relatively new area of complex dynamics. In classical complex dynamics, one studies the iteration of a (fixed) rational function on the Riemann sphere. Often in applications of dynamical systems, noise is introduced, and thus it is natural to consider a scheme of iteration where the function at each stage is allowed to vary. Here we study the situation where the functions being applied are polynomials with appropriate bounds on the degrees and coefficients.

Non-autonomous iteration, in our context, was first studied by Fornaess and Sibony [FS91] and also by Sester, Sumi, and others working in the closely related area of skew-products [Ses99, Sum00, Sum01, Sum06, Sum10]. Further work was done by Rainer Brück, Stefan Reitz, Matthias Büger [Brü00, Brü01, BBR99, Büg97], Michael Benedicks, and the first author [Com04, Com06, Com08, Com13b, MC13], among others.

One of the main topics of interest in non-autonomous iteration is discovering which results in classical complex dynamics generalize to the non-autonomous setting and which do not. For instance, the first author proved there is a generalization of the Sullivan straightening theorem [CG93, Com12, DH85], while Sullivan's non-wandering theorem [CG93, Sul85] no longer holds in this context [Com03]. One can thus construct polynomial sequences which either provide counterexamples or have interesting properties in their own right.

1.1. Non-autonomous iteration

Following [Com12, FS91], let $d \ge 2$ , $M \geq 0$ , $K \geq 1$ , and let $\{P_m \}_{m=1}^{\infty }$ be a sequence of polynomials where each $P_m(z) = a_{d_m,m}z^{d_m} + a_{d_m-1,m}z^{d_m-1} + \cdots + a_{1,m}z + a_{0,m}$ is a polynomial of degree $2 \leq d_m \leq d$ whose coefficients satisfy

$$ \begin{align*} 1/K \leq |a_{d_m,m}| \leq K,\quad m \geq 1,\quad |a_{k,m}| \leq M,\quad m \geq 1,\quad 0 \leq k \leq d_m -1. \end{align*} $$

Such sequences are called bounded sequences of polynomials or simply bounded sequences. For a constant $C\ge 1$ , we will say that a bounded sequence is C-bounded if all of the coefficients in the sequence are bounded above in absolute value by C while the leading coefficients are also bounded below in absolute value by ${1}/{C}$ .
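As an aside for readers who like to experiment numerically, the C-bounded condition is easy to check mechanically. The following is our own illustrative sketch (not part of the paper): polynomials are represented by coefficient lists in descending order, and `is_C_bounded` is a name we introduce here.

```python
def is_C_bounded(seq, C):
    """Check the C-bounded condition for a list of polynomials, each given
    as a coefficient list [a_d, ..., a_1, a_0] in descending order:
    every coefficient has modulus at most C, and the leading coefficient
    has modulus at least 1/C."""
    for coeffs in seq:
        if abs(coeffs[0]) < 1 / C:
            return False
        if any(abs(a) > C for a in coeffs):
            return False
    return True

# z^2 + 0.3z - 0.1 and 0.5z^2 + 2z are 2-bounded...
print(is_C_bounded([[1, 0.3, -0.1], [0.5, 2.0, 0.0]], 2))  # True
# ...but a leading coefficient of modulus 0.1 violates the lower bound 1/2.
print(is_C_bounded([[0.1, 1.0, 0.0]], 2))                  # False
```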

For each $m \ge 1$ , let $Q_m$ be the composition $P_m \circ \cdots \circ P_2 \circ P_1$ and, for each $0 \le m \le n$ , let $Q_{m,n}$ be the composition $P_n \circ \cdots \circ P_{m+2} \circ P_{m+1}$ (where we set $Q_{m,m} = \mathrm {Id}$ for each $m \ge 0$ ). Let the degrees of these compositions be $D_m$ and $D_{m,n}$ , respectively, so that $D_m = \prod _{i=1}^m d_i$ and $D_{m,n} = \prod _{i=m+1}^n d_i$ .
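The degree bookkeeping $D_m = \prod_{i=1}^m d_i$ can be sanity-checked numerically. Below is a small sketch of our own, with a made-up sequence of monic quadratics $P_m(z) = z^2 + c_m$; it verifies that composing three quadratics yields degree $2^3 = 8$ and that pointwise composition agrees with symbolic composition.

```python
import numpy as np

# A made-up bounded sequence of monic quadratics P_m(z) = z^2 + c_m
# (numpy wants coefficients in ascending order: c_m + 0*z + 1*z^2).
cs = [0.1, -0.25, 0.2]
polys = [np.polynomial.Polynomial([c, 0.0, 1.0]) for c in cs]

def Q(m, n, z):
    """Apply Q_{m,n} = P_n o ... o P_{m+1} to z, with Q_{m,m} = Id."""
    for P in polys[m:n]:
        z = P(z)
    return z

# Composing the polynomials symbolically: degrees multiply, D_3 = 2*2*2 = 8.
comp = polys[0]
for P in polys[1:]:
    comp = P(comp)  # substituting one Polynomial into another composes them
print(comp.degree())   # 8
print(Q(0, 3, 0.0))    # same value as comp(0.0)
```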

For each $m \ge 0$ , define the mth iterated Fatou set or simply the Fatou set at time m, ${\mathcal F}_m$ , by

$$ \begin{align*} {\mathcal F}_m = \{z \in \hat{\mathbb C} : \{Q_{m,n}\}_{n=m}^{\infty} \;\mbox{is a normal family on some neighborhood of}\; z \} \end{align*} $$

where we take our neighborhoods with respect to the spherical topology on $\hat {\mathbb C}$ . Components of ${\mathcal F}_m$ are referred to as Fatou components at time m and we also define the mth iterated Julia set or simply the Julia set at time m, ${\mathcal J}_m$ , to be the complement $\hat {\mathbb C} \setminus {\mathcal F}_m$ .

It is easy to show that these iterated Fatou and Julia sets are completely invariant in the following sense.

Theorem 1.1. For any $0\le m \le n$ , $Q_{m,n}({\mathcal J}_m) = {\mathcal J}_n$ and $Q_{m,n}({\mathcal F}_m) = {\mathcal F}_n$ , with components of ${\mathcal F}_m$ being mapped surjectively onto those of ${\mathcal F}_n$ by $Q_{m,n}$ .

It is easy to see that, given bounds d, K, M as above, we can find some radius R depending only on d, K, M so that, for any sequence $\{P_m \}_{m=1}^{\infty }$ with these bounds and any $m \ge 0$ ,

$$ \begin{align*} |Q_{m,n}(z)| \to \infty \quad \mbox{as } n \to \infty,\quad |z|> R, \end{align*} $$

which shows in particular that, as for classical polynomial Julia sets, there will be a basin of infinity at time m, denoted ${\mathcal A}_{\infty , m}$ , on which all points escape locally uniformly to infinity under iteration. Such a radius will be called an escape radius for the bounds d, K, M. Note that the maximum principle shows that, just as in the classical case (see [CG93]), there can be only one component on which $\infty $ is a limit function, and so the sets ${\mathcal A}_{\infty , m}$ are completely invariant in the sense given in Theorem 1.1.
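The escape radius can be illustrated numerically. Here is a hedged sketch of our own with a made-up bounded sequence of monic quadratics $z^2 + c_m$ with $|\mathrm{Re}\,c_m|, |\mathrm{Im}\,c_m| \le 1/4$; for this family, $R = 2$ is one admissible escape radius, since $|z| > 2$ gives $|z^2 + c| \ge |z|^2 - 1/2 > |z|$ and the modulus runs away under composition.

```python
import random

random.seed(1)
M = 0.25
# A made-up bounded sequence of monic quadratics z^2 + c_m with
# |Re c_m|, |Im c_m| <= M; here d = 2, K = 1, and R = 2 is an escape radius.
cs = [complex(random.uniform(-M, M), random.uniform(-M, M)) for _ in range(60)]

def orbit(z, steps):
    """Compute Q_steps(z) = (P_steps o ... o P_1)(z) for this sequence."""
    for c in cs[:steps]:
        z = z * z + c
    return z

print(abs(orbit(2.1, 4)))   # already large after a few steps
print(abs(orbit(2.1, 8)))   # astronomically large: the orbit escapes
```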

The complement of ${\mathcal A}_{\infty , m}$ is called the filled Julia set at time m for the sequence $\{P_m \}_{m=1}^{\infty }$ and is denoted by ${\mathcal K}_m$ . The same argument using Montel's theorem as in the classical case then shows that $\partial {\mathcal K}_m = {\mathcal J}_m$ (see [CG93]). When $m=0$ , we will refer to the Fatou set, Julia set, filled Julia set, and basin of infinity for a bounded polynomial sequence $\{P_m \}_{m=1}^{\infty }$ as simply the Fatou set, Julia set, filled Julia set, and basin of infinity (respectively) for $\{P_m \}_{m=1}^{\infty }$ and denote them by ${\mathcal F}$ , ${\mathcal J}$ , ${\mathcal K}$ , and ${\mathcal A}_{\infty }$ .

In view of the existence of the escape radius above, we have the following obvious result which we will use in proving our main result (see Theorem 1.3 below).

Proposition 1.2. If V is an open connected set for which there exists a subsequence $\{m_k\}_{k=1}^{\infty }$ such that the sequence of forward images $\{Q_{m_k}(V)\}_{k=1}^{\infty }$ is uniformly bounded, then V is contained in a bounded Fatou component for $\{P_m \}_{m=1}^{\infty }$ .

1.2. The Schlicht class

The Schlicht class of functions, commonly denoted by $\mathcal {S}$ , is the set of univalent functions defined on the unit disk such that, for all $f \in {\mathcal S}$ , we have $f(0)=0$ and $f'(0)=1$ . This is a classical class of functions for which many results are known. A common and useful technique is to use scaling or conformal mapping to apply results for $\mathcal {S}$ to arbitrary univalent functions defined on arbitrary domains (see for example Theorem 1.4).

1.2.1. Statement of the main theorem

We now give the statement of the main result of this paper.

Theorem 1.3. There exists a bounded sequence of quadratic polynomials $\{P_m \}_{m=1}^{\infty }$ and a bounded Fatou component $V \subset {\mathbb D}$ for this sequence containing $0$ such that for all $f\in \mathcal {S}$ , there exists a subsequence of iterates $\{Q_{m_k}\}_{k=1}^{\infty }$ which converges locally uniformly to f on V.

The strength of this statement is that every member of ${\mathcal S}$ is a limit function on the same Fatou component for a single polynomial sequence.

The proof relies on a scaled version of the polynomial $P_{\lambda }(z)= \lambda z(1-z)$ , where $\lambda = e^{2\pi i (\sqrt {5}-1)/2}$ . As $P_{\lambda }$ is conjugate to an irrational rotation on its Siegel disc about $0$ , which we denote by $U_{\lambda }$ , we may find a subsequence of iterates which converges uniformly to the identity on compact subsets of $U_{\lambda }$ . We will rescale $P_{\lambda }$ so that the filled Julia set for the scaled version P of $P_{\lambda }$ is contained in a small Euclidean disc about $0$ . This is done so that, for any $f \in {\mathcal S}$ , we can use the distortion theorems to control $|f'|$ on a relatively large hyperbolic disc inside U, the scaled version of the Siegel disc $U_{\lambda }$ (see Figure 1).
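The mechanism behind the near-identity iterates can be glimpsed numerically at the level of the multiplier: since $\theta = (\sqrt{5}-1)/2$ has the Fibonacci numbers as denominators of its best rational approximations, $\lambda^{F_k} \to 1$ along that subsequence. The sketch below is ours and concerns only the linearized rotation, not the full statement about compact subsets of $U_{\lambda}$.

```python
import cmath
import math

theta = (math.sqrt(5) - 1) / 2           # golden-mean rotation number
lam = cmath.exp(2j * math.pi * theta)    # the multiplier lambda

# Fibonacci numbers are the denominators of the best rational
# approximations to theta, so lam**F_k returns close to 1.
F = [1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
errs = [abs(lam ** n - 1) for n in F]
print(errs[0], errs[-1])   # the error shrinks along the subsequence
```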

The initial inspiration for this proof came from the theory of Löwner chains (see e.g. [CDMG10, Dur83]), particularly the idea that a univalent function can be expressed as a composition of many functions, each close to the identity. Given our remarks above about iterates of $P_{\lambda }$ which converge to the identity locally uniformly on $U_{\lambda }$ , this suggested that we might approximate these near-identity univalent functions in some way with polynomials, and then compose these polynomials to obtain an approximation of the desired univalent function on some suitable subset of $U_{\lambda }$ , a principle which we like to summarize as ‘Do almost nothing and you can do almost anything’. As a matter of fact, there is now only one point in our proof where we make use of a Löwner chain, although it is not necessary to know this: the interested reader can find it in the ‘up’ section of the proof of Phase II (Lemma 5.17).

Figure 1. The filled Julia set ${\mathcal K}_{\lambda }$ for $P_{\lambda }$ with the Siegel disc highlighted.

The proof of Theorem 1.3 will follow from an inductive argument, and each step in the induction will be broken up into two phases.

  • Phase I: Construct a bounded polynomial composition which approximates a suitable net of functions from $\mathcal {S}$ on a subset of the unit disk.

  • Phase II: Construct a bounded polynomial composition which corrects the error of the previous Phase I composition to arbitrary accuracy on a slightly smaller subset.

A key idea in the proof is the fact that, since $\mathcal {S}$ is normal, it has a countable dense subset, and so we can approximate all of $\mathcal {S}$ to any desired accuracy on any compact subset of ${\mathbb D}$ by choosing a suitable finite net of functions (see Lemma 2.1). Great care is needed to control the error in the approximations and to ensure that the domain loss that necessarily occurs in each Phase II correction eventually stabilizes, so that we are left with a non-empty region on which the desired approximations hold. To be a little more specific, the induction hypothesis will consist of nine parts. The first three are bookkeeping estimates on the radii of the hyperbolic discs in U on which our approximations hold, which ensure that these radii do not become too small. The fourth hypothesis states that our polynomial sequence will be bounded, while the fifth allows us to compose inverse branches; this is necessary since the Phase II correction needs to ‘undo’ the error accumulated thus far, and so it is the inverse of this error which must be approximated. The sixth hypothesis says that the forward compositions will be univalent on a disc which is not too small and which eventually becomes part of the Fatou component V in the statement of Theorem 1.3. The seventh hypothesis concerns the accuracy of the Phase II correction of the error, while the eighth is a bound on the size of the error after Phase I which needs to be corrected by the next Phase II. The ninth and final hypothesis states that Phase I gives accurate approximations to all the functions in a suitably chosen net for ${\mathcal S}$ , obtained using Lemma 2.1.

To create our polynomial approximations, we use what we call the Polynomial Implementation Lemma (Lemma 3.9). Suppose we want to approximate a given univalent function f with a polynomial composition. Let ${\mathcal K}$ be the filled Julia set for P and let $\gamma $ , $\Gamma $ be two Jordan curves enclosing ${\mathcal K}$ with $\gamma $ lying inside $\Gamma $ . In addition, we require that f be defined inside and on a neighborhood of $\gamma $ , and that $f(\gamma )$ lie inside $\Gamma $ . We construct a homeomorphism of the sphere as follows: define it to be f inside $\gamma $ , the identity outside $\Gamma $ , and extend by interpolation to the region between $\gamma $ and $\Gamma $ . The homeomorphism can be made quasiconformal, with non-zero dilatation (possibly) only on the region between $\gamma $ and $\Gamma $ . If we then pull back with a high iterate of P which is close to the identity, the support of the dilatation becomes small, which will eventually allow us to conclude that, when we straighten, we get a polynomial composition that approximates f closely on a large compact subset of U. In Phase I (Lemma 4.8), we then apply this process repeatedly to create a polynomial composition which approximates a finite set of functions from ${\mathcal S}$ .

In Phase II (Lemma 5.17), we wish to correct the error from the Phase I composition. This error is defined on a subset of the Siegel disk, but to apply the Polynomial Implementation Lemma to create a composition which corrects the error, we need the error to be defined on a region which contains ${\mathcal K}$ .

To get around this, we conjugate so that the conjugated error is defined on a region which contains ${\mathcal K}$ . This introduces a further problem, namely that we must now cancel the conjugacy with polynomial compositions. A key element of the proof is viewing the expanding part of the conjugacy as a dilation in the correct conformal coordinates. An inevitable loss of domain occurs in using these conformal coordinates, but we are, in the end, able to create a Phase II composition which corrects the error of the Phase I approximation on a (slightly smaller) compact subset of U. What allows us to control the loss of domain is, first, that while some loss of domain is unavoidable, the accuracy of the Phase II correction is completely at our disposal. Second, one can show that the loss of domain tends to zero as the size of the error to be corrected tends to zero (Lemma 5.15 and also Claims 5.18, 5.19 in the proof of Phase II, Lemma 5.17). This eventually allows us to control the loss of domain. We then implement a fairly lengthy inductive argument to prove the theorem, getting better approximations to more functions in the Schlicht class at each stage of the induction, while ensuring that the region on which the approximation holds does not shrink to nothing.

Theorem 1.3 can be generalized somewhat to suitable normal families on arbitrary open sets.

Theorem 1.4. Let $\Omega \subset \mathbb C$ be open and let ${\mathcal N}$ be a locally bounded normal family of univalent functions on $\Omega $ all of whose limit functions are non-constant. Let $z_0 \in \Omega $ . Then there exists a bounded sequence $\{\tilde P_m \}_{m=1}^{\infty }$ of quadratic polynomials and a bounded Fatou component $W \subset \Omega $ for this sequence containing $z_0$ such that for all $f \in {\mathcal N}$ , there exists a subsequence of iterates $\{\tilde Q_{m_k}\}_{k=1}^{\infty }$ which converges locally uniformly to f on W.

1.3. Related results

In our proof, we will make extensive use of the hyperbolic metric. This has two main advantages: conformal invariance, and the fact that hyperbolic Riemann surfaces are infinitely large when measured using their hyperbolic metrics, which allows one to neatly characterize relatively compact subsets using the external hyperbolic radius (see Definition 2.2 below on the hyperbolic metric). An alternative approach is to try to do everything using the Euclidean metric. This requires, among other things, that in the analogue of the ‘up’ portion of the proof of our Phase II (Lemma 5.17), we ensure that the image of the Siegel disc under a dilation about the fixed point by a factor just larger than $1$ covers the Siegel disc; in other words, we need a Siegel disc which is star-shaped (about the fixed point). Fortunately, there is a result in the paper of Avila, Buff, and Chéritat [ABC04, Main Theorem] which guarantees the existence of such Siegel discs. This led the authors to investigate this approach extensively as a way to prove a version of Theorem 1.3 but, in practice, although it can probably be made to work, we found it to be at least as complicated as the proof outlined in the current manuscript.

Results on approximating a large class of analytic germs of diffeomorphisms were proved by Loray [Lor06], particularly Théorème 3.2.3 of that work, where he uses a pseudo-group induced by a non-solvable subgroup of diffeomorphisms to approximate all germs of conformal maps which send one prescribed point to another, with only very mild restrictions. Although we cannot rule out the possibility that these results could be used to obtain a version of our Theorems 1.3 and 1.4, this would be far from immediate. For example, pseudo-groups are closed under taking inverses (see [Lor06, Définition 3.4.1]). In our context, we can at best only approximate inverses, e.g. the suitable inverse branch of $P_{\lambda }$ on $U_{\lambda }$ which fixes $0$ . Moreover, one would need to be able to compose many such approximations while still ensuring that the resulting composition remains close to the ideal version and is defined on a set which is not too small. Thus, one would unavoidably require a complex bookkeeping scheme for tracking the sizes of errors and domains, which is a large part of what we need to concern ourselves with below.

Also worth mentioning is the work done by a number of authors in the area of polynomial skew-products. A seminal paper was the work of Astorg et al [ABD+16], who used an ingenious idea of Lyubich based on Lavaurs mappings to construct a polynomial skew-product with a wandering domain. It is worth noting that the more rigid nature of polynomial skew-products, combined with the fact that the Fatou set is considered as a subset of ${\mathbb C}^2$ , makes it more difficult to construct a wandering domain than in the context of non-autonomous iteration, where one has greater freedom in choosing the members of one's sequence of polynomials. Further examples of wandering domains were obtained in [AT, ATP23] for a different class of skew-products than originally considered in [ABD+16]. Other classification results were obtained where the possibility of wandering domains was ruled out if the skew-product satisfied certain additional conditions on the dynamics. See for example the work of Ji [Ji20, Ji23], the work of Ji and Shen [JS], as well as Lilov [Lil04], Peters and Raissy [PR19], Peters and Smit [PS18], and finally Peters and Vivas [PV16].

Finally, in [GT10], Gelfreich and Turaev show that an area-preserving two-dimensional map with an elliptic periodic point can be renormalized so that the renormalized iterates are dense in the set of all real-analytic symplectic maps of a two-dimensional disc. However, this is clearly not as close to what we do as the two other cases mentioned above.

2. Background

We will now discuss some background which will be instrumental in proving Theorem 1.3. Some of the more standard results we need can be found in the appendices—see Appendix A.1.

2.1. The hyperbolic metric

We first establish some notation for hyperbolic discs. Let R be a hyperbolic Riemann surface and let $\Delta _R(z,r)$ be the (open) hyperbolic disc in R centered at z of radius r. If the domain is obvious in context, we may simply denote this disc as $\Delta (z,r)$ . Lastly, let ${d}\rho _R$ represent the hyperbolic length element for R. If D is a domain in $\mathbb {C}$ and $z\in D$ , let $\delta _D(z)$ denote the Euclidean distance to $\partial D$ . We will be using the hyperbolic metric to measure both the accuracy of our approximations and the loss of domain that occurs in each Phase II composition. One immediate application of Lemma A.4 in the appendices is the following which will be essential to us later in the proof of the induction (Lemma 6.2) leading up to the main result (Theorem 1.3).

Lemma 2.1. Let $K \subset \mathbb D$ be relatively compact and let $\varepsilon> 0$ . We can then find a finite set $\{f_{i} \}_{i=1}^N \subset \mathcal {S}$ such that, given $f \in \mathcal {S}$ , there exists (at least one) $1 \le k \le N$ such that

$$ \begin{align*} \sup_{z \in K} \rho_{\mathbb D}(f(z),f_k(z))<\varepsilon. \end{align*} $$

Proof. This follows immediately from the normality of ${\mathcal S}$ (Corollary A.3), combined with [Con78, Proposition VII.1.16].

A set $\{f_{i} \}_{i=1}^N \subset \mathcal {S}$ as above will be called an $\varepsilon $ -net for ${\mathcal S}$ on K or simply an $\varepsilon $ -net if the set K is clear from the context.

Next, we will need a notion of internal and external hyperbolic radii, which is one of the crucial bookkeeping tools we will be using, especially for controlling loss of domain in Phase II.

Definition 2.2. Suppose V is a hyperbolic Riemann surface, $v \in V$ , and X is a non-empty subset of V. Define the external hyperbolic radius of X in V about v, denoted $R^{\mathrm {ext}}_{(V,v)}X$ , by

$$ \begin{align*}R^{\mathrm{ext}}_{(V,v)}X=\sup_{z \in X} \rho_{V}(v,z).\end{align*} $$

If $v \in X$ , we further define the internal hyperbolic radius of X in V about v, denoted $R^{\mathrm {int}}_{(V,v)}X$ , by

$$ \begin{align*}R^{\mathrm{int}}_{(V,v)}X = \inf_{z \in V \setminus X} \rho_{V}(v,z).\end{align*} $$

If $v \in X$ and it happens that $R^{\mathrm {int}}_{(V,v)}X=R^{\mathrm {ext}}_{(V,v)}X$ , we will call their common value the hyperbolic radius of X in V about v, and denote it by $R_{(V,v)}X$ .

We remark that, for any $v \in V$ , if $X=V$ , then $R^{\mathrm {int}}_{(V,v)}X=R^{\mathrm {ext}}_{(V,v)}X=\infty $ . Also, if $v \in X$ and $X \subsetneq V$ , then $R^{\mathrm {int}}_{(V,v)}X<\infty $ . Indeed, let $w \in V \setminus X$ . Then,

$$ \begin{align*} R^{\mathrm{int}}_{(V,v)}X&=\inf_{z \in V \setminus X} \rho_{V}(v,z) \\ &\leq\rho_{V}(v,w) \\ &<\infty. \end{align*} $$

We also remark that the internal and external hyperbolic radii are increasing with respect to set-theoretic inclusion in the obvious way. Namely, if $\emptyset \ne X \subset Y$ are subsets of V, then $R^{\mathrm {ext}}_{(V,v)}X \le R^{\mathrm {ext}}_{(V,v)}Y$ , while if $v \in X$ , we also have $R^{\mathrm {int}}_{(V,v)}X \le R^{\mathrm {int}}_{(V,v)}Y$ .
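As a concrete illustration (our own, using the curvature $-1$ normalization with density $2\,|dz|/(1-|z|^2)$; conventions differing by a factor of $2$ exist), in $V = \mathbb D$ with $v = 0$ the Euclidean disc $X = \{|z| < s\}$ has $R^{\mathrm {int}}_{(\mathbb D,0)}X = R^{\mathrm {ext}}_{(\mathbb D,0)}X = 2\,\mathrm{artanh}\,s$, so its hyperbolic radius is defined. A quick numerical check:

```python
import cmath
import math

def rho_disc(z, w=0j):
    """Hyperbolic distance in the unit disc for the density 2|dz|/(1-|z|^2)."""
    a = abs((z - w) / (1 - z * w.conjugate()))
    return 2 * math.atanh(a)

# X = {|z| < 0.5} inside V = D, with v = 0: by symmetry every point of the
# circle |z| = 0.5 lies at the same hyperbolic distance 2*atanh(0.5) = log 3
# from 0, so internal and external radii agree and X has a hyperbolic radius.
s = 0.5
bdry = [s * cmath.exp(2j * math.pi * k / 100) for k in range(100)]
dists = [rho_disc(z) for z in bdry]
print(max(dists) - min(dists))   # essentially zero
```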

The names ‘internal hyperbolic radius’ and ‘external hyperbolic radius’ are justified in view of the following lemma which is how they are often used in practice.

Lemma 2.3. Let $V \subsetneq \mathbb C$ be a simply connected domain, $v \in V$ , and X be a non-empty subset of V. We then have the following:

  (1) if $0 < R^{\mathrm {ext}}_{(V,v)}X < \infty $ , then $X \subset \overline \Delta _V(v, R^{\mathrm {ext}}_{(V,v)}X)$ ;

  (2) if $v \in X$ and $0 < R^{\mathrm {int}}_{(V,v)}X < \infty $ , then $R^{\mathrm {int}}_{(V,v)}X = \sup \{r: \Delta _V(v, r) \subset X\}$ so that, in particular, $\Delta _V(v, R^{\mathrm {int}}_{(V,v)}X) \subset X$ ;

  (3) if $v \in X$ , then $R^{\mathrm {int}}_{(V,v)}X \le R^{\mathrm {ext}}_{(V,v)}X$ .

Proof. Item (1) follows immediately from the above definition of external hyperbolic radius. For item (2), if we temporarily let $R:= \sup \{r: \Delta _V(v, r) \subset X\}$ , then from the definition of internal hyperbolic radius, it follows easily that $V \setminus X \subset V \setminus \Delta _V(v, R^{\mathrm {int}}_{(V,v)}X)$ so that $\Delta _V(v, R^{\mathrm {int}}_{(V,v)}X) \subset X$ , whence we have that $R^{\mathrm {int}}_{(V,v)}X \le R$ . Note that since $R^{\mathrm {int}}_{(V,v)}X> 0$ , this means that $R> 0$ and the set over which we take the supremum to find R must be non-empty. On the other hand, if we let $z \in V \setminus X$ (note that the requirement that $R^{\mathrm {int}}_{(V,v)}X < \infty $ ensures that we can always find such a point), then we must have that $\rho _V(v,z) \ge R$ , and on taking an infimum over all such z, we have $R^{\mathrm {int}}_{(V,v)}X \ge R> 0$ , from which we obtain item (2). Item (3) then follows from items (1) and (2) (the result being trivial in the cases where the external hyperbolic radius is infinite or the internal hyperbolic radius is zero), which completes the proof.

We remark that item (2) above illustrates how the internal hyperbolic radius $R^{\mathrm {int}}_{(V,v)}X$ is effectively the radius of the largest disc about v which lies inside X. The reason that we took $R^{\mathrm {int}}_{(V,v)}X = \inf _{z \in V \setminus X} \rho _{V}(v,z)$ as our definition, rather than the alternative $\sup \{r: \Delta _V(v, r) \subset X\}$ , is that the infimum formulation still makes sense even when it is zero or infinite. This lemma leads to the following handy corollary.

Corollary 2.4. Suppose $V \subsetneq \mathbb C$ is a simply connected domain, $v \in V$ , and that X, Y are subsets of V, with $v \in Y$ .

  (1) If $R^{\mathrm {ext}}_{(V,v)}X \le R^{\mathrm {int}}_{(V,v)}Y$ , then $\overline X \subset \overline Y$ .

  (2) If $R^{\mathrm {ext}}_{(V,v)}X < R^{\mathrm {int}}_{(V,v)}Y$ , then $\overline X \subsetneq Y$ .

We also have the following equivalent formulation for the internal and external hyperbolic radii which is often very useful in practice.

Lemma 2.5. Let $V \subsetneq \mathbb C$ be a simply connected domain, let $v \in V$ , and let X be a non-empty subset of V. We then have the following:

  (1) if $v \in X$ , then $R^{\mathrm {int}}_{(V,v)}X = \inf _{z \in (\partial X) \cap V} \rho _{V}(v,z)$ ;

  (2) $R^{\mathrm {ext}}_{(V,v)}X \ge \sup _{z \in (\partial X) \cap V} \rho _{V}(v,z).$

If, in addition, either $R^{\mathrm {ext}}_{(V,v)}X < \infty $ , or $X \subsetneq V$ and $\hat {\mathbb C} \setminus X$ is connected, we also have:

  (3) $R^{\mathrm {ext}}_{(V,v)}X = \sup _{z \in (\partial X) \cap V} \rho _{V}(v,z).$

In particular, the above holds if $X = U \subsetneq V$ is a simply connected domain.

Note that we can get strict inequality in item (2) above. For example, let $V = \mathbb D$ , $v=0$ , and let $X = \{z: \tfrac {2}{3} \le |z| < 1\}$ . We leave the elementary details to the interested reader.
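For readers who would like a quantitative version of this example, here is a short numerical sketch of our own, assuming the curvature $-1$ normalization $\rho_{\mathbb D}(0,z) = 2\,\mathrm{artanh}|z|$. The only part of $\partial X$ inside $V$ is the circle $|z| = 2/3$, so the supremum in item (2) equals $2\,\mathrm{artanh}(2/3) = \log 5$, whereas $R^{\mathrm {ext}}_{(\mathbb D,0)}X = \infty$ since X reaches out to the unit circle.

```python
import math

def rho0(r):
    """Hyperbolic distance from 0 to modulus r in D (density 2|dz|/(1-|z|^2))."""
    return 2 * math.atanh(r)

# X = {2/3 <= |z| < 1} in V = D with v = 0. The boundary of X inside V is
# the circle |z| = 2/3, so the supremum in item (2) is finite:
sup_bdry = rho0(2 / 3)           # = log 5
# The external radius is a supremum over all of X, which approaches the
# unit circle, so it is infinite:
for r in (0.9, 0.999, 0.999999):
    print(rho0(r))               # grows without bound as r -> 1
print(sup_bdry)
```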

Proof. To prove item (1), we first observe that the result is trivial if $R^{\mathrm {int}}_{(V,v)}X = \infty $ which happens if and only if $X = V$ . So suppose now that $X \subsetneq V$ . Note that, in this case, $\partial X \cap V \ne \emptyset $ , since otherwise $\mathrm {int\,} X$ and $V \setminus \overline X$ would give a separation of the connected set V.

Now let $z \in \partial X \cap V$ and pick $\varepsilon> 0$ . Since $z \in \partial X$ , there exists $w \in V \setminus X$ with $\rho _V(z, w) < \varepsilon $ . By the triangle inequality, $\rho _V(v, w) < \rho _V(v, z) + \varepsilon $ and, since by definition $R^{\mathrm {int}}_{(V,v)}X \le \rho _V(v, w)$ ,

$$ \begin{align*} R^{\mathrm{int}}_{(V,v)}X \le \rho_V(v, z) + \varepsilon. \end{align*} $$

If we then take the infimum over all $z \in \partial X \cap V$ on the right-hand side and let $\varepsilon $ tend to $0$ , we then obtain that

$$ \begin{align*} R^{\mathrm{int}}_{(V,v)}X \le \inf_{z \in (\partial X) \cap V} \rho_{V}(v,z). \end{align*} $$

Now we show $R^{\mathrm {int}}_{(V,v)}X \geq \inf _{z \in (\partial X) \cap V} \rho _{V}(v,z)$ . Take a point $w \in V \setminus X$ and connect v to w with a geodesic segment $\gamma $ in V. Then $\gamma $ must meet $\partial X$ since otherwise, $\mathrm {int\,} X$ and $V \setminus \overline X$ would give a separation of the connected set $[\gamma ]$ (the track of $\gamma $ ). So let $z_0 \in \partial X \cap [\gamma ]$ . Clearly,

$$ \begin{align*} \rho_V(v,z_0)\leq \rho_V(v,w), \end{align*} $$

so

$$ \begin{align*} \inf_{z \in (\partial X) \cap V}\rho_{V}(v,z) \leq \rho_V(v,w), \end{align*} $$

and thus,

$$ \begin{align*} \inf_{z \in (\partial X) \cap V}\rho_{V}(v,z) \leq R^{\mathrm{int}}_{(V,v)}X. \end{align*} $$

This completes the proof of item (1).

To prove item (2), we first consider the case when $\sup _{z \in (\partial X) \cap V} \rho _{V}(v,z)=\infty $ . Note that, since the supremum of the empty set is minus infinity, this in particular implies that $(\partial X) \cap V \neq \emptyset $ . Thus, we can find a sequence $\{z_n \} \subset (\partial X)\cap V$ such that $\rho _{V}(v,z_n)\rightarrow \infty $ . For each $z_n$ , choose $x_n\in X$ such that $\rho _{V}(z_n,x_n)\leq 1$ . Then $\rho _{V}(v,x_n)\rightarrow \infty $ by the reverse triangle inequality, which shows $R^{\mathrm {ext}}_{(V,v)}X=\infty $ , so that we have equality.

Now consider the case when $\sup _{z \in (\partial X) \cap V} \rho _{V}(v,z)<\infty $ . The result is trivial if this supremum is minus infinity, so again we can assume that $(\partial X) \cap V \neq \emptyset $ . Similarly to above, we can take a sequence $\{z_n \} \subset (\partial X)\cap V$ for which $\rho _V(v,z_n)\rightarrow \sup _{z \in (\partial X) \cap V} \rho _{V}(v,z)$ . Then take a sequence $\{x_n \} \subset X$ such that $\rho _V(x_n,z_n)< {1}/{n}$ . By definition of the external hyperbolic radius, we must have

$$ \begin{align*} \rho_V(v,x_n)\leq R^{\mathrm{ext}}_{(V,v)}X, \end{align*} $$

and since $\rho _V(x_n,z_n)< {1}/{n}$ , by the reverse triangle inequality, on letting n tend to infinity,

$$ \begin{align*} \sup_{z \in (\partial X) \cap V} \rho_{V}(v,z) \leq R^{\mathrm{ext}}_{(V,v)}X, \end{align*} $$

which proves item (2) as desired.

Now we show that, under the additional assumption that either $R^{\mathrm {ext}}_{(V,v)}X < \infty $ , or $X \subsetneq V$ and $\hat {\mathbb C} \setminus X$ is connected, we have $\sup _{z \in (\partial X) \cap V} \rho _{V}(v,z) \geq R^{\mathrm {ext}}_{(V,v)}X$ , from which item (3) follows. Assume first that $R^{\mathrm {ext}}_{(V,v)}X < \infty $ and let $\{x_n \} \subset X$ be a sequence such that $\rho _V(v, x_n) \to R^{\mathrm {ext}}_{(V,v)}X$ as n tends to infinity (recall that we have assumed $X \neq \emptyset $ so that the set over which we take our supremum to obtain the external hyperbolic radius is non-empty). Note also that $R^{\mathrm {ext}}_{(V,v)}X = 0$ if and only if $X = \partial X = \{v\}$ , in which case the result is trivial, so we can assume that $\rho _V(v, x_n)> 0$ for each n. For each n, let $\gamma _n$ be the unique hyperbolic geodesic in V which passes through v and $x_n$ . Then there must be a point $z_n$ (possibly $x_n$ itself) on $\gamma _n \cap \partial X$ which does not lie on the same side of $x_n$ as v: otherwise, the portion of $\gamma _n$ on the same side of v as $x_n$ which runs from $x_n$ to $\partial V$ would be separated by the open sets $\mathrm {int\,} X$ and $V \setminus \overline X$ . This is impossible since $x_n \in X$ while $R^{\mathrm {ext}}_{(V,v)}X < \infty $ forces $\gamma _n$ to eventually leave X (in both directions). It then follows that, for each n, we have that

$$ \begin{align*} \rho_V(v, z_n) \ge \rho_V(v, x_n) \end{align*} $$

so that

$$ \begin{align*} \sup_{z \in (\partial X) \cap V} \rho_V(v, z) \ge \rho_V(v, x_n),\end{align*} $$

and the desired conclusion then follows on letting n tend to infinity.

Now suppose that $X \subsetneq V$ and $\hat {\mathbb C} \setminus X$ is connected. We observe that, since V is connected, we must have $(\partial X) \cap V \ne \emptyset $ , just as in the proof of item (1) above, while if $X = U$ is a simply connected domain, then $\hat {\mathbb C} \setminus X$ is automatically connected (e.g. [Reference NewmanNew51, VI.4.1] or [Reference ConwayCon78, Theorem VIII.2.2]).

In view of item (2) above, item (3) holds if $\sup _{z \in (\partial X) \cap V} \rho _{V}(v,z) = \infty $ , so assume from now on that $\sup _{z \in (\partial X) \cap V} \rho _{V}(v,z) < \infty $ and note that $(\partial X) \cap V \ne \emptyset $ implies that this supremum will be non-negative so that we can set $\rho :=\sup _{z \in (\partial X) \cap V} \rho _{V}(v,z)$ . Note that, if $\rho = 0$ , then $(V \setminus \{v\}) \cap \partial X = \emptyset $ and, since $V \setminus \{v\}$ is connected, either $V \setminus \{v\} \subset \mathrm {int\,} X \subset X$ or $V \setminus \{v\} \subset V \setminus \overline X \subset V \setminus X$ . In the first case, $X = V \setminus \{v\}$ and one easily checks that item (3) fails. However, we can rule out this case since $\hat {\mathbb C} \setminus X = \{v\} \cup (\hat {\mathbb C} \setminus V)$ is disconnected, which violates our hypothesis that $\hat {\mathbb C} \setminus X$ be connected. In the second case, we have $X = \{v\}$ , in which case one easily checks that item (3) holds. Thus, we can assume from now on that $\rho> 0$ .

Claim 2.6. $X \subset {\overline \Delta _V}(v,\rho )$ .

Proof. Suppose not. Then there exists $x \in X$ such that $\rho _V(v, x)> \rho $ . Set ${\tilde \rho } := \rho _V(v,x)> \rho $ and define C to be the hyperbolic circle of radius ${\tilde \rho }$ about v with respect to the hyperbolic metric of V. Then we have $C \cap \partial X = \emptyset $ since, for all $z \in \partial X \cap V$ , by definition, we have $\rho _V(v,z) \le \rho < \tilde \rho $ . Thus, $x \in \mathrm {int\,} X$ .

Now we have $x \in C$ . We next argue that each point of C must lie in X. Suppose z is another point on C such that $z \notin X$ . Then z would be in $V\setminus X$ . As $C \cap \partial X = \emptyset $ , we have that $z \in V \setminus {\overline X}= \mathrm {int\,} (V \setminus X)$ . However, this is impossible as $\mathrm {int\,} X$ and $\mathrm {int\,} (V \setminus X)$ would then form a separation of the connected set C. Thus, $C \subset X$ and C induces a separation of $\hat {\mathbb C} \setminus X$ . Indeed, since $\rho < \tilde \rho $ , $\partial X$ is inside the Jordan curve C and hence there are points of $\hat {\mathbb C} \setminus X$ inside C. However, $\infty \in \hat {\mathbb C} \setminus V \subset \hat {\mathbb C} \setminus X$ lies outside C. This contradicts our assumption that $\hat {\mathbb C} \setminus X$ is connected.

Immediately from the above claim, we see that $\rho = \sup _{z \in (\partial X) \cap V} \rho _{V}(v,z) \geq R^{\mathrm {ext}}_{(V,v)}X$ , and thus, in the case where $X \subsetneq V$ and $\hat {\mathbb C} \setminus X$ is connected,

$$ \begin{align*} \sup_{z \in (\partial X) \cap V} \rho_{V}(v,z) = R^{\mathrm{ext}}_{(V,v)}X, \end{align*} $$

which proves item (3) as desired.

We will require the following elementary definition from metric spaces.

Definition 2.7. Suppose R is a hyperbolic Riemann surface, and that A and B are non-empty subsets of R. For $z \in R$ , we define

$$ \begin{align*} \rho_{R}(z, B)=\inf_{w\in B}\rho_R(z,w) \end{align*} $$

and

$$ \begin{align*} \rho_{R}(A,B)=\inf_{z\in A}\rho_R(z,B). \end{align*} $$

We say that a subset X of a simply connected domain $V \subsetneq \mathbb C$ is hyperbolically convex if, for every $z, w \in X$ , the geodesic segment $\gamma _{z,w}$ from z to w lies inside X (this is the same as the definition given in [Reference Ma and MindaMM94, §2]). We then have the following elementary but useful lemma.

Lemma 2.8. (The hyperbolic convexity lemma)

Let $V \subsetneq \mathbb C$ be a simply connected domain. Then any hyperbolic disc $\Delta _V(z, R)$ is hyperbolically convex with respect to the hyperbolic metric of V.

Proof. Let a, b be two points in $\Delta _V(z, R)$ . Using conformal invariance, we can apply a suitably chosen Riemann map from V to the unit disc ${\mathbb D}$ so that, without loss of generality, we can assume that $a=0$ while b is on the positive real axis, whence the shortest geodesic segment from a to b is the line segment $[0,b]$ on the positive real axis. Now, the hyperbolic disc $\Delta _{\mathbb D}(z, R)$ is a round disc ${\mathrm D}(w, r)$ for some $w \in {\mathbb D}$ and $r \in (0,1)$ , which is therefore convex (with respect to the Euclidean metric), and the result follows.
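The key fact used here, that a hyperbolic disc in ${\mathbb D}$ is a Euclidean round disc, can be checked numerically. The sketch below is illustrative only and is not part of the proof: it uses the curvature $-1$ normalization $\sigma _{\mathbb D}(z) = 2/(1-|z|^2)$ , under which the hyperbolic circle of radius R about w is the image of the Euclidean circle $|u| = t := \tanh (R/2)$ under the disc automorphism $u \mapsto (u+w)/(1+\bar w u)$ ; the closed forms used for the Euclidean centre and radius are our own computation, not taken from the text.

```python
import cmath, math

w = 0.35 + 0.25j          # hyperbolic centre (an arbitrary sample point)
R = 1.2                   # hyperbolic radius, curvature -1 convention
t = math.tanh(R / 2.0)

# predicted Euclidean centre and radius of the hyperbolic circle
# (closed forms derived by hand for this sketch)
c = w * (1 - t ** 2) / (1 - (t * abs(w)) ** 2)
r = t * (1 - abs(w) ** 2) / (1 - (t * abs(w)) ** 2)

for k in range(24):
    u = t * cmath.exp(2j * math.pi * k / 24)
    z = (u + w) / (1 + w.conjugate() * u)   # point at hyperbolic distance R from w
    # the hyperbolic distance from w to z really is R ...
    d = 2 * math.atanh(abs((z - w) / (1 - z * w.conjugate())))
    assert abs(d - R) < 1e-10
    # ... and z lies on the predicted Euclidean circle
    assert abs(abs(z - c) - r) < 1e-10
```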

Ordinary derivatives are useful for estimating how points get moved apart by applying functions when using the Euclidean metric. In our case, we will need a notion of a derivative taken with respect to the hyperbolic metric.

Let $R,S$ be hyperbolic Riemann surfaces with metrics

$$ \begin{align*} {d}\rho_R &= \sigma_R(z)|{d}z|, \\ {d}\rho_S &= \sigma_S(z)|{d}z|, \end{align*} $$

respectively, and let $\ell _R$ , $\ell _S$ denote the hyperbolic length in R, S, respectively. Let $X \subset R$ and let f be defined and analytic on an open set containing X with $f(X) \subset S$ . For $z \in X$ , define the hyperbolic derivative:

(2.1) $$ \begin{align} f_{R,S}^{\natural}(z):= f'(z)\frac{\sigma_S(f(z))}{\sigma_R(z)}, \end{align} $$

see e.g. the differential operation $D_{h1}$ defined in [Reference Ma and MindaMM94, §2] and also [Reference Ma and MindaMM99, §2]. Note that the hyperbolic derivative satisfies the chain rule, that is, if R, S, T are hyperbolic Riemann surfaces with g defined and analytic on an open set containing $X \subset R$ , and f defined and analytic on an open set containing $Y \subset S$ with $f(X) \subset Y$ , then, on the set X,

(2.2) $$ \begin{align} (f\circ g)_{R,T}^{\natural}=(f^{\natural}_{S,T}\circ g)\cdot g_{R,S}^{\natural}. \end{align} $$
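The chain rule in equation (2.2) can be verified numerically. The following sketch is illustrative only: the self-maps $g(z)=z/2$ and $f(z)=z^2$ and the curvature $-1$ normalization $\sigma _{\mathbb D}(z)=2/(1-|z|^2)$ are our choices (any other normalization rescales the densities by constants that cancel in the chain rule), with $R=S=T={\mathbb D}$ .

```python
def sigma(z):
    # hyperbolic density of the unit disc in the curvature -1 convention
    return 2.0 / (1.0 - abs(z) ** 2)

def hyp_deriv(fprime_z, fz, z):
    # f^natural(z) = f'(z) * sigma(f(z)) / sigma(z), as in equation (2.1)
    return fprime_z * sigma(fz) / sigma(z)

z = 0.3 + 0.2j
gz = z / 2                  # g(z) = z/2, so g'(z) = 1/2
fgz = gz ** 2               # (f o g)(z) = z^2/4, with derivative z/2

lhs = hyp_deriv(z / 2, fgz, z)                            # (f o g)^natural at z
rhs = hyp_deriv(2 * gz, fgz, gz) * hyp_deriv(0.5, gz, z)  # (f^nat o g) * g^nat
assert abs(lhs - rhs) < 1e-12
```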

We also have a version of conformal invariance, which is essentially [Reference Keen and LakicKL07, Theorem 7.1.1], and which the interested reader can also deduce directly from the formula for the hyperbolic metric using a universal covering map from the disc (see, e.g. [Reference Carleson and GamelinCG93, p. 12]), namely:

(2.3) $$ \begin{align} \mbox{if} \: f: R \mapsto S\: \mbox{is a covering map, then} \: |f^{\natural}| = 1 \: \mbox{on} \: R. \end{align} $$

We observe that the above is basically another way of rephrasing part of the Schwarz lemma for the hyperbolic metric (e.g. [Reference Carleson and GamelinCG93, Theorem I.4.1]) where we have an isometry of hyperbolic metrics if and only if the mapping from one Riemann surface to the other lifts to an automorphism of the unit disc. The main utility of the hyperbolic derivative for us will be via the hyperbolic metric version of the standard M-L estimates for line integrals (see Lemma 2.9 below). First, however, we make one more definition.

Let R, S be hyperbolic Riemann surfaces, let X be a non-empty subset of R, and let f be defined and analytic on an open set containing X with $f(X) \subset S$ . Define the hyperbolic Lipschitz bound of f on X as

$$ \begin{align*} \|f_{R,S}^{\natural} \|_X := \sup_{z \in X}|f_{R,S}^{\natural}(z)|. \end{align*} $$

We recall that, for any two points z, w in R, the hyperbolic distance $\rho _R(z,w)$ is the same as the length of a shortest geodesic segment in R joining z to w (see e.g. [Reference Keen and LakicKL07, Theorems 7.1.2 and 7.2.3]).

Lemma 2.9. (Hyperbolic M-L estimates)

Suppose $R,S$ are hyperbolic Riemann surfaces. Let $\gamma $ be a piecewise smooth curve in R and let f be holomorphic on an open neighborhood of $[ \gamma ] $ and map this neighborhood inside S with $|f_{R,S}^{\natural }|\leq M$ on $[ \gamma ]$ . Then,

$$ \begin{align*} \ell_S(f(\gamma)) \leq M \ell_R (\gamma). \end{align*} $$

In particular, if $z, w \in R$ and $\gamma $ is a shortest hyperbolic geodesic segment connecting z and w, and $|f_{R,S}^{\natural }|\leq M$ on $[ \gamma ] $ , then

$$ \begin{align*} \rho_S(f(z), f(w)) \leq M\rho_R(z,w). \end{align*} $$

Proof. For the first part, if $\gamma : [a,b] \to R$ , we calculate

$$ \begin{align*} \ell_S(f(\gamma)) &=\int_{f(\gamma)}\,{d}\rho_S \\ &=\int_{a}^{b} \sigma_S(f(\gamma(t))) \cdot |f'(\gamma(t))| \cdot |\gamma'(t)|\, {d}t \\ &=\int_{a}^{b}|f^{\natural}(\gamma(t))|\cdot \sigma_R(\gamma(t)) \cdot |\gamma'(t)|\, {d}t \\ &=\int_{\gamma}|f^{\natural}|\, {d}\rho_R \\ &\leq M \int_{\gamma}\,{d}\rho_R \\ &= M \ell_R(\gamma). \end{align*} $$

The second part then follows immediately from this and the facts that by [Reference Keen and LakicKL07, Theorems 7.1.2 and 7.2.3], $\rho _R(z,w)$ is equal to the length of the shortest geodesic segment in R joining z and w, while $f(\gamma )$ is at least as long in S as the distance between $f(z)$ and $f(w)$ .
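The second inequality can be illustrated concretely (this sketch is ours and not part of the proof): for the self-map $f(z)=z^2$ of the unit disc, the Schwarz–Pick lemma gives $|f^{\natural }| \le 1$ everywhere, so one may take $M=1$ on any geodesic, and f then contracts the hyperbolic distance $\rho _{\mathbb D}(z,w)=2\operatorname {artanh}|(z-w)/(1-\bar wz)|$ (curvature $-1$ normalization).

```python
import math, random

def rho(z, w):
    # hyperbolic distance in the unit disc, curvature -1 convention
    return 2.0 * math.atanh(abs((z - w) / (1.0 - w.conjugate() * z)))

def f(z):
    return z * z   # holomorphic self-map of the disc, so |f^natural| <= 1

random.seed(0)
for _ in range(100):
    z = complex(random.uniform(-0.7, 0.7), random.uniform(-0.7, 0.7))
    w = complex(random.uniform(-0.7, 0.7), random.uniform(-0.7, 0.7))
    # Lemma 2.9 with M = 1: f does not expand the hyperbolic metric
    assert rho(f(z), f(w)) <= rho(z, w) + 1e-12
```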

In this paper, we will be working with hyperbolic derivatives only for mappings which map a subset of U to U, where U is a suitably scaled version of the Siegel disc $U_{\unicode{x3bb} }$ introduced in §1 and where we are obviously using the hyperbolic density of U in the definition in equation (2.1). For the sake of readability, from now on, we will suppress the subscripts and simply write $f^{\natural }$ instead of $f_{U,U}^{\natural }$ for derivatives taken with respect to the hyperbolic metric of U.

3. The Polynomial Implementation Lemma

3.1. Setup

Let $\Omega ,\Omega '\subset {\mathbb C}$ be bounded Jordan domains with analytic boundary curves $\gamma $ and $\Gamma $ , respectively, such that ${\overline \Omega } \subset \Omega '$ . By making a translation if necessary, we can assume without loss of generality that $0 \in \Omega $ so that $\gamma $ then separates $0$ from $\infty $ . Suppose f is analytic and injective on a neighborhood of ${\overline \Omega }$ such that $f(\gamma )$ is still inside $\Gamma $ . Let $A=\Omega ' \setminus {\overline \Omega }$ be the conformal annulus bounded by $\gamma $ and $\Gamma $ , and let $\tilde A$ be the conformal annulus bounded by $f(\gamma )$ and $\Gamma $ . Define

$$ \begin{align*} F(z)= \left\{\! \begin{array}{ll} f(z), & z\in {\overline \Omega}, \\ z, & z \in \hat{\mathbb C} \setminus \Omega'. \\ \end{array} \right. \end{align*} $$

We wish to extend F to a quasiconformal homeomorphism of $\hat {\mathbb C}$ . To do this, the main tool we use will be a lemma of Lehto [Reference LehtoLeh65] which allows us to define F in the ‘missing’ region between $\Omega $ and $\hat {\mathbb C} \setminus \Omega '$ . First, however, we need to gather some terminology.

Recall that in [Reference NewmanNew51], a Jordan curve C in the plane (parameterized on the unit circle $\mathbb T$ ) is said to be positively oriented if the algebraic number of times a ray from the bounded complementary domain to the unbounded complementary domain crosses the curve is $1$ or, equivalently, the winding number of the curve about points in its bounded complementary region is also $1$ (the reader is referred to the discussion in [Reference NewmanNew51, pp. 188–194]).

Following the proof of Theorem VII.11.1, Newman goes on to define a homeomorphism g defined on $\mathbb C$ to be orientation-preserving or sense-preserving if it preserves the orientation of all simple closed curves. Lehto and Virtanen adopt Newman’s definitions in their text on quasiconformal mappings [Reference Lehto and VirtanenLV65], and they have a related and more general definition of orientation-preserving maps defined on an arbitrary plane domain G, where g is said to be orientation-preserving on G if the orientation of the boundary of every Jordan domain D with $\overline D \subset G$ is preserved [Reference Lehto and VirtanenLV65, p. 9].

Lehto and Virtanen also introduce the concept of the orientation of a Jordan curve C with respect to one of its complementary domains G [Reference Lehto and VirtanenLV65, p. 8]. Let $C(z): \mathbb T \mapsto C$ be a parameterization of C which defines its orientation and let $\Phi $ be a Möbius transformation which maps G to the bounded component of the complement of $\Phi (C)$ such that $0 \in \Phi (G)$ . Then, C is said to be positively oriented with respect to G if the argument of $\Phi \circ C(t)$ increases by $2\pi $ as one traverses $\mathbb T$ anticlockwise. Using this, if G is an n-connected domain whose boundary consists of n disjoint Jordan curves (what Lehto and Virtanen in [Reference Lehto and VirtanenLV65, p. 12] refer to as free boundary curves), it is easy to apply the above definition to define the orientation of G with respect to each curve which comprises $\partial G$ .

Recall that in Lehto’s paper [Reference LehtoLeh65], he considers a conformal annulus (ring domain) $D \subset \hat {\mathbb C}$ bounded by two Jordan curves $C_1$ and $C_2$ . If $\varphi $ is a homeomorphism of $C_1 \cup C_2$ into the plane, then the curves $\varphi (C_1)$ , $\varphi (C_2)$ will bound another conformal annulus which we call $D'$ . If, under the mapping $\varphi $ , the positive orientations of $C_1$ and $C_2$ with respect to D correspond to the positive orientations of $\varphi (C_1)$ and $\varphi (C_2)$ with respect to $D'$ , then $\varphi $ is called an admissible boundary function for D.

Lemma 3.1. (Lehto [Reference LehtoLeh65])

Let D be a conformal annulus in $\hat {\mathbb C}$ bounded by the Jordan curves $C_1$ and $C_2$ , and let $w_h: \hat {\mathbb C} \mapsto \hat {\mathbb C}$ , $h = 1, 2$ be quasiconformal mappings such that the restrictions of $w_h$ to $C_h$ , $h=1,2$ , constitute an admissible boundary function for D. Then there exists a quasiconformal mapping w of D such that $w(z) = w_h(z)$ for $z \in [C_h]$ , $h=1,2$ (where for each h, $[C_h]$ is the track of the curve $C_h$ ).

Applying this result to our situation, we have the following.

Lemma 3.2. For $\Omega $ , $\Omega '$ , f, F as above, the mapping F can be extended to a quasiconformal homeomorphism of $\hat {\mathbb C}$ .

Proof. To apply Lehto’s lemma above, we need to verify two things: first that f (and the identity) can be extended as quasiconformal mappings from $\hat {\mathbb C}$ to itself and second that we have an admissible pair of mappings on $\partial A = \partial (\Omega ' \setminus \overline \Omega )$ according to Lehto’s definition given above.

First note that, in view of the argument principle, f, being univalent, is an orientation-preserving mapping on a neighborhood of $\overline \Omega $ . Using [Reference Lehto and VirtanenLV65, Satz II.8.1], f (and trivially the identity) can be extended as a quasiconformal mapping of $\mathbb C$ to itself. Using either [Reference NewmanNew51, Theorem VII.11.1] or the Orientierungssatz in [Reference Lehto and VirtanenLV65, p. 9], the above extension can then be further extended to an orientation-preserving homeomorphism of $\hat {\mathbb C}$ , which is readily seen to be a quasiconformal mapping of $\hat {\mathbb C}$ to itself, as follows from [Reference Lehto and VirtanenLV65, Satz I.8.1].

Both f and the identity preserve the positive orientations of $\gamma $ and $\Gamma $ , respectively. In addition, since $f(\gamma )$ lies inside $\Gamma $ , it follows that the orientations of $\gamma $ and $\Gamma $ with respect to A correspond to those of $f(\gamma )$ and $\Gamma $ with respect to $\tilde A$ . To be precise, let $\gamma $ be positively oriented with respect to A and let ${1}/{A}$ , ${1}/({\tilde A - f(0)})$ denote the images of A, $\tilde A$ , respectively, under ${1}/{z}$ , ${1}/({z - f(0)})$ , respectively. Since A lies in the unbounded complementary component of $\gamma $ , it follows from the above definition of the orientation of a boundary curve for a domain that the winding number of ${1}/{\gamma }$ about points of ${1}/{A}$ is $1$ so that the winding number of $\gamma $ about $0$ (which lies inside $\gamma $ ) is $-1$ . By the argument principle, $f(0)$ lies inside $f(\gamma )$ and, since f is orientation-preserving, the winding number of $f(\gamma )$ about $f(0)$ is also $-1$ .

A simple calculation then shows that the winding number of ${1}/({f(\gamma ) - f(0)})$ about $0$ and thus also about points in ${1}/({\tilde A - f(0)})$ is also $1$ . This shows that f and thus F preserve the positive orientations of $\gamma $ , $f(\gamma )$ with respect to A and $\tilde A$ , respectively. Since F is the identity on $[\Gamma ]$ , it trivially preserves the positive orientation of $\Gamma $ with respect to A and $\tilde A$ (both of which lie inside $\Gamma $ ) and, with this, we have shown the hypotheses of Lemma 3.1 above from [Reference LehtoLeh65] are met.

Lemma 3.1 now allows us to extend F to a quasiconformal mapping on the conformal annulus $A = \Omega ' \setminus \overline \Omega $ such that this extension agrees with the original values of F on the boundary and maps A to $\tilde A$ . We can then use [Reference Lehto and VirtanenLV65, Satz I.8.3] on the removability of analytic arcs or, remembering that f is defined on a neighborhood of $\overline \Omega $ while the identity is defined on all of $\hat {\mathbb C}$ , twice invoke Rickman’s lemma (e.g. [Reference Douady and HubbardDH85, Lemma 2]) to conclude that the resulting homeomorphism of $\hat {\mathbb C}$ is quasiconformal.
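As an illustrative aside, the winding-number bookkeeping in the preceding argument can be checked numerically. In the sketch below (our own example, not part of the proof), the unit circle is parameterized clockwise, so that it has winding number $-1$ about $0$ while its image under ${1}/{z}$ has winding number $+1$ about $0$ , matching the sign change exploited above; the winding number is computed as the total change of argument divided by $2\pi $ .

```python
import cmath, math

def winding(points, a):
    # winding number of a closed polygonal curve about the point a,
    # computed as the total change of argument divided by 2*pi
    total = 0.0
    n = len(points)
    for k in range(n):
        z0 = points[k] - a
        z1 = points[(k + 1) % n] - a
        total += cmath.phase(z1 / z0)
    return round(total / (2 * math.pi))

N = 400
gamma = [cmath.exp(-2j * math.pi * k / N) for k in range(N)]  # clockwise circle

assert winding(gamma, 0) == -1                    # gamma about 0: index -1
assert winding([1 / z for z in gamma], 0) == 1    # 1/gamma about 0: index +1
```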

We can summarize the above in the following useful definition.

Definition 3.3. If f, F, $\gamma $ , $\Gamma $ , and A are all as above, with F an admissible boundary function for A, we will say that $(f,\mathrm {Id})$ is an admissible pair on $(\gamma ,\Gamma )$ .

Recall that we have $P_{\unicode{x3bb} }= \unicode{x3bb} z(1-z)$ , where $\unicode{x3bb} = e^{2\pi i ({\sqrt {5}-1})/{2}}$ . Let $\mathcal K_{\unicode{x3bb} }$ be the filled Julia set for $P_{\unicode{x3bb} }$ and let $U_{\unicode{x3bb} }$ be the corresponding Siegel disc containing $0$ . Let $\kappa \ge 1$ and set $P=P_{\kappa }= ({1}/{\kappa }) P_{\unicode{x3bb} }(\kappa z) =\unicode{x3bb} z - \unicode{x3bb} \kappa z^2$ . Then, if ${\mathcal K}$ is the filled Julia set for P, we have $\mathcal K\subset \mathrm {D}(0, ({2}/{\kappa }))$ . Let U be the Siegel disc for P and note that $U= \{z \in \mathbb C \text { : } z= {w}/{\kappa } \text { for some } w\in U_{\unicode{x3bb} }\}$ . Now choose the Jordan domains $\Omega $ , $\Omega '$ above such that $\mathcal K \subset \Omega \subset {\overline \Omega } \subset \Omega ' \subset \mathrm {D}(0, ({2}/{\kappa }))$ , where from above ${2}/{\kappa }\le 2$ is an escape radius for P.
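The rescaling and the escape radius admit a quick numerical sanity check (an illustrative sketch, not part of the construction): the conjugacy $P_{\kappa }(z) = ({1}/{\kappa })P_{\unicode{x3bb} }(\kappa z)$ is an algebraic identity, and any point with $|z|> {2}/{\kappa }$ satisfies $|P_{\kappa }(z)| \ge |z|(\kappa |z| - 1)> |z|$ , so its orbit escapes to $\infty $ .

```python
import cmath

LAMBDA = cmath.exp(2j * cmath.pi * (5 ** 0.5 - 1) / 2)   # |LAMBDA| = 1

def P_lambda(z):
    return LAMBDA * z * (1 - z)

def P_kappa(z, kappa):
    return LAMBDA * z - LAMBDA * kappa * z * z

for kappa in (1.0, 2.5, 10.0):
    # the conjugacy P_kappa(z) = (1/kappa) * P_lambda(kappa * z)
    z = 0.11 - 0.07j
    assert abs(P_kappa(z, kappa) - P_lambda(kappa * z) / kappa) < 1e-12

    # escape radius 2/kappa: a point of larger modulus iterates to infinity
    z = 2.5 / kappa + 0j
    for _ in range(60):
        z = P_kappa(z, kappa)
        if abs(z) > 1e6:
            break
    assert abs(z) > 1e6
```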

Let $(f,\mathrm {Id})$ be an admissible pair where f, F, $\gamma $ , $\Gamma $ , and A are all as above. In view of Lemma 3.2, F can be extended to a quasiconformal homeomorphism of $\hat {\mathbb C}$ and we let $\mu _F$ denote the complex dilatation of F. Next let $N \in \mathbb N$ and, for each $0 \le m \le N$ , set $\mu _m^N:= (P^{N-m})^*\mu _F$ that is $\mu _m^N(z) = \mu _{F\circ P^{N-m}}(z)$ (here and in what follows, we draw the reader’s attention to the fact that the superscript N is an index rather than an iterate or a power). Let $\varphi _N^N:=F$ and, for $0 \leq m \leq N-1$ , let $\varphi _m^N$ be the unique normalized solution of the Beltrami equation for $\mu _m^N$ which satisfies $\varphi _m^N(z) = z + \mathcal {O}({1}/{|z|})$ near $\infty $ (see e.g. [Reference Carleson and GamelinCG93, Theorem I.7.4]). For $1 \leq m \leq N$ , let

$$ \begin{align*} {\tilde P}_m^N(z)= \varphi_m^N \circ P \circ (\varphi_{m-1}^N)^{-1}(z). \end{align*} $$

Then for each m, ${\tilde P}_m^N$ is an analytic degree 2 branched cover of $\mathbb C$ which has a double pole at $\infty $ and no other poles. Thus, ${\tilde P}_m^N$ is a quadratic polynomial and the fact that each $\varphi _m^N$ is tangent to the identity at $\infty $ ensures that the leading coefficient of ${\tilde P}_m^N$ is $-\unicode{x3bb} \kappa $ and thus has absolute value $\kappa $ . Let $\alpha _m^N:=\varphi _m^N(0)$ . Since the dilatation of $\varphi _m^N$ is zero on $\hat {\mathbb C} \setminus {\overline {\mathrm D}}(0, ({2}/{\kappa }))$ , we know $\varphi _m^N$ is univalent on this set. Thus, ${1}/{\varphi _m^N(1/z)}$ is univalent on ${\mathrm D}(0, ({\kappa }/{2}))$ and is tangent to the identity at $0$ . It follows from the Koebe one-quarter theorem (Theorem A.1) and the injectivity of $\varphi _m^N$ that $|\alpha _m^N|\leq 4 ({2}/{\kappa }) = {8}/{\kappa }$ .

Define $\psi _m^N(z):= \varphi _m^N(z)-\alpha _m^N$ for $0 \leq m \leq N$ . Then, for each $1 \leq m \leq N$ , if we define

(3.1) $$ \begin{align} P_m^N(z)= \psi_m^N \circ P \circ (\psi_{m-1}^N)^{-1}(z), \end{align} $$

we have that $P_m^N$ is a quadratic polynomial whose leading coefficient is again $-\unicode{x3bb} \kappa $ and thus has absolute value $\kappa $ . Moreover, $P_m^N$ fixes $0$ as it is ${\tilde P}_m^N$ composed with suitably chosen (uniformly bounded) translations. We now turn to calculating bounds on the coefficients of each $P_m^N$ .

Lemma 3.4. Any sequence formed from the polynomials $P_m^N(z)$ for $1\leq m \leq N$ as above is a $(17+\kappa )$ -bounded sequence of polynomials.

Proof. By the construction in equation (3.1) above, the leading coefficient has absolute value $\kappa $ while the constant term is zero. Now, for $|z|$ sufficiently large,

$$ \begin{align*} P_m^N(z) &=\unicode{x3bb} \bigg( z+\alpha_{m-1}^N+\mathcal{O}\bigg(\frac{1}{|z|}\bigg) \bigg) \bigg (1-\kappa z-\kappa \alpha_{m-1}^N+\mathcal{O}\bigg (\frac{1}{|z|}\bigg )\bigg )\\ &\quad - \alpha_{m}^N + \mathcal{O}\bigg (\frac{1}{|P\circ (\psi_{m-1}^N)^{-1}(z))|}\bigg ), \end{align*} $$

and one sees easily that the $\mathcal {O}({1}/{|P\circ (\psi _{m-1}^N)^{-1}(z)|})$ term is actually $\mathcal {O}({1}/{|z|^2})$ . Therefore, the coefficient of the linear term is $\unicode{x3bb} -2 \unicode{x3bb} \kappa \alpha _{m-1}^N$ , and thus is bounded in modulus by $1+2\cdot 1\cdot \kappa \cdot ({8}/{\kappa }) =17$ . Lastly, since $\kappa \ge 1$ , ${1}/({17 + \kappa }) < \kappa < 17 + \kappa $ and so we have indeed constructed a $(17+ \kappa )$ -bounded sequence of polynomials (as defined near the start of §1.1), proving the lemma as desired.
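The bound of $17$ on the linear coefficient can also be probed numerically (an illustrative sketch only): since $|\unicode{x3bb} |=1$ , we have $|\unicode{x3bb} - 2\unicode{x3bb} \kappa \alpha | = |1 - 2\kappa \alpha | \le 1 + 2\kappa \cdot ({8}/{\kappa }) = 17$ for $|\alpha | \le {8}/{\kappa }$ , with equality at $\alpha = -{8}/{\kappa }$ .

```python
import cmath, math

LAMBDA = cmath.exp(2j * math.pi * (5 ** 0.5 - 1) / 2)   # |LAMBDA| = 1

for kappa in (1.0, 3.0, 8.0):
    worst = 0.0
    for k in range(1000):
        # sample alpha on the extreme circle |alpha| = 8/kappa
        alpha = (8.0 / kappa) * cmath.exp(2j * math.pi * k / 1000)
        worst = max(worst, abs(LAMBDA - 2 * LAMBDA * kappa * alpha))
    assert worst <= 17.0 + 1e-9       # never exceeds 1 + 16 = 17
    assert worst >= 17.0 - 1e-3       # and the bound is essentially attained
```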

Lemma 3.5. Both $\psi _0^N$ and $(\psi _0^N)^{-1}$ converge uniformly to the identity on $\mathbb C$ (with respect to the Euclidean metric).

Proof. Recall that $\Gamma $ is the boundary of $\Omega '$ and that we chose $\mathcal K \subset \Omega \subset {\overline \Omega } \subset \Omega ' \subset \mathrm {D}(0, ({2}/{\kappa }))$ . Let $G(z)$ be the Green’s function for P and set $h:= \sup _{z \in \Gamma }G(z)$ . Then, for each $0 \le m < N$ , $\mathrm {supp}\, \mu _m^N \subset \{z: 0 < G(z) \leq h \cdot 2^{m-N} \}$ and so $\mathrm {supp}\, \mu _0^N \subset \{z: 0 < G(z) \leq h \cdot 2^{-N} \}$ . Thus, $\mu _0^N \rightarrow 0$ everywhere as $N \rightarrow \infty $ . By [Reference Carleson and GamelinCG93, Theorem I.7.5] (see also [Reference AhlforsAhl66, Lemma 1]), we have that $\varphi _0^N$ and $(\varphi _0^N)^{-1}$ both converge uniformly to the identity on $\mathbb C$ (recall that the unique solution for $\mu \equiv 0$ is the identity in view of the uniqueness part of the measurable Riemann mapping theorem for solving the Beltrami equation, e.g. [Reference Carleson and GamelinCG93, Theorem I.7.4]). Finally, $\alpha _0^N=\varphi _{0}^N(0) \rightarrow 0$ as $N\rightarrow \infty $ , and since $\psi _0^N(z)=\varphi _0^N(z)-\alpha _0^N$ , the result follows.

The support of each $\mu _0^N$ is contained in the basin of infinity $A_{\infty }$ for P. More precisely, since $\mathrm {supp}\, \mu _F$ is contained in $\overline A$ , on which the Green’s function G has a positive infimum, we have $\mathrm {supp}\, \mu _0^N \subset \{z: G(z) \geq 2^{-N}\inf _{z \in \overline A}G(z) \}$ , and since $2^{-N}\inf _{z \in \overline A}G(z)>0$ , the dilatation $\mu _0^N$ vanishes on the open neighborhood $\{z: G(z) < 2^{-N}\inf _{z \in \overline A}G(z) \}$ of $\mathcal K$ , so that $\psi _0^N$ is analytic on a neighborhood of $\overline {U}$ . Then if we define $U^N :=\psi _0^N(U)$ , we have that $(\psi _0^N)^{-1}$ is analytic on a neighborhood of $\overline {U^N}$ . We now prove two fairly straightforward technical lemmas (see Figure 2).

Lemma 3.6. $(U^N,0)\rightarrow (U,0)$ in the Carathéodory topology.

Proof. Define $\psi ^{-1}: \mathbb D \rightarrow U$ to be the unique inverse Riemann map from ${\mathbb D}$ to U satisfying $\psi ^{-1}(0)=0$ , $(\psi ^{-1})'(0)>0$ . By Lemma 3.5, $\psi _{0}^N \circ \psi ^{-1} $ converges locally uniformly to $\mathrm {Id} \circ \psi ^{-1}$ on ${\mathbb D}$ . The result then follows from Theorem A.9 in view of the fact that by the above result, $(\psi _{0}^N)'(0) \to 1$ as $N \to \infty $ (so that the argument of $(\psi _{0}^N)'(0)$ converges to $0$ as $N \to \infty $ ).

Lemma 3.7. For any $\varepsilon>0$ and any relatively compact subset A of U, there exists an $N_0$ such that

$$ \begin{align*} |(\psi_0^N)^{\natural}(z)-1|&<\varepsilon, \\ |((\psi_0^N)^{-1})^{\natural}(z)-1|&<\varepsilon \end{align*} $$

for all z in A, $N\geq N_0$ .

Figure 2 Supports of dilatations converging to zero almost everywhere.

Proof. Let ${d}\rho _U=\sigma _U(z)|{d}z|$ , where the hyperbolic density $\sigma _U$ is continuous on U (e.g. [Reference Keen and LakicKL07, Theorem 7.2.2]) and bounded away from $0$ on any relatively compact subset of U. For each N, $\psi _0^N$ is analytic on a neighborhood of $\overline U$ , while by Lemma 3.6 and part (2) of the definition of convergence in the Carathéodory topology (Definition A.7), $(\psi _0^N)^{-1}$ is analytic on any relatively compact subset of U for N sufficiently large, so that by Lemma 3.5, both $(\psi _0^N)'$ and $((\psi _0^N)^{-1})'$ converge uniformly to $1$ on A. Since A is a relatively compact subset of U, there exists $\delta _0>0$ such that U contains a Euclidean $2\delta _0$ -neighborhood of A. Let $\tilde A$ denote a Euclidean $\delta _0$ -neighborhood of A, so that $\tilde A$ is still a relatively compact subset of U. By Lemma 3.5 again, we can choose $N_0$ large enough such that $\psi _0^N(A)\subset \tilde {A}$ for all $N\geq N_0$ . Then, since $\sigma _U$ is continuous on the relatively compact subset A of U, there exists $\sigma>0$ such that $|\sigma _U|\ge \sigma $ on A. Then for $z \in A$ , using the uniform continuity of $\sigma _U$ on the relatively compact subset $\tilde {A}$ of U,

$$ \begin{align*} (\psi_0^N)^{\natural}(z)&= \frac{(\psi_0^N)'(z)\sigma_U(\psi_0^N(z))}{\sigma_U(z)} \end{align*} $$

converges uniformly to 1 on A, as desired. The proof for $((\psi _0^N)^{-1})^{\natural }$ is similar.

3.2. Statement and proof of the Polynomial Implementation Lemma

Recall that we had defined $P_m^N(z)= \psi _m^N \circ P \circ (\psi _{m-1}^N)^{-1}(z)$ so that we have defined $P_m^N$ for $1 \leq m \leq N$ . Recall also that we have a strictly increasing sequence $\{n_k\}_{k=1}^{\infty }$ for which the subsequence $\{P^{\circ n_k}\}_{k=1}^{\infty }$ converges uniformly to the identity on compact subsets of U (in fact, we can choose $\{n_k \}_{k=1}^{\infty }$ to be the Fibonacci sequence, e.g. [Reference MilnorMil06, Problem C-3]).
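The role of the Fibonacci sequence here is arithmetic: the Fibonacci numbers are the denominators of the continued-fraction convergents of the golden-mean rotation number $\theta = ({\sqrt {5}-1})/{2}$ , so that $\|F_k\theta \| < {1}/{F_{k+1}}$ , where $\|\cdot \|$ denotes the distance to the nearest integer; consequently $\unicode{x3bb} ^{F_k} \to 1$ and the rotation $z \mapsto \unicode{x3bb} z$ , the linearization of P on its Siegel disc, returns uniformly close to the identity at Fibonacci times. A quick numerical check (illustrative only, not part of the proof):

```python
import cmath, math

theta = (math.sqrt(5) - 1) / 2          # golden-mean rotation number
lam = cmath.exp(2j * math.pi * theta)   # lambda = e^{2 pi i theta}

fibs = [1, 2]
while fibs[-1] < 10000:
    fibs.append(fibs[-1] + fibs[-2])

for k in range(len(fibs) - 1):
    q, q_next = fibs[k], fibs[k + 1]
    d = abs(q * theta - round(q * theta))  # distance of q*theta to the integers
    assert d < 1.0 / q_next                # convergent denominators of theta
    # hence rotation by q steps is uniformly close to the identity:
    assert abs(lam ** q - 1) <= 2 * math.pi * d + 1e-9
```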

Define $Q_{n_k}^{n_k}(z)= P_{n_k}^{n_k}\circ P_{n_k -1}^{n_k}\circ \cdots \circ P_{2}^{n_k}\circ P_{1}^{n_k}(z)$ (again we remind the reader that the superscripts $n_k$ here are indices and do not denote powers or iteration) and note that this simplifies so that $Q_{n_k}^{n_k}(z)=\psi _{n_k}^{n_k}\circ P^{\circ n_k} \circ (\psi _{0}^{n_k})^{-1}(z)$ . Essentially the same argument as in the proof of Lemma 3.7 allows us to prove the following.

Lemma 3.8. For any $\varepsilon>0$ and any relatively compact subset A of U, there exists $k_0$ such that

$$ \begin{align*} |(P^{\circ n_k})^{\natural}(z)-1|&<\varepsilon \end{align*} $$

for all z in A, $k\geq k_0$ .

We now state the Polynomial Implementation Lemma. It is by means of this lemma that we create all polynomials constructed in the proofs of Phases I and II. First we make the definition that, for a relatively compact subset A of U and $\delta>0$ , the set $\{z \in U: \rho _U(z, A) < \delta \}$ (where $ \rho _U(z, A)$ is the hyperbolic distance in U from z to A as specified in Definition 2.7) is called the $\delta $ -neighborhood of A. Observe that such a neighborhood is again a relatively compact subset of U.

Lemma 3.9. (The Polynomial Implementation Lemma)

Let $P_{\unicode{x3bb} }$ , $U_{\unicode{x3bb} }$ , $\kappa $ , P, U, $\{n_k\}_{k=1}^{\infty }$ , $\Omega $ , $\Omega '$ , $\gamma $ , $\Gamma $ , and f be as above where, in addition, we also require $f(0) = 0$ . Suppose $A\subset U$ is relatively compact and $\delta $ , M are positive such that if $\hat {A}$ is the $\delta $ -neighborhood of A with respect to $\rho _U$ as above, then we have $f(\hat A) \subset U$ and $\|f^{\natural } \|_{\hat {A}}\leq M$ . Then, for all $\varepsilon> 0$ , there exists $k_0 \ge 1$ (determined by $\kappa $ , the curves $\gamma $ , $\Gamma $ , the function f, as well as A, $\delta $ , M, and $\varepsilon $ ) such that for each $k_1 \ge k_0$ , there exists a $(17+\kappa )$ -bounded finite sequence of quadratic polynomials $\{P_m^{n_{k_1}}\}_{m=1}^{n_{k_1}}$ (which also depends on $\kappa $ , $\gamma $ , $\Gamma $ , f, A, $\delta $ , M, $\varepsilon $ , as well as $k_1$ ) such that $Q_{n_{k_1}}^{n_{k_1}}$ is univalent on A and:

  1. (1) $\rho _U(Q_{n_{k_1}}^{n_{k_1}}(z), f(z))< \varepsilon $ for all $z \in A$ ;

  2. (2) $\|(Q_{n_{k_1}}^{n_{k_1}})^{\natural } \|_{A}\leq M(1+\varepsilon )$ ;

  3. (3) $Q_{n_{k_1}}^{n_{k_1}}(0)=0$ .

Before embarking on the proof, a couple of remarks: first, this result is set up so that the subsequence of iterates $\{n_k\}_{k=1}^{\infty }$ used is always the same. Although we do not require this, it is convenient as it allows us to apply the theorem to approximate many functions simultaneously (which may be of use in some future application) while using the same number of polynomials in each of the compositions we obtain. Second, one can view this result as a weak form of our main theorem (Theorem 1.3), in that it allows one to approximate a single element of ${\mathcal S}$ with arbitrary accuracy using a finite composition of quadratic polynomials.

Proof. Let $\varepsilon ,\delta $ be as above and, without loss of generality, take $\varepsilon <\min \{\delta ,1\}$ . By Lemma A.4, the Euclidean and hyperbolic metrics are equivalent on compact subsets of U, and we can then use Lemma 3.5 to pick $k_0$ sufficiently large so that for all $k_1 \ge k_0$ ,

(3.2) $$ \begin{align} \rho_U((\psi_0^{n_{k_1}})^{-1}(z),z)< \frac{\varepsilon}{3(M+1)},\quad z\in A. \end{align} $$

This also implies that if we let $\check A$ be the ${\delta }/{2}$ -neighborhood of A in U, then, since $\varepsilon < \delta $ ,

(3.3) $$ \begin{align} (\psi_0^{n_{k_1}})^{-1}(A) \subset \check{A}. \end{align} $$

Next, by Lemma 3.7, we can make $k_0$ larger if needed such that for all $k_1 \ge k_0$ ,

(3.4) $$ \begin{align} |((\psi_0^{n_{k_1}})^{-1})^{\natural}(z)-1|< \frac{\varepsilon}{3},\quad z \in A. \end{align} $$

From above, since $\{P^{\circ n_k}\}_{k=1}^{\infty }$ converges locally uniformly to the identity on U (with respect to the Euclidean metric), using Lemma A.4, we can again make $k_0$ larger if necessary to ensure for all $k_1 \ge k_0$ that

(3.5) $$ \begin{align} \rho_U(P^{\circ n_{k_1}}(z),z)<\frac{\varepsilon}{3(M+1)},\quad z \in \check{A}. \end{align} $$

This also implies

(3.6) $$ \begin{align} P^{\circ n_{k_1}}(\check{A}) \subset \hat{A}. \end{align} $$

By Lemma 3.8, we can again make $k_0$ larger if needed such that for all $k_1 \ge k_0$ ,

(3.7) $$ \begin{align} |(P^{\circ n_{k_1}})^{\natural}(z)-1|< \frac{\varepsilon}{3},\quad z \in \check{A}. \end{align} $$

We remark that this is the last of our requirements on $k_0$ and we are now in a position to establish the dependencies of $k_0$ on $\kappa $ , $\gamma $ , $\Gamma $ , f, A, $\delta $ , M, $\varepsilon $ in the statement. To be precise, the requirements on $k_0$ in equation (3.2) depend on $\kappa $ , $\gamma $ , $\Gamma $ , f, A, M, $\varepsilon $ , and $\delta $ , while those in equation (3.4) depend on $\kappa $ , $\gamma $ , $\Gamma $ , f, A, $\varepsilon $ , and $\delta $ , (but not M). Note that the dependency of these two estimates on the curves $\gamma $ , $\Gamma $ , or equivalently on the domains $\Omega $ , $\Omega '$ , (which in turn depend on the scaling factor $\kappa $ ) as well as the function f, arises from the quasiconformal interpolation performed with the aid of Lemma 3.2 which is clearly dependent on these curves and this function. Further, the requirements on $k_0$ in equation (3.5) depend on $\kappa $ , A, $\delta $ , M, and $\varepsilon $ (but not $\gamma $ , $\Gamma $ , or f) while those in equation (3.7) depend on $\kappa $ , A, $\delta $ , and $\varepsilon $ (but not $\gamma $ , $\Gamma $ , f, or M). Finally, for the remaining estimates, equation (3.3) is a direct consequence of equation (3.2), while equation (3.6) follows immediately from equation (3.5) so that none of these three introduces any further dependencies.

Now fix $k_1 \ge k_0$ arbitrarily and let the finite sequence $\{P_m^{n_{k_1}}\}_{m=1}^{k_1}$ be constructed according to the sequence $\{n_k \}_{k=1}^{\infty }$ specified at the start of this section and the prescription given in equation (3.1). Note that this sequence is then $(17+\kappa )$ -bounded in view of Lemma 3.4. By construction, $Q_{n_k}^{n_k}(0)=0$ for every k so that item (3) in the statement above will be automatically satisfied.

Equations (3.1), (3.3), and (3.6), together with the univalence of $P$ on $U$ and the univalence of $f$ on a neighborhood of ${\mathcal K}$, imply that $Q_{n_{k_1}}^{n_{k_1}}$ is univalent on $A$. Next, let $z \in A$ and, using equation (3.2), consider a geodesic segment $\gamma$ connecting $z$ to ${(\psi_{0}^{n_{k_1}})}^{-1}(z)$ which, since $\varepsilon < \delta$, has length smaller than ${\varepsilon}/{3}$. Since $\varepsilon < \min\{\delta, 1\}$, ${\varepsilon}/{3}$ is in turn smaller than ${\delta}/{2}$ and so, by the definition of $\check A$, we have $[\gamma] \subset \check A$. This allows us to apply equations (3.2) and (3.7), together with the hyperbolic M-L estimates (Lemma 2.9) for $P^{\circ n_{k_1}}$, to conclude that the length of $P^{\circ n_{k_1}}(\gamma)$ is at most $(1 + ({\varepsilon}/{3})) {\varepsilon}/{3(M +1)}$, which is smaller than ${\delta}/{2}$ since $\varepsilon < \min\{\delta, 1\}$. As $[\gamma] \subset \check A$, by equation (3.6), $[P^{\circ n_{k_1}}(\gamma)] \subset \hat A$, and we are then able to apply the hyperbolic M-L estimates for $f$ since, by hypothesis, $|f^{\natural}(z)|\leq M$ on $\hat{A}$.

In a similar manner, if instead we consider a geodesic segment connecting z to $P^{\circ n_{k_1}}(z)$ , then, since $\varepsilon < \delta $ , by equation (3.5), this segment again has length less than ${\delta }/{2}$ and starts at $z \in A$ , whence it lies inside $\check A \subset \hat A$ and we are again able to apply the hyperbolic M-L estimates for f to this segment. Recall that $\psi _{n_{k_1}}^{n_{k_1}} = f$ on U in view of the definition of this function using quasiconformal interpolation in Lemma 3.2 and also the fact that the hyperbolic distance between any two points of U is less than or equal to the hyperbolic length of any curve connecting them. Using the triangle inequality and applying the estimates in equations (3.2)–(3.7) (except equation (3.4)) as well as $|f^{\natural }(z)|\leq M$ on $\hat {A}$ from the statement, for each $z \in A$ , since $P^{\circ n_{k_1}} \circ (\psi _{0}^{n_{k_1}})^{-1}(z) \in \hat A$ and $f(\hat A) \subset U$ by hypothesis, we then have

$$ \begin{align*} \rho_U(Q_{n_{k_1}}^{n_{k_1}}(z),f(z)) &= \rho_U(\psi_{n_{k_1}}^{n_{k_1}}\circ P^{\circ n_{k_1}} \circ (\psi_{0}^{n_{k_1}})^{-1}(z),f(z)) \\ &\leq \rho_U(f\circ P^{\circ n_{k_1}} \circ (\psi_{0}^{n_{k_1}})^{-1}(z),f\circ P^{\circ n_{k_1}}(z))\\ &\quad + \rho_U(f\circ P^{\circ n_{k_1}}(z),f(z)) \\ &<M \bigg(1+\frac{\varepsilon}{3}\bigg) \bigg(\frac{\varepsilon}{3(M+1)}\bigg)+ M \bigg(\frac{\varepsilon}{3(M+1)}\bigg ) \\ &<\varepsilon \end{align*} $$

(recall that we assumed $\varepsilon <1$ ), which proves item (1). Also, using the chain rule in equation (2.2) for the hyperbolic derivative, the estimate $|f^{\natural }(z)|\leq M$ on $\hat {A}$ , and equations (3.3), (3.4), (3.6), and (3.7), for each $z \in A$ ,

$$ \begin{align*} |((Q_{n_{k_1}}^{n_{k_1}})^{\natural})(z)| &= |f^{\natural}(P^{\circ n_{k_1}}\circ (\psi_0^{n_{k_1}})^{-1}(z))\cdot (P^{\circ n_{k_1}})^{\natural}((\psi_0^{n_{k_1}})^{-1}(z))\cdot ((\psi_{0}^{n_{k_1}})^{-1})^{\natural}(z)| \\ &\leq M \bigg(1+\frac{\varepsilon}{3}\bigg) \bigg(1+\frac{\varepsilon}{3}\bigg ) \\ &<M(1+\varepsilon), \end{align*} $$

again using $\varepsilon < 1$ at the end, which proves item (2) as desired.

4. Phase I

4.1. Setup

We begin by finding a suitable disc on which $f \circ g^{-1}$ is defined for arbitrary $f,g \in {\mathcal S}$.

Lemma 4.1. If $f,g \in {\mathcal S}$ , then $f\circ g^{-1}$ is defined on $\mathrm {D}(0, ({1}/{12}))$ and

$$ \begin{align*} (f\circ g^{-1})(\mathrm{D}(0, ({1}/{12})))\subset \mathrm{D}(0, \tfrac{1}{3}). \end{align*} $$

Proof. Let $f,g \in {\mathcal S}$. By the Koebe one-quarter theorem (Theorem A.1), we have $\mathrm {D}(0,\tfrac 14) \subset g(\mathbb D)$, so $g^{-1}$ is defined on $\mathrm {D}(0,\tfrac 14)$. Then if $h(w):=4g^{-1}({w}/{4})$ for $w \in \mathbb D$, we have that $h\in {\mathcal S}$ and $g^{-1}(z) = \tfrac 14h(4z)$ for $z \in \mathrm {D}(0,\tfrac 14)$, where $z= {w}/{4}$. Thus, if $|z|\leq {1}/{12}$, we have $|w| \leq \tfrac 13$ and, by the distortion theorems (Theorem A.2), we have that $|h(w)|\leq \tfrac 34$ and $|g^{-1}(z)|\leq ({3}/{16}) < 1$ so that, in particular, $f \circ g^{-1}(z)$ exists. Then, using the distortion theorems again, if $z \in \mathrm {D}(0, ({1}/{12}))$, we have that $|(f \circ g^{-1})(z)| \leq ({48}/{169}) < \tfrac 13$. Thus, $f \circ g^{-1}$ is defined on $\mathrm {D}(0, ({1}/{12}))$ for all $f,g \in {\mathcal S}$ and maps $\mathrm {D}(0,({1}/{12}))$ into $\mathrm {D}(0,\tfrac 13)$ as required.
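For the reader's convenience, here is the distortion arithmetic behind the two numerical bounds above; each step uses the growth estimate $|h(w)|\leq {|w|}/{(1-|w|)^2}$ from Theorem A.2:

$$ \begin{align*} |h(w)|&\leq\frac{|w|}{(1-|w|)^2}\leq\frac{1/3}{(2/3)^2}=\frac{3}{4},\quad |w|\leq\tfrac13, \\ |g^{-1}(z)|&=\tfrac14|h(4z)|\leq\frac{3}{16},\quad |z|\leq\tfrac{1}{12}, \\ |(f\circ g^{-1})(z)|&\leq\frac{3/16}{(1-({3}/{16}))^2}=\frac{48}{169}<\frac{1}{3}. \end{align*} $$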

In the proof of Phase I, we will scale the filled Julia set for the polynomial $P_{\unicode{x3bb} }(z)=\unicode{x3bb} z (1-z)$, where $\unicode{x3bb} = e^{2\pi i (({\sqrt {5}-1})/{2})}$, so that the filled Julia set is a subset of $\mathrm {D}(0,({1}/{12}))$. We are then able to apply $f\circ g^{-1}$ for $f,g \in {\mathcal S}$, these compositions being defined on this filled Julia set. We wish to find a suitable subdomain of this scaled filled Julia set on which we may control the size of the hyperbolic derivative $(f\circ g^{-1})^{\natural }$. There are two possible strategies for doing this: one can either consider a small hyperbolic disc in the Siegel disc, or one can scale $P_{\unicode{x3bb} }$ so that the scaled filled Julia set lies inside a small Euclidean disc about $0$. We found the second option more convenient, as it allows us to consider an arbitrarily large hyperbolic disc inside the scaled Siegel disc on which $|(f\circ g^{-1})'|$ is tame and $|(f\circ g^{-1})^{\natural }|$ is thus easier to control. Lemmas 4.2–4.7 deal with finding a suitable scaling which allows us to obtain good estimates for $|(f\circ g^{-1})^{\natural }|$.

Lemma 4.2. There exists $K_1>0$ such that for all $f,g \in {\mathcal S}$ , if $|z|\leq {1}/{24}$ , then

$$ \begin{align*} |(f\circ g^{-1})(z)-z|\leq K_1|z|^2. \end{align*} $$

Proof. Let $f,g \in {\mathcal S}$ . By Lemma 4.1, the function $f\circ g^{-1}$ is defined on $\mathrm {D}(0, ({1}/{12}))$ . Let $w\in \mathbb D$ , $z = ({1}/{12}) w$ , so that $z \in \mathrm {D}(0,({1}/{12}))$ , and define $h(w)=12(f\circ g^{-1})({w}/{12})$ so that $h \in {\mathcal S}$ . Then, letting $w + \sum _{n=2}^{\infty } {a_n w^n}$ denote the Taylor series about $0$ for h and setting $K_0 = e \sum _{n=2}^{\infty }n^3 ({1}/{2^{n-2}})$ , if $|w|\leq \tfrac 12$ , we have

$$ \begin{align*} |h'(w)-1|&=\bigg|w\sum_{n=2}^{\infty}na_nw^{n-2} \bigg| \\ &\leq |w|\sum_{n=2}^{\infty}n|a_n||w|^{n-2} \\ &\leq |w| e\sum_{n=2}^{\infty}n^3\frac{1}{2^{n-2}} \\ &=K_0|w|, \end{align*} $$

where we used that $|a_n|\leq en^2$ as $h\in {\mathcal S}$ (see e.g. [CG93, Theorem I.1.8]). Let $\gamma =[0,w]$ be the radial line segment from $0$ to $w$. Then, if $|w|\leq \tfrac 12$,

$$ \begin{align*} |h(w)-w|&=\bigg|\int_{\gamma} [ h'(\zeta)-1 ]\,{d}\zeta\bigg| \\ &\leq K_0|w|\int_{\gamma}|\,{d}\zeta| \\ &=K_0|w|^2. \end{align*} $$

Then, if $|z|\leq {1}/{24}$ (so that $|w|\leq \tfrac 12$ ), a straightforward calculation shows

$$ \begin{align*} |(f\circ g^{-1})(z)-z|&\leq 12K_0|z|^2, \end{align*} $$

from which the lemma follows on setting $K_1=12K_0$ .
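Spelled out, the calculation referred to above is just the change of variables $w=12z$: since $(f\circ g^{-1})(z)=\tfrac{1}{12}h(12z)$ and $|w|=12|z|\leq \tfrac 12$,

$$ \begin{align*} |(f\circ g^{-1})(z)-z|=\tfrac{1}{12}|h(w)-w|\leq\tfrac{1}{12}K_0|w|^2=\tfrac{1}{12}K_0(12|z|)^2=12K_0|z|^2. \end{align*} $$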

Recall $P_{\unicode{x3bb} } = \unicode{x3bb} z(1-z)$ and the corresponding Siegel disc $U_{\unicode{x3bb} }$. Now fix $R>0$ arbitrarily and let ${\tilde U}_R$ denote $\Delta _{U_{\unicode{x3bb} }}(0,R)$, the hyperbolic disc of radius $R$ about $0$ in $U_{\unicode{x3bb} }$. Let $\psi _{\unicode{x3bb} }:{U_{\unicode{x3bb} }}\rightarrow \mathbb D$ be the unique Riemann map satisfying $\psi _{\unicode{x3bb} }(0)=0$, $\psi _{\unicode{x3bb} }^{\prime }(0)>0$. Let $\tilde {r}_0=\tilde {r}_0(R):={d}(\partial {\tilde U}_R,\partial {U_{\unicode{x3bb} }})$, the Euclidean distance from $\partial {\tilde U}_R$ to $\partial U_{\unicode{x3bb} }$. As in §3, for $\kappa>0$ arbitrary, set $P:= ({1}/{\kappa }) {P_{\unicode{x3bb} }}(\kappa z)$ and note that $P$ obviously depends on $\kappa$. Then, if $\mathcal K=\mathcal K(\kappa )$ is the filled Julia set for $P$, we have $\mathcal K\subset \mathrm {D}(0, ({2}/{\kappa }))$. Let $U= \{z: \kappa z \in U_{\unicode{x3bb} }\}$ be the corresponding Siegel disc for $P$ and set $U_R=\Delta _{U}(0,R)$. Define $\psi (z) := \psi _{\unicode{x3bb} }(\kappa z)$ and observe that $\psi$ is the unique Riemann map from $U$ to ${\mathbb D}$ satisfying $\psi (0)=0$, $\psi '(0)>0$. Lastly, define $r_0=r_0(\kappa ,R):={d}(\partial U_R, \partial U)$ and note $r_0= {\tilde {r}_0}/{\kappa }$. Observe that $\tilde {r}_0$ and $r_0$ are decreasing in $R$, while we must have $\tilde {r}_0 \le 2$. In what follows, let $P_{\unicode{x3bb} }$, $U_{\unicode{x3bb} }$, $\psi _{\unicode{x3bb} }$, $P$, $U$, $\psi$, $\tilde {r}_0$, and $r_0$ be fixed. For the moment, we let $\kappa>0$ be arbitrary. We will, however, be fixing a lower bound on $\kappa$ in the lemmas which follow.

Lemma 4.3. (Local distortion)

For all $\kappa ,\, R_0>0$ , there exists $C_0=C_0(R_0)$ depending on $R_0$ (in particular, $C_0$ is independent of $\kappa $ ) which is increasing, real-valued, and (thus) bounded on any bounded subset of $[0,\infty )$ such that, if $U_{R_0}$ and $r_0 = r_0(\kappa , R_0)= {d}(\partial U_{R_0}, \partial U)$ are as above and $z_0\in \overline U_{R_0}$ , $z\in U$ with $|z-z_0|\leq s<r_0$ , we have:

  1. (1) $|\psi (z)-\psi (z_0)|\leq {C_0 ({s}/{r_0})}/{(1- ({s}/{r_0}))^2}$ ;

  2. (2) ${(1- ({s}/{r_0}))}/{(1+ ({s}/{r_0}))^3}\leq |{\psi '(z)}/{\psi '(z_0)}| \leq {(1+ ({s}/{r_0}))}/{(1- ({s}/{r_0}))^3}.$

Proof. Set $C_0=C_0(R_0)=2\max _{z\in {\overline {\tilde U}_{R_0}}}|\psi _{\unicode{x3bb} } '(z)| = ({2}/{\kappa })\max _{z\in {\overline {U}_{R_0}}}| \psi '(z)|$ . Then $C_0(R_0)$ does not depend on $\kappa $ and is clearly increasing in $R_0$ , and therefore bounded on any bounded subinterval of $[0,\infty )$ . For $z \in {\mathrm D}(z_0, r_0)$ , set $\zeta = {(z-z_0)}/{r_0}$ and note that if we define $\varphi (\zeta ):= ({\psi (r_0\zeta +z_0)-\psi (z_0)})/{r_0\psi '(z_0)}$ , we have that $\varphi \in {\mathcal S}$ . Applying the distortion theorems (Theorem A.2) to $\varphi $ , we see

$$ \begin{align*} |\varphi(\zeta)|&\leq\frac{|\zeta|}{(1-|\zeta|)^2} \\ &\leq\frac{{s}/{r_0}}{(1- ({s}/{r_0}))^2}, \end{align*} $$

from which we can conclude (using $r_0 = {\tilde r_0}/{\kappa }$ and $\tilde {r}_0 \le 2$ )

$$ \begin{align*} |\psi(z)-\psi(z_0)| \le \frac{{s}/{r_0}}{(1- ({s}/{r_0}))^2}\cdot C_0, \end{align*} $$

which proves item (1). For (2), we again apply the distortion theorems to $\varphi $ and observe

$$ \begin{align*} \frac{1- ({s}/{r_0})}{(1+ ({s}/{r_0}))^3}\leq\frac{1-|\zeta|}{(1+|\zeta|)^3}\leq |\varphi'(\zeta)|\leq\frac{1+|\zeta|}{(1-|\zeta|)^3} \leq\frac{1+ ({s}/{r_0})}{(1- ({s}/{r_0}))^3}, \end{align*} $$

from which item (2) follows as $\varphi '(\zeta )= {\psi '(z)}/{\psi '(z_0)}$ .

Lemma 4.4. For any $R_0> 0$ and $\eta>0$ , there exists $\kappa _0=\kappa _0(R_0, \eta ) \ge 48$ such that, for all $\kappa \geq \kappa _0$ , $f,g\in {\mathcal S}$ and $z\in U$ ,

$$ \begin{align*} |(f\circ g^{-1})(z)-z|\leq \eta r_0, \end{align*} $$

where $r_0 = r_0(\kappa , R_0) = {d}(\partial U_{R_0}, \partial U)$ is as above. In particular, this holds for $z \in \overline U_{R_0}$ .

Proof. Fix $\kappa _0\geq 48$ . By Lemma 4.2, we have, on $U\subset \mathrm {D}(0, ({2}/{\kappa }))\subset \mathrm {D}(0,({1}/{24}))$ , that $|(f\circ g^{-1})(z)-z|<K_1|z|^2$ for some $K_1>0$ (note that $f\circ g^{-1}$ is defined on U by Lemma 4.1). So $|(f\circ g^{-1})(z)-z|< {4K_1}/{\kappa ^2}$ since $|z|< {2}/{\kappa }$ . Then make $\kappa _0$ larger if necessary to ensure that ${4K_1}/{\kappa ^2} \le \eta r_0 = \eta {\tilde r_0}/{\kappa }$ for all $\kappa \geq \kappa _0$ (where we recall that $\tilde r_0 = \tilde r_0(R_0) = {d}(\partial {\tilde U}_{R_0},\partial U_{\unicode{x3bb} }) = \kappa r_0$ ). In fact, $\kappa _0=\max \{48, ({4K_1}/{\eta \tilde r_0})\}$ will suffice and since $\tilde r_0$ depends only on $R_0$ , we have the correct dependencies for $\kappa _0$ and the proof is complete.

Lemmas 4.2–4.4 are technical lemmas that assist in proving the following result, which will be essential for controlling the hyperbolic derivative of $\psi$.

Lemma 4.5. Given $R_0> 0$ , there exists $\kappa _0= \kappa _0(R_0) \ge 48$ such that for all $\kappa \geq \kappa _0$ , $f,g\in {\mathcal S}$ , and $z\in \overline U_{R_0}$ , $(f\circ g^{-1})(z) \in U$ and:

  1. (1) $({1-|\psi (z)|^2})/({1-|\psi ((f\circ g^{-1})(z))|^2})\leq {10}/{9}$ ;

  2. (2) ${|\psi '((f\circ g^{-1})(z))|}/{|\psi '(z)|}\leq \tfrac 98.$

Proof. For $R> 0$, set $c_R:= ({e^R-1})/({e^R+1})$. Then, if we fix $z_0 \in \overline U_{R_0}$, we have that $|\psi (z_0)|\leq c_{R_0}$ (recall that $\rho _{\mathbb D}(0,z)=\log (({1+|z|})/({1-|z|}))$ for $z \in \mathbb D$ ). Thus, $c_{R_0}<1$ and

(4.1) $$ \begin{align} 1-|\psi(z_0)|^2 \ge 1-c_{R_0}^2>0. \end{align} $$
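Here the bound $|\psi(z_0)|\leq c_{R_0}$ used above follows because the Riemann map $\psi$ is an isometry from $U$ to $\mathbb D$ for their respective hyperbolic metrics:

$$ \begin{align*} \log\frac{1+|\psi(z_0)|}{1-|\psi(z_0)|}=\rho_{\mathbb D}(0,\psi(z_0))=\rho_U(0,z_0)\leq R_0, \end{align*} $$

and solving this inequality for $|\psi(z_0)|$ gives $|\psi(z_0)|\leq ({e^{R_0}-1})/({e^{R_0}+1})=c_{R_0}$.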

As in the proof of Lemma 4.3, set $C_0=C_0(R_0)=2\max _{z\in {\overline {\tilde U}_{R_0}}}|\psi _{\unicode{x3bb} }'(z)|$. Let $0<\eta _1=\eta _1(R_0)<\tfrac 12$ be such that

(4.2) $$ \begin{align} \frac{C_0 \eta_1}{(1-\eta_1)^2}\leq \frac{1}{2} (\log 10 - \log 9)(1-c_{R_0 + \log 3}^2) \end{align} $$

and note that $\eta _1$ depends only on $R_0$ . Using Lemma 4.4, we can pick $\kappa _1=\kappa _1(R_0, \eta _1) = \kappa _1(R_0)>0$ such that, if $\kappa \geq \kappa _1$ , then $|(f\circ g^{-1})(z)-z|<\eta _1 r_0$ on $U \supset \overline U_{R_0}$ (recall the definitions of $\tilde {r_0} = \tilde {r_0}(R_0)$ and $r_0 = r_0(\kappa , R_0)$ given before Lemma 4.3).

Now set $s:=|(f\circ g^{-1})(z_0)-z_0|$. We have $|(f\circ g^{-1})(z_0)-z_0|=s<\eta _1 r_0 < {r_0}/{2}$ as $\eta _1<\tfrac 12$. Then, recalling the definition of $r_0 = {d}(\partial U_{R_0}, \partial U)$, we have $(f\circ g^{-1})(z_0) \in {\mathrm D}(z_0, ({r_0}/{2})) \subset {\mathrm D}(z_0, r_0) \subset U$ as in the statement so that, in particular, $\psi ((f\circ g^{-1})(z_0))$ is well defined. Again using $\rho _{\mathbb D}(0,z)=\log (({1+|z|})/({1-|z|}))$ for $z \in \mathbb D$ combined with the Schwarz lemma for the hyperbolic metric (since ${\mathrm D}(z_0, r_0) \subset U$, for $|w - z_0| \le {r_0}/{2}$ we have $\rho _U(z_0,w)\le \rho _{{\mathrm D}(z_0,r_0)}(z_0,w) \le \log (({1+ ({1}/{2})})/({1- ({1}/{2})}))=\log 3$), we must have that $(f\circ g^{-1})(z_0) \in \overline \Delta _U(z_0, \log 3)$. By the triangle inequality for the hyperbolic metric, $(f\circ g^{-1})(z_0) \in \overline \Delta _U(0, R_0 + \log 3) = \overline U_{R_0 + \log 3}$ so that $|\psi ((f\circ g^{-1})(z_0))| \le c_{R_0 + \log 3}$. Then, similarly to equation (4.1),

(4.3) $$ \begin{align} 1-|\psi((f\circ g^{-1})(z_0))|^2 \ge 1-c_{R_0 + \log 3}^2> 0. \end{align} $$

We may then apply item (1) of Lemma 4.3 and equation (4.2) to see that

$$ \begin{align*} |\psi(z_0)-\psi((f\circ g^{-1})(z_0))|&\leq \frac{C_0 ({s}/{r_0})}{(1- ({s}/{r_0}))^2} \\ &\leq \frac{C_0\eta_1}{(1-\eta_1)^2} \\ &\leq \frac{1}{2} (\log 10 - \log 9)(1-c_{R_0 + \log 3}^2). \end{align*} $$

Thus, using the triangle inequality (and the fact that $\psi $ is a Riemann mapping to the unit disc which has radius $1$ ), we see that

$$ \begin{align*} |(1-|\psi(z_0)|^2)-(1-|\psi((f\circ g^{-1})(z_0))|^2)|< (\log 10 - \log 9)(1-c_{R_0 + \log 3}^2). \end{align*} $$

Making use of equations (4.1) and (4.3), noting that $c_R$ is an increasing function of $R$, and applying the mean value theorem to the logarithm function on the interval $[1-c_{R_0 + \log 3}^2,\infty )$, we have

$$ \begin{align*} |\!\log(1-|\psi(z_0)|^2)-\log(1-|\psi((f\circ g^{-1})(z_0))|^2)|<\log 10-\log 9 \end{align*} $$

from which item (1) follows easily. For item (2), let $0<\eta _2<1$ (e.g. $\eta _2 = {1}/{35}$ ) be such that

$$ \begin{align*} \frac{1+\eta_2}{(1-\eta_2)^3}<\frac{9}{8}. \end{align*} $$

By Lemma 4.4, using the same $R_0> 0$ as above, we can pick $\kappa _2=\kappa _2(R_0, \eta _2) = \kappa _2(R_0)>48$ such that for all $\kappa \geq \kappa _2$ , if $z\in U \supset \overline U_{R_0}$ ,

$$ \begin{align*} |(f\circ g^{-1})(z)-z|<\eta_2r_0. \end{align*} $$

Using the same $z_0\in \overline U_{R_0}$ as above, in a similar way to how we used item (1) of Lemma 4.3 above, we can apply item (2) of the same result to see that

$$ \begin{align*} \frac{|\psi'((f\circ g^{-1})(z_0))|}{|\psi'(z_0)|}\leq \frac{9}{8}, \end{align*} $$

as desired. The result follows if we set $\kappa _0=\kappa _0(R_0)= \max \{\kappa _1(R_0),\kappa _2(R_0)\}$ .

Lemma 4.6. For all $\kappa \geq \kappa _0:=576$ , for any $f,g \in {\mathcal S}$ and $z \in {\overline U}$ ,

$$ \begin{align*} |(f\circ g^{-1})'(z)|\leq \frac{6}{5}. \end{align*} $$

Proof. As in the proof of Lemma 4.2, define $h(w)=12(f\circ g^{-1})({w}/{12})$ . Note that h is defined on all of ${\mathbb D}$ by Lemma 4.1 and that $h\in {\mathcal S}$ . Let $z={w}/{12}$ . Using the distortion theorems (Theorem A.2), we have that, for $z \in {\mathrm D}(0, ({1}/{12}))$ ,

(4.4) $$ \begin{align} |(f\circ g^{-1})'(z)|\leq \frac{1+|12z|}{(1-|12z|)^3}. \end{align} $$

If $\kappa \geq \kappa _0$, we have that $\mathrm {D}(0,({2}/{\kappa })) \subset \mathrm {D}(0,({2}/{\kappa _0}))=\mathrm {D}(0,({1}/{288}))$. Now let $z \in \overline {U}$; since ${\overline U}\subset \mathcal K \subset \mathrm {D}(0,({2}/{\kappa }))\subset \mathrm {D}(0,({1}/{288}))$, we have $|z|< {1}/{288}$ for $\kappa \geq \kappa _0$. Thus, the right-hand side of equation (4.4) is less than ${25\cdot 24^2}/{23^3}$, which in turn is less than $\tfrac 65$ for all $\kappa \geq \kappa _0$ as desired.
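The final numerical comparison may be checked directly:

$$ \begin{align*} \frac{1+\tfrac{1}{24}}{(1-\tfrac{1}{24})^3}=\frac{25\cdot 24^2}{23^3}=\frac{14400}{12167}<\frac{6}{5}, \end{align*} $$

the last inequality holding since $5\cdot 14400=72000<73002=6\cdot 12167$.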

As all the previous lemmas hold for all $\kappa $ sufficiently large, applying them in tandem in the next result is valid. In general, each lemma may require a different choice of $\kappa _0$ , but we may choose the maximum so that all results hold simultaneously. The purpose of Lemmas 4.5 and 4.6 is to prove the following.

Lemma 4.7. Given $R_0> 0$ , there exists $\kappa _0= \kappa _0(R_0) \ge 576$ such that, for all $\kappa \geq \kappa _0$ , for any $f,g \in {\mathcal S}$ , and $z \in \overline U_{R_0}$ , $(f\circ g^{-1})(z) \in U$ and

$$ \begin{align*} |(f\circ g^{-1})^{\natural}(z)|\leq \frac{3}{2}. \end{align*} $$

Proof. Applying Lemmas 4.5 and 4.6 to the definition in equation (2.1) of the hyperbolic derivative taken with respect to the hyperbolic metric of U, and letting $\kappa _0$ be the maximum of the two lower bounds on $\kappa $ in these lemmas, we have that there exists a $\kappa _0 \ge 576$ depending on $R_0$ such that, for all $\kappa \geq \kappa _0$ , and $z \in \overline U_{R_0}$ , $(f\circ g^{-1})(z) \in U$ and

$$ \begin{align*} |(f\circ g^{-1})^{\natural}(z)|&=\frac{1-|\psi(z)|^2}{1-|\psi((f\circ g^{-1})(z))|^2}\cdot\frac{2\cdot|\psi'((f\circ g^{-1})(z))|}{2\cdot|\psi'(z)|}\cdot |(f\circ g^{-1})'(z)| \\ &\leq \frac{10}{9}\cdot\frac{9}{8}\cdot\frac{6}{5} \\ &= \frac{3}{2}, \end{align*} $$

as desired.

4.2. Statement and proof of Phase I

Lemma 4.8. (Phase I)

Let $P_{\unicode{x3bb} }$, $U_{\unicode{x3bb} }$, $\kappa$, $P$, and $U$ be as above. Let $R_0> 0$ be given, and let $\tilde {U}_{R_0}$ and $U_{R_0}$ also be as above. Then, there exists $\kappa _0=\kappa _0(R_0) \ge 576$ such that, for all $\kappa \geq \kappa _0$, $\varepsilon>0$, and $N \in \mathbb N$, if $\{f_i\}_{i=0}^{N+1}$ is a collection of mappings with $f_i \in {\mathcal S}$ for $i=0,1,2,\ldots , N+1$ and $f_0=f_{N+1}= \mathrm {Id}$, there exists an integer $M_N$ and a $(17+\kappa )$-bounded finite sequence $\{P_m\}_{m=1}^{(N+1)M_N}$ of quadratic polynomials, both of which depend on $R_0$, $\kappa$, $N$, the functions $\{f_i\}_{i=0}^{N+1}$, and $\varepsilon$, such that, for each $1\leq i \leq N+1$:

  1. (1) $Q_{i M_N}(0)=0$ ;

  2. (2) $Q_{iM_N}$ is univalent on $U_{2R_0}$ ;

  3. (3) $\rho _{U}(f_i(z), Q_{iM_N}(z))<\varepsilon $ on $U_{2R_0}$ ;

  4. (4) $\|Q_{iM_N}^{\natural } \|_{U_{R_0}}\leq 7$ .

Before proving this result, we remark first that the initial function $f_0 = \mathrm {Id}$ in the sequence $\{f_i\}_{i=0}^{N+1}$ does not actually get approximated. We included this function purely for convenience, as it allows us to write all the functions approximated in the proof via the Polynomial Implementation Lemma (Lemma 3.9) in the common form $f_{i+1} \circ f_i^{-1}$, $0 \le i \le N$.

Second, we can view this result as a weak form of our main theorem in that it allows us to approximate finitely many elements of ${\mathcal S}$ with arbitrary accuracy using a finite composition of quadratic polynomials. Phase I is thus intermediate in strength between the Polynomial Implementation Lemma (Lemma 3.9) and our main result (Theorem 1.3).

Proof. Step 1: Setup. Without loss of generality, make $\varepsilon $ smaller if necessary to ensure $\varepsilon <R_0$ . Let $\kappa _0= \kappa _0(R_0) \ge 576$ be as in the statement of Lemma 4.7 so that the conclusions of this lemma as well as those of Lemmas 4.5 and 4.6 also hold. Then for all $\kappa \geq \kappa _0$ , we have $U \subset \mathcal K \subset \mathrm {D}(0,({2}/{\kappa }))\subset \mathrm {D}(0,({1}/{288})) \subset \mathrm {D}(0, ({1}/{12}))$ . Note that the last inclusion implies that, if $f,g \in {\mathcal S}$ , then $f\circ g^{-1}$ is defined on U in view of Lemma 4.1.

Step 2: Application of the Polynomial Implementation Lemma. First apply Lemma 4.7 with $5R_0 + 1$ replacing $R_0$ so that, for all $\kappa \geq \kappa _0$ , if $f,g \in {\mathcal S}$ , we have $(f\circ g^{-1})(U_{5R_0 +1}) \subset U$ and

(4.5) $$ \begin{align} \| (f \circ g^{-1})^{\natural} \|_{U_{5R_0}}\leq \| (f \circ g^{-1})^{\natural} \|_{U_{5R_0+1}} \le\frac{3}{2}. \end{align} $$

Note that, by Lemmas 2.8 and 2.9, since $U_{2R_0}$ is then hyperbolically convex while $(f \circ g^{-1})(0)=0$ , this implies

(4.6) $$ \begin{align} (f \circ g^{-1})(U_{2R_0}) &\subset U_{3R_0}. \end{align} $$

We observe that since $\mathrm {Id} \in {\mathcal S}$ , in particular, we have $f(U_{2R_0})\subset U_{3R_0}$ for all $f \in {\mathcal S}$ .

Fix $\kappa \ge \kappa _0$ and for each $0 \leq i \leq N$ , using equation (4.5), apply the Polynomial Implementation Lemma (Lemma 3.9), with $\Omega =\mathrm {D}(0, ({1}/{24}))$ , $\Omega '=\mathrm {D}(0,\tfrac 12)$ , $\gamma =\mathrm {C}(0,({1}/{24}))$ , $\Gamma =\mathrm {C}(0,\tfrac 12)$ (where both of these circles are positively oriented with respect to the round annulus of which they form the boundary), $f=f_{i+1}\circ f_{i}^{-1}$ , $A=U_{5R_0}$ , $\delta = 1$ (and hence, ${\hat {A}}= U_{5R_0+1}$ ), $M = \tfrac {3}{2}$ , and $\varepsilon $ replaced with ${\varepsilon }/{3^{N}}$ . Note that $f(0)=0$ , $(f\circ g^{-1})(U_{5R_0 +1}) \subset U$ (as noted above), and that, in view of Lemma 4.1, f is analytic and injective on a neighborhood of $\overline \Omega $ and maps $\gamma $ inside $\mathrm {D}(0,\tfrac 13)$ which lies inside $\Gamma $ , so that $(f, \mathrm {Id})$ is indeed an admissible pair on $(\gamma , \Gamma )$ in the sense given in Definition 3.3 in §3 on the Polynomial Implementation Lemma, which then allows us to obtain a quasiconformal homeomorphism of $\hat {\mathbb C}$ using Lemma 3.2.

Let $M_N$ be the maximum of the integers $n_{k_0}$ in the statement of Lemma 3.9 for each of the $N+1$ applications of this lemma above. Note that each $k_0$ depends on $\kappa $ , the curves $\gamma =\mathrm {C}(0,({1}/{24}))$ , $\Gamma =\mathrm {C}(0,\tfrac 12)$ , and the individual function $f = f_{i+1}\circ f_{i}^{-1}$ being approximated, as well as A, $\delta $ , the upper bound M on the hyperbolic derivative (which, in our case, by equation (4.5) is $\tfrac {3}{2}$ for every function we are approximating) and finally $\varepsilon $ . Thus, $M_N$ , in addition to N, then also depends on $R_0$ , $\kappa $ , the finite sequence of functions $\{f_i\}_{i=0}^{N+1}$ , and, finally (recalling that here we have $\gamma =\mathrm {C}(0,({1}/{24}))$ , $\Gamma =\mathrm {C}(0,\tfrac 12)$ and $A=U_{5R_0}$ , $\delta = 1$ , $M = \tfrac {3}{2}$ ), $\varepsilon $ . From these $N+1$ applications, we also then obtain (after a suitable and obvious labeling) a finite $(17 + \kappa )$ -bounded sequence $\{P_m\}_{m=1}^{(N+1)M_N}$ such that each $Q_{iM_N,(i+1)M_N}$ is univalent on $U_{5R_0}$ , and we have, for each $0 \le i \le N$ and each $z \in U_{5R_0}$ ,

(4.7) $$ \begin{align} \rho_U(Q_{iM_N,(i+1)M_N}(z),f_{i+1}\circ f_{i}^{-1}(z) )<\frac{\varepsilon}{3^N}. \end{align} $$

It also follows from Lemma 3.9 that each $Q_{iM_N,(i+1)M_N}$ depends on N, $R_0$ , $\kappa $ , the functions $f_i$ , $f_{i+1}$ , and $\varepsilon $ so that we obtain the correct dependencies for $M_N$ and $\{P_m\}_{m=1}^{(N+1)M_N}$ in the statement. Lastly, by item (3) of Lemma 3.9, $Q_{iM_N,(i+1)M_N}(0)=0$ , for each i, proving item (1) in the statement above.

Step 3: Estimates on the compositions $\{Q_{iM_N}\}_{i=1}^{N+1}$ . We use the following claim to prove items (2) and (3) in the statement (note that we do not require part (ii) of the claim below for this, but we will need it in proving item (4) later).

Claim 4.9. For each $1\leq j \leq N+1$ , we have that $Q_{jM_N}$ is univalent on $U_{2R_0}$ and, for each $z \in U_{2R_0}$ ,

$$ \begin{align*} \mathrm{(i)}& \; \rho_U(Q_{jM_N}(z),f_{j}(z))< \frac{\varepsilon}{3^{N+1-j}}, \\ \mathrm{(ii)}& \; \rho_U(Q_{jM_N}(z),0)<4 R_0. \end{align*} $$

Note that the error in this polynomial approximation for $j=N+1$ is the largest, as this error combines errors from the largest number of prior mappings.

Proof. We prove the claim by induction on j. Let $z \in U_{2R_0}$ . For the base case, we have that univalence and part (i) in the claim follow immediately from our applications of the Polynomial Implementation Lemma and in particular from equation (4.7) (with $j = i+1 = 1$ so that $i=0$ ) since $f_0= \mathrm {Id}$ . For part (ii), using part (i) (or equation (4.7)) and equation (4.6), compute

$$ \begin{align*} \rho_U(Q_{M_N}(z),0) &\leq \rho_U(Q_{M_N}(z),f_{1}(z))+\rho_U(f_{1}(z),0) \\ &<\frac{\varepsilon}{3^{N}}+3R_0 \\ &<4R_0, \end{align*} $$

which completes the proof of the base case since we had assumed $\varepsilon < R_0$ . Now suppose the claim holds for some $1\leq j <N+1$ . Then,

$$ \begin{align*} \rho_U(Q_{(j+1)M_N}(z),f_{j+1}(z) )& \leq \rho_U(Q_{jM_N,(j+1)M_N}\circ Q_{jM_N}(z),(f_{j+1}\circ f_{j}^{-1})\circ Q_{jM_N}(z)) \\ &\quad +\rho_U((f_{j+1}\circ f_{j}^{-1})\circ Q_{jM_N}(z),(f_{j+1}\circ f_{j}^{-1})\circ f_j(z) ). \end{align*} $$

Now $Q_{jM_N}(z) \in U_{4R_0} \subset U_{5R_0}$ by the induction hypothesis, so equation (4.7) implies that the first term on the right-hand side in the inequality above is less than ${\varepsilon }/{3^N}$ . Again by the induction hypothesis, $Q_{jM_N}(z) \in U_{4R_0}\subset U_{5R_0}$ , while we also have $f_j(z)\in U_{3R_0}\subset U_{5R_0}$ by equation (4.6). Thus, equation (4.5), the hyperbolic convexity of $U_{5R_0}$ from Lemmas 2.8, 2.9, and the induction hypothesis imply that the second term in the inequality is less than $\tfrac 32\cdot {\varepsilon }/{3^{N+1-j}}$ . Thus, we have $\rho _U(Q_{(j+1)M_N}(z),f_{j+1}(z) ) < {\varepsilon }/{3^{N+1-(j+1)}}$ , proving the first part of the claim.

Also, using what we just proved, equation (4.6), and our assumption that $\varepsilon < R_0$ ,

$$ \begin{align*} \rho_U(Q_{(j+1)M_N}(z),0) &\leq \rho_U(Q_{(j+1)M_N}(z),f_{j+1}(z))+\rho_U(f_{j+1}(z),0) \\ &<\frac{\varepsilon}{3^{N+1-(j+1)}}+3R_0 \\ &<4R_0, \end{align*} $$

which proves part (ii) in the claim. Univalence of $Q_{(j+1)M_N}$ follows by hypothesis as $Q_{jM_N}(U_{2R_0})\subset U_{4R_0}$, while $Q_{jM_N,(j+1)M_N}$ is univalent on $A=U_{5R_0}\supset U_{4R_0}$ by the Polynomial Implementation Lemma as stated immediately before equation (4.7). This completes the proof of the claim, from which items (2) and (3) in the statement of Phase I follow easily.

Step 4: Proof of item (4) in the statement. To finish the proof, we need to give a bound on the size of the hyperbolic derivatives of the compositions $Q_{iM_N}$ , $1 \le i \le N+1$ . It will be of essential importance to us later that this bound not depend on the number of functions being approximated, the reason being that, in the inductive construction in Lemma 6.2, the error from the prior application of Phase II (Lemma 5.17) needs to pass through all these compositions while remaining small. This means that the estimate on the size of the hyperbolic derivative in item (2) of the statement of Lemma 3.9 is too crude for our purposes and so we have to proceed with greater care.

Let ${d}\rho _U(z)$ be the hyperbolic length element in $U$ and write ${d}\rho _U(z)=\sigma _U(z)|{d}z|$, where the hyperbolic density $\sigma _U$ (as introduced in the proof of Lemma 3.7) is continuous and positive on $U$ (e.g. [KL07, Theorem 7.2.2]) and therefore uniformly continuous on $U_{4R_0}$, as $U_{4R_0}$ is relatively compact in $U$. Let $\sigma = \sigma (R_0)> 0$ be the infimum of $\sigma _U$ on $U_{4R_0}$ so that

(4.8) $$ \begin{align} \sigma_U(z) \ge \sigma, \quad z \in U_{4R_0}. \end{align} $$

Let $z \in U_{2R_0}$ and observe that, since $\kappa \ge \kappa _0 \ge 576$, $U \subset {\mathrm D}(0, ({1}/{288})) \subset {\mathbb D}$. Then item (3) in the statement together with the Schwarz lemma for the hyperbolic metric (e.g. [CG93, Theorems 4.1 or 4.2]) give, for $1 \leq i \leq N+1$, $\rho _{\mathbb D}(Q_{iM_N}(z),f_i(z))\leq \rho _{U}(Q_{iM_N}(z),f_i(z))<\varepsilon$. If $\gamma$ is a geodesic segment in ${\mathbb D}$ from $Q_{iM_N}(z)$ to $f_i(z)$, we see that

$$ \begin{align*} \varepsilon&>\rho_{U}(Q_{iM_N}(z),f_i(z)) \\ &\geq \rho_{\mathbb D}(Q_{iM_N}(z),f_i(z)) \\ &= \int_{\gamma}\,{d}\rho_{\mathbb D} \\ &= \int_{\gamma}\frac{2|{d}w|}{1-|w|^2} \\ &\geq \int_{\gamma}2|{d}w| \\ &=2l(\gamma) \\ &\geq 2|Q_{iM_N}(z)-f_i(z)| \end{align*} $$

and so, in particular,

(4.9) $$ \begin{align} |Q_{iM_N}(z)-f_i(z)|<\varepsilon. \end{align} $$

Now suppose further that $z \in U_{R_0}$ and set

(4.10) $$ \begin{align} \delta_0 = \delta_0(R_0) = \min_{w\in \partial U_{R_0}}{d}(w,\partial U_{({3}/{2})R_0}), \end{align} $$

where ${d}(\cdot \,, \cdot )$ denotes Euclidean distance. By [New51, Theorem VII.9.1], the winding number of $\partial U_{({3}/{2})R_0}$ (suitably oriented) around $z$ is $1$. Then, using [Con78, Corollary IV.5.9] together with the standard distortion estimates in Theorem A.2 and equation (4.9) above, we obtain

$$ \begin{align*} |Q_{iM_N}^{\prime}(z)|&\leq |f_i^{\prime}(z)|+|Q_{iM_N}^{\prime}(z)-f_i^{\prime}(z)| \\ &= |f_i^{\prime}(z)|+\bigg| \frac{1}{2\pi i} \int_{\partial U_{({3}/{2})R_0}}\frac{Q_{iM_N}(w)-f_i(w)}{(w-z)^2}\,{d}w\bigg| \\ &\leq \frac{1+ ({1}/{288})}{(1- ({1}/{288}))^3} + \frac{\varepsilon}{2\pi\delta_0^2}l(\partial U_{({3}/{2})R_0}), \end{align*} $$

where $l(\partial U_{({3}/{2})R_0})$ is the Euclidean length of $\partial U_{({3}/{2})R_0}$. By making $\varepsilon$ smaller if needed, we can thus ensure, for $z \in U_{R_0}$, that

(4.11) $$ \begin{align} |Q_{iM_N}^{\prime}(z)|\leq \tfrac{3}{2}. \end{align} $$

We can make $\varepsilon $ smaller still if needed to guarantee that, if $z,w \in U_{4R_0}$ and $|z-w|<\varepsilon $ , then, by uniform continuity of $\sigma _U$ on $U_{4R_0}$ ,

(4.12) $$ \begin{align} |\sigma_U(z)-\sigma_U(w)|<\sigma. \end{align} $$

Note that both equations (4.11) and (4.12) required us to make $\varepsilon$ smaller, but these requirements depended only on $R_0$ and, in particular, not on the sequence of polynomials we have constructed. Although this means we may possibly need to run the earlier part of the argument again to find a new integer $M_N$ and then construct a new polynomial sequence $\{P_m\}_{m=1}^{(N+1)M_N}$, our requirements on $\varepsilon$ above will then automatically be met. Alternatively, these requirements on $\varepsilon$ could be made before the sequence is constructed. However, we decided to make them here for the sake of convenience.

If $z \in U_{R_0}$ , we then have

$$ \begin{align*} |Q_{iM_N}^{\natural}(z)|&\leq |f_i^{\natural}(z)| + |Q_{iM_N}^{\natural}(z)-f_i^{\natural}(z)| \\ &=|f_i^{\natural}(z)| + \bigg|\frac{\sigma_U(Q_{iM_N}(z))}{\sigma_U(z)}Q_{iM_N}^{\prime}(z)-\frac{\sigma_U(f_{i}(z))}{\sigma_U(z)}f_{i}^{\prime}(z)\bigg| \\ &\leq |f_i^{\natural}(z)| + \bigg|\frac{\sigma_U(Q_{iM_N}(z))}{\sigma_U(z)}Q_{iM_N}^{\prime}(z)-\frac{\sigma_U(f_{i}(z))}{\sigma_U(z)}Q_{iM_N}^{\prime}(z)\bigg| \\ & \quad + \bigg|\frac{\sigma_U(f_i(z))}{\sigma_U(z)}Q_{iM_N}^{\prime}(z)-\frac{\sigma_U(f_{i}(z))}{\sigma_U(z)}f_{i}^{\prime}(z)\bigg|. \end{align*} $$

We need to bound each of the three terms on the right-hand side of the above inequality. Recall that, as $g=\mathrm {Id} \in {\mathcal S}$ , we have that $|f_i^{\natural }(z)|\leq \tfrac 32$ by equation (4.5). For the second term, by equations (4.6), (4.8), (4.9), (4.11), and (4.12), and part (ii) in Claim 4.9, we have

$$ \begin{align*} &\bigg|\frac{\sigma_U(Q_{iM_N}(z))}{\sigma_U(z)}Q_{iM_N}^{\prime}(z)-\frac{\sigma_U(f_{i}(z))}{\sigma_U(z)}Q_{iM_N}^{\prime}(z)\bigg| \\ &\quad = \frac{1}{|\sigma_U(z)|}\cdot |Q_{iM_N}^{\prime}(z)|\cdot |\sigma_U(Q_{iM_N}(z))-\sigma_U(f_{i}(z))| \\ &\quad \leq \frac{1}{\sigma}\cdot \frac{3}{2}\cdot \sigma = \frac{3}{2}. \end{align*} $$

For the third and final term, recall that we chose $\kappa _0 = \kappa _0(R_0)$ sufficiently large to ensure that the conclusions of Lemmas 4.5 and 4.6 hold. We can then apply Lemmas 4.5 and 4.6, together with equation (4.11) to obtain that

$$ \begin{align*} \bigg|\frac{\sigma_U(f_i(z))}{\sigma_U(z)}Q_{iM_N}^{\prime}(z)-\frac{\sigma_U(f_{i}(z))}{\sigma_U(z)}f_{i}^{\prime}(z)\bigg| &\leq \bigg|\frac{\sigma_U(f_{i}(z))}{\sigma_U(z)}\bigg|\cdot (|Q_{iM_N}^{\prime}(z)|+|f_{i}^{\prime}(z)|) \\ &\leq \frac{10}{9}\cdot \frac{9}{8}\bigg(\frac{3}{2}+\frac{6}{5} \bigg) \\ &< 4. \end{align*} $$

Thus,

$$ \begin{align*} |Q_{iM_N}^{\natural}(z)|&\leq \frac{3}{2}+\frac{3}{2}+4 =7, \end{align*} $$

as desired.

5. Phase II

The approximations in Phase I inevitably involve errors, and correcting these errors is the purpose of Phase II. This correction comes at a price: it is valid only on a domain smaller than the one on which the error itself is originally defined; in other words, there is an unavoidable loss of domain. Two things work in our favor here and stop this from getting out of control: the first is the Fitting Lemma (Lemma 5.15), which shows that the loss of domain can be controlled and in fact diminishes to zero as the size of the error to be corrected tends to zero, while the second is that the error of the correction itself can be made arbitrarily small, which allows us to control the errors in subsequent approximations.

We will be interpolating functions between Green's lines of a scaled version of the polynomial $P_{\unicode{x3bb} }= \unicode{x3bb} z(1-z)$, where $\unicode{x3bb} = e^{2\pi i (({\sqrt {5}-1})/{2})}$. If we denote the corresponding Green's function by G, we will want to choose h small enough so that the region between the Green's lines $\{z: G(z) =h \}$ and $\{z: G(z) =2h \}$ is small in a sense to be made precise later. This will eventually allow us to control the loss of domain. On the other hand, we will want h large enough so that, if we distort the inner Green's line $\{z: G(z) =h \}$ slightly (with a suitably conjugated version of that same error function), the distorted region between the two lines is still a conformal annulus, which will then allow us to invoke the Polynomial Implementation Lemma (Lemma 3.9). First, however, we must prove several technical lemmas.

5.1. Setup and the target and fitting lemmas

We begin this section with continuous versions of Definition A.7 of Carathéodory convergence and of local uniform convergence and continuity on varying domains [Reference ComerfordCom13a, Definition 3.1].

Definition 5.1. Let $\mathcal W=\{(W_{h},w_{h}) \}_{h \in I}$ be a family of pointed domains indexed by a non-empty set $I\subset \mathbb R$. We say that $\mathcal W$ varies continuously in the Carathéodory topology at $h_0 \in I$ or is continuous at $h_0$ if, for any sequence $\{h_n\}_{n=1}^{\infty }$ in I tending to $h_0$, $(W_{h_n},w_{h_n})\rightarrow (W_{h_0},w_{h_0})$ as $n \to \infty $. If this property holds for all $h \in I$, we say $\mathcal W$ varies continuously in the Carathéodory topology over I.

For each $h \in I$, let $g_h$ be an analytic function defined on $W_h$. If $h_0 \in I$ and $\mathcal W$ is continuous at $h_0$ as above, we say $g_h$ converges locally uniformly to $g_{h_0}$ on $W_{h_0}$ if, for every compact subset K of $W_{h_0}$ and every sequence $\{h_n\}_{n=1}^{\infty }$ in I tending to $h_0$, $g_{h_n}$ converges uniformly to $g_{h_0}$ on K as $n \to \infty $.

Finally, if we let $\mathcal G = \{g_h\}_{h \in I}$ be the corresponding family of functions, we say that $\mathcal G$ is continuous at $h_0 \in I$ if $g_h$ converges locally uniformly to $g_{h_0}$ on $W_{h_0}$ as above. If this property holds for all $h \in I$ , we say $\mathcal G$ is continuous over I.

Definition 5.2. Let $I \subset \mathbb R$ be non-empty and let $\{\gamma _h \}_{h\in I}$ be a family of Jordan curves indexed over I. We say that $\{\gamma _h \}_{h\in I}$ is a continuously varying family of Jordan curves over I if we can find a continuous function $F: \mathbb {T}\times I \rightarrow \mathbb C$ which is injective in the first coordinate such that, for each fixed $h \in I$, $F(t,h)$ is a parameterization of $\gamma _h$.

Recall that a Jordan curve $\gamma $ divides the plane into exactly two complementary components whose common boundary is $[\gamma ]$ (e.g. [Reference MunkresMun00, Theorem 63.4] or [Reference NewmanNew51, Theorem V.10.2]). It is well known that we can use winding numbers to distinguish between the two complementary components of $[\gamma ]$. More precisely, we can parameterize (that is, orient) $\gamma $ such that $n(\gamma , z) = 1$ for those points in the bounded component of $\mathbb C \setminus [\gamma ]$, while $n(\gamma , z) = 0$ for those points in the unbounded component (e.g. [Reference NewmanNew51, Corollary 2 to Theorem VII.8.7 combined with Theorem VII.9.1]).

Lemma 5.3. Let $I \subset \mathbb R$ be non-empty and $\{\gamma _h \}_{h\in I}$ be a continuously varying family of Jordan curves indexed over I. For each $h \in I$ , let $W_h$ be the Jordan domain which is the bounded component of $\hat {\mathbb C} \setminus [ \gamma _h ]$ , and let $w:I\rightarrow \mathbb C$ be continuous with $w(h) \in W_h$ for all h. Then the family $\{(W_h,w(h)) \}_{h \in I}$ varies continuously in the Carathéodory topology over I.

Proof. The continuity of w implies item (1) of Carathéodory convergence in the sense of Definitions A.7, 5.1. For item (2), fix $h_0 \in I$ , let $K \subset W_{h_0}$ be compact, and let $z \in K$ . Set $\delta :={\mathrm d}(K,\partial W_{h_0})$ . By the uniform continuity of F on compact subsets of $\mathbb {T}\times I$ , we can find $\eta>0$ such that, for each $h \in I$ with $|h - h_0| < \eta $ ,

$$ \begin{align*} |\gamma_h(t) - \gamma_{h_0}(t)| < \frac{\delta}{2} \quad \mbox{for all } t\in \mathbb T, \end{align*} $$

and $\gamma _h$ is thus homotopic to $\gamma _{h_0}$ in $\mathbb C \setminus K$. We observe that we have not assumed that $I \cap (h_0-\eta , h_0 + \eta )$ is an interval, so we may not be able to use the parameterizations $F(\cdot \,,h)$ to make the homotopy directly. However, using the estimate above, it is a routine matter to construct the desired homotopy using convex linear combinations. By the above remark on winding numbers and Cauchy's theorem, one then obtains

$$ \begin{align*} n(\gamma_h,w)= n(\gamma_{h_0},w) = 1 \quad \mbox{for all } w \in K. \end{align*} $$

Thus, if $|h - h_0| < \eta $ , then $K \subset W_h$ , and item (2) of Carathéodory convergence follows readily from this.

To show item (3) of Carathéodory convergence, let $\{h_n\}$ be any sequence in I which converges to $h_0$ and suppose N is an open connected set containing $w(h_0)$ such that $N\subset W_{h_n}$ for infinitely many n. Without loss of generality, we may pass to a subsequence and assume that $N\subset W_{h_n}$ for all n. Let $z \in N$ and connect z to $w(h_0)$ by a curve $\beta $ in N. As $[\beta ]$ is compact, there exists $\delta>0$ such that a Euclidean $\delta $ -neighborhood of $[\beta ]$ is contained in N and thus avoids $\gamma _{h_n}$ for all n. By the continuity of F, this neighborhood also avoids $\gamma _{h_0}$. Since $w(h_0)$ and z are connected by $\beta $, which avoids $\gamma _{h_0}$, they are in the same region determined by $\gamma _{h_0}$, so that $n(\gamma _{h_0},z)=n(\gamma _{h_0},w(h_0))$. However, since by hypothesis $w(h_0) \in W_{h_0}$, by [Reference NewmanNew51, Corollary 2 to Theorem VII.8.7 combined with Theorem VII.9.1], $n(\gamma _{h_0},w(h_0))=1$, whence $z \in W_{h_0}$. As z is arbitrary, we have $N \subset W_{h_0}$, which gives item (3) of Carathéodory convergence, and the result then follows.

Recall that a Riemann surface is said to be hyperbolic if its universal cover is the unit disc ${\mathbb D}$ . For a simply connected domain $U \subset \mathbb C$ , this is equivalent to U being a proper subset of $\mathbb C$ . The next lemma makes use of the following definition, originally given in [Reference ComerfordCom14] for families of pointed domains of finite connectivity. Recall that for a domain $U \subset \mathbb C$ , we use the notation $\delta _U(z)$ for the Euclidean distance from a point z in U to the boundary of U.

Definition 5.4. [Reference ComerfordCom14, Definition 6.1]

Let $\mathcal V = \{(V_\alpha , v_\alpha )\}_{\alpha \in A}$ be a family of hyperbolic simply connected domains and let $\mathcal U = \{(U_\alpha , u_\alpha )\}_{\alpha \in A}$ be another family of hyperbolic simply connected domains indexed over the same set A, where $U_\alpha \subset V_\alpha $ for each $\alpha $ . We say that $\mathcal U$ is bounded above and below or just bounded in $\mathcal V$ with constant $K \ge 1$ if:

  1. (1) $U_\alpha $ is a subset of $V_\alpha $ which lies within hyperbolic distance at most K of $v_\alpha $ in $V_\alpha $ ;

  2. (2)

    $$ \begin{align*} \delta_{U_\alpha} (u_\alpha) \ge \frac{1}{K} \delta_{V_\alpha} (u_\alpha).\end{align*} $$

In this case, we write $\textrm {pt} \sqsubset {\mathcal U} \sqsubset {\mathcal V}$ .

The essential point of this definition is that the domains of the family $\mathcal U$ are neither too large nor too small in those of the family $\mathcal V$ . For families of pointed domains of higher connectivity, two extra conditions are required relating to certain hyperbolic geodesics of the family $\mathcal U$ . See [Reference ComerfordCom14] for details.

Lemma 5.5. Let $I \subset \mathbb R$ be non-empty, $\mathcal U=\{(U_h,v_h) \}_{h\in I}$ be a family of pointed Jordan domains, and $\mathcal V=\{(V_h,v_h) \}_{h\in I}$ be a family of pointed hyperbolic simply connected domains with the same base points, both indexed over I. If $\textrm {pt} \sqsubset \mathcal U \sqsubset \mathcal V$, $\mathcal V$ varies continuously in the Carathéodory topology over I, and $\{\partial U_h\}_{h \in I}$ is a continuously varying family of Jordan curves over I, then $R^{\mathrm {ext}}_{(V_h,v_h)}U_h$ is continuous on I.

Before embarking on the proof, we observe that, since both families $\mathcal U$ and $\mathcal V$ have the same basepoints, it follows from Lemma 5.3 and the fact that $\mathcal V$ varies continuously in the Carathéodory topology over I that $\mathcal U$ also varies continuously in the Carathéodory topology over I. However, we do not need to make use of this in the proof below.

Proof. Using Definition 5.2, as $\{\partial U_h\}_{h\in I}$ is a continuously varying family of Jordan curves, let $F:\mathbb {T} \times I\rightarrow \mathbb {C}$ be a continuous mapping, injective in the first coordinate, such that, for each fixed h, $F(t,h)$ is a parameterization of $\partial U_h$. We first need to uniformize the domains $V_h$ by mapping to the unit disc ${\mathbb D}$, where we can compare hyperbolic distances directly. So let $\varphi _h$ be the unique normalized Riemann map from $V_h$ to ${\mathbb D}$ satisfying $\varphi _h(v_h)=0$, $\varphi _h^{\prime }(v_h)>0$.

By Definitions 2.2, 5.4, since $\textrm {pt} \sqsubset {\mathcal U} \sqsubset {\mathcal V}$ , there exists $K \ge 1$ such that $R^{\mathrm {ext}}_{(V_h,v_h)}U_h\leq K$ and thus $\varphi _h(U_h)\subset \Delta _{\mathbb D}(0,K) = \mathrm {D}(0, ({(e^K-1)}/{(e^K+1)}))$ . Also, for any $h_0 \in I$ , we know from Theorem A.9 that $\varphi _h$ converges to $\varphi _{h_0}$ locally uniformly on $V_{h_0}$ as $h \to h_0$ since $(V_h,v_h)\rightarrow (V_{h_0},v_{h_0})$ in the sense of Definition 5.1. Now, set ${\tilde \varphi }(z,h):=\varphi _h(z)$ .
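For reference, the identification $\Delta _{\mathbb D}(0,K) = \mathrm {D}(0, ({(e^K-1)}/{(e^K+1)}))$ used above is the standard computation with the hyperbolic density $2\,|{d}z|/(1-|z|^2)$ on ${\mathbb D}$ (the normalization consistent with the formulas above): for $0<s<1$,

$$ \begin{align*} \rho_{\mathbb D}(0,s)=\int_0^s\frac{2\,{d}t}{1-t^2}=\log\frac{1+s}{1-s}, \end{align*} $$

so that $\rho _{\mathbb D}(0,s)= K$ precisely when $s = ({e^K-1})/({e^K+1})$.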

Claim 5.6. For all $h_0 \in I$ and $z_0 \in V_{h_0}$ , ${\tilde \varphi }(z,h)$ is jointly continuous in $z,h$ on a suitable neighborhood of $(z_0,h_0)$ .

Proof. Let $\varepsilon>0$ . Let $\{h_n \}$ be a sequence in I which converges to $h_0$ and $\{z_n \}$ be a sequence in $V_{h_0}$ which converges to $z_0$ . Using item (2) of Carathéodory convergence (Definition A.7) and the fact that $V_{h_0}$ is open, we have that $z_n \in V_{h_n}$ for all sufficiently large n. Then, for n sufficiently large so that $z_n$ and $h_n$ are sufficiently close to $z_0$ and $h_0$ , respectively, since $\varphi _h$ converges to $\varphi _{h_0}$ locally uniformly on $V_{h_0}$ and $\varphi _{h_0}$ is continuous, we have

$$ \begin{align*} |{\tilde \varphi}(z_n,h_n)-{\tilde \varphi}(z_0,h_0)|&=|\varphi_{h_n}(z_n)-\varphi_{h_0}(z_0)| \\ &\leq |\varphi_{h_n}(z_n)-\varphi_{h_0}(z_n)|+|\varphi_{h_0}(z_n)-\varphi_{h_0}(z_0)| \\ &<\frac{\varepsilon}{2}+\frac{\varepsilon}{2} \\ &=\varepsilon, \end{align*} $$

which proves the claim.

Using this claim, if we now define $\psi (t,h):={\tilde \varphi }(F(t,h),h)$ , we have that $\psi (t,h)$ is jointly continuous in t and h on $\mathbb T \times I$ .

Now let $h_0 \in I$ be arbitrary and let $\{h_n\}$ be any sequence in I which converges to $h_0$. If we write $R_n=R^{\mathrm {ext}}_{(V_{h_n},v_{h_n})}U_{h_n}$ and $R_0=R^{\mathrm {ext}}_{(V_{h_0},v_{h_0})}U_{h_0}$, we then wish to show that $R_n \rightarrow R_0$ as $n \to \infty $. As $\textrm {pt} \sqsubset {\mathcal U} \sqsubset {\mathcal V}$, Definition 5.4 gives $R_n \in [0,K]$ for every n, so we may choose a subsequence $\{R_{n_k} \}$ which converges to some finite limit in $[0,K]$. If we can show that this limit is $R_0$, we will have completed the proof. In view of Lemma 2.5, for each k, we have that $R_{n_k}$ is attained at some $z_{n_k}\in \partial U_{h_{n_k}}$, so we may write $R_{n_k}=\rho _{V_{h_{n_k}}}(v_{h_{n_k}},z_{n_k})=\rho _{\mathbb D}(0,{\tilde \varphi }(z_{n_k},h_{n_k}))$. Now $z_{n_k}=F(t_{n_k},h_{n_k})$ for some $t_{n_k}\in \mathbb {T}$, so $R_{n_k}=\rho _{\mathbb D}(0,\psi (t_{n_k},h_{n_k}))$. As $h_{n_k}\rightarrow h_0$, applying the compactness of $\mathbb T$ and passing to a further subsequence if necessary, we have that $(t_{n_k},h_{n_k})\rightarrow (t_0,h_0)$ for some $t_0 \in \mathbb {T}$, so that $\psi (t_{n_k},h_{n_k})$ converges to $\psi (t_0, h_0)$ by the continuity of $\psi $ and $R_{n_k} = \rho _{\mathbb D}(0,\psi (t_{n_k},h_{n_k}))$ then converges to the limit $\tilde R_0 := \rho _{\mathbb D}(0,\psi (t_0,h_0))$. Observe that there is no loss of generality in passing to such a further subsequence.

Claim 5.7. $R^{\mathrm {ext}}_{(V_{h_0},v_{h_0})}U_{h_0} =\, R_0 = \tilde R_0 = \rho _{\mathbb D}(0,\psi (t_{0},h_{0})) = \rho _{V_{h_0}}(v_{h_0}, F(t_0, h_0)).$

Proof. Suppose not. Since $\partial U_h$ is a continuously varying family of Jordan curves over I, $F(t, h_0) \in \partial U_{h_0}$ for any $t \in \mathbb T$. In view of Lemma 2.5, this means that the external hyperbolic radius for $U_{h_0}$ is not attained at $F(t_0, h_0)$, and so there must exist $\tilde t_0 \in \mathbb {T}$ such that $\rho _{V_{h_0}}(v_{h_0}, F(t_0, h_0)) = \tilde R_0 < R_0 = \rho _{V_{h_0}}(v_{h_0}, F(\tilde t_0, h_0))$, that is, $\rho _{\mathbb D}(0, \psi (t_0, h_0)) < \rho _{\mathbb D}(0, \psi (\tilde t_0, h_0))$, whence $|\psi (t_0,h_0)|<|\psi ({\tilde t_0}, h_0)|$. Choose a sequence $\{({\tilde t_{n_k}},h_{n_k}) \}$ in $\mathbb {T}\times I$ which converges to $({\tilde t_0}, h_0)$. Then, by the joint continuity of $\psi $, there exists $k_0 \in \mathbb N$ such that, for all $k\geq k_0$, we have $|\psi ({\tilde t_{n_k}},h_{n_k})|>|\psi (t_{n_k},h_{n_k})|$, which, again by Lemma 2.5, contradicts the fact that $R_{n_k}=\rho _{\mathbb D}(0,\psi (t_{n_k},h_{n_k}))$ is the maximum of $\rho _{\mathbb D}(0,\psi (t,h_{n_k}))$ over $t \in \mathbb T$. This completes the proof of both the claim and the lemma.

Recall that we had $P_{\unicode{x3bb} } = \unicode{x3bb} z(1-z)$ , where $\unicode{x3bb} = e^{2\pi i (({\sqrt {5}-1})/{2})}$ . For $\kappa \ge 1$ , we then defined $P = ({1}/{\kappa }) {P_{\unicode{x3bb} } }(\kappa z)$ and let G be the Green’s function for this polynomial. For each $h> 0$ , set $V_h:=\{z \in \mathbb C \; : \; G(z)<h \}$ —see Figure 3 for an illustration showing two of these domains.

Lemma 5.8. The family $\{\partial V_h \}_{h>0}$ gives a continuously varying family of Jordan curves.

Proof. Let P be as above, let ${\mathcal K}$ be the filled Julia set for P, and let $\varphi :\hat {\mathbb C} \setminus \mathcal K \rightarrow \hat {\mathbb C} \setminus {\overline {\mathbb D}}$ be the associated Böttcher map. Then the map $F:\mathbb {T}\times (0,\infty )\rightarrow \mathbb C$, $F(e^{i\theta },h)= \varphi ^{-1}(e^{h+i\theta })$, is the desired mapping which yields a continuously varying family of Jordan curves.
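Although it plays no role in the proofs, the Green's lines $\partial V_h$ are easy to approximate numerically, which may help the reader picture Figure 3. Since the leading coefficient of $P_{\unicode{x3bb} }$ has modulus $1$, we have $G(z)=\log |z|+o(1)$ as $z\rightarrow \infty $ together with the functional equation $G(P_{\unicode{x3bb} }(z))=2G(z)$, so G can be computed by iterating until the orbit escapes. The sketch below (for the unscaled polynomial $P_{\unicode{x3bb} }$, that is, with $\kappa = 1$; the escape threshold and iteration cap are arbitrary choices) is purely illustrative.

```python
import cmath
import math

# P_lambda(z) = lambda z (1 - z) with lambda = e^{2 pi i (sqrt(5)-1)/2}
LAM = cmath.exp(2j * math.pi * (math.sqrt(5) - 1) / 2)

def P(z):
    return LAM * z * (1 - z)

def green(z, escape=1e100, max_iter=200):
    """Approximate the Green's function G of P_lambda at z.

    Uses G(z) = lim_n 2^{-n} log|P^n(z)|: iterate until the orbit
    passes the escape threshold, then read off the (truncated) limit.
    Orbits that stay bounded numerically lie in the filled Julia set,
    where G = 0.
    """
    n = 0
    while abs(z) <= escape and n < max_iter:
        z = P(z)
        n += 1
    if abs(z) <= 1.0:
        return 0.0  # bounded orbit: z is (numerically) in the filled Julia set
    return math.log(abs(z)) / (2 ** n)

# A Green's line {G = h} is then approximately a level curve of `green`,
# and the domain V_h = {G < h} shrinks to the filled Julia set as h -> 0+.
```

For instance, `green(0)` returns $0$ (the point $0$ lies in the Siegel disc), one has `green(4) > green(3) > 0` (points further from ${\mathcal K}$ have larger G), and the functional equation $G(P_{\unicode{x3bb} }(z))=2G(z)$ holds up to truncation error.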

Lemma 5.9. $(V_h,0)\rightarrow (U,0)$ as $h\rightarrow 0_+$ .

Figure 3 The filled Julia set ${\mathcal K}$ for P with the Green’s lines $\partial V_h = \{z: G(z)=h\}$ and $\partial V_{2h} =$ $\{z: G(z)=2h\}$ .

Proof. By appealing to Definitions A.7, 5.1, and Theorem A.8, we can make use of the Carathéodory kernel version of Carathéodory convergence to prove this. So let $h_n$ be any sequence of positive numbers such that $h_n\rightarrow 0$ as $n \to \infty $ . From above, we will then be done if we can show that the Carathéodory kernel of $\{(V_{h_n},0) \}_{n=1}^{\infty }$ , as well as that of every subsequence of this sequence of pointed domains, is U.

Let $\{(V_{h_{n_k}},0) \}_{k=1}^{\infty }$ be an arbitrary subsequence of $\{(V_{h_n},0) \}_{n=1}^{\infty }$ (which could possibly be all of $\{(V_{h_n},0) \}_{n=1}^{\infty }$ ) and let W be the Carathéodory kernel of this subsequence $\{(V_{h_{n_k}},0) \}_{k=1}^{\infty }$. Since $U \subset V_{h}$ for every $h>0$, clearly $U\subseteq W$. To show containment in the other direction, let $z\in W$ be arbitrary and construct a path $\gamma $ from $0$ to z in W. By definition of W as the Carathéodory kernel of the domains $\{(V_{h_{n_k}},0) \}_{k=1}^{\infty }$, the track $[\gamma ]$ is contained in $V_{h_{n_k}}$ for all k sufficiently large, so that $G \equiv 0$ on $[\gamma ]$. In particular, the iterates of P are bounded on $[\gamma ]$ and, as z was arbitrary, on W, which immediately implies that $W \subset \mathcal K$. Since W is open, $W \subset \text {int } \mathcal K$. Moreover, since W is connected, W is then contained in a Fatou component for P and, since $0 \in W$, $W \subseteq U$. Since we have already shown $U \subseteq W$, we have $W=U$, as desired.

As in the discussion in the proof of Phase I in §4 just before Lemma 4.3, let ${\mathcal K}$ be the filled Julia set for P and let U be the Siegel disc about $0$ for P. Again, for $R>0$ , define $U_R:= \Delta _U(0,R)$ .

For the remainder of this section, we will be working extensively with these hyperbolic discs $U_R$ of radius R about $0$ in U. At this point, we fix $r_0> 0$ and restrict ourselves to $R \ge r_0$ (we will also impose an upper bound on R just before stating the Target Lemma (Lemma 5.13)).

Again, let $\psi :U\rightarrow \mathbb D$ be the unique normalized Riemann map from U to ${\mathbb D}$ satisfying $\psi (0)=0$ , $\psi '(0)>0$ . For $h> 0$ , let $\psi _{2h}:V_{2h}\rightarrow \mathbb D$ be the unique normalized Riemann map from $V_{2h}$ to ${\mathbb D}$ satisfying $\psi _{2h}(0)=0$ , $\psi _{2h}^{\prime }(0)>0$ . Set ${\tilde R}=R_{(V_{2h},0)}^{\mathrm {int}}U_R$ and define ${\tilde V_{2h}}=\Delta _{V_{2h}}(0,{\tilde R})$ . Let $\varphi _{2h}:{\tilde V_{2h}}\rightarrow V_{2h}$ be the unique conformal map from ${\tilde V_{2h}}$ to $V_{2h}$ normalized so that $\varphi _{2h}(0)=0$ and $\varphi _{2h}^{\prime }(0)>0$ . An important fact to note is that ${\tilde V_{2h}}$ is round in the conformal coordinates of $V_{2h}$ , that is, $\psi _{2h}(\tilde V_{2h})$ is a disc (about $0$ ). This is an essential point we will be making use of later in the ‘up’ portion of Phase II. We now prove a small lemma concerning this conformal disc ${\tilde V_{2h}}$ .

Lemma 5.10. For $R \ge r_0$ , we have the following:

  1. (1) there exists $d_0>0$ , determined by $\kappa $ and $r_0$ such that

    $$ \begin{align*} {d}(0,\partial U_R)\geq d_0 \end{align*} $$
    (where ${d}(0,\partial U_R)$ denotes the Euclidean distance from $0$ to $\partial U_R$ );
  2. (2) given any finite upper bound $h_0 \in (0,\infty )$ , there exists $\rho _0>0$ , determined by $r_0$ and $h_0$ , such that, for all $h\in (0,h_0]$ , we have that the hyperbolic radius $R_{(V_{2h},0)}{\tilde V_{2h}}$ of ${\tilde V_{2h}}$ in $V_{2h}$ about $0$ satisfies

    $$ \begin{align*} R_{(V_{2h},0)}{\tilde V_{2h}}\geq \rho_0. \end{align*} $$

Proof. Since $R \ge r_0$ , we have that $\partial U_R$ is the image under $\psi ^{-1}$ of the circle ${\mathrm C}(0, s)$ in ${\mathbb D}$ , where $s \ge s_0 := ({e^{r_0} - 1})/({e^{r_0} + 1})$ . Item (1) then follows on applying the Koebe one-quarter theorem (Theorem A.1).
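In fact, an admissible value of $d_0$ can be written down explicitly (a sketch, using the growth-theorem lower bound $|g(w)|\geq {|w|}/{(1+|w|)^2}$ for $g \in \mathcal S$, which is part of the standard distortion estimates): applying this to $\psi ^{-1}(w)/(\psi ^{-1})'(0) \in {\mathcal S}$, every $\xi = \psi ^{-1}(w) \in \partial U_R$ with $|w| = s \geq s_0$ satisfies

$$ \begin{align*} |\xi| \geq (\psi^{-1})'(0)\,\frac{s}{(1+s)^2} \geq (\psi^{-1})'(0)\,\frac{s_0}{4}, \end{align*} $$

since $s \mapsto s/(1+s)^2$ is increasing on $[0,1)$, so one may take $d_0 = ({s_0}/{4})\,(\psi ^{-1})'(0)$; the dependence on $\kappa $ enters through $(\psi ^{-1})'(0)$, since U scales like $1/\kappa $.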

For item (2), using Lemma 2.5, since $\partial {\tilde V_{2h}}$ is the hyperbolic ‘incircle’ about 0 of $\partial U_R$ in the hyperbolic metric of $V_{2h}$ , we have that for all $h\in (0,h_0]$ , there exists $z_{2h} \in \partial {\tilde V_{2h}} \cap \partial U_R$ . By item (1), we have $|z_{2h}|\geq d_0$ . However, as the domains $\{V_{2h} \}_{h\in (0,h_0] }$ are increasing in h, there exists $D_0$ depending only on $\kappa $ and $h_0$ such that for all $z\in U$ , and for all $h \in (0,h_0]$ , we have $\delta _{V_{2h}}(z)\leq D_0$ (where $\delta _{V_{2h}}(z)$ is the Euclidean distance from z to $\partial V_{2h}$ ). Letting $\rho _{2h}$ be the hyperbolic radius about 0 of ${\tilde V_{2h}}$ in $V_{2h}$ , we have

$$ \begin{align*} \rho_{2h} = \int_{\gamma}\,{d}\rho_{V_{2h}}(z), \end{align*} $$

where $\gamma $ is a geodesic segment in $V_{2h}$ from $0$ to $z_{2h}$. Then, using Lemma A.4, we have

$$ \begin{align*} \rho_{2h} &= \int_{\gamma}\,{d}\rho_{V_{2h}}(z) \\ &\geq \frac{1}{2}\int_{\gamma}\frac{1}{\delta_{V_{2h}}(z)}|{d}z| \\ &\geq \frac{1}{2D_0}l(\gamma) \\ &\geq \frac{1}{2D_0}|z_{2h}| \\ &\geq \frac{d_0}{2D_0}, \end{align*} $$

from which the desired lower bound follows by setting $\rho _0= {d_0}/({2D_0})$ (note that in the above, we use l to denote Euclidean arc length). Finally, the fact that $\rho _0$ does not depend on the scaling factor $\kappa $ follows immediately from the conformal invariance of the hyperbolic metric of $V_{2h}$ with respect to (Euclidean) scaling.

Now define ${\tilde V_h}:= \varphi _{2h}^{-1}(V_h)$ and recall that ${\tilde V_{2h}}=\varphi _{2h}^{-1}(V_{2h})$ . Further, define ${\check R}(h):=R_{(V_{2h},0)}^{\mathrm {ext}}V_h$ and note that the function ${\check R}(h)$ does not depend on the scaling factor $\kappa $ , while by conformal invariance, we have ${\check R}(h) = R_{(\tilde V_{2h},0)}^{\mathrm {ext}}\tilde V_h$ .

Lemma 5.11. ${\check R}(h)$ is continuous on $(0,\infty )$ .

Proof. This follows easily from Lemmas 5.3, 5.5, and 5.8, once we note that, by Lemmas A.4 and 5.8, the family $(V_h,0)$ is bounded above and below in the family $(V_{2h},0)$, where h is allowed to range over any closed bounded subset I of $(0, \infty )$.

Further, we have the following.

Lemma 5.12. ${\check R}(h) \rightarrow \infty $ as $h \rightarrow 0_+$ .

Proof. By Lemma 5.9 and Theorem A.9, $\psi _{2h}$ converges locally uniformly on U to $\psi $ as $h \to 0_+$ (in the sense given in Definition 5.1), where we recall that $\psi _{2h}$ and $\psi $ are the suitably normalized Riemann maps from $V_{2h}$ and U, respectively, to the unit disc (these were introduced in the discussion before Lemma 5.10).

Now let $R> 0$ be large and let $z \in \partial U_R$, where $U_R$ is the hyperbolic disc of radius R about $0$ in U introduced above. Since $\rho _{V_{2h}}(0,z)=\rho _{\mathbb D}(0,\psi _{2h}(z))\rightarrow \rho _{\mathbb D}(0,\psi (z))=R$ as $h \to 0_+$ by the above, we have that $\rho _{V_{2h}}(0, z) \ge R-1$ for all h sufficiently small, so that, by the definition of external hyperbolic radius (Definition 2.2), we must have $R_{(V_{2h},0)}^{\mathrm {ext}}U_R \ge R-1$. Since $U_R \subset U \subset V_h$, we must have ${\check R}(h):=R_{(V_{2h},0)}^{\mathrm {ext}}V_h \ge R_{(V_{2h},0)}^{\mathrm {ext}}U \ge R_{(V_{2h},0)}^{\mathrm {ext}}U_R \ge R-1$. The result then follows on letting R tend to infinity.

At this point, we choose $0 < r_0 < R_0 \le {\pi }/{2}$ and restrict ourselves to $R \in [r_0, R_0]$ . The upper bound ${\pi }/{2}$ is chosen so that the disc $U_R$ as well as its image under any conformal mapping whose domain of definition contains U is star-shaped (about the image of $0$ —see Lemma A.6).

Given $\varepsilon _1>0$, using the hyperbolic metric of U, construct an open $2\varepsilon _1$ -neighborhood of $\partial {\tilde V_{2h}}$ which we will denote by ${\hat N}$. We now fix our upper bound $h_0$ on the value of the Green's function $G(z)$. Recall the lower bound $\rho _0$ on the hyperbolic radius about $0$ of $\tilde V_{2h}$ in $V_{2h}$ as in item (2) of the statement of Lemma 5.10. Recall also the scaling factor $\kappa $ and that $U \subset \mathrm {D}(0, ({2}/{\kappa }))$. We now state and prove one of the most important lemmas we need to prove Phase II (Lemma 5.17).

Lemma 5.13. (Target Lemma)

There exist an upper bound ${\tilde \varepsilon _1}\in (0, ({\rho _0}/{2}))$ and a continuous function $T:(0,{\tilde \varepsilon _1}]\rightarrow (0, \infty )$ , both of which are determined by $h_0$ and $r_0$ such that, for all $h\in (0,h_0]$ and $R \in [r_0, R_0]$ , we have:

  1. (1) $R^{\mathrm {int}}_{({\tilde V_{2h}},0)}({\tilde V_{2h}} \setminus {\hat N})\geq T(\varepsilon _1)$ for all $\varepsilon _1 \in (0,{\tilde \varepsilon _1}]$ ;

  2. (2) $T(\varepsilon _1) = \tfrac {1}{2}\log ({1}/{\varepsilon _1}) + C_0$ on $(0,{\tilde \varepsilon _1}]$ , where $C_0 = C_0(h_0, r_0)$ , so that, in particular;

  3. (3) $T(\varepsilon _1)\rightarrow \infty $ as $\varepsilon _1\rightarrow 0_+$ .

Before embarking on the proof, we remark that item (1) in the statement of Lemma 5.13 will help us to interpolate in the 'during' portion of Phase II. Item (3) will be vital for the Fitting Lemma (Lemma 5.15); it allows us to conclude that $h\rightarrow 0$ as $\varepsilon _1 \rightarrow 0_+$ (see the statement of the Fitting Lemma), which is key to controlling the inevitable loss of domain incurred in correcting the errors in our approximations from Phase I (Lemma 4.8). We observe that, even though we require $R \in [r_0, R_0]$, the upper bound $R_0$ does not appear in the dependencies for $\tilde \varepsilon _1$ and the function T above. The reason for this is that we apply the upper bound $R_0 \le {\pi }/{2}$ in the proof, which eliminates the dependence on $R_0$. Lastly, we observe that, although the domain $\tilde V_{2h}$ by definition depends on R (as does the mapping $\varphi _{2h}: \tilde V_{2h} \rightarrow V_{2h}$ ), $\tilde \varepsilon _1$ and $T(\varepsilon _1)$ do not depend on R since we are obtaining estimates which work simultaneously for all $R \in [r_0, R_0]$.

Proof. We first deduce the existence of ${\tilde \varepsilon _1}$ . Regarding the upper bound ${\rho _0}/{2}$ on $\tilde \varepsilon _1$ in the statement: we note that, if $\varepsilon _1$ is too large, then we would actually have ${\tilde V_{2h}} \subset {\hat N}$ so that ${\tilde V_{2h}} \setminus {\hat N}=\emptyset $ . Recall that, by item (2) of Lemma 5.10, we have that $\rho _0>0$ is such that for all $R \in [r_0, R_0]$ and $h \in (0, h_0]$ , we have $R_{(V_{2h},0)}{\tilde V_{2h}}\geq \rho _0$ . Using the Schwarz lemma for the hyperbolic metric (e.g. [Reference Carleson and GamelinCG93, Theorems I.4.1 or I.4.2]), we see that $R^{\mathrm {int}}_{(U,0)}{\tilde V_{2h}} \geq R_{(V_{2h},0)}{\tilde V_{2h}}\geq \rho _0$ , so setting ${\tilde \varepsilon _1}:= {\rho _0}/{4} < {\rho _0}/{2}$ implies that, if $\varepsilon _1 \le \tilde \varepsilon _1$ , then $0 \in {\tilde V_{2h}} \setminus {\hat N}$ and, in particular, ${\tilde V_{2h}} \setminus {\hat N}\neq \emptyset $ (so that the internal hyperbolic radius of this set is well defined in view of Definition 2.2). Note that, in view of Lemma 5.10, since $\rho _0$ depends on $r_0$ and $h_0$ , the quantity $\tilde \varepsilon _1$ inherits these dependencies.

Recall the lower bound $d_0 = d_0(\kappa , r_0)> 0$ from item (1) of the statement of Lemma 5.10 for which we have ${d}(0,\partial U_R)\geq d_0$ so that, if $\xi \in \partial U_R$, then $|\xi |\geq d_0$. With the distortion theorems in mind, applied to $\psi _{2h}^{-1}$, we define

$$ \begin{align*} r_1&:=\frac{e^{\pi/2}-1}{e^{\pi/2}+1}, \\ D_1&:=\bigg(\frac{1+r_1}{1-r_1}\bigg)^2=e^{\pi}. \end{align*} $$

Note that $r_1$ is chosen so that $\mathrm {D}(0,r_1)$ has hyperbolic radius ${\pi }/{2}$ in ${\mathbb D}$ , that is, $\mathrm {D}(0,r_1)=\Delta _{\mathbb D}(0,({\pi }/{2}))$ . By the Schwarz lemma for the hyperbolic metric, since $\tilde V_{2h} \subset U_R$ , $U \subset V_{2h}$ , and $R\leq R_0 \le {\pi }/{2}$ , we have

$$ \begin{align*} R_{(V_{2h},0)}{\tilde V_{2h}} = R^{\mathrm{ext}}_{(V_{2h},0)}{\tilde V_{2h}} \le R^{\mathrm{ext}}_{(U,0)}{\tilde V_{2h}} \le R^{\mathrm{ext}}_{(U,0)}{U_R} = R_{(U,0)}{U_R} = R \le \frac{\pi}{2} \end{align*} $$

(recall that ${\tilde V_{2h}}$ and $U_R$ are round in the conformal coordinates of $V_{2h}$ , U, respectively, so that the internal and external hyperbolic radii coincide). By Lemma 2.5 and the definition of $\tilde V_{2h}$ given before Lemma 5.10, $\partial U_R$ and $\partial \tilde V_{2h}$ meet, and it then follows by comparing the maximum and minimum values of $|\psi _{2h}^{-1}|$ given by the distortion theorems (Theorem A.2) that

(5.1) $$ \begin{align} |z|\geq \frac{d_0}{D_1} = d_0 e^{-\pi}\quad \mbox{if } z \in \partial {\tilde V_{2h}}. \end{align} $$

Now suppose $\zeta _0 \in \partial {\tilde V_{2h}}$ and let $\varepsilon _1 \in (0, \tilde \varepsilon _1]$ . If $\zeta \in \overline \Delta _U(\zeta _0,2\varepsilon _1)$ , we wish to find an upper bound on the Euclidean distance from $\zeta $ to $\zeta _0$ . Let $\gamma _0$ be a geodesic segment in U from $\zeta _0$ to $\zeta $ . Then, using Lemma A.4 and the fact that $U \subset \mathrm {D}(0, ({2}/{\kappa }))$ , we calculate

$$ \begin{align*} 2\varepsilon_1 & \ge \int_{\gamma_0}\,{d}\rho_U \\&\geq\frac{1}{2}\int_{\gamma_0} \frac{|{d}w|}{\delta_U(w)} \\&\geq \frac{\kappa}{4}\int_{\gamma_0}|{d}w| \\&=\frac{\kappa}{4} \,l(\gamma_0) \\&\geq \frac{\kappa}{4}|\zeta-\zeta_0|, \end{align*} $$

where $l(\gamma _0)$ is (as usual) the Euclidean arc length of $\gamma _0$ . Thus, $|\zeta -\zeta _0| \le ({8}/{\kappa }) \varepsilon _1$ . As $\zeta _0$ , $\zeta $ were arbitrary, this implies that

(5.2) $$ \begin{align} \overline \Delta_U(\zeta_0,2\varepsilon_1)\subset \overline{\mathrm{D}}\bigg (\zeta_0,\frac{8}{\kappa}\varepsilon_1 \bigg ) \quad \text{for any } \zeta_0 \in \partial {\tilde V_{2h}}. \end{align} $$

Now we aim to specify the value of the function $T(\varepsilon _1)$ . Choose a point $z_0 \in \tilde V_{2h} \cap \hat N = \tilde V_{2h} \setminus (\tilde V_{2h} \setminus \hat N)$ . Pick $z \in \partial {\tilde V_{2h}}$ which is closest to $z_0$ in the hyperbolic metric of U (see Figure 4 for an illustration). Then $\rho _U(z_0,z)\leq 2\varepsilon _1$ , which by equation (5.2) implies $|z_0-z|\leq ({8}/{\kappa })\varepsilon _1$ . Note that as $|z|\geq {d_0}/{D_1}$ by equation (5.1), using the reverse triangle inequality, we have that

(5.3) $$ \begin{align} |z_0| &\geq \frac{d_0}{D_1}-\frac{8}{\kappa}\varepsilon_1. \end{align} $$

Figure 4 Finding a lower bound for $\rho _{{\tilde V_{2h}}}(0,z_0)$ .

Note also that, to ensure that $T(\varepsilon _1)$ is defined and positive on $(0, \tilde \varepsilon _1]$ , it will be essential that $({d_0}/{D_1}) - ({8}/{\kappa }) \varepsilon _1> 0$ (because we will be taking the difference of the logarithms of these two terms), so we make $\tilde \varepsilon _1$ smaller if needed so that $\tilde \varepsilon _1< {\kappa d_0}/{8D_1}$ . Since the constant $d_0$ is determined by $\kappa $ and the lower bound $r_0$ for R, $\tilde \varepsilon _1$ will then be determined by these same constants as well as $h_0$ , in view of our earlier discussion of $\tilde \varepsilon _1$ above (we will argue later that the dependence on the scaling factor $\kappa $ can be removed). Now let $\gamma $ be a geodesic segment in ${\tilde V_{2h}}$ from $z_0$ to $0$ . If $w \in [\gamma ]$ , then, since $|z_0-z|\leq ({8}/{\kappa })\varepsilon _1$ from equation (5.2), we have

$$ \begin{align*} \delta_{{\tilde V_{2h}}}(w)&\leq |w-z| \leq |w-z_0|+|z_0-z| \leq |w-z_0|+\frac{8}{\kappa}\varepsilon_1. \end{align*} $$

So, once more using Lemma A.4,

$$ \begin{align*} \rho_{{\tilde V_{2h}}}(0,z_0) &= \int_{\gamma}\,{d}\rho_{{\tilde V_{2h}}}\\&\geq \frac{1}{2}\int_{\gamma}\frac{|{d}w|}{\delta_{{\tilde V_{2h}}}(w)} \\&\geq \frac{1}{2}\int_{\gamma}\frac{|{d}w|}{|w-z_0| + (8/\kappa)\varepsilon_1}. \end{align*} $$

Now parameterize $\gamma $ by $w=\gamma (t)=z_0+r(t)e^{i\theta (t)}$ for $t \in [0,1]$ , and note that, as $\gamma $ is a geodesic segment in ${\tilde V_{2h}}$ from $z_0$ to $0$ , $r(1)e^{i\theta (1)}=-z_0$ . Since $\gamma $ is not self-intersecting, we have $r(t)>0$ for all $t\in (0,1]$ . Then, using equation (5.3),

$$ \begin{align*} \frac{1}{2}\int_{\gamma}\frac{|{d}w|}{|w-z_0|+ ({8}/{\kappa})\varepsilon_1} &= \frac{1}{2} \int_0^1 \frac{|r'(t)e^{i\theta(t)}+i\theta'(t) r(t)e^{i\theta(t)}|}{r(t)+ ({8}/{\kappa})\varepsilon_1}\,{d}t \\ &\geq \frac{1}{2}\int_0^1\frac{|r'(t)|}{r(t)+ ({8}/{\kappa})\varepsilon_1}\,{d}t \\ &\geq \frac{1}{2} \bigg| \int_0^1\frac{r'(t)}{r(t)+ ({8}/{\kappa})\varepsilon_1}\,{d}t \bigg|\\ &=\frac{1}{2}\int_0^{|z_0|}\frac{1}{u+ ({8}/{\kappa})\varepsilon_1}\,{d}u \\ &\geq \frac{1}{2}\int_0^{({d_0}/{D_1})- ({8}/{\kappa})\varepsilon_1}\frac{1}{u+ ({8}/{\kappa})\varepsilon_1}\,{d}u \\ &= \frac{1}{2} \int_{({8}/{\kappa})\varepsilon_1}^{{d_0}/{D_1}} \frac{1}{x}\,{d}x \\ &= \frac{1}{2} \bigg(\log \bigg(\frac{d_0}{D_1}\bigg) - \log \bigg(\frac{8}{\kappa}\varepsilon_1 \bigg)\bigg)\\ &= \frac{\log d_0 - \pi - \log \varepsilon_1 - \log 8 + \log \kappa}{2}\\ &= \frac{1}{2}\log \frac{1}{\varepsilon_1} + \frac{\log d_0 - \pi - \log 8 + \log \kappa}{2}. \end{align*} $$

Taking an infimum over all $z_0 \in \tilde V_{2h} \cap \hat N$ , applying the definition (Definition 2.2) of internal hyperbolic radius (which in particular does not require that ${\tilde V_{2h}} \setminus {\hat N}$ be connected), and setting $T(\varepsilon _1)=\tfrac 12\log ({1}/{\varepsilon _1}) + \tfrac 12 (\log d_0 - \pi - \log 8 + \log \kappa ) $ , we see that $T(\varepsilon _1)$ is continuous, strictly positive (in view of the definition of $\tilde \varepsilon _1$ given in the discussion after equation (5.3)), and gives the desired lower bound on $R^{\mathrm {int}}_{({\tilde V_{2h}},0)}({\tilde V_{2h}} \setminus {\hat N})$ . Explicitly, the function $T(\varepsilon _1)$ above is determined by $\kappa $ , $r_0$ , and $h_0$ (this last being due to the requirement that $\varepsilon _1 \le \tilde \varepsilon _1$ ). However, similarly to the end of the proof of Lemma 5.10, we may eliminate the dependence on $\kappa $ (for both $\tilde \varepsilon _1$ and T) in view of the conformal invariance of the hyperbolic metric of $\tilde V_{2h}$ with respect to the scaling factor $\kappa $ .
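As an informal numerical sanity check (not part of the formal argument), the closed-form evaluation of the integral above can be verified directly. The values below are hypothetical stand-ins for the constants ${d_0}/{D_1}$ and $({8}/{\kappa})\varepsilon_1$:

```python
import math

# Hypothetical sample values (stand-ins only, not constants from the paper):
d0_over_D1 = 0.3   # plays the role of d_0 / D_1 = d_0 * e**(-pi)
c = 0.01           # plays the role of (8 / kappa) * eps_1

# Closed form used in the derivation:
# (1/2) * integral_{c}^{d0/D1} dx/x = (1/2) * (log(d0/D1) - log(c))
closed_form = 0.5 * (math.log(d0_over_D1) - math.log(c))

# Direct midpoint-rule integration of (1/2) * int_0^{d0/D1 - c} du / (u + c)
n = 200000
a = d0_over_D1 - c
h = a / n
numeric = 0.5 * sum(h / ((i + 0.5) * h + c) for i in range(n))

assert abs(closed_form - numeric) < 1e-6
```

The agreement of the two values simply confirms the substitution $x = u + ({8}/{\kappa})\varepsilon_1$ carried out in the display above.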

Before turning to the Fitting Lemma (Lemma 5.15), we prove a small lemma from real analysis.

Lemma 5.14. Let $b>0$ and let $\varphi :(0,b] \rightarrow [0,\infty )$ be a continuous function such that $\varphi (x)\rightarrow \infty $ as $x \rightarrow 0_+$ . Then, for all $y\ge \min \{\varphi (x):x \in (0,b] \}$ , if we set

$$ \begin{align*} x(y):=\min \{x: \varphi(x) = y\}, \end{align*} $$

we have $x(y)\rightarrow 0$ as $y\rightarrow \infty $ .

Proof. We note first that, since $\varphi $ is continuous while $\varphi (x)\rightarrow \infty $ as $x \rightarrow 0_+$ , $\varphi $ attains its minimum on $(0,b]$ . Also, in view of the intermediate value theorem, for each $y\ge \min \{\varphi (x):x \in (0,b] \}$ , the set $\{x: \varphi (x) = y\}$ is non-empty. Because $\varphi (x)\rightarrow \infty $ as $x \rightarrow 0_+$ , the infimum of this set is strictly positive, and, as $\varphi $ is continuous, we must have that $\varphi (x(y)) = y$ so that this infimum is attained and is in fact a minimum. Suppose now the conclusion is false and that there exists a sequence $\{y_n\}_{n=1}^{\infty }$ such that $y_n \rightarrow \infty $ , but $x(y_n)\not \rightarrow 0$ as $n \rightarrow \infty $ . Set $x_n:=x(y_n)$ . Since $x_n\not \rightarrow 0$ , we can take a convergent subsequence $\{x_{n_k}\}_{k=1}^{\infty }$ which converges to a limit $x_0> 0$ . But then, by the continuity of $\varphi $ at $x_0$ , we would have $\varphi (x_{n_k}) \rightarrow \varphi (x_0) < \infty $ , while $\varphi (x_{n_k}) = y_{n_k} \rightarrow \infty $ , a contradiction.
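The content of Lemma 5.14 can be illustrated numerically. The sketch below uses the sample function $\varphi(x) = 1/x$ on $(0, 1]$ (our choice for illustration, not a function arising in the paper) and locates $x(y)$ by bisection:

```python
# Sample function for Lemma 5.14: phi(x) = 1/x on (0, b], continuous with
# phi(x) -> infinity as x -> 0+.  Here x(y) = min{x : phi(x) = y} = 1/y.
b = 1.0
phi = lambda x: 1.0 / x

def x_of_y(y, tol=1e-12):
    """Locate min{x in (0, b] : phi(x) = y} by bisection (phi is decreasing)."""
    lo, hi = tol, b
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if phi(mid) > y:   # phi decreasing, so the solution lies to the right
            lo = mid
        else:
            hi = mid
    return hi

# The conclusion of the lemma: x(y) -> 0 as y -> infinity.
assert x_of_y(10.0) > x_of_y(100.0) > x_of_y(1000.0)
assert x_of_y(1000.0) < 2e-3
```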

Recall the quantity ${\check R}(h):=R_{(V_{2h},0)}^{\mathrm {ext}}V_h = R_{(\tilde V_{2h},0)}^{\mathrm {ext}}\tilde V_h$ which was introduced before Lemma 5.11 and the $2\varepsilon _1$ -open neighborhood $\hat N$ of $\partial \tilde V_{2h}$ which was introduced before the statement of Lemma 5.13. We now state and prove the Fitting Lemma.

Lemma 5.15. (The Fitting Lemma)

There exist $\tilde \varepsilon _1> 0$ and a function $h: (0 ,\tilde \varepsilon _1] \rightarrow (0, \infty )$ , both of which are determined by $h_0$ and $r_0$ , for which the following hold:

  1. (1) $\overline {\tilde V}_{h(\varepsilon _1)} \subset \tilde V_{2h(\varepsilon _1)} \setminus \hat N$ for each $0 < \varepsilon _1 \le \tilde \varepsilon _1$ ;

  2. (2) $h(\varepsilon _1)\rightarrow 0$ as $\varepsilon _1 \rightarrow 0_+$ .

Proof. We first apply the Target Lemma (Lemma 5.13) to find $\tilde \varepsilon _1> 0$ and a function $T: (0, \tilde \varepsilon _1] \rightarrow (0, \infty )$ as above, both of which are determined by $h_0$ , $r_0$ . We now show how to use the function T to define an appropriate value h of the Green’s function for which item (1) above holds, which will then allow us to do the interpolation in the ‘during’ part of the proof of Phase II (Lemma 5.17). Our first step is to fix a (possibly) smaller value of $\tilde \varepsilon _1$ which still has the same dependencies as in the statement of Lemma 5.13. Since, by item (3) in the statement of Lemma 5.13, $T(\varepsilon _1) \to \infty $ as $\varepsilon _1 \to 0_+$ , we can make $\tilde \varepsilon _1$ smaller if needed so as to ensure that

(5.4) $$ \begin{align} \min_{0 < \varepsilon_1 \le \tilde \varepsilon_1} T(\varepsilon_1) \:\ge \min_{0 < h \le h_0}{\check R}(h). \end{align} $$

Note that $T(\varepsilon _1)$ and ${\check R}(h)$ attain their minimum values above in view of the fact that $T(\varepsilon _1)$ is continuous on $(0, \tilde \varepsilon _1]$ by Lemma 5.13 and ${\check R}(h)$ is continuous on $(0, h_0]$ by Lemma 5.11, while $T(\varepsilon _1) \to \infty $ as $\varepsilon _1 \to 0_+$ and ${\check R}(h) \to \infty $ as $h \to 0_+$ by item (3) in the statement of Lemmas 5.13 and 5.12, respectively. We now define the function h of the variable $\varepsilon _1$ on the interval $(0, \tilde \varepsilon _1]$ by setting, for each $0 < \varepsilon _1 \le \tilde \varepsilon _1$ ,

(5.5) $$ \begin{align} h(\varepsilon_1) := \min\{h\in (0, h_0] \: : \:{\check R}(h)=T(\varepsilon_1) \}. \end{align} $$

Note that, in view of equation (5.4), again since ${\check R}$ is continuous and ${\check R}(h) \to \infty $ as $h \to 0_+$ , the intermediate value theorem shows that the set over which we are taking the minimum above is non-empty, and so this function is well defined. It also follows that the set $\{h\in (0, h_0] \: : \:{\check R}(h)=T(\varepsilon _1) \}$ has a positive infimum which, by the continuity of ${\check R}(h)$ , is attained and is thus in fact a minimum, and moreover,

(5.6) $$ \begin{align} {\check R}(h(\varepsilon_1)) = T(\varepsilon_1). \end{align} $$

The right-hand side of equation (5.4) depends only on $h_0$ . Hence, the upper bound $\tilde \varepsilon _1$ and the function T from the statement of Lemma 5.13 will still only depend on $h_0$ and $r_0$ , and the dependencies in the statement of that lemma thus remain unaltered. Lastly, we observe that this function h above is then determined by $h_0$ and $r_0$ in view of equation (5.5).
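To make the definition in equation (5.5) concrete, here is a numerical sketch with toy model functions standing in for ${\check R}$ and T (both hypothetical choices of ours; the actual functions come from Lemmas 5.11 and 5.13). It recovers equation (5.6) and the decay $h(\varepsilon_1) \to 0$ of item (2):

```python
import math

# Toy model functions (our choice, not those of the paper):
# R_check(h) = log(1/h) on (0, h0], T(eps1) = 0.5*log(1/eps1) + 1.
h0 = 1.0
R_check = lambda h: math.log(1.0 / h)
T = lambda eps1: 0.5 * math.log(1.0 / eps1) + 1.0

def h_of(eps1):
    """Smallest h in (0, h0] with R_check(h) = T(eps1); here exp(-T(eps1))."""
    target = T(eps1)
    lo, hi = 1e-15, h0
    while hi - lo > 1e-14:
        mid = 0.5 * (lo + hi)
        if R_check(mid) > target:   # R_check decreasing: solution lies right
            lo = mid
        else:
            hi = mid
    return hi

# Analogue of equation (5.6) and of item (2) of the Fitting Lemma:
for eps1 in (1e-1, 1e-3, 1e-6):
    h = h_of(eps1)
    assert abs(R_check(h) - T(eps1)) < 1e-6      # R_check(h(eps1)) = T(eps1)
    assert abs(h - math.exp(-T(eps1))) < 1e-9    # explicit solution here
```

In this model, smaller $\varepsilon_1$ forces a larger target $T(\varepsilon_1)$ and hence a smaller value $h(\varepsilon_1)$, mirroring the tension described after the statement of the Fitting Lemma.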

By item (1) of the Target Lemma (Lemma 5.13), for each $0 < \varepsilon _1 \le \tilde \varepsilon _1$ ,

(5.7) $$ \begin{align} R^{\mathrm{int}}_{(\tilde V_{2h}, 0)}(\tilde V_{2h} \setminus {\hat N}) \ge T(\varepsilon_1). \end{align} $$

However, by equation (5.6), in view of the definition of ${\check R}(h)$ given before Lemma 5.11,

(5.8) $$ \begin{align} R^{\mathrm{ext}}_{(\tilde V_{2h(\varepsilon_1)}, 0)}\tilde V_{h(\varepsilon_1)} = T(\varepsilon_1). \end{align} $$

Thus, using item (1) of Corollary 2.4, if we set $X = \tilde V_{h(\varepsilon _1)}$ , $Y = \tilde V_{2h(\varepsilon _1)} \setminus {\hat N}$ , we have ${\overline {\tilde V}}_{h(\varepsilon _1)} \subset \tilde V_{2h(\varepsilon _1)} \setminus {\hat N}$ (this latter set clearly being closed) and so we obtain item (1).

Again by item (3) of the statement of Lemma 5.13, $T(\varepsilon _1)\rightarrow \infty $ as $\varepsilon _1\rightarrow 0_+$ . Lemma 5.12, together with the fact that ${\check R}$ is continuous in view of Lemma 5.11, then ensures that the hypotheses of Lemma 5.14 are met. Equation (5.5) and Lemma 5.14 then imply that $h(\varepsilon _1)\rightarrow 0$ as $\varepsilon _1 \to 0_+$ , as desired, which proves item (2).

As we remarked earlier, the Fitting Lemma will be essential for proving Phase II. Basically, item (1) of the statement says that, for each $0 < \varepsilon _1 \le \tilde \varepsilon _1$ , the domain $\tilde V_{h(\varepsilon _1)}$ ‘fits’ inside ${\tilde V_{2h(\varepsilon _1)}} \setminus {\hat N}$ , which will allow us to apply the Polynomial Implementation Lemma, although we will first need to correct the error from the application of Phase I immediately prior to this. Item (2) of the statement says that $h(\varepsilon _1) \rightarrow 0$ as $\varepsilon _1 \rightarrow 0_+$ which, as we will see, is the key to controlling the loss of domain incurred by the correction of this error, which is the purpose of Phase II.

Observe that getting $\tilde V_h$ to fit inside ${\tilde V_{2h}} \setminus {\hat N}$ as above is easier if the value of h is large, while ensuring the loss of domain is small requires a value of h which is small. Indeed it is the tension between these competing requirements for h which makes proving Phase II so delicate and why the Target and Fitting Lemmas are so essential. Before we move on to the statement and proof of Phase II, we state one last technical lemma that will be of use to us later.

Lemma 5.16. Let $D\subset \mathbb C$ be a bounded simply connected domain and let $z_0 \in D$ . Then for all $\varepsilon> 0$ , there exists $R_{\varepsilon }> 0$ such that, if X is any set with $z_0 \in X \subset D$ and $R_{(D,z_0)}^{\mathrm {int}}X>R_\varepsilon $ , then ${d}(\partial X, \partial D) < \varepsilon $ .

Proof. Define $D_{\varepsilon }=\{z \in D \: : \: {d}(z,\partial D)\geq \varepsilon /2 \}$ . Since D is bounded, $D_{\varepsilon }$ is a compact subset of D and we can find $R_{\varepsilon }> 0$ such that $D_{\varepsilon }\subset \Delta _D(z_0, R_{\varepsilon })$ . Then, if X is any set containing $z_0$ and contained in D such that $R_{(D, z_0)}^{\mathrm {int}}X \ge R_{\varepsilon }$ , by the definition of internal hyperbolic radius (Definition 2.2), for every $z \in D \setminus X$ , we have $\rho _D(z_0, z) \ge R_{\varepsilon }$ . It then follows that for every $z \in \partial X \cap D = \partial (D \setminus X) \cap D$ , we also have $\rho _D(z_0, z) \ge R_{\varepsilon }$ from which it follows that $z \notin D_{\varepsilon }$ . Since $\partial X = (\partial X \cap D) \cup (\partial X \cap \partial D)$ , it follows that $\partial X \subset \{z \in \overline D \: : \: {d}(z,\partial D)< \varepsilon /2 \}$ , and from the compactness of the bounded set $\partial X \subset \overline D$ , we get ${d}(\partial X, \partial D) \le \varepsilon /2 < \varepsilon $ , as desired.
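For intuition, consider the model case $D = \mathbb D$ , $z_0 = 0$ : with the curvature $-1$ normalization of the hyperbolic metric, the hyperbolic disc $\Delta_{\mathbb D}(0,R)$ is the Euclidean disc of radius $\tanh(R/2)$ , so a large internal hyperbolic radius forces $\partial X$ into a thin collar at $\partial \mathbb D$ . A minimal numerical check of this collar width (the specific radii below are sample values of ours):

```python
import math

# In the unit disc with the curvature -1 normalization, the hyperbolic disc
# of radius R about 0 is the Euclidean disc of radius tanh(R/2).  Any X
# containing it therefore has d(boundary of X, unit circle) <= gap(R):
gap = lambda R: 1.0 - math.tanh(R / 2.0)

# The gap shrinks monotonically and becomes negligible for large R:
assert gap(5.0) > gap(10.0) > gap(20.0)
assert gap(20.0) < 1e-4
```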

5.2. Statement and proof of Phase II

Recall the scaling factor $\kappa \geq 1$ and upper bound $h_0$ on the value of the Green’s function from the statement of Lemma 5.10. Recall also the bounds $0 < r_0 < R_0 \le {\pi }/{2}$ for R and that the upper bound of ${\pi }/{2}$ was chosen in the discussion before the Target Lemma (Lemma 5.13) so that $U_R$ as well as its image under any conformal mapping whose domain of definition contains U is star-shaped (about the appropriate image of $0$ ).

Lemma 5.17. (Phase II)

Let $\kappa $ , $h_0$ , $r_0$ , and $R_0$ be fixed as above. Then there exist an upper bound ${\tilde \varepsilon _1}>0$ and a function $\delta :(0, \tilde \varepsilon _1] \rightarrow (0,({r_0}/{4}))$ , with $\delta (x)\rightarrow 0$ as $x\rightarrow 0_+$ , both of which are determined by $h_0$ , $r_0$ , and $R_0$ such that, for all $\varepsilon _1\in (0,{\tilde \varepsilon _1}]$ , there exists an upper bound ${\tilde \varepsilon _2}>0$ , determined by $\varepsilon _1$ , $h_0$ , and $r_0$ , $R_0$ , such that, for all $\varepsilon _2\in (0,{\tilde \varepsilon _2}]$ , all $R \in [r_0, R_0]$ , and all functions ${\mathcal E}$ univalent on $U_R$ with ${\mathcal E}(0)=0$ and $\rho _U({\mathcal E}(z),z)<\varepsilon _1$ for $z \in U_R$ , there exists a $(17+\kappa )$ -bounded composition $\mathbf {Q}$ of finitely many quadratic polynomials which depend on $\kappa $ , $\varepsilon _1$ , $\varepsilon _2$ , R, $h_0$ , $r_0$ , $R_0$ , and ${\mathcal E}$ such that:

  1. (i) $\mathbf {Q}$ is univalent on a neighborhood of $\overline {U}_{R-\delta (\varepsilon _1)}$ ;

  2. (ii) for all $z\in \overline U_{R-\delta (\varepsilon _1)}$ , we have

    $$ \begin{align*} \rho_U(\mathbf{Q}(z),{\mathcal E}(z))<\varepsilon_2; \end{align*} $$
  3. (iii) $\mathbf {Q}(0)=0$ .

Because we will be using the Polynomial Implementation Lemma repeatedly to construct our polynomial composition, we need to interpolate functions outside of ${\mathcal K}$ , the filled Julia set for P. Indeed, as we saw in the Polynomial Implementation Lemma (Lemma 3.9), the solutions to the Beltrami equation converge to the identity precisely because the supports of the Beltrami data become small in measure. However, ${\mathcal E}$ is only defined on a subset of U and hence we will need to map a suitable subset of U on which ${\mathcal E}$ is defined to a domain which contains ${\mathcal K}$ , and correct the conjugated error using the Polynomial Implementation Lemma. The trick to doing this is that we choose our subset of U such that the mapping to blow this subset up to U can be expressed as a high iterate of a map which is defined on the whole of the Green’s domain $V_h$ , not just on this subset. This will allow us to interpolate outside ${\mathcal K}$ . Further, we will then use the Polynomial Implementation Lemma twice more to ‘undo’ the conjugating map and its inverse.

The two key considerations in the proof are controlling loss of domain (which is measured by the function $\delta $ in the statement above), and showing that the error in our polynomial approximation to the function ${\mathcal E}$ (measured by the quantity $\varepsilon _2$ above) is mild and, in particular, can be made as small as desired. In controlling the loss of domain, one main difficulty will arise in converting between the hyperbolic metrics of different domains, U and $V_{2h}$ , and we will deal with this by means of the convergence of the pointed domains $(V_{2h},0)$ to $(U,0)$ in the Carathéodory topology as h tends to zero. One last thing is worth mentioning: since this result involves many functions and quantities which depend on one another, the interested reader is encouraged to make use of the dependency tables in the appendices to help keep track of them.

Proof. Ideal loss of domain: The techniques for controlling loss of domain will be the Fitting Lemma, and again the fact that $(V_{2h},0)\rightarrow (U,0)$ in the Carathéodory topology (Lemma 5.9) as h tends to zero combined with the Target Lemma. As stated above, we will apply the Polynomial Implementation Lemma to our conjugated version of ${\mathcal E}$ , which will be $\varphi _{2h}\circ {\mathcal E}\circ \varphi _{2h}^{-1}$ in what we call the ‘During’ portion of the error calculations. To approximate ${\mathcal E}$ itself rather than this conjugated version, we then wish to ‘cancel’ the conjugacy, so ‘During’ is bookended by ‘Up’ and ‘Down’ portions, in which we apply the Polynomial Implementation Lemma to get polynomial compositions which are arbitrarily close to $\varphi _{2h}$ and $\varphi _{2h}^{-1}$ , respectively, on suitable domains.

We begin the proof of Phase II by considering ‘Ideal Loss of Domain’. In creating polynomial approximations using Phase I (Lemma 4.8), errors will be created which will have an impact on the loss of domain that occurs. We first describe the loss of domain that is forced on us before this error is taken into account. During what follows, the reader might find it helpful to consult Figure 5, where most of the relevant domains are shown in rotated logarithmic coordinates where the up direction corresponds to increasing distance from the fixed point for P at $0$ .

Figure 5 The setup for Phase II in rotated logarithmic coordinates.

We first turn our attention to controlling loss of domain. Let $R \in [r_0, R_0]$ be arbitrary as in the statement. Note that, because we need to consider uniform convergence in R to define the function $\delta $ which measures loss of domain in the statement above, we consider R for now as varying over the whole of the interval $[r_0, R_0]$ ; later, however, we will fix an (arbitrary) value of R from this interval at the start of the ‘up’ portion of the proof. Recall the discussion before the statement of Lemma 5.10, where we let $\psi :U\rightarrow \mathbb D$ be the unique normalized Riemann map from U to ${\mathbb D}$ satisfying $\psi (0)=0$ , $\psi '(0)>0$ . Recalling the upper bound $h_0$ for the value of the Green’s function G for P, for $h \in (0, h_0]$ arbitrary, let $\psi _{2h}:V_{2h}\rightarrow {\mathbb D}$ be the unique normalized Riemann map from $V_{2h}$ to ${\mathbb D}$ satisfying $\psi _{2h}(0)=0$ , $\psi _{2h}^{\prime }(0)>0$ . Recall also that we had ${\tilde R}=R_{(V_{2h},0)}^{\mathrm {int}}U_R$ , ${\tilde V_{2h}}=\Delta _{V_{2h}}(0,{\tilde R})$ , and $\varphi _{2h}:{\tilde V_{2h}}\rightarrow V_{2h}$ , which was the unique conformal map from ${\tilde V_{2h}}$ to $V_{2h}$ normalized so that $\varphi _{2h}(0)=0$ and $\varphi _{2h}^{\prime }(0)>0$ . Now define $R'=R_{(U,0)}^{\mathrm {int}}{\tilde V_{2h}}$ and note that the value of this quantity is completely determined by those of R and h. We prove the following claim.

Claim 5.18. $R-R'\rightarrow 0$ uniformly on $[r_0, R_0]$ as $h \rightarrow 0_+$ .

Proof. By Lemma 5.9, $(V_{2h},0)\rightarrow (U,0)$ in the Carathéodory topology as $h\rightarrow 0_+$ and thus $\psi _{2h}$ converges locally uniformly to $\psi $ on U in view of Theorem A.9.

Let $\{h_n\}_{n=1}^{\infty }$ be an arbitrary sequence of positive numbers such that $h_n \rightarrow 0$ as $n \rightarrow \infty $ . By the definitions of ${\tilde V_{2h}}$ and $R'$ , and Lemma 2.5, there exist $w_{h_n,1} \in \partial {\tilde V_{2h_n}}\cap \partial U_R$ and $w_{h_n,2} \in \partial {\tilde V_{2h_n}}\cap \partial U_{R'}$ . Let $0<s,s^{\prime }_n,s_n^{\prime \prime }<1$ be such that $\psi (\partial U_R)=\mathrm {C}(0,s)$ , $\psi (\partial U_{R'})=\mathrm {C}(0,s_n^{\prime })$ , and $\psi _{2h_n}(\partial {\tilde V_{2h_n}})=\mathrm {C}(0,s_n^{\prime \prime })$ .

Let $\varepsilon _0>0$ . By the local uniform convergence of $\psi _{2h_n}$ to $\psi $ on U, there exists $n_0$ , such that for all $n \ge n_0$ ,

$$ \begin{align*} |\psi_{2h_n}(z) - \psi(z)|<\frac{\varepsilon_0}{2}, \quad z \in \overline U_{R_0}. \end{align*} $$

Thus, for any $n \ge n_0$ and any $R \in [r_0, R_0]$ ,

$$ \begin{align*} |s - s_n^{\prime\prime}| &= \big| |\psi(w_{h_n,1})| - |\psi_{2h_n}(w_{h_n,1})| \big| \le |\psi(w_{h_n,1}) - \psi_{2h_n}(w_{h_n,1})| <\frac{\varepsilon_0}{2},\\ |s_n^{\prime\prime} - s_n^{\prime}| &= \big| |\psi_{2h_n}(w_{h_n,2})| - | \psi(w_{h_n,2})| \big| \le |\psi_{2h_n}(w_{h_n,2}) - \psi(w_{h_n,2})| <\frac{\varepsilon_0}{2} \end{align*} $$

whence

$$ \begin{align*} |s - s_n^{\prime}| < \varepsilon_0. \end{align*} $$

Since the sequence $\{h_n\}_{n=1}^{\infty }$ was arbitrary, the desired uniform convergence then follows on applying the conformal invariance of the hyperbolic metric under $\psi ^{-1}$ .

Now define the Internal Siegel disc, ${\tilde U}:=\varphi _{2h}^{-1}(U)$ , and set $R''=R_{(U,0)}^{\mathrm {int}}{\tilde U}$ , noting again that the value of this quantity is completely determined by those of R and h. Next, we show the following claim.

Claim 5.19. $R-R''\rightarrow 0$ uniformly on $[r_0, R_0]$ as $h \rightarrow 0_+$ .

Proof. First, we show $R_{(V_{2h},0)}^{\mathrm {int}}U\rightarrow \infty $ as $h \rightarrow 0_+$ (note that this convergence will be trivially uniform with respect to R on $[r_0, R_0]$ as there is no dependence on R). Fix $R_1>0$ and set $X:=U_{R_1}$ , $Y:=U_{R_1+1}$ so that $\psi (X)=\Delta _{\mathbb D}(0,R_1)$ , $\psi (Y)=\Delta _{\mathbb D}(0,R_1+1)$ . As ${\overline \Delta _{\mathbb D}(0,R_1)}\subset \Delta _{\mathbb D}(0,R_1+1)$ , if we let $\eta ={d}(\partial \Delta _{\mathbb D}(0,R_1),\partial \Delta _{\mathbb D}(0,R_1+1))$ , then $\eta> 0$ . Now let $z \in \partial Y$ and $w \in \Delta _{\mathbb D}(0,R_1)$ . We have that $(V_{2h},0)\rightarrow (U,0)$ as $h\rightarrow 0_+$ in view of Lemma 5.9, so, by Theorem A.9, we again have that $\psi _{2h}$ converges to $\psi $ uniformly on compact subsets of U in the sense given in Definition 5.1. Then, for all h sufficiently small, we have

$$ \begin{align*} |(\psi_{2h}(z)-w)-(\psi(z)-w)|&= |\psi_{2h}(z)-\psi(z)|\\ &<\eta \\ &\leq|\psi(z)-w|. \end{align*} $$

Thus, by Rouché’s theorem, since the convergence is uniform and $w \in \Delta _{\mathbb D}(0,R_1)$ was arbitrary, $\Delta _{\mathbb D}(0,R_1)\subset \psi _{2h}(Y)$ . Then $\psi _{2h}^{-1}(\Delta _{\mathbb D}(0,R_1))\subset Y$ , so $R^{\mathrm {int}}_{(V_{2h},0)}Y\geq R_1$ . We also have that $Y\subset U$ so $R^{\mathrm {int}}_{(V_{2h},0)}U\geq R^{\mathrm {int}}_{(V_{2h},0)}Y$ , and thus $R^{\mathrm {int}}_{(V_{2h},0)}U\geq R_1$ . Since $R_1$ was arbitrary, we do indeed have that $R_{(V_{2h},0)}^{\mathrm {int}}U\rightarrow \infty $ as $h \rightarrow 0_+$ .

For a constant $c> 0$ and a set $X \subset \mathbb C$ , define the scaled set $cX:= \{z \in \mathbb {C} \: : \: z=cw \text { for some }w\in X \}$ . Let $0 < r_{2h} < 1$ be such that $\psi _{2h}({\tilde V_{2h}})=\mathrm {D}(0,r_{2h})$ . The quantity $r_{2h}$ then depends on $r_0$ , $R_0$ , R, and h (and thus ultimately on $\varepsilon _1$ , R, $h_0$ , $r_0$ , and $R_0$ once we make our determination of the function $h = h(\varepsilon _1)$ immediately before equation (5.11) below) and clearly ${1}/{r_{2h}}\psi _{2h}({\tilde V_{2h}})=\mathbb D$ .

By conformal invariance,

$$ \begin{align*} R_{(\frac{1}{r_{2h}}\psi_{2h} ({\tilde V_{2h}}),0)}^{\mathrm{int}}\bigg (\frac{1}{r_{2h}}\psi_{2h}({\tilde U})\bigg) = R_{(\psi_{2h} ({\tilde V_{2h}}),0)}^{\mathrm{int}}(\psi_{2h}({\tilde U})) = R_{({\tilde V_{2h}},0)}^{\mathrm{int}}{\tilde U} =R_{(V_{2h},0)}^{\mathrm{int}}U. \end{align*} $$

As $R_{(V_{2h},0)}^{\mathrm {int}}U\rightarrow \infty $ as $h \rightarrow 0_+$ from above, it follows that, uniformly on $[r_0, R_0]$ ,

$$ \begin{align*} R_{(\frac{1}{r_{2h}}\psi_{2h} ({\tilde V_{2h}}),0)}^{\mathrm{int}} \bigg(\frac{1}{r_{2h}}\psi_{2h}({\tilde U}) \bigg) = R_{(V_{2h},0)}^{\mathrm{int}}U\rightarrow \infty \quad \mbox{as } h \rightarrow 0_+. \end{align*} $$

We can then apply Lemma 5.16 to conclude using ${1}/{r_{2h}}\psi _{2h}({\tilde V_{2h}})=\mathbb D$ that on letting $h \to 0_+$ , we have

$$ \begin{align*} {d}\bigg(\partial \bigg(\frac{1}{r_{2h}}\psi_{2h}({\tilde U})\bigg ),\partial \bigg(\frac{1}{r_{2h}}\psi_{2h}({\tilde V_{2h}})\bigg)\bigg ) = {d}\bigg (\partial \bigg(\frac{1}{r_{2h}}\psi_{2h}({\tilde U})\bigg),\partial \mathbb D\bigg)\rightarrow 0, \end{align*} $$

where again the convergence is uniform on $[r_0, R_0]$ . Thus, scaling by $r_{2h}$ , we have, again uniformly on $[r_0, R_0]$ ,

(5.9) $$ \begin{align} {d}(\partial (\psi_{2h}({\tilde U})),\partial (\psi_{2h}({\tilde V_{2h}})))\rightarrow 0\quad\text{as }h \rightarrow 0_+. \end{align} $$

We observe that, since $r_{2h}$ depends on R, this is the first time when the convergence being uniform on $[r_0, R_0]$ is not entirely trivial.

Further, using the Schwarz lemma for the hyperbolic metric [CG93, Theorems I.4.1 or I.4.2], we have that

(5.10) $$ \begin{align} \psi_{2h}({\tilde U})\subset \psi_{2h}({\tilde V_{2h}})\subset \psi_{2h}(U_R)\subset \psi_{2h}( U_{{\pi}/{2}})\subset\psi_{2h}(\Delta_{V_{2h}}(0, ({\pi}/{2}))) = \Delta_{\mathbb D}(0, ({\pi}/{2})), \end{align} $$

(the Schwarz lemma for the hyperbolic metric giving the last inclusion), which shows that both $\psi _{2h}({\tilde U})$ and $\psi _{2h}({\tilde V_{2h}})$ lie within hyperbolic distance ${\pi }/{2}$ of $0$ in ${\mathbb D}$ . Since $\psi _{2h}^{-1}$ converges to $\psi ^{-1}$ uniformly on compact subsets of ${\mathbb D}$ by Theorem A.9, using equations (5.9) and (5.10), we have that ${d}(\partial {\tilde U},\partial {\tilde V_{2h}})\rightarrow 0$ uniformly on $[r_0, R_0]$ as $h \rightarrow 0_+$ . Using Lemma A.4 and the fact that $\tilde U \subset \tilde V_{2h} \subset U_R \subset \Delta _U(0, ({\pi }/{2}))$ , we see that the same holds for distances with respect to the hyperbolic metric of U, so that

$$ \begin{align*} \rho_{U}(\partial {{\tilde U}},\partial {\tilde V_{2h}})\rightarrow 0 \quad \mbox{as } h \rightarrow 0_+ \end{align*} $$

uniformly on $[r_0, R_0]$ .

Fix $\varepsilon _0>0$ . Using Lemma 2.5, pick $z \in \partial {\tilde U}$ such that $\rho _U(0,z)=R''$ . From above, for all h sufficiently small, we can pick $w_{2h} \in \partial {\tilde V_{2h}}$ such that

$$ \begin{align*} \rho_U(z,w_{2h})<\frac{\varepsilon_0}{2} \end{align*} $$

(for any $R \in [r_0, R_0]$ ). Now let $\gamma $ be the unique geodesic in U passing through $0,w_{2h}$ . As $\gamma $ must eventually leave $U_R$ , let w be the first point on $\gamma \cap \partial U_R$ after we pass along $\gamma $ from $0$ to $w_{2h}$ . Then $0$ , $w_{2h}$ , and w are on the same geodesic and $w_{2h}$ is on the hyperbolic segment $\gamma _U[0,w]$ in U from $0$ to w. We now have $\rho _U(0,w)=R$ and $\rho _U(0,w_{2h})\geq R' = R_{(U,0)}^{\mathrm {int}}{\tilde V_{2h}}$ using Lemma 2.5. Then, since $w_{2h} \in \gamma _U[0,w]$ , using our Claim 5.18, we have, uniformly on $[r_0, R_0]$ ,

$$ \begin{align*} \rho_U(w,w_{2h})&=\rho_U(0,w)-\rho_U(0,w_{2h}) \\ &\leq R-R' \\ &<\frac{\varepsilon_0}{2} \end{align*} $$

for h sufficiently small. Further, we have

$$ \begin{align*} R-R'' &= \rho_U(0,w)-\rho_U(0,z) \\ &\leq \rho_{U}(z,w)\\ &\leq \rho_U(z,w_{2h}) + \rho_U(w_{2h},w) \\ &<\frac{\varepsilon_0}{2}+\frac{\varepsilon_0}{2} \\ &=\varepsilon_0 \end{align*} $$

for h sufficiently small, and thus $R-R''\rightarrow 0$ as $h\rightarrow 0_+$ , while this convergence is uniform on $[r_0, R_0]$ as desired (see Figure 6).

Figure 6 Showing $R - R'' \rightarrow 0$ as $h \to 0_+$ .

By the Fitting Lemma (Lemma 5.15), there exist $\tilde \varepsilon _1> 0 $ and a function h defined on $(0, \tilde \varepsilon _1]$ , both of which depend on the constants $h_0$ , $r_0$ which we fixed before the statement, and for which we have (by item (2) of the statement of that lemma) that

(5.11) $$ \begin{align} h(\varepsilon_1) \rightarrow 0 \quad \mbox{as } \varepsilon_1\rightarrow 0_+. \end{align} $$

From this, using this function and Claim 5.19, we have that

(5.12) $$ \begin{align} R-R''\rightarrow 0 \text{ as }\varepsilon_1\rightarrow 0_+ \end{align} $$

while this convergence is uniform on $[r_0, R_0]$ .

To conclude this section of the proof, we make our final determination of the upper bound $\tilde \varepsilon _1$ and define the function $\delta $ on $(0, \tilde \varepsilon _1]$ . Using the value of $\tilde \varepsilon _1$ above which comes from Lemma 5.15, for $\varepsilon _1 \in (0, \tilde \varepsilon _1]$ , set

(5.13) $$ \begin{align} \delta(\varepsilon_1) := \sup_{[r_0, R_0]}{(R-R")}+5\varepsilon_1 \end{align} $$

(the justification for this definition will be made clear later). Note that, in view of the above dependencies of the function h on $h_0$ and $r_0$ , the function $\delta $ then depends on $h_0$ and the bounds $r_0$ , $R_0$ for R (but not on R itself), all of which we regard as fixed in advance. It follows from equation (5.12) that $\delta (\varepsilon _1) \rightarrow 0$ as $\varepsilon _1\rightarrow 0_+$ (we remark that this is the point where we require that the convergence above be uniform). We can then make ${\tilde \varepsilon _1}$ smaller if needed such that

(5.14) $$ \begin{align} \sup_{(0, \tilde \varepsilon_1]} \delta(\varepsilon_1)<\frac{r_0}{4}, \end{align} $$

which ensures that $U_{R-\delta (\varepsilon _1)}\neq \emptyset $ . From the above, $\tilde \varepsilon _1$ will therefore also depend on $h_0$ and the bounds $r_0$ , $R_0$ for R (in other words, we pick up an extra dependency on $R_0$ from the definition of the value $\delta (\varepsilon _1)$ in equation (5.13)), so that we now have the correct dependencies for both the quantity $\tilde \varepsilon _1$ and the function $\delta : (0, \tilde \varepsilon _1] \rightarrow (0, ({r_0}/{4}))$ as in the statement. This change in $\tilde \varepsilon _1$ may require us to redefine the function h above by restricting its domain of definition. Restricting $\tilde \varepsilon _1$ in this way will not violate equation (5.4) in the proof of the Fitting Lemma (Lemma 5.15), so that we can still define $h(\varepsilon _1)$ according to equation (5.5) in that proof and, in particular, equation (5.6) still holds. In essence, we are defining new ‘copies’ of the functions $T(\varepsilon _1)$ , $h(\varepsilon _1)$ , $\delta (\varepsilon _1)$ with restricted domains, but the same values as the originals, and then relabeling them with the original names, so that there is no danger of circular reasoning. It is also worth noting that one only needs to carry out this restriction once, after which equation (5.14) is then automatically satisfied. In addition, because the function $\delta $ depends on $h_0$ , $r_0$ , and $R_0$ , the redefined function h now depends on $h_0$ , $r_0$ , and $R_0$ as in the statement. Lastly, note that, in particular, equation (5.14) implies that

(5.15) $$ \begin{align} U_{R - \delta} \supset U_{3r_0/4}. \end{align} $$

Controlling error: ‘Up’: Now fix $\varepsilon _1 \in (0, \tilde \varepsilon _1]$ , $h = h(\varepsilon _1)$ using the function h introduced before equation (5.11) (and redefined later so as to satisfy equation (5.14)), and also fix $R \in [r_0, R_0]$ as in the statement. Recall from the discussion before Lemma 5.10 that we had ${\tilde R}=R_{(V_{2h},0)}^{\mathrm {int}}U_R$ , ${\tilde V_{2h}}=\Delta _{V_{2h}}(0,{\tilde R})$ , and $\varphi _{2h}:{\tilde V_{2h}}\rightarrow V_{2h}$ which was the unique conformal map from ${\tilde V_{2h}}$ to $V_{2h}$ normalized so that $\varphi _{2h}(0)=0$ and $\varphi _{2h}^{\prime }(0)>0$ . Recall also that $\psi _{2h}$ is the unique normalized Riemann map which sends $V_{2h}$ to ${\mathbb D}$ . Since $\tilde V_{2h}$ is a hyperbolic disc about $0$ in $V_{2h}$ , it follows that, in the conformal coordinates of $V_{2h}$ , $\varphi _{2h}$ is then a dilation of ${\tilde V_{2h}}$ . To estimate the error in approximating $\varphi _{2h}$ , we wish to break this dilation into many smaller dilations, and apply the Polynomial Implementation Lemma (Lemma 3.9) so as to approximate each of these small dilations with a polynomial composition. The key idea here is that conformal dilations by small amounts can have larger domains of definition and, by dilating by a sufficiently small amount, we can ensure this domain of definition includes the filled Julia set and indeed all of the Green’s domain $V_h$ , which ultimately allows us to apply the Polynomial Implementation Lemma to approximate it to an arbitrarily high degree of accuracy.

As before, let $r_{2h}\in (0,1)$ be such that $\psi _{2h}({\tilde V_{2h}})=\mathrm {D}(0,r_{2h})$ and recall that $r_{2h}$ depends on $\varepsilon _1$ , R, $h_0$ , $r_0$ , and $R_0$ (via h, R, and $\psi _{2h}$ , noting that we can ignore the dependence of $\psi _{2h}$ on $\kappa $ in view of conformal invariance). Pick $s \in (0,1)$ so that ${\psi _{2h}(\overline V_h)}\subset \mathrm {D}(0,s)$ . Note that s depends immediately on h and $\psi _{2h}$ , but does not depend on $\kappa $ by conformal invariance, so that s also depends ultimately on $\varepsilon _1$ , R, $h_0$ , $r_0$ , and $R_0$ . Note also that we must have $s> r_{2h}$ since $V_h \supset U \supset \overline U_R \supset \tilde V_{2h}$ . Now fix N such that

(5.16) $$ \begin{align} s \sqrt[N]{\frac{1}{r_{2h}}} < \sqrt s, \end{align} $$

and note that this choice of N will depend immediately on s and $r_{2h}$ , and thus ultimately on $\varepsilon _1$ , R, $h_0$ , $r_0$ , $R_0$ , from above.
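In concrete terms, equation (5.16) simply pins down how large N must be: since $0 < r_{2h} < s < 1$ , taking logarithms shows that equation (5.16) is equivalent to the explicit lower bound

$$ \begin{align*} s \sqrt[N]{\frac{1}{r_{2h}}} < \sqrt s \iff \frac{1}{N}\log\frac{1}{r_{2h}} < \frac{1}{2}\log\frac{1}{s} \iff N> \frac{2\log (1/r_{2h})}{\log (1/s)}, \end{align*} $$

both logarithms above being positive.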

This choice will ensure that our conformal dilations in the composition do not distort $\partial V_h$ so much that we no longer have a conformal annulus for interpolation when we apply the Polynomial Implementation Lemma.

Next, define on $\psi _{2h}^{-1}(\mathrm {D}(0,s))$ the map

(5.17) $$ \begin{align} g(z)=\psi_{2h}^{-1} \bigg(\sqrt[N]{\frac{1}{r_{2h}}}\psi_{2h}(z)\bigg) \end{align} $$

and note in particular that g is defined and, in addition, analytic and injective on a neighborhood of $\overline {V_h}$ as $\psi _{2h}(\overline V_h) \subset \mathrm {D}(0,s)$ by our choice of s. Further, since $\psi _{2h}$ fixes $0$ , we have $g(0) = 0$ and, given our choice of N in equation (5.16), we have

(5.18) $$ \begin{align} \overline{g(V_h)}\subset V_{2h}. \end{align} $$

By conformal invariance or simply because g corresponds to a dilation by $r_{2h}^{-1/N}$ in the conformal coordinates of $V_{2h}$ , recalling that we set $\tilde V_h := \varphi _{2h}^{-1}(V_h)$ (immediately before Lemma 5.11), we must then have that $\psi _{2h}(\tilde U) \subset \psi _{2h}(\tilde V_h) \subset \mathrm {D}(0, r_{2h}s)$ . Again, since g corresponds to a dilation by $r_{2h}^{-1/N}$ in the conformal coordinates of $V_{2h}$ , the compositions $g^{\circ j}$ , $0 \le j \le N$ (where $g^{\circ \, 0}$ is the identity) are all then defined on $\tilde U$ and, in particular, we have $g^{\circ N} = \varphi _{2h}$ on $\tilde U$ . We observe that the functions $g^{\circ j}$ then form (part of) a Löwner chain on $\tilde U$ in a sense similar to that given in [Reference Contreras, Diaz-Madrigal and GumenyukCDMG10] (although these authors were working on the unit disc).
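To spell out why $g^{\circ N} = \varphi _{2h}$ , note that in the conformal coordinates $w = \psi _{2h}(z)$ of $V_{2h}$ , the map g is the dilation $w \mapsto r_{2h}^{-1/N}w$ , so that

$$ \begin{align*} g^{\circ j}: w \mapsto r_{2h}^{-j/N} w, \quad 0 \le j \le N, \quad \text{and in particular} \quad g^{\circ N}: w \mapsto \frac{w}{r_{2h}}. \end{align*} $$

The last map carries $\psi _{2h}({\tilde V_{2h}}) = \mathrm {D}(0, r_{2h})$ onto ${\mathbb D} = \psi _{2h}(V_{2h})$ , fixes $0$ , and has positive derivative there, so by the uniqueness of the normalized conformal map $\varphi _{2h}$ , we must have $g^{\circ N} = \varphi _{2h}$ wherever both maps are defined.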

Since $R" < R \le R_0 \le {\pi }/{2}$ is bounded above, the external hyperbolic radius about $0$ of $U_{R"-\varepsilon _1}$ inside $U_{R"}$ (with respect to the hyperbolic metric of this slightly larger domain) can be uniformly bounded above in terms of $\varepsilon _1$ and the upper bound $R_0 \le {\pi }/{2}$ for R. By the Schwarz lemma for the hyperbolic metric (e.g. [Reference Carleson and GamelinCG93, Theorems I.4.1 or I.4.2]), the same is true for the external hyperbolic radius about $0$ of $U_{R"-\varepsilon _1}$ inside the larger (than $U_{R"}$ ) domain $\tilde U$ . By conformal invariance under $\varphi _{2h}$ , the same is also true for the external hyperbolic radius about $0$ of $\varphi _{2h}(U_{R"-\varepsilon _1})$ inside U. We can then find an upper bound $R_2$ for this external hyperbolic radius which depends directly on $\varepsilon _1$ and the upper bound $R_0$ on R, and thus ultimately on $\varepsilon _1$ , $h_0$ , $r_0$ , and $R_0$ . As a result, we have

(5.19) $$ \begin{align} R^{\mathrm{ext}}_{(U,0)}\varphi_{2h}(U_{R" - 3\varepsilon_1}) < R^{\mathrm{ext}}_{(U,0)}\varphi_{2h}(U_{R" - 2\varepsilon_1}) < R^{\mathrm{ext}}_{(U,0)}\varphi_{2h}(U_{R" - \varepsilon_1}) \le R_2. \end{align} $$

Note that, in view of equations (5.13) and (5.15), $U_{R" - 3\varepsilon _1} \supset U_{3r_0/4}$ , so all the sets in the above are non-empty. Note also that this upper bound is in particular independent of N and h (recall that, in fact, h is a function of $\varepsilon _1$ in view of equation (5.5) and the discussion after equation (5.14)). We also note that, at this point, we do not actually require that $R_2$ be independent of h. However, we will need this later when we turn to giving an upper bound $\tilde \varepsilon _2$ for $\varepsilon _2$ which has the correct dependencies as listed in the statement. Finally, we note that this upper bound is for the set $U_{R" - \varepsilon _1}$ , while all we will need in this section of the proof is a bound on the slightly smaller sets $U_{R" - 2\varepsilon _1}$ and $U_{R" - 3\varepsilon _1}$ . However, we will need the bound on the larger set when it comes to the ‘during’ part of the proof later on.

Since, from above, the compositions $g^{\circ j}$ , $0 \le j \le N$ are all defined on $\tilde U \supset U_{R" - 3\varepsilon _1}$ , we can now set $B := g^{\circ N}(U_{R"-3\varepsilon _1})$ and note that, since $g^{\circ N} = \varphi _{2h}$ on $U_{R" - 3\varepsilon _1}$ , it follows from equation (5.19) that $g^{\circ N}$ maps $U_{R"-3\varepsilon _1}$ inside $U_{R_2}$ , which is a relatively compact subset of U. However, recalling the normalized Riemann map $\psi $ from U to ${\mathbb D}$ (which was introduced before the start of Lemma 5.10), since $R" \le R_0 \le {\pi }/{2}$ , by Lemma A.6, the set $\psi _{2h}(U_{R"-3\varepsilon _1}) = \psi _{2h} \circ \psi ^{-1}( \Delta _{\mathbb D}(0, R"-3\varepsilon _1))$ is star-shaped with respect to $0$ . Since g corresponds to a dilation by $r_{2h}^{-1/N}>1$ in the conformal coordinates of $V_{2h}$ , it therefore follows that the sets $g^{\circ j}(U_{R"-3\varepsilon _1})$ , $0 \le j \le N$ (which from above are well defined) are increasing in j and therefore all contained in B. Thus, any estimate which holds on B will automatically hold on these sets as well.
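The monotonicity of these sets follows from the star-shapedness alone: writing $S := \psi _{2h}(U_{R"-3\varepsilon _1})$ and $\lambda := r_{2h}^{-1/N} > 1$ , star-shapedness of S with respect to $0$ gives $S \subset \lambda S$ , and hence

$$ \begin{align*} \lambda^j S \subset \lambda^{j+1} S, \quad 0 \le j \le N-1. \end{align*} $$

Transporting back via $\psi _{2h}^{-1}$ then yields $g^{\circ j}(U_{R"-3\varepsilon _1}) \subset g^{\circ (j+1)}(U_{R"-3\varepsilon _1}) \subset B$ .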

Now set $A := U_{R_2+1} \supset B$ (we remark that the ‘extra’ $1$ here is due to the fact that $\varphi _{2h} = g^{\circ N}$ is a composition of g with itself many times and each of these compositions with g will be approximated so that we need to be able to allow for the total error which arises—see the proof of Claim 5.20 below for details). Since the function g is defined on a neighborhood of $\overline V_h \supset U$ and this containment does not change if we increase the value of N in view of equation (5.16), it follows that the functions $\psi _{2h}^{-1}(\sqrt [N]{({1}/{r_{2h}})} \psi _{2h}(z))$ clearly converge to the identity locally uniformly on U as $N \to \infty $ (and, in particular, for all sufficiently large N, are defined on any relatively compact subset of U and map it into another relatively compact subset).

Using an argument similar to those in Lemma 3.7 and in Step 4 of the proof of Phase I (Lemma 4.8), based on the fact that the density $\sigma _U$ of the hyperbolic metric of U is uniformly continuous and bounded below away from $0$ on any relatively compact subset of U, it follows that the functions $\psi _{2h}^{-1}(\sqrt [N]{{1}/{r_{2h}}} \psi _{2h}(z))$ converge uniformly to the identity while their hyperbolic derivatives converge uniformly to $1$ on any relatively compact subset of U. Since A depends on $R_2$ , which from above does not depend on N, it follows that, if we fix a constant $K_1 = \tfrac {3}{2}$ , we may therefore make N larger if needed (without invalidating equation (5.16)) so that, if $\hat A$ is a $1$ -hyperbolic neighborhood of A in the hyperbolic metric of U (which implies that $\hat A = U_{R_2 +2}$ ), then g still maps $\hat A$ into a relatively compact subset of U and we have

(5.20) $$ \begin{align} \| g^{\natural} \|_{\hat{A}}\leq K_1, \end{align} $$

where, as usual, we are taking our hyperbolic derivatives with respect to the hyperbolic metric of U. Our new choice of N will depend directly on $R_2$ and g in addition to the old dependencies on $\varepsilon _1$ , R, $h_0$ , $r_0$ , and $R_0$ from the discussion after equation (5.16) and so, using equation (5.17) and the dependencies of $R_2$ from equation (5.19), ultimately N depends on $\kappa $ , $\varepsilon _1$ , $h_0$ , $r_0$ , $R_0$ , and R (this last also being via $r_{2h}$ , which we are not allowed to alter at this stage). However, since we are estimating a hyperbolic derivative here, we can eliminate the dependence on $\kappa $ so that N ultimately depends on $\varepsilon _1$ , $h_0$ , $r_0$ , $R_0$ , and R (which is the same as for the original version). Note also that the function g defined in equation (5.17) is being redefined here as we are changing N (but not $r_{2h}$ or s), but we could have introduced the requirement in equation (5.20) as part of the definition of N, and thus of g, by introducing the bound $R_2$ (which depends only on $\varepsilon _1$ , $h_0$ , $r_0$ , and $R_0$ , but not on g) earlier, so there is no danger of circular reasoning here. One also easily checks that all the properties of g listed above remain true for the new version. This new version of g will then depend on the six quantities $\kappa $ , $\varepsilon _1$ , $h_0$ , $r_0$ , $R_0$ , and R (and, in particular, we cannot eliminate the dependence on $\kappa $ since the domain of g depends on $\kappa $ via $\psi _{2h}$ ).

Note also that by Lemma 2.8, $\hat A$ is hyperbolically convex which will be useful (though not essential) when we come to apply the hyperbolic M-L estimates (Lemma 2.9) later on. Also important to note is that N is fixed from now on which means that we can choose our subsequent approximations using the Polynomial Implementation Lemma with this N in mind.

Set $\tilde \varepsilon _2 := 1$ and let $0< \varepsilon _2 \le \tilde \varepsilon _2$ be arbitrary and fixed (note that this upper bound $\tilde \varepsilon _2$ is universal, but we will be making further restrictions later in the proof to deduce the upper bound $\tilde \varepsilon _2$ with the same dependencies as in the statement). Define $\gamma :=\partial V_h$ and $\Gamma := \partial V_{2h}$ (with positive orientations as Jordan curves with respect to the conformal annulus bounded by $\partial V_h$ and $\partial V_{2h}$ ), and note that, since g is injective and analytic on a neighborhood of $\overline V_h$ while $\overline {g(V_h)}\subset V_{2h}$ from equation (5.18), we must have that $g(\gamma )$ lies inside $\Gamma $ (so that $(g, \mathrm {Id})$ is an admissible pair on $(\gamma , \Gamma )$ in the sense given in Definition 3.3).

Now set $\varepsilon $ in the statement of the Polynomial Implementation Lemma (Lemma 3.9) to be ${\varepsilon _2}/{3(2K_1)^{N-1}K_2 K_3}$ , where $K_2$ and $K_3$ are bounds on hyperbolic derivatives which will be chosen later. For now, we just assume that $K_i>1$ for $i=2,3$ (these are just constants, and we can always choose larger ones). Note that we have ${\varepsilon _2}/{3(2K_1)^{N-1}}<1$ , which implies that $\varepsilon <1$ . Further, note that $\varepsilon <\varepsilon _2$ . Now, since $g(0)=0$ , $g(\gamma )$ lies inside $\Gamma $ and we have the estimate in equation (5.20) on the hyperbolic derivative of g, we can apply the Polynomial Implementation Lemma (Lemma 3.9), with $\Omega = V_h$ , $\Omega ' = V_{2h}$ , $\gamma =\Gamma _h$ , $\Gamma =\Gamma _{2h}$ , $f = g$ , $A = U_{R_2 + 1}$ , $\delta =1$ , $M = K_1$ , and $\varepsilon = {\varepsilon _2}/({3(2K_1)^{N-1}K_2 K_3})$ as above, to obtain $n_{k_0}>0$ and a $(17+\kappa )$ -bounded finite sequence of quadratic polynomials $\{Q_m \}_{m=1}^{n_{k_0}}$ such that the composition of these polynomials, $Q_{n_{k_0}}$ , is univalent on A and satisfies

(5.21) $$ \begin{align} \rho_U(Q_{n_{k_0}}(z),g(z))&< \frac{\varepsilon_2}{3(2K_1)^{N-1}K_2 K_3} = \varepsilon , \quad z\in A, \end{align} $$
(5.22) $$ \begin{align} \|Q^{\natural}_{n_{k_0}}\|_{A}&\leq K_1\bigg(1+\frac{\varepsilon_2}{3(2K_1)^{N-1}K_2 K_3} \bigg ), \end{align} $$
(5.23) $$ \begin{align} Q_{n_{k_0}}(0) &=0. \end{align} $$

Note that by Lemma 3.9, since $M = K_1 = \tfrac {3}{2}$ and $\delta = 1$ , both $n_{k_0}$ and $Q_{n_{k_0}}$ depend directly on $\kappa $ , $K_1$ , $\varepsilon $ , $R_2$ , g, and h. One can then check that $n_{k_0}$ and $Q_{n_{k_0}}$ eventually depend on $\kappa $ , $\varepsilon _1$ , $\varepsilon _2$ , $K_2$ , $K_3$ , R, $h_0$ , $r_0$ , and $R_0$ (and then ultimately on $\kappa $ , $\varepsilon _1$ , $\varepsilon _2$ , R, $h_0$ , $r_0$ , $R_0$ , and ${\mathcal E}$ via the ultimate dependencies of $K_2$ and $K_3$ in the ‘during’ and ‘down’ sections of the proof below, the dependence on ${\mathcal E}$ coming from $K_2$ ).

For $1 \le j \le N$ , define $Q_{jn_{k_0}}:=Q^{\circ j}_{n_{k_0}}$ . We prove the following claim, which will allow us to control the error in the ‘Up’ portion of Phase II.

Claim 5.20. For each $1\leq j \leq N$ , we have:

$$ \begin{align*} \mathrm{(i)}& \; \rho_U(Q_{j n _{k_0}}(z),g^{\circ j}(z) )<\frac{\varepsilon_2}{3(2K_1)^{N-j}K_2K_3} < 1, \quad z \in U_{R"-3\varepsilon_1}; \\ \mathrm{(ii)}& \; Q_{j n _{k_0}}(z)\in A, \quad z \in U_{R"-3\varepsilon_1};\\ \mathrm{(iii)}&\; Q_{j n _{k_0}} \quad \text{is univalent on } U_{R"-3\varepsilon_1}. \end{align*} $$

Proof. For the base case $j=1$ , recall that, from the discussion before the definition of $R_2$ given in equation (5.19), we have that the external hyperbolic radius of $U_{R"-3\varepsilon _1} \subset U_{R"-2\varepsilon _1}$ inside $\tilde U$ is bounded above by $R_2$ . Since $\tilde U \subset U$ , by the Schwarz lemma for the hyperbolic metric (e.g. [Reference Carleson and GamelinCG93, Theorems I.4.1 or I.4.2]), we have that $U_{R"-3\varepsilon _1} \subset U_{R_2} \subset U_{R_2 + 1} = A$ . Part (i) then follows from equation (5.21).

For part (ii), recall that the sets $g^{\circ j}(U_{R"-3\varepsilon _1})$ , $0 \le j \le N$ , are increasing in j and, in view of equation (5.19), therefore, all contained in $B = g^{\circ N}(U_{R"-3\varepsilon _1}) = \varphi _{2h}(U_{R"-3\varepsilon _1}) \subset U_{R_2}$ . Thus, $g(z) \in U_{R_2}$ and the result follows from equation (5.21) on recalling that $\varepsilon < 1$ and that $A = U_{R_2 + 1}$ contains a $1$ -neighborhood of B (in the hyperbolic metric of U).

Finally, part (iii) simply follows from the above fact that $Q_{n _{k_0}}$ is univalent on A, which we already saw contains $U_{R"-3\varepsilon _1}$ .

Now assume the claim is true for some $1\leq j < N$ . For $z \in U_{R"-3\varepsilon _1}$ , we have

$$ \begin{align*} \rho_U(Q_{(j+1)n_{k_0}}(z),g^{\circ j+1}(z)) &\leq \rho_U(Q_{(j+1)n_{k_0}}(z),g\circ Q_{j n_{k_0}}(z))\\ &\quad +\rho_U(g\circ Q_{j n_{k_0}}(z),g^{\circ j+1}(z)). \end{align*} $$

Now $Q_{j n_{k_0}}(z)\in A$ by hypothesis, so the first term in the inequality above is less than $\varepsilon $ by equation (5.21). In addition to $Q_{j n_{k_0}}(z)\in A$ , we also have $g^{\circ j}(z) \in B \subset U_{R_2} \subset A$ (we remark that this is a place where we need to make use of the fact that the sets $g^{\circ j}(U_{R"-3\varepsilon _1})$ are increasing in j and thus all contained in B). Using part (i) of the induction hypothesis above, equation (5.20), and the hyperbolic convexity of A which follows from Lemma 2.8, we see on applying the hyperbolic M-L estimates (Lemma 2.9) to g that the second term is less than $K_1 ({\varepsilon _2}/{3(2K_1)^{N-j}K_2 K_3})$ . Thus, we have

$$ \begin{align*} \rho_U(Q_{(j+1)n_{k_0}}(z),g^{\circ j+1}(z))& < \varepsilon+ K_1 \cdot {\frac{\varepsilon_2}{3(2K_1)^{N-j}K_2 K_3}} \\ &= \frac{1}{(2K_1)^j}\cdot\frac{\varepsilon_2}{3(2K_1)^{N-(j+1)}K_2 K_3}\\ &\quad +\frac{1}{2}\cdot\frac{\varepsilon_2}{3(2K_1)^{N-(j+1)}K_2 K_3} \\ & < \frac{\varepsilon_2}{3(2K_1)^{N-(j+1)}K_2 K_3}\\ & < 1, \end{align*} $$

which proves part (i) in the claim, using the fact that $K_1> 1$ for the second-to-last inequality and $\varepsilon _2 \le 1$ , $K_1, K_2, K_3> 1$ for the last inequality above.

Now $Q_{(j+1)n_{k_0}}(U_{R"-3\varepsilon _1})$ lies in a 1-neighborhood of $g^{\circ j+1}(U_{R"-3\varepsilon _1})$ by part (i) above. However, $g^{\circ j+1}(U_{R"-3\varepsilon _1})\subset B$ (where again we note that the sets $g^{\circ j}(U_{R"-3\varepsilon _1})$ are increasing in j and thus all contained in B), while a $1$ -neighborhood of $B\subset U_{R_2}$ lies inside A by the definition of A and so $Q_{(j+1)n_{k_0}}(z)\in A$ if $z \in U_{R"-3\varepsilon _1}$ (note that $j+1\leq N$ ), which finishes the proof of part (ii). To show part (iii) and see that $Q_{(j+1)n_{k_0}}$ is univalent, we obviously have $Q_{(j+1)n_{k_0}}(z)=Q_{n_{k_0}} \circ Q_{j n_{k_0}}(z)$ . Since by hypothesis we have both that $Q_{j n_{k_0}}$ is univalent on $U_{R"-3\varepsilon _1}$ and $Q_{j n_{k_0}}(U_{R"-3\varepsilon _1}) \subset A$ , while $Q_{n_{k_0}}$ is univalent on A by our application of the Polynomial Implementation Lemma (Lemma 3.9), we have that $Q_{(j+1)n_{k_0}}$ is univalent on $U_{R"-3\varepsilon _1}$ . This completes the proof of the claim.

For convenience, set $\mathbf {Q_1} :=Q_{Nn_{k_0}}$ and recall that on $U_{R" - 3\varepsilon _1} \subset \tilde U$ , we had $g^{\circ N} = \varphi _{2h}$ . From above, $\mathbf {Q_1}$ then depends on $\kappa $ , $\varepsilon _1$ , $\varepsilon _2$ , $K_2$ , $K_3$ , R, $h_0$ , $r_0$ , and $R_0$ (recall that N depends on $\varepsilon _1$ , R, $h_0$ , $r_0$ , and $R_0$ , while the mapping $\varphi _{2h} = g^{\circ N}$ depends on $\kappa $ , $\varepsilon _1$ , R, $h_0$ , $r_0$ , and $R_0$ ). By part (iii) of Claim 5.20, $\mathbf {Q_1}$ is univalent on $U_{R"-3\varepsilon _1}$ and, on this hyperbolic disc, from part (i) of the same claim and the fact that $g^{\circ N}=\varphi _{2h}$ on $U_{R" - 3\varepsilon _1}$ , we have

(5.24) $$ \begin{align} \rho_U(\mathbf{Q_1}(z), \varphi_{2h}(z))&<\frac{\varepsilon_2}{3K_2 K_3}, \end{align} $$

while, from part (ii) of this claim and equation (5.23),

(5.25) $$ \begin{align} \mathbf{Q_1}(z)&\in A, \end{align} $$
(5.26) $$ \begin{align} \mathbf{Q_1}(0) &=0. \end{align} $$

The mapping $\varphi _{2h}$ obviously maps $U_{R" - 3\varepsilon _1}$ to $\varphi _{2h}(U_{R"-3\varepsilon _1})$ and, provided the next polynomial in our construction has the desired properties on this set, we will be able to compose in a meaningful way so that the composition also has the desired properties. However, in practice, we are approximating $\varphi _{2h}$ with the composition $\mathbf {Q_1}$ which involves an error, and our next step is to show that we can map into the correct set $\varphi _{2h}(U_{R"-3\varepsilon _1})$ using $\mathbf {Q_1}$ provided we are willing to ‘give up’ an extra $\varepsilon _1$ . First, however, we have the following important estimates which we will need later, especially when it comes to defining the upper bound $\tilde \varepsilon _2$ for $\varepsilon _2$ to obtain the same dependencies as given in the statement.

Claim 5.21. There exist $\eta _1, \eta _2> 0$ depending on $\varepsilon _1$ , $h_0$ , $r_0$ , and $R_0$ such that

$$ \begin{align*} \eta_1 \le \|( \varphi_{2h}^{-1})^{\natural}\|_{U_{R_2 + 2}}\leq \eta_2. \end{align*} $$

Proof. Recall the upper bound $R_2$ on $R^{\mathrm {ext}}_{(U,0)}\varphi _{2h}(U_{R"-\varepsilon _1})$ given in equation (5.19) and the normalized Riemann map $\psi $ from U to ${\mathbb D}$ which was defined just before Lemma 5.10. Here, $\varphi _{2h}^{-1}$ maps U to $\tilde U \subset U$ so that the conjugated mapping $\psi \circ \varphi _{2h}^{-1} \circ \psi ^{-1}$ is defined on all of ${\mathbb D}$ . Using equations (5.14) and (5.19), one checks easily that

(5.27) $$ \begin{align} \varphi_{2h}^{-1}(U) \supset \varphi_{2h}^{-1}(U_{R_2+2}) \supset \varphi_{2h}^{-1}(\varphi_{2h}(U_{R" - 2\varepsilon_1})) = U_{R" - 2\varepsilon_1} \supset U_{3r_0/4}. \end{align} $$

Hence, $\psi \circ \varphi _{2h}^{-1} \circ \psi ^{-1}$ maps ${\mathbb D}$ to a domain which is contained in ${\mathbb D}$ and which contains $\Delta _{\mathbb D}(0, 3r_0/4)$ , and so by the Koebe one-quarter theorem (Theorem A.1), we then obtain (strictly positive) upper and lower bounds for the derivative $|(\psi \circ \varphi _{2h}^{-1} \circ \psi ^{-1})'(0)|$ which depend only on $h_0$ (because we assumed $h \le h_0$ ) and $r_0$ . Note that, in particular, these bounds do not depend on the values of h or R.

Since $\psi \circ \varphi _{2h}^{-1} \circ \psi ^{-1}$ is defined on the whole of the unit disc, on applying the distortion theorems (Theorem A.2), we obtain strictly positive upper and lower bounds for $|(\psi \circ \varphi _{2h}^{-1} \circ \psi ^{-1})'|$ on the set $\psi (U_{R_2+2}) = \Delta _{\mathbb D}(0, R_2+2)$ . Since by equation (5.19), $R_2$ depends on $\varepsilon _1$ and $R_0$ , these bounds depend on $\varepsilon _1$ , $h_0$ , $r_0$ , and $R_0$ .

Here, $\varphi _{2h}^{-1}$ maps $U_{R_2 + 2}$ inside $\tilde U \subset U_{\pi /2}$ and, as $U_{R_2+2}$ and $U_{\pi /2}$ are both relatively compact subsets of U, $|{\psi }'|$ is uniformly bounded above and below away from $0$ on both of these sets. We therefore obtain strictly positive upper and lower bounds for the absolute value of the Euclidean, and thus the hyperbolic derivative of $\varphi _{2h}^{-1}$ on $U_{R_2 + 2}$ . These bounds will depend on $\kappa $ , $\varepsilon _1$ , $h_0$ , $r_0$ , and $R_0$ (the dependence on the scaling factor $\kappa $ arising via $\psi $ ). However, since we are estimating hyperbolic derivatives, we can actually eliminate the dependence on $\kappa $ and the claim then follows.

We now define our upper bound $\tilde \varepsilon _2$ on $\varepsilon _2$ by setting

(5.28) $$ \begin{align} \tilde \varepsilon_2 = \min \bigg\{1, \frac{\varepsilon_1}{\eta_2}\bigg\}. \end{align} $$

Given the dependencies of $\eta _2$ above (in Claim 5.21), as well as the fact that $\varepsilon _1 \le \tilde \varepsilon _1$ , which in turn depends on $h_0$ , $r_0$ , and $R_0$ , this upper bound then depends on $\varepsilon _1$ , $h_0$ , $r_0$ , and $R_0$ , which is the same as given in the statement.

Now make $\varepsilon _2$ smaller if necessary to ensure that $\varepsilon _2 \le \tilde \varepsilon _2$ (note that this may require us to obtain a new composition $\mathbf {Q_1}$ as above, but since $\varepsilon _2$ and $\tilde \varepsilon _2$ in no way depend on $\mathbf {Q_1}$ , there is no danger of circular reasoning).

Claim 5.22. Given $\tilde \varepsilon _2$ , $0 < \varepsilon _2 \le \tilde \varepsilon _2$ and $\mathbf {Q_1}$ as above, we have

(5.29) $$ \begin{align} \mathbf{Q_1}(U_{R"-4\varepsilon_1})\subset \varphi_{2h}(U_{R"-3\varepsilon_1}). \end{align} $$

Proof. Let $z \in U_{R"-4\varepsilon _1}$ , $w \in \partial U_{R"-3\varepsilon _1}$ be arbitrary and note that $\rho _U(z,w) \ge \varepsilon _1$ , while both $\varphi _{2h}(z)$ and $\varphi _{2h}(w)$ lie inside $U_{R_2} = \Delta _U(0,R_2)$ in view of equation (5.19). As $\varphi _{2h}$ is a homeomorphism, we also have that $\varphi _{2h}(z)\in \mathrm {int} \,\varphi _{2h}(U_{R"-3\varepsilon _1})$ , while $\varphi _{2h}(w)\in \partial \varphi _{2h}(U_{R"-3\varepsilon _1})$ . Lemma 2.8 ensures that $\Delta _U(0,R_2+2)$ is hyperbolically convex and so, using Claim 5.21, we may apply the hyperbolic M-L estimates (Lemma 2.9) to $\varphi _{2h}^{-1}$ on $\Delta _U(0,R_2+2)$ . Thus, we have $\rho _U(\varphi _{2h}(z),\varphi _{2h}(w)) \ge {\varepsilon _1}/{\eta _2}$ , which implies the hyperbolic distance from $\varphi _{2h}(z)$ to $\partial (\varphi _{2h}(U_{R"-3\varepsilon _1}))$ is at least ${\varepsilon _1}/{\eta _2}$ .
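Spelling out the M-L step above: since $\Delta _U(0, R_2+2)$ is hyperbolically convex and $\|( \varphi _{2h}^{-1})^{\natural }\|_{U_{R_2 + 2}} \le \eta _2$ by Claim 5.21, applying the hyperbolic M-L estimates to $\varphi _{2h}^{-1}$ along the hyperbolic geodesic from $\varphi _{2h}(z)$ to $\varphi _{2h}(w)$ gives

$$ \begin{align*} \varepsilon_1 \le \rho_U(z,w) = \rho_U(\varphi_{2h}^{-1}(\varphi_{2h}(z)), \varphi_{2h}^{-1}(\varphi_{2h}(w))) \le \eta_2 \, \rho_U(\varphi_{2h}(z), \varphi_{2h}(w)), \end{align*} $$

which rearranges to the stated lower bound $\rho _U(\varphi _{2h}(z),\varphi _{2h}(w)) \ge {\varepsilon _1}/{\eta _2}$ .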

Again let $z \in U_{R"-4\varepsilon _1}$ be arbitrary. We then have, using $K_2, K_3> 1$ and equations (5.24) and (5.28), that

$$ \begin{align*} \rho_U(\mathbf{Q_1}(z),\varphi_{2h}(z))&<\frac{\varepsilon_2}{3K_2 K_3} \\ &<\varepsilon_2 \\ &\le\frac{\varepsilon_1}{\eta_2} \end{align*} $$

and since the hyperbolic distance from $\varphi _{2h}(z)$ to $\partial (\varphi _{2h}(U_{R"-3\varepsilon _1}))$ is at least ${\varepsilon _1}/{\eta _2}$ from above, it follows that $\mathbf {Q_1}(z)$ misses $\partial \varphi _{2h}(U_{R"-3\varepsilon _1})$ . Additionally, as $\mathbf {Q_1}(U_{R" - 4\varepsilon _1})$ is connected in view of part (iii) of Claim 5.20 while $\mathbf {Q_1}(0) = 0 \in \varphi _{2h}(U_{R" - 3\varepsilon _1})$ , it follows, since $z \in U_{R" - 4\varepsilon _1}$ was arbitrary, that $\mathbf {Q_1}(U_{R" - 4\varepsilon _1}) \subset \varphi _{2h}(U_{R"-3\varepsilon _1})$ and the claim follows, as desired.

Controlling error: ‘During’: Recall that at the start of the last section, we fixed a value of $\varepsilon _1$ in $(0, \tilde \varepsilon _1]$ , which in turn fixed the value of $h = h(\varepsilon _1)$ and that we also fixed a value of $R \in [r_0, R_0]$ . We now fix a function ${\mathcal E}$ as in the statement which is defined and univalent on $U_R$ with ${\mathcal E}(0) = 0$ and $\rho _U({\mathcal E}(z),z)<\varepsilon _1$ for $z \in U_R$ . Note that, in addition to R, ${\mathcal E}$ will also depend on $r_0$ , $R_0$ (via R) and also on $\kappa $ , the latter arising from the fact that ${\mathcal E}$ is defined on the set $U_R$ which depends on $\kappa $ . Recall the quantity ${\check R}(h):=R_{(V_{2h},0)}^{\mathrm {ext}}V_h = R_{(\tilde V_{2h},0)}^{\mathrm {ext}}\tilde V_h $ introduced before the statement of Lemma 5.11 and the function $T: (0, \tilde \varepsilon _1] \mapsto (0, \infty )$ which was introduced in the statement of the Target Lemma (Lemma 5.13) and which served as a lower bound for $R^{\mathrm {int}}_{({\tilde V_{2h}},0)}({\tilde V_{2h}} \setminus {\hat N})$ (where ${\hat N}$ was a $2\varepsilon _1$ -neighborhood of $\partial \tilde V_{2h}$ with respect to the hyperbolic metric of U).

Now ${\tilde U}\subset {\tilde V_h}$ (recall that $\tilde V_h = \varphi _{2h}^{-1}(V_h)$ was introduced immediately before Lemma 5.11), while in equations (5.5), (5.6), and (5.14), we chose $h = h(\varepsilon _1)$ (where, for convenience, we will suppress the dependence of h on $\varepsilon _1$ ) as small as possible so that $\check R(h) = T(\varepsilon _1)$ (cf. equation (5.6)). By item (1) of the Fitting Lemma (Lemma 5.15), we have ${\overline {\tilde V}_h} \subset \tilde V_{2h} \setminus {\hat N}$ (this latter set clearly being closed). Hence, the $2\varepsilon _1$ -neighborhood $\hat N$ of $\partial \tilde V_{2h}$ avoids $\overline {\tilde V}_h$ (and hence also the smaller set $\tilde U$ ). Thus, if we let $\mathcal O$ be an $\varepsilon _1$ -neighborhood (in the hyperbolic metric of U) of the closure $\overline {\tilde V_h}$ , then, by the hypotheses on ${\mathcal E}$ in the statement, ${\mathcal E}(\mathcal O) \subset \tilde V_{2h}$ , while ${\mathcal E}(\overline {\tilde V_h}) \subset {\mathcal E}( \tilde V_{2h} \setminus {\hat N})$ avoids an $\varepsilon _1$ -neighborhood of $\partial \tilde V_{2h}$ . In particular, again by the hypotheses on ${\mathcal E}$ , ${\mathcal E}(\partial \tilde V_h)$ is a simple closed curve which lies inside $\partial \tilde V_{2h}$ .

Next, on $\varphi _{2h}( \mathcal O)$ , define ${\hat {\mathcal E}}=\varphi _{2h}\circ {\mathcal E} \circ \varphi _{2h}^{-1}$ . Since from above ${\mathcal E}( \mathcal O) \subset \tilde V_{2h}$ , it follows that ${\hat {\mathcal E}}$ is well defined, analytic, and injective on a neighborhood of $\varphi _{2h}(\overline {\tilde V}_h) = \overline V_h$ . Then $\hat {\mathcal E}$ depends immediately on the six quantities $\kappa $ , $\varepsilon _1$ , h, R (these last two among other things being via the domain $\tilde V_{2h}$ ), $\varphi _{2h}$ , and ${\mathcal E}$ , from which one can deduce (e.g. by using the tables in the appendices) that $\hat {\mathcal E}$ ultimately depends on $\kappa $ , $\varepsilon _1$ , R, $h_0$ , $r_0$ , $R_0$ , and ${\mathcal E}$ . As before, let $\gamma = \partial V_h$ , $\Gamma = \partial V_{2h}$ (again with positive orientations as Jordan curves with respect to the conformal annulus bounded by $\partial V_h$ and $\partial V_{2h}$ ). Then, again by the hypotheses on ${\mathcal E}$ , since $\hat {\mathcal E}$ is defined on $\varphi _{2h}( \mathcal O)$ which contains $\gamma = \partial V_h$ (since $\overline {\tilde V}_h \subset {\mathcal O} \subset \tilde V_{2h} \setminus {\hat N}$ from above), we have that ${\hat {\mathcal E}}(\gamma )$ lies inside $\Gamma $ and so $({\hat {\mathcal E}}, \mathrm {Id})$ is an admissible pair on $(\gamma ,\Gamma )$ in the sense given in Definition 3.3 in §3 on the Polynomial Implementation Lemma. Lastly, since $\varphi _{2h}$ and ${\mathcal E}$ both fix $0$ , we must have that ${\hat {\mathcal E}}(0) = 0$ .

By the hypotheses on ${\mathcal E}$ in the statement,

(5.30) $$ \begin{align} {{\mathcal E}}(U_{R"-2\varepsilon_1})\subset U_{R"-\varepsilon_1}. \end{align} $$

Since ${\mathcal O} \supset \overline {\tilde V_h} \supset \tilde U \supset U_{R"}$ , it follows that $U_{R"-2\varepsilon _1} \subset {\mathcal O}$ and so $\varphi _{2h}(U_{R"-2\varepsilon _1}) \subset \varphi _{2h}({\mathcal O})$ , and from this and the definition of ${\hat {\mathcal E}}: = \varphi _{2h} \circ {{\mathcal E}} \circ \varphi _{2h}^{-1}$ ,

(5.31) $$ \begin{align} {\hat {\mathcal E}}(\varphi_{2h}(U_{R"-2\varepsilon_1}))\subset \varphi_{2h}(U_{R"-\varepsilon_1}). \end{align} $$

Hence, since ${\hat {\mathcal E}}$ maps the relatively compact subset $\varphi _{2h}(U_{R"-2\varepsilon _1})$ to another relatively compact subset of U, we can fix the value of $1 < K_2 < \infty $ such that

(5.32) $$ \begin{align} |{\hat {\mathcal E}}^{\natural}(z)|\leq K_2, \quad z \in \varphi_{2h}(U_{R"-2\varepsilon_1}), \end{align} $$

where, as usual, we take our hyperbolic derivative with respect to the hyperbolic metric of U. Immediately, $K_2$ depends on $\varepsilon _1$ , $R"$ , $\varphi _{2h}$ , and $\hat {\mathcal E}$ . Using the chain rule in equation (2.2) for the hyperbolic derivative, it follows from equation (5.19), Claim 5.21, equation (5.30), and the hypotheses on ${\mathcal E}$ in the statement that $K_2$ can be bounded uniformly in terms of $\kappa $ , $\varepsilon _1$ , $h_0$ , $r_0$ , $R_0$ , the function ${\mathcal E}$ , as well as the particular value of R (since ${\mathcal E}$ is defined on all of $U_R$ in the statement while the mapping $\varphi _{2h}$ also depends on R). However, the dependence on $\kappa $ (arising via $\varphi _{2h}$ ) can be eliminated since we are estimating a hyperbolic derivative. We also observe that this is the one point where we employ the full force of equation (5.19), and require an upper bound on the external hyperbolic radius of $ \varphi _{2h}(U_{R"-\varepsilon _1})$ and not just $\varphi _{2h}(U_{R"-2\varepsilon _1})$ or some smaller set.

Note in particular that this bound has nothing to do with the existence of the composition ${\mathbf Q_1}$ from the last section, and so there is no danger of circular reasoning in fixing the bound $K_2$ at this point. Note also that this does not affect our earlier assertion that $\varepsilon <1$ in the previous section on controlling the error for ‘up’. However, the same argument as used in the proof of Claim 5.22 shows that if we set $\delta _0 = {\varepsilon _1}/{\eta _2}$ , where $\eta _2$ is the upper bound on the hyperbolic derivative of $\varphi _{2h}^{-1}$ from Claim 5.21, then a $\delta _0$ -hyperbolic neighborhood in U of $\varphi _{2h}(U_{R"-3\varepsilon _1})$ is contained in $\varphi _{2h}(U_{R"-2\varepsilon _1})$ , while $\delta _0$ depends on $\varepsilon _1$ , $h_0$ , $r_0$ , and $R_0$ .

Since ${\hat {\mathcal E}}(\gamma )$ lies inside $\Gamma $ while ${\hat {\mathcal E}}(0)=0$ , using equation (5.32), we can then apply the Polynomial Implementation Lemma (Lemma 3.9) with $\Omega = V_h$ , $\Omega ' = V_{2h}$ , $\gamma =\Gamma _h$ , $\Gamma =\Gamma _{2h}$ , $f = {\hat {\mathcal E}}$ , $A = \varphi _{2h}(U_{R"-3\varepsilon _1})$ , $\delta = \delta _0$ , $M = K_2$ , and $\varepsilon = {\varepsilon _2}/({3K_3})$ to construct a $(17+\kappa )$ -bounded composition of quadratic polynomials, $\mathbf {Q_2}$ , univalent on $\varphi _{2h}(U_{R"-3\varepsilon _1})$ such that

(5.33) $$ \begin{align} \rho_U(\mathbf{Q_2}(z),{\hat {\mathcal E}}(z))&<\frac{\varepsilon_2}{3K_3}, \quad z \in \varphi_{2h}(U_{R"-3\varepsilon_1}), \end{align} $$
(5.34) $$ \begin{align} \| \mathbf{Q_2}^{\natural}\|_{\varphi_{2h}(U_{R"-3\varepsilon_1})}&\leq {K_2} \bigg (1+\frac{\varepsilon_2}{3K_3} \bigg), \end{align} $$
(5.35) $$ \begin{align} \mathbf{Q_2}(0) &=0, \end{align} $$

where the bound $K_3> 1$ is to be fixed in the next section. From the statement of Lemma 3.9, the composition $\mathbf {Q_2}$ depends directly on $\kappa $ , $\varepsilon _1$ , $\varepsilon _2$ , $K_2$ , $K_3$ , $\eta _2$ , $\varphi _{2h}$ , $R"$ (via the set $ \varphi _{2h}(U_{R"-3\varepsilon _1})$ ), and $\hat {\mathcal E}$ . From this, one checks (e.g. using the tables) that $\mathbf {Q_2}$ depends eventually on $\kappa $ , $\varepsilon _1$ , $\varepsilon _2$ , $K_3$ , R, $h_0$ , $r_0$ , $R_0$ , and the function ${\mathcal E}$ (and ultimately on $\kappa $ , $\varepsilon _1$ , $\varepsilon _2$ , R, $h_0$ , $r_0$ , $R_0$ , and ${\mathcal E}$ once the dependencies of $K_3$ in the ‘down’ section of the proof below are taken into account).

Controlling error: ‘Down’: Recall equation (5.31), where we had that ${\hat {\mathcal E}}(\varphi _{2h}(U_{R"-2\varepsilon _1})) \subset \varphi _{2h}(U_{R"-\varepsilon _1})$ . In exactly the same way, we have

(5.36) $$ \begin{align} {\hat {\mathcal E}}(\varphi_{2h}(U_{R"-3\varepsilon_1}))\subset \varphi_{2h}(U_{R"-2\varepsilon_1}). \end{align} $$

Recall also that from equation (5.19), we had that $R^{\mathrm {ext}}_{(U,0)}\varphi _{2h}(U_{R"-2\varepsilon _1}) \le R_2$ . Also, by equations (5.33) and (5.36), we have that $\mathbf {Q_2}(\varphi _{2h}(U_{R"-3\varepsilon _1}))$ is contained in an ${\varepsilon _2}/({3K_3})$ -neighborhood of $\varphi _{2h}(U_{R"-2\varepsilon _1})$ (using the hyperbolic metric of U). Thus,

$$ \begin{align*} R^{\mathrm{ext}}_{(U,0)}\mathbf{Q_2}(\varphi_{2h}(U_{R"-3\varepsilon_1}))&\leq R_2+\frac{\varepsilon_2}{3K_3} \\ & < R_2+\varepsilon_2, \end{align*} $$

(recall that we assumed $K_3> 1$ ) and so

(5.37) $$ \begin{align} R^{\mathrm{ext}}_{(U,0)}\mathbf{Q_2}(\varphi_{2h}(U_{R"-3\varepsilon_1}))\leq R_2+1 \end{align} $$

as $\varepsilon _2<1$ using equation (5.28). Thus, $\mathbf {Q_2}(\varphi _{2h}(U_{R"-3\varepsilon _1}))\subset U_{R_2+1}\subset U_{R_2 +2}\subset {\overline U}\subset V_{2h}$ , while $\varphi _{2h}^{-1}$ maps $U_{R_2+2} \subset U$ inside $\varphi _{2h}^{-1}(U)= \tilde U$ , which is compactly contained in U. Using Claim 5.21, if we set $K_3 = \max \{\eta _2, \tfrac {3}{2}\}$ so that $K_3> 1$ , we have that

(5.38) $$ \begin{align} |(\varphi_{2h}^{-1})^{\natural}(z)|\leq K_3,\quad z\in U_{R_2+2}. \end{align} $$

Note that, in view of equation (5.19) and Claim 5.21, $K_3$ depends on $\varepsilon _1$ , $h_0$ , $r_0$ , and $R_0$ . Again, note that this bound has nothing to do with the existence of the compositions ${\mathbf Q_1}$ , ${\mathbf Q_2}$ from the last sections, and so there is no danger of circular reasoning in fixing the bound $K_3$ at this point. Further, $\varphi _{2h}^{-1}$ is analytic and injective on a neighborhood of $\overline V_h$ and maps $\partial V_h$ inside $U \subset V_{2h}$ so that, if we set $\gamma = \partial V_h$ , $\Gamma = \partial V_{2h}$ again with positive orientations as Jordan curves with respect to the conformal annulus bounded by $\partial V_h$ and $\partial V_{2h}$ , we have that $\varphi _{2h}^{-1}(\gamma )$ lies inside $\Gamma $ . Thus, $(\varphi _{2h}^{-1}, \mathrm {Id})$ is easily seen to be an admissible pair on $(\gamma ,\Gamma )$ , as in Definition 3.3, and we also have that $\varphi _{2h}^{-1}(0)=0$ . Using equation (5.38), we can then apply the Polynomial Implementation Lemma (Lemma 3.9) with $\Omega = V_h$ , $\Omega ' = V_{2h}$ , $\gamma = \Gamma _h$ , $\Gamma = \Gamma _{2h}$ , $f = \varphi _{2h}^{-1}$ , $A = U_{R_2+1}$ , $\delta = 1$ (so that $\hat A = U_{R_2+2}$ ), $M = K_3$ , and $\varepsilon = {\varepsilon _2}/{3}$ to construct a (17+ $\kappa $ )-bounded quadratic polynomial composition $\mathbf { Q_3}$ that is univalent on $U_{R_2+1}$ for which we have

(5.39) $$ \begin{align} \rho_U(\mathbf{Q_3}(z),\varphi_{2h}^{-1}(z))&<\frac{\varepsilon_2}{3}, \quad z\in U_{R_2+1}, \end{align} $$
(5.40) $$ \begin{align} \| \mathbf{Q_3}^{\natural}\|_{U_{R_2+1}}&\leq K_3 \bigg(1+\frac{\varepsilon_2}{3} \bigg), \end{align} $$
(5.41) $$ \begin{align} \mathbf{Q_3}(0) &=0. \end{align} $$

Note that by Lemma 3.9, since $\delta = 1$ , $\mathbf {Q_3}$ depends directly on $\kappa $ , $K_3$ , $\varepsilon $ , $R_2$ , h (via the curves $\partial V_h$ , $\partial V_{2h}$ ), and $\varphi _{2h}$ so that one can check that $\mathbf {Q_3}$ ultimately depends on $\kappa $ , $\varepsilon _1$ , $\varepsilon _2$ , R, $h_0$ , $r_0$ , and $R_0$ .

Concluding the proof of Phase II: Now, as $\mathbf {Q_1}$ , $\mathbf {Q_2}$ , and $\mathbf {Q_3}$ were all constructed using the Polynomial Implementation Lemma, they are all (17+ $\kappa $ )-bounded compositions of quadratic polynomials. Next define the ( $17+\kappa $ )-bounded composition

(5.42) $$ \begin{align} \mathbf{Q}:=\mathbf{Q_3}\circ\mathbf{Q_2}\circ\mathbf{Q_1}. \end{align} $$

${\mathbf {Q}}$ then has the correct coefficient bound of $17 + \kappa $ as in the statement and, checking the dependencies of each of the compositions $\mathbf {Q_i}$ , $i =1, 2, 3$ , as well as those of the constants $K_2$ , $K_3$ , one sees that $\mathbf {Q}$ depends on $\kappa $ , $\varepsilon _1$ , $\varepsilon _2$ , R, $h_0$ , $r_0$ , $R_0$ , and ${\mathcal E}$ , which is the same as given in the statement.

Using the definitions of the compositions $\mathbf {Q_1}$ , $\mathbf {Q_2}$ , and $\mathbf {Q_3}$ given above, together with part (iii) of Claim 5.20, Claim 5.22, and equation (5.37), respectively, we showed the following:

  (1) $\mathbf {Q_1}$ is univalent on $U_{R"-3\varepsilon _1} \supset U_{R"-4\varepsilon _1}$ and $\mathbf {Q_1}(U_{R"-4\varepsilon _1})\subset \varphi _{2h}(U_{R"-3\varepsilon _1})$ ;

  (2) $\mathbf {Q_2}$ is univalent on $\varphi _{2h}(U_{R"-3\varepsilon _1})$ and $\mathbf {Q_2}(\varphi _{2h}(U_{R"-3\varepsilon _1})) \subset U_{R_2 + 1}$ ;

  (3) $\mathbf {Q_3}$ is univalent on $U_{R_2 + 1}$ .

Combining these three observations, and recalling the definition of $\delta (\varepsilon _1) = \sup _{[r_0, R_0]}{(R-R")} +5\varepsilon _1$ which we set in equation (5.13) at the end of the section on ideal loss of domain, we see that the composition $\mathbf {Q}$ is univalent on $U_{R"-4\varepsilon _1}$ and therefore univalent on a neighborhood of $\overline U_{R"-5\varepsilon _1} \supset \overline U_{R -\delta (\varepsilon _1)}$ (this is the reason why the function $\delta : (0, \tilde \varepsilon _1] \mapsto (0, ({r_0}/{4}))$ was defined the way it was and, in particular, why we needed to include an ‘extra’ $\varepsilon _1$ in our definition of $\delta $ ), which gives part (i) in the statement. As all compositions were created with the Polynomial Implementation Lemma, we have using equations (5.26), (5.35), and (5.41) that $\mathbf {Q}(0)=0$ , which gives part (iii) in the statement.

The last thing we need to do is then establish part (ii) in the statement. Recall that in equation (5.14), we chose ${\tilde \varepsilon _1}$ sufficiently small such that, in particular, $\delta (\varepsilon _1)<{r_0}/{4}$ , which ensured that $U_{R-\delta (\varepsilon _1)}\neq \emptyset $ .

Then for $z \in \overline U_{R-\delta (\varepsilon _1)} \subset \overline U_{R"-5\varepsilon _1} \subset U_{R"-4\varepsilon _1}$ , we have

(5.43) $$ \begin{align} \notag \rho_U(\mathbf{Q}(z),{\mathcal E}(z))&\leq \rho_U(\mathbf{Q_3}\circ\mathbf{Q_2}\circ\mathbf{Q_1}(z),\varphi_{2h}^{-1}\circ\mathbf{Q_2}\circ\mathbf{Q_1}(z)) \\ \notag &\quad +\rho_U(\varphi_{2h}^{-1}\circ\mathbf{Q_2}\circ\mathbf{Q_1}(z),\varphi_{2h}^{-1}\circ{\hat {\mathcal E}}\circ\mathbf{Q_1}(z)) \\ &\quad +\rho_U(\varphi_{2h}^{-1}\circ{\hat {\mathcal E}}\circ\mathbf{Q_1}(z),{\mathcal E}(z)). \end{align} $$

We now estimate the three terms on the right-hand side of the inequality above. We have that $z\in \overline U_{R"-5\varepsilon _1} \subset U_{R"-4\varepsilon _1}$ , so $\mathbf {Q_1}(z)\in \varphi _{2h}(U_{R"-3\varepsilon _1})$ by Claim 5.22. Then $\mathbf {Q_2}\circ \mathbf { Q_1}(z)\in U_{R_2+1}$ by equation (5.37). Thus,

(5.44) $$ \begin{align} \rho_U(\mathbf{Q_3}\circ\mathbf{Q_2}\circ\mathbf{Q_1}(z),\varphi_{2h}^{-1}\circ\mathbf{Q_2}\circ\mathbf{Q_1}(z))< \frac{\varepsilon_2}{3} \end{align} $$

by equation (5.39). For the second term, we still have $\mathbf {Q_1}(z)\in \varphi _{2h}(U_{R"-3\varepsilon _1})$ and $\mathbf {Q_2}\circ \mathbf {Q_1}(z)\in U_{R_2+1}\subset U_{R_2 +2}$ as above. Also, we have ${\hat {\mathcal E}}\circ \mathbf {Q_1}(z) \in \varphi _{2h}(U_{R"-2\varepsilon _1})\subset U_{R_2}\subset U_{R_2 +2}$ by equations (5.19) and (5.36). Thus, using the hyperbolic convexity lemma (Lemma 2.8) and the hyperbolic M-L estimates (Lemma 2.9) applied to $\varphi _{2h}^{-1}$ on $ U_{R_2 +2}$ , by equations (5.33) and (5.38), we have

(5.45) $$ \begin{align} \rho_U(\varphi_{2h}^{-1}\circ\mathbf{Q_2}\circ\mathbf{Q_1}(z),\varphi_{2h}^{-1}\circ{\hat {\mathcal E}}\circ\mathbf{Q_1}(z))&< K_3\cdot\frac{\varepsilon_2}{3K_3}\notag \\ &=\frac{\varepsilon_2}{3}. \end{align} $$

For the third term, we note that ${\mathcal E}(z)=\varphi _{2h}^{-1}\circ {\hat {\mathcal E}}\circ \varphi _{2h}$ on the set $\mathcal O \supset \tilde V_h \supset \overline {\tilde U} \supset \overline U_{R" - 5\varepsilon _1} \supset \overline U_{R-\delta (\varepsilon _1)}$ (where we remind the reader that $\mathcal O$ is an $\varepsilon _1$ -neighborhood of $\overline {\tilde V}_h$ in the hyperbolic metric of U) so that ${\mathcal E}$ and $\varphi _{2h}^{-1}\circ {\hat {\mathcal E}}\circ \mathbf {Q_1}$ differ in the first mapping of the composition. We still have $\mathbf {Q_1}(z)\in \varphi _{2h}(U_{R"-3\varepsilon _1})$ by Claim 5.22, and clearly $\varphi _{2h}(z)\in \varphi _{2h}(\overline U_{R"-5\varepsilon _1})\subset \varphi _{2h}(U_{R"-3\varepsilon _1})$ . We need to take care to ensure that we have at least a local version of hyperbolic convexity when it comes to applying the hyperbolic M-L estimates for ${\hat {\mathcal E}}$ and $\varphi _{2h}^{-1}$ . By equation (5.24) (and the fact that $K_2, K_3> 1$ ), we have that $\mathbf {Q_1}(z) \in \Delta _U (\varphi _{2h}(z), \varepsilon _2)$ . Since by equation (5.28), $\varepsilon _2 \le \tilde \varepsilon _2 \le 1$ , it follows from equation (5.19) that this hyperbolic disc is in turn contained in $U_{R_2 + 1}$ .

Recall that by Claim 5.21, we had $\eta _2$ depending only on $\varepsilon _1$ , $h_0$ , $r_0$ , and $R_0$ for which we had, in particular, $\|(\varphi _{2h}^{-1})^{\natural }\|_{\Delta _U(0,R_2+2)}\leq \eta _2$ . If we now apply the hyperbolic convexity lemma (Lemma 2.8) and the hyperbolic M-L estimates (Lemma 2.9) for the function $\varphi _{2h}^{-1}$ on the ball $\Delta _U (\varphi _{2h}(z), \varepsilon _2)$ , we have that $\varphi _{2h}^{-1} (\Delta _U (\varphi _{2h}(z),\varepsilon _2)) \subset \Delta _U(z, \eta _2 \varepsilon _2) \subset \Delta _U(z, \varepsilon _1)$ , the last inclusion following from equation (5.28), which implies that $\varepsilon _2 \le \tilde \varepsilon _2 \le {\varepsilon _1}/{\eta _2}$ . Thus, $\varphi _{2h}^{-1} (\Delta _U (\varphi _{2h}(z),\varepsilon _2)) \subset \Delta _U(z, \varepsilon _1) \subset U_{R" - 4 \varepsilon _1} \subset U_{R" - 3 \varepsilon _1}$ so that $ \Delta _U (\varphi _{2h}(z), \varepsilon _2) \subset \varphi _{2h}(U_{R" - 3 \varepsilon _1})$ . We also know using equation (5.32) that $|{\hat {\mathcal E}}^{\natural }|$ is bounded above on $ \varphi _{2h}(U_{R"-2\varepsilon _1}) \supset \varphi _{2h}(U_{R"-3\varepsilon _1}) \supset \Delta _U (\varphi _{2h}(z), \varepsilon _2)$ .

Thus, by equations (5.19) and (5.36), we have ${\hat {\mathcal E}}(\Delta _U (\varphi _{2h}(z), \varepsilon _2)) \subset {\hat {\mathcal E}}(\varphi _{2h}(U_{R"-3\varepsilon _1})) \subset \varphi _{2h}(U_{R"-2\varepsilon _1})\subset U_{R_2}\subset U_{R_2 + 2}$ so that, in particular, ${\hat {\mathcal E}}(\varphi _{2h}(z))$ and ${\hat {\mathcal E}}(\mathbf {Q_1}(z))$ both lie in $U_{R_2 +2}$ , while we know $|(\varphi _{2h}^{-1})^{\natural }|$ is bounded above on $ U_{R_2 + 2}$ using equation (5.38). Then, using equations (5.24), (5.32), and (5.38), and combining the hyperbolic convexity lemma (Lemma 2.8) and the hyperbolic M-L estimates (Lemma 2.9), applied first to ${\hat {\mathcal E}}$ on $\Delta _U (\varphi _{2h}(z),\varepsilon _2) \subset \varphi _{2h}(U_{R"-2\varepsilon _1})$ and then to $\varphi _{2h}^{-1}$ on $U_{R_2+2}$ , we have

(5.46) $$ \begin{align} \rho_U(\varphi_{2h}^{-1}\circ{\hat {\mathcal E}}\circ\mathbf{Q_1}(z),{\mathcal E}(z))&<K_3\cdot K_2\cdot \frac{\varepsilon_2}{3K_2 K_3}\notag \\ &=\frac{\varepsilon_2}{3}. \end{align} $$

Finally, using equations (5.43), (5.44), (5.45), and (5.46), we have

$$ \begin{align*} \rho_U(\mathbf{Q}(z),{\mathcal E}(z))<\varepsilon_2, \end{align*} $$

which establishes part (ii) in the statement and completes the proof of Phase II.

Before going on to §6, we close with a couple of observations. It is possible, if one wishes, to find a bound on the absolute value of the hyperbolic derivative of the composition $\mathbf {Q}$ above on $\overline U_{R" - \delta (\varepsilon _1)}$ which is uniform in terms of the constants $\kappa $ , $\varepsilon _1$ , $\varepsilon _2$ , $h_0$ , $r_0$ , and $R_0$ (the hardest part of this is controlling the hyperbolic derivative of $\mathbf {Q_1}$ , which can best be done using equation (5.24) and Claim 5.21 combined with Lemma A.4 and the version of Cauchy’s integral formula for derivatives; see, e.g., [Con78, Corollary IV.5.9]).

However, we do not actually require estimates on the size of $\mathbf {Q}^{\natural }$ . The reason for this is that the purpose of Phase II is to correct the error from a previous Phase I (Lemma 4.8) approximation, which essentially resets the error of which we need to keep track. However, as we saw, this Phase II correction itself generates an error which is then passed through the next Phase I approximation. To control this, then, we do need an estimate on the hyperbolic derivative of the Phase I composition (which is item (4) in the statement of Phase I).

6. Proof of the main theorem

In this section, we prove Theorem 1.3. The proof of the theorem will follow from a large inductive argument. First, however, we need one more technical lemma. Recall the Siegel disc U for P and that, for $R> 0$ , $U_R = \Delta _U(0,R)$ is used to denote the hyperbolic disc of radius R about $0$ with respect to the hyperbolic metric of U.

Lemma 6.1. (The Jordan curve argument)

Let U and $U_R$ be as above. Given $0<\varepsilon <R$ , suppose g is a univalent function defined on a neighborhood of $\overline U_R$ such that $g(0)=0$ and $\rho _U(g(z),z) \le \varepsilon $ on $\partial U_R$ . Then, $g(U_R)\supset U_{R-\varepsilon }$ .

Proof. The function g is a homeomorphism and is bounded on $\overline U_R$ , so that it maps $\partial U_R$ to $g(\partial U_R)=\partial (g(U_R))$ which is a Jordan curve in $\mathbb C$ , while $U_R$ gets mapped to the bounded complementary component of this Jordan curve in view of the Jordan curve theorem (e.g. [Reference MunkresMun00, Theorem 63.4] or [Reference NewmanNew51, Theorem V.10.2]). Then $0 = g(0)$ lies in $g(U_R)$ and thus inside $\partial (g(U_R))$ , and since this curve avoids $U_{R-\varepsilon }$ , all of the connected set $U_{R-\varepsilon }$ lies inside $\partial (g(U_R))$ . Hence, $U_{R-\varepsilon }\subset g(U_R)$ .

Lemma 6.2. There exist:

  (a) a sequence of positive real numbers $\{\varepsilon _k \}_{k=1}^{\infty }$ which converges to $0$ ;

  (b) a sequence $\{J_i\}_{i=1}^{\infty }$ of natural numbers, a positive constant $\kappa _0 \ge 576$ , and a sequence of compositions of quadratic polynomials $\{\mathbf {Q^i} \}_{i=1}^{\infty }$ ;

  (c) a sequence of strictly decreasing hyperbolic radii $\{R_i \}_{i=0}^{\infty }$ ; and

  (d) a sequence of strictly increasing hyperbolic radii $\{S_i \}_{i=0}^{\infty }$ ,

such that:

  (1) for each $i\geq 0$ , $S_i< {1}/{10} < \tfrac 15 < R_i$ ;

  (2) for each $i \ge 1$ , $\mathbf {Q^i}$ is a composition of $J_i$ (17+ $\kappa _0$ )-bounded quadratic polynomials with $\mathbf {Q^i}(0)=0$ ;

  (3) for each $i \ge 1$ , $\mathbf {Q^i}\circ \cdots \cdots \circ \mathbf {Q^1}(U_{{1}/{20}})\subset U_{S_i}\subset U_{{1}/{10}} $ ; and

  (4) for each $i \ge 1$ and $1 \leq m \le J_i$ , let $\mathbf {Q_{m}^{i}}$ denote the partial composition of the first m quadratics of $\mathbf {Q^i}$ ; then, for all $f \in {\mathcal S}$ and for all odd $i =2k+1$ , there exists $1\leq m_k \leq J_i$ such that, for all $z\in U_{{1}/{20}}$ , we have

    $$ \begin{align*} \rho_U(\mathbf{Q^i_{m_k}}\circ \mathbf{Q^{i-1}}\circ \cdots \cdots \circ \mathbf{Q^1}(z),f(z))<\varepsilon_{k+1}. \end{align*} $$

Let $J_i$ be the integers and $\mathbf {Q}^i$ the polynomial compositions from part (b) of the statement above. For $i=0$ , set $T_0 = 0$ and, for each $i \ge 1$ , set $T_i = \sum _{j=1}^i J_j$ . Given this, we define a sequence $\{P_m \}_{m=1}^{\infty }$ in the following natural way: for $m \ge 1$ , let $i \ge 1$ be the largest index such that $T_{i-1} < m$ so that $T_{i-1} < m \le T_i = T_{i-1} + J_i$ . Then simply let $P_m$ be the $(m - T_{i-1})$ th quadratic in the composition $\mathbf {Q}^i$ (which is a composition of $J_i$ quadratic polynomials).
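Purely as an illustration of this bookkeeping (the helper below and its names are ours and are not part of the construction), the translation from a global index $m$ to the block $i$ and the position $m - T_{i-1}$ within $\mathbf {Q}^i$ can be sketched as follows.

```python
def locate_quadratic(m, block_lengths):
    """Given m >= 1 and the block lengths [J_1, J_2, ...], return (i, j)
    with T_{i-1} < m <= T_i, where T_i = J_1 + ... + J_i, so that P_m is
    the j-th quadratic (j = m - T_{i-1}) of the composition Q^i."""
    T_prev = 0  # running value of T_{i-1}
    for i, J_i in enumerate(block_lengths, start=1):
        if m <= T_prev + J_i:
            return i, m - T_prev
        T_prev += J_i
    raise ValueError("m exceeds the total number of quadratics supplied")
```

For example, with blocks of lengths $J_1 = 3$ and $J_2 = 2$ , the fourth polynomial $P_4$ is the first quadratic of $\mathbf {Q}^2$ .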

The next lemma then follows as an immediate corollary (using items (2), (3), and (4) above).

Lemma 6.3. There exists a sequence of quadratic polynomials $\{P_m\}_{m=1}^{\infty }$ such that the following hold:

  (1) $\{P_m\}_{m=1}^{\infty }$ is (17+ $\kappa _0$ )-bounded;

  (2) $Q_m(U_{{1}/{20}})\subset U_{{1}/{10}}$ for infinitely many m;

  (3) for all $f\in {\mathcal S}$ , there exists a subsequence $\{Q_{m_k}\}_{k=1}^{\infty }$ which converges uniformly to f on $U_{{1}/{20}}$ as $k \rightarrow \infty $ .

Proof of Lemma 6.2

We begin by fixing the values of the constants in the statements of Phases I and II (Lemmas 4.8 and 5.17). Starting with Phase II, let $h_0 = 1$ be the maximum value for the Green’s function G and let $r_0 = {1}/{20}$ , $R_0 = \tfrac {1}{4} < {\pi }/{2}$ be the lower and upper bounds for the hyperbolic radii we consider in applying Phase II. We will also use $R_0 = \tfrac {1}{4}$ when we apply Phase I and we set $\kappa = \kappa _0 = \kappa _0 (\tfrac {1}{4}) \ge 576$ for both Phases I and II.

Let $C:=7$ be the bound on the hyperbolic derivative from item (4) of the statement of Phase I and let $\tilde \varepsilon _1> 0$ and $\delta (x)$ be the function defined on $(0, \tilde \varepsilon _1]$ measuring loss of hyperbolic radius from the statement of Phase II, both of which are determined by the values of $h_0$ , $r_0$ , and $R_0$ which we have just fixed. The reader might find it helpful to consult the block diagram for the scheme of the proof in Figure 7 for orientation in what follows. The proof of Lemma 6.2 will follow quickly from the following claim, which we prove by induction.

Figure 7 A block diagram illustrating the induction scheme.

Claim 6.4. There exist inductively defined infinite sequences of positive real numbers $\{\varepsilon _k \}_{k=1}^{\infty }$ , $\{\eta _k \}_{k=1}^{\infty }$ , and $\{\sigma _k \}_{k=1}^{\infty }$ , sequences of hyperbolic radii $\{R_i \}_{i=0}^{\infty }$ and $\{S_i \}_{i=0}^{\infty }$ , integers $\{J_i \}_{i=1}^{\infty }$ , and polynomial compositions $\{\mathbf {Q^i} \}_{i=1}^{\infty }$ such that, for each $n \in \mathbb N$ , the following hold.

  (i) The sequences $\{\varepsilon _k \}_{k=1}^{n}$ , $\{\eta _k \}_{k=1}^{n}$ , and $\{\sigma _k \}_{k=1}^{n}$ satisfy

    $$ \begin{align*} \eta_k= \left\{\! \begin{array}{ll} \displaystyle\frac{4\varepsilon_1}{3} + \delta(\varepsilon_1), & k=1, \\ \displaystyle\bigg(\frac{4}{3}+\frac{1}{3C}\bigg)\varepsilon_k + \delta(\varepsilon_k), & 2 \leq k \leq n, \\ \end{array} \right. \end{align*} $$
    $$ \begin{align*} \hspace{-1.2cm}\sigma_k= \left\{\! \begin{array}{ll} \displaystyle\frac{4\varepsilon_1}{3}, & k=1, \\ \displaystyle\bigg(\frac{4}{3}+\frac{1}{3C}\bigg)\varepsilon_k, & 2 \leq k \leq n, \\ \end{array} \right. \end{align*} $$
    where in addition, we require that $0< \varepsilon _k < \sigma _k<\eta _k< {1}/({40\cdot 2^k})$ and that $\varepsilon _k \le \tilde \varepsilon _1$ for each $1\leq k \leq n$ .
  (ii) The sequence $\{R_i \}_{i=0}^{2n-1}$ is strictly decreasing and is given by $R_0=\tfrac 14$ , $R_1=\tfrac 14 - ({\varepsilon _1}/{3})$ , and

    $$ \begin{align*} \hspace{-.4cm}R_i= \left\{\! \begin{array}{ll} \displaystyle \frac{1}{4}-\bigg(\sum_{j=1}^{k}\eta_j\bigg) -\frac{\varepsilon_{k+1}}{3C}, & i=2k \text{ for some } 1\leq k \leq n-1, \\ &\\ \displaystyle\frac{1}{4}-\bigg(\sum_{j=1}^{k}\eta_j\bigg) - \bigg(\frac{1}{3}+\frac{1}{3C}\bigg)\varepsilon_{k+1}, & i=2k+1 \text{ for some }\\ & 1\leq k \leq n-1.\\ \end{array} \right. \end{align*} $$
    The sequence $\{S_i \}_{i=0}^{2n-1}$ is strictly increasing and is given by $S_0= {1}/{20}$ , $S_1= ({1}/{20}) + ({\varepsilon _1}/{3})$ , and
    $$ \begin{align*} S_i= \left\{\! \begin{array}{ll} \displaystyle\frac{1}{20}+\bigg(\sum_{j=1}^{k}\sigma_j\bigg) +\frac{\varepsilon_{k+1}}{3C}, & i=2k \text{ for some } 1\leq k \leq n-1, \\ &\\ \displaystyle\frac{1}{20}+\bigg(\sum_{j=1}^{k}\sigma_j\bigg) + \bigg(\frac{1}{3}+\frac{1}{3C}\bigg)\varepsilon_{k+1}, & i=2k+1 \text{ for some } \\ &1\leq k \leq n-1.\\ \end{array} \right. \end{align*} $$
  (iii) ${1}/{20} \le S_i < {1}/{10} < \tfrac 15 < R_i \le \tfrac 14$ for each $0 \leq i \leq 2n-1$ .

  (iv) For each $1\leq i \leq 2n-1$ , $\mathbf {Q^i}$ is a (17+ $\kappa _0$ )-bounded composition of $J_i$ quadratic polynomials with $\mathbf {Q^i}(0)=0$ .

  (v) For each $1 \le i \le 2n-1$ , the branch of $(\mathbf {Q^i})^{-1}$ which fixes $0$ is well defined and univalent on $U_{R_i}$ , and maps $U_{R_i}$ inside $U_{R_{i-1}}$ . The branch of $(\mathbf {Q^i}\circ \cdots \cdots \circ \mathbf {Q^2} \circ \mathbf {Q^1})^{-1}$ which fixes $0$ is then also well defined and univalent on $U_{R_{i}}$ .

  (vi) For each $1\leq i \leq 2n-1$ , $\mathbf {Q^i}$ is univalent on $U_{S_{i-1}}$ and

    $$ \begin{align*} \mathbf{Q^i}(U_{S_{i-1}}) \subset U_{S_{i}}. \end{align*} $$

    Thus, $\mathbf {Q^i} \circ \cdots \cdots \circ \mathbf {Q^1}$ is univalent on $U_{{1}/{20}}$ and

    $$ \begin{align*} \mathbf{Q^i} \circ \cdots \cdots \circ \mathbf{Q^1}(U_{{1}/{20}})\subset U_{S_{i}}\subset U_{{1}/{10}}. \end{align*} $$
  (vii) If $i=2k$ with $1\leq k \leq n-1$ is even, and $z \in U_{R_{i-1}-\delta (\varepsilon _k)}$ ,

    $$ \begin{align*} \rho_U(\mathbf{Q^{i}}(z),(\mathbf{Q^{i-1}}\circ \cdots \cdots \circ \mathbf{Q^{1}})^{-1}(z) )<\frac{\varepsilon_{k+1}}{3C}, \end{align*} $$
    where we use the same branch of $(\mathbf {Q^{i-1}}\circ \cdots \cdots \circ \mathbf {Q^{1}})^{-1}$ which fixes $0$ from part (v) above.

    For the final two hypotheses, let $i=2k+1$ with $0\leq k \leq n-1$ be odd.

  (viii) If $z \in U_{R_i}$ , using the same inverse branch mentioned in statement (v), we have

    $$ \begin{align*} \rho_U((\mathbf{Q^{i}}\circ \cdots \cdots \circ \mathbf{Q^{1}})^{-1}(z),z)<\varepsilon_{k+1}. \end{align*} $$
  (ix) If, for each $1\leq m \leq J_i$ , $\mathbf {Q_{m}^{i}}$ denotes the partial composition of the first m quadratics of $\mathbf {Q^{i}}$ , then for all $f \in {\mathcal S}$ , there exists $1\leq m \leq J_i$ , such that, for all $z \in U_{{1}/{20}}$ , we have

    $$ \begin{align*} \rho_U(\mathbf{Q^i_m}\circ \mathbf{Q^{i-1}}\circ \cdots \cdots \circ \mathbf{Q^{1}}(z), f(z))<\varepsilon_{k+1}. \end{align*} $$

Remarks.

  (1) Statements (i)–(iii) are designed for keeping track of the domains on which estimates are holding and, in particular, to ensure that these domains do not get too small and that the constants $\varepsilon _i$ which keep track of the accuracy of the approximations do indeed tend to $0$ . The outer radii $R_i$ are chosen primarily so that the image of $U_{R_i}$ under the inverse branch of $\mathbf {Q^i}$ which fixes $0$ is contained in $U_{R_{i-1}}$ (this is statement (v) above). This allows us to compose the inverses of these compositions and then approximate this composition of inverses by means of Phase II. The inner radii $S_i$ are chosen primarily so that the image of $U_{S_{i-1}}$ under the polynomial composition $\mathbf {Q^i}$ lies inside $U_{S_i}$ (this is statement (vi) above). This allows us to compose these polynomial compositions and gives us our iterates which remain bounded and approximate the elements of ${\mathcal S}$ .

  (2) Statement (vii) is a ‘Phase II’ statement regarding error correction using Phase II of the inverse of an earlier polynomial composition. Effectively, the Phase II correction compensates for the error in the previous Phase I composition, whose deviation from the identity is measured in statement (viii) above.

  (3) Statements (viii) and (ix) are ‘Phase I’ statements. Statement (viii) is a bound on the error to be corrected by the next Phase II approximation. Statement (ix) is the key element for proving Theorem 1.3.

  (4) It follows readily from statement (i) that the sequence $\{\varepsilon _i\}_{i=1}^{\infty }$ converges to $0$ exponentially fast, which gives item (a) in the statement of the lemma. Item (b) follows from statement (iv) and our choice of $\kappa _0$ , while items (c) and (d) follow from statement (ii).

  (5) Part (1) of the second part of the statement of the lemma follows from statement (iii) above while part (2) of the statement follows from statement (iv). Lastly, part (3) follows from statement (vi), while part (4) follows from statement (ix).
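As a sanity check (this computation is ours, expanding the bookkeeping implicit in the claim), statements (i) and (ii) combine to give statement (iii): each correction term in the formulas for $R_i$ and $S_i$ is bounded by $\eta _{k+1} < {1}/({40\cdot 2^{k+1}})$ , so that, for $1 \le k \le n-1$ ,

$$ \begin{align*} R_{2k+1} &= \frac{1}{4}-\sum_{j=1}^{k}\eta_j - \bigg(\frac{1}{3}+\frac{1}{3C}\bigg)\varepsilon_{k+1}> \frac{1}{4}-\sum_{j=1}^{\infty}\frac{1}{40\cdot 2^{j}} = \frac{1}{4}-\frac{1}{40}=\frac{9}{40}> \frac{1}{5},\\ S_{2k+1} &= \frac{1}{20}+\sum_{j=1}^{k}\sigma_j + \bigg(\frac{1}{3}+\frac{1}{3C}\bigg)\varepsilon_{k+1} < \frac{1}{20}+\sum_{j=1}^{\infty}\frac{1}{40\cdot 2^{j}} = \frac{1}{20}+\frac{1}{40}=\frac{3}{40}< \frac{1}{10}, \end{align*} $$

with the even indices $i=2k$ handled identically (the correction term there being ${\varepsilon _{k+1}}/{3C}< \eta _{k+1}$ ).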

Proof. Base case: $n=1$ . Recall the bound $\tilde \varepsilon _1> 0$ and function $\delta (x)$ defined on $(0, \tilde \varepsilon _1]$ whose existence is given by Phase II (recall that we have fixed the values of $h_0$ , $r_0$ , $R_0$ at the start of the proof) and that $\delta (x) \to 0$ as $x \to 0_+$ . We can then pick $0< \varepsilon _1 \, \le \, \tilde \varepsilon _1$ such that if we set

$$ \begin{align*} \eta_{1}&=\frac{4}{3}\varepsilon_{1}+\delta(\varepsilon_1), \\ \sigma_{1}&=\frac{4}{3}\varepsilon_1, \end{align*} $$

then we can ensure that $0< \varepsilon _1 < \sigma _1<\eta _1< {1}/({40\cdot 2}) = {1}/{80}$ . This verifies statement (i). Now recall that we already set $R_0=\tfrac 14$ , let $S_0= {1}/{20}$ , and set

$$ \begin{align*} R_1&= \frac{1}{4}-\frac{\varepsilon_1}{3},\\ S_1&= \frac{1}{20}+\frac{\varepsilon_1}{3}, \end{align*} $$

which verifies statement (ii) and then statement (iii) follows easily.
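For instance (expanding the ‘follows easily’ with our own arithmetic), since $\varepsilon _1 < \eta _1 < {1}/{80}$ , we have

$$ \begin{align*} S_1 = \frac{1}{20}+\frac{\varepsilon_1}{3}< \frac{1}{20}+\frac{1}{240}=\frac{13}{240}< \frac{1}{10}, \qquad R_1 = \frac{1}{4}-\frac{\varepsilon_1}{3}> \frac{1}{4}-\frac{1}{240}=\frac{59}{240}> \frac{1}{5}. \end{align*} $$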

Applying Lemma 2.1, we choose an ${\varepsilon _1}/{3}$ -net $\{f_0, f_1, \ldots , f_{N_1+1} \}$ for ${\mathcal S}$ (consisting of elements of ${\mathcal S}$ ) on $U_{1/2}$ , where $N_1 = N_1 (\varepsilon _1) \in \mathbb {N}$ , and with $f_0=f_{N_1+1}=\mathrm {Id}$ . Apply Phase I (Lemma 4.8) for this collection of functions with $R_0=\tfrac 14$ , $\varepsilon = {\varepsilon _1}/{3}$ , to obtain $M_1 \in \mathbb N$ and a ( $17 + \kappa _0$ )-bounded finite sequence $\{P_m\}_{m=1}^{(N_1 +1)M_1}$ of quadratic polynomials, both of which depend directly on $R_0$ , $\kappa _0$ , $N_1$ , the functions $\{f_i\}_{i=0}^{N_1 + 1}$ , and $\varepsilon $ , and thus ultimately on $\varepsilon _1$ and $\{f_i\}_{i=0}^{N_1 + 1}$ (recall that we set $h_0=1$ , $r_0 = {1}/{20}$ , $R_0 = \tfrac {1}{4}$ , as well as $\kappa _0 = \kappa _0(1/4)$ at the start of the proof) such that, for $1\leq i \leq N_1+1$ , if we let $\mathbf {Q}_m^1$ , $1 \le m \le (N_1+1)M_1$ , denote the composition of the first m polynomials of this sequence, we have:

  (1) $\mathbf {Q^1_{iM_1}}(0)=0$ ;

  (2) $\mathbf {Q^1_{iM_1}}$ is univalent on $U_{1/2}$ ;

  (3) $\rho _{U}(f_i(z), \mathbf {Q^1_{iM_1}}(z))< {\varepsilon _1}/{3}$ on $U_{1/2}$ ;

  (4) $\|(\mathbf {Q^1_{iM_1}})^{\natural } \|_{U_{1/4}} \leq C$ .

Now set $\mathbf {Q^1}=\mathbf {Q^1_{(N_1+1)M_1}}$ . By item (1), $\mathbf {Q^1}(0)=0$ and, as Phase I guarantees $\mathbf {Q^1}$ is ( $17 + \kappa _0$ )-bounded, on setting $J_1 = (N_1+1)M_1$ , statement (iv) is verified.

Now we have that each $\mathbf {Q^1_{iM_1}}$ is univalent on $U_{1/2}\supset {\overline U_{1/4}}={\overline U_{R_0}}$ by item (2) above. Further, by item (3), if $\rho _U(0,z)=\tfrac 14$ , then $\rho _U(\mathbf {Q^1}(z),z)< {\varepsilon _1}/{3}$ , so by item (1) and the Jordan curve argument (Lemma 6.1), $\mathbf {Q^1}(U_{R_0})\supset U_{R_1}$ . The branch of $(\mathbf {Q^1})^{-1}$ which fixes $0$ is then well defined and univalent on $U_{R_1}$ and maps this set inside $U_{R_0}$ . With this, we have verified statement (v).

Likewise, if $\rho _U(0,z) < {1}/{20}$ , then $\rho _U(\mathbf {Q^1}(z),0)< ({1}/{20}) + ({\varepsilon _1}/{3})$ . This implies $\mathbf {Q^1}(U_{S_0})\subset U_{S_1}$ , where $S_1 < {1}/{10}$ by statement (iii), while by item (2) above, $\mathbf {Q^1}$ is univalent on $U_{1/2}\supset U_{{1}/{20}}$ , which verifies statement (vi). We observe that hypothesis (vii) is vacuously true as it is concerned only with Phase II.

Now let $z \in U_{R_1}$ . Using the same branch of $(\mathbf {Q^1})^{-1}$ as in statement (v), it follows from statement (v) that we can write $z=\mathbf { Q^{1}}(w)$ for some $w\in U_{R_0}$ and that by item (2) above, this w is unique. Since $f_{N_1+1}= \mathrm {Id}$ , it follows from item (3) above that

$$ \begin{align*} \rho_U((\mathbf{Q^1})^{-1}(z),z)&=\rho_U(w,\mathbf{Q^1}(w)) \\ &<\frac{\varepsilon_1}{3} \\ &<\varepsilon_1 \end{align*} $$

which verifies statement (viii).

Finally, let $z \in U_{{1}/{20}}$ . For $f \in {\mathcal S}$ , let $f_i$ be a member of the net for which $\rho _U(f(w),f_i(w))< {\varepsilon _1}/{3}$ on $U_{1/2}\supset U_{{1}/{20}}$ and, using item (3), let $\mathbf {Q_{iM_1}^1}$ be a partial composition which satisfies $\rho _U(\mathbf {Q_{iM_1}^1}(w),f_i(w))< {\varepsilon _1}/{3}$ on $U_{1/2}\supset U_{{1}/{20}}$ . Then, on setting $m = iM_1$ ,

$$ \begin{align*} \rho_U(\mathbf{Q_m^1}(z),f(z))&\leq \rho_U(\mathbf{Q_m^1}(z),f_i(z))+\rho_U(f_i(z),f(z)) \\ &\leq \frac{\varepsilon_1}{3}+\frac{\varepsilon_1}{3} \\ &<\varepsilon_1, \end{align*} $$

which verifies statement (ix) and completes the base case.

Induction hypothesis: Assume statements (i)–(ix) hold for some arbitrary $n \ge 1$ .

Induction step: We now show this is true for $n+1$ .

Since the above hypotheses hold for n, we have already defined $R_{2n-1}=R_{2n-2}- ({\varepsilon _n}/{3})$ . Using statement (viii) for n with $i=2n-1$ , we have

(6.1) $$ \begin{align} \rho_U&((\mathbf{Q^{2n-1}}\circ \cdots \cdots \circ \mathbf{Q^{1}})^{-1}(z),z)<\varepsilon_n,\quad z \in U_{R_{2n-1}}, \end{align} $$

where of course we are using the branch of $(\mathbf {Q^{2n-1}}\circ \cdots \cdots \circ \mathbf {Q^{1}})^{-1}$ from statement (v) which fixes $0$ .

Recalling that the function $\delta : (0, \tilde \varepsilon _1] \mapsto (0, ({r_0}/{4}))$ in Phase II (Lemma 5.17) has a limit of $0$ from the right, we can pick $\varepsilon _{n+1}>0$ sufficiently small such that $\varepsilon _{n+1} \le \tilde \varepsilon _1$ , and if we set

(6.2) $$ \begin{align} \eta_{n+1}&=\bigg( \frac{4}{3}+\frac{1}{3C}\bigg)\varepsilon_{n+1}+\delta(\varepsilon_{n+1}), \end{align} $$
(6.3) $$ \begin{align} \sigma_{n+1}&=\bigg( \frac{4}{3}+\frac{1}{3C} \bigg)\varepsilon_{n+1}, \end{align} $$

then we can ensure

(6.4) $$ \begin{align} 0&< \varepsilon_{n+1} < \sigma_{n+1}<\eta_{n+1}<\frac{1}{40\cdot 2^{n+1}}, \end{align} $$

which verifies statement (i) for $n+1$ . If we now apply Phase II, with $\kappa _0$ , $h_0$ , $r_0$ , $R_0$ as above, $R=R_{2n-1}$ , $\varepsilon _1=\varepsilon _n$ (recall that $\varepsilon _n \le \tilde \varepsilon _1$ in view of hypothesis (i) for n), $\varepsilon _2= {\varepsilon _{n+1}}/{3C}$ , and ${\mathcal E} = (\mathbf {Q^{2n-1}}\circ \cdots \circ \mathbf {Q^{1}})^{-1}$ , and make use of equation (6.1), we can find a $(17 + \kappa _0)$ -bounded composition of quadratic polynomials $\mathbf {Q^{2n}}$ which depends immediately on $\kappa _0$ , $\varepsilon _n$ , $\varepsilon _{n+1}$ , R, and ${\mathcal E}$ , and thus ultimately on $\{\varepsilon _k\}_{k=1}^{n+1}$ and $\{\mathbf { Q^i}\}_{i=1}^{2n-1}$ , such that $\mathbf {Q^{2n}}$ is univalent on a neighborhood of $\overline U_{R_{2n-1}-\delta (\varepsilon _n)}$ , satisfies $\mathbf {Q^{2n}}(0)=0$ , and

(6.5) $$ \begin{align} \rho_U&(\mathbf{Q^{2n}}(z),(\mathbf{Q^{2n-1}}\circ \cdots \circ \mathbf{Q^1})^{-1}(z))<\frac{\varepsilon_{n+1}}{3C},\quad z\in \overline U_{R_{2n-1}-\delta(\varepsilon_n)}, \end{align} $$

which verifies statement (vii) for $n+1$ . Note that, because of the upper bound ${\tilde \varepsilon _2}$ in the statement of Phase II, we may need to make $\varepsilon _{n+1}$ smaller. However, this does not affect the estimates on $\eta _{n+1}$ or $\sigma _{n+1}$ or any of the other dependencies for $\mathbf {Q^{2n}}$ above. Finally, $\mathbf {Q^{2n}}(0)=0$ from above, so that, if we let $J_{2n}$ be the number of quadratics in $\mathbf {Q^{2n}}$ , we see that the first half of statement (iv) for $n+1$ is also verified. Now set

(6.6) $$ \begin{align} R_{2n}&=R_{2n-1}-\varepsilon_n- \delta(\varepsilon_n) - \frac{\varepsilon_{n+1}}{3C}, \end{align} $$
(6.7) $$ \begin{align} S_{2n}&=S_{2n-1}+\varepsilon_n+\frac{\varepsilon_{n+1}}{3C}. \end{align} $$

We observe that the $\varepsilon _n$ change in radius above is required in view of statement (viii), which measures how much the function $(\mathbf {Q^{2n-1}}\circ \cdots \circ \mathbf {Q^1})^{-1}$ which we are approximating moves points on $U_{R_{2n-1}-\delta (\varepsilon _n)}$ ; the $\delta (\varepsilon _n)$ change is the loss of domain incurred by Phase II; and the additional ${\varepsilon _{n+1}}/{3C}$ accounts for the error in the Phase II approximation (the factor of C arising from the fact that this error needs to be passed through a subsequent Phase I to verify statement (ix) for $n+1$ ).

A final observation worth making is that here we are dealing with a loss of radius in passing from $R_{2n-1}$ to $R_{2n}$ arising from two distinct sources: the initial loss of domain by an amount $\delta (\varepsilon _n)$ arising from the need to make a Phase II approximation, and the subsequent losses of $\varepsilon _n$ and ${\varepsilon _{n+1}}/{3C}$ which arise via the Jordan curve argument (Lemma 6.1) due to the amount that $\mathbf {Q^{2n}}$ moves points on $U_{R_{2n-1}-\delta (\varepsilon _n)}$ (for details, see equation (6.10) below as well as the discussions immediately preceding and succeeding this inequality).

One easily checks, using hypotheses (i) and (ii) for n, that

(6.8) $$ \begin{align} R_{2n}&=\bigg(\frac{1}{4}-\bigg(\sum_{j=1}^{n-1}\eta_j\bigg) - \bigg(\frac{1}{3}+\frac{1}{3C}\bigg)\varepsilon_n \bigg)-\varepsilon_n-\delta(\varepsilon_n)-\frac{\varepsilon_{n+1}}{3C} \nonumber \\ &=\frac{1}{4}-\bigg(\sum_{j=1}^{n}\eta_j\bigg) -\frac{\varepsilon_{n+1}}{3C}, \end{align} $$
(6.9) $$ \begin{align} S_{2n}&=\bigg(\frac{1}{20}+\bigg(\sum_{j=1}^{n-1}\sigma_j\bigg) + \bigg(\frac{1}{3}+\frac{1}{3C}\bigg)\varepsilon_n \bigg) +\varepsilon_n+\frac{\varepsilon_{n+1}}{3C} \nonumber\\ &=\frac{1}{20}+\bigg(\sum_{j=1}^{n}\sigma_j\bigg) +\frac{\varepsilon_{n+1}}{3C}, \end{align} $$

which verifies the first half of statement (ii) for $n+1$ . We also observe at this stage that the total loss of radius on passing from $R_{2n-2}$ to $R_{2n}$ is $({\varepsilon _n}/{3}) + ( \varepsilon _n + \delta (\varepsilon _n) + {\varepsilon _{n+1}}/{3C} ) = ( {4\varepsilon _n}/{3} + \delta (\varepsilon _n)) + {\varepsilon _{n+1}}/{3C}$ , which explains the form of the constants $\eta _i$ in statement (i) of the induction hypothesis. A similar argument also accounts for the other constants $\sigma _i$ in statement (i). Further, clearly $R_{2n} \le \tfrac {1}{4}$ and, using statement (iii) for n and equation (6.4),

$$ \begin{align*} R_{2n}&=\frac{1}{4}-\bigg( \sum_{j=1}^{n}\eta_j \bigg) -\frac{\varepsilon_{n+1}}{3C}\\ &>\frac{1}{4}-\bigg(\sum_{j=1}^{n}\frac{1}{40\cdot 2^j}\bigg) -\frac{1}{40\cdot 2^{n+1}} \\ &=\frac{1}{4}-\frac{1}{40}\bigg(1-\frac{1}{2^n}+\frac{1}{2^{n+1}}\bigg) \\ &>\frac{1}{5}. \end{align*} $$
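The estimate just given can also be checked numerically. The following short script (an illustration only, with a function name of our own choosing; it is not part of the proof) evaluates the worst-case value of $R_{2n}$ permitted by statement (i) and equation (6.4) and confirms that it stays above $\tfrac 15$:

```python
# Sanity check of the lower bound R_{2n} > 1/5, using the worst-case
# values eta_j = 1/(40*2^j) and eps_{n+1}/(3C) = 1/(40*2^(n+1))
# allowed by statement (i) and equation (6.4).
def R_2n_lower(n):
    return 0.25 - sum(1 / (40 * 2**j) for j in range(1, n + 1)) - 1 / (40 * 2**(n + 1))

# 1/4 - (1/40)(1 - 1/2^(n+1)) > 1/4 - 1/40 = 9/40 > 1/5 for every n.
assert all(R_2n_lower(n) > 0.2 for n in range(1, 200))
```

The sum telescopes to $(1/40)(1 - 2^{-(n+1)})$, so the bound is uniform in n, exactly as the displayed computation shows.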

The calculation for $S_{2n}$ is similar, and thus we have verified the first half of statement (iii) for $n+1$ . Combining equations (6.1) and (6.5), we have, on $\overline U_{R_{2n-1}-{\delta (\varepsilon _n)}}$ ,

(6.10) $$ \begin{align} \rho_U(\mathbf{Q^{2n}}(z),z)&\leq \rho_U(\mathbf{Q^{2n}}(z),(\mathbf{Q^{2n-1}}\circ \cdots \circ \mathbf{Q^1})^{-1}(z)) \nonumber \\ &\quad +\rho_U((\mathbf{Q^{2n-1}}\circ \cdots \circ \mathbf{Q^1})^{-1}(z),z) \nonumber\\ &<\frac{\varepsilon_{n+1}}{3C}+\varepsilon_n. \end{align} $$

This, combined with the Jordan curve argument (Lemma 6.1), the fact that $\mathbf {Q^{2n}}(0)=0$ , and equation (6.6) implies that

(6.11) $$ \begin{align} \mathbf{Q^{2n}}(U_{R_{2n-1}-\delta(\varepsilon_n)}) \supset U_{R_{2n-1}-\delta(\varepsilon_n)- \varepsilon_n - ({\varepsilon_{n+1}}/{3C})} = U_{R_{2n}}, \end{align} $$

and, since from above $\mathbf {Q^{2n}}$ is univalent on a neighborhood of $\overline U_{R_{2n-1} - \delta (\varepsilon _n)}$ , the branch of $(\mathbf {Q^{2n}})^{-1}$ which fixes $0$ is well defined on $U_{R_{2n}}$ and maps this set inside $U_{R_{2n-1}-\delta (\varepsilon _n)} \subset U_{R_{2n-1}}$ , which verifies the first half of statement (v) for $n+1$ .

By equation (6.6), the first half of statement (iii) for $n+1$ , and statement (iii) for n, $R_{2n-1}-\delta (\varepsilon _n)> R_{2n} > \tfrac {1}{5} > S_{2n-1}$ so that $\mathbf {Q^{2n}}$ is univalent on $U_{S_{2n-1}}$ . It then follows using equations (6.7) and (6.10) that

$$ \begin{align*} \mathbf{Q^{2n}}(U_{S_{2n-1}})\subset U_{S_{2n-1}+\varepsilon_n+ {\varepsilon_{n+1}}/{3C}} =U_{S_{2n}}. \end{align*} $$

By the first half of statement (iii) for $n+1$ , $S_{2n} < {1}/{10}$ ; together with statement (vi) for n, this verifies half of statement (vi) for $n+1$ and finishes the Phase II portion of the induction step.

Now again apply Lemma 2.1 to construct an ${\varepsilon _{n+1}}/{3}$ -net $\{f_0, f_1, \ldots , f_{N_{n+1}+1} \}$ for ${{\mathcal S}}$ (which again consists of elements of ${\mathcal S}$ ) on $U_{1/2}$ , where we obtain $N_{n+1}=N_{n+1}(\varepsilon _{n+1})\in \mathbb {N}$ and require $f_0=f_{N_{n+1}+1}=\mathrm {Id}$ . We apply Phase I (Lemma 4.8) with $R_0 = \tfrac 14$ and $\varepsilon ={\varepsilon _{n+1}}/{3}$ for this collection of functions to obtain $M_{n+1} \in {\mathbb N}$ , and a $(17+\kappa _0)$ -bounded sequence of quadratic polynomials $\{P_m \}_{m=1}^{(N_{n+1}+1)M_{n+1}}$ both of which depend directly on $R_0$ , $\kappa _0$ , $N_{n+1}$ , the functions $\{f_i\}_{i=0}^{N_{n+1} + 1}$ , and $\varepsilon $ , and thus ultimately on $\varepsilon _{n+1}$ and $\{f_i\}_{i=0}^{N_{n+1} + 1}$ .

Now let $J_{2n+1}=M_{n+1}(N_{n+1}+1)$ be the number of quadratics and, similarly to before, denote the composition of the first m of these quadratics by $\mathbf {Q^{2n+1}_m}$ . By Phase I, these compositions satisfy, for each $1\le i \le N_{n+1}+1$ :

  1. (1) $\mathbf {Q^{2n+1}_{iM_{n+1}}}(0)=0$ ;

  2. (2) $\mathbf {Q^{2n+1}_{iM_{n+1}}}$ is univalent on $U_{1/2}$ ;

  3. (3) $\rho _U(f_i(z),\mathbf {Q^{2n+1}_{iM_{n+1}}}(z))< {\varepsilon _{n+1}}/{3}$ , $z\in U_{1/2}$ ;

  4. (4) $\| (\mathbf {Q_{iM_{n+1}}^{2n+1}})^{\natural } \|_{U_{1/4}} \leq C$ .

Now set $\mathbf {Q^{2n+1}}:=\mathbf {Q^{2n+1}_{(N_{n+1}+1)M_{n+1}}}$ . The polynomial composition $\mathbf {Q^{2n+1}}$ is then a $(17+\kappa _0)$ -bounded composition of $J_{2n+1}$ quadratic polynomials which by item (1) satisfies $\mathbf {Q^{2n+1}}(0)=0$ . This then verifies statement (iv) for $n+1$ .

Next, we define

(6.12) $$ \begin{align} R_{2n+1}&=R_{2n}-\frac{\varepsilon_{n+1}}{3}, \end{align} $$
(6.13) $$ \begin{align} S_{2n+1}&=S_{2n}+\frac{\varepsilon_{n+1}}{3}. \end{align} $$

We observe that the ${\varepsilon _{n+1}}/{3}$ change in radius above is required in view of item (3) above. One easily checks, using the above and equations (6.8) and (6.9),

$$ \begin{align*} R_{2n+1}&= \bigg(\frac{1}{4}-\sum_{j=1}^{n}\eta_j -\frac{\varepsilon_{n+1}}{3C} \bigg) - \frac{\varepsilon_{n+1}}{3} \\ &=\frac{1}{4}-\sum_{j=1}^{n}\eta_j - \bigg(\frac{1}{3} + \frac{1}{3C} \bigg) \varepsilon_{n+1}, \\ S_{2n+1}&= \bigg(\frac{1}{20}+\sum_{j=1}^{n}\sigma_j +\frac{\varepsilon_{n+1}}{3C} \bigg) +\frac{\varepsilon_{n+1}}{3} \\ &=\frac{1}{20}+\sum_{j=1}^{n}\sigma_j + \bigg(\frac{1}{3}+\frac{1}{3C} \bigg)\varepsilon_{n+1}. \end{align*} $$

Thus, we have verified statement (ii) for $n+1$ and a similar calculation (again using the first half of statement (iii) for $n+1$ and equation (6.4)) to that for verifying the first half of statement (iii) for $n+1$ allows us to complete the verification of statement (iii) for $n+1$ .

By items (1) and (3) above applied to the function $f_{N_{n+1}+1}= \mathrm {Id}$ , together with statement (iii) for $n+1$ , equation (6.12), and Lemma 6.1, we have

(6.14) $$ \begin{align} \mathbf{Q^{2n+1}}(U_{R_{2n}})\supset U_{R_{2n}- {\varepsilon_{n+1}}/{3}} = U_{R_{2n+1}}, \end{align} $$

while $\mathbf {Q^{2n+1}}$ is univalent on a neighborhood of this set by item (2). Hence, the branch of $(\mathbf {Q^{2n+1}})^{-1}$ which fixes $0$ is well defined and univalent on $U_{R_{2n+1}}$ , and maps $U_{R_{2n+1}}$ inside $U_{R_{2n}}$ which then verifies statement (v) for $n+1$ .

By item (2) above and the first half of statement (iii) for $n+1$ , $\mathbf {Q^{2n+1}}$ is univalent on $U_{{1}/{2}} \supset U_{{1}/{10}} \supset U_{S_{2n}} $ . Again by item (3) applied to the function $f_{N_{n+1}+1}=\mathrm {Id}$ , statement (iii) for $n+1$ , and equation (6.13), we see

$$ \begin{align*} \mathbf{Q^{2n+1}}(U_{S_{2n}})\subset U_{S_{2n}+ {\varepsilon_{n+1}}/{3}}=U_{S_{2n+1}}. \end{align*} $$

By statement (iii) for $n+1$ , we have $U_{S_{2n+1}} \subset U_{{1}/{10}}$ and, together with statement (vi) for n, this verifies statement (vi) for $n+1$ .

Now let $w \in U_{R_{2n}}$ . Using the same branch of $(\mathbf {Q^{2n}})^{-1}$ which fixes $0$ as in the first part of statement (v) for $n+1$ , by equation (6.11), $(\mathbf {Q^{2n}})^{-1}(w) = \zeta $ for some (unique) $\zeta \in U_{R_{2n-1}-\delta (\varepsilon _n)}$ and thus $w=\mathbf {Q^{2n}}(\zeta )$ . Then, by equation (6.5),

(6.15) $$ \begin{align} &\rho_U((\mathbf{Q^{2n}}\circ \mathbf{Q^{2n-1}}\circ \cdots \circ \mathbf{Q^1})^{-1}(w),w) \nonumber \\ &\quad= \rho_U((\mathbf{Q^{2n-1}}\circ \cdots \circ \mathbf{Q^1})^{-1}\circ(\mathbf{Q^{2n}})^{-1}(w),w) \nonumber \\ &\quad =\rho_U((\mathbf{Q^{2n-1}}\circ \cdots \circ \mathbf{Q^1})^{-1}(\zeta),\mathbf{Q^{2n}}(\zeta)) \nonumber \\ &\quad <\frac{\varepsilon_{n+1}}{3C}. \end{align} $$

By equation (6.14), if now $z \in U_{R_{2n+1}} \subset \mathbf {Q^{2n+1}}(U_{R_{2n}})$ , then, using the same inverse branch as in statement (v) which fixes $0$ , $(\mathbf {Q^{2n+1}})^{-1}(z) = w$ for some (unique) $w\in U_{R_{2n}}$ and thus $z=\mathbf {Q^{2n+1}}(w)$ . By item (3) above and statement (iii) for $n+1$ , since $f_{N_{n+1}+1}=\mathrm {Id}$ ,

(6.16) $$ \begin{align} \rho_U((\mathbf{Q^{2n+1}})^{-1}(z),z)& = \rho_U(w, \mathbf{Q^{2n+1}}(w)) \nonumber \\ & <\frac{\varepsilon_{n+1}}{3}. \end{align} $$

Now take $z \in U_{R_{2n+1}}$ and, as above, let $w=(\mathbf {Q^{2n+1}})^{-1}(z)\in U_{R_{2n}}$ using the branch of $(\mathbf {Q^{2n+1}})^{-1}$ which fixes $0$ . Using equations (6.15) and (6.16), we have

(6.17) $$ \begin{align} &\rho_U((\mathbf{Q^{2n+1}}\circ \mathbf{Q^{2n}}\circ \cdots \circ \mathbf{Q^1})^{-1}(z),z) \nonumber \\ &\quad =\rho_U((\mathbf{Q^{2n}}\circ \cdots \circ \mathbf{Q^1})^{-1}\circ(\mathbf{Q^{2n+1}})^{-1}(z),z) \nonumber\\ &\quad \leq \rho_U((\mathbf{Q^{2n}}\circ \cdots \circ \mathbf{Q^1})^{-1}\circ (\mathbf{Q^{2n+1}})^{-1}(z),(\mathbf{Q^{2n+1}})^{-1}(z)) \nonumber\\ &\qquad +\rho_U((\mathbf{Q^{2n+1}})^{-1}(z),z) \nonumber \\ &\quad =\rho_U((\mathbf{Q^{2n}}\circ \mathbf{Q^{2n-1}}\circ \cdots \circ \mathbf{Q^1})^{-1}(w),w)+\rho_U((\mathbf{Q^{2n+1}})^{-1}(z),z) \nonumber \\ &\quad <\frac{\varepsilon_{n+1}}{3C}+\frac{\varepsilon_{n+1}}{3}<\varepsilon_{n+1}. \end{align} $$

This verifies statement (viii) for $n+1$ .

Now by statements (iii), (vi) for n together with equation (6.6), we have

(6.18) $$ \begin{align} (\mathbf{Q^{2n-1}}\circ \cdots \circ \mathbf{Q^1})(U_{{1}/{20}}) \subset U_{S_{2n-1}} \subset U_{{1}/{10}} \subset U_{R_{2n}} \subset U_{R_{2n-1}- \delta(\varepsilon_n)}, \end{align} $$

while, again by statement (vi) for n, the forward composition $\mathbf {Q^{2n-1}} \circ \cdots \circ \mathbf {Q^1}$ is univalent on $U_{{1}/{20}}$ . Lastly, applying statement (v) for n, we see that the branch of $(\mathbf {Q^{2n-1}} \circ \cdots \circ \mathbf {Q^1})^{-1}$ which fixes $0$ is well defined and univalent on $U_{R_{2n-1}} \supset \overline U_{R_{2n-1} - \delta (\varepsilon _n)}$ . Combining these three observations, we have the cancellation property

(6.19) $$ \begin{align} (\mathbf{Q^{2n-1}} \circ \cdots \circ \mathbf{Q^1})^{-1}\circ (\mathbf{Q^{2n-1}} \circ \cdots \circ \mathbf{Q^1}) = \mathrm{Id} \quad \text{on } U_{{1}/{20}}. \end{align} $$

Let $z \in U_{{1}/{20}}$ and set $\zeta = (\mathbf {Q^{2n-1}} \circ \cdots \circ \mathbf {Q^1})(z)$ . Then from equation (6.19), we have $(\mathbf {Q^{2n-1}}\circ \cdots \circ \mathbf {Q^1})^{-1}(\zeta )=z$ while by equation (6.18), $\zeta \in U_{R_{2n-1}-\delta (\varepsilon _n)}$ . We then calculate, using equation (6.5),

(6.20) $$ \begin{align} \rho_U(\mathbf{Q^{2n}}\circ \cdots \circ \mathbf{Q^1}(z),z)&=\rho_U(\mathbf{Q^{2n}}(\zeta), (\mathbf{Q^{2n-1}}\circ \cdots \circ \mathbf{ Q^1})^{-1}(\zeta)) \nonumber \\ &<\frac{\varepsilon_{n+1}}{3C}. \end{align} $$

Now let $f \in {\mathcal S}$ be arbitrary. Let $f_i \in {\mathcal S}$ be an element of the ${\varepsilon _{n+1}}/{3}$ -net which approximates f to within ${\varepsilon _{n+1}}/{3}$ on $U_{1/2}\supset U_{{1}/{20}}$ . Let $\mathbf {Q^{2n+1}_{iM_{n+1}}}$ be a partial composition of $\mathbf {Q^{2n+1}}$ which approximates $f_i$ to within ${\varepsilon _{n+1}}/{3}$ also on $U_{1/2}\supset U_{{1}/{20}}$ using item (3) above and let $m = iM_{n+1}$ so that $\mathbf { Q^{2n+1}_m}= \mathbf {Q^{2n+1}_{iM_{n+1}}}$ .

Applying statement (vi) for $n+1$ gives us that $\mathbf {Q^{2n}} \circ \cdots \circ \mathbf {Q^1}(z) \in U_{{1}/{10}} \subset U_{{1}/{4}}$ . Then, using the hyperbolic convexity of $U_{{1}/{4}}$ (which follows from Lemma 2.8), the hyperbolic M-L estimates (Lemma 2.9), equation (6.20), items (3), (4), and the fact that $f_i$ approximates f, we have

$$ \begin{align*} &\rho_U(\mathbf{Q^{2n+1}_m}\circ \mathbf{Q^{2n}} \circ \cdots \circ \mathbf{Q^1}(z),f(z)) \\ &\quad\leq \rho_U(\mathbf{Q^{2n+1}_m}\circ \mathbf{Q^{2n}} \circ \cdots \circ \mathbf{Q^1}(z),\mathbf{Q^{2n+1}_m}(z))\\ &\qquad +\rho_U(\mathbf{Q^{2n+1}_m}(z),f_i(z))+\rho_U(f_i(z),f(z)) \\ &\quad\leq C\cdot \frac{\varepsilon_{n+1}}{3C}+\frac{\varepsilon_{n+1}}{3}+\frac{\varepsilon_{n+1}}{3}\\ &\quad = \varepsilon_{n+1}, \end{align*} $$

which verifies statement (ix) for $n+1$ . Note that the first term uses Lemmas 2.8 and 2.9, equation (6.20), and item (4), the second uses item (3), and the third uses the net approximation. This completes the proof of the claim.

Lemma 6.2 now follows.

We are now finally in a position to prove the main result of this paper.

Proof of Theorem 1.3

Let $f \in {\mathcal S}$ be arbitrary. Let $\{P_m \}_{m=1}^{\infty }$ be the sequence of quadratic polynomials which exists in view of Lemma 6.3 and which is bounded by part (1) of the statement. By Proposition 1.2 and part (2) of the statement, $U_{{1}/{20}}$ is contained in a bounded Fatou component V for this sequence. By part (3) of the statement, there exists a subsequence $\{Q_{m_k} \}_{k=1}^{\infty }$ of the compositions $\{Q_m \}_{m=1}^{\infty }$ which converges locally uniformly to f on $U_{{1}/{20}}$ . Since $\{Q_{m_k}\}_{k=1}^{\infty }$ is normal on V, we may pass to a further subsequence, if necessary, to ensure this subsequence of compositions converges locally uniformly on all of V. By the identity principle, the limit must then be f. In fact, since every such convergent subsequence must have limit f, it follows readily that $\{Q_{m_k}\}_{k=1}^{\infty }$ converges locally uniformly to f on all of V.

Finally, we arrive at the last result of this paper, Theorem 1.4. We note that the proof of this result is not simply a ‘change of coordinates’ applied to Theorem 1.3. While it is straightforward to make a change of coordinates to transform one function from the family ${\mathcal N}$ to a member of ${{\mathcal S}}$ which we can then approximate, there are, in general, many functions in ${\mathcal N}$ , and each of these requires, in general, a different change of coordinates. We will use a density argument to approximate all the necessary changes of coordinates using a countable set. The proof therefore requires that one successfully integrates two approximation schemes, one for the changes of coordinates and the other for the approximations of suitable functions from ${\mathcal N}$ using Theorem 1.3, with the first approximation scheme operating on a longer time scale than the second. Essentially, the proof says that one has to first wait until one has approximately the right change of coordinates after which one chooses the right time when one also has approximately the right function from ${{\mathcal S}}$ .

Proof of Theorem 1.4

Let $r> 0$ be such that ${\mathrm D}(z_0, r) \subset \Omega $ . Then the function

(6.21) $$ \begin{align} g(w) = \frac{f(rw +z_0) - f(z_0)}{rf'(z_0)}, \quad w \in \mathbb D \end{align} $$

belongs to ${{\mathcal S}}$ , while f can clearly be recovered from g using the formula

(6.22) $$ \begin{align} f(z) = rf'(z_0) g\bigg(\frac{z-z_0}{r}\bigg) + f(z_0), \quad z \in {\mathrm D}(z_0, r). \end{align} $$

Since ${\mathcal N}$ is locally bounded and all limit functions are non-constant, using Hurwitz's theorem (see, e.g., [Reference ConwayCon78, Theorem VII.2.5 and also Corollary IV.5.9]), we can find $K \ge 1$ such that, for all $f \in {\mathcal N}$ , we have

(6.23) $$ \begin{align} \frac{1}{K} \le |f'(z_0)| \le K, \quad |f(z_0)| \le K. \end{align} $$

Then, if we let X be the subset of $\mathbb C^2$ given by $X = \{(f'(z_0), f(z_0)), f \in {\mathcal N}\}$ , we can clearly pick a sequence $\{(\alpha _n, \beta _n)\}_{n=1}^{\infty }$ which densely approximates all of X and such that, for all n,

$$ \begin{align*} \frac{1}{2K} \le |\alpha_n| \le 2K, \quad |\beta_n| \le 2K. \end{align*} $$

We next wish to apply a suitable affine conjugacy to the polynomial sequence $\{P_m \}_{m=1}^{\infty }$ of Theorem 1.3 to construct the sequence $\{\tilde P_m \}_{m=1}^{\infty }$ needed to prove the current result. To this end, define $\varphi _0(w) = rw + z_0$ , and $\varphi _n(w) = r\alpha _n w + \beta _n$ for $n \ge 1$ . Recall the compositions $\{\mathbf {Q^i} \}_{i=1}^{\infty }$ from Lemma 6.2 and that each $\mathbf {Q^i}$ was a $(17 + \kappa _0)$ -bounded composition of $J_i$ quadratic polynomials.

As we did before the statement of Lemma 6.3, for $i=0$ , set $T_0 = 0$ and, for each $i \ge 1$ , set $T_i = \sum _{j=1}^i J_j$ . Recall that these compositions $\{\mathbf {Q^i} \}_{i=1}^{\infty }$ then gave rise to the polynomial sequence $\{P_m \}_{m=1}^{\infty }$ of Lemma 6.3 and ultimately Theorem 1.3.

For $m=1$ , we define $\tilde P_1 = \varphi _1 \circ P_1 \circ \varphi _0^{-1}$ . For $m> 1$ , let $i \ge 1$ be the largest index such that $T_{i-1} < m$ . For $i=2k$ even, we define $\tilde P_m$ by

(6.24) $$ \begin{align} \tilde P_m= \left\{\! \begin{array}{ll} \varphi_{k+1}\circ P_m \circ \varphi_k^{-1}, & m =T_{i-1} + 1, \\ \varphi_{k+1} \circ P_m \circ {\varphi_{k+1}}^{-1}, & T_{i-1} + 1 < m \le T_i, \end{array} \right. \end{align} $$

while for $i = 2k+1$ odd, we set

(6.25) $$ \begin{align} \tilde P_m = \varphi_{k+1} \circ P_m \circ {\varphi_{k+1}}^{-1}, \quad T_{i-1} + 1 \le m \le T_i. \end{align} $$

Then (whether i is even or odd), if as usual we let $Q_m = P_m \circ \cdots \circ P_2 \circ P_1$ and $\tilde Q_m = \tilde P_m \circ \cdots \circ \tilde P_2 \circ \tilde P_1$ , then

(6.26) $$ \begin{align} \tilde Q_m = \varphi_{k+1}\circ Q_m \circ \varphi_0^{-1}. \end{align} $$
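To see why equation (6.26) holds, observe that the conjugating maps telescope across block boundaries. For instance, at the start of an even block $i = 2k$ , if inductively $\tilde Q_{T_{i-1}} = \varphi _{k} \circ Q_{T_{i-1}} \circ \varphi _0^{-1}$ (the previous block $i-1 = 2(k-1)+1$ being conjugated by $\varphi _k$ ), then

```latex
\tilde Q_{T_{i-1}+1}
  = \tilde P_{T_{i-1}+1} \circ \tilde Q_{T_{i-1}}
  = (\varphi_{k+1} \circ P_{T_{i-1}+1} \circ \varphi_k^{-1})
    \circ (\varphi_k \circ Q_{T_{i-1}} \circ \varphi_0^{-1})
  = \varphi_{k+1} \circ Q_{T_{i-1}+1} \circ \varphi_0^{-1},
```

and within a block the adjacent factors $\varphi _{k+1}^{-1} \circ \varphi _{k+1}$ cancel in the same way.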

Recall the Fatou component $V \supset U_{{1}/{20}} \ni 0$ from the proof of Theorem 1.3. Since the family $\{\varphi _n\}_{n=0}^{\infty }$ is bi-equicontinuous in the sense that the family $\{\varphi _n\}_{n=0}^{\infty }$ as well as the family of inverses $\{\varphi _n^{-1}\}_{n=0}^{\infty }$ are both equicontinuous and locally bounded on $\mathbb C$ , it follows from [Reference ComerfordCom03, Proposition 2.1] that $W = \varphi _0(V)$ is a bounded Fatou component for the sequence $\{\tilde P_m \}_{m=1}^{\infty }$ which contains $\varphi _0(U_{{1}/{20}})$ .

Let $\varepsilon> 0$ . Applying the local equivalence of the Euclidean and hyperbolic metrics from Lemma A.4 to items (a), (b), and part (4) of Lemma 6.2, it follows that there exists $j_0$ such that, for each $j \ge j_0$ , there exists $\tilde m_j$ , $1 \le \tilde m_j \le J_{2j+1}$ , such that for $w \in U_{{1}/{20}}$ , we have

(6.27) $$ \begin{align} |\mathbf{Q}_{\tilde{\mathbf{m}}_{\mathbf{j}}}^{\mathbf{2j+1}}\circ \mathbf{Q^{2j}}\circ \cdots \circ \mathbf{Q^1}(w) - g(w)| < \frac{\varepsilon}{2Kr}, \end{align} $$

where, as before, $\mathbf{Q}_{\tilde{\mathbf{m}}_{\mathbf{j}}}^{\mathbf{2j+1}}$ denotes the partial composition of the first $\tilde m_j$ quadratics of $\mathbf {Q^{2j+1}}$ .

Next, using the approximation property of the sequence $\{(\alpha _n, \beta _n)\}_{n=1}^{\infty }$ to all of the set X above, we can find a subsequence $\{(\alpha _{n_k}, \beta _{n_k})\}_{k=1}^{\infty }$ which converges to $(f'(z_0), f(z_0))$ . Hence, we can find $k_0$ such that for all $k \ge k_0$ , if $|w| \le {1}/{288} \le {2}/{\kappa _0}$ , we have

(6.28) $$ \begin{align} |\varphi_{n_k}(w) - (rf'(z_0)w + f(z_0))| < \frac{\varepsilon}{2}. \end{align} $$

Now let $z \in \varphi _0(U_{{1}/{20}})$ be arbitrary, and let $k_0$ be sufficiently large so that $n_{k_0} \ge j_0$ . Then, for each $k \ge k_0$ , set $i = 2n_k +1$ , so that $i = 2j +1$ with $j = n_k$ . By equation (6.26) and the construction of the sequence $\{P_m \}_{m=1}^{\infty }$ from just before Lemma 6.3, we have

$$ \begin{align*} \tilde Q_{T_{2n_k} + \tilde m_{n_k}} = \varphi_{n_k + 1} \circ Q_{T_{2n_k} + \tilde m_{n_k}} \circ \varphi_0^{-1} = \varphi_{n_k + 1} \circ \mathbf{Q}_{\tilde{\mathbf{m}}_{\mathbf{n_k}}}^{\mathbf{2n_k+1}}\circ \mathbf{Q^{2n_k}}\circ \cdots \circ \mathbf{Q^1} \circ \varphi_0^{-1} \end{align*} $$

and, using equations (6.22) and (6.26),

$$ \begin{align*} &|\tilde Q_{T_{2n_k} + \tilde m_{n_k}}(z) - f(z)|\\ &\quad = |\varphi_{n_k + 1} \circ Q_{T_{2n_k} + \tilde m_{n_k}} \circ \varphi_0^{-1}(z) - (rf'(z_0)\cdot g \circ \varphi_0^{-1}(z) + f(z_0))|\\ &\quad \le |\varphi_{n_k+1} \circ Q_{T_{2n_k} + \tilde m_{n_k}} \circ \varphi_0^{-1}(z) - (rf'(z_0)\cdot Q_{T_{2n_k} + \tilde m_{n_k}} \circ \varphi_0^{-1}(z) + f(z_0))|\\ &\qquad + | (rf'(z_0)\cdot Q_{T_{2n_k} + \tilde m_{n_k}} \circ \varphi_0^{-1}(z) + f(z_0)) - (rf'(z_0)\cdot g \circ \varphi_0^{-1}(z) + f(z_0))|. \end{align*} $$

Recall that we chose $\kappa _0 \ge 576$ in Lemma 6.2. From this, it follows that $Q_{T_{2n_k} + \tilde m_{n_k}} \circ \varphi _0^{-1}(z) \in Q_{T_{2n_k} + \tilde m_{n_k}}(U_{{1}/{20}}) \subset Q_{T_{2n_k} + \tilde m_{n_k}}(V) \subset {\mathrm D}(0, ({1}/{288}))$ so that the first term on the right-hand side of the above is less than ${\varepsilon }/{2}$ in view of equation (6.28). In addition, it follows from equation (6.27) that the second term is bounded above by $r|f'(z_0)| ({\varepsilon }/{2Kr}) \le {\varepsilon }/{2}$ in view of equation (6.23). Thus, if for $k \ge 1$ , we set $m_k : = T_{2n_k} + \tilde m_{n_k}$ , then for $k \ge k_0$ , we have

$$ \begin{align*} |\tilde Q_{m_k}(z) - f(z)| < \varepsilon \end{align*} $$

and, since $\varepsilon>0$ was arbitrary, $\{\tilde Q_{m_k}\}_{k=1}^{\infty }$ converges uniformly to f on $\varphi _0(U_{{1}/{20}})$ . The same argument using the identity principle as at the end of the proof of Theorem 1.3 shows that $\{\tilde Q_{m_k}\}_{k=1}^{\infty }$ converges locally uniformly on W to f and, as $f \in {\mathcal N}$ was arbitrary, this completes the argument.

Acknowledgements

We wish to express our gratitude to Xavier Buff, Arnaud Chéritat, and Pascale Roesch for their helpful comments and suggestions when the first author spent some time at the Université Paul Sabatier in Toulouse in 2016. We also wish to express our gratitude to Hiroki Sumi at Kyoto University for directing us to the work of Gelfreich and Turaev. Finally, we wish to thank Loïc Teyssier of the Université de Strasbourg for informing us about the work of Loray and helping us to determine how close Loray's results were to our own.

A.1 Appendix. Known results

A.1.1 Classical results on $\mathcal {S}$

We now state some common results regarding the class $\mathcal {S}$ . These can be found in many texts, in particular, [Reference Carleson and GamelinCG93]. Before we state the first result, let us establish some notation. Throughout, let ${\mathbb D}$ be the unit disc and let $\mathrm {D}(z,R)$ be the (open) Euclidean disc centered at z of radius R. The following is [Reference Carleson and GamelinCG93, Theorem I.1.3].

Theorem A.1. (The Koebe one-quarter theorem)

If $f \in \mathcal {S}$ , then $f(\mathbb D)\supset \mathrm {D}(0,\tfrac 14)$ .
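The constant $\tfrac 14$ here is sharp, as the Koebe function shows: the map

```latex
k(z) = \frac{z}{(1-z)^2} = \sum_{n=1}^{\infty} n z^n \in \mathcal{S}
```

maps $\mathbb D$ conformally onto the slit plane $\mathbb C \setminus (-\infty , -\tfrac 14]$ , and the largest disc about $0$ contained in this image has radius exactly $\tfrac 14$ .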

Also of great importance are the well-known distortion theorems [Reference Carleson and GamelinCG93, Theorem I.1.6].

Theorem A.2. (The distortion theorems)

If $f\in {\mathcal S}$ , then

$$ \begin{align*} \frac{1-|z|}{(1+|z|)^3}\leq |f'(z)| \leq \frac{1+|z|}{(1-|z|)^3}, \\ \frac{|z|}{(1+|z|)^2}\leq |f(z)| \leq \frac{|z|}{(1-|z|)^2}. \end{align*} $$
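These bounds are attained by the Koebe function $k(z) = z/(1-z)^2$ at real points of $\mathbb D$ . The following snippet (a numerical illustration only; the helper names are ours, not from the text) checks the bounds at a few sample points:

```python
def koebe(z):
    # Koebe function k(z) = z/(1-z)^2, an extremal member of S
    return z / (1 - z) ** 2

def koebe_deriv(z):
    # k'(z) = (1+z)/(1-z)^3
    return (1 + z) / (1 - z) ** 3

def distortion_bounds(z):
    # Bounds from the distortion theorems at the point z
    r = abs(z)
    return ((1 - r) / (1 + r) ** 3, (1 + r) / (1 - r) ** 3,  # for |f'(z)|
            r / (1 + r) ** 2, r / (1 - r) ** 2)              # for |f(z)|

for z in [0.3, 0.5j, -0.2 + 0.4j]:
    lo_d, hi_d, lo_f, hi_f = distortion_bounds(z)
    assert lo_d - 1e-12 <= abs(koebe_deriv(z)) <= hi_d + 1e-12
    assert lo_f - 1e-12 <= abs(koebe(z)) <= hi_f + 1e-12

# At real z in (0,1) the Koebe function attains the upper bounds:
# k(0.3) = 0.3/(0.7)^2, the upper bound r/(1-r)^2 with r = 0.3.
assert abs(koebe(0.3) - 0.3 / 0.49) < 1e-12
```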

The above implies immediately that $\mathcal {S}$ is a normal family in view of Montel’s theorem. More precisely, we have the following [Reference Carleson and GamelinCG93, Theorem I.1.10].

Corollary A.3. The family $\mathcal {S}$ is normal, and the limit of any sequence in $\mathcal {S}$ belongs to $\mathcal {S}$ .

A.1.2 The hyperbolic metric

One of the key tools we will be using is the following relationship between the hyperbolic and Euclidean metrics (see [Reference Carleson and GamelinCG93, Theorem I.4.3]).

Lemma A.4. Let $D\subsetneq \mathbb {C}$ be a simply connected domain and let $z \in D$ . Then,

$$ \begin{align*} \frac{1}{2}\frac{|{d}z|}{\delta_D(z)}\leq {d}\rho_D(z) \leq 2 \frac{|{d}z|}{\delta_D(z)}. \end{align*} $$

We remark that there is also a more general version of this theorem for hyperbolic domains in $\mathbb {C}$ which are not necessarily simply connected (again see [Reference Carleson and GamelinCG93, Theorem I.4.3]). However, for the purposes of this paper, we will consider only simply connected domains which are proper subsets of ${\mathbb C}$ . The advantage of this is that there is always a unique geodesic segment joining any two distinct points, and we can use the length of this segment to measure hyperbolic distance.

A.1.3 Star-shaped domains

Recall that a domain $D \subset {\mathbb C}$ is said to be star-shaped with respect to some point $z_0 \in D$ if, for every point $z \in D$ , $[z_0, z] \subset D$ , where $[z_0, z]$ denotes the Euclidean line segment from $z_0$ to z. We have the following classical result which will be of use to us later in the ‘up’ section of the proof of Phase II (Lemma 5.17).

Lemma A.5. [Reference DurenDur83, Corollary to Theorem 3.6]

For every radius $r \le \rho := \tanh ({\pi }/{4})= 0.655\ldots \,$ , each function $f \in {\mathcal S}$ maps the Euclidean disc $\mathrm {D}(0,r)$ to a domain which is starlike with respect to the origin. This is false for every $r> \rho $ .
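The radius $\rho = \tanh ({\pi }/{4})$ corresponds to a hyperbolic radius of exactly ${\pi }/{2}$ about the origin: since $\tanh x = (e^{2x}-1)/(e^{2x}+1)$ , so that $(1+\tanh x)/(1-\tanh x) = e^{2x}$ , we have

```latex
\rho_{\mathbb D}\bigl(0, \tanh(\pi/4)\bigr)
  = \log \frac{1 + \tanh(\pi/4)}{1 - \tanh(\pi/4)}
  = \log e^{2 \cdot \pi/4}
  = \frac{\pi}{2}.
```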

Since this value of r corresponds via the formula $\rho _{\mathbb D}(0,z) = \log \frac {1 + |z|}{1-|z|}$ to a hyperbolic radius about $0$ of exactly ${\pi }/{2}$ , we have the following easy consequence.

Lemma A.6. If f is univalent on ${\mathbb D}$ , $z_0 \in {\mathbb D}$ , $r \le {\pi }/{2}$ , and $\Delta _{\mathbb D}(z_0, r)$ denotes the hyperbolic disc in ${\mathbb D}$ of radius r about $z_0$ , then the image $f(\Delta _{\mathbb D}(z_0, r))$ is star-shaped with respect to $f(z_0)$ .

The important property of star-shaped domains for us is that, if we dilate such a domain about its center point by an amount greater than $1$ , then the enlarged domain will cover the original. More precisely, if X is star-shaped with respect to $z_0$ , $r> 1$ , and we let $rX$ be the domain $rX := \{z: (z - z_0)/r + z_0 \in X\}$ , then $X \subset rX$ . Again, this is something we will make use of in the ‘up’ portion of the proof of Phase II (Lemma 5.17).
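This covering property follows directly from the definition of star-shapedness: if $z \in X$ , then

```latex
\frac{z - z_0}{r} + z_0
  = \Bigl(1 - \frac{1}{r}\Bigr) z_0 + \frac{1}{r}\, z
  \in [z_0, z] \subset X,
```

since $1/r \in (0,1)$ , and hence $z \in rX$ by the definition of $rX$ .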

A.1.4 The Carathéodory topology

The Carathéodory topology is a topology on pointed domains, which consist of a domain and a marked point of the domain which is referred to as the base point. In [Reference CarathéodoryCar52], Constantin Carathéodory defined a suitable topology for simply connected pointed domains for which convergence in this topology is equivalent to the convergence of suitably normalized inverse Riemann maps. The work was then extended in an appropriate sense to hyperbolic domains by Adam Epstein in his Ph.D. thesis [Reference EpsteinEps93]. This work was expanded upon further still by the first author [Reference ComerfordCom13a, Reference ComerfordCom14]. This is a supremely useful tool in non-autonomous iteration where the domains on which certain functions are defined may vary. We follow [Reference ComerfordCom13a] for the following discussion. Recall that a pointed domain is an ordered pair $(U,u)$ consisting of an open connected subset U of $\hat {\mathbb C}$ (possibly equal to $\hat {\mathbb C}$ itself) and a point u in U.

Definition A.7. We say that $(U_m,u_m) \to (U,u)$ in the Carathéodory topology if:

  1. (1) $u_m \to u$ in the spherical topology;

  2. (2) for all compact sets $K \subset U$ , $K \subset U_m$ for all but finitely many m;

  3. (3) for any connected (spherically) open set N containing u, if $N \subset U_m$ for infinitely many m, then $N \subset U$ .

We also wish to consider the degenerate case where $U = \{u\}$ . In this case, condition (2) is omitted (U has no interior of which we can take compact subsets) while condition (3) becomes

  (3) for any connected (spherically) open set N containing u, N is contained in at most finitely many of the sets $U_m$ .

Convergence in the Carathéodory topology can also be described using the Carathéodory kernel, originally defined by Carathéodory himself in [Reference CarathéodoryCar52]. One first requires that $u_m \to u$ in the spherical topology. If there is no open set containing u which is contained in the intersection of all but finitely many of the sets $U_m$ , then one defines the kernel of the sequence $\{(U_m,u_m) \}_{m=1}^{\infty }$ to be $\{u \}$ . Otherwise, one defines the Carathéodory kernel as the largest domain U containing u with property (2) above. It is easy to check that a largest such domain does indeed exist. Carathéodory convergence can also be described in terms of the Hausdorff topology. We have the following theorem in [Reference ComerfordCom13a].

Theorem A.8. Let $\{(U_m,u_m) \}_{m=1}^{\infty }$ be a sequence of pointed domains and $(U,u)$ be another pointed domain where we allow the possibility that $(U,u)=(\{ u\},u)$ . Then the following are equivalent:

  1. (1) $(U_m,u_m) \to (U,u)$ ;

  2. (2) $u_m \to u$ in the spherical topology and $\{(U_m,u_m) \}_{m=1}^{\infty }$ has Carathéodory kernel U as does every subsequence;

  3. (3) $u_m \to u$ in the spherical topology and, for any subsequence where the complements of the sets $U_m$ converge in the Hausdorff topology (with respect to the spherical metric), U corresponds with the connected component of the complement of the Hausdorff limit which contains u (this component being empty in the degenerate case $U=\{ u\}$ ).

Of particular use to us will be the following theorem in [Reference ComerfordCom13a] regarding the equivalence of Carathéodory convergence and the local uniform convergence of suitably normalized covering maps, most of which was proved by Adam Epstein in his PhD thesis [Reference EpsteinEps93].

Theorem A.9. Let $\{(U_m,u_m) \}_{m\geq 1}$ be a sequence of pointed hyperbolic domains and, for each m, let $\pi _m$ be the unique normalized covering map from ${\mathbb D}$ to $U_m$ satisfying $\pi _m(0)=u_m$ , $\pi _m^{\prime }(0)>0$ .

Then $(U_m,u_m)$ converges in the Carathéodory topology to another pointed hyperbolic domain $(U,u)$ if and only if the mappings $\pi _m$ converge with respect to the spherical metric uniformly on compact subsets of ${\mathbb D}$ to the covering map $\pi $ from ${\mathbb D}$ to U satisfying $\pi (0)=u$ , $\pi '(0)>0$ .

In addition, in the case of convergence, if D is a simply connected subset of U and $v \in D$ , then locally defined branches $\omega _m$ of $\pi _m^{-1}$ on D for which $\omega _m(v)$ converges to a point in ${\mathbb D}$ will converge locally uniformly with respect to the spherical metric on D to a uniquely defined branch $\omega $ of $\pi ^{-1}$ .

Finally, if $\pi _m$ converges with respect to the spherical metric locally uniformly on ${\mathbb D}$ to the constant function with value u, then $(U_m,u_m)$ converges to $(\{u \},u)$ .
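As a simple sanity check of Theorem A.9 (our illustration, not taken from the text), take round discs centred at the origin, for which the normalized covering maps are linear:

```latex
\[
  U_m = D(0, r_m), \quad u_m = 0, \qquad \pi_m(z) = r_m z .
\]
```

If $r_m \to r > 0$, then $\pi_m(z) = r_m z \to rz$ locally uniformly on $\mathbb{D}$ and $(U_m,0) \to (D(0,r),0)$ in the Carathéodory topology, while if $r_m \to 0$, then $\pi_m$ converges locally uniformly to the constant $0$ and $(U_m,0) \to (\{0\},0)$, in accordance with the degenerate case in the final statement above.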

A.2 Appendix. Glossary of symbols

We use many different symbols repeatedly throughout this paper. For clarity, we have gathered them into the following table.

A.3 Appendix. Dependency tables

The proofs of the three key steps in this paper, namely the Polynomial Implementation Lemma (Lemma 3.9), Phase I (Lemma 4.8), and Phase II (Lemma 5.17) involve many quantities and functions which are defined in terms of other quantities introduced earlier (and occasionally later) in the proofs of these results. To fully understand these quantities and avoid any danger of circular reasoning, we feel it is therefore important, if not indispensable, that we provide full tables for all three of these results detailing the dependencies of the most important objects in their statements and proofs (see Tables 1–3).

Table 1 Dependencies table for the Polynomial Implementation Lemma (Lemma 3.9).

Table 2 Dependencies table for Phase I (Lemma 4.8).

Table 3 Dependencies table for Phase II (Lemma 5.17).

The objects in each table are, for the most part, listed in the order in which they appear in the proof of the corresponding result, as well as in the statements and proofs of the supporting lemmas leading up to it. The tables for the Polynomial Implementation Lemma (Lemma 3.9) and Phase I (Lemma 4.8) each have five columns. To determine the ultimate dependencies for a given object (third column), one looks at the immediate dependencies (second column) in that object's row. One then looks up each of these immediate dependencies in the ultimate dependencies column of the earlier rows of the table where those objects appear, and the combined list of these dependencies forms the entry in the third column for the given object.

Due to the more complicated nature of the proof, the table for Phase II has an extra column for intermediate dependencies. The ultimate dependencies are determined much as before, except that one looks at the ultimate dependencies (fourth column) of each quantity in the second and third columns of the row containing a given object; the combined list gives the entry for the ultimate dependencies of that object (in the fourth column). One exception is that, for objects depending on the constants $K_2$ , $K_3$ , one needs to look at later entries in the table for these constants (as explained in the proof of Lemma 5.17, there is no danger of circular reasoning here). The intermediate dependencies, when listed, are the same as the ultimate dependencies but involve extra quantities which are then eliminated, either by monotonicity, by some uniformization procedure such as taking a maximum (e.g. $\eta _1$ , $\eta _2$ ), or by being determined later, as is the case with $K_2$ , $K_3$ . Another exception is when some of the ultimate dependencies appear to be 'missing'. Most of these are instances where the scaling factor $\kappa $ is omitted because the relevant estimate involves the hyperbolic metric of U, which does not depend on $\kappa $ .

One final remark concerns those objects which appear in the columns for ultimate dependencies. As a matter of logical necessity, these fall into two categories: objects which are defined, or whose value is determined, before the proof (e.g. $\kappa $ , $R_0$ ), and universally quantified objects appearing in the statement (e.g. $\varepsilon _1$ , $\varepsilon _2$ ). Both categories are indicated in the tables by an asterisk on their appearance in the first column. However, these objects generally come with bounds which have dependencies of their own, and they then inherit these dependencies. A good way to think of this is to view the determination of these ultimate dependencies as methods or routines within a computer program, where the objects in the ultimate dependencies are free variables which are either defined outside the method and passed to it as parameters, or else set within the method itself.
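To make the computer-program analogy concrete, the bookkeeping described above amounts to a transitive closure computation over the rows of a table. The following sketch is ours; the object names are hypothetical placeholders rather than quantities from the paper, and the forward references for $K_2$ , $K_3$ noted above would need special handling.

```python
def ultimate_dependencies(immediate):
    """Resolve ultimate dependencies from immediate ones.

    `immediate` maps each object to the list of objects it immediately
    depends on, with rows listed in order of definition (as in the
    tables).  An object with no earlier row of its own (e.g. a starred
    free parameter) contributes just itself.
    """
    ultimate = {}
    for obj, deps in immediate.items():  # earlier rows processed first
        resolved = set()
        for d in deps:
            # each immediate dependency contributes itself together
            # with its own (already resolved) ultimate dependencies
            resolved |= {d} | ultimate.get(d, set())
        ultimate[obj] = resolved
    return ultimate

# Hypothetical toy table in the spirit of Tables 1-3.
table = {
    "kappa": [],              # determined before the proof
    "R0": ["kappa"],
    "delta": ["R0", "eps1"],  # eps1 is a free (starred) parameter
}
deps = ultimate_dependencies(table)
# deps["delta"] == {"R0", "kappa", "eps1"}
```

Reading the tables by hand follows exactly this loop: rows are processed in order of definition, so every earlier object is fully resolved by the time it is looked up.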

References

Avila, A., Buff, X. and Chéritat, A. Siegel disks with smooth boundaries. Acta Math. 193(1) (2004), 1–30.
Astorg, M., Buff, X., Dujardin, R., Peters, H. and Raissy, J. A two-dimensional polynomial mapping with a wandering Fatou component. Ann. of Math. (2) 184(1) (2016), 263–313.
Ahlfors, L. Lectures on Quasiconformal Mappings. Van Nostrand, New York, NY, 1966.
Astorg, M. and Thaler, L. B. Dynamics of skew-products tangent to the identity. Preprint, 2022, arXiv:2204.02644.
Astorg, M., Thaler, L. B. and Peters, H. Wandering domains arising from Lavaurs maps with Siegel disks. Ann. PDE 16(1) (2023), 2588.
Brück, R., Büger, M. and Reitz, S. Random iterations of polynomials of the form $z^2+c_n$: connectedness of Julia sets. Ergod. Th. & Dynam. Sys. 19 (1999), 1221–1231.
Brück, R. Connectedness and stability of Julia sets of the composition of polynomials of the form $z^2+c_n$. J. Lond. Math. Soc. (2) 61(2) (2000), 462–470.
Brück, R. Geometric properties of Julia sets of the composition of polynomials of the form $z^2+c_n$. Pacific J. Math. 198 (2001), 347–371.
Büger, M. Self-similarity of Julia sets of the composition of polynomials. Ergod. Th. & Dynam. Sys. 17 (1997), 1289–1297.
Carathéodory, C. Conformal Representation. Cambridge University Press, Cambridge, 1952.
Contreras, M. D., Diaz-Madrigal, S. and Gumenyuk, P. Loewner chains in the unit disk. Rev. Mat. Iberoam. 26(3) (2010), 975–1012.
Carleson, L. and Gamelin, T. Complex Dynamics. Springer-Verlag, Berlin, 1993.
Comerford, M. Conjugacy and counterexample in non-autonomous iteration. Pacific J. Math. 222(1) (2003), 69–80.
Comerford, M. A survey of results in random iteration. Fractal Geometry and Applications: A Jubilee of Benoit Mandelbrot. Part 1. Vol. 72, Part 1. Ed. M. L. Lapidus and M. van Frankenhuijsen. American Mathematical Society, Providence, RI, 2004, pp. 435–476.
Comerford, M. Hyperbolic non-autonomous Julia sets. Ergod. Th. & Dynam. Sys. 26 (2006), 353–377.
Comerford, M. Holomorphic motions of hyperbolic non-autonomous Julia sets. Complex Var. Elliptic Equ. 53(1) (2008), 1–22.
Comerford, M. A straightening theorem for non-autonomous iteration. Commun. Appl. Nonlinear Anal. 19(2) (2012), 1–23.
Comerford, M. The Carathéodory topology for multiply connected domains I. Cent. Eur. J. Math. 11(2) (2013), 322–340.
Comerford, M. Non-autonomous Julia sets with measurable invariant line fields. Discrete Contin. Dyn. Syst. 32(2) (2013), 629–642.
Comerford, M. The Carathéodory topology for multiply connected domains II. Cent. Eur. J. Math. 12(5) (2014), 721–741.
Conway, J. B. Functions of One Complex Variable I. Springer-Verlag, Berlin, 1978.
Douady, A. and Hubbard, J. On the dynamics of polynomial-like mappings. Ann. Sci. Éc. Norm. Supér. (4) 18(2) (1985), 287–343.
Duren, P. Univalent Functions. Springer-Verlag, Berlin, 1983.
Epstein, A. Towers of finite type complex analytic maps. PhD Thesis, CUNY Graduate School, New York, NY, 1993.
Fornaess, J. E. and Sibony, N. Random iterations of rational functions. Ergod. Th. & Dynam. Sys. 11 (1991), 687–708.
Gelfreich, V. and Turaev, D. Universal dynamics in a neighborhood of a general elliptic periodic point. Regul. Chaotic Dyn. 15(2–3) (2010), 159–164.
Ji, Z. Non-wandering Fatou components for strongly attracting polynomial skew products. J. Geom. Anal. 30(1) (2020), 124–152.
Ji, Z. Non-uniform hyperbolicity in polynomial skew products. Int. Math. Res. Not. IMRN 2023(10) (2023), 8755–8799.
Ji, Z. and Shen, W. The wandering domain problem for attracting polynomial skew products. Preprint, 2022, arXiv:2209.01715.
Keen, L. and Lakic, N. Hyperbolic Geometry from a Local Viewpoint. Cambridge University Press, Cambridge, 2007.
Lehto, O. An extension theorem for quasiconformal mappings. Proc. Lond. Math. Soc. (3) 14A(3) (1965), 187–190.
Lilov, K. Fatou theory in two dimensions. PhD Thesis, University of Michigan, Ann Arbor, MI, 2004.
Loray, F. Pseudo-groupe d'une singularité de feuilletage holomorphe en dimension deux. Preprint, hal-00016434, 2006.
Lehto, O. and Virtanen, K. I. Quasikonforme Abbildungen. Springer-Verlag, Berlin, 1965.
Woodard, T. and Comerford, M. Preservation of external rays in non-autonomous iteration. J. Difference Equ. Appl. 19(4) (2013), 585–604.
Milnor, J. Dynamics in One Complex Variable. Princeton University Press, Princeton, NJ, 2006.
Ma, W. and Minda, D. Hyperbolically convex functions. Ann. Polon. Math. LX(1) (1994), 81–100.
Ma, W. and Minda, D. Hyperbolically convex functions II. Ann. Polon. Math. LXXI(3) (1999), 273–285.
Munkres, J. Topology, 2nd edn. Prentice Hall, Inc., Upper Saddle River, NJ, 2000.
Newman, M. H. A. Elements of the Topology of Plane Sets of Points. Cambridge University Press, Cambridge, 1951.
Peters, H. and Raissy, J. Fatou components of elliptic polynomial skew products. Ergod. Th. & Dynam. Sys. 39(8) (2019), 2235–2247.
Peters, H. and Smit, I. M. Fatou components of attracting skew-products. J. Geom. Anal. 28 (2018), 84–110.
Peters, H. and Vivas, L. R. Polynomial skew-products with wandering Fatou-disks. Math. Z. 283 (2016), 349–366.
Sester, O. Hyperbolicité des polynômes fibrés [Hyperbolicity of fibered polynomials]. Bull. Soc. Math. France 127(3) (1999), 398–428 (in French).
Sullivan, D. Quasiconformal homeomorphisms and dynamics. I. Solution of the Fatou–Julia problem on wandering domains. Ann. of Math. (2) 122(3) (1985), 401–418.
Sumi, H. Skew product maps related to finitely generated rational semigroups. Nonlinearity 13 (2000), 995–1019.
Sumi, H. Dynamics of sub-hyperbolic and semi-hyperbolic rational semigroups and skew products. Ergod. Th. & Dynam. Sys. 21 (2001), 563–603.
Sumi, H. Semi-hyperbolic fibered rational maps and rational semigroups. Ergod. Th. & Dynam. Sys. 26(3) (2006), 893–922.
Sumi, H. Dynamics of postcritically bounded polynomial semigroups III: classification of semi-hyperbolic semigroups and random Julia sets which are Jordan curves but not quasicircles. Ergod. Th. & Dynam. Sys. 30(6) (2010), 1869–1902.
Figure 1 The filled Julia set ${\mathcal K}_{\lambda}$ for $P_{\lambda}$ with Siegel disc highlighted.

Figure 2 Supports of dilatations converging to zero almost everywhere.

Figure 3 The filled Julia set ${\mathcal K}$ for P with the Green's lines $\partial V_h = \{z: G(z)=h\}$ and $\partial V_{2h} = \{z: G(z)=2h\}$.

Figure 4 Finding a lower bound for $\rho _{{\tilde V_{2h}}}(0,z_0)$.

Figure 5 The setup for Phase II in rotated logarithmic coordinates.

Figure 6 Showing $R - R'' \rightarrow 0$ as $h \to 0_+$.

Figure 7 A block diagram illustrating the induction scheme.