
Flip signatures

Published online by Cambridge University Press:  26 September 2022

SIEYE RYU*
Affiliation:
Independent scholar, South Korea

Abstract

A $D_{\infty }$-topological Markov chain is a topological Markov chain provided with an action of the infinite dihedral group $D_{\infty }$. It is defined by two zero-one square matrices A and J satisfying $AJ=JA^{\textsf {T}}$ and $J^2=I$. A flip signature is obtained from symmetric bilinear forms with respect to J on the eventual kernel of A. We modify Williams’ decomposition theorem to prove the flip signature is a $D_{\infty }$-conjugacy invariant. We introduce natural $D_{\infty }$-actions on Ashley’s eight-by-eight and the full two-shift. The flip signatures show that Ashley’s eight-by-eight and the full two-shift equipped with the natural $D_{\infty }$-actions are not $D_{\infty }$-conjugate. We also discuss the notion of $D_{\infty }$-shift equivalence and the Lind zeta function.

© The Author(s), 2022. Published by Cambridge University Press

1 Introduction

A topological flip system $(X, T, F)$ is a topological dynamical system $(X, T)$ consisting of a topological space X, a homeomorphism $T : X \rightarrow X$ and a topological conjugacy $F: (X, T^{-1}) \rightarrow (X, T)$ with $F^2 = \text{Id}_X$. (See the survey paper [6] for further study of flip systems.) We call the map F a flip for $(X, T)$. If F is a flip for a discrete-time topological dynamical system $(X, T)$, then the triple $(X, T, F)$ is called a $D_{\infty}$-system because the infinite dihedral group

$$ \begin{align*}D_{\infty} = \langle a, b : ab=ba^{-1} \text{ and } b^2=1 \rangle\end{align*} $$

acts on X as follows:

$$ \begin{align*}(a, x) \mapsto T(x) \quad \text{and} \quad (b, x) \mapsto F(x) \quad (x \in X).\end{align*} $$

Two $D_{\infty }$ -systems $(X, T, F)$ and $(X', T', F')$ are said to be $D_{\infty }$ -conjugate if there is a $D_{\infty }$ -equivariant homeomorphism $\theta : X \rightarrow X'$ . In this case, we write

$$ \begin{align*}(X, T, F) \cong (X', T', F')\end{align*} $$

and call the map $\theta $ a $D_{\infty }$ -conjugacy from $(X, T, F)$ to $(X', T', F')$ .

Suppose that $\mathcal {A}$ is a finite set. A topological Markov chain, or TMC for short, $(\textsf {X}_A, \sigma _A)$ over $\mathcal {A}$ is a shift space which has a zero-one $\mathcal {A} \times \mathcal {A}$ matrix A as a transition matrix:

$$ \begin{align*}\textsf{X}_A = \{x \in \mathcal{A}^{\mathbb{Z}} : A(x_i, x_{i+1}) = 1 \text{ for all } i \in \mathbb{Z} \}.\end{align*} $$

A $D_{\infty }$ -system $(X, T, F)$ is said to be a $D_{\infty }$ -topological Markov chain, or $D_{\infty }$ -TMC for short, if $(X, T)$ is a topological Markov chain.

Suppose that $(X, T)$ is a shift space. A flip F for $(X, T)$ is called a one-block flip if $x_0 = x^{\prime }_0$ implies $F(x)_0=F(x')_0$ for all x and $x'$ in X. If F is a one-block flip for $(X, T)$ , then there is a unique map $\tau : \mathcal {A} \rightarrow \mathcal {A}$ such that

$$ \begin{align*}F(x)_i = \tau(x_{-i}) \quad \text{and} \quad \tau^2 = \text{Id}_{\mathcal{A}} \quad (x \in X; i \in \mathbb{Z}).\end{align*} $$

The representation theorem in [4] says that if $(X, T, F)$ is a $D_{\infty}$-TMC, then there is a TMC $(X', T')$ and a one-block flip $F'$ for $(X', T')$ such that $(X, T, F)$ and $(X', T', F')$ are $D_{\infty}$-conjugate.

Suppose that $\mathcal {A}$ is a finite set and that A and J are zero-one $\mathcal {A} \times \mathcal {A}$ matrices satisfying

(1.1) $$ \begin{align} AJ=JA^{\textsf{T}} \quad \text{and} \quad J^2=I. \end{align} $$

Since J is zero-one and $J^2=I$ , it follows that J is symmetric and that for any $a \in \mathcal {A}$ , there is a unique $b \in \mathcal {A}$ such that $J(a, b)=1$ . Thus, there is a unique permutation $\tau _J:\mathcal {A} \rightarrow \mathcal {A}$ of order two satisfying

$$ \begin{align*}J(a, b) =1 \Leftrightarrow \tau_J(a) = b \quad (a, b \in \mathcal{A}).\end{align*} $$

It is obvious that the map $\varphi _{J} : \mathcal {A}^{\mathbb {Z}} \rightarrow \mathcal {A}^{\mathbb {Z}}$ defined by

$$ \begin{align*} \varphi_J(x)_i = \tau_J(x_{-i}) \quad (x \in \mathcal{A}^{\mathbb{Z}}; i \in \mathbb{Z})\end{align*} $$

is a one-block flip for the full $\mathcal {A}$ -shift $(\mathcal {A}^{\mathbb {Z}}, \sigma )$ . Since $AJ=JA^{\textsf {T}}$ implies

$$ \begin{align*}A(a, b) = A(\tau_J(b), \tau_J(a)) \quad (a, b \in \mathcal{A}),\end{align*} $$

it follows that $\varphi _{J}(\textsf {X}_A) = \textsf {X}_A$ . Thus, the restriction $\varphi _{A, J}$ of $\varphi _J$ to $\textsf {X}_A$ becomes a one-block flip for $(\textsf {X}_A, \sigma _A)$ . A pair $(A, J)$ of zero-one $\mathcal {A} \times \mathcal {A}$ matrices satisfying equation (1.1) will be called a flip pair.
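Although the paper works entirely with the matrices themselves, the flip-pair conditions and the induced one-block flip are easy to experiment with computationally. The following sketch is ours (the function names are not from the paper): it simply restates equation (1.1) and the definition of $\tau_J$, and on a finite word it applies the finite analogue of $\varphi_J$, namely reversal followed by $\tau_J$.

```python
import numpy as np

def is_flip_pair(A, J):
    """Check the flip-pair conditions (1.1): AJ = JA^T and J^2 = I."""
    A, J = np.asarray(A), np.asarray(J)
    return (np.array_equal(A @ J, J @ A.T)
            and np.array_equal(J @ J, np.eye(len(J), dtype=int)))

def tau_from_J(J):
    """The order-two permutation tau_J determined by J(a, b) = 1 <=> tau_J(a) = b."""
    J = np.asarray(J)
    return {a: int(np.flatnonzero(J[a])[0]) for a in range(len(J))}

def flip_word(word, J):
    """Finite analogue of the one-block flip: reverse the word and apply tau_J."""
    tau = tau_from_J(J)
    return [tau[a] for a in reversed(word)]
```

Because $AJ=JA^{\textsf{T}}$ gives $A(a, b) = A(\tau_J(b), \tau_J(a))$, a word admissible for A stays admissible after `flip_word`; this is the finite-word shadow of $\varphi_J(\textsf{X}_A) = \textsf{X}_A$.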

The classification of shifts of finite type up to conjugacy is one of the central problems in symbolic dynamics. There is an algorithm determining whether or not two one-sided shifts of finite type ($\mathbb{N}$-SFTs) are $\mathbb{N}$-conjugate. (See §2.1 in [5].) In the case of two-sided shifts of finite type ($\mathbb{Z}$-SFTs), however, no such algorithm is known, even though many $\mathbb{Z}$-conjugacy invariants have been discovered. For instance, it is well known (Proposition 7.3.7 in [8]) that if two $\mathbb{Z}$-SFTs are $\mathbb{Z}$-conjugate, then their transition matrices have the same set of non-zero eigenvalues. In 1990, Ashley introduced an eight-by-eight zero-one matrix, which is called Ashley's eight-by-eight, and asked whether or not it is $\mathbb{Z}$-conjugate to the full two-shift. (See Example 2.2.7 in [5] or §3 in [2].) Since the characteristic polynomial of Ashley's eight-by-eight is $t^7(t-2)$, it is very simple in terms of spectrum, and it is easy to prove that it is not $\mathbb{N}$-conjugate to the full two-shift. Nevertheless, the problem has not been solved yet. Meanwhile, both Ashley's eight-by-eight and the full two-shift have one-block flips. More precisely, if we set

$$ \begin{align*}A=\left[\begin{array}{rrrrrrrr} 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 \end{array}\right], \quad J=\left[\begin{array}{rrrrrrrr} 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \end{array}\right],\end{align*} $$
(1.2) $$ \begin{align} B = \left[\begin{array}{rr} 1 & 1 \\ 1 & 1 \end{array}\right], \quad I = \left[\begin{array}{rr} 1 & 0 \\ 0 & 1 \end{array}\right] \quad \text{and} \quad K = \left[\begin{array}{rr} 0 & 1 \\ 1 & 0 \end{array}\right], \end{align} $$

then A is Ashley's eight-by-eight, $\varphi_{A, J}$ is the unique one-block flip for $(\textsf{X}_A, \sigma_A)$, B is the minimal zero-one matrix defining the full two-shift and $(\textsf{X}_B, \sigma_B)$ has exactly two one-block flips $\varphi_{B, I}$ and $\varphi_{B, K}$. It is natural to ask whether or not $(\textsf{X}_A, \sigma_A, \varphi_{A, J})$ is $D_{\infty}$-conjugate to $(\textsf{X}_B, \sigma_B, \varphi_{B, I})$ or $(\textsf{X}_B, \sigma_B, \varphi_{B, K})$. In this paper, we introduce the notion of flip signatures and prove

(1.3) $$ \begin{align} (\textsf{X}_A, \sigma_A, \varphi_{A, J}) \ncong (\textsf{X}_B, \sigma_B, \varphi_{B, I}), \end{align} $$
(1.4) $$ \begin{align} (\textsf{X}_A, \sigma_A, \varphi_{A, J}) \ncong (\textsf{X}_B, \sigma_B, \varphi_{B, K})\end{align} $$

and

(1.5) $$ \begin{align} (\textsf{X}_B, \sigma_B, \varphi_{B, I}) \ncong (\textsf{X}_B, \sigma_B, \varphi_{B, K}). \end{align} $$
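As a quick numerical sanity check (ours, not part of the paper's argument), one can verify that the pair $(A, J)$ above and the pairs $(B, I)$ and $(B, K)$ from equation (1.2) satisfy the flip-pair conditions (1.1), and that the characteristic polynomial of Ashley's eight-by-eight is indeed $t^7(t-2)$.

```python
import numpy as np

A = np.array([[1,1,0,0,0,0,0,0],
              [0,0,1,0,0,0,1,0],
              [0,0,0,1,0,1,0,0],
              [0,1,0,0,0,0,0,1],
              [1,0,0,0,1,0,0,0],
              [0,0,0,0,1,0,0,1],
              [0,0,1,0,0,1,0,0],
              [0,0,0,1,0,0,1,0]])
J = np.eye(8, dtype=int)[[4, 5, 6, 7, 0, 1, 2, 3]]   # J swaps symbol i with i + 4
B = np.ones((2, 2), dtype=int)
I2 = np.eye(2, dtype=int)
K = np.array([[0, 1], [1, 0]])

# flip-pair conditions (1.1) for (A, J), (B, I) and (B, K)
for M, N in [(A, J), (B, I2), (B, K)]:
    assert np.array_equal(M @ N, N @ M.T) and np.array_equal(N @ N, np.eye(len(N), dtype=int))

# characteristic polynomial of Ashley's eight-by-eight: coefficients of t^8 - 2t^7 = t^7(t - 2)
print(np.round(np.poly(A)).astype(int))
```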

When $(A, J)$ and $(B, K)$ are flip pairs, it is clear that if $\theta $ is a $D_{\infty }$ -conjugacy from $(\textsf {X}_A, \sigma _A, \varphi _{A, J})$ to $(\textsf {X}_B, \sigma _B, \varphi _{B, K})$ , then $\theta $ is also a $\mathbb {Z}$ -conjugacy from $(\textsf {X}_A, \sigma _A)$ to $(\textsf {X}_B, \sigma _B)$ . However, equation (1.5) says that the converse is not true.

We first introduce analogues of elementary equivalence (EE), strong shift equivalence (SSE) and Williams' decomposition theorem for $D_{\infty}$-TMCs. Let us recall the notions of EE and SSE. (See [8, 9] for the details.) Suppose that A and B are zero-one square matrices. A pair $(D, E)$ of zero-one matrices satisfying

$$ \begin{align*}A=DE \quad \text{and} \quad B=ED\end{align*} $$

is said to be an EE from A to B. If $(D, E)$ is an EE from A to B, then there is a $\mathbb{Z}$-conjugacy $\gamma_{D, E}$ from $(\textsf{X}_A, \sigma_A)$ to $(\textsf{X}_B, \sigma_B)$ satisfying

(1.6) $$ \begin{align} \gamma_{D, E}(x) = y \Leftrightarrow \text{for all } i \in \mathbb{Z}, \quad D(x_i, y_i)=E(y_i, x_{i+1})=1. \end{align} $$

The map $\gamma _{D, E}$ is called an elementary conjugacy.
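Concretely, the elementary conjugacy of equation (1.6) is the two-block sliding code that sends $x_i x_{i+1}$ to the unique symbol $y_i$ with $D(x_i, y_i) = E(y_i, x_{i+1}) = 1$. Here is a minimal sketch of ours on finite blocks, with a helper name of our own choosing.

```python
import numpy as np

def gamma_DE(block, D, E):
    """Apply the elementary conjugacy of equation (1.6) to a finite block
    x_0 ... x_n of X_A, returning y_0 ... y_{n-1}: each y_i is the unique
    symbol with D(x_i, y_i) = E(y_i, x_{i+1}) = 1 (unique because A = DE
    is zero-one)."""
    D, E = np.asarray(D), np.asarray(E)
    image = []
    for a1, a2 in zip(block, block[1:]):
        candidates = [b for b in range(D.shape[1]) if D[a1, b] == 1 and E[b, a2] == 1]
        assert len(candidates) == 1
        image.append(candidates[0])
    return image
```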

An SSE of lag l from A to B is a sequence of l elementary equivalences from $A = A_0$ to $A_1$, from $A_1$ to $A_2, \ldots,$ and from $A_{l-1}$ to $A_l = B$.

It is evident that if A and B are strong shift equivalent, then $(\textsf{X}_A, \sigma_A)$ and $(\textsf{X}_B, \sigma_B)$ are $\mathbb{Z}$-conjugate. Williams' decomposition theorem, found in [9], says that every $\mathbb{Z}$-conjugacy between two $\mathbb{Z}$-TMCs can be decomposed into the composition of a finite number of elementary conjugacies.

To establish analogues of EE, SSE and Williams' decomposition theorem for $D_{\infty}$-TMCs, we first observe some properties of a $D_{\infty}$-system. If $(X, T, F)$ is a $D_{\infty}$-system, then $(X, T, T^n \circ F)$ is also a $D_{\infty}$-system for every integer n. It is obvious that $T^n$ is a $D_{\infty}$-conjugacy from $(X, T, F)$ to $(X, T, T^{2n} \circ F)$ and from $(X, T, T \circ F)$ to $(X, T, T^{2n+1} \circ F)$ for every integer n. For reference, we will see in Proposition 6.1 that $(X, T, F)$ is not $D_{\infty}$-conjugate to $(X, T, T\circ F)$.

Let $(A, J)$ and $(B, K)$ be flip pairs. A pair $(D, E)$ of zero-one matrices satisfying

$$ \begin{align*}A=DE, \quad B=ED \quad \text{and} \quad E=KD^{\textsf{T}}J\end{align*} $$

is said to be a $D_{\infty}$-half elementary equivalence ($D_{\infty}$-HEE) from $(A, J)$ to $(B, K)$. In Proposition 2.1, we will see that if $(D, E)$ is a $D_{\infty}$-HEE from $(A, J)$ to $(B, K)$, then the elementary conjugacy $\gamma_{D, E}$ from equation (1.6) becomes a $D_{\infty}$-conjugacy from $(\textsf{X}_A, \sigma_A, \varphi_{A, J})$ to $(\textsf{X}_B, \sigma_B, \sigma_B \circ \varphi_{B, K})$. We call the map $\gamma_{D, E}$ a $D_{\infty}$-half elementary conjugacy from $(\textsf{X}_A, \sigma_A, \varphi_{A, J})$ to $(\textsf{X}_B, \sigma_B, \sigma_B \circ \varphi_{B, K})$.

A sequence of l $D_{\infty}$-half elementary equivalences from $(A, J) = (A_0, J_0)$ to $(A_1, J_1)$, from $(A_1, J_1)$ to $(A_2, J_2), \ldots,$ and from $(A_{l-1}, J_{l-1})$ to $(A_l, J_l) = (B, K)$ is said to be a $D_{\infty}$-strong shift equivalence ($D_{\infty}$-SSE) of lag l from $(A, J)$ to $(B, K)$. If there is a $D_{\infty}$-SSE of lag l from $(A, J)$ to $(B, K)$, then $(\textsf{X}_A, \sigma_A, \varphi_{A, J})$ is $D_{\infty}$-conjugate to $(\textsf{X}_B, \sigma_B, \sigma_B^l \circ \varphi_{B, K})$. If l is an even number, then $(\textsf{X}_A, \sigma_A, \varphi_{A, J})$ is $D_{\infty}$-conjugate to $(\textsf{X}_B, \sigma_B, \varphi_{B, K})$, while if l is an odd number, then $(\textsf{X}_A, \sigma_A, \varphi_{A, J})$ is $D_{\infty}$-conjugate to $(\textsf{X}_B, \sigma_B, \sigma_B \circ \varphi_{B, K})$. In §4, we will see that Williams' decomposition theorem can be modified as follows.

Proposition A. Suppose that $(A, J)$ and $(B, K)$ are flip pairs.

  1. (1) Two $D_{\infty }$ -TMCs $(\textsf {X}_A, \sigma _A, \varphi _{A, J})$ and $(\textsf {X}_B, \sigma _B, \varphi _{B, K})$ are $D_{\infty }$ -conjugate if and only if there is a $D_{\infty }$ -SSE of lag $2l$ between $(A, J)$ and $(B, K)$ for some positive integer l.

  2. (2) Two $D_{\infty }$ -TMCs $(\textsf {X}_A, \sigma _A, \varphi _{A, J})$ and $(\textsf {X}_B, \sigma _B, \sigma _B \circ \varphi _{B, K})$ are $D_{\infty }$ -conjugate if and only if there is a $D_{\infty }$ -SSE of lag $2l-1$ between $(A, J)$ and $(B, K)$ for some positive integer l.

To introduce the notion of flip signatures, we discuss some properties of $D_{\infty }$ -TMCs. We first indicate notation. If $\mathcal {A}_1$ and $\mathcal {A}_2$ are finite sets and M is an $\mathcal {A}_1 \times \mathcal {A}_2$ zero-one matrix, then for each $a \in \mathcal {A}_1$ , we set

$$ \begin{align*}\mathcal{F}_M(a) = \{b \in \mathcal{A}_2: M(a, b) =1 \}\end{align*} $$

and for each $b \in \mathcal {A}_2$ , we set

$$ \begin{align*}\mathcal{P}_M(b) = \{a \in \mathcal{A}_1: M(a, b) =1 \}.\end{align*} $$

When $(X, T)$ is a TMC, we denote the set of all n-blocks occurring in points in X by $\mathcal {B}_n(X)$ for all non-negative integers n.

Suppose that $(A, J)$ and $(B, K)$ are flip pairs and that $(D, E)$ is a $D_{\infty }$ -HEE from $(A, J)$ to $(B, K)$ . Since B is zero-one and $B=ED$ , it follows that

(1.7) $$ \begin{align} \mathcal{F}_D(a_1) \cap \mathcal{F}_D(a_2) \neq \varnothing &\Rightarrow \mathcal{P}_E(a_1) \cap \mathcal{P}_E(a_2) = \varnothing \nonumber, \\ \mathcal{P}_E(a_1) \cap \mathcal{P}_E(a_2) \neq \varnothing &\Rightarrow \mathcal{F}_D(a_1) \cap \mathcal{F}_D(a_2) = \varnothing, \end{align} $$

for all $a_1, a_2 \in \mathcal {B}_1(\textsf {X}_A)$ . Suppose that u and v are real-valued functions defined on $\mathcal {B}_1(\textsf {X}_A)$ and $\mathcal {B}_1(\textsf {X}_B)$ , respectively. If $|\mathcal {B}_1(\textsf {X}_A)|=m$ and $|\mathcal {B}_1(\textsf {X}_B)|=n$ , then u and v can be regarded as vectors in $\mathbb {R}^m$ and $\mathbb {R}^n$ , respectively. If u and v satisfy

(1.8) $$ \begin{align} \text{ for all } a \in \mathcal{B}_1(\textsf{X}_A) \quad u(a) = \sum_{b \in \mathcal{F}_D(a)} v(b), \end{align} $$

then for each $a \in \mathcal {B}_1(\textsf {X}_A)$ , we have

$$ \begin{align*}u(\tau_J(a)) u(a) = \sum_{b \in \mathcal{P}_E(a)} v(\tau_K(b)) \sum_{b \in \mathcal{F}_D(a)} v(b)\end{align*} $$

by $E = KD^{\textsf{T}}J$, and equation (1.7) leads to

$$ \begin{align*}\sum_{a \in \mathcal{B}_1(\textsf{X}_A)} u(\tau_J(a)) u(a) = \sum_{b \in \mathcal{B}_1(\textsf{X}_B)} \sum_{d \in \mathcal{P}_B(b)} v(\tau_K(d)) v(b).\end{align*} $$

Since J and K are symmetric, this formula can be expressed in terms of symmetric bilinear forms with respect to J and K. If we write $\langle u, u \rangle _J = u^{\textsf {T}}Ju$ and $\langle Bv, v \rangle _K = (Bv)^{\textsf {T}}Kv$ , then we have

$$ \begin{align*}\langle u, u \rangle_J = \langle Bv, v \rangle_K{\kern-1.2pt}.\end{align*} $$

We note that if both A and B have a real number $\lambda$ as an eigenvalue and v is an eigenvector of B corresponding to $\lambda$, then u satisfying equation (1.8) is an eigenvector of A corresponding to $\lambda$. We consider the case where A and B have $0$ as an eigenvalue and find relationships between the symmetric bilinear forms $\langle \;\; , \;\; \rangle_J$ and $\langle \;\; , \;\; \rangle_K$ on the generalized eigenvectors of A and B corresponding to $0$ when $(A, J)$ and $(B, K)$ are $D_{\infty}$-half elementary equivalent.

We call the subspace $\mathcal{K}(A)$ of $\mathbb{R}^m$ consisting of all u such that $A^p u = 0$ for some $p \in \mathbb{N}$ the eventual kernel of A:

$$ \begin{align*}\mathcal{K}(A) = \{u \in \mathbb{R}^m : A^p u = 0 \text{ for some } p \in \mathbb{N} \}.\end{align*} $$

If $u \in \mathcal {K}(A) \setminus \{0\}$ and p is the smallest positive integer for which $A^pu=0$ , then the ordered set

$$ \begin{align*}\alpha = \{A^{p-1}u, \ldots, Au, u\}\end{align*} $$

is called a cycle of generalized eigenvectors of A corresponding to $0$ . In this paper, we sometimes call $\alpha $ a cycle in $\mathcal {K}(A)$ for simplicity. The vectors $A^{p-1}u$ and u are called the initial vector and the terminal vector of $\alpha $ , respectively, and we write

$$ \begin{align*}\text{Ini}(\alpha) = A^{p-1}u \quad \text{and} \quad \text{Ter}(\alpha) = u.\end{align*} $$

We say that the length of $\alpha$ is p and write $|\alpha|=p$. It is well known [3] that there is a basis for $\mathcal{K}(A)$ consisting of a union of disjoint cycles of generalized eigenvectors of A corresponding to $0$. The set of bases for $\mathcal{K}(A)$ consisting of a union of disjoint cycles of generalized eigenvectors of A corresponding to $0$ is denoted by $\mathcal{B}as(\mathcal{K}(A))$. We will prove the following proposition in §3.

Proposition B. Suppose that $(D, E)$ is a $D_{\infty}$-HEE from $(A, J)$ to $(B, K)$. Then there exist bases $\mathcal{E}(A) \in \mathcal{B}as(\mathcal{K}(A))$ and $\mathcal{E}(B) \in \mathcal{B}as(\mathcal{K}(B))$ such that if $p>1$ and $\alpha = \{u_1, u_2, \ldots, u_p\}$ is a cycle in $\mathcal{E}(A)$, then one of the following holds.

  1. (1) There is a cycle $\beta =\{v_1, v_2, \ldots , v_{p+1}\}$ in $\mathcal {E}(B)$ such that

    $$ \begin{align*}Dv_{k+1}=u_{k} \quad \mathrm{and} \quad Eu_k =v_k \quad (k=1, \ldots, p).\end{align*} $$
  2. (2) There is a cycle $\beta = \{v_1, v_2, \ldots , v_{p-1}\}$ in $\mathcal {E}(B)$ such that

    $$ \begin{align*}Dv_{k}=u_{k} \quad \mathrm{and} \quad Eu_{k+1} =v_k \quad (k=1, \ldots, p-1).\end{align*} $$

In either case, we have

(1.9) $$ \begin{align} \langle \mathrm{Ini}(\alpha), \mathrm{Ter}(\alpha) \rangle_J= \langle \mathrm{Ini}(\beta), \mathrm{Ter}(\beta)\rangle_K. \end{align} $$

In Lemma 3.3, we will show that there is a basis $\mathcal{E}(A) \in \mathcal{B}as(\mathcal{K}(A))$ such that for every cycle $\alpha$ in $\mathcal{E}(A)$, the restriction of the symmetric bilinear form $\langle \;\; , \;\; \rangle_J$ to $\text{span}(\alpha)$ is non-degenerate, and in Lemma 3.2, we will see that the restriction of the symmetric bilinear form $\langle \;\; , \;\; \rangle_J$ to $\text{span}(\alpha)$ is non-degenerate if and only if the left-hand side of equation (1.9) is not $0$ for a cycle $\alpha$ in $\mathcal{E}(A)$. In this case, we define the sign of a cycle $\alpha = \{u_1, u_2, \ldots, u_p\}$ in $\mathcal{E}(A)$ by

$$ \begin{align*}\text{sgn}(\alpha) = \begin{cases}+1 \quad \text{if } \langle \text{Ini}(\alpha), \text{Ter}(\alpha)\rangle_J\!>0, \\ -1 \quad \text{if } \langle \text{Ini}(\alpha), \text{Ter}(\alpha)\rangle_J\!<0. \end{cases}\end{align*} $$

We denote the set of $|\alpha |$ such that $\alpha $ is a cycle in $\mathcal {E}(A)$ by $\mathcal {I}nd(\mathcal {K}(A))$ . It is clear that $\mathcal {I}nd(\mathcal {K}(A))$ is independent of the choice of basis for $\mathcal {K}(A)$ . We denote the union of the cycles $\alpha $ of length p in $\mathcal {E}(A)$ by $\mathcal {E}_p(A)$ for each $p\in \mathcal {I}nd(\mathcal {K}(A))$ and define the sign of $\mathcal {E}_p(A)$ by

$$ \begin{align*}\text{sgn}(\mathcal{E}_p(A)) = \prod_{\{\alpha : \alpha \text{ is a cycle in } \mathcal{E}_p(A)\}} \text{sgn}(\alpha).\end{align*} $$

In §3, we will prove the sign of $\mathcal {E}_p(A)$ is also independent of the choice of basis for $\mathcal {K}(A)$ if the restriction of $\langle \;\; , \;\; \rangle _J$ to $\text {span}(\alpha )$ is non-degenerate for every cycle $\alpha $ in $\mathcal {E}_p(A)$ .

Proposition C. Suppose that $\mathcal {E}(A)$ and $\mathcal {E}'(A)$ are two distinct bases in $\mathcal {B}as(\mathcal {K}(A))$ such that for every cycle $\alpha $ in $\mathcal {E}(A)$ or $\mathcal {E}'(A)$ , the restriction of $\langle \;\; , \;\; \rangle _J$ to $\mathrm {span}(\alpha )$ is non-degenerate. Then for each $p \in \mathcal {I}nd(\mathcal {K}(A))$ , we have

$$ \begin{align*}\mathrm{sgn}(\mathcal{E}_p(A))=\mathrm{sgn}(\mathcal{E}^{\prime}_p(A)).\end{align*} $$

Suppose that $\mathcal {E}(A) \in \mathcal {B}as(\mathcal {K}(A))$ and that the restriction of $\langle \;\; , \;\; \rangle _J$ to $\text {span}(\alpha )$ is non-degenerate for every cycle in $\mathcal {E}(A)$ . We arrange the elements of $\mathcal {I}nd(\mathcal {K}(A)) = \{p_1, p_2, \ldots , p_A\}$ to satisfy

$$ \begin{align*}p_1 < p_2 < \cdots < p_A\end{align*} $$

and write

$$ \begin{align*}\varepsilon_{p}=\text{sgn}(\mathcal{E}_p(A)).\end{align*} $$

If $|\mathcal {I}nd(\mathcal {K}(A))|=k$ , then the k-tuple $(\varepsilon _{p_1}, \varepsilon _{p_2}, \ldots , \varepsilon _{p_A})$ is called the flip signature of $(A, J)$ and $\varepsilon _{p_A}$ is called the leading signature of $(A, J)$ . The flip signature of $(A, J)$ is denoted by

$$ \begin{align*}\mathrm{F.Sig}(A, J) = (\varepsilon_{p_1}, \varepsilon_{p_2}, \ldots, \varepsilon_{p_A}).\end{align*} $$
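If a basis $\mathcal{E}(A) \in \mathcal{B}as(\mathcal{K}(A))$ with the non-degeneracy property described above is already in hand (the sketch below does not construct one; see Lemma 3.3), the flip signature is a mechanical computation. The helper below is ours and purely illustrative: it takes each cycle as a list of vectors, initial vector first and terminal vector last.

```python
import numpy as np
from collections import defaultdict

def flip_signature(cycles, J):
    """Compute F.Sig(A, J) from a basis of K(A) given as a list of cycles,
    assuming <Ini(alpha), Ter(alpha)>_J is non-zero for every cycle alpha."""
    J = np.asarray(J, dtype=float)
    sign_by_length = defaultdict(lambda: 1)
    for cycle in cycles:
        ini = np.asarray(cycle[0], dtype=float)    # Ini(alpha) = A^{p-1} u
        ter = np.asarray(cycle[-1], dtype=float)   # Ter(alpha) = u
        pairing = ini @ J @ ter                    # <Ini(alpha), Ter(alpha)>_J
        sign_by_length[len(cycle)] *= 1 if pairing > 0 else -1
    # order the signs by increasing cycle length p_1 < p_2 < ... < p_A
    return tuple(sign_by_length[p] for p in sorted(sign_by_length))
```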

The following is the main result of this paper.

Theorem D. Suppose that $(A, J)$ and $(B, K)$ are flip pairs and that $(\textsf {X}_A, \sigma _A, \varphi _{A, J})$ and $(\textsf {X}_B, \sigma _B, \varphi _{B, K})$ are $D_{\infty }$ -conjugate. If

$$ \begin{align*}\text{F.Sig}(A, J) = (\varepsilon_{p_1}, \varepsilon_{p_2}, \ldots, \varepsilon_{p_A})\end{align*} $$

and

$$ \begin{align*}\text{F.Sig}(B, K) = (\varepsilon_{q_1}, \varepsilon_{q_2}, \ldots, \varepsilon_{q_B}),\end{align*} $$

then $\text {F.Sig}(A, J)$ and $\text {F.Sig}(B, K)$ have the same number of $-1$ s and the leading signatures $\varepsilon _{p_A}$ and $\varepsilon _{q_B}$ coincide:

$$ \begin{align*}\varepsilon_{p_A} = \varepsilon_{q_B}.\end{align*} $$

In §7, we will compute the flip signatures of $(A, J)$, $(B, I)$ and $(B, K)$, where A, J, B, I and K are as in equation (1.2) and prove equations (1.3), (1.4) and (1.5). Actually, we can obtain equations (1.3), (1.4) and (1.5) from the Lind zeta functions. In [4], an explicit formula for the Lind zeta function for a $D_{\infty}$-TMC was established, which can be expressed in terms of matrices from flip pairs. From this formula (see also §6), it is obvious that the Lind zeta function is a $D_{\infty}$-conjugacy invariant. In Example 7.1, we will see that the Lind zeta functions of $(\textsf{X}_A, \sigma_A, \varphi_{A, J})$, $(\textsf{X}_B, \sigma_B, \varphi_{B, I})$ and $(\textsf{X}_B, \sigma_B, \varphi_{B, K})$ are all different. In §6, we introduce the notion of $D_{\infty}$-shift equivalence ($D_{\infty}$-SE) which is an analogue of shift equivalence and prove that $D_{\infty}$-SE is a $D_{\infty}$-conjugacy invariant. In Example 7.2, we will see that there are $D_{\infty}$-SEs between $(A, J)$, $(B, I)$ and $(B, K)$ pairwise. So the existence of $D_{\infty}$-shift equivalence between two flip pairs does not imply that the corresponding $D_{\infty}$-TMCs share the same Lind zeta functions. This contrasts with the fact that the existence of shift equivalence between two defining matrices A and B implies the coincidence of the Artin–Mazur zeta functions [1] of the $\mathbb{Z}$-TMCs $(\textsf{X}_A, \sigma_A)$ and $(\textsf{X}_B, \sigma_B)$. Meanwhile, Example 7.5 says that the coincidence of the Lind zeta functions of two $D_{\infty}$-TMCs does not guarantee the existence of $D_{\infty}$-shift equivalence between their flip pairs. This is analogous to the case of $\mathbb{Z}$-TMCs because the coincidence of the Artin–Mazur zeta functions of two $\mathbb{Z}$-TMCs does not guarantee the existence of SE between their defining matrices. (See §7.4 in [8].)

When $(A, J)$ is a flip pair with $|\mathcal{B}_1(\textsf{X}_A)| = m$, the matrix A defines a linear transformation $A : \mathbb{R}^m \rightarrow \mathbb{R}^m$. The largest subspace $\mathcal{R}(A)$ of $\mathbb{R}^m$ on which A is invertible is called the eventual range of A:

$$ \begin{align*}\mathcal{R}(A) = \bigcap_{k=1}^{\infty} A^k\mathbb{R}^m.\end{align*} $$

Similarly, the eventual kernel $\mathcal {K}(A)$ of A is the largest subspace of $\mathbb {R}^m$ on which A is nilpotent:

$$ \begin{align*}\mathcal{K}(A) = \bigcup_{k=1}^{\infty} \text{ker}(A^k).\end{align*} $$

With this notation, we can write $\mathbb{R}^m = \mathcal{R}(A) \oplus \mathcal{K}(A)$. (See §7.4 in [8].) The flip signature of $(A, J)$ is completely determined by $\mathcal{K}(A)$, while the Lind zeta functions and the existence of $D_{\infty}$-shift equivalence between two flip pairs depend on the eventual ranges of transition matrices. In other words, two flip signatures which have the same number of $-1$s and share the same leading signature have nothing to do with the coincidence of the Lind zeta functions or the existence of $D_{\infty}$-shift equivalence. As a result, flip signatures cannot be a complete $D_{\infty}$-conjugacy invariant. This will be clear in Example 7.4.
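Since both subspaces stabilize once the power of A reaches the matrix size, the decomposition $\mathbb{R}^m = \mathcal{R}(A) \oplus \mathcal{K}(A)$ can be computed from $A^m$ alone. The following is a minimal numerical sketch of ours, assuming SciPy is available.

```python
import numpy as np
from scipy.linalg import null_space, orth

def eventual_range_and_kernel(A):
    """Return orthonormal bases of the eventual range R(A) and the eventual
    kernel K(A); both are determined by A^m, where m is the matrix size."""
    A = np.asarray(A, dtype=float)
    m = A.shape[0]
    Am = np.linalg.matrix_power(A, m)
    R = orth(Am)            # column space of A^m  = R(A)
    Kr = null_space(Am)     # kernel of A^m        = K(A)
    assert R.shape[1] + Kr.shape[1] == m   # R^m = R(A) (+) K(A)
    return R, Kr
```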

This paper is organized as follows. In §2, we introduce the notions of $D_{\infty }$ -half elementary equivalence and $D_{\infty }$ -strong shift equivalence. In §3, we investigate symmetric bilinear forms with respect to J and K on the eventual kernels of A and B when two flip pairs $(A, J)$ and $(B, K)$ are $D_{\infty }$ -half elementary equivalent. In the same section, we prove Propositions B and C. Proposition A and Theorem D will be proved in §§4 and 5, respectively. In §6, we discuss the notion of $D_{\infty }$ -shift equivalence and the Lind zeta function. Section 7 consists of examples.

2 $D_{\infty }$ -strong shift equivalence

Let $(A, J)$ and $(B, K)$ be flip pairs. A pair $(D, E)$ of zero-one matrices satisfying

$$ \begin{align*}A=DE, \quad B=ED \quad \text{and} \quad E=KD^{\textsf{T}}J\end{align*} $$

is said to be a $D_{\infty}$-half elementary equivalence from $(A, J)$ to $(B, K)$. If there is a $D_{\infty}$-half elementary equivalence from $(A, J)$ to $(B, K)$, then we say that $(A, J)$ is $D_{\infty}$-half elementarily equivalent to $(B, K)$. We note that the symmetry of J and K implies

$$ \begin{align*}E=KD^{\textsf{T}}J \Leftrightarrow D=JE^{\textsf{T}}K.\end{align*} $$
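The three defining conditions of a $D_{\infty}$-HEE can be checked mechanically. The helper below is ours and merely restates the definition (the equivalent form $D = JE^{\textsf{T}}K$ noted above could be checked instead).

```python
import numpy as np

def is_dinf_hee(D, E, A, J, B, K):
    """Check that (D, E) is a D_infinity-half elementary equivalence from the
    flip pair (A, J) to the flip pair (B, K): A = DE, B = ED and E = K D^T J."""
    D, E, A, J, B, K = map(np.asarray, (D, E, A, J, B, K))
    return (np.array_equal(A, D @ E)
            and np.array_equal(B, E @ D)
            and np.array_equal(E, K @ D.T @ J))
```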

Proposition 2.1. If $(D, E)$ is a $D_{\infty}$-half elementary equivalence from $(A, J)$ to $(B, K)$, then $(\textsf{X}_A, \sigma_A, \varphi_{A, J})$ is $D_{\infty}$-conjugate to $(\textsf{X}_B, \sigma_B, \sigma_B \circ \varphi_{B, K})$.

Proof. Since D and E are zero-one and $A=DE$ , it follows that for all $a_1a_2 \in \mathcal {B}_2(\textsf {X}_A)$ , there is a unique $b\in \mathcal {B}_1(\textsf {X}_B)$ such that

$$ \begin{align*}D(a_1, b) = E(b, a_2)=1.\end{align*} $$

We denote the block map which sends $a_1a_2 \in \mathcal {B}_2(\textsf {X}_A)$ to $b \in \mathcal {B}_1(\textsf {X}_B)$ by $\Gamma _{D, E}$ . If we define the map $\gamma _{D, E} : (\textsf {X}_A, \sigma _A) \rightarrow (\textsf {X}_B, \sigma _B)$ by

$$ \begin{align*}\gamma_{D, E}(x)_i = \Gamma_{D, E} (x_i x_{i+1} ) \quad (x \in \textsf{X}_A; i \in \mathbb{Z}),\end{align*} $$

then we have $\gamma _{D, E} \circ \sigma _A = \sigma _B \circ \gamma _{D, E}$ .

Since D and E are zero-one and $B=ED$, we can define the block map $\Gamma_{E, D} : \mathcal{B}_2(\textsf{X}_B) \rightarrow \mathcal{B}_1(\textsf{X}_A)$ and the map $\gamma_{E, D}: (\textsf{X}_B, \sigma_B) \rightarrow (\textsf{X}_A, \sigma_A)$ in the same way. Since $\gamma_{E, D} \circ \gamma_{D, E} = \text{Id}_{\textsf{X}_A}$ and $\gamma_{D, E} \circ \gamma_{E, D} = \text{Id}_{\textsf{X}_B}$, it follows that $\gamma_{D, E}$ is one-to-one and onto.

It remains to show that

(2.1) $$ \begin{align} \gamma_{D, E} \circ \varphi_{A, J} = ( \sigma_B \circ \varphi_{B, K} ) \circ \gamma_{D, E}. \end{align} $$

Since $E = KD^{\textsf {T}}J$ , it follows that

$$ \begin{align*}E(b, a) = 1 \Leftrightarrow D(\tau_J(a), \tau_K(b))=1 \quad (a \in \mathcal{B}_1(\textsf{X}_A), b\in \mathcal{B}_1(\textsf{X}_B)).\end{align*} $$

This is equivalent to

$$ \begin{align*}D(a, b) = 1 \Leftrightarrow E(\tau_K(b), \tau_J(a))=1 \quad (a \in \mathcal{B}_1(\textsf{X}_A), b\in \mathcal{B}_1(\textsf{X}_B)).\end{align*} $$

Thus, we obtain

(2.2) $$ \begin{align} \Gamma_{D, E}(a_1 a_2) =b \Leftrightarrow \Gamma_{D, E} ( \tau_J(a_2) \tau_J(a_1) ) = \tau_K(b) \quad ( a_1 a_2 \in \mathcal{B}_2 (\textsf{X}_A) ). \end{align} $$

By equation (2.2), we have

$$ \begin{align*} \gamma_{D, E} \circ \varphi_{J, A} (x)_i &= \Gamma_{D, E}(\tau_J(x_{-i}) \tau_J(x_{-i-1})) = \tau_K (\Gamma_{D, E}(x_{-i-1}x_{-i}))\\ &= \varphi_{B, K} \circ \gamma_{D, E} (x)_{i+1} = ( \sigma_B \circ \varphi_{B, K} ) \circ \gamma_{D, E} (x)_i \end{align*} $$

for all $x \in \textsf {X}_A$ and $i \in \mathbb {Z}$ and this proves equation (2.1).

Let $(A, J)$ and $(B, K)$ be flip pairs. A sequence of l $D_{\infty}$-half elementary equivalences from $(A, J) = (A_0, J_0)$ to $(A_1, J_1)$, from $(A_1, J_1)$ to $(A_2, J_2), \ldots,$ and from $(A_{l-1}, J_{l-1})$ to $(A_l, J_l) = (B, K)$ is said to be a $D_{\infty}$-SSE of lag l from $(A, J)$ to $(B, K)$. If there is a $D_{\infty}$-SSE of lag l from $(A, J)$ to $(B, K)$, then we say that $(A, J)$ is $D_{\infty}$-strong shift equivalent to $(B, K)$ and write $(A, J) \approx (B, K)$ (lag l).

By Proposition 2.1, we have

(2.3) $$ \begin{align} (A, J) \approx (B, K) \; (\text{lag} \; l) \Rightarrow (\textsf{X}_A, \sigma_A, \varphi_{A, J}) \cong (\textsf{X}_B, \sigma_B, {\sigma_B}^l \circ \varphi_{B, K}). \end{align} $$

Because ${\sigma_B}^l$ is a $D_{\infty}$-conjugacy from $(\textsf{X}_B, \sigma_B, \varphi_{B, K})$ to $(\textsf{X}_B, \sigma_B, {\sigma_B}^{2l} \circ \varphi_{B, K})$, the implication in equation (2.3) can be rewritten as follows:

(2.4) $$ \begin{align} (A, J) \approx (B, K) \; (\text{lag} \; 2l) \Rightarrow (\textsf{X}_A, \sigma_A, \varphi_{A, J}) \cong (\textsf{X}_B, \sigma_B, \varphi_{B, K}) \end{align} $$

and

(2.5) $$ \begin{align} (A, J) \approx (B, K) \; (\text{lag} \; 2l-1) \Rightarrow (\textsf{X}_A, \sigma_A, \varphi_{A, J}) \cong (\textsf{X}_B, \sigma_B, \sigma_B \circ \varphi_{B, K}). \end{align} $$

In §4, we will prove Proposition A which says that the converses of equations (2.4) and (2.5) are also true.

3 Symmetric bilinear forms

Suppose that $(A, J)$ is a flip pair and that $|\mathcal {B}_1(\textsf {X}_A)|=m$ . Let V be an m-dimensional vector space over the field $\mathbb {C}$ of complex numbers. Let $\langle u, v \rangle _J$ denote the bilinear form $V \times V \rightarrow ~\mathbb {C}$ defined by

$$ \begin{align*}(u, v)\mapsto u^{\textsf{T}}J\bar{v} \quad (u, v \in V).\end{align*} $$

Since J is a non-singular symmetric matrix, it follows that the bilinear form $\langle \;\;\; , \;\;\; \rangle _J$ is symmetric and non-degenerate. If $u, v \in V$ and $\langle u, v \rangle _J = 0$ , then u and v are said to be orthogonal with respect to J and we write $u \perp _J v$ . From $AJ=JA^{\textsf {T}}$ , we see that A itself is the adjoint of A in the following sense:

(3.1) $$ \begin{align} \langle Au, v \rangle_J = \langle u, Av \rangle_J. \end{align} $$
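For completeness, here is a one-line verification of equation (3.1) (this computation is ours): multiplying $AJ=JA^{\textsf{T}}$ on the left by J and using $J^2=I$ gives $A^{\textsf{T}} = JAJ$, hence $A^{\textsf{T}}J = JA$ and

$$ \begin{align*}\langle Au, v \rangle_J = u^{\textsf{T}}A^{\textsf{T}}J\bar{v} = u^{\textsf{T}}JA\bar{v} = u^{\textsf{T}}J\overline{Av} = \langle u, Av \rangle_J.\end{align*} $$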

If $\lambda$ is an eigenvalue of A and u is an eigenvector of A corresponding to $\lambda$, then for any $v \in V$, we have

(3.2) $$ \begin{align} \lambda \langle u, v \rangle_J = \langle \lambda u, v \rangle_J = \langle Au, v \rangle_J = \langle u, Av \rangle_J. \end{align} $$

Let $\text{sp}(A)$ denote the set of eigenvalues of A. For each $\lambda \in \text{sp}(A)$, let $\mathcal{K}_{\lambda}(A)$ denote the set of $u \in V$ such that $(A-\lambda I)^p u =0$ for some $p \in \mathbb{N}$:

$$ \begin{align*}\mathcal{K}_{\lambda}(A) = \{u \in V : \text{there exists } p \in \mathbb{N} \text{ such that } (A-\lambda I)^p u = 0 \}.\end{align*} $$

If $u \in \mathcal{K}_{\lambda}(A) \setminus \{0\}$ and p is the smallest positive integer for which $(A-\lambda I)^pu=0$, then the ordered set

$$ \begin{align*}\alpha = \{(A-\lambda I)^{p-1}u, \ldots, (A-\lambda I)u, u\}\end{align*} $$

is called a cycle of generalized eigenvectors of A corresponding to $\lambda$. The vectors $(A-\lambda I)^{p-1}u$ and u are called the initial vector and the terminal vector of $\alpha$, respectively, and we write

$$ \begin{align*}\text{Ini}(\alpha) = (A-\lambda I)^{p-1}u \quad \text{and} \quad \text{Ter}(\alpha) = u.\end{align*} $$

We say that the length of $\alpha$ is p and write $|\alpha|=p$. It is well known [3] that there is a basis for $\mathcal{K}_{\lambda}(A)$ consisting of a union of disjoint cycles of generalized eigenvectors of A corresponding to $\lambda$. From here on, when we say $\alpha =\{u_1, \ldots, u_p\}$ is a cycle in $\mathcal{K}_{\lambda}(A)$, it means $\alpha$ is a cycle of generalized eigenvectors of A corresponding to $\lambda$, $\text{Ini}(\alpha) = u_1$, $\text{Ter}(\alpha)=u_p$ and $|\alpha|=p$.

Suppose that $\mathcal{U}(A)$ is a basis for V consisting of generalized eigenvectors of A, that A has $0$ as an eigenvalue and that $\mathcal{E}(A)$ is the subset of $\mathcal{U}(A)$ consisting of the generalized eigenvectors of A corresponding to $0$. Non-degeneracy of $\langle \;\;\; , \;\;\; \rangle_J$ says that for each $u \in \mathcal{E}(A)$, there is a $v \in \mathcal{U}(A)$ such that $\langle u, v \rangle_J \neq 0$. The following lemma says that the vector v must be in $\mathcal{E}(A)$.

Lemma 3.1. Suppose that $\lambda, \mu \in \mathrm{sp}(A)$. If $\lambda$ is distinct from the complex conjugate $\bar{\mu}$ of $\mu$, then $\mathcal{K}_{\lambda}(A) \perp_J \mathcal{K}_{\mu}(A)$.

Proof. Suppose that

$$ \begin{align*}\alpha = \{u_1, \ldots, u_p\} \quad \text{and} \quad \beta = \{v_1, \ldots, v_q\}\end{align*} $$

are cycles in $\mathcal{K}_{\lambda}(A)$ and $\mathcal{K}_{\mu}(A)$, respectively. Since equation (3.2) implies

$$ \begin{align*}\lambda \langle u_1, v_1 \rangle_J = \langle u_1, Av_1 \rangle_J = \bar{\mu} \langle u_1, v_1 \rangle_J, \end{align*} $$

it follows that

$$ \begin{align*}\langle u_1, v_1 \rangle_J = 0.\end{align*}$$

Using equation (3.2) again, we have

$$ \begin{align*}\lambda \langle u_1, v_{j+1} \rangle_J = \langle u_1, \mu v_{j+1} + v_{j} \rangle_J = \bar{\mu} \langle u_1, v_{j+1} \rangle_J + \langle u_1, v_j \rangle_J\end{align*} $$

for each $j=1, \ldots , q-1$ . By mathematical induction on j, we see that

$$ \begin{align*}\langle u_1, v_j \rangle_J = 0 \quad (j=1, \ldots, q).\end{align*} $$

Applying the same process to each of $u_2, \ldots, u_p$, we obtain

$$ \begin{align*}\text{ for all } i=1, \ldots, p, \, \text{ for all } j=1, \ldots, q, \quad \langle u_i, v_j \rangle_J = 0.\end{align*} $$

Remark. Non-degeneracy of $\langle \;\;, \;\; \rangle _J$ and Lemma 3.1 imply that the restriction of $\langle \;\;\; , \;\;\; \rangle _J$ to $\mathcal {K}_0(A)$ is non-degenerate.

From here on, we restrict our attention to the zero eigenvalue and the generalized eigenvectors corresponding to $0$ . For notational simplicity, the smallest subspace of V containing all generalized eigenvectors of A corresponding to $0$ is denoted by $\mathcal {K}(A)$ and we call the subspace $\mathcal {K}(A)$ of V the eventual kernel of A. We may assume that the eventual kernel of A is a real vector space. The set of bases for $\mathcal {K}(A)$ consisting of a union of disjoint cycles of generalized eigenvectors of A corresponding to $0$ is denoted by $\mathcal {B}as(\mathcal {K}(A))$ . If $\mathcal {E}(A) \in \mathcal {B}as(\mathcal {K}(A))$ , the set of $|\alpha |$ such that $\alpha $ is a cycle in $\mathcal {E}(A)$ is denoted by $\mathcal {I}nd(\mathcal {K}(A))$ and we call $\mathcal {I}nd(\mathcal {K}(A))$ the index set for the eventual kernel of A. It is clear that $\mathcal {I}nd(\mathcal {K}(A))$ is independent of the choice of $\mathcal {E}(A) \in \mathcal {B}as(\mathcal {K}(A))$ . When $p \in \mathcal {I}nd(\mathcal {K}(A))$ , we denote the union of the cycles of length p in $\mathcal {E}(A)$ by $\mathcal {E}_p(A)$ .
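The index set $\mathcal{I}nd(\mathcal{K}(A))$, together with the number of cycles of each length, can be read off from the ranks of powers of A without constructing a basis, via the standard count of Jordan blocks for the eigenvalue $0$. The following sketch is ours and is only meant for small experiments.

```python
import numpy as np

def index_set_with_multiplicities(A):
    """Number of cycles of each length p in any basis from Bas(K(A)):
    the count of Jordan blocks of size p for the eigenvalue 0 equals
    rank(A^{p-1}) - 2*rank(A^p) + rank(A^{p+1})."""
    A = np.asarray(A, dtype=float)
    m = A.shape[0]
    rank = [np.linalg.matrix_rank(np.linalg.matrix_power(A, k)) for k in range(m + 2)]
    counts = {p: rank[p - 1] - 2 * rank[p] + rank[p + 1] for p in range(1, m + 1)}
    return {p: c for p, c in counts.items() if c > 0}   # keys form Ind(K(A))
```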

Lemma 3.2. Suppose that $\mathcal {E}(A) \in \mathcal {B}as(\mathcal {K}(A))$ and that $p \in \mathcal {I}nd(\mathcal {K}(A))$ .

  1. (1) Suppose that $\alpha $ is a cycle in $\mathcal {E}_p(A)$ . The restriction of $\langle \;\;\; , \;\;\; \rangle _J$ to $\mathrm {span}(\alpha )$ is non-degenerate if and only if

    $$ \begin{align*}\langle \mathrm{Ini}(\alpha), \mathrm{Ter}(\alpha) \rangle_J \neq 0.\end{align*} $$
  2. (2) The restriction of $\langle \;\;\; , \;\;\; \rangle _J$ to $\mathcal {E}_p(A)$ is non-degenerate.

Proof. Suppose that $\alpha =\{u_1, \ldots , u_p\}$ is a cycle in $\mathcal {E}_p(A)$ . By equation (3.1), we have

$$ \begin{align*} \langle u_1, u_{i} \rangle_J = \langle u_1, Au_{i+1} \rangle_J = \langle Au_1, u_{i+1} \rangle_J = 0 \end{align*} $$

and

(3.3) $$ \begin{align} \langle u_{i+1}, u_{j} \rangle_J - \langle u_{i}, u_{j+1} \rangle_J = \langle u_{i+1}, Au_{j+1} \rangle_J - \langle u_{i}, u_{j+1} \rangle_J=0 \end{align} $$

for each $i, j=1, \ldots , p-1$ . Suppose that $T_p$ is the $m \times p$ matrix whose ith column is $u_i$ for each $i=1, \ldots , p$ . If we set $\langle u_i, u_{p} \rangle _J = b_i$ for each $i=1, 2, \ldots , p$ , then $T_p^{\textsf {T}}JT_p$ is of the form

$$ \begin{align*} T_p^{\textsf{T}}JT_p = \left[\begin{array}{ccccccc} 0 & 0 & 0 & \cdots & 0 & 0 & b_1 \\ 0 & 0 & 0 & \cdots & 0 & b_1 & b_2 \\ 0 & 0 & 0 & \cdots & b_1 & b_2 & b_3 \\ \vdots & \vdots & \vdots & & \vdots & \vdots & \vdots \\ b_1 & b_2 & b_3 & \cdots & b_{p-2} & b_{p-1} & b_p \end{array}\right]. \end{align*} $$

This proves item (1).

To prove item (2), we only consider the case where $\mathcal{I}nd(\mathcal{K}(A))=\{p, q\}$ with $p<q$ and each of $\mathcal{E}_p(A)$ and $\mathcal{E}_q(A)$ consists of a single cycle. Suppose that $\alpha =\{u_1, \ldots, u_p\}$ and $\beta = \{v_1, \ldots, v_q\}$ are cycles in $\mathcal{E}_p(A)$ and $\mathcal{E}_q(A)$, respectively. When $T_p$ is as above, we will prove $T_p^{\textsf{T}}JT_p$ is non-singular. We let $T_q$ be the $m \times q$ matrix whose ith column is $v_i$ for each $i=1, \ldots, q$. If T is the $m \times (p+q)$ matrix defined by

$$ \begin{align*}T =\left[\begin{array}{@{}ll@{}} T_p & T_q \end{array}\right]\!,\end{align*} $$

then

$$ \begin{align*}T^{\textsf{T}}JT = \left[\begin{array}{@{}ll@{}} T_p^{\textsf{T}}JT_p & T_p^{\textsf{T}}JT_q \\ T_q^{\textsf{T}}JT_p & T_q^{\textsf{T}}JT_q \end{array}\right]\end{align*} $$

is non-singular by the remark after Lemma 3.1. By the argument in the proof of item (1), we can put

$$ \begin{align*} T_p^{\textsf{T}}JT_p = \left[\begin{array}{ccccccc} 0 & 0 & 0 & \cdots & 0 & 0 & b_1 \\ 0 & 0 & 0 & \cdots & 0 & b_1 & b_2 \\ 0 & 0 & 0 & \cdots & b_1 & b_2 & b_3 \\ \vdots & \vdots & \vdots & & \vdots & \vdots & \vdots \\ b_1 & b_2 & b_3 & \cdots & b_{p-2} & b_{p-1} & b_p \end{array}\right] \end{align*} $$

and

$$ \begin{align*}T_q^{\textsf{T}}JT_q = \left[\begin{array}{ccccccc} 0 & 0 & 0 & \cdots & 0 & 0 & d_1 \\ 0 & 0 & 0 & \cdots & 0 & d_1 & d_2 \\ 0 & 0 & 0 & \cdots & d_1 & d_2 & d_3 \\ \vdots & \vdots & \vdots & & \vdots & \vdots & \vdots \\ d_1 & d_2 & d_3 & \cdots & d_{q-2} & d_{q-1} & d_q \end{array}\right]\!.\end{align*} $$

Now we consider $T_p^{\textsf {T}}JT_q$ . By equation (3.1), we have

$$ \begin{align*} \langle u_1, v_k \rangle_J &= 0 \quad (k=1, \ldots, q-1),\\ \langle u_2, v_k \rangle_J &= 0 \quad (k=1, \ldots, q-2),\\ \vdots& \\ \langle u_p, v_k \rangle_J &= 0 \quad (k=1, \ldots, q-p). \end{align*} $$

If we set $\langle u_i, v_q \rangle _J = c_{i}$ for each $i= 1, 2, \ldots , p$ , then the argument in equation (3.3) shows that $T_p^{\textsf {T}}JT_q$ is of the form

$$ \begin{align*}T_p^{\textsf{T}}JT_q = \left[\begin{array}{ccccccccc} 0 & \cdots & 0 & 0 & 0 & \cdots & 0 & 0 & c_1 \\ 0 & \cdots & 0 & 0 & 0 & \cdots & 0 & c_1 & c_2 \\ 0 & \cdots & 0 & 0 & 0 & \cdots & c_1 & c_2 & c_3 \\ \vdots & & \vdots & \vdots & \vdots & & \vdots & \vdots & \vdots \\ 0 & \cdots & c_1 & c_2 & c_3 & \cdots & c_{p-2} & c_{p-1} & c_p \end{array}\right]\kern-1.2pt .\end{align*} $$

Finally, $T_q^{\textsf {T}}JT_p$ is the transpose of $T_p^{\textsf {T}}JT_q$ . Hence, $b_1$ and $d_1$ must be non-zero and we have $\text {Rank}(T_p^{\textsf {T}}JT_p)=p$ and $\text {Rank}(T_q^{\textsf {T}}JT_q)=q$ .

The aim of this section is to find a relationship between $\langle \;\; , \;\; \rangle_J$ and $\langle \;\; , \;\; \rangle_K$ on bases $\mathcal{E}(A) \in \mathcal{B}as(\mathcal{K}(A))$ and $\mathcal{E}(B) \in \mathcal{B}as(\mathcal{K}(B))$ when $(A, J)$ and $(B, K)$ are $D_{\infty}$-half elementary equivalent. The following lemma will provide us with good bases to handle.

Lemma 3.3. Suppose that A has the zero eigenvalue. There is a basis $\mathcal {E}(A) \in \mathcal {B}as(\mathcal {K}(A))$ having the following properties.

  1. (1) If $\alpha $ is a cycle in $\mathcal {E}(A)$ , then the restriction of $\langle \;\;\;, \;\;\; \rangle _J$ to $\mathrm {span}(\alpha )$ is non-degenerate, that is,

    $$ \begin{align*}\langle \mathrm{Ini}(\alpha), \mathrm{Ter}(\alpha) \rangle_J\! \neq 0.\end{align*} $$
  2. (2) Suppose that $\alpha$ is a cycle in $\mathcal{E}(A)$ with $\text{Ter}(\alpha)=u$ and $|\alpha|=p$. For each $k = 0, 1, \ldots, p-1$, $v=A^{p-1-k}u$ is the unique vector in $\alpha$ such that $\langle A^ku, v \rangle_J \neq 0$.

  3. (3) If $\alpha $ and $\beta $ are distinct cycles in $\mathcal {E}(A)$ , then

    $$ \begin{align*}\mathrm{span}(\alpha) \perp_J \mathrm{span}(\beta).\end{align*} $$

Proof. (1) Lemma 3.2 proves the case where $\mathcal {E}(A)$ has only one cycle. Suppose that $\mathcal {E}(A)$ is the union of disjoint cycles $\alpha _1, \ldots , \alpha _r$ of generalized eigenvectors of A corresponding to $0$ for some $r>1$ and that $|\alpha _1| \leq |\alpha _2| \leq \cdots \leq |\alpha _r|$ . Assuming

$$ \begin{align*}\langle \text{Ini}(\alpha_j), \text{Ter}(\alpha_j) \rangle_J\! \neq 0 \quad (j=1, \ldots, r-1),\end{align*} $$

we will construct a cycle $\beta $ of generalized eigenvectors of A corresponding to $0$ such that the union of the cycles $\alpha _1, \ldots , \alpha _{r-1}, \beta $ forms a basis for $\mathcal {K}(A)$ and that $\langle \text {Ini}(\beta ), \text {Ter}(\beta ) \rangle _J \neq 0.$

By Lemma 3.2, we have

$$ \begin{align*}|\alpha_1| \leq |\alpha_2| \leq \cdots \leq |\alpha_{r-1}| < |\alpha_r| \Rightarrow \langle \text{Ini}(\alpha_r), \text{Ter}(\alpha_r) \rangle_J \neq 0.\end{align*} $$

Thus, we only consider the case where there are other cycles in $\mathcal {E}(A)$ whose length is the same as $|\alpha _r|$ . If $\alpha _r = \{w_1, \ldots , w_q\}$ and $\langle w_1, w_q \rangle _J \neq 0$ , there is nothing to do. So we assume $\langle w_1, w_q \rangle _J = 0$ . By non-degeneracy of $\langle \;\;\; , \;\;\; \rangle _J$ and Lemma 3.1, there is a vector $v \in \mathcal {E}(A)$ such that $\langle w_1, v \rangle _J \neq 0$ . Since $\langle w_1, v \rangle _J = \langle w_q, A^{q-1}v \rangle _J$ , it follows that v must be the terminal vector of a cycle in $\mathcal {E}(A)$ of length q by the maximality of q. We put $v_1 = A^{q-1}v$ and $v_q=v$ and find a number $k \in \mathbb {R}\setminus \{0\}$ such that $\langle {w}_1-kv_1, w_q-kv_q \rangle _J \neq 0$ . We denote the cycle whose terminal vector is $w_q-kv_q$ by $\beta $ . It is obvious that the length of $\beta $ is q and that the union of the cycles $\alpha _1, \ldots , \alpha _{r-1}, \beta $ forms a basis of $\mathcal {K}(A)$ .

(2) We assume that $\mathcal {E}(A)$ has property (1) and that $\alpha =\{u_1, \ldots , u_p\}$ is a cycle in $\mathcal {E}(A)$ . The proof of Lemma 3.2(1) says that if $T_{\alpha }$ is the $m \times p$ matrix whose ith column is $u_i$ , then $T_{\alpha }^{\textsf {T}}JT_{\alpha }$ is of the form

$$ \begin{align*}T_{\alpha}^{\textsf{T}}JT_{\alpha} = \left[\begin{array}{ccccccc} 0 & 0 & 0 & \cdots & 0 & 0 & b_1 \\ 0 & 0 & 0 & \cdots & 0 & b_1 & b_2 \\ 0 & 0 & 0 & \cdots & b_1 & b_2 & b_3 \\ \vdots & \vdots & \vdots & & \vdots & \vdots & \vdots \\ b_1 & b_2 & b_3 & \cdots & b_{p-2} & b_{p-1} & b_p \end{array}\right]\kern-1.5pt .\end{align*} $$

We note that $b_1$ must be non-zero. Now, there are unique real numbers $k_1, \ldots , k_p$ such that if we set

$$ \begin{align*}K = \left[\begin{array}{cccc} k_p & k_{p-1} & \cdots & k_1 \\ 0 & k_p & \cdots &k_2 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & k_p \end{array}\right]\kern-1.5pt ,\end{align*} $$

then $K^{\textsf {T}} T_{\alpha }^{\textsf {T}}JT_{\alpha } K$ becomes

$$ \begin{align*}K^{\textsf{T}} T_{\alpha}^{\textsf{T}}JT_{\alpha} K = \left[\begin{array}{ccccc} 0 & 0 & \cdots & 0 & b_1 \\ 0 & 0 & \cdots & b_1 & 0 \\ \vdots & \vdots & & \vdots & \vdots \\ 0 & b_1 & \cdots & 0 & 0 \\ b_1 & 0 & \cdots & 0 & 0\end{array}\right]\kern-1.5pt .\end{align*} $$

If $\alpha '$ is a cycle in $\mathcal {K}(A)$ whose terminal vector is $w = \sum _{i=1}^p \, k_i u_i$ , then we have $|\alpha '|=p$ and

$$ \begin{align*}\langle A^{i}w, A^{j}w \rangle_J = \begin{cases} b_1 \quad \text{if } j=p-1-i, \\ 0 \; \quad \text{otherwise}, \end{cases}\end{align*} $$

for each $ 0 \leq i, j \leq p-1$ . If we replace $\alpha $ with $\alpha '$ for each $\alpha $ in $\mathcal {E}(A)$ , then the result follows.

(3) Suppose that $\mathcal {E}(A)$ has properties (1) and (2) and that $\mathcal {E}(A)$ is the union of disjoint cycles $\alpha _1, \ldots , \alpha _r$ of generalized eigenvectors of A corresponding to $0$ for some $r>1$ with $|\alpha _1| \leq |\alpha _2| \leq \cdots \leq |\alpha _r|$ . Assuming that

$$ \begin{align*}\text{span}(\alpha_i) \perp_J \text{span}(\alpha_j) \quad (i, j =1, \ldots, r-1; i \neq j),\end{align*} $$

we will construct a cycle $\beta $ such that the union of the cycles $\alpha _1, \ldots , \alpha _{r-1}, \beta $ forms a basis for $\mathcal {K}(A)$ and that $\alpha _i$ is orthogonal to $\beta $ with respect to J for each $i=1, \ldots , r-1$ .

Suppose that $\alpha = \{u_1, \ldots , u_p\}$ is a cycle in $\mathcal {E}(A)$ which is distinct from $\alpha _r = \{w_1, \ldots , w_q\}$ . We set

$$ \begin{align*}\langle u_1, u_p\rangle_J = a \, (\neq 0), \quad \langle u_i, w_{q} \rangle_J = b_i \quad (i=1, \ldots, p)\end{align*} $$

and

$$ \begin{align*}z = w_q - \frac{b_1}{a} u_{p} - \frac{b_2}{a} u_{p-1} - \cdots - \frac{b_p}{a}{u_1}.\end{align*} $$

Let $\beta $ denote the cycle whose terminal vector is z.

We first show that $u_1 \perp _J \text {span}(\beta )$ . Direct computation yields

(3.4) $$ \begin{align} \langle u_1, z \rangle_J = 0. \end{align} $$

Since $Au_1 = 0$ , it follows that

$$ \begin{align*}\langle u_1, A^jz \rangle_J = 0 \quad (j=1, \ldots, q-1)\end{align*} $$

by equation (3.1). Thus, $\langle u_1, A^jz \rangle _J = 0$ for all $j=0, \ldots , q-1$ .

Now, we show that $u_2 \perp _J \text {span}(\beta )$ . Direct computation yields

$$ \begin{align*}\langle u_2, z \rangle_J = 0.\end{align*} $$

From $A^2u_2 = 0$ , it follows that

$$ \begin{align*}\langle u_2, A^jz \rangle_J = 0 \quad (j=2, \ldots, q-1).\end{align*} $$

It remains to show that $\langle u_2, Az \rangle _J=0$ , but this is an immediate consequence of equations (3.1) and (3.4).

Applying this process to each $u_i$ inductively, the result follows.

Corollary 3.4. There is a basis $\mathcal {E}(A) \in \mathcal {B}as(\mathcal {K}(A))$ such that if u is the terminal vector of a cycle $\alpha $ in $\mathcal {E}(A)$ with $|\alpha |=p$ , then $v=A^{p-1-k}u$ is the unique vector in $\mathcal {E}(A)$ satisfying

$$ \begin{align*}\langle A^ku, v \rangle_J \neq 0\end{align*} $$

for each $k = 0, 1, \ldots, p-1$.

In the rest of the section, we investigate a relationship between $\langle \;\; , \;\; \rangle _J$ and $\langle \;\; , \;\; \rangle _K$ on bases $\mathcal {E}(A) \in \mathcal {B}as(\mathcal {K}(A))$ and $\mathcal {E}(B) \in \mathcal {B}as(\mathcal {K}(B))$ when there is a $D_{\infty }$ -HEE between two flip pairs $(A, J)$ and $(B, K)$ . Throughout the section, we assume $(A, J)$ and $(B, K)$ are flip pairs with $|\mathcal {B}_1(\textsf {X}_A)|=m$ and $|\mathcal {B}_1(\textsf {X}_B)|=n$ and $(D, E)$ is a $D_{\infty }$ -HEE from $(A, J)$ to $(B, K)$ .

We note that $E=KD^{\textsf{T}}J$ implies $E^{\textsf{T}}K = (KD^{\textsf{T}}J)^{\textsf{T}}K = JDK^{2} = JD$ and hence

$$ \begin{align*}\langle u, Dv \rangle_J = \langle Eu, v \rangle_K \quad (u \in \mathbb{R}^m, v \in \mathbb{R}^n). \end{align*} $$

From this, we see that $\text {Ker}(E)$ and $\text {Ran}(D)$ are mutually orthogonal with respect to J and that $\text {Ker}(D)$ and $\text {Ran}(E)$ are mutually orthogonal with respect to K, that is,

(3.5) $$ \begin{align} \text{Ker}(E) \perp_J \text{Ran}(D) \quad \text{and} \quad \text{Ker}(D) \perp_K \text{Ran}(E). \end{align} $$

Lemma 3.5. There exist bases $\mathcal {E}(A) \in \mathcal {B}as(\mathcal {K}(A))$ and $\mathcal {E}(B) \in \mathcal {B}as(\mathcal {K}(B))$ having the following properties.

  1. (1) Suppose that $\alpha $ is a cycle in $\mathcal {E}(A)$ with $|\alpha |=p$ and $u =\mathrm {Ter}(\alpha )$ . Then we have

    (3.6) $$ \begin{align} u \in \mathrm{Ran}(D) \Leftrightarrow A^{p-1}u \notin \mathrm{Ker}(E). \end{align} $$
  2. (2) Suppose that $\beta $ is a cycle in $\mathcal {E}(B)$ with $|\beta |=p$ and $v =\mathrm {Ter}(\beta )$ . Then we have

    $$ \begin{align*}v \in \mathrm{Ran}(E) \Leftrightarrow B^{p-1}v \notin \mathrm{Ker}(D).\end{align*} $$

Proof. We only prove equation (3.6). Suppose that $\mathcal {E}(A) \in \mathcal {B}as(\mathcal {K}(A))$ has properties (1), (2) and (3) from Lemma 3.3. Since $\langle A^{p-1}u, u \rangle _J \neq 0$ , it follows that

$$ \begin{align*}u \in \text{Ran}(D) \Rightarrow A^{p-1}u \notin \text{Ker}(E)\end{align*} $$

from equation (3.5).

Suppose that $u \notin \text{Ran}(D)$. To derive a contradiction, we assume that $A^{p-1}u \notin \text{Ker}(E)$. By non-degeneracy of $\langle \;\; , \;\; \rangle_K$, there is a $v \in \mathcal{K}(B)$ such that $\langle EA^{p-1}u, v \rangle_K \neq 0$, or equivalently, $\langle A^{p-1}u, Dv \rangle_J \neq 0$. This is a contradiction because $\langle A^{p-1}u, u \rangle_J \neq 0$ and $\langle A^{p-1}u, w \rangle_J = 0$ for all $w \in \mathcal{E}(A) \setminus \{u\}$.

Now we are ready to prove Proposition B. We first indicate some notation. When $p \in \mathcal {I}nd(\mathcal {K}(A))$ , let $\mathcal {E}_p(A; \partial _{D, E}^-)$ denote the union of cycles $\alpha $ in $\mathcal {E}_p(A)$ such that $\text {Ter}(\alpha ) \notin \text {Ran}(D)$ and let $\mathcal {E}_p(A; \partial _{D, E}^+)$ denote the union of cycles $\alpha $ in $\mathcal {E}_p(A)$ such that $\text {Ter}(\alpha ) \in \text {Ran}(D)$ . With this notation, Proposition B can be rewritten as follows.

Proposition B. If $(D, E)$ is a $D_{\infty}$-HEE from $(A, J)$ to $(B, K)$, then there exist bases $\mathcal{E}(A) \in \mathcal{B}as(\mathcal{K}(A))$ and $\mathcal{E}(B) \in \mathcal{B}as(\mathcal{K}(B))$ having the following properties.

  1. (1) Suppose that $p \in \mathcal {I}nd(\mathcal {K}(A))$ and $\alpha $ is a cycle in $\mathcal {E}_p(A; \partial _{D, E}^+)$ with $\text {Ter}(\alpha )=u$ . There is a cycle $\beta $ in $\mathcal {E}_{p+1}(B; \partial _{E, D}^-)$ such that if $\text {Ter}(\beta ) = v$ , then $Dv=u$ . In this case, we have

    (3.7) $$ \begin{align} \langle A^{p-1}u, u \rangle_J = \langle B^{p}v, v \rangle_K. \end{align} $$
  2. (2) Suppose that $p \in \mathcal {I}nd(\mathcal {K}(A))$ , $p>1$ and $\alpha $ is a cycle in $\mathcal {E}_p(A; \partial _{D, E}^-)$ with $\text {Ter}(\alpha ) =~u$ . There is a cycle $\beta $ in $\mathcal {E}_{p-1}(B; \partial _{E, D}^+)$ such that if $\text {Ter}(\beta ) = v$ , then $v=Eu$ . In this case, we have

    (3.8) $$ \begin{align} \langle A^{p-1}u, u \rangle_J =\langle B^{p-2}v, v \rangle_K. \end{align} $$

Proof. If we define zero-one matrices M and F by

$$ \begin{align*}M =\left[\begin{array}{cc} 0 & D \\ E & 0 \end{array}\right] \quad \text{and} \quad F = \left[\begin{array}{cc} J & 0 \\ 0 & K \end{array}\right]\!,\end{align*} $$

then $(M, F)$ is a flip pair. Suppose that $\mathcal {E}(A) \in \mathcal {B}as(\mathcal {K}(A))$ and $\mathcal {E}(B) \in \mathcal {B}as(\mathcal {K}(B))$ have properties (1), (2) and (3) from Lemma 3.3. If we set

$$ \begin{align*}\mathcal{E}(A) \oplus 0^n = \left\{\left[\begin{array}{c} u \\ 0 \end{array}\right] : u \in \mathcal{E}(A) \text{ and } 0 \in \mathbb{R}^n\right\}\end{align*} $$

and

$$ \begin{align*}0^m \oplus \mathcal{E}(B) = \left\{\left[\begin{array}{c} 0 \\ v \end{array}\right] : v \in \mathcal{E}(B) \text{ and } 0 \in \mathbb{R}^m\right\},\end{align*} $$

then the elements in $\mathcal {E}(A) \oplus 0^n$ or $0^m \oplus \mathcal {E}(B)$ belong to $\mathcal {K}(M)$ . Conversely, every vector in $\mathcal {K}(M)$ can be expressed as a linear combination of vectors in $\mathcal {E}(A) \oplus 0^n$ and $0^m \oplus \mathcal {E}(B)$ . Thus, the set $\mathcal {E}(M) = \{\mathcal {E}(A) \oplus 0^n\} \cup \{0^m \oplus \mathcal {E}(B)\}$ becomes a basis for $\mathcal {K}(M)$ .

If $\alpha $ is a cycle in $\mathcal {E}(M)$ , then $|\alpha |$ is an odd number by Lemma 3.5. If $|\alpha |=2p-1$ for some positive integer p, then $\alpha $ is one of the following forms:

$$ \begin{align*}\left\{\left[\begin{array}{c} A^{p-1}u \\ 0 \end{array}\right], \left[\begin{array}{c} 0 \\ B^{p-2}Eu \end{array}\right], \left[\begin{array}{c} A^{p-2}u \\ 0 \end{array}\right], \ldots, \left[\begin{array}{c} Au \\ 0 \end{array}\right], \left[\begin{array}{c} 0 \\ Eu \end{array}\right], \left[\begin{array}{c} u \\ 0 \end{array}\right]\right\}\end{align*} $$

or

$$ \begin{align*}\left\{\left[\begin{array}{c} 0 \\ B^{p-1}v \end{array}\right], \left[\begin{array}{c} A^{p-2}Dv \\ 0 \end{array}\right], \left[\begin{array}{c} 0 \\ B^{p-2}v \end{array}\right], \ldots, \left[\begin{array}{c} 0 \\ Bv \end{array}\right], \left[\begin{array}{c} Dv \\ 0 \end{array}\right], \left[\begin{array}{c} 0 \\ v \end{array}\right]\right\}\kern-1.5pt .\end{align*} $$

The formulae (3.7) and (3.8) follow from equation (3.3).

Suppose that $\mathcal {E}(A) \in \mathcal {B}as(\mathcal {K}(A))$ has property (1) from Lemma 3.3. If $\alpha $ is a cycle in $\mathcal {E}(A)$ , we define the sign of $\alpha $ by

$$ \begin{align*}\text{sgn}(\alpha) = \begin{cases} +1 \quad \text{if } \langle \text{Ini}(\alpha), \text{Ter}(\alpha) \rangle_J\!>0, \\ -1 \quad \text{if } \langle \text{Ini}(\alpha), \text{Ter}(\alpha) \rangle_J\! <0. \end{cases}\end{align*} $$

We define the sign of $\mathcal {E}_p(A)$ for each $p \in \mathcal {I}nd(\mathcal {K}(A))$ by

$$ \begin{align*}\text{sgn}(\mathcal{E}_p(A)) = \prod_{\{\alpha : \alpha \text{ is a cycle in } \mathcal{E}_p(A)\}} \text{sgn}(\alpha).\end{align*} $$

When $(D, E)$ is a $D_{\infty}$-HEE from $(A, J)$ to $(B, K)$, we define the signs of $\mathcal{E}_p(A; \partial_{D, E}^+)$ and $\mathcal{E}_p(A; \partial_{D, E}^-)$ for each $p \in \mathcal{I}nd(\mathcal{K}(A))$ in similar ways.

Proposition B says that if $(D, E)$ is a $D_{\infty}$-HEE from $(A, J)$ to $(B, K)$, then there exist bases $\mathcal{E}(A) \in \mathcal{B}as(\mathcal{K}(A))$ and $\mathcal{E}(B) \in \mathcal{B}as(\mathcal{K}(B))$ such that

$$ \begin{align*}\text{sgn}(\mathcal{E}_p(A; \partial_{D, E}^+)) = \text{sgn}(\mathcal{E}_{p+1}(B; \partial_{E, D}^-)) \quad (p \in \mathcal{I}nd(\mathcal{K}(A))),\end{align*} $$

and

$$ \begin{align*}\text{sgn}(\mathcal{E}_p(A; \partial_{D, E}^-)) = \text{sgn}(\mathcal{E}_{p-1}(B; \partial_{E, D}^+)) \quad (p \in \mathcal{I}nd(\mathcal{K}(A)); p>1).\end{align*} $$

In Proposition 3.6 below, we will see that the sign of $\mathcal {E}_1(A; \partial _{D, E}^-)$ is always $+1$ if $\mathcal {E}_1(A; \partial _{D, E}^-)$ is non-empty. We first prove Proposition C.

Proof of Proposition C

Suppose that $\mathcal{E}(A) \in \mathcal{B}as(\mathcal{K}(A))$ has properties (1), (2) and (3) from Lemma 3.3 and that $p \in \mathcal{I}nd(\mathcal{K}(A))$. We denote the terminal vectors of the cycles in $\mathcal{E}_p(A)$ by $u_{(1)}, \ldots, u_{(q)}$. Suppose that P is the $m \times q$ matrix whose ith column is $u_{(i)}$ for each $i=1, \ldots, q$. If we set $M=(A^{p-1}P)^{\textsf{T}} J P$, then the entries of M are given by

$$ \begin{align*}M(i, j) = \begin{cases} \langle A^{p-1}u_{(i)}, u_{(i)} \rangle_J & \text{if } i=j, \\ 0 & \text{otherwise}, \end{cases}\end{align*} $$

and the sign of $\mathcal {E}_p(A)$ is determined by the product of the diagonal entries of M, that is,

$$ \begin{align*}\text{sgn}(\mathcal{E}_p(A)) = \begin{cases} +1 \quad \text{if } \prod_{i=1}^q M(i, i)>0, \\ -1 \quad \text{if } \prod_{i=1}^q M(i, i)<0.\end{cases}\end{align*} $$

Suppose that $\mathcal {E}'(A) \in \mathcal {B}as(\mathcal {K}(A))$ is another basis having property (1) from Lemma 3.3. Then obviously $\mathcal {E}^{\prime }_p(A)$ is the union of q disjoint cycles. If w is the terminal vector of a cycle in $\mathcal {E}^{\prime }_p(A)$ , then w can be expressed as a linear combination of vectors in $\mathcal {E}(A) \cap \text {Ker}(A^p)$ , that is,

$$ \begin{align*}w = \sum_{u \in \mathcal{E}(A) \cap \text{Ker}(A^p)} c_u u \quad (c_u \in \mathbb{R}).\end{align*} $$

If $u \in \mathcal {E}_k(A)$ for $k < p$ , then $A^{p-1} u = 0$ . If $u \in \mathcal {E}_k(A)$ for $k>p$ or $u \in \mathcal {E}_p(A)$ and u is not a terminal vector, then $\langle A^{p-1}u, u \rangle _J = 0$ by property (2) from Lemma 3.3. This means that the sign of $\mathcal {E}^{\prime }_p(A)$ is not affected by vectors $u \in \mathcal {E}_k(A)$ for $k \neq p$ or $u \in \mathcal {E}_p(A) \setminus \text {Ter}(\mathcal {E}_p(A))$ . In other words, if we write

$$ \begin{align*}w = \sum_{i=1}^q \, c_i u_{(i)} + \sum_{u \in \mathcal{E}(A) \cap \text{Ker}(A^p) \setminus \text{Ter}(\mathcal{E}_p(A))} c_u u \quad (c_i, c_u \in \mathbb{R}),\end{align*} $$

then we have

$$ \begin{align*}\langle A^{p-1}w, w \rangle_J = \langle A^{p-1} \sum_{i=1}^q \, c_i u_{(i)} , \sum_{i=1}^q \, c_i u_{(i)} \rangle_J.\end{align*} $$

To compute the sign of $\mathcal {E}^{\prime }_p(A)$ , we may assume that

$$ \begin{align*}w = \sum_{i=1}^q \, c_i u_{(i)} \quad(c_1, \ldots, c_q \in \mathbb{R}).\end{align*} $$

We denote the terminal vectors of the cycles in $\mathcal {E}'(A)$ by $w_{(1)}, \ldots , w_{(q)}$ and let Q be the $m \times q$ matrix whose ith column is $w_{(i)}$ for each $i=1, \ldots , q$ . If we set $N=(A^{p-1}Q)^{\textsf {T}} J Q$ , then $\prod _{i=1}^q N(i, i) \neq 0$ since $\mathcal {E}'(A)$ has property (1) from Lemma 3.3. So we have

$$ \begin{align*}\text{sgn}(\mathcal{E}_p'(A)) = \begin{cases} +1 \quad \text{if } \prod_{i=1}^q N(i, i)>0, \\ -1 \quad \text{if } \prod_{i=1}^q N(i, i)<0.\end{cases}\end{align*} $$

It is obvious that there is a non-singular matrix R such that $PR=Q$ . Since $N=R^{\textsf {T}}MR$ and M is a diagonal matrix, it follows that

$$ \begin{align*}\prod_{i=1}^q \, M(i, i)> 0 \Leftrightarrow \prod_{i=1}^q \, N(i, i)> 0\end{align*} $$

and

$$ \begin{align*}\prod_{i=1}^q \, M(i, i) < 0 \Leftrightarrow \prod_{i=1}^q \, N(i, i) < 0.\end{align*} $$

Proposition 3.6. Suppose that $(D, E)$ is a $D_{\infty}$-half elementary equivalence from $(A, J)$ to $(B, K)$ and that $\mathcal{I}nd(\mathcal{K}(A))$ contains $1$. There is a basis $\mathcal{E}(A) \in \mathcal{B}as(\mathcal{K}(A))$ such that if $\alpha$ is a cycle in $\mathcal{E}_1(A; \partial_{D, E}^-)$, then $\mathrm{sgn}(\alpha)=+1$. Hence, we have

$$ \begin{align*}\mathrm{sgn}(\mathcal{E}_1(A; \partial_{D, E}^-))=+1\end{align*} $$

if $\mathcal {E}_1(A; \partial _{D, E}^-)$ is non-empty.

Proof. Suppose that $\mathcal {U}$ is a basis for the subspace $\text {Ker}(A)$ of $\mathcal {K}(A)$ . We may assume that for each $u \in \mathcal {U}$ ,

(3.9) $$ \begin{align} a_1, a_2 \in \mathcal{B}_1(\textsf{X}_A), u(a_1) \neq 0 \text{ and } \mathcal{P}_E(a_1) \cap \mathcal{P}_E(a_2) = \varnothing \Rightarrow u(a_2)=0 \end{align} $$

for the following reason. If $u(a_2) \neq 0$ , then we define $u_1$ and $u_2$ by

$$ \begin{align*}u_1(a) = \begin{cases} u(a) & \text{if } \mathcal{P}_E(a_1) \cap \mathcal{P}_E(a) \neq \varnothing, \\ 0 & \text{otherwise}, \end{cases}\end{align*} $$

and

$$ \begin{align*}u_2(a) = \begin{cases} u(a) & \text{if } \mathcal{P}_E(a_2) \cap \mathcal{P}_E(a) \neq \varnothing, \\ 0 & \text{otherwise}. \end{cases}\end{align*} $$

It is obvious that $\{u_1, u_2\}$ is linearly independent. We set $u_3 = u - u_1 -u_2$ . If $u_3 \neq 0$ , then obviously $\{u_1, u_2, u_3\}$ is also linearly independent. We set

$$ \begin{align*}\mathcal{U}' =\mathcal{U} \cup \{u_1, u_2, u_3\} \setminus\{u\}.\end{align*} $$

If necessary, we apply the same process to $u_3$ and to each $u \in \mathcal {U}$ so that every element in $\mathcal {U}'$ satisfies equation (3.9) and then we remove some elements in $\mathcal {U}'$ so that it becomes a basis for $\text {Ker}(A)$ .

We first show the following:

$$ \begin{align*}u \in \mathcal{U} \Rightarrow u(\tau_J(a))u(a) \geq 0 \quad \text{ for all } a \in \mathcal{B}_1(\textsf{X}_A). \end{align*} $$

Suppose that $u \in \mathcal {U}$ , $a_0 \in \mathcal {B}_1(\textsf {X}_A)$ and that $u(a_0) \neq 0$ . If $a_0 = \tau _J(a_0)$ , then $u(\tau _J(a_0)) u(a_0)> 0$ and we are done. When $a_0 \neq \tau _J(a_0)$ and $u(\tau _J(a_0)) = 0$ , there is nothing to do. So we assume $a_0 \neq \tau _J(a_0)$ and $u(\tau _J(a_0)) \neq 0$ . If there were $b \in \mathcal {P}_E(a_0) \cap \mathcal {P}_E(\tau _J(a_0))$ , then we would have

$$ \begin{align*}1 \geq B(b, \tau_K(b)) \geq E(b, a_0) D(a_0, \tau_K(b)) + E(b, \tau_J(a_0))D(\tau_J(a_0), \tau_K(b))=2\end{align*} $$

from $E=KD^{\textsf {T}}J$ . Thus, we have $\mathcal {P}_E(a_0) \cap \mathcal {P}_E(\tau _J(a_0)) = \varnothing $ and this implies $u(\tau _J (a_0))=0$ by assumption (3.9).

Now, we denote the intersection of $\mathcal {U}$ and $\mathcal {E}_1(A; \partial _{D, E}^-)$ by $\mathcal {V}$ and assume that the elements of $\mathcal {V}$ are $u_1$ , $\ldots $ , $u_k$ , that is,

$$ \begin{align*}\mathcal{V} = \mathcal{U} \cap \mathcal{E}_1(A; \partial_{D, E}^-) = \{u_1, \ldots, u_k \}.\end{align*} $$

By Lemma 3.2 and equation (3.5), for each $u \in \mathcal {V}$ , there is a $v \in \mathcal {V}$ such that $\langle u, v \rangle _J \neq 0$ . If $\langle u_1, u_1 \rangle _J = 0$ , we choose $u_i \in \mathcal {V}$ such that $\langle u_1, u_i \rangle _J \neq 0$ . There are real numbers $k_1, k_2$ such that $\{u_1+k_1u_i, u_1+k_2u_i\}$ is linearly independent and that both $\langle u_1+k_1u_i, u_1+k_1u_i\rangle _J$ and $\langle u_1+k_2u_i, u_1+k_2u_i \rangle _J$ are positive. We replace $u_1$ and $u_i$ with $u_1+k_1u_i$ and $u_1+k_2u_i$ . Continuing this process, we can construct a new basis for $\mathcal {E}_1(A; \partial _{D, E}^-)$ such that if $\alpha $ is a cycle in $\mathcal {E}_1(A; \partial _{D, E}^-)$ , then $\text {sgn}(\alpha )=+1$ .

Suppose that $\mathcal{E}(A) \in \mathcal{B}as(\mathcal{K}(A))$ has property (1) from Lemma 3.3. We write $k = |\mathcal{I}nd(\mathcal{K}(A))|$ and arrange the elements of $\mathcal{I}nd(\mathcal{K}(A)) = \{p_1, p_2, \ldots , p_k\}$ to satisfy

$$ \begin{align*}p_1 < p_2 < \cdots < p_k\end{align*} $$

and write

$$ \begin{align*}\varepsilon_{p}=\text{sgn}(\mathcal{E}_p(A)).\end{align*} $$

The k-tuple $(\varepsilon_{p_1}, \varepsilon_{p_2}, \ldots , \varepsilon_{p_k})$ is called the flip signature of $(A, J)$ and $\varepsilon_{p_k}$ is called the leading signature of $(A, J)$. The flip signature of $(A, J)$ is denoted by

$$ \begin{align*}\mathrm{F.Sig}(A, J) = (\varepsilon_{p_1}, \varepsilon_{p_2}, \ldots, \varepsilon_{p_k}).\end{align*} $$

When the eventual kernel $\mathcal {K}(A)$ of A is trivial, we write

$$ \begin{align*}\mathcal{I}nd(\mathcal{K}(A)) = \{0\}\end{align*} $$

and define the flip signature of $(A, J)$ by

$$ \begin{align*}\mathrm{F.Sig}(A, J) = (+1).\end{align*} $$

We have seen that both the flip signature and the leading signature are independent of the choice of basis $\mathcal{E}(A) \in \mathcal{B}as(\mathcal{K}(A))$ as long as $\mathcal{E}(A)$ has property (1) from Lemma 3.3.
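
For readers who want to experiment with these invariants, the index set can be read off from the ranks of powers of A. The sketch below is ours; it assumes that $\mathcal{I}nd(\mathcal{K}(A))$ collects the lengths of the cycles in a basis of the eventual kernel, equivalently the sizes of the nilpotent Jordan blocks of A, and it uses the convention $\mathcal{I}nd(\mathcal{K}(A)) = \{0\}$ when the eventual kernel is trivial. The function name `index_set` is ad hoc.

```python
from sympy import Matrix

def index_set(A):
    """Sizes of the nilpotent Jordan blocks of a square integer matrix A.

    The number of blocks of size exactly p equals
    rank(A^(p-1)) - 2*rank(A^p) + rank(A^(p+1));
    exact arithmetic via sympy avoids floating-point rank issues.
    """
    A = Matrix(A)
    n = A.shape[0]
    ranks = [(A**p).rank() for p in range(n + 2)]   # ranks[p] = rank(A^p)
    sizes = [p for p in range(1, n + 1)
             if ranks[p - 1] - 2 * ranks[p] + ranks[p + 1] > 0]
    return sizes or [0]        # [0] when the eventual kernel is trivial

# The transition matrix of the full two-shift has a single nilpotent block
# of size 1, so its index set is [1].
print(index_set([[1, 1], [1, 1]]))
```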

In the next section, we prove Proposition A and in §5, we prove Theorem D.

4 Proof of Proposition A

We start with the notion of $D_{\infty}$-higher block codes. (See [5, 8] for more details about higher block codes.) We need some notation. Suppose that $(X, \sigma_X)$ is a shift space over a finite set $\mathcal{A}$ and that $\varphi_{\tau}$ is a one-block flip for $(X, \sigma_X)$ defined by

$$ \begin{align*}\varphi_{\tau}(x)_i = \tau(x_{-i}) \quad (x \in X; i \in \mathbb{Z}).\end{align*} $$

For each positive integer n, we define the n-initial map $i_n : \bigcup _{k=n}^{\infty } \mathcal {B}_k(X) \rightarrow \mathcal {B}_n(X)$ , the n-terminal map $t_n : \bigcup _{k=n}^{\infty } \mathcal {B}_k(X) \rightarrow \mathcal {B}_n(X)$ and the mirror map $\mathcal {M}_n : \mathcal {A}^n \rightarrow \mathcal {A}^n$ by

$$ \begin{align*}i_n(a_1 a_2 \ldots a_m) = a_1 a_2 \ldots a_n \quad (a_1 \ldots a_m \in \mathcal{B}_m(X); \; m \geq n),\end{align*} $$
$$ \begin{align*}t_n(a_1 a_2 \ldots a_m) = a_{m-n+1} a_{m-n+2} \ldots a_m \quad (a_1 \ldots a_m \in \mathcal{B}_m(X); \; m \geq n)\end{align*} $$

and

$$ \begin{align*}\mathcal{M}_n(a_1 a_2 \ldots a_n) = a_n \ldots a_1 \quad (a_1 \ldots a_n \in \mathcal{A}^n).\end{align*} $$

For each positive integer n, we denote the map

$$ \begin{align*}a_1 a_2 \ldots a_n \mapsto \tau(a_1) \tau(a_2) \ldots \tau(a_n) \quad (a_1 \ldots a_n \in \mathcal{A}^n)\end{align*} $$

by $\tau _n : \mathcal {A}^n \rightarrow \mathcal {A}^n$ . It is obvious that the restriction of the map $\mathcal {M}_n \circ \tau _n$ to $\mathcal {B}_n(X)$ is a permutation of order $2$ .

For each positive integer n, we define the nth higher block code $h_n: X \rightarrow \mathcal {B}_n(X)^{\mathbb {Z}}$ by

$$ \begin{align*}h_n(x)_ i = x_{[i, i+n-1]} \quad (x\in X; i \in \mathbb{Z}).\end{align*} $$

We denote the image of $(X, \sigma _X)$ under $h_n$ by $(X_n, \sigma _n)$ and call $(X_n, \sigma _n)$ the nth higher block shift of $(X, \sigma _X)$ . If we write $\upsilon = \mathcal {M}_n \circ \tau _n$ , then the map $\varphi _{\upsilon }: X_n \rightarrow X_n$ defined by

$$ \begin{align*}\varphi_{\upsilon}(x)_i = \upsilon(x_{-i}) \quad (x \in X_n; i \in \mathbb{Z})\end{align*} $$

becomes a natural one-block flip for $(X_n, \sigma _n)$ . It is obvious that the nth higher block code $h_n$ is a $D_{\infty }$ -conjugacy from $(X, \sigma _X, \varphi _{\tau })$ to $(X_n, \sigma _n, (\sigma _n)^{n-1} \circ \varphi _{\upsilon })$ . We call the $D_{\infty }$ -system $(X_n, \sigma _n, \varphi _{\upsilon })$ the nth higher block $D_{\infty }$ -system of $(X, \sigma _X, \varphi _{\tau })$ .

For notational simplicity, we drop the subscript n and write $\tau =\tau _n$ and $\mathcal {M}=\mathcal {M}_n$ if the domains of $\tau _n$ and $\mathcal {M}_n$ are clear in the context.

Suppose that $(A, J)$ is a flip pair. Then the flip pair $(A_n, J_n)$ for the nth higher block $D_{\infty }$ -system $(\textsf {X}_n, \sigma _n, \varphi _n)$ of $(\textsf {X}_A, \sigma _A, \varphi _{A, J})$ consists of $\mathcal {B}_n(\textsf {X}_A) \times \mathcal {B}_n(\textsf {X}_A)$ zero-one matrices $A_n$ and $J_n$ defined by

$$ \begin{align*}A_n(u, v) = \begin{cases} 1 \quad \text{if } t_{n-1}(u)=i_{n-1}(v),\\0 \quad \text{otherwise},\end{cases} (u, v \in \mathcal{B}_n(\textsf{X}_A))\end{align*} $$

and

$$ \begin{align*}J_n(u, v) = \begin{cases} 1 \quad \text{if } v = (\mathcal{M} \circ \tau_J)(u),\\0 \quad \text{otherwise},\end{cases} (u, v \in \mathcal{B}_n(\textsf{X}_A)).\end{align*} $$
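
As a concrete illustration of these definitions, here is a small sketch (ours) of the case $n=2$. It assumes that J is a permutation matrix with $J^2=I$, so that $\tau_J(a)$ is the unique symbol with $J(a, \tau_J(a))=1$, and it encodes a $2$-block $ab$ as the pair `(a, b)`; for an essential A these pairs are exactly the elements of $\mathcal{B}_2(\textsf{X}_A)$.

```python
import numpy as np

def second_higher_block_pair(A, J):
    """Build (A_2, J_2) from a flip pair (A, J) of zero-one matrices.

    Symbols of the new alphabet are the admissible pairs (a, b), i.e. A[a, b] == 1.
      A_2[(a, b), (c, d)] = 1  iff  b == c                     (t_1(ab) = i_1(cd))
      J_2[(a, b), (c, d)] = 1  iff  (c, d) == (tau(b), tau(a))  (mirror of tau_J)
    """
    A, J = np.asarray(A), np.asarray(J)
    tau = {a: int(np.argmax(J[a])) for a in range(J.shape[0])}  # J is a permutation involution
    blocks = [(a, b) for a in range(A.shape[0])
                     for b in range(A.shape[1]) if A[a, b] == 1]
    A2 = np.zeros((len(blocks), len(blocks)), dtype=int)
    J2 = np.zeros((len(blocks), len(blocks)), dtype=int)
    for i, (a, b) in enumerate(blocks):
        for j, (c, d) in enumerate(blocks):
            A2[i, j] = int(b == c)
            J2[i, j] = int((c, d) == (tau[b], tau[a]))
    return blocks, A2, J2

# Sanity check on the full two-shift with the symbol-exchange flip:
blocks, A2, J2 = second_higher_block_pair([[1, 1], [1, 1]], [[0, 1], [1, 0]])
print(np.array_equal(A2 @ J2, J2 @ A2.T),
      np.array_equal(J2 @ J2, np.eye(len(blocks), dtype=int)))
```

The last line checks that $(A_2, J_2)$ is again a flip pair, that is, $A_2J_2 = J_2A_2^{\textsf{T}}$ and $J_2^2=I$.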

In the following lemma, we prove that there is a $D_{\infty }$ -SSE from $(A, J)$ to $(A_n, J_n)$ .

Lemma 4.1. If n is a positive integer greater than $1$ , then we have

$$ \begin{align*}(A_1, J_1) \approx (A_{n}, J_{n})\; (\mathrm{lag} \; n-1).\end{align*} $$

Proof. For each $k=1, 2, \ldots , n-1$ , we define a zero-one $\mathcal {B}_k(\textsf {X}_A) \times \mathcal {B}_{k+1}(\textsf {X}_A)$ matrix $D_k$ and a zero-one $ \mathcal {B}_{k+1}(\textsf {X}_A) \times \mathcal {B}_k (\textsf {X}_A)$ matrix $E_k$ by

$$ \begin{align*}D_k(u, v)=\begin{cases} 1 \quad \text{if} \; u=i_{k}(v), \\ 0 \quad \text{otherwise}, \end{cases} (u \in \mathcal{B}_k(\textsf{X}_A), v \in \mathcal{B}_{k+1}(\textsf{X}_A))\end{align*} $$

and

$$ \begin{align*}E_k(v, u)=\begin{cases} 1 \quad \text{if} \; u=t_k(v), \\ 0 \quad \text{otherwise}, \end{cases} (u \in \mathcal{B}_k(\textsf{X}_A), v \in \mathcal{B}_{k+1}(\textsf{X}_A)).\end{align*} $$

It is straightforward to see that $(D_k, E_k)$ is a $D_{\infty}$-half elementary equivalence from $(A_k, J_k)$ to $(A_{k+1}, J_{k+1})$ for each k, and composing these $D_{\infty}$-HEEs gives a $D_{\infty}$-SSE of lag $n-1$ from $(A_1, J_1)$ to $(A_n, J_n)$.

In the proof of Lemma 4.1, $(\textsf{X}_{A_{k+1}}$, $\sigma_{A_{k+1}}$, $\varphi_{A_{k+1}, J_{k+1}})$ is equal, up to a recoding of symbols, to the second higher block $D_{\infty}$-system of $(\textsf{X}_{A_k}$, $\sigma_{A_k}$, $\varphi_{A_k, J_k})$, and the half elementary conjugacy

$$ \begin{align*}\gamma_{D_k, E_k} : (\textsf{X}_{A_k}, \sigma_{A_k}, \varphi_{A_k, J_k}) \rightarrow (\textsf{X}_{A_{k+1}}, \sigma_{A_{k+1}}, \sigma_{A_{k+1}} \circ \varphi_{A_{k+1}, J_{k+1}})\end{align*} $$

induced by $(D_k, E_k)$ can be regarded as the second $D_{\infty}$-higher block code for each $k=1, 2, \ldots , n-1$. A $D_{\infty}$-HEE is said to be a complete $D_{\infty}$-half elementary equivalence from $(A, J)$ to $(B, K)$ if $\gamma_{D, E}$ is the second $D_{\infty}$-higher block code.

In the rest of the section, we prove Proposition A.

Proof of Proposition A

We only prove part (1). One can prove part (2) in a similar way.

We denote the flip pairs for the nth higher block $D_{\infty }$ -system of $(\textsf {X}_{A}, \sigma _A, \varphi _{A, J})$ by $(A_n, J_n)$ for each positive integer n. If $\psi :(\textsf {X}_{A}, \sigma _A, \varphi _{A, J}) \rightarrow (\textsf {X}_{B}, \sigma _B, \varphi _{B, K})$ is a $D_{\infty }$ -conjugacy, then there are non-negative integers s and t and a block map $\Psi : \mathcal {B}_{s+t+1}(\textsf {X}_A) \rightarrow \mathcal {B}_1(\textsf {X}_B)$ such that

$$ \begin{align*}\psi(x)_i = \Psi(x_{[i-s, i+t]}) \quad (x \in \textsf{X}_A; i \in \mathbb{Z}).\end{align*} $$

We may assume that $s+t$ is even by extending the window size if necessary. By Lemma 4.1, there is a $D_{\infty }$ -SSE of lag $(s+t)$ from $(A, J)$ to $(A_{s+t+1}, J_{s+t+1})$ . From equation (2.4), it follows that the ( $s+t+1$ )th $D_{\infty }$ -higher block code $h_{s+t+1}$ is a $D_{\infty }$ -conjugacy. It is obvious that there is a $D_{\infty }$ -conjugacy $\psi '$ induced by $\psi $ satisfying $\psi = \psi ' \circ h_{s+t+1}$ and

$$ \begin{align*}x, y \in h_{s+t+1}(X) \quad \text{and} \quad x_0 = y_0 \Rightarrow \psi'(x)_0 = \psi'(y)_0.\end{align*} $$

So we may assume $s=t=0$ and show that there is a $D_{\infty }$ -SSE of lag $2l$ from $(A, J)$ to $(B, K)$ for some positive integer l.

If $\psi ^{-1}$ is the inverse of $\psi $ , there is a non-negative integer m such that

(4.1) $$ \begin{align} y, y' \in \textsf{X}_B \quad \text{and} \quad y_{[-m, m]}=y^{\prime}_{[-m, m]} \Rightarrow \psi^{-1}(y)_0=\psi^{-1}(y')_0 \end{align} $$

since $\psi ^{-1}$ is uniformly continuous. For each $k=1$ , $\ldots $ , $2m+1$ , we define a set $\mathcal {A}_k$ by

$$ \begin{align*}\mathcal{A}_k = \left\{ \left[\begin{array}{c} v \\ w \\ u \end{array} \right]: u, v \in \mathcal{B}_{i}(\textsf{X}_B), w \in \mathcal{B}_{j}(\textsf{X}_A) \text{ and } u \Psi(w) v \in \mathcal{B}_k(\textsf{X}_{B})\right\}\kern-1.5pt ,\end{align*} $$

where $i=\lfloor ({k-1})/{2} \rfloor $ and $j=k-2\lfloor ({k-1})/{2} \rfloor $ . We define $\mathcal {A}_k \times \mathcal {A}_k$ matrices $M_k$ and $F_k$ to be

$$ \begin{align*} M_k \left( \left[\begin{array}{c} v \\ w \\ u \end{array} \right], \left[\begin{array}{c} v' \\ w' \\ u' \end{array} \right] \right) = 1 &\Leftrightarrow \left[\begin{array}{c} v \\ \Psi(w) \\ u \end{array} \right] \left[\begin{array}{c} v' \\ \Psi(w') \\ u' \end{array} \right] \in \mathcal{B}_2(\textsf{X}_{B_k}) \\[6pt] & \qquad\ \ \text{ and } ww' \in \mathcal{B}_2(\textsf{X}_{A_j}) \end{align*} $$

and

$$ \begin{align*} F_k \left( \left[\begin{array}{c} v \\ w \\ u \end{array} \right], \left[\begin{array}{c} v' \\ w' \\ u' \end{array} \right] \right)=1 \Leftrightarrow\kern-3pt \begin{array}{c} \\ u' = (\mathcal{M} \circ \tau_{K}) (v), \; w'= (\mathcal{M} \circ \tau_{J})(w) \\ \\ \quad \text{ and } v'= (\mathcal{M} \circ \tau_{K})(u) \end{array} \end{align*} $$

for all

$$ \begin{align*}\left[\begin{array}{c} v \\ w \\ u \end{array} \right] , \; \left[\begin{array}{c} v' \\ w' \\ u' \end{array} \right] \in \mathcal{A}_k.\end{align*} $$

A direct computation shows that $(M_k, F_k)$ is a flip pair for each k. Next, we define a zero-one $\mathcal {A}_k \times \mathcal {A}_{k+1}$ matrix $R_k$ and a zero-one $\mathcal {A}_{k+1} \times \mathcal {A}_k$ matrix $S_k$ to be

$$ \begin{align*} R_k \left( \left[\begin{array}{c} v \\ w \\ u \end{array} \right], \left[\begin{array}{c} v' \\ w' \\ u' \end{array} \right] \right) = 1 \Leftrightarrow\kern-3pt \begin{array}{c} \\ u \Psi(w) v = i_k\! \left( u' \Psi(w') v' \right) \\ \\ \quad \text{ and } t_1(w)=i_1(w') \end{array}\end{align*} $$

and

$$ \begin{align*}S_k\left( \left[\begin{array}{c} v' \\ w' \\ u' \end{array} \right], \left[\begin{array}{c} v \\ w \\ u \end{array} \right] \right) = 1 \Leftrightarrow\kern-3pt \begin{array}{c} \\ t_k \left( u'\Psi(w')v' \right)=u\Psi(w)v \\ \\ \quad \text{ and } t_1(w')=i_1(w), \end{array}\end{align*} $$

for all

$$ \begin{align*}\left[\begin{array}{c} v \\ w \\ u \end{array} \right] \in \mathcal{A}_{k} \quad \text{and} \quad \left[\begin{array}{c} v' \\ w' \\ u' \end{array} \right] \in \mathcal{A}_{k+1}.\end{align*} $$

A direct computation shows that $(R_k, S_k)$ is a $D_{\infty}$-half elementary equivalence from $(M_k, F_k)$ to $(M_{k+1}, F_{k+1})$ for each $k=1, \ldots , 2m$.

Because $M_1=A$ and $F_1=J$ , we obtain

(4.2) $$ \begin{align} (A, J) \approx (M_{2m+1}, F_{2m+1})\; (\text{lag} \;2m). \end{align} $$

Finally, equation (4.1) implies that the $D_{\infty}$-TMC determined by the flip pair $(M_{2m+1}$, $F_{2m+1})$ is equal to the $(2m+1)$th higher block $D_{\infty}$-system of $(\textsf{X}_B$, $\sigma_B$, $\varphi_{B, K})$ by recoding of symbols. From Lemma 4.1, we have

(4.3) $$ \begin{align} (B, K) \approx (M_{2m+1}, F_{2m+1})\; (\text{lag} \;2m). \end{align} $$

From equations (4.2) and (4.3), it follows that

$$ \begin{align*}(A, J) \approx (B, K)\; (\text{lag} \;4m).\end{align*} $$

5 Proof of Theorem D

We start with the case where $(B, K)$ in Theorem D is the flip pair for the nth higher block $D_{\infty }$ -system of $(\textsf {X}_A, \sigma _A, \varphi _{A, J})$ .

Lemma 5.1. Suppose that $(B, K)$ is the flip pair for the nth higher block $D_{\infty }$ -system of $(\textsf {X}_A, \sigma _A, \varphi _{A, J})$ .

  1. (1) If $p \in \mathcal{I}nd(\mathcal{K}(A))$, then there is $q \in \mathcal{I}nd(\mathcal{K}(B))$ such that $q = p+n-1$ and that

    $$ \begin{align*}\mathrm{sgn}(\mathcal{E}_p(A)) = \mathrm{sgn}(\mathcal{E}_{q}(B)).\end{align*} $$
  2. (2) If $q \in \mathcal{I}nd(\mathcal{K}(B))$ and $q \geq n$, then there is $p \in \mathcal{I}nd(\mathcal{K}(A))$ such that $q = p + n - 1$ and that

    $$ \begin{align*}\mathrm{sgn}(\mathcal{E}_p(A)) = \mathrm{sgn}(\mathcal{E}_q(B)).\end{align*} $$
  3. (3) If $q \in \mathcal{I}nd(\mathcal{K}(B))$ and $q < n$, then we have

    $$ \begin{align*}\mathrm{sgn}(\mathcal{E}_q(B)) = +1.\end{align*} $$

Proof. We only prove the case $n=2$ . We assume $\mathcal {E}(A) \in \mathcal {B}as(\mathcal {K}(A))$ and $\mathcal {E}(B) \in \mathcal {B}as(\mathcal {K}(B))$ are bases having properties from Proposition B. Suppose that $\alpha $ is a cycle in $\mathcal {E}_p(A)$ for some $p \in \mathcal {I}nd(\mathcal {K}(A))$ and that u is the initial vector of $\alpha $ . For any $a_1 a_2 \in \mathcal {B}_2(\textsf {X}_A)$ , we have

$$ \begin{align*}Eu\left(\left[\begin{array}{c} a_2 \\ a_1 \end{array}\right]\right) = u(a_2)\end{align*} $$

and this implies that $Eu$ is not identically zero. By Lemma 3.5, $\alpha $ is a cycle in $\mathcal {E}_p(A; \partial _{D, E}^+)$ . Under the assumption that $\mathcal {E}(A)$ and $\mathcal {E}(B)$ have properties from Proposition B, we can find a cycle $\beta $ in $\mathcal {E}(B)$ such that the initial vector of $\beta $ is $Eu$ . Thus, we obtain

(5.1) $$ \begin{align} \mathcal{E}_p(A; \partial_{D, E}^-) = \varnothing \quad \text{and} \quad \mathcal{E}_{p+1}(B; \partial_{E, D}^+) = \varnothing, \end{align} $$
$$ \begin{align*}p \in \mathcal{I}nd(\mathcal{K}(A)) \Leftrightarrow p+1 \in \mathcal{I}nd(\mathcal{K}(B)) \quad (p \geq 1)\end{align*} $$

and

$$ \begin{align*}\text{sgn}(\mathcal{E}_p (A))=\text{sgn}(\mathcal{E}_{p+1}(B)) \quad (p\in \mathcal{I}nd(\mathcal{K}(A))).\end{align*} $$

If $\mathcal {E}_1(B) \neq \varnothing $ , then $\mathcal {E}_1(B) = \mathcal {E}_1(B; \partial _{E, D}^-)$ by equation (5.1) and we have

$$ \begin{align*}\text{sgn}(\mathcal{E}_1(B)) = +1\end{align*} $$

by Propositions 3.6 and C.

Remark. If two $D_{\infty }$ -TMCs are finite, then we can directly determine whether or not they are $D_{\infty }$ -conjugate. In this paper, we do not consider $D_{\infty }$ -TMCs that have finite cardinalities. Hence, when $(B, K)$ is the flip pair for the nth higher block $D_{\infty }$ -system of $(\textsf {X}_A, \sigma _A, \varphi _{A, J})$ for some positive integer $n>1$ , B must have zero as its eigenvalue.

Proof of Theorem D

Suppose that $(A, J)$ and $(B, K)$ are flip pairs and that $\psi :(\textsf{X}_A, \sigma_A,\varphi_{A, J}) \rightarrow (\textsf{X}_B, \sigma_B, \varphi_{B, K})$ is a $D_{\infty}$-conjugacy. As we can see in the proof of Proposition A, there is a $D_{\infty}$-SSE from $(A, J)$ to $(B, K)$ consisting of an even number of complete $D_{\infty}$-half elementary equivalences and the $D_{\infty}$-half elementary equivalences induced by the pairs $(R_k, S_k)$. In Lemma 5.1, we have already seen that Theorem D is true in the case of complete $D_{\infty}$-half elementary equivalences. So it remains to compare the flip signatures of $(M_k, F_k)$ and $(M_{k+1}, F_{k+1})$ for each $k=1, \ldots , 2m$. Throughout the proof, we assume that $\mathcal{A}_k$, $(M_k, F_k)$ and $(R_k, S_k)$ are as in the proof of Proposition A.

We only discuss the following two cases:

  1. (1) $k=2$;

  2. (2) $k=3$.

When $k=1$, $(R_1, S_1)$ is a complete $D_{\infty}$-half elementary equivalence from $(A, J)$ to $(A_2, J_2)$. For each $k=4, 5, \ldots , 2m$, one can apply the arguments used in cases (1) and (2) to $(M_k, F_k)$ and $(M_{k+1}, F_{k+1})$. More precisely, when k is an even number, the argument used in case (1) can be applied and when k is an odd number, the argument used in case (2) can be applied.

(1) Suppose that $(B_2, K_2)$ is the flip pair for the second higher block $D_{\infty }$ -system of $(\textsf {X}_B, \sigma _B, \varphi _{B, K})$ . We first compare the flip signatures of $(B_2, K_2)$ and $(M_3, F_3)$ . We define a zero-one $\mathcal {B}_2(\textsf {X}_B) \times \mathcal {A}_3$ matrix $U_2$ and a zero-one $\mathcal {A}_3 \times \mathcal {B}_2(\textsf {X}_B)$ matrix $V_2$ by

$$ \begin{align*}U_2 \left( \left[\begin{array}{c} b_2 \\ b_1 \end{array} \right], \left[\begin{array}{c} d_3 \\ a_2 \\ d_1 \end{array} \right] \right) = \begin{cases} 1 \quad \text{if } b_1=d_1 \text{ and } \Psi(a_2) = b_2, \\ 0 \quad \text{otherwise}, \end{cases} \end{align*} $$

and

$$ \begin{align*}V_2\left( \left[\begin{array}{c} d_3 \\ a_2 \\ d_1 \end{array} \right], \left[\begin{array}{c} b_2 \\ b_1 \end{array} \right] \right) = \begin{cases} 1 \quad \text{if } b_2=d_3 \text{ and } \Psi(a_2)=b_1, \\ 0 \quad \text{otherwise}, \end{cases}\end{align*} $$

for all

$$ \begin{align*}\left[\begin{array}{c} b_2 \\ b_1 \end{array} \right] \in \mathcal{B}_2(\textsf{X}_B) \quad \text{and} \quad \left[\begin{array}{c} d_3 \\ a_2 \\ d_1 \end{array} \right] \in \mathcal{A}_{3}.\end{align*} $$

A direct computation shows that $(U_2, V_2)$ is a $D_{\infty}$-half elementary equivalence from $(B_2, K_2)$ to $(M_3, F_3)$.

The remark after Lemma 5.1 says that $\mathcal{K}(B_2)$ is not trivial. So there is a basis $\mathcal{E}(B_2) \in \mathcal{B}as(\mathcal{K}(B_2))$ for the eventual kernel of $B_2$ having property (1) from Lemma 3.3. Suppose that $\gamma = \{w_1, \ldots, w_p\}$ is a cycle in $\mathcal{E}(B_2)$. Since

$$ \begin{align*}V_2w_1 \left( \left[\begin{array}{c} b_3 \\ a_2 \\ b_1 \end{array} \right]\right) = w_1 \left(\left[ \begin{array}{c} b_3 \\ \Psi(a_2) \end{array}\right]\right) \quad \left( \left[\begin{array}{c} b_3 \\ a_2 \\ b_1 \end{array} \right] \in \mathcal{A}_3 \right)\kern-1.5pt ,\end{align*} $$

it follows that $w_1 \notin \text {Ker}(V_2)$ . By Lemma 3.5, $\gamma $ is a cycle in $\mathcal {E}_p(B_2; \partial _{U_2, V_2}^+)$ . Suppose that $\mathcal {E}(M_3) \in \mathcal {B}as(\mathcal {K}(M_3))$ is a basis for the eventual kernel of $M_3$ having property (1) from Lemma 3.3. Then it is obvious that for each $p \in \mathcal {I}nd(\mathcal {K}(B_2))$ , we have

(5.2) $$ \begin{align} \mathcal{E}_p(B_2; \partial_{U_2, V_2}^-) = \varnothing \quad \text{and} \quad \mathcal{E}_{p+1}(M_3; \partial_{V_2, U_2}^+) = \varnothing. \end{align} $$

Hence,

$$ \begin{align*}p \in \mathcal{I}nd(\mathcal{K}(B_2)) {\ \Leftrightarrow\ } p+1 \in \mathcal{I}nd(\mathcal{K}(M_3)) \quad (p \geq 1)\end{align*} $$

and

$$ \begin{align*}\text{sgn}(\mathcal{E}_p (B_2))=\text{sgn}(\mathcal{E}_{p+1}(M_3)) \quad (p\in \mathcal{I}nd(\mathcal{K}(B_2)))\end{align*} $$

by Proposition C. If $\mathcal {E}_1(M_3) \neq \varnothing $ , then $\mathcal {E}_1(M_3) = \mathcal {E}_1(M_3; \partial _{V_2, U_2}^-)$ by equation (5.2) and we have

(5.3) $$ \begin{align} \text{sgn}(\mathcal{E}_1(M_3)) = +1 \end{align} $$

by Propositions 3.6 and C.

Now, we compare the flip signatures of $(M_2, F_2)$ and $(M_3, F_3)$ . Let $\beta =\{v_1, \ldots , v_{p+1}\}$ be a cycle in $\mathcal {E}(M_3)$ for some $p \geq 1$ . If $b_1b_2b_3 \in \mathcal {B}_3(\textsf {X}_B)$ and $a_2, a_2' \in \Psi ^{-1}(b_2)$ , then from $M_3v_2 =v_1$ , it follows that

$$ \begin{align*}v_1 \left( \left[\begin{array}{c} b_3 \\ a_2 \\ b_1 \end{array} \right]\right) = \sum_{a_3 \in \Psi^{-1}(b_3)} \sum_{b_4 \in \mathcal{F}_B(b_3)} v_2 \left( \left[\begin{array}{c} b_4 \\ a_3 \\ b_2 \end{array} \right]\right)\end{align*} $$

and this implies that

$$ \begin{align*}v_1 \left( \left[\begin{array}{c} b_3 \\ a_2 \\ b_1 \end{array} \right]\right) = v_1 \left( \left[\begin{array}{c} b_3 \\ a_2' \\ b_1 \end{array} \right]\right)\kern-1.5pt .\end{align*} $$

Since $v_1$ is a non-zero vector, there is a block $b_1b_2b_3 \in \mathcal {B}_3(\textsf {X}_B)$ and a non-zero real number k such that

$$ \begin{align*}v_1 \left( \left[\begin{array}{c} b_3 \\ a_2 \\ b_1 \end{array} \right]\right) = k \quad \text{for all } a_2 \in \Psi^{-1}(b_2).\end{align*} $$

Since $M_3v_1=0$ , it follows that

$$ \begin{align*} \sum_{a_2 \in \Psi^{-1}(b_2)} \sum_{b_3 \in \mathcal{F}_B(b_2)} v_1 \left( \left[\begin{array}{c} b_3 \\ a_2 \\ b_1 \end{array} \right]\right) = k \sum_{b_3 \in \mathcal{F}_B(b_2)} v_1 \left( \left[\begin{array}{c} b_3 \\ a_2 \\ b_1 \end{array} \right]\right)= 0.\end{align*} $$

From this, we see that

$$ \begin{align*}R_2 v_1 \left(\left[ \begin{array}{c} a_2 \\ a_1 \end{array}\right]\right) = \sum_{b_3 \in \mathcal{F}_B(b_2)} v_1\left( \left[\begin{array}{c} b_3 \\ a_2 \\ b_1 \end{array} \right]\right)=0\end{align*} $$

for any $a_1 \in \Psi ^{-1}(b_1)$ and $a_1 a_2 \in \mathcal {B}_2(\textsf {X}_A)$ . Hence, $v_1 \in \text {Ker}(R_2)$ and $\beta $ is a cycle in $\mathcal {E}_{p+1}(M_3; \partial _{S_2, R_2}^-)$ by Lemma 3.5. From this, we see that

$$ \begin{align*}p+1 \in \mathcal{I}nd(\mathcal{K}(M_3)) \Leftrightarrow p \in \mathcal{I}nd(\mathcal{K}(M_2)) \quad (p \geq 2)\end{align*} $$

and

$$ \begin{align*}2 \in \mathcal{I}nd(\mathcal{K}(M_3)) \Leftrightarrow 1 \in \mathcal{I}nd(\mathcal{K}(M_2; \partial_{R_2, S_2}^+)).\end{align*} $$

Suppose that $\mathcal {E}(M_2) \in \mathcal {B}as(\mathcal {K}(M_2))$ is a basis for the eventual kernel of $M_2$ having property (1) from Lemma 3.3. If $1 \in \mathcal {I}nd(\mathcal {K}(M_2))$ and $\mathcal {E}_{1}(M_2; \partial _{R_2, S_2}^-)$ is non-empty, then we have

$$ \begin{align*}\text{sgn}(\mathcal{E}_{1}(M_2; \partial_{R_2, S_2}^-)) = +1\end{align*} $$

by Propositions 3.6, C and equation (3.5). Thus, we have

$$ \begin{align*}\text{sgn}(\mathcal{E}_{p+1} (M_3))=\text{sgn}(\mathcal{E}_{p}(M_2)) \quad (p+1\in \mathcal{I}nd(\mathcal{K}(M_3)); p \geq 1).\end{align*} $$

If $1 \in \mathcal {I}nd(\mathcal {K}(M_3))$ and $\mathcal {E}_{1}(M_3; \partial _{S_2, R_2}^+)$ is non-empty, then we have

$$ \begin{align*}\text{sgn}(\mathcal{E}_{1}(M_3; \partial_{S_2, R_2}^+)) = +1\end{align*} $$

and if $\mathcal {E}_{1}(M_3; \partial _{S_2, R_2}^-)$ is non-empty, then we have

$$ \begin{align*}\text{sgn}(\mathcal{E}_{1}(M_3; \partial_{S_2, R_2}^-)) = +1\end{align*} $$

by equations (3.5), (5.3) and Propositions 3.6, C. As a consequence, the flip signatures of $(M_2, F_2)$ and $(M_3, F_3)$ have the same number of $-1$ s and their leading signatures coincide.

(2) Suppose that $\alpha$ is a cycle in $\mathcal{E}(M_3)$ and that u is the initial vector of $\alpha$. Since

$$ \begin{align*}S_3u\left(\left[\begin{array}{c} b_4 \\ a_3 \\ a_2 \\ b_1 \end{array}\right]\right) = u \left(\left[\begin{array}{c} b_4 \\ a_3 \\ \Psi(a_2) \end{array}\right]\right) \quad \left(\left[\begin{array}{c} b_4 \\ a_3 \\ a_2 \\ b_1 \end{array}\right] \in \mathcal{A}_4\right)\kern-1.5pt ,\end{align*} $$

it follows that $S_3u$ is not identically zero. The argument used in the proof of Lemma 5.1 completes the proof.

6 $D_{\infty }$ -shift equivalence and the Lind zeta functions

We first introduce the notion of $D_{\infty }$ -shift equivalence which is an analogue of shift equivalence. Let $(A, J)$ and $(B, K)$ be flip pairs and let l be a positive integer. A $D_{\infty }$ -shift equivalence ( $D_{\infty }$ -SE) of lag l from $(A, J)$ to $(B, K)$ is a pair $(D, E)$ of non-negative integral matrices satisfying

$$ \begin{align*}A^{l}=DE, \quad B^{l}=ED, \quad AD=DB \quad \text{and} \quad E=KD^{\textsf{T}}J.\end{align*} $$

We observe that $AD=DB$ , $E=KD^{\textsf {T}}J$ and the fact that $(A,J)$ and $(B,K)$ are flip pairs imply $EA=BE$ . If there is a $D_{\infty }$ -SE of lag l from $(A, J)$ to $(B, K)$ , then we say that $(A, J)$ is $D_{\infty }$ -shift equivalent to $(B, K)$ and write

$$ \begin{align*}(A, J) \sim (B, K) \; (\text{lag } l).\end{align*} $$
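
For completeness, the observation above that $EA=BE$ can be checked directly from the displayed relations and the flip-pair identities $AJ=JA^{\textsf{T}}$, $BK=KB^{\textsf{T}}$ and $J^2=K^2=I$:

$$ \begin{align*}EA = KD^{\textsf{T}}JA = KD^{\textsf{T}}A^{\textsf{T}}J = K(AD)^{\textsf{T}}J = K(DB)^{\textsf{T}}J = KB^{\textsf{T}}D^{\textsf{T}}J = BKD^{\textsf{T}}J = BE,\end{align*} $$

where the second equality uses $JA=A^{\textsf{T}}J$, which follows from $AJ=JA^{\textsf{T}}$ and $J^2=I$, and the sixth uses $KB^{\textsf{T}}=BK$.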

Suppose that

$$ \begin{align*}(D_1, E_1), (D_2, E_2), \ldots, (D_l, E_l)\end{align*} $$

is a $D_{\infty }$ -SSE of lag l from $(A, J)$ to $(B, K)$ . If we set

$$ \begin{align*}D=D_1 D_2 \ldots D_l \quad \text{and} \quad E=E_l \ldots E_2 E_1,\end{align*} $$

then $(D, E)$ is a $D_{\infty }$ -SE of lag l from $(A, J)$ to $(B, K)$ . Hence, we have

$$ \begin{align*}(A, J) \approx (B, K) \quad (\text{lag } l) \Rightarrow (A, J) \sim (B, K) \; (\text{lag } l).\end{align*} $$

In the rest of the section, we review the Lind zeta function of a $D_{\infty}$-TMC. In [4], an explicit formula for the Lind zeta function of a $D_{\infty}$-system was established. In the case of a $D_{\infty}$-TMC, the Lind zeta function can be expressed in terms of matrices from flip pairs. We briefly discuss the formula.

Suppose that G is a group and that $\alpha $ is a G-action on a set X. Let $\mathcal {F}$ denote the set of finite index subgroups of G. For each $H \in \mathcal {F}$ , we set

$$ \begin{align*}p_H(\alpha) = |\{ x \in X : \text{for all } h \in H \; \alpha(h, x) = x \}|.\end{align*} $$

The Lind zeta function $\zeta _{\alpha }$ of the action $\alpha $ is defined by

(6.1) $$ \begin{align} \zeta_{\alpha}(t) = \exp \left( \sum_{H \in \mathcal{F}} \frac{p_H(\alpha)}{|G/H|}\, t^{|G/H|}\right)\kern-1.5pt . \end{align} $$

It is clear that if $\alpha : \mathbb{Z} \times X \rightarrow X$ is given by $\alpha(n, x) = T^n(x)$, then the Lind zeta function $\zeta_{\alpha}$ becomes the Artin–Mazur zeta function $\zeta_T$ of the topological dynamical system $(X, T)$. The formula for the Artin–Mazur zeta function can be found in [1]. Lind defined the function (6.1) in [7] for the case $G = \mathbb{Z}^d$.

Every finite index subgroup of the infinite dihedral group $D_{\infty } =\langle a, b : ab=ba^{-1} \;\, \text {and} \;\, b^2=1\rangle $ can be written in one and only one of the following forms:

$$ \begin{align*}\langle a^m \rangle \quad \text{or} \quad \langle a^m, a^k b \rangle \quad (m=1, 2, \ldots; k=0, 1, \ldots, m-1)\end{align*} $$

and the index is given by

$$ \begin{align*}|D_{\infty}/ \langle a^m \rangle| = 2m \quad \text{or} \quad |D_{\infty}/ \langle a^m, a^k b \rangle| = m.\end{align*} $$

Suppose that $(X, T, F)$ is a $D_{\infty }$ -system. If m is a positive integer, then the number of periodic points in X of period m will be denoted by $p_m(T)$ :

$$ \begin{align*}p_m(T) = |\{ x \in X : T^m(x) = x \}|.\end{align*} $$

If m is a positive integer and n is an integer, then $p_{m,n}(T, F)$ will denote the number of points in X fixed by $T^m$ and $T^n \circ F$ :

$$ \begin{align*}p_{m, n}(T, F) = |\{ x \in X : T^m(x) = T^n \circ F (x) = x \}|.\end{align*} $$

Thus, the Lind zeta function $\zeta _{T,F}$ of a $D_{\infty }$ -system $(X, T, F)$ is given by

(6.2) $$ \begin{align} \zeta_{T, F}(t) = \exp \bigg(\sum_{m=1}^{\infty} \, \frac{p_m(T)}{2m}t^{2m} +\sum_{m=1}^{\infty} \, \sum_{k=0}^{m-1}\, \frac{p_{m,k}(T, F)}{m}t^{m} \bigg). \end{align} $$

It is evident that if two $D_{\infty}$-systems $(X, T, F)$ and $(X', T', F')$ are $D_{\infty}$-conjugate, then

$$ \begin{align*}p_m(T) = p_m(T') \quad \text{and} \quad p_{m, n}(T, F) = p_{m,n}(T', F')\end{align*} $$

for all positive integers m and integers n. As a consequence, the Lind zeta function is a $D_{\infty }$ -conjugacy invariant.

The formula (6.2) can be simplified as follows. Since $T \circ F = F \circ T^{-1}$ and $F^2 = \text {Id}_X,$ it follows that

$$ \begin{align*}p_{m,n}(T, F) = p_{m,n+m}(T, F) = p_{m,n+2}(T, F)\end{align*} $$

and this implies that

(6.3) $$ \begin{align} & p_{m,n}(T, F) = p_{m,0}(T, F) \quad \text{ if } m \text{ is odd}, \\[-1pt] & p_{m,n}(T, F)= p_{m,0}(T, F) \quad \text{ if } m \text{ and } n \text{ are even}, \nonumber \\[-1pt] & p_{m,n}(T, F)= p_{m,1}(T, F) \quad \text{ if } m \text{ is even and } n \text{ is odd}. \nonumber \end{align} $$

Hence, we obtain

$$ \begin{align*}\sum_{k=0}^{m-1} \, \frac{p_{m,k}(T, F)}{m} = \begin{cases}p_{m,0}(T, F) &\text{if } m \text{ is odd},\\ \displaystyle{\frac{p_{m,0}(T, F)+p_{m,1}(T, F)}{2}} & \text{if } m \text{ is even}.\end{cases}\end{align*} $$

Using this, equation (6.2) becomes

$$ \begin{align*}\zeta_{T, F}(t) = {\zeta_T(t^2)}^{1/2} \exp \left( G_{T, F}(t) \right),\end{align*} $$

where $\zeta _T$ is the Artin–Mazur zeta function of $(X, T)$ and $G_{T, F}$ is given by

$$ \begin{align*}G_{T, F}(t) = \sum_{m=1}^{\infty} \, \left( p_{2m-1, 0}(T, F) \, t^{2m-1} + \frac{p_{2m, 0}(T, F)+p_{2m, 1}(T, F)}{2} \, t^{2m}\right).\end{align*} $$

If there is a $D_{\infty }$ -SSE of lag $2l$ between two flip pairs $(A, J)$ and $(B, K)$ for some positive integer l, then $(\textsf {X}_A, \sigma _A, \varphi _{A, J})$ and $(\textsf {X}_B, \sigma _B, \varphi _{B, K})$ have the same Lind zeta function by item (1) in Proposition A. The following proposition says that the Lind zeta function is actually an invariant for $D_{\infty }$ -SSE.

Proposition 6.1. If $(X, T, F)$ is a $D_{\infty }$ -system, then

$$ \begin{align*}p_{2m-1, 0}(T, F) = p_{2m-1, 0}(T, T \circ F),\end{align*} $$
$$ \begin{align*}p_{2m, 0}(T, F) = p_{2m, 1}(T, T \circ F), \end{align*} $$
$$ \begin{align*}p_{2m, 1}(T, F) = p_{2m, 0}(T, T \circ F)\end{align*} $$

for all positive integers m. As a consequence, the Lind zeta functions of $(X, T, F)$ and $(X, T, T \circ F)$ are the same.

Proof. The last equality is trivially true. To prove the first two equalities, we observe that

$$ \begin{align*}T^{m}(x) = F(x)=x \Leftrightarrow T^{m}(Tx)=T\circ(T \circ F)(Tx) = Tx\end{align*} $$

for all positive integers m. Thus, we have

(6.4) $$ \begin{align} p_{m, 0}(T, F)=p_{m, 1}(T, T \circ F) \quad (m=1, 2, \ldots). \end{align} $$

Replacing m with $2m$ yields the second equality. From equations (6.3) and (6.4), the first equality follows.

When $(A, J)$ is a flip pair, the numbers $p_{m, \delta}(\sigma_A, \varphi_{A, J})$ of fixed points can be expressed in terms of A and J for all positive integers m and $\delta \in \{0, 1\}$. To present this, we fix some notation. If M is a square matrix, then $\Delta_M$ will denote the column vector whose ith coordinate is the ith diagonal entry of M, that is,

$$ \begin{align*}\Delta_M(i) = M(i, i).\end{align*} $$

For instance, if I is the $2 \times 2$ identity matrix, then

$$ \begin{align*}\Delta_I = \left[\begin{array}{c} 1 \\ 1 \end{array}\right]\!.\end{align*} $$

The following proposition is proved in [4].

Proposition 6.2. If $(A, J)$ is a flip pair, then

$$ \begin{align*}p_{2m-1, 0}(\sigma_A, \varphi_{A, J}) = {\Delta_J}^{\textsf{T}} ( A^{m-1} ) \Delta_{AJ},\end{align*} $$
$$ \begin{align*}p_{2m, 0}(\sigma_A, \varphi_{A, J}) = {\Delta_J}^{\textsf{T}} ( A^m ) \Delta_J, \end{align*} $$
$$ \begin{align*}p_{2m, 1}(\sigma_A, \varphi_{A, J}) = {\Delta_{JA}}^{\textsf{T}} ( A^{m-1} ) \Delta_{AJ}\end{align*} $$

for all positive integers m.
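
These expressions are easy to test against a direct count. The sketch below is ours: it counts fixed points by enumerating length-m cyclic words and compares the result with the matrix expressions of Proposition 6.2 for the full two-shift with the symbol-exchange flip (a pair that reappears in §7); `brute_p` and `formula_p` are ad hoc names.

```python
import numpy as np
from itertools import product

def brute_p(A, tau, m, n):
    """p_{m,n}: points fixed by sigma^m and sigma^n o phi, counted via length-m
    cyclic words.  Since phi(x)_i = tau(x_{-i}), the second condition reads
    x_i = tau(x_{(-i-n) mod m}) for every i."""
    count = 0
    for w in product(range(A.shape[0]), repeat=m):
        if all(A[w[i], w[(i + 1) % m]] == 1 for i in range(m)) and \
           all(w[i] == tau[w[(-i - n) % m]] for i in range(m)):
            count += 1
    return count

def formula_p(A, J, m, n):
    """p_{m,n} via the matrix expressions of Proposition 6.2."""
    pw = np.linalg.matrix_power
    dJ, dAJ, dJA = np.diag(J), np.diag(A @ J), np.diag(J @ A)
    if m % 2 == 1:                        # p_{2r-1,0}; independent of n for odd m
        r = (m + 1) // 2
        return int(dJ @ pw(A, r - 1) @ dAJ)
    r = m // 2
    if n % 2 == 0:                        # p_{2r,0}
        return int(dJ @ pw(A, r) @ dJ)
    return int(dJA @ pw(A, r - 1) @ dAJ)  # p_{2r,1}

B = np.array([[1, 1], [1, 1]])
K = np.array([[0, 1], [1, 0]])
tau = {0: 1, 1: 0}                        # the one-block flip determined by K
for m in range(1, 7):
    for n in (0, 1):
        assert brute_p(B, tau, m, n) == formula_p(B, K, m, n)
print("Proposition 6.2 agrees with the brute-force counts for (B, K).")
```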

7 Examples

Let A be Ashley’s eight-by-eight and let B be the minimal zero-one transition matrix for the full two-shift, that is,

$$ \begin{align*}A=\left[\begin{array}{rrrrrrrr} 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 \end{array}\right] \quad \text{and} \quad B = \left[\begin{array}{rr} 1 & 1 \\ 1 & 1 \end{array}\right]\!.\end{align*} $$

There is a unique one-block flip for $(\textsf {X}_A, \sigma _A)$ and there are exactly two one-block flips for $(\textsf {X}_B, \sigma _B)$ . Those flips are determined by the permutation matrices

$$ \begin{align*}J=\left[\begin{array}{rrrrrrrr} 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \end{array}\right], \quad I = \left[\begin{array}{rr} 1 & 0 \\ 0 & 1 \end{array}\right] \quad \text{and} \quad K = \left[\begin{array}{rr} 0 & 1 \\ 1 & 0 \end{array}\right]\!.\end{align*} $$

In the following example, we calculate the Lind zeta functions of $(\textsf {X}_A, \sigma _A, \varphi _{A, J})$ , $(\textsf {X}_B, \sigma _B, \varphi _{B, I})$ and $(\textsf {X}_B, \sigma _B, \varphi _{B, K})$ .

Example 7.1. Direct computation shows that the numbers of fixed points of $(\textsf{X}_A, \sigma_A, \varphi_{A, J})$, $(\textsf{X}_B, \sigma_B, \varphi_{B, I})$ and $(\textsf{X}_B, \sigma_B, \varphi_{B, K})$ are as follows:

$$ \begin{align*}p_m(\sigma_A) = p_m(\sigma_B) = 2^m,\end{align*} $$
$$ \begin{align*}p_{2m-1, 0}(\sigma_A, \varphi_{A, J}) = p_{2m, 0}(\sigma_A, \varphi_{A, J}) = 0,\end{align*} $$
$$ \begin{align*}p_{2m, 1}(\sigma_A, \varphi_{A, J}) = \begin{cases} 2^m \quad \text{if } m \neq 6, \\ 80 \quad \; \text{if } m=6, \end{cases}\end{align*} $$
$$ \begin{align*}p_{2m-1, 0}(\sigma_B, \varphi_{B, I}) = 2^m, \quad p_{2m, 0}(\sigma_B, \varphi_{B, I}) = 2^{m+1}, \quad p_{2m, 1}(\sigma_B, \varphi_{B, I}) = 2^m,\end{align*} $$
$$ \begin{align*}p_{2m-1, 0}(\sigma_B, \varphi_{B, K}) = p_{2m, 0}(\sigma_B, \varphi_{B, K}) = 0, \quad p_{2m, 1}(\sigma_B, \varphi_{B, K}) = 2^m\end{align*} $$

for all positive integers m. Thus, the Lind zeta functions are as follows:

$$ \begin{align*}\zeta_{A, J}(t) = \frac{1}{\sqrt{1-2t^2}} \, \exp \left( \frac{t^2}{1-2t^2} +8t^{12}\right),\end{align*} $$
$$ \begin{align*}\zeta_{B, I}(t) = \frac{1}{\sqrt{1-2t^2}} \, \exp \left( \frac{2t+3t^2}{1-2t^2} \right)\end{align*} $$

and

$$ \begin{align*}\zeta_{B, K}(t) = \frac{1}{\sqrt{1-2t^2}} \, \exp \left( \frac{t^2}{1-2t^2} \right)\!.\end{align*} $$

As a result, we see that

$$ \begin{align*}(\textsf{X}_A, \sigma_A, \varphi_{A, J}) \ncong (\textsf{X}_B, \sigma_B, \varphi_{B, I}),\end{align*} $$
$$ \begin{align*}(\textsf{X}_A, \sigma_A, \varphi_{A, J}) \ncong (\textsf{X}_B, \sigma_B, \varphi_{B, K})\end{align*} $$

and

$$ \begin{align*}(\textsf{X}_B, \sigma_B, \varphi_{B, I}) \ncong (\textsf{X}_B, \sigma_B, \varphi_{B, K}).\end{align*} $$
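
The counts above can be reproduced mechanically from Proposition 6.2. The short script below (ours) re-enters the matrices displayed before Example 7.1 and prints the triples $(p_{2m-1,0}, p_{2m,0}, p_{2m,1})$ for $m = 1, \ldots, 7$, which can be compared with the values listed in the example.

```python
import numpy as np

A = np.array([[1,1,0,0,0,0,0,0], [0,0,1,0,0,0,1,0], [0,0,0,1,0,1,0,0], [0,1,0,0,0,0,0,1],
              [1,0,0,0,1,0,0,0], [0,0,0,0,1,0,0,1], [0,0,1,0,0,1,0,0], [0,0,0,1,0,0,1,0]])
J = np.zeros((8, 8), dtype=int)
for i in range(4):                  # the one-block flip exchanges symbols i and i + 4
    J[i, i + 4] = J[i + 4, i] = 1
B = np.array([[1, 1], [1, 1]])
I2 = np.eye(2, dtype=int)
K = np.array([[0, 1], [1, 0]])

def triple(A, J, m):
    """(p_{2m-1,0}, p_{2m,0}, p_{2m,1}) from Proposition 6.2."""
    pw = np.linalg.matrix_power
    dJ, dAJ, dJA = np.diag(J), np.diag(A @ J), np.diag(J @ A)
    return (int(dJ @ pw(A, m - 1) @ dAJ),
            int(dJ @ pw(A, m) @ dJ),
            int(dJA @ pw(A, m - 1) @ dAJ))

for name, (M, F) in {"(A, J)": (A, J), "(B, I)": (B, I2), "(B, K)": (B, K)}.items():
    print(name, [triple(M, F, m) for m in range(1, 8)])
```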

Example 7.2. Although $\zeta_{A, J} \neq \zeta_{B, I}$, $\zeta_{A, J} \neq \zeta_{B, K}$ and $\zeta_{B, I} \neq \zeta_{B, K}$, there are $D_{\infty}$-SEs between each pair of $(A, J)$, $(B, I)$ and $(B, K)$. If D and E are the matrices given by

$$ \begin{align*}D = 2\left[\begin{array}{rr} 1 & 1 \\ 1 & 1 \\ 1 & 1 \\ 1 & 1 \\ 1 & 1 \\ 1 & 1 \\ 1 & 1 \\ 1 & 1 \end{array}\right]\quad \text{and} \quad E= 2\left[\begin{array}{rrrrrrrr} 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \end{array}\right]\!,\end{align*} $$

then $(D, E)$ is a $D_{\infty }$ -SE of lag $6$ from $(A, J)$ to $(B, K)$ and from $(A, J)$ to $(B, I)$ :

$$ \begin{align*}(D, E): (A, J) \sim (B, I) \; (\text{lag } 6) \quad \text{and} \quad (D, E): (A, J) \sim (B, K) \; (\text{lag } 6).\end{align*} $$

Direct computation shows that $(B^l, B^l)$ is a $D_{\infty }$ -SE from $(B, I)$ to $(B, K)$ :

$$ \begin{align*}(B^l, B^l): (B, I) \sim (B, K) \; (\text{lag } 2l)\end{align*} $$

for all positive integers l. This contrasts with the fact that the existence of an SE between two transition matrices implies that the corresponding $\mathbb{Z}$-TMCs share the same Artin–Mazur zeta functions. (See §7 in [8].)
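
The defining relations of a $D_{\infty}$-SE can be checked mechanically as well. The sketch below (ours) re-enters the matrices of Examples 7.1 and 7.2 and verifies the relations claimed above; `is_dinfty_SE` is an ad hoc name.

```python
import numpy as np

def is_dinfty_SE(A, J, B, K, D, E, lag):
    """Check A^l = DE, B^l = ED, AD = DB and E = K D^T J."""
    pw = np.linalg.matrix_power
    return (np.array_equal(pw(A, lag), D @ E) and
            np.array_equal(pw(B, lag), E @ D) and
            np.array_equal(A @ D, D @ B) and
            np.array_equal(E, K @ D.T @ J))

A = np.array([[1,1,0,0,0,0,0,0], [0,0,1,0,0,0,1,0], [0,0,0,1,0,1,0,0], [0,1,0,0,0,0,0,1],
              [1,0,0,0,1,0,0,0], [0,0,0,0,1,0,0,1], [0,0,1,0,0,1,0,0], [0,0,0,1,0,0,1,0]])
J = np.zeros((8, 8), dtype=int)
for i in range(4):
    J[i, i + 4] = J[i + 4, i] = 1
B = np.array([[1, 1], [1, 1]])
I2 = np.eye(2, dtype=int)
K = np.array([[0, 1], [1, 0]])
D = 2 * np.ones((8, 2), dtype=int)
E = 2 * np.ones((2, 8), dtype=int)

print(is_dinfty_SE(A, J, B, I2, D, E, 6))    # (A, J) ~ (B, I), lag 6
print(is_dinfty_SE(A, J, B, K, D, E, 6))     # (A, J) ~ (B, K), lag 6
for l in range(1, 4):                        # (B^l, B^l): (B, I) ~ (B, K), lag 2l
    Bl = np.linalg.matrix_power(B, l)
    print(is_dinfty_SE(B, I2, B, K, Bl, Bl, 2 * l))
```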

Example 7.3. We compare the flip signatures of $(A, J)$ , $(B, I)$ and $(B, K)$ . Direct computation shows that the index sets for the eventual kernels of A and B are

$$ \begin{align*} \mathcal{I}nd(\mathcal{K}(A)) = \{1, 6\} \quad \text{and} \quad \mathcal{I}nd(\mathcal{K}(B)) = \{1\}\end{align*} $$

and the flip signatures are

$$ \begin{align*}\mathrm{F.Sig}(A, J) = (-1, +1),\end{align*} $$
$$ \begin{align*}\mathrm{F.Sig}(B, I) = (+1)\end{align*} $$

and

$$ \begin{align*}\mathrm{F.Sig}(B, K) = (-1).\end{align*} $$

By Theorem D, we see that

$$ \begin{align*}(\textsf{X}_A, \sigma_A, \varphi_{A, J}) \ncong (\textsf{X}_B, \sigma_B, \varphi_{B, I}),\end{align*} $$
$$ \begin{align*}(\textsf{X}_A, \sigma_A, \varphi_{A, J}) \ncong (\textsf{X}_B, \sigma_B, \varphi_{B, K})\end{align*} $$

and

$$ \begin{align*}(\textsf{X}_B, \sigma_B, \varphi_{B, I}) \ncong (\textsf{X}_B, \sigma_B, \varphi_{B, K}). \end{align*} $$

The flip signature is completely determined by the eventual kernel of a transition matrix, while the Lind zeta functions and the existence of $D_{\infty}$-shift equivalence between two flip pairs rely on the eventual ranges of transition matrices. The nilpotency index of Ashley's eight-by-eight A on the eventual kernel $\mathcal{K}(A)$ is $6$. In the case of $(A, J)$ in Example 7.1, the number of periodic points $p_m(\sigma_A)$ is completely determined by the eventual range of A, the numbers of fixed points $p_{2m-1, 0}(\sigma_A, \varphi_{A, J})$ and $p_{2m, 1}(\sigma_A, \varphi_{A, J})$ are completely determined by the eventual range if $m \geq 7$, and $p_{2m, 0}(\sigma_A, \varphi_{A, J})$ is completely determined by the eventual range if $m \geq 6$. In Example 7.2, $(D, E)$ is actually the $D_{\infty}$-SE from $(A, J)$ to $(B, I)$ and from $(A, J)$ to $(B, K)$ having the smallest lag, which means that the existence of a $D_{\infty}$-SE from $(A, J)$ to $(B, I)$ or from $(A, J)$ to $(B, K)$ is not related to the eventual kernels of A and B at all. Similarly, the existence of a $D_{\infty}$-SE from $(B, I)$ to $(B, K)$ is not related to the eventual kernel of B at all. Therefore, neither the coincidence of the Lind zeta functions nor the existence of a $D_{\infty}$-shift equivalence is enough to guarantee the same number of $-1$s in the corresponding flip signatures or the coincidence of leading signatures. The following example shows that the flip signatures of two flip pairs can have the same number of $-1$s and share the same leading signature even when their non-zero eigenvalues are entirely different.

Example 7.4. Let A and B be the minimal zero-one transition matrices for the even shift and full two-shift, respectively:

$$ \begin{align*}A = \left[\begin{array}{rrr} 1 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 1 & 0 \end{array}\right] \quad \text{and} \quad B = \left[\begin{array}{rr} 1 & 1 \\ 1 & 1 \end{array}\right]\!.\end{align*} $$

If we set

$$ \begin{align*}J = \left[\begin{array}{rrr} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{array}\right] \quad \text{and} \quad K = \left[\begin{array}{rr} 1 & 0 \\ 0 & 1 \end{array}\right]\!,\end{align*} $$

then $(A, J)$ and $(B, K)$ are flip pairs. Let $\text {sp}^{\textsf {x}}(A)$ and $\text {sp}^{\textsf {x}}(B)$ be the sets of non-zero eigenvalues of A and B, respectively:

$$ \begin{align*}\text{sp}^{\textsf{x}}(A) = \bigg\{\frac{1+ \sqrt{5}}{2}, \frac{1-\sqrt{5}}{2}\bigg\} \quad \text{and} \quad \text{sp}^{\textsf{x}}(B) = \{2\}.\end{align*} $$

Because $\text {sp}^{\textsf {x}}(A)$ and $\text {sp}^{\textsf {x}}(B)$ do not coincide, $(\textsf {X}_A, \sigma _A)$ and $(\textsf {X}_B, \sigma _B)$ are not $\mathbb {Z}$ -conjugate, and hence $(\textsf {X}_A, \sigma _A, \varphi _{A, J})$ and $(\textsf {X}_B, \sigma _B, \varphi _{B, K})$ are not $D_{\infty }$ -conjugate. More precisely, $\text {sp}^{\textsf {x}}(A) \neq \text {sp}^{\textsf {x}}(B)$ implies that A and B are not shift-equivalent, and hence $(A, J)$ and $(B, K)$ are not $D_{\infty }$ -shift equivalent:

$$ \begin{align*}\text{sp}^{\textsf{x}}(A) \neq \text{sp}^{\textsf{x}}(B) \Rightarrow A \nsim B \Rightarrow (A, J) \nsim (B, K).\end{align*} $$

In addition, $\text{sp}^{\textsf{x}}(A) \neq \text{sp}^{\textsf{x}}(B)$ implies that the Artin–Mazur zeta functions $\zeta_A(t)$ and $\zeta_B(t)$ of $(\textsf{X}_A, \sigma_A)$ and $(\textsf{X}_B, \sigma_B)$ do not coincide (see Ch. 7 in [8]), and hence the Lind zeta functions $\zeta_{A, J}(t)$ and $\zeta_{B, K}(t)$ of $(\textsf{X}_A, \sigma_A, \varphi_{A, J})$ and $(\textsf{X}_B, \sigma_B, \varphi_{B, K})$ do not coincide:

$$ \begin{align*}\text{sp}^{\textsf{x}}(A) \neq \text{sp}^{\textsf{x}}(B) \Rightarrow \zeta_A(t) \neq \zeta_B(t) \Rightarrow \zeta_{A, J}(t) \neq \zeta_{B, K}(t).\end{align*} $$

However, the flip signatures of $(A, J)$ and $(B, K)$ are the same:

$$ \begin{align*}\mathrm{F.Sig}(A, J) = (+1) \quad \text{and} \quad \mathrm{F.Sig}(B, K) = (+1).\end{align*} $$
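
Since A and B each have a single zero eigenvalue, the eventual kernels here are one-dimensional and both index sets are $\{1\}$, so each flip signature reduces to the sign of $\langle u, u \rangle$ with respect to the corresponding flip matrix, evaluated at any non-zero kernel vector u (the unique cycle has u as both its initial and terminal vector). Assuming that $\langle x, y \rangle_J = x^{\textsf{T}}Jy$, as in §3, a quick check (ours):

```python
from sympy import Matrix

A = Matrix([[1, 1, 0], [0, 0, 1], [1, 1, 0]])
J = Matrix([[1, 0, 0], [0, 0, 1], [0, 1, 0]])
B = Matrix([[1, 1], [1, 1]])
K = Matrix.eye(2)                 # Example 7.4 uses the identity flip matrix for B

for name, M, F in (("(A, J)", A, J), ("(B, K)", B, K)):
    (u,) = M.nullspace()          # both kernels are one-dimensional here
    value = (u.T * F * u)[0, 0]   # the value <u, u> of the bilinear form given by F
    print(name, value, "+1" if value > 0 else "-1")
```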

In the following example, we see that the coincidence of the Lind zeta functions does not guarantee the existence of $D_{\infty }$ -SE between the corresponding flip pairs.

Example 7.5. Let

$$ \begin{align*}A=\left[\begin{array}{rrrrrrr} 1 & 1& 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 &0 & 0 & 1 \\ 1 & 1 & 1 & 0 & 1 & 0 & 0 \\ 1 & 1 & 1 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 & 1 & 0 & 1\end{array}\right], \quad B=\left[\begin{array}{rrrrrrr} 1 & 1& 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 1 & 1 & 1 & 0 \\ 0 & 0 & 1 & 1 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 &0 & 0 & 1 \\ 1 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 & 1 & 1 & 1\end{array}\right]\end{align*} $$

and

$$ \begin{align*}J=\left[\begin{array}{rrrrrrr} 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 \end{array}\right]\!.\end{align*} $$

The characteristic polynomials $\chi_A$ and $\chi_B$ of A and B are the same:

$$ \begin{align*}\chi_A(t) = \chi_B(t)=t(t-1)^4(t^2-3t+1).\end{align*} $$

We denote the zeros of $t^2 - 3t +1$ by $\lambda$ and $\mu$. Direct computation shows that $(A, J)$ and $(B, J)$ are flip pairs and that $(\textsf{X}_A, \sigma_A, \varphi_{A, J})$ and $(\textsf{X}_B, \sigma_B, \varphi_{B, J})$ share the same numbers of fixed points:

$$ \begin{align*} p_m &= 4 + \lambda^m + \mu^m,\\ p_{2m-1, 0} &= \frac{8 \lambda^{m}-3\lambda^{m-1}}{11 \lambda -4} + \frac{8 \mu^{m}-3 \mu^{m-1}}{11 \mu -4},\\ p_{2m, 0} &= \frac{\lambda^{m+1}}{11 \lambda -4} + \frac{\mu^{m+1}}{11 \mu -4},\\ p_{2m, 1} &= \frac{55\lambda^m -21\lambda^{m-1}}{11 \lambda -4} + \frac{55\mu^m -21\mu^{m-1}}{11 \mu -4} \quad (m=1, 2, \ldots). \end{align*} $$

As a result, they share the same Lind zeta functions:

$$ \begin{align*}\sqrt{\frac{1}{t^2(1-t^2)^4(1-3t^2+t^4)}} \exp \bigg(\frac{t+3t^2-t^3-2t^4}{1-3t^2+t^4}\bigg).\end{align*} $$

If there is a $D_{\infty}$-SE $(D, E)$ from $(A, J)$ to $(B, J)$, then $(D, E)$ is also an SE from A to B. It is well known [8] that the existence of an SE from A to B implies that A and B have the same Jordan forms away from zero, up to the order of the Jordan blocks. The Jordan canonical forms of A and B are given by

$$ \begin{align*}\left[\begin{array}{rrrrrrr} \lambda & & & & & & \\ & \mu & & & & & \\ & & 1 & 1 & 0 & 0 & \\ & & 0 & 1 &1 & 0 & \\ & & 0 & 0 & 1 &1 & \\ & & 0 & 0 & 0 & 1 & \\ & & & & & & 0 \end{array}\right] \quad \text{and} \quad \left[\begin{array}{rrrrrrr} \lambda & & & & & & \\ & \mu & & & & & \\ & & 1 & 1 & & & \\ & & 0 & 1 & & & \\ & & & & 1 &1 & \\ & & & & 0 & 1 & \\ & & & & & & 0 \end{array}\right],\end{align*} $$

respectively. From this, we see that $(A, J)$ cannot be $D_{\infty }$ -shift equivalent to $(B, J)$ .
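
Both facts used in this example can be confirmed with exact arithmetic. The sketch below (ours) compares the characteristic polynomials and then distinguishes the Jordan structures at the eigenvalue $1$ through the ranks of $(M-I)^k$; the drop in rank from $k-1$ to $k$ equals the number of Jordan blocks at $1$ of size at least k.

```python
from sympy import Matrix, eye, symbols

t = symbols('t')
A = Matrix([[1,1,1,0,0,0,0], [0,1,0,1,0,0,0], [0,0,1,0,0,1,0], [0,0,0,1,0,0,1],
            [1,1,1,0,1,0,0], [1,1,1,0,0,1,0], [0,0,0,1,1,0,1]])
B = Matrix([[1,1,0,0,0,0,0], [0,1,0,1,1,1,0], [0,0,1,1,1,1,0], [0,0,0,1,0,0,1],
            [1,0,0,0,1,0,0], [0,0,1,0,0,1,0], [0,0,0,1,1,1,1]])

pA, pB = A.charpoly(t).as_expr(), B.charpoly(t).as_expr()
print(pA.factor(), pA.equals(pB))   # expected: t*(t - 1)**4*(t**2 - 3*t + 1) and True

for name, M in (("A", A), ("B", B)):
    N = M - eye(7)
    # Per the displayed Jordan forms, A should give ranks [6, 5, 4, 3]
    # (one block of size 4 at the eigenvalue 1) and B should give [5, 3, 3, 3]
    # (two blocks of size 2), so A and B cannot be shift equivalent.
    print(name, [(N**k).rank() for k in range(1, 5)])
```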

Acknowledgements

The author gratefully acknowledges the support of FAPESP (Grant No. 2018/12482-3) and the Institute of Mathematics and Statistics at the University of São Paulo (IME-USP), as well as the referee's helpful comments.

References

[1] Artin, M. and Mazur, B. On periodic points. Ann. of Math. (2) 81 (1965), 82–99.
[2] Boyle, M. Open problems in symbolic dynamics. http://www.math.umd.edu/~mmb/.
[3] Friedberg, S. H., Insel, A. J. and Spence, L. E. Linear Algebra, 4th edn. Pearson Education, New Jersey, 2003.
[4] Kim, Y.-O., Lee, J. and Park, K. K. A zeta function for flip systems. Pacific J. Math. 209 (2003), 289–301.
[5] Kitchens, B. Symbolic Dynamics: One-Sided, Two-Sided and Countable State Markov Shifts. Springer, Berlin, 1998.
[6] Lamb, J. S. W. and Roberts, J. A. G. Time-reversal symmetry in dynamical systems: a survey. Phys. D 112(1–2) (1998), 1–39.
[7] Lind, D. A zeta function for ${\mathbb{Z}}^d$-actions. Ergodic Theory and ${\mathbb{Z}}^d$-Actions (London Mathematical Society Lecture Note Series, 228). Eds. Pollicott, M. and Schmidt, K. Cambridge University Press, Cambridge, 1996, pp. 433–450.
[8] Lind, D. and Marcus, B. Symbolic Dynamics and Coding. Cambridge University Press, Cambridge, 1995.
[9] Williams, R. F. Classification of subshifts of finite type. Ann. of Math. (2) 98 (1973), 120–153; Erratum, Ann. of Math. (2) 99 (1974), 380–381.