
ON EXTERIOR POWERS OF REFLECTION REPRESENTATIONS

Published online by Cambridge University Press:  06 October 2023

HONGSHENG HU*
Affiliation:
Beijing International Center for Mathematical Research, Peking University, No. 5 Yiheyuan Road, Haidian District, Beijing 100871, PR China

Abstract

In 1968, Steinberg [Endomorphisms of Linear Algebraic Groups, Memoirs of the American Mathematical Society, 80 (American Mathematical Society, Providence, RI, 1968)] proved a theorem stating that the exterior powers of an irreducible reflection representation of a Euclidean reflection group are again irreducible and pairwise nonisomorphic. We extend this result to a more general context where the inner product invariant under the group action may not necessarily exist.

Type
Research Article
Copyright
© The Author(s), 2023. Published by Cambridge University Press on behalf of Australian Mathematical Publishing Association Inc.

1 Introduction

In [9, Sections 14.1, 14.3], Steinberg proved the following theorem (see also [1, Ch. V, Section 2, Exercise 3], [3, Theorem 9.13], [5, Theorem 5.1.4] and [7, Section 24-3]).

Theorem 1.1 (Steinberg).

Let V be a finite-dimensional vector space endowed with an inner product (for example, a Euclidean space or a complex Hilbert space). Let $\{v_1, \dots , v_n\}$ be a basis of V and $W \subseteq \operatorname {GL}(V)$ be the group generated by (orthogonal) reflections with respect to these basis vectors. Suppose V is a simple W-module. Then the W-modules $\{\bigwedge ^d V \mid 0 \le d \le n\}$ are simple and pairwise nonisomorphic.

The proof relies on the existence of an inner product which stays invariant under the W-action. With the help of this inner product, the vector space $\bigwedge ^d V$ decomposes into a direct sum $\bigwedge ^d V^\prime \bigoplus (v \wedge \bigwedge ^{d-1} V^\prime )$ , where $V^\prime $ is a codimension-one subspace of V which is a simple module over a subgroup generated by fewer reflections, and v is a vector orthogonal to $V^\prime $ . The theorem is then proved by induction on the number of reflections.
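For orientation, the dimension count behind this decomposition is simply Pascal's rule (with $n = \dim V$ and $\dim V^\prime = n-1$):

$$ \begin{align*} \dim \bigwedge^d V = \binom{n}{d} = \binom{n-1}{d} + \binom{n-1}{d-1} = \dim \bigwedge^d V^\prime + \dim \bigg( v \wedge \bigwedge^{d-1} V^\prime \bigg), \end{align*} $$

since wedging with v is injective on $\bigwedge^{d-1} V^\prime$ (as $v \notin V^\prime$).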

We extend this result to a more general context, where the W-invariant inner product may not exist. The following is the main theorem.

Theorem 1.2. Let $\rho :W \to \operatorname {GL}(V)$ be an n-dimensional representation of a group W over a field $\mathbb {F}$ of characteristic 0. Suppose that $s_1, \dots , s_k \in W$ satisfy:

  1. (1) for each i, $s_i$ acts on V by a (generalised) reflection with reflection vector $\alpha _i$ of eigenvalue $\lambda _i$ (see Definition 2.1 for related notions);

  2. (2) the group W is generated by $\{s_1, \dots , s_k\}$ ;

  3. (3) the representation $(V,\rho )$ is irreducible;

  4. (4) for any pair $i,j$ of indices, $s_i \cdot \alpha _j \ne \alpha _j$ if and only if $s_j \cdot \alpha _i \ne \alpha _i$ .

Then the W-modules $\{\bigwedge ^d V \mid 0 \le d \le n\}$ are irreducible and pairwise nonisomorphic.

Remark 1.3. The condition (4) in Theorem 1.2 is a technical condition (automatically satisfied in the setting of Theorem 1.1), but it is not very restrictive. For example, if $s_i$ and $s_j$ are both of order $2$ (so that they generate a dihedral subgroup), and if $s_i \cdot \alpha _j = \alpha _j$ while $s_j \cdot \alpha _i \ne \alpha _i$ , then the order of $s_is_j$ in W must be $\infty $ . Moreover, $\langle \alpha _i,\alpha _j \rangle $ forms a subrepresentation of the subgroup $\langle s_i, s_j \rangle $ , and among the uncountably many two-dimensional representations of the infinite dihedral group $\langle s_i, s_j \rangle $ , only two violate the condition (4) (see [6, Section 2.2]).
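To illustrate the remark, consider the following example (constructed here purely for illustration, over a field of characteristic 0). Let V be two-dimensional with basis $\{\alpha_1, \alpha_2\}$ and let $s_1, s_2$ act by

$$ \begin{align*} s_1 \cdot \alpha_1 = -\alpha_1, \quad s_1 \cdot \alpha_2 = \alpha_2, \qquad s_2 \cdot \alpha_2 = -\alpha_2, \quad s_2 \cdot \alpha_1 = \alpha_1 + c \alpha_2 \quad (c \ne 0). \end{align*} $$

Then $s_1$ and $s_2$ are reflections of order 2 with reflection vectors $\alpha_1$ and $\alpha_2$ , we have $s_1 \cdot \alpha_2 = \alpha_2$ while $s_2 \cdot \alpha_1 \ne \alpha_1$ (so the condition (4) fails), and a direct computation gives $(s_1 s_2)^2 \cdot \alpha_1 = \alpha_1 - 2c \alpha_2$ and $(s_1 s_2)^2 \cdot \alpha_2 = \alpha_2$ . Hence $(s_1 s_2)^2$ is a nontrivial transvection and $s_1 s_2$ indeed has infinite order.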

In the paper [6], we construct and classify a class of representations of an arbitrary Coxeter group of finite rank, where the defining generators of the group act by (generalised) reflections. In view of the previous remark, most of these reflection representations satisfy the conditions of Theorem 1.2, so our result applies to them and yields many irreducible representations of the Coxeter group. (Note that in [6], we have seen that only a few reflection representations admit a nonzero bilinear form which is invariant under the group action. Consequently, the module $\bigwedge ^d V$ usually fails to decompose into the form $\bigwedge ^d V^\prime \bigoplus (v \wedge \bigwedge ^{d-1} V^\prime )$ as it does in the proof of Theorem 1.1. Even if such a decomposition exists, the subspace $V^\prime $ may not be a simple module over a suitable subgroup. Therefore, the arguments used in the proof of Theorem 1.1 usually fail in the context of Theorem 1.2.)

This paper is organised as follows. In Sections 2–4, we revisit basic concepts and provide the background we need concerning (generalised) reflections, exterior powers and graphs. In Section 5, we prove our main theorem. In Section 6, we present some byproducts, including a description of the subspace of an exterior power that is fixed pointwise by a set of reflections, and a Poincaré-like duality on exterior powers. In Section 7, we raise several interesting questions that have yet to be resolved.

2 Generalised reflections

Let $\mathbb {F}$ be a field and V be a finite-dimensional vector space over $\mathbb {F}$ .

Definition 2.1.

  1. (1) A linear map $s:V \to V$ is called a (generalised) reflection if s is diagonalisable and $\operatorname {\mathrm {rank}}(s - \mathrm {Id}_V) = 1$ .

  2. (2) Suppose s is a reflection on V. The hyperplane $H_s : = \ker (s - \mathrm {Id}_V)$ , which is fixed pointwise by s, is called the reflection hyperplane of s. Let $\alpha _s$ be a nonzero vector in $\operatorname {\mathrm {Im}}(s - \mathrm {Id}_V)$ . Then, $s \cdot \alpha _s = \lambda _s \alpha _s$ for some $\lambda _s \in \mathbb {F} \setminus \{1\}$ and $\alpha _s$ is called a reflection vector of s.

Note that if s is an invertible map, then $\lambda _s \ne 0$ .
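Two simple examples, included only to fix ideas. Over $\mathbb{C}$ , the map $s = \operatorname{diag}(\zeta, 1, \dots, 1)$ with $\zeta \ne 1$ is a (generalised) reflection: it is diagonal, $\operatorname{\mathrm{rank}}(s - \mathrm{Id}_V) = 1$ , its reflection vector is the first basis vector and $\lambda_s = \zeta$ . By contrast, a transvection t on a space of dimension at least two, say

$$ \begin{align*} t \cdot e_1 = e_1, \quad t \cdot e_2 = e_2 + e_1, \quad t \cdot e_i = e_i \ (i \ge 3), \end{align*} $$

satisfies $\operatorname{\mathrm{rank}}(t - \mathrm{Id}_V) = 1$ but is not diagonalisable, so it is not a reflection in the sense of Definition 2.1.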

Lemma 2.2. Let s be a reflection on V. Then there exists a nonzero linear function $f : V \to \mathbb {F}$ such that $s \cdot v = v + f(v) \alpha _s$ for any $v \in V$ .

Proof. Note that ${V = H_s \bigoplus \mathbb {F} \alpha _s}$ . Any vector v can be written in the form ${v = v_s + c_v \alpha _s}$ where $v_s \in H_s$ and $c_v \in \mathbb {F}$ . Then

$$ \begin{align*}s \cdot v = v_s + \lambda_s c_v \alpha_s = v + (\lambda_s - 1) c_v \alpha_s.\end{align*} $$

The linear function $f: v \mapsto (\lambda _s - 1) c_v$ is the desired function.

3 Exterior powers

In this section, let W be a group and $\rho : W \to \operatorname {GL}(V)$ be a representation of W, where V is an n-dimensional vector space over the base field $\mathbb {F}$ . Let $\bigwedge ^d V$ ( $0 \le d\le n$ ) be the dth exterior power of V. The representation $\bigwedge ^d \rho $ of W on $\bigwedge ^d V$ is given by

$$ \begin{align*}w \cdot (v_1 \wedge \dots \wedge v_d) = (w \cdot v_1) \wedge \dots \wedge (w \cdot v_d).\end{align*} $$

By convention, $\bigwedge ^0 V$ is the one-dimensional W-module with trivial action, while $\bigwedge ^n V$ carries the one-dimensional representation $\det \circ \rho $ . For more details, one may refer to [4]. The following is a well-known fact.
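As a quick sanity check of the last assertion in dimension two (a routine computation, included only for illustration): if $n = 2$ and $w \cdot v_1 = a v_1 + c v_2$ , $w \cdot v_2 = b v_1 + d v_2$ with respect to a basis $\{v_1, v_2\}$ of V, then

$$ \begin{align*} w \cdot (v_1 \wedge v_2) = (a v_1 + c v_2) \wedge (b v_1 + d v_2) = (ad - bc)\, v_1 \wedge v_2 = \det(\rho(w))\, v_1 \wedge v_2, \end{align*} $$

so $\bigwedge^2 V$ indeed carries the character $\det \circ \rho$ .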

Lemma 3.1. Suppose $\{\alpha _1, \dots , \alpha _n\}$ is a basis of V. Then,

$$ \begin{align*}\{ \alpha_{i_1} \wedge \dots \wedge \alpha_{i_d} \mid 1 \le i_1 < \dots < i_d \le n\}\end{align*} $$

is a basis of $\bigwedge ^d V\ (0 \le d \le n)$ . In particular, $\dim \bigwedge ^d V = \binom {n}{d}$ .

Suppose an element s of W acts on V by a reflection, with reflection hyperplane $H_s$ and reflection vector $\alpha _s$ of eigenvalue $\lambda _s$ (see Definition 2.1). Note that W is a group and s is invertible. Thus, $\lambda _s \ne 0$ . We define

(3.1) $$ \begin{align} V_{d,s}^+ = \bigg\{v \in \bigwedge^d V \mid s \cdot v = v\bigg\}, \quad V_{d,s}^- = \bigg\{v \in \bigwedge^d V \mid s \cdot v = \lambda_s v\bigg\} \end{align} $$

to be the eigen-subspaces of s in $\bigwedge ^d V$ , for the eigenvalues $1$ and $\lambda _s$ , respectively.

Lemma 3.2 (See [5, Lemma 5.1.2] and the proof of [3, Proposition 9.12]).

Let $W, s, V$ be as above. Suppose $\{v_1, \dots , v_{n-1}\}$ is a basis of $H_s$ . Then, $V_{d,s}^+\ (0 \le d \le n)$ has a basis

(3.2) $$ \begin{align} \{ v_{i_1} \wedge \dots \wedge v_{i_d} \mid 1 \le i_1 < \dots < i_d \le n-1 \}, \end{align} $$

and $V_{d,s}^-$ has a basis

(3.3) $$ \begin{align} \{ \alpha_s \wedge v_{i_1} \wedge \dots \wedge v_{i_{d-1}} \mid 1 \le i_1 < \dots < i_{d-1} \le n-1 \}. \end{align} $$

In particular, $\dim V_{d,s}^+ = \binom {n-1}{d}$ , $\dim V_{d,s}^- = \binom {n-1}{d-1}$ and $\bigwedge ^d V = V_{d,s}^+ \bigoplus V_{d,s}^-$ . (Here we regard $\binom {n-1}{n} = \binom {n-1}{-1} = 0$ .)

Proof. Note that $\{\alpha _s, v_1, \dots , v_{n-1}\}$ is a basis of V. Denote by $B^+$ and $B^-$ the two sets of vectors in (3.2) and (3.3), respectively. Then the disjoint union $B^+ \cup B^-$ is a basis of $\bigwedge ^d V$ by Lemma 3.1. Clearly, $B^+ \subseteq V_{d,s}^+$ and $B^- \subseteq V_{d,s}^-$ . Therefore, ${\bigwedge ^d V = V_{d,s}^+ \bigoplus V_{d,s}^-}$ and the result follows.

Corollary 3.3. Let $W, s, V$ be as above.

  1. (1) We have $V_{d,s}^+ = \bigwedge ^d H_s$ . Here, $\bigwedge ^d H_s$ is regarded as a subspace of $\bigwedge ^d V$ naturally.

  2. (2) Extend $\alpha _s$ arbitrarily to a basis of V, say, $\{\alpha _s, \alpha _2, \dots , \alpha _n\}$ . Then, $V_{d,s}^-\ (0 \le d \le n)$ has a basis

    (3.4) $$ \begin{align} \{ \alpha_s \wedge \alpha_{i_1} \wedge \dots \wedge \alpha_{i_{d-1}} \mid 2 \le i_1 < \dots < i_{d-1} \le n \}. \end{align} $$

Proof. Part (1) of Corollary 3.3 follows directly from Lemma 3.2. For (2), suppose ${s \cdot \alpha _i = \alpha _i + c_i \alpha _s}$ ( $c_i \in \mathbb {F}$ ) for $i = 2, \dots , n$ (see Lemma 2.2). Then,

$$ \begin{align*} s \cdot (\alpha_s \wedge \alpha_{i_1} \wedge \dots \wedge \alpha_{i_{d-1}}) & = (\lambda_s \alpha_s) \wedge (\alpha_{i_1} + c_{i_1} \alpha_s) \wedge \dots \wedge (\alpha_{i_{d-1}} + c_{i_{d-1}} \alpha_s) \\ & = \lambda_s \alpha_s \wedge \alpha_{i_1} \wedge \dots \wedge \alpha_{i_{d-1}}. \end{align*} $$

Thus, $\alpha _s \wedge \alpha _{i_1} \wedge \dots \wedge \alpha _{i_{d-1}} \in V_{d,s}^-$ . Note that by Lemma 3.1, the $\binom {n-1}{d-1}$ vectors in (3.4) are linearly independent, and that $\dim V_{d,s}^- = \binom {n-1}{d-1}$ by Lemma 3.2. Thus, the vectors in (3.4) form a basis of $V_{d,s}^-$ .

For a subset $B \subset V$ , we denote by $\langle B \rangle $ the linear subspace spanned by B. By convention, $\langle \emptyset \rangle = 0$ . The following lemma is elementary linear algebra.

Lemma 3.4. Let $B \subseteq V$ be a basis of V and $B_i \subseteq B\ (i \in I)$ be a family of subsets of B. Then, $\bigcap _{i \in I} \langle B_i \rangle = \langle \bigcap _{i \in I} B_i \rangle $ .
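The hypothesis that all the $B_i$ are subsets of one fixed basis B cannot be dropped; the following standard example is included only for emphasis. In $V = \mathbb{F}^2$ with standard basis $\{e_1, e_2\}$ , take $B_1 = \{e_1, e_2\}$ and $B_2 = \{e_1 + e_2\}$ . Then

$$ \begin{align*} \langle B_1 \rangle \cap \langle B_2 \rangle = \langle e_1 + e_2 \rangle \ne 0 = \langle B_1 \cap B_2 \rangle, \end{align*} $$

since $B_1$ and $B_2$ are not both contained in a single basis of V.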

Using the lemmas above, we can deduce the following results which will be used in the proof of our main theorem.

Proposition 3.5. Let $W, V$ be as above. Suppose $s_1, \dots , s_k \in W$ are such that, for each i, $s_i$ acts on V by a reflection with reflection vector $\alpha _i$ of eigenvalue $\lambda _i$ . Suppose $\alpha _1, \dots , \alpha _k$ are linearly independent. We extend these vectors to a basis of V, say, $\{\alpha _1, \dots , \alpha _k, \alpha _{k+1}, \dots , \alpha _n\}$ .

  1. (1) If $0 \le d < k$ , then $\bigcap _{1 \le i\le k} V_{d, s_i}^- = 0$ .

  2. (2) If $k \le d \le n$ , then $\bigcap _{1 \le i\le k} V_{d, s_i}^-$ has a basis

    $$ \begin{align*} \{\alpha_1 \wedge \dots \wedge \alpha_k \wedge \alpha_{j_{k+1}} \wedge \dots \wedge \alpha_{j_d} \mid k+1 \le j_{k+1} < \dots < j_d \le n\}. \end{align*} $$
    In particular, if $d = k$ , then $\bigcap _{1 \le i\le k} V_{k, s_i}^-$ is one-dimensional with a basis vector $\alpha _1 \wedge \dots \wedge \alpha _k$ .

Proof. By Corollary 3.3(2), $V_{d,s_i}^-$ ( $1 \le i \le k$ ) has a basis

$$ \begin{align*}B_i := \{\alpha_{j_1} \wedge \dots \wedge \alpha_{j_d} \mid 1 \le j_1 < \dots < j_d \le n, \text{ and}\ j_l = i\ \text{for some}\ l\}.\end{align*} $$

Note that by Lemma 3.1, the ambient space $\bigwedge ^d V$ has a basis

$$ \begin{align*}B : = \{\alpha_{j_1} \wedge \dots \wedge \alpha_{j_d} \mid 1 \le j_1 < \dots < j_d \le n\}\end{align*} $$

and $B_i \subseteq B$ , for all $i = 1, \dots , k$ . By Lemma 3.4,

$$ \begin{align*}\bigcap_{i=1}^{k} V_{d,s_i}^- = \bigcap_{i=1}^{k} \langle B_i \rangle = \bigg\langle \bigcap_{i=1}^k B_i \bigg\rangle.\end{align*} $$

If $0 \le d < k$ , then $\bigcap _{1 \le i \le k} B_i = \emptyset $ and $\bigcap _{1 \le i\le k} V_{d, s_i}^- = 0$ . If $k \le d \le n$ , then

$$ \begin{align*}\bigcap_{i=1}^k B_i = \{\alpha_1 \wedge \dots \wedge \alpha_k \wedge \alpha_{j_{k+1}} \wedge \dots \wedge \alpha_{j_d} \mid k+1 \le j_{k+1} < \dots < j_d \le n\}\end{align*} $$

and $\bigcap _{1 \le i \le k} B_i$ is a basis of $\bigcap _{1 \le i\le k} V_{d, s_i}^-$ .
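To make Proposition 3.5 concrete, here is the smallest nontrivial instance, spelled out only as an illustration: take $n = 3$ and $k = d = 2$ , so that $\{\alpha_1, \alpha_2, \alpha_3\}$ is a basis of V. By Corollary 3.3(2),

$$ \begin{align*} V_{2,s_1}^- = \langle \alpha_1 \wedge \alpha_2, \ \alpha_1 \wedge \alpha_3 \rangle, \quad V_{2,s_2}^- = \langle \alpha_1 \wedge \alpha_2, \ \alpha_2 \wedge \alpha_3 \rangle, \quad V_{2,s_1}^- \cap V_{2,s_2}^- = \langle \alpha_1 \wedge \alpha_2 \rangle, \end{align*} $$

in agreement with part (2) in the case $d = k$ .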

Proposition 3.6 (See the proofs of [3, Theorem 9.13] and [5, Theorem 5.1.4]).

Let $W, V$ be as above. Suppose there exists $s \in W$ such that s acts on V by a reflection. If $0 \le k,l \le n$ are integers and $\bigwedge ^k V \simeq \bigwedge ^l V$ as W-modules, then $k = l$ .

Proof. If $\bigwedge ^k V \simeq \bigwedge ^l V$ , then $\dim \bigwedge ^k V = \dim \bigwedge ^l V$ and $\dim V_{k,s}^+ = \dim V_{l,s}^+$ . By Lemmas 3.1 and 3.2, this is equivalent to

$$ \begin{align*} \binom{n}{k} = \binom{n}{l}, \quad \binom{n-1}{k} = \binom{n-1}{l}. \end{align*} $$

The two equalities force $k = l$ : the first gives $l = k$ or $l = n-k$ , and if $l = n-k \ne k$ , then $\binom{n-1}{l} = \binom{n-1}{n-k} = \binom{n-1}{k-1}$ , so the second equality reads $\binom{n-1}{k} = \binom{n-1}{k-1}$ , which holds only when $n = 2k$ , that is, when $l = n-k = k$ , a contradiction.

The following lemma is due to Chevalley [2, page 88] (see also [8, Corollary 22.45]).

Lemma 3.7 (Chevalley).

Let $\mathbb {F}$ be a field of characteristic 0. Let W be a group and V, U be finite-dimensional semisimple W-modules over $\mathbb {F}$ . Then $V \bigotimes U$ is a semisimple W-module.

Note that the W-module $\bigwedge ^d V$ can be regarded as a submodule of $\bigotimes ^d V$ via the natural embedding

$$ \begin{align*}\bigwedge^d V \hookrightarrow \bigotimes^d V, \quad v_1 \wedge \dots \wedge v_d \mapsto \sum_{\sigma \in \mathfrak{S}_d} \operatorname{sgn}(\sigma) v_{\sigma(1)} \otimes \dots \otimes v_{\sigma(d)}.\end{align*} $$

(The notation $\mathfrak {S}_d$ denotes the symmetric group on d elements.) Therefore, we have the following corollary.

Corollary 3.8. Let $\mathbb {F}$ be a field of characteristic 0. Let W be a group and V be a finite-dimensional simple W-module over $\mathbb {F}$ . Then the W-module $\bigwedge ^d V$ is semisimple.

4 Some lemmas on graphs

By definition, an (undirected) graph $G = (S,E)$ consists of a set S of vertices and a set E of edges, where each edge in E is a two-element subset $\{s,t\}$ of S. For our purpose, we only consider finite graphs without loops or multiple edges (that is, S is a finite set, there is no edge of the form $\{s,s\}$ and each pair $\{s,t\}$ occurs at most once in E).

A sequence $(s_1, s_2, \dots , s_n)$ of vertices is called a path in G if $\{s_i, s_{i+1}\} \in E$ , for all i. In this case, we say that the two vertices $s_1$ and $s_n$ are connected by the path. A graph G is called connected if any two vertices are connected by a path.

Definition 4.1. Let $G = (S,E)$ be a graph and $I \subseteq S$ be a subset. We set ${E(I) := \{\{s,t\} \in E \mid s,t \in I\}}$ to be the set of edges with vertices in I, and call the graph ${G(I) : = (I, E(I))}$ the subgraph of G spanned by I.

Definition 4.2. Let $G = (S,E)$ be a graph and $I \subseteq S$ be a subset. Suppose there exist vertices $r \in I$ and $t\in S \setminus I$ such that $\{r,t\} \in E$ is an edge. Let $I^\prime : = (I \setminus \{r\} ) \cup \{t\}$ . Then we say $I^\prime $ is obtained from I by a move.

Intuitively, $I^\prime $ is obtained from I by moving the vertex r to the vertex t along the edge $\{r,t\}$ . In particular, $\lvert I \rvert = \lvert I^\prime \rvert $ .
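For example, in the path graph with vertices $S = \{1,2,3,4\}$ and edges $\{1,2\}, \{2,3\}, \{3,4\}$ , the subset $J = \{3,4\}$ is obtained from $I = \{1,2\}$ by the sequence of moves

$$ \begin{align*} \{1,2\} \longrightarrow \{1,3\} \longrightarrow \{2,3\} \longrightarrow \{2,4\} \longrightarrow \{3,4\}, \end{align*} $$

each step moving one vertex along one edge. Lemma 4.3 below says that this is always possible in a connected graph.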

We shall need the following lemmas in the proof of our main theorem.

Lemma 4.3. Let $G = (S, E)$ be a connected graph. Let $I, J \subseteq S$ be subsets with cardinality $\lvert I \rvert = \lvert J \rvert = d$ . Then J can be obtained from I by finitely many moves.

Proof. We argue by downward induction on $\lvert I \cap J \rvert $ . If $\lvert I \cap J \rvert =d$ , then $I=J$ and there is nothing to prove.

If $I \ne J$ , then there exist vertices $r \in I \setminus J$ and $t \in J \setminus I$ . Since G is connected, there is a path connecting r and t, say,

$$ \begin{align*}(r = r_0, r_1,r_2, \dots, r_l = t).\end{align*} $$

Let $0 = i_0 < i_1 < \dots < i_k < l$ be the indices such that $\{r_{i_0}, r_{i_1}, \dots , r_{i_k} \}$ is the set of vertices in I on this path, that is, $\{r_{i_0}, \dots , r_{i_k} \} = \{r_i \mid 0 \le i < l, r_i \in I\}$ .

Clearly, $r_{i_k+1}, r_{i_k+2}, \dots , r_l \notin I$ . So beginning with I, we can move $r_{i_k}$ to $r_{i_k+1}$ , then to ${r_{i_k+2}}$ , and finally to t. Therefore, the set $I_1 := (I \setminus \{r_{i_k}\}) \cup \{t\}$ can be obtained from I by finitely many moves. Similarly, from $I_1$ , we can move $r_{i_{k-1}}$ to $r_{i_k}$ so that we obtain

$$ \begin{align*}I_2 : = (I_1 \setminus \{r_{i_{k-1}}\}) \cup \{r_{i_k}\} = (I \setminus \{r_{i_{k-1}}\}) \cup \{t\}.\end{align*} $$

Doing this recursively, we finally obtain $I_{k+1} := (I \setminus \{r_{i_{0}}\}) \cup \{t\} = (I \setminus \{r\}) \cup \{t\}$ from I by finitely many moves.

Moreover, we have $\lvert I_{k+1} \cap J \rvert = \lvert I \cap J \rvert + 1$ . By the induction hypothesis, J can be obtained from $I_{k+1}$ by finitely many moves. It follows that J can be obtained from I by finitely many moves.

Lemma 4.4. Let $G = (S, E)$ be a connected graph. Suppose $I \subseteq S$ is a subset such that $\lvert I \rvert \ge 2$ and, for any $t \in S \setminus I$ , one of the following conditions is satisfied:

  1. (1) for any $r \in I$ , $\{r,t\}$ is not an edge;

  2. (2) there exist at least two vertices $r,r^\prime \in I$ such that $\{r,t\}, \{r^\prime ,t\} \in E$ .

Then there exists $s \in I$ such that the subgraph $G(S \setminus \{s\})$ is connected.

Proof. For two vertices $r,t \in S$ , define the distance $\operatorname {d}(r,t)$ in G to be

$$ \begin{align*} \operatorname{d}(r,t) := \min \{m \in \mathbb{N} & \mid \exists r_1, \dots, r_{m-1}\in S \text{ such that } (r, r_1, \dots, r_{m-1}, t) \text{ is a path in } G\}. \end{align*} $$

Let $m = \max \{\operatorname {d}(r,t) \mid r,t \in I\}$ , and suppose $s, s^\prime \in I$ such that $\operatorname {d}(s, s^\prime ) = m$ .

We claim first that

(4.1) $$ \begin{align} \text{for any}\ r \in I \setminus\{s\}, r\ \text{is connected to}\ s^\prime\ \text{in the subgraph}\ G(S \setminus \{s\}). \end{align} $$

Otherwise, any path in G connecting r and $s^\prime $ (note that G is connected) must pass through s, and such a path has length at least $\operatorname {d}(r,s) + \operatorname {d}(s,s^\prime ) \ge 1 + m$ . It follows that $\operatorname {d}(r, s^\prime )> m = \operatorname {d}(s, s^\prime )$ with $r, s^\prime \in I$ , which contradicts our choice of m.

Next we claim that

(4.2) $$ \begin{align} \text{for any}\ t \in S \setminus \{s\},\ t\ \text{is connected to}\ s^\prime\ \text{in the subgraph}\ G(S \setminus \{s\}), \end{align} $$

and therefore, $G(S \setminus \{s\})$ is connected.

In (4.2), the case where $t \in I \setminus \{s\}$ has been settled in the claim (4.1). Thus, we may assume $t \in S \setminus I$ . Let $(t = t_0, t_1, \dots , t_p = s)$ be a path of minimal length in G connecting t and s. Then, $t_1, \dots , t_{p-1} \in S \setminus \{s\}$ . In particular, t is connected to $t_{p-1}$ in $G(S \setminus \{s\})$ . If $t_{p-1} \in I$ , then by the claim (4.1), $t_{p-1}$ is connected to $s^\prime $ in $G(S \setminus \{s\})$ . Thus, t is connected to $s^\prime $ in $G(S \setminus \{s\})$ as desired. If $t_{p-1} \notin I$ , then, since $\{t_{p-1}, s\} \in E$ , there is another vertex $r \in I \setminus \{s\}$ such that $\{t_{p-1},r\} \in E$ . By the claim (4.1) again, r is connected to $s^\prime $ in ${G(S \setminus \{s\})}$ , and so is t. See Figure 1 for an illustration.

Figure 1 Illustration for the proof of Lemma 4.4.

Remark 4.5. Applying Lemma 4.4 to the trivial case $I = S$ , we recover the following simple fact (see also the hint of [Reference Bourbaki1, Ch. V, Section 2, Exercise 3(d)]): if $G = (S,E)$ is a connected graph, then there exists $s \in S$ such that $G(S \setminus \{s\})$ is connected.

5 Proof of Theorem 1.2

This section is devoted to proving Theorem 1.2.

By Proposition 3.6, we see that the W-modules $\{\bigwedge ^d V \mid 0 \le d \le n\}$ are pairwise nonisomorphic. Thus, to prove Theorem 1.2, it suffices to show that $\bigwedge ^d V$ is a simple W-module for each fixed d.

By Corollary 3.8, the W-module $\bigwedge ^d V$ is semisimple. Therefore, the problem reduces to proving

(5.1) $$ \begin{align} \text{any endomorphism of}\ \bigwedge^d V\ \text{is a scalar multiplication.} \end{align} $$

Let $S := \{1, \dots , k\}$ and $E := \{\{i, j \} \mid i \ne j, \ s_i \cdot \alpha _j \ne \alpha _j\}$ . By the condition (4) of Theorem 1.2, this set of unordered pairs is well defined, and $G = (S, E)$ is a graph in the sense of Section 4.

Claim 5.1. G is a connected graph.

Proof. Otherwise, suppose $S = I \sqcup J$ with I and J nonempty, such that for any $i \in I$ and $j \in J$ , $\{i, j\}$ is never an edge, that is, $s_j \cdot \alpha _i = \alpha _i$ . If $V = V_I :=\langle \alpha _i \mid i \in I \rangle $ , then for any $j \in J$ , $\alpha _j$ is a linear combination of $\{\alpha _i \mid i \in I\}$ . It follows that $s_j \cdot \alpha _j = \alpha _j$ , which is absurd since $s_j \cdot \alpha _j = \lambda _j \alpha _j$ with $\lambda _j \ne 1$ . Therefore, ${V_I \ne V}$ .

By Lemma 2.2, $s_i \cdot V_I \subseteq V_I$ for any $i \in I$ . However, for any $j \in J$ , $s_j$ acts trivially on $V_I$ . Since W is generated by $\{s_1, \dots , s_k\}$ , $V_I$ is closed under the action of W, that is, $V_I$ is a nonzero proper submodule. This contradicts the assumption that V is a simple W-module.

Claim 5.2. V is spanned by $\{\alpha _1, \dots , \alpha _k\}$ . In particular, $n \le k$ (where $n = \dim V$ ).

Proof. Let $U = \langle \alpha _1, \dots , \alpha _k \rangle \subseteq V$ . By Lemma 2.2, $s_i \cdot U \subseteq U$ for any $i \in S$ . Thus, U is a W-submodule. However, V is a simple W-module. So $U = V$ .

Claim 5.3. There exists a subset $I \subseteq S$ , such that:

  1. (1) $\{\alpha _i \mid i \in I\}$ is a basis of V;

  2. (2) the subgraph $G(I)$ (see Definition 4.1) is connected.

Proof. Suppose we have found a subset $J \subseteq S$ such that:

  1. (a) V is spanned by $\{\alpha _i \mid i \in J\}$ ;

  2. (b) the subgraph $G(J)$ is connected.

For example, S itself is such a subset by Claims 5.1 and 5.2. If the vectors ${\{\alpha _i \mid i\in J\}}$ are linearly independent, then we are done.

Now suppose $\{\alpha _i \mid i\in J\}$ are linearly dependent. By a permutation of indices, we may assume $J = \{1, \dots , h\}$ , $h \le k$ , and

(5.2) $$ \begin{align} c_1 \alpha_1 + \dots + c_l \alpha_l = 0 \quad\text{for some}\ c_1, \dots, c_l \in \mathbb{F}^\times, l \le h. \end{align} $$

If there exists $j \in J$ such that $j \ge l+1$ and $s_j \cdot \alpha _i \ne \alpha _i$ for some $i \le l$ , then

$$ \begin{align*}s_j \cdot (c_1 \alpha_1 + \dots + \widehat{c_i \alpha_i} + \dots + c_l \alpha_l) \ne c_1 \alpha_1 + \dots + \widehat{c_i \alpha_i} + \dots + c_l \alpha_l.\end{align*} $$

Here, $\widehat {c_i \alpha _i}$ means that this term is omitted; the displayed inequality holds because, by (5.2), $c_1 \alpha _1 + \dots + \widehat {c_i \alpha _i} + \dots + c_l \alpha _l = -c_i \alpha _i$ , and $s_j \cdot (-c_i \alpha _i) = -c_i (s_j \cdot \alpha _i) \ne -c_i \alpha _i$ since $c_i \ne 0$ . Thus, there is an index $i^\prime $ with $i^\prime \le l$ and $i^\prime \ne i$ such that $s_j \cdot \alpha _{i^\prime } \ne \alpha _{i^\prime }$ . In other words, if $l+1 \le j \le h$ , then one of the following is satisfied:

  1. (1) for any $i \le l$ , $\{i,j\}$ is never an edge;

  2. (2) there exist at least two indices $i, i^\prime \le l$ such that $\{i,j\}, \{i^\prime , j\} \in E$ .

Applying Lemma 4.4 to the connected graph $G(J)$ and the subset $\{1, \dots , l\} \subseteq J$ (note that $l \ge 2$ , since the $\alpha _i$ are nonzero and the $c_i$ are nonzero), we see that there is an index $i_0 \le l$ such that the subgraph $G(J \setminus \{i_0\})$ is connected. Moreover, V is spanned by ${\{\alpha _i \mid i \in J \setminus \{i_0\}\}}$ by our assumption (5.2). Thus, $J \setminus \{i_0\}$ satisfies the conditions (a) and (b), and $J \setminus \{i_0\}$ has a smaller cardinality than J.

Applying the argument above recursively, we eventually obtain a subset $I \subseteq S$ as claimed.

Now suppose, after a permutation of indices, that $I = \{1, \dots , n\} \subseteq S$ (where $n = \dim V$ ) is the subset obtained in Claim 5.3. By Lemma 3.1, $\bigwedge ^d V$ has a basis

$$ \begin{align*}\{ \alpha_{i_1} \wedge \dots \wedge \alpha_{i_d} \mid 1 \le i_1 < \dots < i_d \le n\}.\end{align*} $$

Note that for any such basis vector $\alpha _{i_1} \wedge \dots \wedge \alpha _{i_d}$ , the vectors $\alpha _{i_1}, \dots , \alpha _{i_d}$ of V are linearly independent.

Claim 5.4. For any indices $1 \le i_1 < \dots < i_d \le n$ , the subspace $\bigcap _{1 \le j \le d} V_{d, s_{i_j}}^-$ is one-dimensional with a basis vector $\alpha _{i_1} \wedge \dots \wedge \alpha _{i_d}$ (the subspace $V_{d,s_i}^-$ is defined in (3.1)).

Proof. Apply Proposition 3.5(2) to $s_{i_1}, \dots , s_{i_d} \in W$ .

Now suppose $\varphi : \bigwedge ^d V \to \bigwedge ^d V$ is an endomorphism of the W-module. For any ${i \in I}$ and any $v \in V_{d,s_i}^-$ , we have $s_i \cdot \varphi (v) = \varphi (s_i \cdot v) = \lambda _i \varphi (v)$ . Thus, $\varphi (v) \in V_{d,s_i}^-$ . Therefore, for any indices $1 \le i_1 < \dots < i_d \le n$ , we have $\varphi (\bigcap _{1 \le j \le d} V_{d, s_{i_j}}^-) \subseteq \bigcap _{1 \le j \le d} V_{d, s_{i_j}}^-$ . By Claim 5.4,

$$ \begin{align*}\varphi (\alpha_{i_1} \wedge \dots \wedge \alpha_{i_d}) = \gamma_{i_1, \dots, i_d} \cdot \alpha_{i_1} \wedge \dots \wedge \alpha_{i_d} \quad\text{ for some } \gamma_{i_1, \dots, i_d} \in \mathbb{F}.\end{align*} $$

To prove the statement (5.1), it suffices to show that the coefficients $\gamma _{i_1, \dots , i_d}$ are constant among all choices of $i_1, \dots , i_d$ . We may assume $d \le n-1$ (if $d = n$ , there is only one such coefficient and nothing to prove).

Claim 5.5. Let $I_1 = \{i_1 < \dots < i_d\}$ and $I_2 = \{j_1 < \dots < j_d\}$ be two d-element subsets of I. Suppose $I_2$ can be obtained from $I_1$ by a move (see Definition 4.2) in the graph $G(I)$ . Then, $\gamma _{i_1, \dots , i_d} = \gamma _{j_1, \dots , j_d}$ .

Proof. To simplify notation, we assume $I_1 = \{1, \dots , d\}$ , $I_2 = (I_1 \setminus \{d\}) \cup \{d+1\}$ and $\{d, d+1\} \in E$ is an edge. In view of Lemma 2.2, for $i = 1, \dots , d$ , we assume

$$ \begin{align*}s_{d+1} \cdot \alpha_i = \alpha_i + c_i \alpha_{d+1}, \quad c_i \in \mathbb{F}. \end{align*} $$

Then, $c_d \ne 0$ . We have

$$ \begin{align*} s_{d+1} \cdot (\alpha_1 \wedge \dots \wedge \alpha_d) & = (\alpha_1 + c_1 \alpha_{d+1}) \wedge \dots \wedge (\alpha_d + c_d \alpha_{d+1})\\ & = \alpha_1 \wedge \dots \wedge \alpha_d + \sum_{i=1}^{d} (-1)^{d-i} c_i \cdot \alpha_1 \wedge \dots \wedge \widehat{\alpha}_i \wedge \dots \wedge \alpha_d \wedge \alpha_{d+1}. \end{align*} $$

Hence,

$$ \begin{align*} \varphi (s_{d+1} \cdot (\alpha_1 \wedge \dots \wedge \alpha_d)) & = \varphi \bigg(\alpha_1 \wedge \dots \wedge \alpha_d + \sum_{i=1}^{d} (-1)^{d-i} c_i \cdot \alpha_1 \wedge \dots \wedge \widehat{\alpha}_i \wedge \dots \wedge \alpha_{d+1}\bigg) \\ & = \gamma_{1,\dots, d} \cdot \alpha_1 \wedge \dots \wedge \alpha_d + \sum_{i=1}^{d} (-1)^{d-i} c_i \gamma_{1,\dots,\widehat{i},\dots, d+1} \cdot \alpha_1 \wedge \dots \wedge \widehat{\alpha}_i \wedge \dots \wedge \alpha_{d+1} \end{align*} $$

and also equals

$$ \begin{align*} s_{d+1} \cdot \varphi(\alpha_1 \wedge \dots \wedge \alpha_d) & = \gamma_{1,\dots, d}\, s_{d+1} \cdot (\alpha_1 \wedge \dots \wedge \alpha_d) \\ & = \gamma_{1,\dots, d} \cdot \alpha_1 \wedge \dots \wedge \alpha_d + \sum_{i=1}^{d} (-1)^{d-i} c_i \gamma_{1,\dots, d} \cdot \alpha_1 \wedge \dots \wedge \widehat{\alpha}_i \wedge \dots \wedge \alpha_{d+1}. \end{align*} $$

Note that $c_d \ne 0$ , and that the wedge products appearing in the two expressions above are linearly independent. Comparing the coefficients of $\alpha _1 \wedge \dots \wedge \alpha _{d-1} \wedge \alpha _{d+1}$ (the term $i = d$ ) gives $c_d \gamma _{1,\dots ,d-1, d+1} = c_d \gamma _{1,\dots , d}$ , and hence $\gamma _{1,\dots , d} = \gamma _{1,\dots , d-1, d+1}$ , which is what we want.

Now apply Lemma 4.3 to the connected graph $G(I)$ . Then by Claim 5.5, we see that the coefficients $\gamma _{i_1, \dots , i_d}$ are constant among all choices of $i_1, \dots , i_d \in I$ . As we have pointed out, this means that the statement (5.1) is valid.

This completes the proof of Theorem 1.2.

6 Some other results

Lemma 6.1. Let $H_1, \dots , H_k \subseteq V$ be linear subspaces of an n-dimensional vector space V. Regard $\bigwedge ^d H_i$ as a subspace of $\bigwedge ^d V$ for $0 \le d \le n$ . Then, $\bigcap _{1 \le i \le k} (\bigwedge ^d H_i) = \bigwedge ^d (\bigcap _{1 \le i \le k} H_i)$ .

Proof. We argue by induction on k, beginning with the case $k = 2$ (the case $k = 1$ is trivial). Let $I_0$ be a basis of $H_1 \cap H_2$ . Extend $I_0$ to a basis of $H_1$ , say, $I_0 \sqcup I_1$ , and to a basis of $H_2$ , say, $I_0 \sqcup I_2$ . Then $I_0 \sqcup I_1 \sqcup I_2$ is a basis of $H_1 + H_2$ . Further, extend $I_0 \sqcup I_1 \sqcup I_2$ to a basis of V, say, $I_0 \sqcup I_1 \sqcup I_2 \sqcup I_3$ .

We define a total order $\le $ on the set of vectors $I_0 \sqcup I_1 \sqcup I_2 \sqcup I_3$ , and we write $v_1 < v_2$ if $v_1 \le v_2$ and $v_1 \ne v_2$ . By Lemma 3.1, B, $B_1$ , $B_2$ , $B_0$ are bases of $\bigwedge ^d V$ , $\bigwedge ^d H_1$ , $\bigwedge ^d H_2$ , $\bigwedge ^d (H_1 \cap H_2)$ , respectively, where

$$ \begin{align*} B & := \{v_1 \wedge \dots \wedge v_d \mid v_1, \dots, v_d \in I_0 \sqcup I_1 \sqcup I_2 \sqcup I_3, \text{ and } v_1 < \dots < v_d\}, \\ B_1 & := \{v_1 \wedge \dots \wedge v_d \mid v_1, \dots, v_d \in I_0 \sqcup I_1, \text{ and } v_1 < \dots < v_d\}, \\ B_2 & := \{v_1 \wedge \dots \wedge v_d \mid v_1, \dots, v_d \in I_0 \sqcup I_2, \text{ and } v_1 < \dots < v_d\}, \\ B_0 & := \{v_1 \wedge \dots \wedge v_d \mid v_1, \dots, v_d \in I_0, \text{ and } v_1 < \dots < v_d\}. \end{align*} $$

(The sets $B_1, B_2, B_0$ may be empty.) Moreover, $B_1, B_2 \subseteq B$ , and $B_0 = B_1 \cap B_2$ . Apply Lemma 3.4 to the vector space $\bigwedge ^d V$ . We obtain

$$ \begin{align*}\bigg(\bigwedge^d H_1\bigg) \cap \bigg(\bigwedge^d H_2\bigg) = \langle B_1 \rangle \cap \langle B_2 \rangle = \langle B_1 \cap B_2 \rangle = \langle B_0 \rangle = \bigwedge^d (H_1 \cap H_2).\end{align*} $$

For $k \ge 3$ , by the induction hypothesis,

$$ \begin{align*} \bigcap_{i=1}^{k} \bigg(\bigwedge^d H_i\bigg) = \bigg( \bigcap_{i=1}^{k-1} \bigg(\bigwedge^d H_i\bigg) \bigg) \cap \bigg(\bigwedge^d H_k\bigg) & = \bigg( \bigwedge^d \bigg(\bigcap_{i=1}^{k-1} H_i\bigg) \bigg) \cap \bigg(\bigwedge^d H_k\bigg) \\ & = \bigwedge^d \bigg( \bigg(\bigcap_{i=1}^{k-1} H_i\bigg) \cap H_k \bigg) = \bigwedge^d \bigg(\bigcap_{i=1}^{k} H_i\bigg) \end{align*} $$

as desired.

The following proposition, which is derived from Lemma 6.1, recovers [9, Section 14.2] in a more general context.

Proposition 6.2. Let $\rho :W \to \operatorname {GL}(V)$ be an n-dimensional representation of a group W. Suppose $s_1, \dots , s_k \in W$ are such that, for each i, $s_i$ acts on V by a reflection with reflection hyperplane $H_i$ . Then, $\{v \in \bigwedge ^d V \mid s_i \cdot v = v, \mbox { for all } i\} = \bigcap _{1 \le i \le k} V_{d,s_i}^+ = \bigwedge ^d (\bigcap _{1 \le i \le k} H_i)$ for $0 \le d \le n$ .

Proof. By Corollary 3.3(1), $V_{d,s_i}^+ = \bigwedge ^d H_i$ for all i. Therefore,

$$ \begin{align*} \bigg\{v \in \bigwedge^d V \mid s_i \cdot v = v \mbox{ for all } i\bigg\} = \bigcap_{i=1}^{k} V_{d,s_i}^+ = \bigcap_{i=1}^{k} \bigg(\bigwedge^d H_i\bigg) = \bigwedge^d \bigg(\bigcap_{i=1}^{k} H_i\bigg). \end{align*} $$

The last equality follows from Lemma 6.1.
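For instance, under the hypotheses of Theorem 1.2 (recorded here only as an illustration), the group W is generated by $s_1, \dots, s_k$ , so

$$ \begin{align*} \bigcap_{1 \le i \le k} H_i = \{v \in V \mid w \cdot v = v \text{ for all } w \in W\}, \end{align*} $$

which is a W-submodule of V; since V is irreducible and each $s_i$ acts nontrivially, this submodule is zero. Proposition 6.2 then shows that no nonzero element of $\bigwedge^d V$ with $1 \le d \le n$ is fixed by all of $s_1, \dots, s_k$ .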

The next result is a Poincaré-like duality on exterior powers of a representation.

Proposition 6.3. Let $\rho : W \to \operatorname {GL}(V)$ be an n-dimensional representation of a group W. Then, $\bigwedge ^{n-d} V \simeq (\bigwedge ^d V)^* \bigotimes (\det \circ \rho )$ as W-modules for all $d = 0,1, \dots , n$ . Here we denote by $(\bigwedge ^d V)^*$ the dual representation of $\bigwedge ^d V$ .

Proof. Fix an identification of linear spaces $\bigwedge ^n V \simeq \mathbb {F}$ . For any $d = 0, 1, \dots , n$ , we define a bilinear map

$$ \begin{align*} f: \bigg(\bigwedge^{n-d} V\bigg) \times \bigg(\bigwedge^{d} V\bigg) & \to \bigwedge^{n} V \simeq \mathbb{F}, \\ (u,v) & \mapsto u \wedge v. \end{align*} $$

Clearly, this induces an isomorphism of linear spaces

$$ \begin{align*} \varphi: \bigwedge^{n-d} V & \xrightarrow{\sim} \bigg(\bigwedge^{d} V\bigg)^*, \\ u & \mapsto f(u,-). \end{align*} $$

Note that for any $w \in W$ , $u \in \bigwedge ^{n-d} V$ , $v \in \bigwedge ^{d} V$ , we have

$$ \begin{align*}f(w \cdot u, w \cdot v) = (w\cdot u) \wedge (w \cdot v) = w \cdot (u \wedge v) = \det(\rho(w)) u \wedge v = \det(\rho(w)) f(u,v).\end{align*} $$

Therefore, for $w \in W$ , $u \in \bigwedge ^{n-d} V$ ,

$$ \begin{align*} \varphi(w \cdot u) = f(w\cdot u, -) = f(w\cdot u, (w w ^{-1} \cdot -)) = \det (\rho(w))f(u, (w^{-1} \cdot -)). \end{align*} $$

This implies $u \mapsto f(u,-) \otimes 1$ is an isomorphism $\bigwedge ^{n-d} V \xrightarrow {\sim } (\bigwedge ^d V)^* \bigotimes (\det \circ \rho )$ of W-modules.
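As a sanity check in dimension two (a routine verification, not needed for the proof): take $n = 2$ , $d = 1$ , fix the identification $\bigwedge^2 V \simeq \mathbb{F}$ via $e_1 \wedge e_2 \mapsto 1$ for a basis $\{e_1, e_2\}$ , and suppose $w \cdot e_1 = a e_1 + c e_2$ , $w \cdot e_2 = b e_1 + d e_2$ . Then $\varphi(e_1) = e_2^*$ , $\varphi(e_2) = -e_1^*$ , and

$$ \begin{align*} \varphi(w \cdot e_1) = a e_2^* - c e_1^* = \det(\rho(w)) \, (w \cdot e_2^*) = \det(\rho(w)) \, (w \cdot \varphi(e_1)), \end{align*} $$

where W acts on $V^*$ by $(w \cdot \xi)(v) = \xi(w^{-1} \cdot v)$ ; the same computation works for $e_2$ , so $V \simeq V^* \bigotimes (\det \circ \rho)$ , as the proposition asserts.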

7 Further questions

Question 7.1. Can we remove the technical condition (4) in the statement of Theorem 1.2?

Question 7.2. Is it possible to find two nonisomorphic simple W-modules $V_1, V_2$ satisfying the conditions of Theorem 1.2, and two integers $d_1, d_2$ with $0 < d_i < \dim V_i$ , such that $\bigwedge ^{d_1} V_1 \simeq \bigwedge ^{d_2} V_2$ as W-modules?

If the answer to this question is negative for reflection representations of Coxeter groups, then the irreducible representations obtained in the way described in Section 1 are pairwise nonisomorphic.

Question 7.3. What kinds of simple W-modules V have the property that the modules $\bigwedge ^d V$ , $0 \le d \le \dim V$ , are simple and pairwise nonisomorphic? Can we formulate any other sufficient conditions (in addition to Theorem 1.2) or any necessary conditions?

Acknowledgements

The author is deeply grateful to Professors Ming Fang and Tao Gui for enlightening discussions. The author would also like to thank the anonymous referee for suggestions, which helped to improve this paper.

References

[1] Bourbaki, N., Lie Groups and Lie Algebras: Chapters 4–6, Elements of Mathematics (Springer-Verlag, Berlin, 2002); translated from the 1968 French original by A. Pressley.
[2] Chevalley, C., Théorie des groupes de Lie. Tome III. Théorèmes généraux sur les algèbres de Lie, Actualités Scientifiques et Industrielles, 1226 (Hermann & Cie, Paris, 1955).
[3] Curtis, C. W., Iwahori, N. and Kilmoyer, R. W., ‘Hecke algebras and characters of parabolic type of finite groups with (B,N)-pairs’, Publ. Math. Inst. Hautes Études Sci. 40 (1971), 81–116.
[4] Fulton, W. and Harris, J., Representation Theory. A First Course, Graduate Texts in Mathematics, 129 (Springer-Verlag, New York, 1991).
[5] Geck, M. and Pfeiffer, G., Characters of Finite Coxeter Groups and Iwahori–Hecke Algebras, London Mathematical Society Monographs, New Series, 21 (Clarendon Press, Oxford–New York, 2000).
[6] Hu, H., ‘Reflection representations of Coxeter groups and homology of Coxeter graphs’, Preprint, 2023, arXiv:2306.12846.
[7] Kane, R., Reflection Groups and Invariant Theory, CMS Books in Mathematics/Ouvrages de Mathématiques de la SMC, 5 (Springer-Verlag, New York, 2001).
[8] Milne, J. S., Algebraic Groups. The Theory of Group Schemes of Finite Type over a Field, Cambridge Studies in Advanced Mathematics, 170 (Cambridge University Press, Cambridge, 2017).
[9] Steinberg, R., Endomorphisms of Linear Algebraic Groups, Memoirs of the American Mathematical Society, 80 (American Mathematical Society, Providence, RI, 1968).