
MALTSEV CONDITIONS FOR GENERAL CONGRUENCE MEET-SEMIDISTRIBUTIVE ALGEBRAS

Part of: Varieties

Published online by Cambridge University Press:  01 February 2021

MIROSLAV OLŠÁK*
Affiliation:
FACULTY OF MATHEMATICS AND PHYSICS, CHARLES UNIVERSITY IN PRAGUE, SOKOLOVSKÁ 83, PRAHA 8, CZECHIA and DEPARTMENT OF COMPUTER SCIENCE, UNIVERSITY OF INNSBRUCK, TECHNIKERSTRASSE 21A, INNSBRUCK, AUSTRIA. E-mail: [email protected]

Abstract

Meet-semidistributive varieties are, in a sense, the last of the most important classes in universal algebra for which it is unknown whether they can be characterized by a strong Maltsev condition. We present a new, relatively simple Maltsev condition characterizing meet-semidistributive varieties, and provide a candidate for a strong Maltsev condition.

Type
Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2021. Published by Cambridge University Press on behalf of Association for Symbolic Logic

1 Introduction

The tame congruence theory (TCT) [Reference Hobby and McKenzie3], a structure theory of general finite algebras, has revealed that there are only five possible local behaviors of a finite algebra:

  (1) an algebra having only unary functions,

  (2) a one-dimensional vector space,

  (3) the two-element Boolean algebra,

  (4) the two-element lattice,

  (5) the two-element semilattice.

If there is a local behavior of type (i) in an algebra $\mathbf {A}$ , the algebra is said to have type (i). A variety $\mathcal {V}$ has type (i) if there is an algebra $\mathbf {A}\in \mathcal {V}$ that has type (i). If an algebra or variety does not have type (i), it is said to omit type (i). The set of “bad” types that are omitted in a variety is an important piece of structural information; for instance, it plays a significant role in the fixed-template constraint satisfaction problem [Reference Barto2]. The “worst” type is type (1), and omitting it has been characterized in many equivalent ways, one of which is given in the following theorem.

Theorem 1.1 [Reference Maróti and McKenzie9]

A locally finite variety $\mathcal {V}$ omits type (1) if and only if there is an idempotent WNU (weak near unanimity) term in $\mathcal {V}$ , that is, a term t satisfying the following identities:

  • idempotence: $t(x,x,x,\ldots ,x)=x$ ,

  • weak near unanimity:

    $$ \begin{align*} t(y,x,x,\ldots,x) = t(x,y,x,\ldots,x) = \cdots = t(x,\ldots,x,y) \end{align*} $$

for any $x,y\in \mathbf {A}$ in every $\mathbf {A}\in \mathcal {V}$ .
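To make the condition concrete: the majority operation on a two-element set is a familiar example of an idempotent WNU term. The following minimal Python check (ours, not part of the original text) verifies both groups of identities by brute force:

```python
from itertools import product

def maj(x, y, z):
    # Majority on {0, 1}: returns the value occurring at least twice.
    return (x & y) | (y & z) | (x & z)

# Idempotence: t(x, x, x) = x.
assert all(maj(x, x, x) == x for x in (0, 1))

# Weak near unanimity: t(y, x, x) = t(x, y, x) = t(x, x, y).
for x, y in product((0, 1), repeat=2):
    assert maj(y, x, x) == maj(x, y, x) == maj(x, x, y)
```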

Such a characterization of varieties of algebras by means of the existence of terms satisfying certain identities is in general called a Maltsev condition. More precisely, a strong Maltsev condition is given by a finite set of term symbols and a finite set of identities. A given strong Maltsev condition is satisfied in a variety $\mathcal {V}$ if we can substitute the term symbols by actual terms in the variety in such a way that all the identities are satisfied. A general Maltsev condition is then a disjunction of countably many strong Maltsev conditions (as in the example of Theorem 1.1).

Whenever a variety $\mathcal {V}$ satisfies a certain Maltsev condition and $\mathcal {V}$ is interpretable into $\mathcal {W}$ , then $\mathcal {W}$ satisfies the Maltsev condition too. For the notion of interpretability, we refer the reader to [Reference Hobby and McKenzie3]. The following relations hold between the types of locally finite varieties and interpretability.

  • Any variety that has type (1) is interpretable into any variety.

  • Any variety is interpretable into a variety that has type (3).

  • Any variety that has type (5) is interpretable into a variety that has type (4).

Therefore, it is reasonable to ask for the Maltsev conditions for the following classes:

$$ \begin{align*}\mathcal{M}_{\{1\}}, \mathcal{M}_{\{1,2\}}, \mathcal{M}_{\{1,5\}}, \mathcal{M}_{\{1,2,5\}},\mathcal{M}_{\{1,4,5\}}, \mathcal{M}_{\{1,2,4,5\}}, \end{align*} $$

where $\mathcal {M}_S$ is the class of all the algebras that omit all the types from the set S. There is an appropriate Maltsev condition for all six classes.

It was proved that $\mathcal {M}_{\{1\}}$ and $\mathcal {M}_{\{1,2\}}$ can be characterized by strong Maltsev conditions. Recall that an idempotent term is a term t satisfying the equation $t(x,x,\ldots ,x)=x$ .

Theorem 1.2 [Reference Kearnes, Marković and McKenzie7]

A locally finite variety omits type (1) if and only if it has an idempotent 4-ary term s satisfying $s(r,a,r,e) = s(a,r,e,a)$ .

Theorem 1.3 [Reference Kozik, Krokhin, Valeriote and Willard8, Theorem 2.8]

A locally finite variety omits types $(1)$ and $(2)$ if and only if it has $3$ -ary and $4$ -ary idempotent terms $w_3,w_4$ satisfying the equations

$$ \begin{align*} &w_3(yxx)=w_3(xyx)=w_3(xxy)=w_4(yxxx) \\ &\quad =w_4(xyxx)=w_4(xxyx)=w_4(xxxy). \end{align*} $$
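For a sanity check of Theorem 1.3, the two-element semilattice (whose variety omits types (1) and (2)) supplies such terms: taking both $w_3$ and $w_4$ to be the meet of all their arguments, every term in the chain evaluates to $x\wedge y$ . A brute-force verification (an illustrative sketch of ours):

```python
from itertools import product

# Meet (min) on the two-element semilattice {0, 1}.
def w3(*args): return min(args)  # ternary meet
def w4(*args): return min(args)  # 4-ary meet

# Idempotence.
assert all(w3(x, x, x) == x and w4(x, x, x, x) == x for x in (0, 1))

# All seven terms in the chain of identities must agree (with x meet y).
for x, y in product((0, 1), repeat=2):
    values = {w3(y, x, x), w3(x, y, x), w3(x, x, y),
              w4(y, x, x, x), w4(x, y, x, x), w4(x, x, y, x), w4(x, x, x, y)}
    assert values == {min(x, y)}
```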

In the same paper [Reference Kozik, Krokhin, Valeriote and Willard8], the authors demonstrated that the remaining classes, that is, $\mathcal {M}_{\{1,5\}}, \mathcal {M}_{\{1,2,5\}}, \mathcal {M}_{\{1,4,5\}}, \mathcal {M}_{\{1,2,4,5\}}$ , cannot be characterized by strong Maltsev conditions.

Although types in the TCT are defined only for locally finite varieties (because only finite algebras are assigned types), the type-omitting classes have alternative characterizations which do not refer to the type-set. They are shown in the following table taken from [Reference Kozik, Krokhin, Valeriote and Willard8].

Each of the properties in the right column of the table is characterized by an idempotent Maltsev condition [Reference Hobby and McKenzie3] for general (not necessarily locally finite) varieties. However, Theorems 1.2 and 1.3, which give strong Maltsev conditions, are not guaranteed to work. Indeed, there is an example of an idempotent algebra that satisfies a non-trivial Maltsev condition but has no term s satisfying $s(r,a,r,e)=s(a,r,e,a)$ ; see [Reference Kazda4]. Nevertheless, it turned out that the first property is characterized by another strong Maltsev condition.

Theorem 1.4 [Reference Olšák10]

An idempotent variety satisfies a non-trivial Maltsev condition if and only if it has a term t such that

$$ \begin{align*}t(yxx,xyy) = t(xyx,yxy) = t(xxy,yyx). \end{align*} $$

The finite counterexamples to strong Maltsev conditions for

$$ \begin{align*} \mathcal{M}_{\{1,5\}}, \mathcal{M}_{\{1,2,5\}},\mathcal{M}_{\{1,4,5\}}, \mathcal{M}_{\{1,2,4,5\}}\end{align*} $$

work as counterexamples for the general case, so the remaining question is the following.

Question 1.1. Is there a strong Maltsev condition that is equivalent to congruence meet-semidistributivity?

1.1 Congruence meet-semidistributivity

By $\mathbf {Con}(\mathbf {A})$ we denote the lattice of congruences of $\mathbf {A}$ . A variety $\mathcal {V}$ is said to be congruence meet-semidistributive ( $SD(\wedge )$ for short) if for any $\mathbf {A}\in \mathcal {V}$ and any three congruences $\alpha ,\beta ,\gamma \in \mathbf {Con}(\mathbf {A})$ such that

$$ \begin{align*}\alpha\wedge\gamma = \beta\wedge\gamma, \end{align*} $$

we have

$$ \begin{align*}\alpha\wedge\gamma = \beta\wedge\gamma = (\alpha\vee\beta)\wedge\gamma. \end{align*} $$

This property has many equivalent definitions, see Theorem 8.1 in [Reference Kearnes and Kiss6]; we mention some of them.

Theorem 1.5. Let $\mathcal {V}$ be a variety. The following are equivalent.

  • $\mathcal {V}$ is a congruence meet-semidistributive variety.

  • No member of $\mathcal {V}$ has a non-trivial abelian congruence.

  • $[\alpha ,\beta ]=\alpha \wedge \beta $ for all $\alpha ,\beta \in \mathbf {Con}(\mathbf {A})$ and all $\mathbf {A}\in \mathcal {V}$ , where $[\alpha ,\beta ]$ denotes the commutator of congruences.

  • The diamond lattice $M_3$ is not embeddable in $\mathbf {Con}(\mathbf {A})$ for any $\mathbf {A}\in \mathcal {V}$ .

  • $\mathcal {V}$ satisfies an idempotent Maltsev condition that fails in any finite one-dimensional vector space over a non-trivial field (equivalently in any module).

In this paper, we are going to study the Maltsev conditions satisfied by every $SD(\wedge )$ variety. Not only is it unknown whether there is a strong Maltsev condition characterizing the $SD(\wedge )$ varieties, but also the previously known Maltsev conditions for $SD(\wedge )$ are quite complicated. Probably the simplest Maltsev condition for $SD(\wedge )$ available before this work is the following one.

Let $[n]$ denote the set $\{1,2,\ldots ,n\}$ . Consider some n, and a self-inverse bijection $\varphi \colon [2n]\to [2n]$ without fixed points, such that whenever $i < j < \varphi (i)$ , then also $i < \varphi (j) < \varphi (i)$ . Such a bijection corresponds to a proper bracketing sequence with n opening and n closing brackets. Then the bracket terms are ternary terms $b_1,\ldots ,b_{2n}$ satisfying the following identities:

$$ \begin{align*} b_1(x,y,z) = x, &\quad b_{2n}(x,y,z) = z, \\ b_{2i}(y,x,x) = b_{2i-1}(y,x,x), &\quad b_{2i}(x,x,y) = b_{2i+1}(x,x,y), \\ b_{i}(x,y,x) = b_{\varphi(i)}(x,y,x), & \end{align*} $$

for any i where it makes sense.
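The correspondence between bracketing sequences and the bijections $\varphi $ can be made concrete; the following Python sketch (ours, with an arbitrarily chosen bracket string) builds $\varphi $ from a proper bracketing and checks the nesting property:

```python
def matching(brackets):
    """Self-inverse, fixed-point-free bijection on [2n] (1-indexed) pairing
    each opening bracket with its matching closing bracket."""
    phi, stack = {}, []
    for pos, b in enumerate(brackets, start=1):
        if b == '(':
            stack.append(pos)
        else:
            opening = stack.pop()
            phi[opening], phi[pos] = pos, opening
    return phi

phi = matching("(())()")
# Self-inverse and without fixed points.
assert all(phi[phi[i]] == i and phi[i] != i for i in phi)
# Nesting property: i < j < phi(i) implies i < phi(j) < phi(i).
for i in phi:
    for j in range(i + 1, phi[i]):
        assert i < phi[j] < phi[i]
```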

Theorem 1.6 [Reference Baker, Mcnulty and Wang1, Theorem 1]

A variety $\mathcal {V}$ satisfies the $SD(\wedge )$ property if and only if it has some bracket terms.

1.2 The new terms

In this paper we define $(m_1+m_2)$ -terms as a triple of idempotent terms $(f, g_1, g_2)$ , where $g_1$ is $m_1$ -ary, $g_2$ is $m_2$ -ary, f is $(m_1+m_2)$ -ary, and they satisfy the identities

$$ \begin{align*} f(x,x,\ldots,x,{\mathop{y\kern0pt}\limits_{i}},x,\ldots,x) &= g_1(x,x,\ldots,x,{\mathop{y\kern0pt}\limits_{i}},x,\ldots,x) \text{ for any } i=1,\ldots,m_1,\cr f(x,x,\ldots,x,{\mathop{y\kern0pt}\limits_{m_1+i}},x,\ldots,x) &= g_2(x,x,\ldots,x,{\mathop{y\kern0pt}\limits_{i}},x,\ldots,x) \text{ for any } i=1,\ldots,m_2. \end{align*} $$
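For a concrete instance, in the variety of semilattices (which is $SD(\wedge )$ ) one may take all three terms to be the meet of their arguments; each side of every identity then evaluates to $x\wedge y$ (this needs $m_1,m_2\geq 2$ ). A brute-force check of the $(3+2)$ -term identities on the two-element semilattice, as an illustrative sketch of ours:

```python
from itertools import product

m1, m2 = 3, 2

# Meet (min) of all arguments on the two-element semilattice {0, 1}.
def f(*args):  return min(args)   # (m1+m2)-ary
def g1(*args): return min(args)   # m1-ary
def g2(*args): return min(args)   # m2-ary

def one_y(arity, pos, x, y):
    """All-x tuple of the given arity with a single y at position pos."""
    args = [x] * arity
    args[pos] = y
    return tuple(args)

for x, y in product((0, 1), repeat=2):
    for i in range(m1):
        assert f(*one_y(m1 + m2, i, x, y)) == g1(*one_y(m1, i, x, y))
    for i in range(m2):
        assert f(*one_y(m1 + m2, m1 + i, x, y)) == g2(*one_y(m2, i, x, y))
```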

We prove the following theorem.

Theorem 1.7. A variety $\mathcal {V}$ is congruence meet-semidistributive if and only if it has $(3+m)$ -terms for some m.

Checking the backward implication is easy. Suppose for contradiction that the identities of $(m_1+m_2)$ -terms were satisfied in some nontrivial unitary module. That means that f, $g_1$ , $g_2$ are represented by linear combinations. In particular, let

$$ \begin{align*} f(x_1,x_2,\ldots,x_{m_1+m_2}) &= a_1x_1+\cdots+a_{m_1+m_2}x_{m_1+m_2},\\ g_1(x_1,x_2,\ldots,x_{m_1}) &= b_1x_1+\cdots+b_{m_1}x_{m_1},\\ g_2(x_1,x_2,\ldots,x_{m_2}) &= c_1x_1+\cdots+c_{m_2}x_{m_2}. \end{align*} $$

By plugging $x=0$ and $y\neq 0$ into the identities for f and $g_1$ , we get $a_i=b_i$ for $i=1,\ldots ,m_1$ . If we make the same substitution in the second identity, we get $a_{m_1+i}=c_i$ for $i=1,\ldots ,m_2$ . Moreover, idempotence enforces

$$ \begin{align*}\sum_{i=1}^{m_1+m_2} a_i = \sum_{i=1}^{m_1} b_i = \sum_{i=1}^{m_2} c_i = 1. \end{align*} $$

Therefore we get

$$ \begin{align*}1 = \sum_{i=1}^{m_1+m_2} a_i = \sum_{i=1}^{m_1} b_i + \sum_{i=1}^{m_2} c_i = 2, \end{align*} $$

so $1=0$ in the underlying ring, contradicting the assumption that our module was non-trivial. Thus, we proved the backward implication.
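The contradiction can also be checked mechanically: over $\mathbb {Z}_p$ , even the smallest case $m_1=m_2=1$ admits no coefficients satisfying idempotence together with the identities. A brute-force search (an illustrative sketch of ours, not from the paper):

```python
from itertools import product

# Linear candidates over Z_p for (1+1)-terms:
# f(x1, x2) = a1*x1 + a2*x2, g1(x) = b1*x, g2(x) = c1*x.
# No choice of coefficients satisfies idempotence plus the identities.
for p in (2, 3, 5):
    found = []
    for a1, a2, b1, c1 in product(range(p), repeat=4):
        idempotent = (a1 + a2) % p == 1 and b1 == 1 and c1 == 1
        identities = a1 == b1 and a2 == c1
        if idempotent and identities:
            found.append((a1, a2, b1, c1))
    assert found == []  # a1 = a2 = 1 would force 2 = 1 in Z_p
```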

To prove the forward implication, we take a detour through a generalized version of $(m_1+m_2)$ -terms. Given $n,m$ , we define $n\times (n+1)\times m$ -terms as follows.

Let i have values from $1$ to n, j have values from $1$ to $n+1$ , and k have values from $1$ to m. The $n\times (n+1)\times m$ -terms are idempotent $(n+1)m$ -ary terms $f_i$ (variables are indexed by pairs $(j,k)$ ) and idempotent $nm$ -ary terms $g_j$ (variables are indexed by pairs $(i,k)$ ) such that for every $i,\;j,k$ they satisfy the equation

$$ \begin{align*}f_i(x,x,\ldots,x,{\mathop{y\kern0pt}\limits_{(j,k)}},x,\ldots,x) = g_j(x,x,\ldots,x,{\mathop{y\kern0pt}\limits_{(i,k)}},x,\ldots,x). \end{align*} $$

By definition, $1\times 2\times m$ -terms are equivalent to the $(m+m)$ -terms. On the other hand, for large enough $n,m$ , it is simple to derive the $n\times (n+1)\times m$ -terms from another Maltsev condition not satisfiable in vector spaces.

Proposition 1.1. Let $\mathcal {V}$ be an $SD(\wedge )$ variety. Then $\mathcal {V}$ has $n\times (n+1)\times m$ -terms for some $n,m$ .

Proof By Theorem 1.6, there are bracket terms $b_1,\ldots ,b_{2n}$ in $\mathcal {V}$ corresponding to a bijection $\varphi \colon [2n]\to [2n]$ . Notice that since $\varphi $ forms a proper bracketing, $\varphi (i)$ has a different parity than i for any i. Let $\psi (i) = \varphi (2i-1)/2$ and $\psi '(i) = (\varphi (2i)+1)/2$ . In other words, we split $[2n]$ into its odd and even parts and label both as $[n]$ ; then $\psi $ corresponds to the odd ${}\to {}$ even part of the mapping $\varphi $ , and $\psi '$ to its inverse. We construct $n\times (n+1)\times 3$ -terms as follows. We set

$$ \begin{align*} g_1(x_{1,1},\ldots) &= x_{1,1},\cr g_i(\ldots,x_{i,1},\ldots,x_{\psi(i),2},\ldots,x_{i-1,3},\ldots) &= b_{2i-1}(x_{i,1},x_{\psi(i),2},x_{i-1,3}),\cr g_{n+1}(\ldots,x_{n,3}) &= x_{n,3}, \cr f_{i}(\ldots,x_{i,1},\ldots,x_{\psi'(i),2},\ldots,x_{i+1,3},\ldots) &= b_{2i}(x_{i,1},x_{\psi'(i),2},x_{i+1,3}). \end{align*} $$

All the $n\times (n+1)\times 3$ -identities follow directly from the bracket identities. ⊣

1.3 Outline

The rest of the proof is divided into two sections. In Section 2 we show that, in $n\times (n+1)\times m$ -terms, we can decrease n by one at the cost of increasing m sufficiently. It follows that any $SD(\wedge )$ variety has $(m+m)$ -terms for large enough m. In Section 3, we improve that result to $(3+m)$ -terms. Section 4 then provides a few counterexamples showing that requiring $(2+m)$ -terms would be too strong. Finally, in Section 5 we discuss the remaining open questions.

2 Simplifying $n\times (n+1)\times m$ -terms

2.1 Semirings

We will need some basic facts about semirings for our first proof.

A semiring is a general algebra $\mathbf {A}=(A, +, \cdot , 0, 1)$ where $(A,+,0)$ is a commutative monoid, $(A, \cdot , 1)$ is a monoid, zero absorbs everything in multiplication ( $0\cdot x = x\cdot 0 = 0$ ), and the distributive laws are satisfied, that is, $a\cdot (b+c)=a\cdot b+a\cdot c$ and $(a+b)\cdot c = a\cdot c + b\cdot c$ . As usual, the binary multiplication operation $\cdot $ is often omitted, writing $ab$ instead of $a\cdot b$ .

Let $\mathcal {A}$ be an alphabet. The elements of the free monoid $\mathcal {A}^*$ generated by $\mathcal {A}$ are represented by finite words in the alphabet, multiplication concatenates the words, and the constant 1 corresponds to the empty word. Finally, the elements of the free semiring generated by $\mathcal {A}$ are represented as finite multisets (formal sums) of words in $\mathcal {A}^*$ . The addition in the free semiring is defined as the sum (disjoint union) of the corresponding multisets, and the product $p\cdot q$ is defined as the pairwise product of the monomials, that is, $\{u\cdot v : u\in p, v\in q\}$ .

Let $\mathbf {F}$ be the free semiring generated by some alphabet $\mathcal {A}$ , and E be a set of equations of the form $e_1= 1, e_2= 1, e_3= 1, \ldots $ where $e_i\in \mathbf {F}$ . We are going to provide a description of the congruence on $\mathbf {F}$ generated by E.

Take a monomial $u\in \mathcal {A}^*$ . By a single expansion of u we mean any element of $\mathbf {F}$ of the form $ve_iw$ where $vw=u$ . A single expansion on a general element of $\mathbf {F}$ is then defined as performing a single expansion on one of its summands. Finally, we say that p is an expansion of q if we can obtain p by performing consecutive single expansion steps starting from q.
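The free semiring and its expansion steps are easy to model computationally, representing an element as a multiset of words; the following Python sketch (our own naming, not from the paper) implements single expansions of a monomial:

```python
from collections import Counter

# An element of the free semiring: a multiset of words (tuples of generators).
def mul(p, q):
    """Pairwise concatenation of monomials, with multiplicities."""
    r = Counter()
    for u, cu in p.items():
        for v, cv in q.items():
            r[u + v] += cu * cv
    return r

def single_expansions(u, E):
    """All single expansions of a monomial u: insert some e in E at some
    position, i.e. form v*e*w for each split u = vw."""
    out = []
    for cut in range(len(u) + 1):
        v, w = Counter({u[:cut]: 1}), Counter({u[cut:]: 1})
        for e in E:
            out.append(mul(mul(v, e), w))
    return out

# Toy example: one generator a with the equation a + a = 1, so e = 2a.
e = Counter({('a',): 2})
assert single_expansions((), [e]) == [e]  # the only expansion of 1 is 2a
```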

Proposition 2.1. For any pair $(p,q)$ of elements in $\mathbf {F}$ , these two elements are congruent modulo the congruence generated by E if and only if there is a common expansion r of both p and q.

Proof The backward implication is obvious: If r is an expansion of p, then r is clearly congruent to p. Analogously, r is congruent to q, therefore p is congruent to q. We are going to prove the forward implication.

For $p,q\in \mathbf {F}$ we define a relation $p\sim q$ if there is a common expansion of p and q. Clearly each $e_i\sim 1$ . To show that $\sim $ includes the congruence generated by E, it remains to prove that $\sim $ is a congruence. Symmetry and reflexivity are apparently satisfied, so we have to prove that $\sim $ is transitive and compatible with the operations. To do that, let us introduce some notation.

Let $p \leq q$ denote that q is an expansion of p and let $p\preccurlyeq q$ denote that q can be obtained by applying single expansion steps on a subset of summands of p. So $p\preccurlyeq q$ is stronger than $p\leq q$ but weaker than q being a single expansion of p.

These orderings are clearly closed under addition. In particular, if $p=\sum _i^n p_i$ , $q=\sum _i^n q_i$ , and $p_i\preccurlyeq q_i$ , then $p\preccurlyeq q$ .

Claim 2.1. For any $p,q,r,s\in \mathbf {F}$ such that $p\preccurlyeq q$ we have $rps\preccurlyeq rqs$ .

To verify that, let $p=\sum _j^P p_j$ , $q = \sum _j^P q_j$ , $r=\sum _i^R r_i$ , $s=\sum _k^S s_k$ , where $r_i, p_j, s_k$ are monomials and $p_j \preccurlyeq q_j$ . Then

$$ \begin{align*}rps = \sum_i^R\sum_j^P\sum_k^S r_ip_js_k, \quad rqs = \sum_i^R\sum_j^P\sum_k^S r_iq_js_k. \end{align*} $$

Since $p_j\preccurlyeq q_j$ , we can write $p_j = u_jv_j$ so that $q_j = u_jx_jv_j$ where $x_j\succcurlyeq 1$ , that is, $x_j=1$ or $x_j$ is one of the elements $e_i$ . So we can write $r_ip_js_k = (r_iu_j)(v_js_k)$ and $r_iq_js_k = (r_iu_j)x_j(v_js_k)$ . Therefore $r_ip_js_k\preccurlyeq r_iq_js_k$ and thus $rps\preccurlyeq rqs$ .

Claim 2.2. For any $p,q,r\in \mathbf {F}$ such that $r\preccurlyeq p$ and $r\preccurlyeq q$ there exists $s\in \mathbf {F}$ such that $p\preccurlyeq s$ and $q\preccurlyeq s$ .

First, we prove the claim if r is a monomial. So polynomials p, q are constructed by inserting $p'$ , $q'$ somewhere into r, respectively, where $p', q'\succcurlyeq 1$ . Without loss of generality, $q'$ is inserted at the same position as $p'$ or later, so we can write $r=uvw$ , $p=up'vw$ , $q=uvq'w$ . Now we choose $s=up'vq'w$ . By Claim 2.1 and $p', q'\succcurlyeq 1$ we get the required

$$ \begin{align*}p = (up'v)(w) \preccurlyeq (up'v)q'(w) = s, \quad q = (u)(vq'w) \preccurlyeq (u)p'(vq'w) = s. \end{align*} $$

For a general $r=\sum _i^n r_i$ where $r_i$ are monomials, we decompose $p=\sum _i^n p_i$ , $q=\sum _i^n q_i$ so that $r_i\preccurlyeq p_i, q_i$ . By the monomial case, we find elements $s_i$ such that $s_i\succcurlyeq p_i, q_i$ , and eventually $s=\sum _i^n s_i\succcurlyeq p, q$ .

We are finally ready to prove the transitivity of $\sim $ and compatibility with operations.

Claim 2.3. If $x,r,y\in \mathbf {F}$ , $x\sim r$ , and $r\sim y$ , then $x\sim y$ .

By definition of $\sim $ , there are $p,q\in \mathbf {F}$ such that $x, r\leq p$ and $r,y\leq q$ . We break the expansion $r\leq p$ into a finite number of single expansion steps, getting a sequence

$$ \begin{align*}r = s_{0,0}\preccurlyeq s_{1,0} \preccurlyeq \cdots \preccurlyeq s_{P,0} = p. \end{align*} $$

Similarly, there is a sequence

$$ \begin{align*}r = s_{0,0}\preccurlyeq s_{0,1} \preccurlyeq \cdots \preccurlyeq s_{0,Q} = q. \end{align*} $$

By repeated application of Claim 2.2, we fill in the matrix $(s_{i,\;j})_{0\le i\le P,\, 0\le j\le Q}$ in such a way that $s_{i,\;j}\preccurlyeq s_{i+1,\;j}$ and $s_{i,\;j}\preccurlyeq s_{i,\;j+1}$ where they are defined. Eventually, we get $s=s_{P,Q}$ such that $s\geq p,q$ . Therefore $s\geq p\geq x$ and $s\geq q\geq y$ , so $x\sim y$ .

Compatibility of $\sim $ with addition and multiplication is straightforward. For $p_1,q_1,p_2,q_2\in \mathbf {F}$ such that $p_1\sim q_1$ and $p_2\sim q_2$ , there are $r_1, r_2$ such that $p_1,q_1 \leq r_1$ and $p_2,q_2 \leq r_2$ . Thus $p_1+p_2 \leq r_1+r_2$ and $q_1+q_2 \leq r_1+r_2$ . Therefore $p_1+p_2\sim q_1+q_2$ , so $\sim $ is compatible with addition.

Regarding multiplication, consider any $p,q,s\in \mathbf {F}$ such that $p\sim q$ . There is r such that $p,q\leq r$ . By Claim 2.1, we get $sp, sq\leq sr$ and $ps,qs\leq rs$ . Therefore $sp\sim sq$ and $ps\sim qs$ .

This is sufficient for compatibility with multiplication: if $p_1\sim q_1$ and $p_2\sim q_2$ , then $p_1p_2\sim q_1p_2\sim q_1q_2$ , so $p_1p_2\sim q_1q_2$ by transitivity.⊣

2.2 Decreasing $n$

Theorem 2.4. Let $\mathbf {A}$ be an idempotent algebra with $n\times (n+1)\times m$ -terms for some $n>1,m>0$ . Then there exists $m'$ such that $\mathbf {A}$ has $(n-1)\times n\times m'$ -terms.

For proving the theorem, we first make a few assumptions without loss of generality. We assume that $m\geq 2$ (if $m=1$ , we introduce dummy variables). Moreover, we assume that the $n\times (n+1)\times m$ -terms $f_1,\ldots ,f_{n},g_1,\ldots ,g_{n+1}$ are the only basic operations of $\mathbf {A}$ , and that $\mathbf {A}$ is the free idempotent algebra generated by two symbols $0$ and $1$ modulo the equations describing the $n\times (n+1)\times m$ -terms.

Consider the subuniverse $R\leq \mathbf {A}^{\omega }$ generated by all the infinite sequences that have the element $1$ at exactly one position and the element $0$ everywhere else.

Notice that R is invariant under all permutations of $\omega $ and since $\mathbf {A}$ is idempotent, every sequence in R has only finitely many nonzero values.

By $\hat {\mathbf {A}} = (\hat A, +, 0)$ we denote the free commutative monoid generated by all the non-zero elements of $\mathbf {A}$ . We identify the element $0\in \mathbf {A}$ with the neutral element in $\hat {\mathbf {A}}$ . For $\bar x\in R$ , let $\hat x$ denote the sum of all nonzero values of $\bar x$ , and let $\hat R$ be the set $\{\hat x : \bar x\in R\}$ .

Claim 2.5. To prove Theorem 2.4, it suffices to find

$$ \begin{align*}x_1,x_2,\ldots,x_{n-1},y_1,y_2,\ldots,y_n\in \hat R\end{align*} $$

such that $x_1+\cdots +x_{n-1} = y_1+\cdots +y_n$ .

Indeed, if such elements exist, we can choose a large enough $m'$ and express the elements $x_i,y_j\in \hat {\mathbf {A}}$ as follows:

$$ \begin{align*} x_i &= \sum_j^n\sum_k^{m'}z_{i,\;j,k} \text{ for any } i=1,\ldots,n-1, \\ y_j &= \sum_i^{n-1}\sum_k^{m'}z_{i,\;j,k} \text{ for any } j=1,\ldots,n, \end{align*} $$

where $z_{i,\;j,k}\in \mathbf {A}$ for $i=1,\ldots ,n-1$ , $j=1,\ldots ,n$ , $k=1,\ldots ,m'$ . Since the elements $x_i$ are in $\hat R$ , there are $(nm')$ -ary terms $f^{\prime }_i$ such that if we put the element $1$ at the position $(j,k)$ and zeros elsewhere into $f^{\prime }_i$ , we get $z_{i,\;j,k}$ . Similarly, since the elements $y_j$ are in $\hat R$ , there are $((n-1)m')$ -ary terms $g^{\prime }_j$ such that if we put $1$ at the position $(i,k)$ and zeros elsewhere into the term $g^{\prime }_j$ , we get $z_{i,\;j,k}$ . So the equations of $(n-1)\times n\times m'$ -terms are satisfied by the terms $f^{\prime }_i,g^{\prime }_j$ if the variables $x,y$ are substituted by $0$ and $1$ , respectively. Then the equations are satisfied in general, since $0,1$ are the generators of the free algebra $\mathbf {A}$ . This finishes the proof of the claim.

Every element of $\mathbf {A}$ is represented by a binary term function $t(0,1)$ on A in the generators $0, 1$ . We regard these as unary functions $t(1)$ where $0$ is a constant and $1$ is the variable. With this viewpoint, there is a multiplication on A defined as usual function composition: $(t_1t_2)(1) = t_1(t_2(1))$ . This defines the structure of a monoid on A where $1$ is the neutral element and $0$ is an absorbing element. For $i=1,\ldots ,n$ , $j=1,\ldots ,n+1$ , $k=1,\ldots ,m$ , let $b_{i,\;j,k}\in A$ be the element of the monoid defined by

$$ \begin{align*} b_{i,\;j,k} = f_i(0,0,\ldots,0,{\mathop{1\kern0pt}\limits_{(j,k)}},0,\ldots,0) = g_j(0,0,\ldots,0,{\mathop{1\kern0pt}\limits_{(i,k)}},0,\ldots,0), \end{align*} $$

and let $\mathbf {B}$ be the submonoid generated by the elements $b_{i,\;j,k}$ .

We first prove two lemmas showing that the free idempotent algebra $\mathbf {A}$ does not satisfy certain unexpected equations relevant for $\mathbf {B}$ .

Lemma 2.6. $\mathbf {B}$ is the free monoid generated by $b_{i,\;j,k}$ .

Lemma 2.7. Let h be a basic operation of $\mathbf {A}$ , and assume that $h(x_1,\ldots ,x_d) = x,$ where $x\in \mathbf {B}$ . Then either $x_1=\cdots =x_d=x$ , or there is $i\in \{1,\ldots ,d\}$ such that $x_i\in \mathbf {B}$ and $x_j=0$ for all $j\neq i$ .

Proof To prove the claims, we consider the free monoid $\mathbf {M} = (M, \cdot , 1)$ generated by all the symbols $\mathbf {b}_{i,\;j,k}$ for $i=1,\ldots ,n$ , $j=1,\ldots ,n+1$ , $k=1,\ldots ,m$ . Let $\varphi $ denote the monoid homomorphism $\mathbf {M}\to \mathbf {B}$ generated by $\mathbf {b}_{i,\;j,k}\mapsto b_{i,\;j,k}$ . We define an algebra $\mathbf {M}'$ with the signature of $\mathbf {A}$ on the set $M\cup \{0,\bot \}$ . The basic operations are defined as follows:

$$ \begin{align*} h(x,\ldots,x) = x &\quad \text{for basic operation } h \text{ and } x\in M,\cr f_i(0,0,\ldots,0,{\mathop{x\kern0pt}\limits_{(j,k)}},0,\ldots,0) = \mathbf{b}_{i,\;j,k}\cdot x &\quad \text{for } x\in M,\cr g_j(0,0,\ldots,0,{\mathop{x\kern0pt}\limits_{(i,k)}},0,\ldots,0) = \mathbf{b}_{i,\;j,k}\cdot x &\quad \text{for } x\in M,\cr h(\ldots) = \bot &\quad \text{otherwise.} \end{align*} $$

It is straightforward to check that the algebra $\mathbf {M}'$ is idempotent and satisfies the equations of $n\times (n+1)\times m$ -terms, and note that every basic operation is at least ternary since $n,m\geq 2$ . Therefore, there is an algebra homomorphism $\psi \colon \mathbf {A}\to \mathbf {M}'$ generated by $0\mapsto 0$ , $1\mapsto 1$ . By the defined behavior of $\mathbf {M}'$ , we have $\psi (\varphi (x)) = x$ for any $x\in \mathbf {M}$ . Therefore, $\varphi $ is injective, hence an isomorphism, proving Lemma 2.6. Lemma 2.7 will follow from the following claim proven by induction.

Claim 2.8. If $\psi (x) = 0$ , then $x = 0$ . If $\psi (x) \in \mathbf {M}$ , then $x = \varphi (\psi (x))$ .

We prove Claim 2.8 by induction on the complexity of x. For $x=0,1$ , it is satisfied trivially as $0 \neq 1$ in $\mathbf {M}'$ (and so $0 \neq 1$ in $\mathbf {A}$ as well). Let us now assume that $x = h(x_1,\ldots ,x_d)$ for a basic operation h, and that Claim 2.8 is satisfied for $x_1,\ldots ,x_d$ by the induction assumption. If $\psi (x)=\bot $ , the claim holds for x trivially. Assume $\psi (x)\neq \bot $ . By the definition of the basic operations on $\mathbf {M}'$ , there are three options:

  1. $\psi (x_1)=\cdots =\psi (x_d)=\psi (x)$ , or

  2. $h = f_i$ , and $\psi (x_{1,1})=\cdots =\psi (x_{n+1,m}) = 0$ except one $\psi (x_{j,k})\in M$ , or

  3. $h = g_j$ , and $\psi (x_{1,1})=\cdots =\psi (x_{n,m}) = 0$ except one $\psi (x_{i,k})\in M$ .

In the first case, Claim 2.8 follows directly from the induction hypothesis. The other two cases are analogous, so we focus on just the second one. By the induction hypothesis, all $x_{1,1},\ldots ,x_{n+1,m}$ are $0$ except $x_{j,k} = \varphi (\psi (x_{j,k}))$ . Therefore,

$$ \begin{align*}x = b_{i,\;j,k}(x_{j,k}) = \varphi(\mathbf{b}_{i,\;j,k}\cdot \psi(x_{j,k})) = \varphi(\psi(x)), \end{align*} $$

where the second equality follows from the fact that $\varphi $ is a homomorphism (together with the induction hypothesis), and the third from the definition of $f_i$ in $\mathbf {M}'$ . We obtained $x = \varphi (\psi (x))$ , finishing the proof of the claim.

Lemma 2.7 follows by a similar argument. We consider $x = h(x_1,\ldots ,x_d)$ such that $x\in \mathbf {B}$ . The options for $\psi (x)$ split into three cases as before. We already know from Claim 2.8 that $\psi $ has unique preimages on $\mathbf {B}\cup \{0\}$ , so the three cases directly translate into the cases of Lemma 2.7.⊣

We continue with the translation of the original problem about the commutative monoid $\hat {\mathbf {A}}$ into the language of semirings. Let $\hat {\mathbf {B}} = (\hat B,+,\cdot ,0,1)$ be the additive submonoid of $\hat {\mathbf {A}}$ generated by elements of $\mathbf {B}$ with multiplicative structure inherited from the monoid $\mathbf {B}$ . Since $\mathbf {B} = (B, \cdot , 1)$ is a free monoid by Lemma 2.6, and $\hat {\mathbf {A}} = (\hat A, +, 0)$ is defined to be a free commutative monoid, $\hat {\mathbf {B}}$ is the free semiring generated by elements $b_{i,\;j,k}$ .

We equip the semiring $\hat {\mathbf {B}}$ with equations E of the form

$$ \begin{align*} \sum_i^n\sum_k^m b_{i,\;j,k}&= 1 \text{ for all } j=1,\ldots,(n+1), \\ \sum_j^{n+1}\sum_k^m b_{i,\;j,k}&= 1 \text{ for all } i=1,\ldots,n. \end{align*} $$

In other words, these equations say that

$$ \begin{align*} g_j(1,0,\ldots,0)&+g_j(0,1,\ldots,0)+\cdots+g_j(0,0,\ldots,1)= 1, \\ f_i(1,0,\ldots,0)&+f_i(0,1,\ldots,0)+\cdots+f_i(0,0,\ldots,1)= 1. \end{align*} $$

Let $\sim $ be the congruence generated by these equations E.

Lemma 2.9. If $p,q\in \hat {\mathbf {B}}$ such that q is a single expansion of p using equations E and $p\in \hat R$ , then also $q \in \hat R$ .

Proof Let t be a term in $\omega $ variables (using just finitely many of them) that takes the generators of R and outputs some $\bar r\in R$ such that $\hat r = p$ . We prove the claim by induction on the complexity of t. Let $p = uv+s$ and $q = uev+s$ where $u,v$ are monomials, s is a polynomial, and e is a single expansion of $1$ . If $u=1$ , we prove the claim directly. Any single expansion e of $1$ is of the form

$$ \begin{align*}h(1,0,\ldots,0)+h(0,1,0,\ldots,0)+\cdots+h(0,\ldots,0,1), \end{align*} $$

where h is a basic operation of $\mathbf {A}$ . Let us denote the arity of h as d and the summands as $b_i$ for $i=1,\ldots ,d$ . So we can write $e=\sum _{i=1}^d b_i$ . We take d different representations $\bar r_1,\ldots ,\bar r_d\in R$ that differ only in the position of v (if there are multiple occurrences of v in $\bar r$ , we vary the position of one of them and fix the rest). Then $h(\bar r_1,\ldots ,\bar r_d)$ corresponds to the polynomial $ev+s = q$ .

If $u\neq 1$ , we use the induction hypothesis. Assume that $\bar r = h'(\bar r_1,\ldots ,\bar r_{d'})$ for a basic operation $h'$ , where all the construction terms for $\bar r_1,\ldots ,\bar r_{d'}$ are simpler. For $i=1,\ldots ,d'$ and $j=1,2,\ldots $ , let $r_i^j$ denote the jth position of $\bar r_i$ , and let $r^j$ denote the jth position of $\bar r$ . By Lemma 2.7, all $r_i^j\in \mathbf {B}\cup \{0\}$ , so all $\bar r_i\in \hat {\mathbf {B}}$ .

Without loss of generality, $r^1 = uv$ . There are two options given by Lemma 2.7.

  1. $r_1^1 = \cdots = r_{d'}^1 = uv$ , or

  2. all $r_1^1,\ldots ,r_{d'}^1$ are zeros except one $r_i^1\in \mathbf {B}$ .

Case 1. Let $uev = x_1+\cdots +x_d$ where $x_1,\ldots ,x_d$ are monomials. Define

$$ \begin{align*}\bar r^{\prime}_i = (x_1,\dots,x_d, r_i^2, r_i^3, \ldots). \end{align*} $$

By induction hypothesis, since $\bar r_i\in R$ , also $\bar r^{\prime }_i \in R$ for all $i = 1,\ldots ,d'$ . Therefore also

$$ \begin{align*}h'(\bar r^{\prime}_1,\ldots,\bar r^{\prime}_{d'}) = (x_1,\dots,x_d, r^2, r^3, \ldots) \in R, \end{align*} $$

which is a representative of q, so Case 1 is finished.

Case 2. In this case, $r_i^1 = u_2v$ , where $u = u_1u_2$ and $u_1$ is one of the generators of $\mathbf {B}$ . Let $u_2ev = x_1+\cdots +x_d$ , where $x_1,\ldots ,x_d$ are monomials. Let

$$ \begin{align*}\bar r^{\prime}_i = (x_1,\dots,x_d, r_i^2, r_i^3, \ldots). \end{align*} $$

By the induction hypothesis, $\bar r^{\prime }_i\in R$ since $\bar r_i\in R$ . We define the other $\bar r^{\prime }_j$ for $j\neq i$ as

$$ \begin{align*} \bar r^{\prime}_j = (\underbrace{0,\ldots,0}_d, r_j^2, r_j^3, \ldots), \end{align*} $$

they are also in R, as they are just rearrangements of the sequences $\bar r_j$ . Finally, $\bar r'=h'(\bar r^{\prime }_1,\ldots ,\bar r^{\prime }_{d'})\in R$ and $q=\hat r'$ .⊣

Claim 2.10. To prove the theorem, it suffices to show that $n-1 \sim n$ in $\hat {\mathbf {B}}$ .

Indeed, if $n-1\sim n$, there is a common expansion s by Proposition 2.1. Since s is an expansion of $n-1$, there are $x_1,\ldots ,x_{n-1}$ such that $\sum _i^{n-1} x_i=s$, and every $x_i$ is an expansion of $1$. Similarly, since s is an expansion of n, there are $y_1,\ldots ,y_n$ such that $\sum _i^n y_i=s$, and every $y_i$ is an expansion of $1$. Therefore all the elements $x_i,y_i$ lie in $\hat R$ by Lemma 2.9, and the assumptions of Claim 2.5 are satisfied.

We have now translated the original problem into the language of the semiring $\hat {\mathbf {B}}$ modulo $\sim $. Before the general reasoning, we show the idea on the example $n=2, m=1$ (see footnote 1). So $\hat {\mathbf {B}}$ is generated by $b_{11}, b_{12}, b_{13}, b_{21}, b_{22}, b_{23}$, the congruence $\sim $ is generated by

$$ \begin{align*}1\sim b_{11}+b_{12}+b_{13} \sim b_{21}+b_{22}+b_{23} \sim b_{11}+b_{21} \sim b_{12}+b_{22} \sim b_{13}+b_{23}, \end{align*} $$

and we want to prove $1\sim 2$ . Clearly $2\sim 3$ since

$$ \begin{align*}2 \sim (b_{11}+b_{12}+b_{13})+(b_{21}+b_{22}+b_{23}) = (b_{11}+b_{21})+(b_{12}+b_{22})+(b_{13}+b_{23}) \sim 3. \end{align*} $$

Now, let us expand $1$ a bit.

$$ \begin{align*} 1 \sim b_{11}+b_{12}+b_{13} &\sim b_{11}(b_{21}+b_{22}+b_{23}) + (b_{11}+b_{12}+b_{13})b_{12} + (b_{11}+b_{12}+b_{13})b_{13} \\ &= b_{11}(b_{22}+b_{12}+b_{23}+b_{13}) + \cdots \sim 2b_{11}+\cdots. \end{align*} $$

We managed to get $2b_{11}$ in the expanded $1$. Since $2\sim 3$, we get an extra $b_{11}$, and then collapse the expression using the reverse process. Therefore $1\sim 1+b_{11}$. But there is nothing special about the generator $b_{11}$: if we swapped $b_{11}\leftrightarrow b_{21}$, $b_{12}\leftrightarrow b_{22}$, $b_{13}\leftrightarrow b_{23}$, we would get $1\sim 1+b_{21}$ by the same reasoning. Therefore

$$ \begin{align*}1 \sim 1+b_{21} \sim (1+b_{11})+b_{21} = 1+(b_{11}+b_{21}) \sim 2. \end{align*} $$
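The regrouping step in the expansion of $1$ above is easy to verify mechanically. The following sketch (our own illustration, not part of the argument; the helper names are ours) implements exact arithmetic in the free commutative semiring, representing monomials as sorted tuples of generator names, and checks the identity for $n=2$, $m=1$:

```python
from collections import Counter
from itertools import product

def poly(*monomials):
    # A polynomial over the free commutative semiring: a Counter mapping
    # each monomial (a sorted tuple of generator names) to its coefficient.
    return Counter(tuple(sorted(m)) for m in monomials)

def add(p, q):
    # Counter addition adds coefficients (all counts here stay positive).
    return p + q

def mul(p, q):
    out = Counter()
    for (m1, c1), (m2, c2) in product(p.items(), q.items()):
        out[tuple(sorted(m1 + m2))] += c1 * c2
    return out

# Generators b_{ij} for the case n = 2, m = 1.
b11, b12, b13 = poly(("b11",)), poly(("b12",)), poly(("b13",))
b21, b22, b23 = poly(("b21",)), poly(("b22",)), poly(("b23",))

# The displayed expansion of 1:
#   b11(b21+b22+b23) + (b11+b12+b13)b12 + (b11+b12+b13)b13.
lhs = add(add(mul(b11, add(add(b21, b22), b23)),
              mul(add(add(b11, b12), b13), b12)),
          mul(add(add(b11, b12), b13), b13))

# Regrouped form: b11(b22+b12) + b11(b23+b13) + b11*b21 + (b12+b13)^2.
# Applying b12+b22 ~ 1 and b13+b23 ~ 1 then yields 2*b11 + (rest).
rhs = add(add(mul(b11, add(b22, b12)), mul(b11, add(b23, b13))),
          add(mul(b11, b21), mul(add(b12, b13), add(b12, b13))))

assert lhs == rhs  # the regrouping is a plain semiring identity
```

The check confirms that the regrouping uses only commutativity, associativity, and distributivity; the generating relations of $\sim$ are applied afterwards.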

Now, let us return to the general setup with generators $b_{i,\;j,k}$ for $i=1,\ldots ,n$, $j=1,\ldots ,(n+1)$, and $k=1,\ldots ,m$, where the congruence $\sim $ is generated by

$$ \begin{align*} &1\sim \sum_i^n\sum_k^m b_{i,\;j,k} \text{ for all } j=1,\ldots,(n+1), \\ &1\sim \sum_j^{n+1}\sum_k^m b_{i,\;j,k} \text{ for all } i=1,\ldots,n. \end{align*} $$

From these equations, we derive $n\sim n+1$:

$$ \begin{align*} n\sim \sum_i^n\left(\sum_j^{n+1}\sum_k^m b_{i,\;j,k}\right) = \sum_j^{n+1}\left(\sum_i^n\sum_k^m b_{i,\;j,k}\right) = n+1. \end{align*} $$

We fix $i',\;j',k'$. To prove that $(n-1)\sim (n-1)+b_{i',\;j',k'}$, it suffices to obtain the summand $nb_{i',\;j',k'}$ in an expanded form of $n-1$: using $n\sim n+1$, we can then replace it by $(n+1)b_{i',\;j',k'}$ and collapse the expression back, gaining an extra $b_{i',\;j',k'}$.

In the following calculations, by $x \geq y$ we mean $(\exists z: x=y+z)$ .

$$ \begin{align*} n-1 &\sim (n-1)\sum_j^{n+1}\sum_k^m b_{i',\;j,k} \geq (n-1)b_{i',\;j',k'} + \sum_{j\neq j'}^{n+1}\sum_k^m b_{i',\;j,k} \\ & = b_{i',\;j',k'}\cdot\sum_{i\neq i'}^n 1 + 1\cdot\sum_{j\neq j'}^{n+1}\sum_k^m b_{i',\;j,k} \\ &\sim b_{i',\;j',k'}\left(\sum_{i\neq i'}^n\sum_j^{n+1}\sum_k^m b_{i,\;j,k}\right) + \left(\sum_j^{n+1}\sum_k^m b_{i',\;j,k} \right) \left(\sum_{j\neq j'}^{n+1}\sum_k^m b_{i',\;j,k}\right) \\ &\geq b_{i',\;j',k'}\left(\sum_{i\neq i'}^n\sum_{j\neq j'}^{n+1}\sum_k^m b_{i,\;j,k} + \sum_{j\neq j'}^{n+1}\sum_k^m b_{i',\;j,k}\right) \\ &= b_{i',\;j',k'}\left(\sum_{j\neq j'}^{n+1}\sum_i^n\sum_k^m b_{i,\;j,k}\right) \sim b_{i',\;j',k'}\cdot\sum^{n+1}_{j\neq j'}1 = nb_{i',\;j',k'}. \end{align*} $$

Hence $n-1 \sim n-1 + b_{i,\;j,k}$ for any $i,\;j,k$ . We finally get the desired congruence

$$ \begin{align*} n-1 \sim n-1 + b_{1,1,1} \sim n-1 + b_{1,1,1} + b_{1,2,1} \sim \cdots \sim n-1 + \sum_j^{n+1}\sum_k^m b_{1,\;j,k} \sim n. \end{align*} $$

Corollary 2.1. Every $SD(\wedge )$ variety has $(m+m)$ -terms for some m.

3 Getting to $(3+m)$ -terms

In this section, we prove the following theorem.

Theorem 3.1. Every $SD(\wedge )$ variety $\mathcal {V}$ has $(3+m')$ -terms for large enough $m'$.

By Corollary 2.1 we know that the variety has $(m+m)$ -terms for some m; denote them $f, g_1, g_2$. For simplicity, we may assume that $m\geq 2$, that the idempotent terms $f,g_1,g_2$ are the only basic operations of the variety, and that they satisfy only idempotence, the $(m+m)$ -equations, and their consequences. Let $\mathbf {A}$ be the $\mathcal {V}$ -free algebra generated by the elements $0,1$.

We start with a syntactical lemma.

Lemma 3.2. Let o be an elementary operation, and $x = o(x_1,\ldots ,x_k)$ . If at least one $x_i\neq 0$ , then $x\neq 0$ . If at least one $x_i\neq 1$ , then $x\neq 1$ .

Proof There is an automorphism of $\mathbf {A}$ swapping $0$ and $1$, so it suffices to show the first part: $x\neq 0$ if some $x_i\neq 0$. Let $\mathbf {M} = (\{0, 1\}, f^{\mathbf {M}}, g_1^{\mathbf {M}}, g_2^{\mathbf {M}})$, where $f^{\mathbf {M}}, g_1^{\mathbf {M}}, g_2^{\mathbf {M}}$ are defined as maxima of arity $2m$, m, m, respectively. They are idempotent and satisfy the equations of $(m+m)$ -terms, so there is a homomorphism $\varphi \colon \mathbf {A}\to \mathbf {M}$. By induction on the term complexity, we show that for every x, if $x\neq 0$, then $\varphi (x) = 1$. It is clearly satisfied for $x = 0,1$. Now consider any $x' = o'(x^{\prime }_1,\ldots ,x^{\prime }_{k'})$, where $o'$ is an elementary operation, and assume that the claim is satisfied for $x^{\prime }_1,\ldots ,x^{\prime }_{k'}$. If $x'\neq 0$, then at least one $x^{\prime }_j\neq 0$. By the induction hypothesis, $\varphi (x^{\prime }_j) = 1$, so

$$ \begin{align*} \varphi(x') = \max(\varphi(x^{\prime}_1), \ldots, \varphi(x^{\prime}_j), \ldots, \varphi(x^{\prime}_{k'})) = 1. \end{align*} $$

This proves that $0$ is the only preimage of $0$ under $\varphi $. Finally, we prove the lemma. Since at least one $x_i\neq 0$, we have $\varphi (x_i) = 1$, and thus $\varphi (x) = o(\varphi (x_1),\ldots ,\varphi (x_k)) = 1$. Therefore, $x\neq 0$ as $\varphi (0) = 0$.⊣
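The induction in the proof can be illustrated by a small sketch (our own illustration; terms are represented ad hoc as nested tuples of the leaves $0,1$): when every basic operation is interpreted as the maximum of its arguments, as in $\mathbf{M}$, a term evaluates to $0$ exactly when all of its leaves are $0$.

```python
import random

def evaluate(term):
    # A term is either a leaf 0/1 or a tuple of subterms; every basic
    # operation is interpreted as the maximum of its arguments.
    if term in (0, 1):
        return term
    return max(evaluate(sub) for sub in term)

def leaves(term):
    if term in (0, 1):
        yield term
    else:
        for sub in term:
            yield from leaves(sub)

def random_term(depth):
    if depth == 0:
        return random.choice((0, 1))
    return tuple(random_term(depth - 1) for _ in range(random.randint(2, 4)))

random.seed(0)
for _ in range(1000):
    t = random_term(random.randint(1, 5))
    # The value is 0 exactly when every leaf is 0 -- the property that
    # makes 0 the only preimage of 0 under the homomorphism.
    assert (evaluate(t) == 0) == all(x == 0 for x in leaves(t))
```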

Similarly to the proof of Theorem 2.4, we define $R_n$ to be the n-ary relation generated by the tuples with exactly one element equal to 1 and zeros everywhere else, where $n\in \{1,2,\ldots ,\omega \}$.

For an algebra $\mathbf {B}\in \mathcal {V}$, we define a $\mathbf {B}$ -pendant to be any subuniverse $P\leq \mathbf {B}\times \mathbf {A}^{\omega }$ that is invariant under all permutations of the $\omega $ positions of $\mathbf {A}^{\omega }$.

For any $\mathbf {B}$ -pendant P we define $P|_0,P|_1\leq \mathbf {B}$ as follows:

$$ \begin{align*} P|_0 = \{b\in\mathbf{B}: (b, (0,0,\ldots,0))\in P\}, \quad P|_1 = \{b\in\mathbf{B}: \exists \bar r\in R_{\omega}\colon(b, \bar r)\in P\}. \end{align*} $$

If $P|_0$ and $P|_1$ intersect, we call the pendant P zipped. For a subuniverse $C\leq \mathbf {B}$ and an element $b\in \mathbf {B}$ , let $C[b]$ denote the smallest $\mathbf {B}$ -pendant P satisfying $C\leq P|_0$ and $\{b\}\times R_{\omega }\leq P$ .

Clearly, $C\leq C[b]|_0$ and $b\in C[b]|_1$; if $b\in C$, the pendant $C[b]$ is zipped, since b is contained in both $C[b]|_0$ and $C[b]|_1$. It is even true that $C = C[b]|_0$. This follows from the fact that the set

$$ \begin{align*} \mathbf{B}\times(\mathbf{A}^{\omega}\setminus\{(0,0,\ldots)\}) \cup C\times\{(0,0,\ldots)\} \end{align*} $$

is a subuniverse of $\mathbf {B}\times \mathbf {A}^{\omega }$ containing $C[b]$ . Checking that it is a subuniverse is straightforward using Lemma 3.2.

Claim 3.3. To prove the theorem, it suffices to show that the $\mathbf {A}^3$ -pendant $R_3[(0,0,0)]$ is zipped.

Indeed, the pendant $P=R_3[(0,0,0)]$ is just $R_{\omega }$ viewed as a subuniverse of $\mathbf {A}^3\times \mathbf {A}^{\omega }$. So when that pendant is zipped, there is a common element $\bar r_3\in P|_0 = R_3$ with $\bar r_3\in P|_1$. By expanding the definition of $P|_1$, we get $\bar r_{\omega } \in R_{\omega }$ such that $(\bar r_3, \bar r_{\omega })\in P = R_{\omega }$. Let $g^{\prime }_1$ be the term producing $\bar r_3$ from the generators of $R_3$, $g^{\prime }_2$ be the term producing $\bar r_{\omega }$ from the generators of $R_{\omega }$, and $f'$ be the term producing $(\bar r_3, \bar r_{\omega })$ from the generators of $R_{\omega }$. We can choose $m'$ large enough that $g^{\prime }_2$ uses at most the first $m'$ generators and $f'$ uses at most the first $3+m'$ of them. So we perceive $g^{\prime }_2$ as $m'$ -ary and $f'$ as $(3+m')$ -ary. Since

$$ \begin{align*} g^{\prime}_1\left(\begin{pmatrix}1\\0\\0\end{pmatrix}\begin{pmatrix}0\\1\\0\end{pmatrix}\begin{pmatrix}0\\0\\1\end{pmatrix}\right)&=\bar r_3,\quad g^{\prime}_2\left(\begin{pmatrix}1\\0\\\vdots\\0\end{pmatrix}\cdots\begin{pmatrix}0\\\vdots\\0\\1\end{pmatrix}\right)=\bar r_{\omega}, \\ f'\left(\begin{pmatrix}1\\0\\\vdots\\0\end{pmatrix}\cdots\begin{pmatrix}0\\\vdots\\0\\1\end{pmatrix}\right) &= \begin{pmatrix}\bar r_3\\\bar r_{\omega}\end{pmatrix}, \end{align*} $$

the equations of $(3+m')$ -terms are satisfied when we plug in $x=0$ and $y=1$ . However, the elements $0,1$ are the generators of a free algebra, so the equations are satisfied in general.

Lemma 3.4. Consider $\mathbf {B}\in \mathcal {V}$ , $C\leq \mathbf {B}$ , and $b\in \mathbf {B}$ . Let P be a $\mathbf {B}$ -pendant such that $C\leq P|_0$ and $b\in P|_1$ . Then $(C[b])|_1\leq P|_1$ .

Proof To see this, take an element $(b,\bar r_{\omega })\in P$ such that $\bar r_{\omega }\in R_{\omega }$. Let $\bar r_{\omega }$ be of the form $(x_1, x_2, \ldots , x_n, 0, 0, \ldots )$ for some large enough n. Since P is invariant under permutations of $\mathbf {A}^{\omega }$, it contains all the elements of the form

$$ \begin{align*} (b, (0,0,\ldots,0,x_1,x_2,\ldots,x_n,0,0,\ldots)). \end{align*} $$

We construct a homomorphism $\varphi \colon \mathbf {A}\to \mathbf {A}^n$ by mapping its generators

$$ \begin{align*} 0\mapsto(0,0,\ldots,0), 1\mapsto(x_1,\ldots,x_n). \end{align*} $$

We naturally extend $\varphi $ to a mapping $\mathbf {A}^{\omega } \to (\mathbf {A}^n)^{\omega } = \mathbf {A}^{\omega }$. Notice that $\varphi $ is an endomorphism of $R_{\omega }$ since it maps generators of $R_{\omega }$ into $R_{\omega }$.

To finish the proof of the lemma, we take any $b'\in C[b]|_1$ and show that $b'\in P|_1$. There is $\bar r^{\prime }_{\omega }\in R_{\omega }$ such that $(b', \bar r^{\prime }_{\omega })\in C[b]$. Then $\varphi (\bar r^{\prime }_{\omega })\in R_{\omega }$ and moreover $(b', \varphi (\bar r^{\prime }_{\omega }))\in P$. The latter holds since the endomorphism $\psi $ of $\mathbf {B}\times \mathbf {A}^{\omega }$ defined by $\psi ((y,x)) = (y,\varphi (x))$ maps the generators of $C[b]$ into P. In particular, P contains all the elements $\psi ((b, (0,\ldots ,0,1,0,\ldots )))$ for any position of 1, and $\psi ((c, (0,0,\ldots )))$ for any $c\in C$. So $b'\in P|_1$, and this finishes the proof of the lemma. ⊣

Now, let h be the binary term defined as

$$ \begin{align*} h(x,y) = f(\underbrace{\vphantom{y}xx\ldots x}_m,\underbrace{yy\ldots y}_m). \end{align*} $$

Lemma 3.5. For any $\mathbf {B}$ -pendant P and $x\in P|_0, y\in P|_1$ , we have

$$ \begin{align*} h(x,y)\in P|_1,\quad h(y,x)\in P|_1. \end{align*} $$

Proof Without loss of generality, we may assume that $(y, (1,0,\ldots ,0))\in P$; if not, we use Lemma 3.4 and work with $(P|_0)[y]$ instead of P. Then the lemma follows from identities expanding $h(x,y)$ and $h(y,x)$ by the definition of h.

The columns of the identities encode sequences in $\mathbf {B}\times \mathbf {A}^{\omega }$ that

  • are contained in P: this is apparent from the left-hand side,

  • have the elements $h(x,y), h(y,x)$ at their first coordinates,

  • have their remaining part contained in $R_{\omega }$: this is apparent from the right-hand side.

Therefore $h(x,y), h(y,x)\in P|_1$ .⊣

Lemma 3.6. Consider $\mathbf {B}_1, \mathbf {B}_2\in \mathcal {V}$ and let P be a $(\mathbf {B}_1\times \mathbf {B}_2)$ -pendant. Assume that there exist $x,y\in \mathbf {B}_1$ and $u,v\in \mathbf {B}_2$ such that $(x,u),(y,u),(x,v)\in P|_0$ and $(y,v)\in P|_1$ . Then P is zipped.

Proof The pair $(h(x,y), h(v,u))$ is in the intersection $P|_0\cap P|_1$ . Indeed, it is contained in $P|_0$ since we can write

$$ \begin{align*}\begin{pmatrix}h(x,y)\cr h(v,u)\end{pmatrix} = h\begin{pmatrix}x & y\\ v & u\end{pmatrix}. \end{align*} $$

On the other hand, we can use the following expansion of $(h(x,y), h(v,u))$:

$$ \begin{align*}\begin{pmatrix}h(x,y)\cr h(v,u)\end{pmatrix} = h\left( h\begin{pmatrix}x & x\\ v & u\end{pmatrix}, h\begin{pmatrix}y & y\\ v & u\end{pmatrix} \right). \end{align*} $$

By Lemma 3.5 used twice, the pair is also an element of $P|_1$ , which completes the proof. ⊣

Lemma 3.7. Consider $\mathbf {B}_1, \mathbf {B}_2 \in \mathcal {V}$ and let R be a subuniverse $R\leq \mathbf {B}_1\times \mathbf {B}_2$ . Assume that there are elements $x\in \mathbf {B}_1, u,v\in \mathbf {B}_2$ such that $(x,u),(x,v)\in R$ . Then for any $y\in \mathbf {B}_1$ the $(\mathbf {B}_1\times \mathbf {B}_2)$ -pendant $R[(y,u)]$ is zipped if and only if the $(\mathbf {B}_1\times \mathbf {B}_2)$ -pendant $R[(y,v)]$ is zipped.

Proof It suffices to show the forward implication. Since $R[(y,u)]$ is zipped, there is some $(y_0,u_0)\in R\cap R[(y,u)]|_1$ . Consider the 4-ary relation

$$ \begin{align*} R' = \{(a_1,a_1,a_2,a_2):(a_1,a_2)\in R\}, \end{align*} $$

and the $(\mathbf {B}_1^2\times \mathbf {B}_2^2)$ -pendant $P=R'[(x,y,u,v)]$ . Since $(y_0,u_0)\in R[(y,u)]|_1$ , we can find a quadruple $(x_0,y_0,u_0,v_0)$ in $P|_1$ for some additionally generated elements $x_0, v_0$ . So

$$ \begin{align*} (x_0,u_0)\in R[(x,u)]|_1, \quad (x_0,v_0)\in R[(x,v)]|_1, \quad (y_0,v_0)\in R[(y,v)]|_1. \end{align*} $$

Since $(x,u),(x,v) \in R$ , also $(x_0,u_0), (x_0,v_0)\in R$ . Let Q be the pendant $R[(y,v)]$ . We have $(x_0,u_0),(x_0,v_0),(y_0,u_0)\in Q|_0$ and $(y_0,v_0)\in Q|_1$ . Therefore, the pendant Q is zipped by Lemma 3.6. ⊣

Finally, we define a relation $\ltimes $ on $\mathbf {A}$ as follows. We write $x\ltimes y$ if there are $u,v\in \mathbf {A}$ such that

  (i) $(x,u,v)\in R_3$ ,

  (ii) the $\mathbf {A}^3$ -pendant $R_3[(y,u,v)]$ is zipped.

Notice that $\ltimes $ is reflexive: indeed, for any x, there are $u,v$ such that $(x,u,v)\in R_3$. Then also $R_3[(x,u,v)]$ is zipped, so $x\ltimes x$.

Lemma 3.8. Consider $x,y,c,x',y'\in \mathbf {A}$ such that $x\ltimes y$, $(x,c,x')\in R_3$, and $(y,c,y')\in R_3$. Then $y'\ltimes x'$.

Proof Consider $u,v$ as in the definition of the relation $\ltimes $ . We will show that $y'\ltimes x'$ by finding appropriate $u',v'$ . We set $u'=c$ and $v'=y$ , so the condition (i) is satisfied since $(y',c,y)\in R_3$ by symmetry of $R_3$ . To establish $y'\ltimes x'$ we need to prove that $R_3[(x',c,y)]$ is zipped, equivalently, that $R_3[(y,c,x')]$ is zipped. We interpret $\mathbf {A}^3$ as $\mathbf {A}\times \mathbf {A}^2$ and use Lemma 3.7. We plug in

$$ \begin{align*}x\mapsto x, y\mapsto y, u\mapsto (u,v), v\mapsto (c,x'). \end{align*} $$

Indeed, $(x,u,v), (x,c,x')\in R_3$ and $R_3[(y,u,v)]$ is zipped. So the assumptions of Lemma 3.7 are satisfied, and consequently $R_3[(y,c,x')]$ is zipped. ⊣

We are finally ready to prove the theorem. We start with $g_1(100\ldots 0)\ltimes f(100\ldots 0)$ and get to $1\ltimes h(1,0)$ by repeated use of Lemma 3.8 with suitable triples in $R_3$.

So, there are $u,v$ such that $(1,u,v)\in R_3$ and $R_3[(h(1,0),u,v)]$ is zipped. The only element of $R_3$ of the form $(1,u,v)$ is $(1,0,0)$ . This is because the set

$$ \begin{align*}\{(1,0,0)\}\cup\big((\mathbf{A}\setminus\{1\})\times\mathbf{A}\times\mathbf{A}\big) \end{align*} $$

is a subuniverse of $\mathbf {A}^3$ containing $R_3$. Checking that it is a subuniverse is straightforward using Lemma 3.2. Therefore $u=v=0$ and $R_3[(h(1,0),0,0)]$ is zipped. However, by Lemma 3.5, $(h(1,0),0,0)\in R_3[(0,0,0)] |_1$, so $R_3[(0,0,0)]$ is zipped (by Lemma 3.4 and the universality of the pendant construction), and the proof is finished by Claim 3.3.

4 A counterexample for $(2+m)$ -terms

Based on the result of the previous section that some $(3+m)$ -terms are satisfied in every $SD(\wedge )$ variety, one could ask whether the result could be strengthened to $(2+m)$ -terms. However, as we demonstrate in this section, such a strengthening is not possible. Not only is there an algebra in an $SD(\wedge )$ variety that does not have $(2+m)$ -terms; there is even such an algebra that belongs to a congruence distributive variety.

An even stronger Maltsev condition than congruence distributivity is the existence of a near-unanimity term. A near-unanimity term (NU term for short) is a term t satisfying

$$ \begin{align*} t(x,x,\ldots,x,{\mathop{y\kern0pt}\limits_{i}},x,\ldots,x) = x \end{align*} $$

for any position i.

There is no algebra that has an NU term but no $(2+m)$ -terms for any m, since taking $g_2$ to be the NU term and $f,g_1$ to be the projections onto the first coordinate meets the requirements of the $(2+m)$ -terms. However, in our first example, we demonstrate that the existence of an NU term does not imply $(2+m)$ -terms for a fixed m.

Consider the following symmetric n-ary operations $t^{\mathbf {A}}_n, t^{\mathbf {B}}_n$ for $n\geq 5$ on the rational numbers. Let $x_1\leq x_2\leq \cdots \leq x_n$ be a sorted input of such an operation. Then

$$ \begin{align*}t^{\mathbf{A}}_n(x_1,\ldots,x_n) = \frac{x_2+\cdots+x_{n-1}}{n-2}, \quad t^{\mathbf{B}}_n(x_1,\ldots,x_n) = \frac{x_3+\cdots+x_{n-2}}{n-4}. \end{align*} $$

If the input is not sorted, we first sort it and then perform the calculation. These operations are clearly NU, that is,

$$ \begin{align*}t^{\mathbf{A}}_n(x,x,\ldots,x,y,x,\ldots x) = t^{\mathbf{B}}_n(x,x,\ldots,x,y,x,\ldots x) = x \end{align*} $$

for any position of y.
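A sketch of these operations in exact rational arithmetic (our own illustration; the function names are ours), together with a randomized check of the near-unanimity equations:

```python
from fractions import Fraction
import random

def t_A(n, *xs):
    # Average of the n-2 middle entries: drop one minimum and one maximum.
    assert len(xs) == n and n >= 5
    s = sorted(xs)
    return sum(s[1:n - 1], Fraction(0)) / (n - 2)

def t_B(n, *xs):
    # Average of the n-4 middle entries: drop two smallest, two largest.
    assert len(xs) == n and n >= 5
    s = sorted(xs)
    return sum(s[2:n - 2], Fraction(0)) / (n - 4)

random.seed(1)
for _ in range(200):
    n = random.randint(5, 9)
    x, y = Fraction(random.randint(-5, 5)), Fraction(random.randint(-5, 5))
    args = [x] * n
    args[random.randrange(n)] = y  # one deviating coordinate
    # Near-unanimity: the single deviation is discarded by the trimming.
    assert t_A(n, *args) == x and t_B(n, *args) == x
```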

To prove the key properties of $t^{\mathbf {A}}_n$ and $t^{\mathbf {B}}_n$, we need a lemma.

Lemma 4.1. Let $x_1,\ldots ,x_n,y_1,\ldots ,y_n\in \mathbb {Q}$ be such that $x_i\leq y_i$ for all $i=1,\ldots ,n$. Let $x^{\prime }_1,\ldots ,x^{\prime }_n$ be $x_1,\ldots ,x_n$ sorted in increasing order, and let $y^{\prime }_1,\ldots ,y^{\prime }_n$ be $y_1,\ldots ,y_n$ sorted in increasing order. Then $x^{\prime }_i\leq y^{\prime }_i$ for all i, and the set $\{i : x^{\prime }_i < y^{\prime }_i\}$ is at least as large as the set $\{i : x_i < y_i\}$.

Proof Without loss of generality, we may permute both sequences by the same permutation, so assume that $x_1,\ldots ,x_n$ is already sorted; therefore $x_i=x^{\prime }_i$ for all i. It is possible to sort the sequence $y_i$ by consecutive applications of sorting transpositions, that is, swapping $y_i$ with $y_j$ whenever $i<j$ and $y_i>y_j$; an example of such a process is the well-known bubble sort algorithm. We show that one sorting transposition preserves the condition $x_i\leq y_i$ for all i, and does not shrink the set $\{i : x_i < y_i\}$. In one such transposition, the swapped positions $i,\;j$ are independent of all the others, so we may assume that there are no others. In particular $n=2$, $x_1\leq x_2$, $y_1> y_2$, $x_1\leq y_1$, $x_2\leq y_2$, $y^{\prime }_1=y_2$, and $y^{\prime }_2=y_1$. First, $x_1\leq x_2\leq y_2$ and $x_2\leq y_2<y_1$, so $x_1\leq y^{\prime }_1$ and $x_2 < y^{\prime }_2$. This shows that $x_i\leq y^{\prime }_i$ for all i. Now, let us investigate the number of strict inequalities. Since $x_2 < y^{\prime }_2$, the size of the set $\{i : x^{\prime }_i<y^{\prime }_i\}$ is at least 1. If the size equals two, we are done. Otherwise $x_1=y^{\prime }_1$, so $x_1=x_2=y_2$. Since $x_2=y_2$, the size of the set $\{i : x_i<y_i\}$ is at most one, so it is not larger than $\{i : x^{\prime }_i<y^{\prime }_i\}$. ⊣
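A randomized sanity check of Lemma 4.1 (our own illustration): sorting both sequences preserves coordinatewise dominance and does not decrease the number of strict inequalities.

```python
import random

def check_sorting_lemma(xs, ys):
    # Hypothesis of the lemma: coordinatewise dominance.
    assert all(x <= y for x, y in zip(xs, ys))
    xs_s, ys_s = sorted(xs), sorted(ys)
    # Conclusion 1: dominance survives sorting.
    assert all(x <= y for x, y in zip(xs_s, ys_s))
    # Conclusion 2: the number of strict inequalities does not decrease.
    strict = sum(x < y for x, y in zip(xs, ys))
    strict_sorted = sum(x < y for x, y in zip(xs_s, ys_s))
    assert strict_sorted >= strict
    return strict, strict_sorted

random.seed(2)
for _ in range(2000):
    n = random.randint(1, 8)
    xs = [random.randint(0, 5) for _ in range(n)]
    ys = [x + random.randint(0, 3) for x in xs]
    check_sorting_lemma(xs, ys)
```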

Claim 4.2. For any $x_1,\ldots ,x_n,y_1,\ldots ,y_n\in \mathbb {Q}$ such that $x_i\leq y_i$ for all i, we have $t^{\mathbf {A}}_n(x_1,\ldots ,x_n) \leq t^{\mathbf {A}}_n(y_1,\ldots ,y_n)$ . The inequality is strict if $x_i < y_i$ for at least three i.

Indeed, by Lemma 4.1, we can assume that both $x_i$ and $y_i$ are sorted. The first part is then clear from the definition of $t^{\mathbf {A}}$. If $x_i < y_i$ for at least three indices i, then this happens for at least one $i\neq 1,n$, and that strict inequality $x_i<y_i$ makes the resulting inequality strict.

Consider the algebras $\mathbf {A}_n = (\mathbb {Q}, t^{\mathbf {A}}_n)$ and $\mathbf {B}_n = (\mathbb {Q}, t^{\mathbf {B}}_n)$ . For $m\geq 1$ define the sets $U\subset \mathbb {Q}^2, V_m\subset \mathbb {Q}^m, W_m\subset \mathbb {Q}^{2+m}$ as follows:

$$ \begin{align*} U &= \{(a_1,a_2) : a_1+a_2=1\}, \\ V_m &= \{(b_1,\ldots,b_m) : b_1,\ldots,b_m\geq 0\text{ and there is a nonzero } b_i\}, \\ W_m &= \{(a_1,a_2,b_1,\ldots,b_m) : (a_1+a_2 < 1\text{ and }b_1,\ldots,b_m\geq 0) \\ &\qquad\text{ or } (a_1+a_2 = 1 \text{ and } b_1=\cdots=b_m=0)\}. \end{align*} $$

Claim 4.3. For any $n\geq 5$ , the set U is a subuniverse of $\mathbf {A}_n^2$ .

The claim follows from the fact that if $x_1,x_2,\ldots ,x_n$ is non-decreasing, then also $1-x_n,\ldots ,1-x_2,1-x_1$ is non-decreasing.

Claim 4.4. For any $n\geq 5$ and $2m < n$, the set $V_m$ is a subuniverse of $\mathbf {B}_n^m$.

Indeed, if all of $x_1,\ldots ,x_n$ are nonnegative and at least three of them are non-zero, then $t^{\mathbf {B}}(x_1,\ldots ,x_n)$ is also non-zero. Consider m-tuples $\bar x_1, \bar x_2, \ldots , \bar x_n\in V_m$. Every m-tuple $\bar x_i$ has a non-zero position $p_i$. Since $2m < n$, one of the positions has to repeat at least three times, say $p=p_{i_1}=p_{i_2}=p_{i_3}$. So the m-tuple $t^{\mathbf {B}}(\bar x_1,\ldots ,\bar x_n)$ has a non-zero element at the position p.

Claim 4.5. For any $m\geq 1, n\geq 5$ , the set $W_m$ is a subuniverse of $\mathbf {A}_n^2\times \mathbf {B}_n^m$ .

For the same reason as in Claim 4.3, the projection of $W_m$ to $\mathbf {A}^2$ is a subuniverse of $\mathbf {A}^2$. The subtle point is how it interacts with the $\mathbf {B}^m$ -part. Let us take $(2+m)$ -tuples $\bar x_1,\ldots ,\bar x_n\in W_m$ and show that $t(\bar x_1,\ldots ,\bar x_n)$ belongs to $W_m$ as well. Let $a_{i,\;j}, b_{i,\;j}$ be matrices such that $\bar x_j = (a_{1,\;j},a_{2,\;j},b_{1,\;j},\ldots ,b_{m,\;j})$. We analyze two cases:

  1. For at most two columns j it happens that $a_{1,\;j}+a_{2,\;j} < 1$. Then all the other columns have zero $\mathbf {B}^m$ -part, so $t^{\mathbf {B}}(b_{i,1},\ldots ,b_{i,n}) = 0$ for every $\mathbf {B}$ -row i. Hence $t(\bar x_1,\ldots ,\bar x_n)\in W_m$.

  2. For at least three columns j it happens that $a_{1,\;j}+a_{2,\;j} < 1$. In other words, at these three positions j we have $a_{1,\;j} < 1-a_{2,\;j}$, while the non-strict inequality is satisfied everywhere. Thus, by Claim 4.2, we have

    $$ \begin{align*}t^{\mathbf{A}}(a_{1,1},\ldots,a_{1,n}) < t^{\mathbf{A}}(1-a_{2,1},\ldots,1-a_{2,n}) = 1-t^{\mathbf{A}}(a_{2,1},\ldots,a_{2,n}). \end{align*} $$
    Equivalently,
    $$ \begin{align*} t^{\mathbf{A}}(a_{1,1},\ldots,a_{1,n}) + t^{\mathbf{A}}(a_{2,1},\ldots,a_{2,n}) < 1, \end{align*} $$
    so $t(\bar x_1,\ldots ,\bar x_n)$ belongs to $W_m$.

So, in both cases, the result belongs to $W_m$ , and the claim is established.
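Claims 4.3–4.5 can also be tested numerically. The following sketch (our own illustration; the sampling procedure is an arbitrary choice, and the function names are ours) samples tuples from $W_m$ for $n=5$, $m=2$ and checks that applying $t^{\mathbf{A}}_5$ on the two $\mathbf{A}$-rows and $t^{\mathbf{B}}_5$ on the $\mathbf{B}$-rows stays inside $W_m$:

```python
from fractions import Fraction
import random

N, M = 5, 2  # arity of the basic operations and m; note 2*M < N

def t_A5(*xs):  # average of the N-2 middle entries of the sorted tuple
    s = sorted(xs)
    return sum(s[1:N - 1], Fraction(0)) / (N - 2)

def t_B5(*xs):  # average of the N-4 middle entries of the sorted tuple
    s = sorted(xs)
    return sum(s[2:N - 2], Fraction(0)) / (N - 4)

def in_W(tup):
    a1, a2, *bs = tup
    return (a1 + a2 < 1 and all(b >= 0 for b in bs)) or \
           (a1 + a2 == 1 and all(b == 0 for b in bs))

def random_W_element():
    if random.random() < 0.5:  # the a1 + a2 = 1 branch: zero B-part
        a1 = Fraction(random.randint(-3, 3), 2)
        return (a1, 1 - a1) + (Fraction(0),) * M
    # the a1 + a2 < 1 branch: nonnegative B-part
    a1, a2 = Fraction(random.randint(-3, 0)), Fraction(random.randint(-3, 0))
    return (a1, a2) + tuple(Fraction(random.randint(0, 3)) for _ in range(M))

random.seed(3)
for _ in range(500):
    cols = [random_W_element() for _ in range(N)]
    rows = list(zip(*cols))  # two A-rows followed by M B-rows
    result = tuple(t_A5(*row) for row in rows[:2]) + \
             tuple(t_B5(*row) for row in rows[2:])
    assert in_W(result)  # W_m is closed under the coordinatewise operation
```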

We are now ready to construct the counterexamples.

Theorem 4.6. For any $n,m$ such that $n\geq 5$ and $2m < n$, there is an algebra having an n-ary NU term but no $(2+m)$ -terms.

Proof The algebra is $\mathbf {C}_n = \mathbf {A}_n\times \mathbf {B}_n$. For a contradiction, suppose that $\mathbf {C}_n$ has $(2+m)$ -terms $f,g_1,g_2$. These terms are common to all the algebras in the variety generated by $\mathbf {C}_n$. In particular, there are term operations $g^{\mathbf {A}}_1,g^{\mathbf {A}}_2,f^{\mathbf {A}}$ on $\mathbf {A}_n$ and $g^{\mathbf {B}}_1,g^{\mathbf {B}}_2,f^{\mathbf {B}}$ on $\mathbf {B}_n$ such that

$$ \begin{align*} g^{\mathbf{A}}_1(1,0) &= f^{\mathbf{A}}(1,0,0,0,0,\ldots,0,0) = a_1,\\ g^{\mathbf{A}}_1(0,1) &= f^{\mathbf{A}}(0,1,0,0,0,\ldots,0,0) = a_2,\\ g^{\mathbf{B}}_2(1,0,0,\ldots,0,0) &= f^{\mathbf{B}}(0,0,1,0,0,\ldots,0,0) = b_1,\\ g^{\mathbf{B}}_2(0,1,0,\ldots,0,0) &= f^{\mathbf{B}}(0,0,0,1,0,\ldots,0,0) = b_2,\\ &\vdots\\ g^{\mathbf{B}}_2(0,0,0,\ldots,0,1) &= f^{\mathbf{B}}(0,0,0,0,0,\ldots,0,1) = b_m. \end{align*} $$

The tuple $(a_1,a_2,b_1,\ldots ,b_m)$ belongs to $W_m$ since $W_m$ contains all the columns on the right-hand sides. Similarly, $(a_1,a_2)\in U$ and $(b_1,\ldots ,b_m)\in V_m$ by the left-hand sides. But no tuple of $W_m$ can be composed of a tuple in U and a tuple in $V_m$: the U-part forces $a_1+a_2=1$, so the definition of $W_m$ forces $b_1=\cdots =b_m=0$, contradicting $(b_1,\ldots ,b_m)\in V_m$.⊣

Theorem 4.7. There is an algebra in a congruence distributive variety that has no $(2+m)$ -terms.

Proof The proof is similar; we take the algebra $\mathbf {C}_6 = \mathbf {A}_6\times \mathbf {B}_6$ and modify it a bit so that $V_m$ becomes a subuniverse for every m. Let s be the following 4-ary minor of t:

$$ \begin{align*} s(x,y,z,w) = t(x,y,z,w,w,w). \end{align*} $$

Consider the algebra $\mathbf {C}' = (\mathbb {Q}^2, s^{\mathbf {C}})$ . The algebra $\mathbf {C}'$ is congruence distributive, since it has the following directed Jónsson terms written as minors of the term s:

$$ \begin{align*} s(xyzz) &= t(xyzzzz),\\ s(xxyz) &= t(xxyzzz),\\ s(zzyx) &= t(xxxyzz),\\ s(zyxx) &= t(xxxxyz). \end{align*} $$

For the definition of directed Jónsson terms, we refer the reader to [5].

On the other hand, $\mathbf {C}'$ does not have any $(2+m)$ -terms. For a contradiction, let us assume that there are such term operations $f^{\mathbf {C}}, g_1^{\mathbf {C}},g_2^{\mathbf {C}}$ in the algebra $\mathbf {C}'$. Then there are such terms even in $\mathbf {A}'=(\mathbb {Q},s^{\mathbf {A}})$ and $\mathbf {B}'=(\mathbb {Q},s^{\mathbf {B}})$. We consider the same $2+m$ equalities as in the previous proof, resulting in $a_1,a_2,b_1,\ldots ,b_m$. Since the basic operations of the algebras $\mathbf {A}',\mathbf {B}'$ are defined from the operations of the algebras $\mathbf {A},\mathbf {B}$, the set U is still a subuniverse of $(\mathbf {A}')^2$ and the set $W_m$ is still a subuniverse of $(\mathbf {A}')^2\times (\mathbf {B}')^m$. So $(a_1,a_2)\in U$ and $(a_1,a_2,b_1,\ldots ,b_m)\in W_m$. We cannot directly use Claim 4.4 to ensure that $V_m$ is a subuniverse of $(\mathbf {B}')^m$, since the claim assumes $2m<6$. However, it is still true, and we can check it manually: if $\bar x,\bar y,\bar z,\bar w\in V_m$ and $w_i>0$ for some i, then

$$ \begin{align*} s^{\mathbf{B}}(x_i,y_i,z_i,w_i) = t^{\mathbf{B}}(x_i,y_i,z_i,w_i,w_i,w_i)> 0, \end{align*} $$

so $s^{\mathbf {B}}(\bar x,\bar y,\bar z,\bar w)$ has a non-zero position. Therefore $V_m$ is a subuniverse of $(\mathbf {B}')^m$ , $(b_1,\ldots ,b_m)\in V_m$ , and we get the same contradiction as in the previous proof. ⊣
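The manual check above can be confirmed on a small grid (our own sketch; the function name is ours): for nonnegative $x,y,z$ and positive $w$, at most three of the six values $x,y,z,w,w,w$ lie below $w$, so the two middle entries of the sorted 6-tuple sum to a positive number.

```python
from fractions import Fraction
from itertools import product

def s_B(x, y, z, w):
    # s(x,y,z,w) = t^B_6(x,y,z,w,w,w): average of the two middle entries
    # of the sorted 6-tuple.
    s = sorted((x, y, z, w, w, w))
    return sum(s[2:4], Fraction(0)) / 2

# Exhaustive check on a small grid: nonnegative x, y, z and positive w
# always give a positive result.
vals = [Fraction(v) for v in range(4)]
for x, y, z in product(vals, repeat=3):
    for w in vals[1:]:
        assert s_B(x, y, z, w) > 0
```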

5 Further work

Since Question 1.1 has remained open, the main objective is still to find out whether or not the $SD(\wedge )$ property can be characterized by a strong Maltsev condition. The $(3+n)$ -terms are general enough for $SD(\wedge )$, while the $(2+n)$ -terms are too strong. Therefore, we suggest $(3+3)$ -terms as a candidate for a strong Maltsev condition, or a good starting point for proving the opposite.

Question 5.1. Is there an $SD(\wedge )$ variety that does not have $(3+3)$ -terms?

It is also reasonable to start with a stronger property than congruence meet-semidistributivity, namely plain congruence distributivity, or the condition in Theorem 1.3.

Question 5.2. Are $(3+3)$ -terms implied by

  (a) directed Jónsson terms? (equivalent to congruence distributivity, see [5])

  (b) 3-ary and 4-ary weak NU terms $w_3,w_4$ such that $w_3(y,x,x)=w_4(y,x,x,x)$?

Miklós Maróti and Ralph McKenzie proved (see [9, Theorem 1.3]) that congruence distributivity implies the existence of weak NU terms of every arity at least three. However, the catalog of counterexamples is so limited that even the "glued" weak NU terms, as in item (b), are still plausible candidates for the strong Maltsev condition. On the other hand, congruence distributivity is the weakest general condition under which we know that weak NU terms exist. So we ask the following.

Question 5.3. Is the existence of a weak NU term implied by the $SD(\wedge )$ property? In particular, is it implied by $(3+3)$ -terms?

Acknowledgments

Partially supported by the Czech Grant Agency (GAČR) under grant no. 18-20123S, by the National Science Centre Poland under grant no. UMO-2014/13/B/ST6/01812, and by the PRIMUS/SCI/12 project of the Charles University.

Footnotes

1 Although we have forbidden $m=1$ at the beginning of the proof in order to prevent formal difficulties with Lemmas 2.6 and 2.7, the case $m=1$ serves well as an example of the semiring calculations.

References

[1] Baker, K. A., McNulty, G., and Wang, J., An extension of Willard's finite basis theorem: Congruence meet-semidistributive varieties of finite critical depth. Algebra Universalis, vol. 52 (2005), pp. 289–302.
[2] Barto, L., The constraint satisfaction problem and universal algebra. The Bulletin of Symbolic Logic, vol. 21 (2015), no. 3, pp. 319–337.
[3] Hobby, D. and McKenzie, R., The Structure of Finite Algebras, Contemporary Mathematics, vol. 76, American Mathematical Society, Providence, RI, 1988.
[4] Kazda, A., Taylor term does not imply any nontrivial linear one-equality Maltsev condition. Algebra Universalis, vol. 80 (2019), no. 1, p. 9.
[5] Kazda, A., Kozik, M., McKenzie, R., and Moore, M., Absorption and Directed Jónsson Terms, Springer, Cham, 2018, pp. 203–220.
[6] Kearnes, K. A. and Kiss, E. W., The Shape of Congruence Lattices, American Mathematical Society, Providence, RI, 2013.
[7] Kearnes, K., Marković, P., and McKenzie, R., Optimal strong Mal'cev conditions for omitting type 1 in locally finite varieties. Algebra Universalis, vol. 72 (2014), no. 1, pp. 91–100.
[8] Kozik, M., Krokhin, A., Valeriote, M., and Willard, R., Characterizations of several Maltsev conditions. Algebra Universalis, vol. 73 (2015), no. 3, pp. 205–224.
[9] Maróti, M. and McKenzie, R., Existence theorems for weakly symmetric operations. Algebra Universalis, vol. 59 (2008), nos. 3–4, pp. 463–489.
[10] Olšák, M., The weakest nontrivial idempotent equations. Bulletin of the London Mathematical Society, vol. 49 (2017), no. 6, pp. 1028–1047.