
Dirichlet law for factorisation of integers, polynomials and permutations

Published online by Cambridge University Press:  06 September 2023

SUN–KAI LEUNG*
Affiliation:
Département de Mathématiques et de Statistique, Université de Montréal, CP 6128 succ. Centre-Ville, Montréal, QC H3C 3J7, Canada. e-mail: [email protected]

Abstract

Let $k \geqslant 2$ be an integer. We prove that factorisation of integers into k parts follows the Dirichlet distribution $\mathrm{Dir}\left({1}/{k},\ldots,{1}/{k}\right)$ by multidimensional contour integration, thereby generalising the Deshouillers–Dress–Tenenbaum (DDT) arcsine law on divisors where $k=2$. The same holds for factorisation of polynomials or permutations. Dirichlet distribution with arbitrary parameters can be modelled similarly.

Type
Research Article
Copyright
© The Author(s), 2023. Published by Cambridge University Press on behalf of Cambridge Philosophical Society

1. Introduction

Given an integer $n \geqslant 1$, it is natural to study the distribution of its divisors over the interval [1,n] (in logarithmic scale). Let d be a random integer chosen uniformly from the divisors of n. Then $D_n\;:\!=\;{\log d}/{\log n}$ is a random variable taking values in $[0,1].$ While one can show that the sequence of random variables $\{D_n\}_{n=1}^{\infty}$ does not converge in distribution, Deshouillers, Dress and Tenenbaum [5] proved that the mean of the corresponding distribution functions converges to that of the arcsine law. More precisely, uniformly for $u \in [0,1],$ we have

\begin{align*}\frac{1}{x}\sum_{n \leqslant x}\mathbb{P}\left(D_n \leqslant u\right)=\frac{2}{\pi} \arcsin{\sqrt{u}}+O\left(\frac{1}{\sqrt{\log x}}\right),\end{align*}

where

\begin{align*}\mathbb{P}\left(D_n \leqslant u\right)\;:\!=\;\frac{1}{\tau(n)}\sum_{\substack{d | n \\[2pt] d \leqslant n^{u}}}1\end{align*}

is the distribution function of $D_n$ and the error term here is optimal (see also [20, chapter 6·2]).
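Although no computation appears in the paper, the averaged law above is easy to probe numerically. The sketch below (written for this summary; all function names are ours) compares the empirical average $\frac{1}{x}\sum_{n \leqslant x}\mathbb{P}(D_n \leqslant u)$ at $u={1}/{2}$ with the arcsine value $\frac{2}{\pi}\arcsin\sqrt{1/2}=\frac{1}{2}$.

```python
import math

def divisors(n):
    """Sorted list of the divisors of n."""
    small, large = [], []
    for d in range(1, math.isqrt(n) + 1):
        if n % d == 0:
            small.append(d)
            if d * d != n:
                large.append(n // d)
    return small + large[::-1]

def mean_divisor_cdf(x, u):
    """(1/x) * sum_{n <= x} P(D_n <= u), with D_n = log d / log n for a uniform divisor d of n."""
    total = 0.0
    for n in range(1, x + 1):
        ds = divisors(n)
        total += sum(1 for d in ds if math.log(d) <= u * math.log(n)) / len(ds)
    return total / x

def arcsine_cdf(u):
    """The DDT limiting distribution (2/pi) * arcsin(sqrt(u))."""
    return (2 / math.pi) * math.asin(math.sqrt(u))

print(mean_divisor_cdf(2000, 0.5), arcsine_cdf(0.5))
```

At $u=1/2$ the two quantities agree almost exactly, since the divisors of $n$ pair up around $\sqrt{n}$; at other values of $u$ the convergence is genuinely of order $1/\sqrt{\log x}$.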

Recently, Nyandwi and Smati [16] studied the distribution of pairs of divisors of a given integer on average. They proved that the mean of the corresponding distribution functions converges uniformly to that of a two-dimensional beta law, again with the optimal rate of convergence.

Our main aim here is to generalise their work to higher dimensions, which they note would be very technical following the usual approach (see [17, p. 2]). Fix $k \geqslant 2.$ Given an integer $n \geqslant 1,$ let $(d_1,\ldots,d_k)$ be a random k-tuple chosen uniformly from the set of all possible factorisations $\{(m_1,\ldots,m_k) \in \mathbb{N}^k \, : \, n=m_1\cdots m_k\}.$ Then $\boldsymbol{D}_n=(D_n^{(1)},\ldots,D_n^{(k)})\;:\!=\;\left({\log d_1}/{\log n},\ldots,{\log d_k}/{\log n}\right)$ is a multivariate random variable taking values in $[0,1]^k.$ Similarly, we are interested in the mean

\begin{align*}\frac{1}{x}\sum_{n \leqslant x}\mathbb{P}\left(D_n^{(1)}\leqslant u_1,\ldots,D_n^{(k-1)}\leqslant u_{k-1}\right),\end{align*}

where

\begin{align*}\mathbb{P}\left(D_n^{(1)}\leqslant u_1,\ldots,D_n^{(k-1)}\leqslant u_{k-1}\right)\;:\!=\;\frac{1}{\tau_k(n)}\underset{d_1\cdots d_{k-1}|n}{\sum_{d_1 \leqslant n^{u_1}}\cdots\sum_{d_{k-1}\leqslant n^{u_{k-1}}}}1\end{align*}

is the distribution function of $\boldsymbol{D}_n.$

Note that since $n=d_1\cdots d_k,$ the multivariate random variable $\boldsymbol{D}_n$ must satisfy

\begin{align*}1=D_n^{(1)}+\cdots+D_n^{(k)},\end{align*}

and so it actually takes values in the $(k-1)$ -dimensional probability simplex. We now turn to the Dirichlet distribution, which is the most natural candidate for modelling such a distribution.

Definition 1·1. Let $k \geqslant 2.$ The Dirichlet distribution of dimension k with parameters $\alpha_1,\ldots,\alpha_k>0$, denoted by $\mathrm{Dir}\left(\alpha_1,\ldots,\alpha_k\right)$, is defined on the $(k-1)$ -dimensional probability simplex

\begin{align*}\Delta^{k-1}\;:\!=\;\{(t_1,\ldots,t_k) \in [0,1]^k \, : \, t_1+\cdots+t_k=1\}\end{align*}

having density

\begin{align*} f_{\boldsymbol{\alpha}}(t_1,\ldots,t_k)\;:\!=\;\frac{\Gamma \left(\sum_{i=1}^k \alpha_i \right)}{\prod_{i=1}^{k}\Gamma(\alpha_i)}\prod_{i=1}^{k}t_i^{\alpha_i-1}.\end{align*}

For instance, when $k=2,$ the Dirichlet distribution reduces to the beta distribution $\mathrm{Beta}\left(\alpha, \beta\right)$ with parameters $\alpha, \beta$ . In particular, $\mathrm{Beta}\left({1}/{2}, {1}/{2}\right)$ is the arcsine distribution.
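As a concrete check of Definition 1·1 (ours, not from the paper), the density $f_{\boldsymbol{\alpha}}$ can be evaluated directly with the standard gamma function, and for $k=2$, $\boldsymbol{\alpha}=(1/2,1/2)$ it reduces to the arcsine density $1/(\pi\sqrt{t(1-t)})$:

```python
import math

def dirichlet_density(t, alpha):
    """The density f_alpha(t_1, ..., t_k) of Dir(alpha_1, ..., alpha_k) on the simplex."""
    assert abs(sum(t) - 1.0) < 1e-9, "t must lie on the probability simplex"
    norm = math.gamma(sum(alpha)) / math.prod(math.gamma(a) for a in alpha)
    return norm * math.prod(ti ** (ai - 1) for ti, ai in zip(t, alpha))

# For k = 2 and alpha = (1/2, 1/2), this is the arcsine density 1/(pi*sqrt(t(1-t))).
t = 0.3
lhs = dirichlet_density([t, 1 - t], [0.5, 0.5])
rhs = 1 / (math.pi * math.sqrt(t * (1 - t)))
print(lhs, rhs)
```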

As we will see, factorisation of integers into k parts follows the Dirichlet distribution $\mathrm{Dir}\left({1}/{k},\ldots,{1}/{k}\right).$ Since for each i the parameter $\alpha_i={1}/{k}$ is less than 1, the density $f_{\boldsymbol{\alpha}}(t_1,\ldots,t_k)$ blows up most rapidly at the k vertices of the probability simplex. Therefore, our intuition that a typical factorisation of integers into k parts consists of one large factor and $k-1$ small factors is justified quantitatively.

By definition, for $u_1,\ldots,u_{k-1} \geqslant 0$ satisfying $u_1+\cdots+u_{k-1}\leqslant 1,$ the distribution function of $\mathrm{Dir}\left(\alpha_1,\ldots,\alpha_k \right)$ is given by

\begin{align*}F_{\boldsymbol{\alpha}}(u_1,\ldots,u_{k-1})\;:\!=\;\int_0^{u_1}\cdots\int_0^{u_{k-1}}f_{\boldsymbol{\alpha}}(t_1,\ldots,t_{k-1}, 1-t_1-\cdots -t_{k-1})dt_1\cdots dt_{k-1}.\end{align*}

From now on until Section 6, we shall fix $\boldsymbol{\alpha}=\left({1}/{k},\ldots,{1}/{k}\right)$ and omit the subscript.

The main results are stated as follows.

Theorem 1·1. Let $k\geqslant 2$ be a fixed integer. Then uniformly for $x \geqslant 2$ and $u_1,\ldots,u_{k-1} \geqslant 0$ satisfying $u_1+\cdots+u_{k-1}\leqslant 1,$ we have

(1·1) \begin{align} \frac{1}{x}\sum_{n \leqslant x}\frac{1}{\tau_k(n)}\underset{d_1\cdots d_{k-1} \mid n}{\sum_{d_1 \leqslant n^{u_1}}\cdots\sum_{d_{k-1} \leqslant n^{u_{k-1}}}}1=F(u_1,\ldots,u_{k-1}) +O\left(\frac{1}{(\!\log x)^{\frac{1}{k}}}\right).\end{align}

The error term here is optimal if full uniformity in $u_1,\ldots,u_{k-1}$ is required. Indeed, if we choose $u_1=\cdots=u_{k-2}=\frac{1}{k}, u_{k-1}=0,$ then one can show that the left-hand side of (1·1) is of order $(\!\log x)^{-\frac{1}{k}}$ using [5, théorème T] followed by partial summation.

Remark 1·1. Instead of using the logarithmic scale, one may also study localised factorisation of integers, say for instance the quantity

\begin{align*}H^{k}(x, \boldsymbol{y}, \boldsymbol{z})\;:\!=\; \left|\left\{ n \leqslant x \ : \begin{array}{c} \text{there exists $(d_1,\ldots,d_{k-1}) \in \mathbb{N}^{k-1}$ such that } \\[5pt] \text{$d_1\cdots d_{k-1} | n$ and $y_i<d_i \leqslant z_i$ for $i=1,\ldots,k-1$} \end{array} \right\}\right|,\end{align*}

which was discussed in [13].

Note that Theorem 1·1 implies that for any axis-parallel rectangle $R \subseteq \Delta^{k-1},$ we have

\begin{align*}\frac{1}{x}\sum_{n \leqslant x}\mathbb{P}\left( \boldsymbol{D}_n \in R \right)=\int_{R} dF+O\left(\frac{1}{(\!\log x)^{\frac{1}{k}}}\right).\end{align*}

Since every Borel subset of the simplex can be approximated by finite unions of such rectangles, the following corollary is an immediate consequence of Theorem 1·1.

Corollary 1·1. Let $k \geqslant 2$ be a fixed integer. For $x \geqslant 1,$ let n be a random integer chosen uniformly from [1,x] and $(d_1,\ldots,d_k)$ be a random k-tuple chosen uniformly from the set of all possible factorisations $\{(m_1,\ldots,m_k) \in \mathbb{N}^k\, : \, n=m_1\cdots m_k\}.$ Then as $x \to \infty,$ we have the convergence in distribution

\[\left(\frac{\log d_1}{\log n}, \ldots, \frac{\log d_k}{\log n}\right) \xrightarrow[]{d}\mathrm{Dir}\left(\frac{1}{k},\ldots,\frac{1}{k} \right).\]
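For small $n$ the random $k$-tuples in Corollary 1·1 can be enumerated exhaustively. The following sketch (ours; a brute-force enumeration, not the method of proof) lists all ordered factorisations and verifies that the associated points lie on the probability simplex:

```python
import math

def factorisations(n, k):
    """All ordered k-tuples (m_1, ..., m_k) of positive integers with m_1 * ... * m_k = n."""
    if k == 1:
        return [(n,)]
    out = []
    for d in range(1, n + 1):
        if n % d == 0:
            out.extend((d,) + rest for rest in factorisations(n // d, k - 1))
    return out

# tau_2(12) = tau(12) = 6 ordered pairs
assert len(factorisations(12, 2)) == 6

# The simplex points (log m_1 / log n, ..., log m_k / log n) of Corollary 1.1:
n, k = 360, 3
points = [tuple(math.log(m) / math.log(n) for m in t) for t in factorisations(n, k)]
assert all(abs(sum(p) - 1) < 1e-9 for p in points)
print(len(points))  # = tau_3(360)
```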

It is a general phenomenon that the “anatomy” of polynomials or permutations is essentially the same as that of integers (see [9, 10]), and the main theorem here is no exception. In the realm of polynomials, the following theorem serves as the counterpart to Theorem 1·1.

Theorem 1·2. Let $k\geqslant 2$ be a fixed integer and q be a fixed prime power. Then uniformly for $n \geqslant 1$ and $u_1,\ldots,u_{k-1}\geqslant 0$ satisfying $u_1+\cdots+u_{k-1}\leqslant 1$ , we have

(1·2) \begin{align}\frac{1}{q^n}\sum_{F \in \mathcal{M}_q(n)}\frac{1}{\tau_k(F)}\underset{D_1\cdots D_{k-1} \mid F}{\sum_{\substack{D_1 \in \mathcal{M}_q\\[2pt] \deg D_1 \leqslant nu_1}}\cdots\sum_{\substack{D_{k-1} \in \mathcal{M}_q\\[2pt] \deg D_{k-1} \leqslant nu_{k-1}}}}1=F(u_1,\ldots,u_{k-1})+O\left(n^{-\frac{1}{k}} \right),\end{align}

where the notations are defined in Section 2.

Corollary 1·2. Let $k \geqslant 2$ be a fixed integer and q be a fixed prime power. For $n \geqslant 1,$ let F be a random polynomial chosen uniformly from $\mathcal{M}_q(n)$ and $(D_1,\ldots,D_k)$ be a random k-tuple chosen uniformly from the set of all possible factorisations $\{(G_1,\ldots,G_k) \in \mathcal{M}_q^k\, : \, F=G_1\cdots G_k\}.$ Then as $n \to \infty,$ we have the convergence in distribution

\[\left(\frac{\deg D_1}{n}, \ldots, \frac{\deg D_{k}}{n}\right) \xrightarrow[]{d}\mathrm{Dir}\left(\frac{1}{k},\ldots,\frac{1}{k} \right).\]

Similarly, in the realm of permutations, the following theorem serves as the counterpart to Theorem 1·1.

Theorem 1·3. Let $k\geqslant 2$ be a fixed integer. Then uniformly for $n \geqslant 1$ and $u_1,\ldots,u_{k-1}\geqslant 0$ satisfying $u_1+\cdots+u_{k-1}\leqslant 1$ , we have

(1·3) \begin{align}\frac{1}{n!}\sum_{\sigma \in S_n}\frac{1}{\tau_k(\sigma)}\underset{{\substack{[n]=A_1 \sqcup \cdots \sqcup A_k\\[2pt] \sigma(A_i)=A_i, i=1,\ldots,k \\[5pt] 0 \leqslant |A_i| \leqslant nu_i, i=1,\ldots,k-1}}}{\sum\cdots\sum}1=F(u_1,\ldots,u_{k-1})+O\left(n^{-\frac{1}{k}}\right),\end{align}

where the notations are defined in Section 2.

Corollary 1·3. Let $k \geqslant 2$ be a fixed integer. For $n \geqslant 1,$ let $\sigma$ be a random permutation chosen uniformly from $S_n$ and $(A_1,\ldots, A_k)$ be a random k-tuple chosen uniformly from the set of all possible $\sigma$ -invariant decompositions $\{(B_1,\ldots, B_k) \, : \, [n]=B_1\sqcup \cdots \sqcup B_k, \sigma(B_i)=B_i \text{ for }i=1,\ldots, k\}.$ Then as $n \to \infty,$ we have the convergence in distribution

\[\left(\frac{|A_1|}{n},\ldots, \frac{|A_k|}{n}\right) \xrightarrow[]{d} \mathrm{Dir}\left(\frac{1}{k},\ldots,\frac{1}{k} \right).\]

In Section 7, we model the Dirichlet distribution with arbitrary parameters by assigning probability weights which are not necessarily uniform to each integer and to each factorisation. Then, as we will see, most of the results in the literature about the distribution of divisors in logarithmic scale are direct consequences of Theorem 7·1, which is a generalisation of Theorem 1·1.

2. Notation

Throughout the paper, we shall adopt the following list of notation:

  (a) we say $f(x)=O(g(x))$ or $f(x) \ll g(x)$ if there exists a constant $C>0$, which might depend on $k, q, \boldsymbol{\alpha}, \boldsymbol{\beta}, \boldsymbol{c}, \boldsymbol{\delta}$, such that $|f(x)| \leqslant C \cdot g(x)$ whenever $x >x_0$ for some $x_0>0;$

  (b) $[n]\;:\!=\;\{1,2,\ldots, n\};$

  (c) $\tau_k(n)\;:\!=\;|\{(d_1,\ldots, d_k) \in \mathbb{N}^k \, : \, n=d_1\cdots d_{k} \} |$ and $\tau(n)\;:\!=\;\tau_2(n);$

  (d) $\mathcal{M}_q\;:\!=\;\{F \in \mathbb{F}_q[x] \, : \, F \text{ is monic}\};$

  (e) $\mathcal{M}_q(n)\;:\!=\;\{F \in \mathcal{M}_q \, : \, \deg{F}=n\};$

  (f) $\tau_k(F)\;:\!=\;|\{(D_1,\ldots, D_k) \in \mathcal{M}_q^k \, : \, F=D_1\cdots D_{k} \} |;$

  (g) $S_n$ denotes the group of permutations on $[n];$

  (h) $c(\sigma)$ denotes the number of disjoint cycles of the permutation $\sigma;$

  (i) $\tau_{\alpha}(\sigma)\;:\!=\;{\alpha}^{c(\sigma)};$

  (j) $\left[{n \atop k}\right]\;:\!=\;|\{\sigma \in S_n \, : \, c(\sigma)=k\}|$ denotes the (unsigned) Stirling numbers of the first kind.
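For concreteness, the quantities $\tau_k$ and $\left[{n \atop k}\right]$ above can be computed as follows (a reference sketch of ours, using the standard multiplicative formula $\tau_k(p^a)=\binom{a+k-1}{k-1}$, which also appears in the proof of Lemma 3·2, and the usual Stirling recurrence $\left[{n \atop k}\right]=\left[{n-1 \atop k-1}\right]+(n-1)\left[{n-1 \atop k}\right]$):

```python
import math
from functools import lru_cache

def tau_k(n, k):
    """tau_k(n): number of ordered k-tuples of positive integers with product n."""
    result, m, p = 1, n, 2
    while p * p <= m:
        a = 0
        while m % p == 0:
            a += 1
            m //= p
        if a:
            # tau_k is multiplicative with tau_k(p^a) = C(a+k-1, k-1)
            result *= math.comb(a + k - 1, k - 1)
        p += 1
    if m > 1:
        result *= k  # leftover prime factor: tau_k(p) = k
    return result

@lru_cache(maxsize=None)
def stirling1(n, k):
    """Unsigned Stirling number of the first kind: |{sigma in S_n : c(sigma) = k}|."""
    if n == 0:
        return 1 if k == 0 else 0
    if k <= 0 or k > n:
        return 0
    return stirling1(n - 1, k - 1) + (n - 1) * stirling1(n - 1, k)

# tau_2 is the usual divisor function tau, and summing the Stirling
# numbers over all cycle counts recovers |S_n| = n!.
assert tau_k(12, 2) == 6
assert sum(stirling1(5, k) for k in range(1, 6)) == math.factorial(5)
```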

3. Properties of $\mathcal{D}(s_1,\ldots, s_k)$

Both [5] and [16] deal with the divisors one by one using [5, théorème T] followed by partial summation. However, as k gets larger, it becomes increasingly laborious to achieve full uniformity as well as the optimal rate of convergence, especially when one of the u’s is small or $u_1+\cdots+u_{k-1}$ is close to 1. Instead, we apply Mellin’s inversion formula to (the second derivative of) the multiple Dirichlet series $\mathcal{D}(s_1,\ldots, s_k)$ defined below, which allows a more symmetric approach to the problem so that all the divisors can be handled simultaneously. We first establish a few properties of the multiple Dirichlet series that are essential to the proof of Theorem 1·1.

Lemma 3·1. Let $\mathcal{D}(s_1,\ldots, s_k)$ denote the multiple Dirichlet series

\begin{align*}\sum_{n_1=1}^{\infty}\cdots\sum_{n_k=1}^{\infty}\frac{\tau_k(n_1\cdots n_k)^{-1}}{n_1^{s_1}\cdots n_k^{s_k}}.\end{align*}

Then $\mathcal{D}(s_1,\ldots, s_k)$ converges absolutely in the domain

\begin{align*}\Omega\;:\!=\;\{(s_1,\dots,s_k)\in \mathbb{C}^k \, : \, \text{$\mathrm{Re}(s_j)>1$ for $j=1,\dots,k$}\}\end{align*}

and uniformly on any compact subset of $\Omega.$ In particular, $\mathcal{D}(s_1,\dots,s_k)$ is an analytic function of k variables in $\Omega.$

Proof. Let $\sigma_j\;:\!=\;\mathrm{Re}(s_j)$ for $j=1,\ldots,k.$ Then, since

\begin{align*}\sum_{n_1=1}^{\infty}\cdots\sum_{n_k=1}^{\infty}\left|\frac{\tau_k(n_1\cdots n_k)^{-1}}{n_1^{s_1}\cdots n_k^{s_k}}\right| \leqslant \zeta(\sigma_1)\cdots\zeta(\sigma_k) <\infty,\end{align*}

the lemma follows.

Lemma 3·2. The multiple Dirichlet series $\mathcal{D}(s_1,\dots,s_k)$ can be expressed as the Euler product

\begin{align*}\prod_p\sum_{v_1=0}^{\infty}\cdots\sum_{v_k=0}^{\infty}\binom{v_1+\cdots+v_k+k-1}{k-1}^{-1}p^{-(v_1s_1+\cdots+v_ks_k)}\end{align*}

in the domain $\Omega$ defined above.

Proof. Let $y \geqslant 2$ and $\sigma_j\;:\!=\;\mathrm{Re}(s_j)$ for $j=1,\dots,k.$ Then, since

\begin{align*}\sum_{v_1=0}^{\infty}\cdots\sum_{v_k=0}^{\infty}\binom{v_1+\cdots+v_k+k-1}{k-1}^{-1}p^{-(v_1\sigma_1+\cdots+v_k\sigma_k)}\leqslant \left(1-\frac{1}{p^{\sigma_1}} \right)^{-1}\cdots \left(1-\frac{1}{p^{\sigma_k}} \right)^{-1}<\infty,\end{align*}

the finite product

\begin{align*}\prod_{p \leqslant y}\sum_{v_1=0}^{\infty}\cdots\sum_{v_k=0}^{\infty}\binom{v_1+\cdots+v_k+k-1}{k-1}^{-1}p^{-(v_1s_1+\cdots+v_ks_k)}\end{align*}

is well-defined.

Let $S(y)\;:\!=\;\{n\geqslant 1 \,: \, p|n \text{ implies } p \leqslant y\}$ be the set of y-smooth numbers. Then, since

\begin{align*}\tau_k(p^v)=\binom{v+k-1}{k-1},\end{align*}

we have

\begin{align*}&\left|\prod_{p \leqslant y}\sum_{v_1=0}^{\infty}\cdots\sum_{v_k=0}^{\infty}\binom{v_1+\cdots+v_k+k-1}{k-1}^{-1}p^{-(v_1s_1+\cdots+v_ks_k)}- \mathcal{D}(s_1,\dots,s_k) \right|\\[5pt] &\quad =\left|\sum_{n_1 \in S(y)}\cdots\sum_{n_k \in S(y)}\frac{\tau_k(n_1\cdots n_k)^{-1}}{n_1^{s_1}\cdots n_k^{s_k}} - \mathcal{D}(s_1,\dots,s_k)\right|\\[5pt] & \qquad \leqslant \sum_{j=1}^k \prod_{\substack{i=1\\[2pt] i\neq j}}^k \zeta(\sigma_i)\sum_{n_j>y}\frac{1}{n_j^{\sigma_j}}.\end{align*}

The lemma follows by letting $y \to \infty.$
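The identity $\tau_k(p^v)=\binom{v+k-1}{k-1}$ used above is the stars-and-bars count of exponent tuples; it can be verified by brute force for small parameters (a quick check of ours):

```python
from itertools import product
from math import comb

def tuples_summing_to(v, k):
    """Count k-tuples (v_1, ..., v_k) of non-negative integers with v_1 + ... + v_k = v."""
    return sum(1 for t in product(range(v + 1), repeat=k) if sum(t) == v)

# Stars and bars: the count equals C(v + k - 1, k - 1), i.e. tau_k(p^v).
for k in range(2, 5):
    for v in range(6):
        assert tuples_summing_to(v, k) == comb(v + k - 1, k - 1)
print("ok")
```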

Lemma 3·3. For $j=1,\ldots,k,$ let $R_j \subseteq \left\{ s_j \in \mathbb{C} \,:\, \mathrm{Re}(s_j) > {3}/{4}, \, |\mathrm{Im}(s_j)| > {1}/{4} \right\}$ be a zero-free region for $\zeta(s_j).$ Then the multiple Dirichlet series $\mathcal{D}(s_1,\dots,s_k)$ can be continued analytically to the domain $\prod_{j=1}^k R_j.$ Moreover, we have the bound

(3·1) \begin{align} \mathcal{D}(s_1,\dots,s_k) \ll |\zeta(s_1)|^{\frac{1}{k}}\cdots|\zeta(s_k)|^{\frac{1}{k}}.\end{align}

Proof. Let $(s_1,\dots,s_k)\in \mathbb{C}^k$ with $\sigma_j\;:\!=\;\mathrm{Re}(s_j)>1$ for $j=1,\dots,k.$ Then by Lemma 3·2 we have the Euler product expression

\begin{align*}&\zeta(s_1)^{-\frac{1}{k}}\cdots\zeta(s_k)^{-\frac{1}{k}}\mathcal{D}(s_1,\dots,s_k)\\[5pt] =&\prod_p\left(\prod_{j=1}^{k}\left(1-\frac{1}{p^{s_j}}\right)^{\frac{1}{k}}\right)\sum_{v_1=0}^{\infty}\cdots\sum_{v_k=0}^{\infty}\binom{v_1+\cdots+v_k+k-1}{k-1}^{-1}p^{-(v_1s_1+\cdots+v_ks_k)},\end{align*}

where the kth root is understood as its principal branch.

For $j=1,\ldots,k,$ expanding the kth root as

\begin{align*}\sum_{r=0}^{\infty}(\!-\!1)^{r}\binom{\frac{1}{k}}{r}p^{-rs_j},\end{align*}

we find that the factors of the Euler product are $1+O\left(\sum_{i=1}^{k}\sum_{j=1}^{k} p^{-(\sigma_i+\sigma_j)} \right) $ by Taylor’s theorem. Therefore, the function

(3·2) \begin{align}\zeta(s_1)^{-\frac{1}{k}}\cdots\zeta(s_k)^{-\frac{1}{k}}\mathcal{D}(s_1,\dots,s_k)\end{align}

can be continued analytically to the domain where $\mathrm{Re}(s_j) > {3}/{4}$ for $j=1,\dots,k,$ in which it is uniformly bounded.

On the other hand, for $(s_1,\dots,s_k) \in \prod_{j=1}^k R_j,$ we can express $\mathcal{D}(s_1,\dots,s_k)$ as

\begin{align*}\zeta(s_1)^{\frac{1}{k}}\cdots\zeta(s_k)^{\frac{1}{k}}\left(\zeta(s_1)^{-\frac{1}{k}}\cdots\zeta(s_k)^{-\frac{1}{k}}\mathcal{D}(s_1,\dots,s_k)\right),\end{align*}

and so the lemma follows.

Lemma 3·4. In the open hypercube

\begin{align*}Q\;:\!=\;\left\{(s_1,\ldots,s_k) \in \mathbb{C}^k \, : \, 1<\mathrm{Re}(s_j)<\frac{7}{4},\, |\mathrm{Im}(s_j)|<\frac{3}{4} \text{ for } j=1,\dots,k\right\},\end{align*}

we have the estimate

\begin{align*}\frac{\partial^{2k}}{\partial s_1^2\cdots\partial s_k^2}\mathcal{D}(s_1,\dots,s_k)=\left(1+\frac{1}{k}\right)^k\frac{1}{k^k}&(s_1-1)^{-\frac{1}{k}-2}\cdots(s_k-1)^{-\frac{1}{k}-2}\\[5pt] \times &(1+O(|s_1-1|+\cdots+|s_k-1|)).\end{align*}

Proof. By (3·2) and the fact that

(3·3) \begin{align} \zeta(s)=\frac{1}{s-1}+O(1),\end{align}

we have the power series representation

\begin{align*}(s_1-1)^{\frac{1}{k}}\cdots(s_k-1)^{\frac{1}{k}}\mathcal{D}(s_1,\dots,s_k)=\sum_{(i_1,\dots,i_k)\in \mathbb{Z}_{\geqslant 0}^k}a_{i_1,\dots,i_k}(s_1-1)^{i_1}\cdots(s_k-1)^{i_k}\end{align*}

in Q for some constants $a_{i_1,\dots,i_k} \in \mathbb{C}.$ It follows that

\begin{align*}\frac{\partial^{2k}}{\partial s_1^2\cdots\partial s_k^2}\mathcal{D}(s_1,\dots,s_k)=\left(1+\frac{1}{k}\right)^k\frac{1}{k^k}&(s_1-1)^{-\frac{1}{k}-2}\cdots(s_k-1)^{-\frac{1}{k}-2}\nonumber\\[5pt] \times & (a_{\textbf{0}}+O(|s_1-1|+\cdots+|s_k-1|)),\end{align*}

where by (3·3) and Lemma 3·2, the leading coefficient

\begin{align*}a_{\textbf{0}}&=\prod_p\left(1-\frac{1}{p}\right)\sum_{v_1=0}^{\infty}\cdots\sum_{v_k=0}^{\infty}\binom{v_1+\cdots+v_k+k-1}{k-1}^{-1}p^{-(v_1+\cdots+v_k)}\\[5pt] &=\prod_p \left(1-\frac{1}{p} \right)\sum_{v=0}^{\infty}\binom{v+k-1}{k-1}\binom{v+k-1}{k-1}^{-1}p^{-v}=1.\end{align*}

4. Proof of Theorem 1·1

We begin by writing the sum on the left-hand side of (1·1) (without the normalising factor $1/x$) as the difference of a main term and an error term, where the main term is

(4·1) \begin{align}\sum_{n \leqslant x}\frac{1}{\tau_k(n)}\underset{d_1\cdots d_{k-1} \mid n}{\sum_{d_1 \leqslant x^{u_1}}\cdots\sum_{d_{k-1} \leqslant x^{u_{k-1}}}}1=\sum_{d_1 \leqslant x^{u_1}}\cdots\sum_{d_{k-1} \leqslant x^{u_{k-1}}}\sum_{d_k \leqslant x/(d_1\cdots d_{k-1})}\frac{1}{\tau_k(d_1\cdots d_{k})},\end{align}

and the error term satisfies

(4·2) \begin{align} \sum_{n \leqslant x}\frac{1}{\tau_k(n)}\underset{\substack{d_1\cdots d_{k-1} \mid n\\[2pt] d_i \leqslant x^{u_i} \text{ for all } i\\[2pt] d_j > n^{u_j} \text{ for some } j}}{\sum\cdots\sum}1 \leqslant \sum_{j=1}^{k-1}\sum_{n \leqslant x}\frac{1}{\tau_k(n)}\sum_{n^{u_j}<d_j\leqslant x^{u_j}}\underset{\substack{d_i \leqslant x^{u_i}\text{ for }i \neq j\\[2pt] d_1\cdots d_{k-1} \mid n}}{\sum\cdots\sum}1.\end{align}

Let us first bound the error term (4·2). For $j=1,\dots,k-1$ with $u_j \neq 0,$ we write $n=d_jm$ for some integer m. Then $d_j>n^{u_j}$ implies $m<d_j^{(1-u_j)/u_j},$ and the number of ways of obtaining m as a product $d_1\cdots d_{j-1}d_{j+1}\cdots d_k$ is bounded by $\tau_{k-1}(m).$ It follows that

(4·3) \begin{align} \sum_{n \leqslant x}\frac{1}{\tau_k(n)}\sum_{n^{u_j}<d_j\leqslant x^{u_j}}\underset{\substack{d_i \leqslant x^{u_i}\text{ for }i \neq j\\[2pt] d_1\cdots d_{k-1} \mid n}}{\sum\cdots\sum}\quad1 \leqslant\sum_{d_j\leqslant x^{u_j}}\sum_{m<d_j^{(1-u_j)/u_j}}\frac{\tau_{k-1}(m)}{\tau_k(d_jm)}.\end{align}

If $u_{j}\leqslant {1}/{2},$ then using [14, theorem 14·2], this is bounded by

\begin{align}\sum_{d_{j}\leqslant x^{u_{j}}} \sum_{m < d_{j}^{(1-u_{j})/u_{j}}}\frac{\tau_{k-1}(m)}{\tau_k(m)}&\ll \sum_{2\leqslant d_{j}\leqslant x^{u_{j}}} \frac{d_{j}^{(1-u_{j})/u_{j}}}{\left(\!\log d_{j}^{(1-u_{j})/u_{j}}\right)^{\frac{1}{k}}} \nonumber \\[5pt] &\ll \frac{x}{(\!\log x)^{\frac{1}{k}}}. \nonumber\end{align}

Otherwise, it follows from the simple observation

(4·4) \begin{align}\frac{\tau_{k-1}(m)}{\tau_k(d_jm)} &\leqslant\frac{\tau_{k-1}(d_{j}m)}{\tau_k(d_{j}m)}\leqslant \frac{\tau_{k-1}(d_{j})}{\tau_k(d_{j})}\end{align}

that the expression (4·3) is bounded by

\begin{align}\sum_{d_{j}\leqslant x^{u_{j}}}d_{j}^{(1-u_{j})/u_{j}}\frac{\tau_{k-1}(d_{j})}{\tau_k(d_{j})}&\ll x^{1-u_{j}} \sum_{d_{j}\leqslant x^{u_{j}}} \frac{\tau_{k-1}(d_{j})}{\tau_k(d_{j})} \nonumber\\[5pt] &\ll \frac{x}{(\!\log x)^{\frac{1}{k}}} \nonumber\end{align}

again using [14, theorem 14·2].

Now we are left with the main term (4·1). In order to apply Mellin’s inversion formula, we follow the treatment in [11] and [14, chapter 13].

Lemma 4·1. Let $T \geqslant 1.$ Let $\phi, \psi \colon [0, \infty) \to \mathbb{R}$ be smooth functions supported on [0,1] and $\left[0, 1+{1}/{T}\right]$ respectively with

\begin{align*}\begin{cases}\phi(y)=1 & \mbox{if } y \leqslant 1-\frac{1}{T},\\[2pt] \phi(y) \in [0,1] & \mbox{if } 1-\frac{1}{T} < y \leqslant 1,\\[2pt] \phi(y) = 0 & \mbox{if}\; y > 1 \end{cases}\end{align*}

and

\begin{align*}\begin{cases}\psi(y)=1 & \mbox{if } y \leqslant 1,\\[2pt] \psi(y) \in [0,1] & \mbox{if } 1 < y \leqslant 1 + \frac{1}{T},\\[2pt] \psi(y) = 0 & \mbox{if}\; y > 1 + \frac{1}{T} .\end{cases}\end{align*}

Moreover, for each integer $j\geqslant 0$ , their derivatives satisfy the growth condition $\phi^{(j)}(y), \psi^{(j)}(y) \ll_j T^j$ uniformly for $y\geqslant 0.$ Let $\Phi(s), \Psi(s)$ be the Mellin transforms of $\phi(y), \psi(y)$ respectively for $1 \leqslant \mathrm{Re}(s) \leqslant 2$ , i.e.

\begin{align*}\Phi(s)=\int_{0}^{\infty}\phi(y)y^s\frac{dy}{y},\end{align*}

and

\begin{align*}\Psi(s)=\int_{0}^{\infty}\psi(y)y^s\frac{dy}{y}.\end{align*}

Then we have the estimates

(4·5) \begin{align} \Phi(s), \Psi(s)=\frac{1}{s}+O\left(\frac{1}{T}\right),\end{align}

and

(4·6) \begin{align} \Phi(s), \Psi(s) \ll_j \frac{T^{j-1}}{|s|^j}\end{align}

for $j\geqslant 1.$

Proof. See [11, theorem 4].

We need the following version of Hankel’s lemma to extract the main contribution from the multidimensional contour integral in the proof of Lemma 4·3.

Lemma 4·2. Let $x>1, \sigma>1$ and $\mathrm{Re}(\alpha)>1.$ Then we have

\begin{align*}\frac{1}{2\pi i}\int_{\mathrm{Re}(s)=\sigma}\frac{x^s}{s(s-1)^{\alpha}}ds=\frac{1}{\Gamma(\alpha)}\int_{1}^x(\!\log y)^{\alpha-1}dy.\end{align*}

Proof. See [14, lemma 13·1].
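Lemma 4·2 can be sanity-checked numerically by truncating the vertical contour. The sketch below (ours; all parameters are illustrative) takes $\alpha=2$ so that $(s-1)^{\alpha}$ is single-valued and the right-hand side is $\frac{1}{\Gamma(2)}\int_1^x \log y\, dy = x\log x - x + 1$:

```python
import math

def hankel_lhs(x, alpha, sigma=1.5, T=200.0, steps=80000):
    """Trapezoidal approximation of (1/(2*pi*i)) * int_{Re s = sigma} x^s / (s (s-1)^alpha) ds."""
    dt = 2 * T / steps
    total = 0 + 0j
    for i in range(steps + 1):
        s = complex(sigma, -T + i * dt)
        w = x ** s / (s * (s - 1) ** alpha)
        total += (0.5 if i in (0, steps) else 1.0) * w
    # ds = i dt cancels the i in 1/(2*pi*i), leaving a real normalisation 1/(2*pi)
    return (total * dt / (2 * math.pi)).real

def hankel_rhs(x, alpha):
    """(1/Gamma(alpha)) * int_1^x (log y)^(alpha-1) dy, in closed form for alpha = 2."""
    assert alpha == 2
    return x * math.log(x) - x + 1

x = 2.0
print(hankel_lhs(x, 2), hankel_rhs(x, 2))
```

With $\mathrm{Re}(\alpha)>1$ the integrand decays like $|t|^{-1-\alpha}$ along the contour, so the truncation at height $T$ costs only $O(x^{\sigma}T^{-\alpha})$.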

We now prove the main lemma.

Lemma 4·3. Let $x_1,\dots,x_k \geqslant e$ and $S(x_1,\dots,x_k)$ denote the weighted sum

\begin{align*}\sum_{d_1 \leqslant x_1}\cdots\sum_{d_k \leqslant x_k}\frac{(\!\log d_1)^2 \cdots (\!\log d_k)^2}{\tau_k(d_1\cdots d_k)}.\end{align*}

Then we have

\begin{align*}S(x_1,\dots,x_k)=\frac{1}{\Gamma\left(\frac{1}{k} \right)^k}\prod_{j=1}^{k}\int_1^{x_j}(\!\log y_j)^{\frac{1}{k}+1}dy_j+R(x_1,\dots,x_k)\end{align*}

with

\begin{align*}R(x_1,\dots,x_k) \ll x_1\cdots x_k\sum_{j=1}^k \left(\prod_{\substack{i=1\\[2pt] i \neq j}}^k (\!\log x_i)^{\frac{1}{k}+1}\right) (\!\log x_j)^{\frac{1}{k}}.\end{align*}

As in [11] and [14, chapter 13], we introduce powers of logarithms to ensure that the major contribution to the multiple Perron integral below comes from $s_1,\ldots,s_k \approx 1.$ Later on, they will be removed by partial summation.

Proof. The proof consists of four steps: Mellin inversion, localisation, approximation and completion. For $j=1,\ldots,k$ , let $T_j=2(\!\log x_j)^{2}$ and let $\phi_j, \psi_j$ be smooth functions coinciding with $\phi, \psi$ respectively from Lemma 4·1 with $T=T_j.$ Then the weighted sum $S(x_1,\ldots,x_k)$ is bounded between

\begin{align*}\sum_{d_1=1}^{\infty}\cdots\sum_{d_k =1}^{\infty}\frac{(\!\log d_1)^2 \cdots (\!\log d_k)^2}{\tau_k(d_1\cdots d_k)}\phi_1\left(\frac{d_1}{x_1} \right)\cdots\phi_k\left(\frac{d_k}{x_k}\right),\end{align*}

and

(4·7) \begin{align} \sum_{d_1=1}^{\infty}\cdots\sum_{d_k =1}^{\infty}\frac{(\!\log d_1)^2 \cdots (\!\log d_k)^2}{\tau_k(d_1\cdots d_k)}\psi_1\left(\frac{d_1}{x_1} \right)\cdots\psi_k\left(\frac{d_k}{x_k}\right).\end{align}

To avoid repetition, we only establish the upper bound here. Applying Mellin’s inversion formula, the expression (4·7) becomes

\begin{align*}\sum_{d_1 =1}^{\infty}\cdots\sum_{d_k =1}^{\infty}\frac{(\!\log d_1)^2 \cdots (\!\log d_k)^2}{\tau_k(d_1\cdots d_k)}\prod_{j=1}^{k}\left(\frac{1}{2\pi i}\int_{\mathrm{Re}(s_j)=1+\frac{1}{2\log x_j}}\Psi_j(s_j)\left( \frac{d_j}{x_j}\right)^{-s_j}ds_j \right).\end{align*}

Then, by Lemma 3·1 and Lemma 4·1, it is valid to interchange the order of summation and integration, and so this becomes

(4·8) \begin{align} \frac{1}{(2\pi i)^k}\int_{\mathrm{Re}(s_1)=1+\frac{1}{2\log x_1}}\cdots\int_{\mathrm{Re}(s_k)=1+\frac{1}{2\log x_k}}&\left(\frac{\partial^{2k}}{\partial s_1^2\cdots\partial s_k^2}\mathcal{D}(s_1,\dots,s_k)\right) \nonumber\\[5pt] \times &\Psi_1(s_1)\cdots\Psi_k(s_k)x_1^{s_1}\cdots x_k^{s_k}ds_1\cdots ds_k.\end{align}

For each $j=1,\dots,k,$ we decompose the vertical contour $ I_j\;:\!=\;\left\{s_j \in \mathbb{C}\, : \,\mathrm{Re}(s_j)=1+{1}/{2\log x_j}\right\}$ as $I_{j}^{(1)} \cup I_{j}^{(2)} \cup I_{j}^{(3)}$ (traversed upwards), where

\begin{align*}I_{j}^{(1)}\;:\!=\;\left\{s_j \in I_j \, : \, |\mathrm{Im}(s_j)|\leqslant 1/2\right\},\end{align*}
\begin{align*}I_{j}^{(2)}\;:\!=\;\left\{s_j \in I_j \, : \, 1/2<|\mathrm{Im}(s_j)|\leqslant T_j^2/2 \right\}\end{align*}

and

\begin{align*}I_{j}^{(3)}\;:\!=\;\left\{s_j \in I_j \, : \, |\mathrm{Im}(s_j)|> T_j^2/2\right\}.\end{align*}

To establish an upper bound on the second derivative of the multiple Dirichlet series, we shall apply Cauchy’s integral formula for derivatives in k variables. For this purpose, we invoke Lemma 3·3 with the classical zero-free region

\begin{align*}R_j\;:\!=\;\left\{s_j \in \mathbb{C} \, : \, \mathrm{Re}(s_j)>1-\frac{c}{\log(|\mathrm{Im}(s_j)|+4)},\, |\mathrm{Im}(s_j)|>\frac{1}{4}\right\}\end{align*}

with $c={1}/{100}$ say, for $j=1,\ldots,k.$ Moreover, we introduce the distinguished boundary

\begin{align*}\Gamma_{s_1,\dots,s_k}\;:\!=\;\left\{(w_1,\dots,w_k) \in \mathbb{C}^k \, : \,|w_j-s_j|=\begin{cases}\frac{|s_j-1|}{2} & \mbox{if } s_j \in I_j^{(1)},\\[5pt] \frac{c}{4\log T_j} & \mbox{if } s_j \in I_j^{(2)},\\[5pt] \frac{1}{4\log x_j} & \mbox{if } s_j \in I_j^{(3)}\end{cases}\text{ for $j=1,\dots,k$}\right\}\end{align*}

as there are various bounds on $\zeta(w_j)$ depending on the height. Then, Cauchy’s formula implies

\begin{align*}\frac{\partial^{2k}}{\partial s_1^2\cdots\partial s_k^2}\mathcal{D}(s_1,\dots,s_k)&\ll \frac{\sup_{(w_1,\dots,w_k) \in \Gamma_{s_1,\dots,s_k}}|\mathcal{D}(w_1,\dots,w_k)|}{\!\left(\!\prod_{\substack{j:s_j \in I_{j}^{(1)}}}|s_j-1|^2\!\right)\!\left(\!\prod_{j:s_j \in I_{j}^{(2)}}(\!\log T_j)^{-2}\!\right)\!\left(\!\prod_{j:s_j \in I_{j}^{(3)}}(\!\log x_j)^{-2}\!\right)}\end{align*}

with

\begin{align*}\mathcal{D}(w_1,\dots,w_k)\ll |\zeta(w_1)|^{\frac{1}{k}}\cdots|\zeta(w_k)|^{\frac{1}{k}}\end{align*}

given by (3·1) from Lemma 3·3. Using (3·3), the bound from [21, theorem 3·5] that

\begin{align*}\zeta(w_j) \ll \log T_j\end{align*}

whenever ${1}/{4}\leqslant|\mathrm{Im}(w_j)| \leqslant T_j^2,$ and the simple upper bound

\begin{align*}|\zeta(w_j)| \leqslant \sum_{n=1}^{\infty}\frac{1}{n^{1+\frac{1}{4\log x_j}}} \ll \log x_j,\end{align*}

we arrive at the derivative bound

(4·9) \begin{align} \frac{\partial^{2k}}{\partial s_1^2\cdots\partial s_k^2}\mathcal{D}(s_1,\dots,s_k) &\ll\prod_{j:s_j \in I_{j}^{(1)}}|s_j-1|^{-\frac{1}{k}-2}\prod_{j:s_j \in I_{j}^{(2)}}(\!\log T_j)^{\frac{1}{k}+2}\prod_{j:s_j \in I_{j}^{(3)}}(\!\log x_j)^{\frac{1}{k}+2}.\end{align}

Applying (4·6) from Lemma 4·1 (with the exponent $j$ there equal to 1 or 2), we obtain, for $j=1,\dots,k,$ the estimates

\begin{align*}(\!\log x_j)^{\frac{1}{k}+2}\int_{I_{j}^{(3)}}|\Psi(s_j)x_j^{s_j}ds_j| &\ll x_j(\!\log x_j)^{\frac{1}{k}+2}T_j\int_{T_j^2/2}^{\infty}\frac{dt}{t^2}\\[5pt] &\ll \frac{x_j(\!\log x_j)^{\frac{1}{k}+2}}{T_j},\end{align*}
\begin{align*}(\!\log T_j)^{\frac{1}{k}+2}\int_{I_{j}^{(2)}}|\Psi(s_j){x_j}^{s_j}ds_j| &\ll x_j(\!\log T_j)^{\frac{1}{k}+2}\int_{1/2}^{T_j^2/2}\frac{dt}{t}\\[5pt] &\ll x_j(\!\log T_j)^{\frac{1}{k}+3},\end{align*}

and

(4·10) \begin{align} \int_{I_{j}^{(1)}}|(s_j-1)^{-\frac{1}{k}-2}\Psi(s_j){x_j}^{s_j}ds_j|&\ll \int_{I_{j}^{(1)}}\left|(s_j-1)^{-\frac{1}{k}-2}{x_j}^{s_j}\frac{ds_j}{s_j}\right| \nonumber\\[5pt] &\ll x_j\int_{-1/2}^{1/2}\left|\frac{1}{2\log x_j}+it \right|^{-\frac{1}{k}-2}dt \nonumber\\[5pt] &\ll x_j(\!\log x_j)^{\frac{1}{k}+1}.\end{align}

Therefore, combining with (4·5) from Lemma 4·1 and (4·9), the main contribution to (4·8) is

(4·11) \begin{align} \frac{1}{(2\pi i)^k}\int_{I_{1}^{(1)}}\cdots\int_{I_{k}^{(1)}}\left(\frac{\partial^{2k}}{\partial s_1^2\cdots\partial s_k^2}\mathcal{D}(s_1,\dots,s_k)\right)x_1^{s_1}\cdots x_k^{s_k}\frac{ds_1}{s_1}\cdots\frac{ds_k}{s_k}\end{align}

with an error term

(4·12) \begin{align} \ll x_1\cdots x_k\sum_{j=1}^k \left(\prod_{\substack{i=1\\[2pt] i \neq j}}^k (\!\log x_i)^{\frac{1}{k}+1}\right) (\!\log x_j)^{\frac{1}{k}}\end{align}

as $T_j= 2(\!\log x_j)^{2}$ for $j=1,\ldots,k.$

Applying Lemma 3·4, the main contribution to (4·11) is

(4·13) \begin{align} \left(1+\frac{1}{k}\right)^k\frac{1}{k^k}\prod_{j=1}^k\left(\frac{1}{2\pi i}\int_{I_j^{(1)}}(s_j-1)^{-\frac{1}{k}-2}x_j^{s_j}\frac{ds_j}{s_j}\right)\end{align}

with an error term

(4·14) \begin{align} \ll \sum_{j=1}^k \int_{I_{j}^{(1)}}\left|(s_j-1)^{-\frac{1}{k}-1}x_j^{s_j}\frac{ds_j}{s_j} \right| \prod_{\substack{i=1\\[2pt] i \neq j}}^k\int_{I_{i}^{(1)}} \left|(s_i-1)^{-\frac{1}{k}-2}x_i^{s_i}\frac{ds_i}{s_i} \right|.\end{align}

For $j=1,\dots,k,$ we have

\begin{align*}\int_{I_{j}^{(1)}} \left|(s_j-1)^{-\frac{1}{k}-1}x_j^{s_j}\frac{ds_j}{s_j} \right|&\ll x_j\int_{-1/2}^{1/2} \left|\frac{1}{2\log x_j}+it \right|^{-\frac{1}{k}-1}dt\\[5pt] &\ll x_j(\!\log x_j)^{\frac{1}{k}}.\end{align*}

Combining with (4·10), the expression (4·14) is

(4·15) \begin{align} \ll x_1\cdots x_k\sum_{j=1}^k \left(\prod_{\substack{i=1\\[2pt] i \neq j}}^k (\!\log x_i)^{\frac{1}{k}+1}\right)(\!\log x_j)^{\frac{1}{k}}.\end{align}

Since for $j=1,\dots,k$ we have the bound

\begin{align*}\int\limits_{\substack{\mathrm{Re}(s_j)=1+\frac{1}{2\log x_j}\\[2pt] |\mathrm{Im}(s_j)|>\frac{1}{2}}} \left|(s_j-1)^{-\frac{1}{k}-2}x_j^{s_j}\frac{ds_j}{s_j} \right|&\ll x_j\int_{1/2}^{\infty} t^{-\frac{1}{k}-3}dt\\[5pt] &\ll x_j,\end{align*}

it follows from (4·10) that the main contribution to (4·13) is

(4·16) \begin{align} \left(1+\frac{1}{k}\right)^k\frac{1}{k^k} \prod_{j=1}^k \left(\frac{1}{2\pi i} \int_{\mathrm{Re}(s_j)=1+\frac{1}{2\log x_j}}(s_j-1)^{-\frac{1}{k}-2}x_j^{s_j}\frac{ds_j}{s_j}\right)\end{align}

with an error term

(4·17) \begin{align} \ll x_1\cdots x_k\sum_{j=1}^k \prod_{\substack{i=1\\[2pt] i \neq j}}^k (\!\log x_i)^{\frac{1}{k}+1}.\end{align}

Applying Lemma 4·2, for $j=1,\dots,k$ we have

\begin{align*}\frac{1}{2\pi i}\int_{\mathrm{Re}(s_j)=1+\frac{1}{2\log x_j}}(s_j-1)^{-\frac{1}{k}-2}x_j^{s_j}\frac{ds_j}{s_j}&=\frac{1}{\Gamma \left(\frac{1}{k}+2 \right)}\int_{1}^{x_j}(\!\log y_j)^{\frac{1}{k}+1}dy_j\nonumber\\[5pt] &=\left(1+\frac{1}{k}\right)^{-1} \frac{k}{\Gamma \left(\frac{1}{k} \right)}\int_{1}^{x_j}(\!\log y_j)^{\frac{1}{k}+1}dy_j.\end{align*}

Finally, the lemma follows from collecting the main term (4·16) and the error terms (4·12), (4·15) and (4·17).
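The gamma factors here combine via the recursion $\Gamma(z+2)=(z+1)z\Gamma(z)$ with $z={1}/{k}$, giving ${1}/{\Gamma(1/k+2)}=(1+1/k)^{-1}\,{k}/{\Gamma(1/k)}$. A quick numerical check of this identity (a sketch outside the proof):

```python
import math

# 1/Gamma(1/k + 2) = (1 + 1/k)^{-1} * k / Gamma(1/k), by Gamma(z+2) = (z+1) z Gamma(z)
for k in range(2, 10):
    lhs = 1 / math.gamma(1 / k + 2)
    rhs = (1 + 1 / k) ** (-1) * k / math.gamma(1 / k)
    assert math.isclose(lhs, rhs, rel_tol=1e-12)
```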

To proceed to the computation of the main term (4·1), we first show that it suffices to restrict to the region where $u_1+\cdots+u_{k-1} \leqslant 1-{1}/{\log x}.$ Indeed, if $u_1+\cdots+u_{k-1} > 1-{1}/{\log x},$ then we may assume without loss of generality that $u_{k-1}\geqslant {1}/{2k}.$ We now show that when $u_{k-1}$ is replaced by $u_{k-1}-{1}/{\log x},$ both the right-hand side of (4·1) and that of (1·1) change only by a negligible amount. Arguing similarly as before, we have

(4·18) \begin{align} \sum_{d_1 \leqslant x^{u_1}}\cdots\sum_{d_{k-2}\leqslant x^{u_{k-2}}}&\sum_{x^{u_{k-1}-\frac{1}{\log x}}\leqslant d_{k-1} \leqslant x^{u_{k-1}}}\sum_{d_k\leqslant x/(d_1\cdots d_{k-1})}\frac{1}{\tau_k(d_1\cdots d_{k})} \nonumber\\[5pt] \leqslant & \sum_{x^{u_{k-1}-\frac{1}{\log x}}\leqslant d_{k-1} \leqslant x^{u_{k-1}}}\sum_{m\leqslant x/d_{k-1}}\frac{\tau_{k-1}(m)}{\tau_{k}(d_{k-1}m)}.\end{align}

If $u_{k-1}\leqslant {1}/{2},$ then using [ Reference Koukoulopoulos14 , theorem 14·2], this is bounded by

\begin{align*}\ll x^{u_{k-1}}\sum_{m\leqslant x^{1+\frac{1}{\log x}-u_{k-1}} }\frac{\tau_{k-1}(m)}{\tau_{k}(m)}\ll \frac{x}{(\!\log x)^{\frac{1}{k}}}.\end{align*}

Otherwise, again it follows from the observation (4·4) that (4·18) is

\begin{align*}\ll x^{1-u_{k-1}}\sum_{d_{k-1}\leqslant x^{u_{k-1}}}\frac{\tau_{k-1}(d_{k-1})}{\tau_{k}(d_{k-1})}\ll \frac{x}{(\!\log x)^{\frac{1}{k}}}.\end{align*}

On the other hand, by making the change of variables $t_j=(1-t_{k-1})s_j$ for $j=1,\ldots, k-2,$ we have

(4·19) \begin{align} x\int_{u_{k-1}-\frac{1}{\log x}}^{u_{k-1}}&t_{k-1}^{\frac{1}{k}-1}\left(\int_0^{u_1}\cdots\int_0^{u_{k-2}}t_1^{\frac{1}{k}-1}\cdots t_{k-2}^{\frac{1}{k}-1}(1-t_1-\cdots-t_{k-1})^{\frac{1}{k}-1}dt_1\cdots dt_{k-2}\right)dt_{k-1} \nonumber\\[5pt] \leqslant & x \int_{u_{k-1}-\frac{1}{\log x}}^{u_{k-1}}t_{k-1}^{\frac{1}{k}-1}(1-t_{k-1})^{-\frac{1}{k}}dt_{k-1} \nonumber\\[5pt] &\times \int_0^{\frac{u_1}{1-u_{k-1}}}\cdots\int_0^{\frac{u_{k-2}}{1-u_{k-1}}}s_1^{\frac{1}{k}-1}\cdots s_{k-2}^{\frac{1}{k}-1}(1-s_1-\cdots-s_{k-2})^{\frac{1}{k}-1}ds_1\cdots ds_{k-2} \nonumber\\[5pt] \ll & x \int_{0}^{\frac{1}{\log x}}t_{k-1}^{\frac{1}{k}-1}(1-t_{k-1})^{-\frac{1}{k}}dt_{k-1}\ll \frac{x}{(\!\log x)^{\frac{1}{k}}}.\end{align}

Therefore, we can always assume $u_1+\cdots+u_{k-1} \leqslant 1-{1}/{\log x}.$ Arguing similarly, we can further limit ourselves to the smaller region where $u_1,\dots,u_{k-1} \geqslant {1}/{\log x}$ as well.

In order to apply Lemma 4·3, we express the main term (4·1) as

(4·20) \begin{align} \sum_{3 \leqslant d_1 \leqslant x^{u_1}}\cdots&\sum_{3 \leqslant d_{k-1}\leqslant x^{u_{k-1}}}\sum_{3 \leqslant d_k \leqslant x/(d_1\cdots d_{k-1})}\frac{1}{\tau_k(d_1\cdots d_{k})} \nonumber\\[5pt] &+O\left(\sum_{j=1}^{k-1}\sum_{d_j=1,2}\underset{\substack{d_i \leqslant x^{u_i}\\[2pt] i=1,\dots,k-1, i \neq j}}{\sum\cdots\sum}\sum_{d_k \leqslant x/(d_1\cdots d_{k-1})}\frac{1}{\tau_k(d_1\cdots d_{k})}\right) \nonumber \\[5pt] &+O\left(\sum_{d_k=1,2}\sum_{d_1 \leqslant x^{u_1}}\cdots\sum_{d_{k-1}\leqslant x^{u_{k-1}}}\frac{1}{\tau_k(d_1\cdots d_{k})}\right).\end{align}

For $j=1,\dots,k-1,$ again it follows from [ Reference Koukoulopoulos14 , theorem 14·2] that

\begin{align*}\sum_{d_j=1,2}\underset{\substack{d_i \leqslant x^{u_i}\\[2pt] i=1,\dots,k-1, i \neq j}}{\sum\cdots\sum}\sum_{d_k \leqslant x/(d_1\cdots d_{k-1})}\frac{1}{\tau_k(d_1\cdots d_{k})}&\leqslant \sum_{m \leqslant x}\frac{\tau_{k-1}(m)}{\tau_k(m)}+\sum_{m \leqslant x/2} \frac{\tau_{k-1}(m)}{\tau_{k-1}(2m)}\nonumber\\[5pt] &\ll \frac{x}{(\!\log x)^{\frac{1}{k}}},\end{align*}

and similarly

\begin{align*}\sum_{d_{k}=1,2}\sum_{d_1 \leqslant x^{u_1}}\cdots\sum_{d_{k-1}\leqslant x^{u_{k-1}}}\frac{1}{\tau_k(d_1\cdots d_{k})}& \leqslant \sum_{m \leqslant x}\frac{\tau_{k-1}(m)}{\tau_k(m)}+\sum_{m \leqslant x/2} \frac{\tau_{k-1}(m)}{\tau_{k-1}(2m)} \nonumber\\[5pt] &\ll \frac{x}{(\!\log x)^{\frac{1}{k}}}.\end{align*}

By partial summation (or more precisely multiple Riemann–Stieltjes integration) and Lemma 4·3, the main term of (4·20) is

\begin{align}&\int_{e}^{x^{u_1}}\cdots\int_{e}^{x^{u_{k-1}}}\int_{e}^{\frac{x}{x_1\cdots x_{k-1}}}\frac{1}{(\!\log x_1)^2}\cdots\frac{1}{(\!\log x_k)^2}d S(x_1,\dots,x_k) \nonumber\\[5pt] =&\frac{1}{\Gamma\left(\frac{1}{k} \right)^k}\int_{e}^{x^{u_1}}\cdots\int_{e}^{x^{u_{k-1}}}\int_{e}^{\frac{x}{x_1\cdots x_{k-1}}}(\!\log x_1)^{\frac{1}{k}-1}\cdots(\!\log x_k)^{\frac{1}{k}-1} dx_1\cdots dx_k \nonumber\\[5pt] &+\int_{e}^{x^{u_1}}\cdots\int_{e}^{x^{u_{k-1}}}\int_{e}^{\frac{x}{x_1\cdots x_{k-1}}}\frac{1}{(\!\log x_1)^2}\cdots\frac{1}{(\!\log x_k)^2}d R(x_1,\dots,x_k)\nonumber\\[5pt] \;=\!:\;& I_1 + I_2. \nonumber\end{align}

Finally, it remains to compute the integrals $I_1$ and $I_2.$

Lemma 4·4. The first integral $I_1$ equals

\begin{align*}F(u_1,\ldots,u_{k-1})x+O\left(\frac{x}{(\!\log x)^{\frac{1}{k}}} \right).\end{align*}

Proof. Making the change of variables $x_j=x^{t_j}$ for $j=1,\ldots,k,$ the integral $I_1$ becomes

\begin{align*}\frac{\log x}{\Gamma\left(\frac{1}{k} \right)^k}\int_{{\frac{1}{\log x}}}^{u_1}\cdots\int_{{\frac{1}{\log x}}}^{u_{k-1}}\int_{{\frac{1}{\log x}}}^{1-t_1-\cdots -t_{k-1}}t_1^{\frac{1}{k}-1}\cdots t_k^{\frac{1}{k}-1}x^{t_1+\cdots+t_k}dt_1\cdots dt_k.\end{align*}

Integrating by parts with respect to $t_k$ gives

(4·21) \begin{align} \int_{\frac{1}{\log x}}^{1-t_1-\cdots-t_{k-1}}t_k^{\frac{1}{k}-1}x^{t_k}dt_k= &(1-t_1-\cdots-t_{k-1})^{\frac{1}{k}-1}\frac{x^{1-t_1-\cdots-t_{k-1}}}{\log x}-\frac{e}{(\!\log x)^{\frac{1}{k}}} \nonumber\\[5pt] &+\left(1-\frac{1}{k}\right)\frac{1}{\log x}\int_{\frac{1}{\log x}}^{1-t_1-\cdots-t_{k-1}}t_k^{\frac{1}{k}-2}x^{t_k}dt_k.\end{align}

Therefore, the contribution of the first term of (4·21) to the integral $I_1$ is

\begin{align*}\frac{x}{\Gamma\left(\frac{1}{k} \right)^k}\int_{{\frac{1}{\log x}}}^{u_1}\cdots\int_{{\frac{1}{\log x}}}^{u_{k-1}}t_1^{\frac{1}{k}-1}\cdots t_{k-1}^{\frac{1}{k}-1}(1-t_1-\cdots-t_{k-1})^{\frac{1}{k}-1}dt_1\cdots dt_{k-1}.\end{align*}

Note that

(4·22) \begin{equation} \begin{gathered}[b] \!\!F(u_1,\ldots,u_{k-1})x=\frac{x}{\Gamma\left(\frac{1}{k} \right)^k}\int_{{\frac{1}{\log x}}}^{u_1}\cdots\int_{{\frac{1}{\log x}}}^{u_{k-1}}t_1^{\frac{1}{k}-1}\cdots t_{k-1}^{\frac{1}{k}-1}(1-t_1-\cdots-t_{k-1})^{\frac{1}{k}-1}dt_1\cdots dt_{k-1} \\[5pt] +O\left(x\sum_{j=1}^{k-1}\int_0^{\frac{1}{\log x}}\underset{\substack{0 \leqslant t_i \leqslant u_i\\[2pt] i \neq j}}{\int\cdots\int}t_1^{\frac{1}{k}-1}\cdots t_{k-1}^{\frac{1}{k}-1}(1-t_1-\cdots-t_{k-1})^{\frac{1}{k}-1}dt_1\cdots dt_{k-1} \right). \end{gathered}\end{equation}

Without loss of generality, it suffices to bound the term where $j=k-1.$ Similar to (4·19), we have

(4·23) \begin{align} x\int_0^{\frac{1}{\log x}}t_{k-1}^{\frac{1}{k}-1}&\left(\int_0^{u_1}\cdots\int_0^{u_{k-2}}t_1^{\frac{1}{k}-1}\cdots t_{k-2}^{\frac{1}{k}-1}(1-t_1-\cdots-t_{k-1})^{\frac{1}{k}-1}dt_1\cdots dt_{k-2}\right)dt_{k-1} \nonumber \\[5pt] &\ll x\int_0^{\frac{1}{\log x}}t_{k-1}^{\frac{1}{k}-1}(1-t_{k-1})^{-\frac{1}{k}}dt_{k-1}\ll \frac{x}{(\!\log x)^{\frac{1}{k}}}.\end{align}

On the other hand, the contribution of the second term of (4·21) to the integral $I_1$ is

\begin{align*}\ll (\!\log x)^{1-\frac{1}{k}}\prod_{j=1}^{k-1}\int_{\frac{1}{\log x}}^{u_j}t_j^{\frac{1}{k}-1}x^{t_j}dt_j&\ll (\!\log x)^{1-\frac{1}{k}}\prod_{j=1}^{k-1} u_j^{\frac{1}{k}-1}\frac{x^{u_j}}{\log x}.\end{align*}

If $u_j > {1}/{2k}$ for some $j=1,\ldots, k-1,$ then this is

(4·24) \begin{align} \ll x(\!\log x)^{-(k-1)/k} \leqslant \frac{x}{(\!\log x)^{\frac{1}{k}}},\end{align}

as $k \geqslant 2$ . Otherwise, the contribution is

\begin{align*}\ll x^{u_1+\cdots+u_{k-1}}(\!\log x)^{1-\frac{1}{k}} \ll x^{1/2}.\end{align*}

We also have

\begin{align*}\int_{\frac{1}{\log x}}^{1-t_1-\cdots-t_{k-1}}t_k^{\frac{1}{k}-2}x^{t_k}dt_k &\ll (1-t_1-\cdots-t_{k-1})^{\frac{1}{k}-2}\frac{x^{1-t_1-\cdots-t_{k-1}}}{\log x}\end{align*}

so that the contribution of the last term of (4·21) to the integral $I_1$ is

\begin{align*}\ll \frac{x}{\log x}&\int_{\frac{1}{\log x}}^{u_1}\cdots\int_{\frac{1}{\log x}}^{u_{k-1}}t_1^{\frac{1}{k}-1}\cdots t_{k-1}^{\frac{1}{k}-1}(1-t_1-\cdots-t_{k-1})^{\frac{1}{k}-2}dt_1\cdots dt_{k-1}.\end{align*}

Making the change of variables $s=1-t_1-\cdots-t_{k-1},$ this is bounded by

(4·25) \begin{align} \frac{x}{\log x} \int_{\frac{1}{\log x}}^{1-\frac{k-1}{\log x}}s^{\frac{1}{k}-2}&\left(\int_{\frac{1}{\log x}}^{u_1}\cdots\int_{\frac{1}{\log x}}^{u_{k-2}}t_1^{\frac{1}{k}-1}\cdots t_{k-2}^{\frac{1}{k}-1}(1-s-t_1-\cdots-t_{k-2})^{\frac{1}{k}-1}dt_1\cdots dt_{k-2}\right)ds.\end{align}

Similar to (4·19), the integral in the parentheses is $\ll (1-s)^{-\frac{1}{k}},$ and so (4·25) is

(4·26) \begin{align} \ll \frac{x}{\log x} \int_{\frac{1}{\log x}}^{1-\frac{k-1}{\log x}}s^{\frac{1}{k}-2}(1-s)^{-\frac{1}{k}}ds \ll \frac{x}{(\!\log x)^{\frac{1}{k}}}.\end{align}

Collecting the main term of (4·22) and the error terms (4·23), (4·24) and (4·26), the lemma follows.

Lemma 4·5. The second integral $I_2$ is

\begin{align*}\ll \frac{x}{(\!\log x)^{\frac{1}{k}}}.\end{align*}

Proof. The integral $I_2$ is bounded by

\begin{align*}\underset{\substack{l_j \leqslant u_j \log x , j=1,\ldots, k-1\\[2pt] l_{k} \leqslant \log x-l_1-\cdots-l_{k-1} }}{\sum \cdots \sum} I_{2}^{(\boldsymbol{l})},\end{align*}

where

\begin{align*}I_{2}^{(\boldsymbol{l})}\;:\!=\;\underset{x_j \in [e^{l_j}, e^{l_j+1}], j=1,\dots,k}{\int \cdots \int}\frac{1}{(\!\log x_1)^2}\cdots\frac{1}{(\!\log x_k)^2}dR(x_1,\ldots,x_k).\end{align*}

By integration by parts, for each $\boldsymbol{l},$ the integral $I_{2}^{(\boldsymbol{l})}$ is bounded by

\begin{align*}\sum_{J \subseteq [k]}\underset{x_j \in [e^{l_j}, e^{l_j+1}], j\in J}{\int \cdots \int}\!\left(\sum_{x_j \in \{e^{l_j}, e^{l_j+1}\}, j \notin J} |R(x_1,\ldots, x_k)|\!\right)\!\prod_{j \notin J}\frac{1}{{l_j}^2}\prod_{j \in J}\left|d\!\left(\frac{1}{(\!\log x_j)^2} \right) \right|\;=\!:\;\sum_{J \subseteq [k]} I_{2}^{(\boldsymbol{l};J)}.\end{align*}

For each subset $J \subseteq [k],$ the integral $I_{2}^{(\boldsymbol{l};J)}$ is

\begin{align*}\ll \left(\max_{x_j \in [e^{l_j}, e^{l_j+1}], j=1,\ldots,k}|R(x_1,\ldots,x_k)|\right)\underset{x_j \in [e^{l_j}, e^{l_j+1}], j=1,\ldots,k}{\int \cdots \int} \prod_{j \notin J}\frac{1}{x_j (\!\log x_j)^2}\prod_{j \in J} & \frac{1}{x_j (\!\log x_j)^3}\nonumber\\[5pt] \times & dx_1 \cdots dx_k.\end{align*}

Applying Lemma 4·3, this is

\begin{align*}\ll \sum_{i=1}^k\underset{x_j \in [e^{l_j}, e^{l_j+1}], j=1,\ldots,k}{\int \cdots \int} \frac{1}{\log x_i} \prod_{j \notin J}\frac{1}{(\!\log x_j)^{1-\frac{1}{k}}} \prod_{j \in J} \frac{1}{(\!\log x_j)^{2-\frac{1}{k}}} dx_1\cdots dx_k.\end{align*}

Summing over every $\boldsymbol{l},$ we have

\begin{align*}\underset{\substack{l_j \leqslant u_j \log x , j=1,\ldots, k-1\\[2pt] l_k \leqslant \log x -l_1-\cdots-l_{k-1}}}{\sum \cdots \sum} I_2^{(\boldsymbol{l};J)}\ll \sum_{i=1}^k \int_{e}^{x^{u_1}e}\cdots \int_{e}^{x^{u_{k-1}}e}\int_{e}^{\frac{e^kx}{x_1\cdots x_{k-1}}} &\frac{1}{\log x_i} \prod_{j \notin J}\frac{1}{(\!\log x_j)^{1-\frac{1}{k}}} \\[5pt] \times & \prod_{j \in J} \frac{1}{(\!\log x_j)^{2-\frac{1}{k}}}dx_1\cdots dx_k.\end{align*}

To avoid repetition, we only bound the contribution of the term where $i=k$ here. Making the change of variables $x_j=x^{t_j}$ for $j=1,\ldots,k,$ it becomes

\begin{align*}\frac{1}{(\!\log x)^{|J|}} \int_{\frac{1}{\log x}}^{u_1+\frac{1}{\log x}} \cdots \int_{\frac{1}{\log x}}^{u_{k-1}+\frac{1}{\log x}}\int_{\frac{1}{\log x}}^{1+\frac{k}{\log x}-t_1-\cdots-t_{k-1}}t_k^{-1} \prod_{j \notin J}t_j^{\frac{1}{k}-1}\prod_{j \in J}t_j^{\frac{1}{k}-2} x^{t_1+\cdots+t_k} dt_1\cdots dt_k,\end{align*}

which is

(4·27) \begin{align} \ll \int_{\frac{1}{\log x}}^{u_1+\frac{1}{\log x}} \cdots \int_{\frac{1}{\log x}}^{u_{k-1}+\frac{1}{\log x}}\int_{\frac{1}{\log x}}^{1+\frac{k}{\log x}-t_1-\cdots-t_{k-1}}&t_1^{\frac{1}{k}-1}\cdots t_{k-1}^{\frac{1}{k}-1}t_k^{\frac{1}{k}-2}x^{t_1+\cdots+t_k}dt_1\cdots dt_k.\end{align}

Integrating by parts with respect to $t_k$ gives

\begin{align*}\int_{\frac{1}{\log x}}^{1+\frac{k}{\log x}-t_1-\cdots-t_{k-1}} t_k^{\frac{1}{k}-2}x^{t_k}dt_k\ll \left(1+\frac{k}{\log x}-t_1-\cdots-t_{k-1} \right)^{\frac{1}{k}-2}\frac{x^{1-t_1-\cdots-t_{k-1}}}{\log x}.\end{align*}

Therefore, the expression (4·27) is

\begin{align*}\ll \frac{x}{\log x} \int_{\frac{1}{\log x}}^{u_1+\frac{1}{\log x}} \cdots \int_{\frac{1}{\log x}}^{u_{k-1}+\frac{1}{\log x}} t_1^{\frac{1}{k}-1}\cdots t_{k-1}^{\frac{1}{k}-1}\left(1+\frac{k}{\log x}-t_1-\cdots-t_{k-1} \right)^{\frac{1}{k}-2} dt_1\cdots dt_{k-1}.\end{align*}

Similar to (4·19), this is

\begin{align*}\ll \frac{x}{\log x}\int_{\frac{2}{\log x}}^{1+\frac{1}{\log x}}s^{\frac{1}{k}-2}\left( 1+\frac{k}{\log x}-s \right)^{-\frac{1}{k}}ds\ll \frac{x}{(\!\log x)^{\frac{1}{k}}}.\end{align*}

Finally, the lemma follows from summing over every subset $J \subseteq [k].$

5. Proof of Theorem 1·2

We first define the function field analogue of the multiple Dirichlet series $\mathcal{D}(s_1,\dots,s_k).$

Definition 5·1. For $(s_1,\dots,s_k) \in \Omega$ , the multiple Dirichlet series $\mathcal{D}_{\mathbb{F}_q[x]}(s_1,\dots,s_k)$ is defined as

\begin{align*}\underset{F_1,\dots,F_k \in \mathcal{M}_q}{\sum\cdots\sum}\frac{\tau_k(F_1\cdots F_k)^{-1}}{q^{s_1\deg F_1+\cdots+s_k \deg F_k}}.\end{align*}

With the multiple Dirichlet series $\mathcal{D}_{\mathbb{F}_q[x]}(s_1,\dots,s_k)$ in hand, we can now follow exactly the same steps as before. Moreover, since the function field zeta function $\zeta_{\mathbb{F}_q}(s)$ never vanishes (see [ Reference Rosen18 , chapter 2]), some of the computations above can be simplified considerably. To avoid repetition, we omit the complete proof here.

6. Proof of Theorem 1·3

One can argue similarly, but we present a more direct and elementary proof here. We begin with the combinatorial analogue of the mean of divisor functions.

Lemma 6·1. Let $\alpha \in \mathbb{C}\setminus \mathbb{Z}_{\leqslant 0}$ and $n\in \mathbb{Z}_{\geqslant 0}.$ Then we have

(6·1) \begin{align} \frac{1}{n!}\sum_{\sigma \in S_n}\tau_{\alpha}(\sigma)=\left(\substack{n+\alpha-1 \\[2pt] n}\right).\end{align}

Moreover, we have the estimate

(6·2) \begin{align} \left(\substack{n+\alpha-1 \\[2pt] n}\right)=\begin{cases}1 &\mbox{if } n = 0, \\[5pt] \frac{1}{\Gamma \left(\alpha \right)}n^{\alpha-1}\left(1+O_{\alpha}\left(\frac{1}{n}\right) \right) & \mbox{if } n\geqslant1.\end{cases}\end{align}

Proof. Although (6·1) is fairly standard (for instance, one may apply [ Reference Stanley19 , corollary 5·1·9] with $f \equiv \alpha$), we provide a short proof here for the sake of completeness. Adopting the notation of Section 2, we write

\begin{align*}\sum_{\sigma \in S_n}\tau_{\alpha}(\sigma)&=\sum_{k=0}^{n}\left[{n\atop k}\right]\alpha^k.\end{align*}

Now any permutation $\sigma \in S_n$ with k disjoint cycles can be constructed exactly once by the following procedure. To begin with, there are $(n-1)(n-2)\cdots(n-i_1+1)$ ways of choosing $i_1-1$ distinct integers in order from $[n]\setminus\{1\}$ to form a cycle $C_1$ with length $i_1$ containing $1.$ Then fix the smallest integer $m \in [n]$ not contained in the cycle $C_1$ . Similarly, there are $(n-i_1-1)\cdots(n-i_1-i_2+1)$ ways of choosing $i_2-1$ integers from $[n]\setminus C_1$ to form another cycle $C_2$ with length $i_2$ containing $m$. Repeating the same procedure until $i_1+\cdots+i_k=n$ , we arrive at a permutation with k disjoint cycles.

Therefore, the expression (6·1) follows from the explicit formula

\begin{align*}\left[{n\atop k}\right]=\underset{i_1+\cdots+i_k=n}{\sum_{i_1=1}^n\cdots\sum_{i_k=1}^n}\frac{(n-1)!}{\prod_{j=1}^{k-1}(n-i_1-\cdots-i_j)},\end{align*}

which can be seen as the coefficient of ${\alpha}^k$ in the falling factorial

\begin{align*}(n+\alpha-1)(n+\alpha-2)\cdots(\alpha+1)\alpha.\end{align*}

On the other hand, to prove (6·2), we express the binomial coefficient as a ratio of gamma functions, followed by the application of Stirling’s formula.
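Both (6·1) and the explicit formula for $\left[{n\atop k}\right]$ above can be verified exactly for small $n$ by brute force. The sketch below (exact rational arithmetic, outside the proof) encodes a composition $i_1+\cdots+i_k=n$ by its partial sums:

```python
import itertools
import math
from fractions import Fraction

def cycle_count(perm):
    """Number of disjoint cycles of perm (0-indexed: perm[i] is the image of i)."""
    n, seen, count = len(perm), [False] * len(perm), 0
    for i in range(n):
        if not seen[i]:
            count += 1
            j = i
            while not seen[j]:
                seen[j], j = True, perm[j]
    return count

def stirling_recurrence(n, k):
    """Unsigned Stirling numbers via c(n,k) = c(n-1,k-1) + (n-1) c(n-1,k)."""
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return stirling_recurrence(n - 1, k - 1) + (n - 1) * stirling_recurrence(n - 1, k)

def stirling_formula(n, k):
    """The explicit formula: sum over i_1 + ... + i_k = n (each i_j >= 1) of
    (n-1)! / prod_{j=1}^{k-1} (n - i_1 - ... - i_j)."""
    total = Fraction(0)
    # a composition is determined by its partial sums i_1 + ... + i_j, j < k
    for cuts in itertools.combinations(range(1, n), k - 1):
        denom = math.prod(n - c for c in cuts)
        total += Fraction(math.factorial(n - 1), denom)
    return total

def mean_tau(n, alpha):
    """(1/n!) * sum over S_n of tau_alpha(sigma), with tau_alpha(sigma) = alpha^{#cycles}."""
    return sum(alpha ** cycle_count(p) for p in itertools.permutations(range(n))) / math.factorial(n)

def binom_top(n, alpha):
    """binom(n + alpha - 1, n) = alpha (alpha + 1) ... (alpha + n - 1) / n!."""
    return math.prod((alpha + j for j in range(n)), start=Fraction(1)) / math.factorial(n)

# (6.1) with alpha = 1/2, and agreement of the two Stirling evaluations
assert all(mean_tau(n, Fraction(1, 2)) == binom_top(n, Fraction(1, 2)) for n in range(6))
assert all(stirling_recurrence(n, k) == stirling_formula(n, k)
           for n in range(1, 7) for k in range(1, n + 1))
```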

As in Theorem 1·1, without loss of generality we shall assume $u_1,\ldots, u_{k-1}, 1-u_1-\cdots-u_{k-1} \geqslant {1}/{n}.$ Interchanging the order of summation, we have

(6·3) \begin{align} \frac{1}{n!}\sum_{\sigma \in S_n}\frac{1}{\tau_k(\sigma)}\underset{{\substack{[n]=A_1 \sqcup \cdots \sqcup A_k\\[2pt] \sigma(A_i)=A_i, i=1,\dots,k \\[5pt] 0 \leqslant |A_i| \leqslant nu_i, i=1,\dots,k-1}}}{\sum\cdots\sum}1&=\frac{1}{n!}\underset{\substack{0 \leqslant m_i \leqslant nu_i \\[2pt] i=1,\dots,k-1}}{\sum\cdots\sum}\left(\substack{n \\[2pt] m_1,\dots,m_k}\right)\prod_{i=1}^k\left(\sum_{\sigma \in S_{m_i}}\frac{1}{\tau_k(\sigma)} \right) \nonumber\\[5pt] &=\underset{\substack{0 \leqslant m_i \leqslant nu_i \\[2pt] i=1,\dots,k-1}}{\sum\cdots\sum}\prod_{i=1}^k \left(\frac{1}{m_i!}\sum_{\sigma \in S_{m_i}}\frac{1}{\tau_k(\sigma)} \right),\end{align}

where $m_k\;:\!=\;n-m_1-\cdots -m_{k-1}.$

Note that $\tau_k(\sigma)^{-1}=\tau_{1/k}(\sigma).$ Applying (6·1) from Lemma 6·1, the expression (6·3) equals

(6·4) \begin{align} \underset{\substack{0 \leqslant m_i \leqslant nu_i \\[2pt] i=1,\dots,k-1}}{\sum\cdots\sum}\prod_{i=1}^k\left(\substack{m_i+\frac{1}{k}-1 \\[2pt] m_i}\right).\end{align}
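The interchange of summation leading from (6·3) to (6·4) is exact, so both sides can be compared for small $n$ by direct enumeration. In the sketch below (exact rational arithmetic, outside the proof; the parameter choices are for speed only), a $\sigma$-invariant ordered partition is encoded by assigning each cycle of $\sigma$ to one of the $k$ parts:

```python
import itertools
import math
from fractions import Fraction

def cycles(perm):
    """Cycles of perm (0-indexed: perm[i] is the image of i), as tuples."""
    n, seen, out = len(perm), [False] * len(perm), []
    for i in range(n):
        if not seen[i]:
            c, j = [], i
            while not seen[j]:
                seen[j] = True
                c.append(j)
                j = perm[j]
            out.append(tuple(c))
    return out

def lhs_63(n, k, u):
    """(6.3) directly: average over S_n of tau_k(sigma)^{-1} times the number of
    ordered partitions of [n] into k sigma-invariant sets A_1, ..., A_k
    with |A_i| <= n u_i for i = 1, ..., k-1."""
    total = Fraction(0)
    for perm in itertools.permutations(range(n)):
        cyc = cycles(perm)
        count = 0
        # invariant subsets are unions of cycles, so assign each cycle to a part
        for assign in itertools.product(range(k), repeat=len(cyc)):
            sizes = [0] * k
            for c, a in zip(cyc, assign):
                sizes[a] += len(c)
            if all(sizes[i] <= n * u[i] for i in range(k - 1)):
                count += 1
        total += Fraction(count, k ** len(cyc))  # tau_k(sigma) = k^{#cycles}
    return total / math.factorial(n)

def rhs_64(n, k, u):
    """(6.4): the same quantity as a sum of generalised binomial coefficients."""
    def b(m):  # binom(m + 1/k - 1, m), exactly
        return math.prod((Fraction(1, k) + j for j in range(m)),
                         start=Fraction(1)) / math.factorial(m)
    total = Fraction(0)
    for ms in itertools.product(*(range(int(n * u[i]) + 1) for i in range(k - 1))):
        mk = n - sum(ms)
        if mk >= 0:
            total += math.prod((b(m) for m in ms), start=Fraction(1)) * b(mk)
    return total

assert lhs_63(5, 2, (Fraction(1, 2),)) == rhs_64(5, 2, (Fraction(1, 2),)) == Fraction(1, 2)
```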

Let $I \subseteq [k-1]$ be a nonempty subset. Then using (6·2) from Lemma 6·1, the contribution to (6·4) of the terms with $m_i=0$ for $i\in I$ is

\begin{align}\ll \underset{\substack{1 \leqslant m_i \leqslant nu_i\\[2pt] i \notin I}}{\sum\cdots\sum}\left(\prod_{i \notin I}m_i^{\frac{1}{k}-1}\right)m_k^{\frac{1}{k}-1}= n^{-\frac{|I|}{k}}\underset{\substack{1 \leqslant m_i \leqslant nu_i \\[2pt] i \notin I}}{\sum\cdots\sum}&\left(\prod_{i \notin I}\left(\frac{m_i}{n}\right)^{\frac{1}{k}-1}\right)\nonumber \\[5pt] \times & \left(1-\frac{m_1}{n}-\cdots-\frac{m_{k-1}}{n}\right)^{\frac{1}{k}-1}n^{-(k-1-|I|)}, \nonumber\end{align}

which is

(6·5) \begin{align} \ll n^{-\frac{|I|}{k}} \underset{\substack{0 \leqslant t_i \leqslant u_i \\[2pt] i \notin I}}{\int\cdots\int}\left(\prod_{i \notin I}t_i^{\frac{1}{k}-1}\right)\left(1-\sum_{i \notin I}t_i\right)^{\frac{1}{k}-1}\prod_{i \notin I}dt_i\ll n^{-\frac{|I|}{k}}.\ \end{align}

Also, given that $m_1,\dots,m_k\geqslant 1,$ the contribution to (6·4) of the terms with $m_j>nu_j-1$ for some $j=1,\dots,k-1$ is

\begin{align}\ll \sum_{j=1}^{k-1}{(nu_j)}^{\frac{1}{k}-1}\underset{\substack{1 \leqslant m_i \leqslant nu_i\\[2pt] i\neq j}}{\sum\cdots\sum}\prod_{\substack{i=1\\[2pt] i\neq j}}^k m_i^{\frac{1}{k}-1}=n^{-\frac{1}{k}} \sum_{j=1}^{k-1}{(nu_j)}^{\frac{1}{k}-1}\underset{\substack{1 \leqslant m_i \leqslant nu_i\\[2pt] i\neq j}}{\sum\cdots\sum}\prod_{\substack{i=1\\[2pt] i\neq j}}^k\left(\frac{m_i}{n}\right)^{\frac{1}{k}-1}\frac{1}{n^{k-2}}. \nonumber\end{align}

Since $u_j \geqslant {1}/{n}$ for $j=1,\dots,k-1,$ this is

(6·6) \begin{align} &\ll n^{-\frac{1}{k}} \sum_{j=1}^{k-1} (nu_j)^{\frac{1}{k}-1}\underset{\substack{0 \leqslant t_i \leqslant u_i, i=1,\dots,k-1\\[2pt] i \neq j}}{\int\cdots\int}\prod_{\substack{i=1\\[2pt] i \neq j}}^{k-1}t_i^{\frac{1}{k}-1}(1-t_1-\cdots-t_{k-1})^{\frac{1}{k}-1}dt_1\cdots dt_{k-1} \nonumber\\[5pt] &\ll n^{-\frac{1}{k}} \sum_{j=1}^{k-1} (nu_j)^{\frac{1}{k}-1}\left(1-u_j\right)^{-\frac{1}{k}}\ll n^{-\frac{1}{k}}.\end{align}

Collecting the error terms (6·5) and (6·6), the expression (6·4) equals

(6·7) \begin{align} \underset{\substack{1 \leqslant m_i \leqslant nu_i-1 \\[2pt] i=1,\dots,k-1}}{\sum\cdots\sum}\prod_{i=1}^k\left(\substack{m_i+\frac{1}{k}-1\\[2pt] m_i}\right)+O(n^{-\frac{1}{k}}).\end{align}

Applying (6·2) from Lemma 6·1, the main term of (6·7) is the Riemann sum

(6·8) \begin{align}\frac{1}{\Gamma \left(\frac{1}{k} \right)^k}\underset{\substack{1 \leqslant m_i \leqslant nu_i-1 \\[2pt] i=1,\dots,k-1}}{\sum\cdots\sum}\prod_{i=1}^k \left(\frac{m_i}{n}\right)^{\frac{1}{k}-1}\frac{1}{n^{k-1}},\end{align}

with an error term

(6·9) \begin{align} \ll \frac{1}{n}\sum_{j=1}^{k}\underset{\substack{1 \leqslant m_i \leqslant nu_i-1\\[2pt] i=1,\dots,k-1}}{\sum\cdots\sum}\left(\frac{m_j}{n}\right)^{\frac{1}{k}-2}\prod_{\substack{i=1\\[2pt] i \neq j}}^k\left(\frac{m_i}{n}\right)^{\frac{1}{k}-1}\frac{1}{n^{k-1}}.\end{align}

Let us first bound the error term (6·9). For each $j=1,\dots,k-1,$ we have

\begin{align}\frac{1}{n}&\sum_{1 \leqslant m_j \leqslant nu_j-1}\left(\frac{m_j}{n} \right)^{\frac{1}{k}-2}\underset{\substack{1 \leqslant m_i \leqslant nu_i-1 \\[2pt] i=1,\dots,k-1, i\neq j}}{\sum\cdots\sum}\prod_{\substack{i=1\\[2pt] i \neq j}}^{k}\left(\frac{m_i}{n} \right)^{\frac{1}{k}-1}\frac{1}{n^{k-1}} \nonumber\\[5pt] &\ll \frac{1}{n} \int_{\frac{1}{n}}^{1-\frac{1}{n}} t_j^{\frac{1}{k}-2}\underset{\substack{0 \leqslant t_i \leqslant u_i\\[2pt] i=1,\dots,k-1, i \neq j}}{\int\cdots\int}\prod_{\substack{i=1\\[2pt] i \neq j}}^{k-1}t_i^{\frac{1}{k}-1}(1-t_1-\cdots-t_{k-1})^{\frac{1}{k}-1}dt_1\cdots dt_{k-1} \nonumber\\[5pt] &\ll \frac{1}{n}\int_{\frac{1}{n}}^{1-\frac{1}{n}}t_j^{\frac{1}{k}-2}(1-t_j)^{-\frac{1}{k}}dt_j\ll n^{-\frac{1}{k}}. \nonumber\end{align}

Arguing similarly for $j=k,$ we also have

\begin{align}\frac{1}{n}\underset{\substack{1 \leqslant m_i \leqslant nu_i-1 \\[2pt] i=1,\dots,k-1}}{\sum\cdots\sum}\prod_{i=1}^{k-1}\left(\frac{m_i}{n} \right)^{\frac{1}{k}-1}\left(\frac{m_k}{n} \right)^{\frac{1}{k}-2}\frac{1}{n^{k-1}} \ll n^{-\frac{1}{k}}.\nonumber\end{align}

Therefore, the error term (6·9) is $\ll n^{-\frac{1}{k}}$ and we are left with the main term (6·8).

The distribution function $F(u_1,\ldots,u_{k-1})$ equals

(6·10) \begin{align} \frac{1}{\Gamma \left(\frac{1}{k} \right)^k}&\underset{\substack{1 \leqslant m_i \leqslant nu_i-1 \\[2pt] i=1,\dots,k-1 }}{\sum\cdots\sum}\int_{\frac{m_1}{n}}^{\frac{m_1+1}{n}}\cdots\int_{\frac{m_{k-1}}{n}}^{\frac{m_{k-1}+1}{n}}t_1^{\frac{1}{k}-1}\cdots t_{k-1}^{\frac{1}{k}-1}(1-t_1-\cdots-t_{k-1})^{\frac{1}{k}-1}dt_1\cdots dt_{k-1} \nonumber\\[3pt] &+O\left(\sum_{j=1}^{k-1}\int_{0}^{\frac{1}{n}}t_j^{\frac{1}{k}-1}\underset{\substack{0 \leqslant t_i \leqslant u_i\\[2pt] i\neq j}}{\int\cdots\int}\prod_{\substack{i=1\\[2pt] i \neq j}}^{k-1}t_i^{\frac{1}{k}-1}(1-t_1-\cdots-t_{k-1})^{\frac{1}{k}-1}dt_1\cdots dt_{k-1} \right) \nonumber\\[3pt] &+O\left(\sum_{j=1}^{k-1}\int_{u_j-\frac{1}{n}}^{u_j}t_j^{\frac{1}{k}-1}\underset{\substack{0 \leqslant t_i \leqslant u_i \\[2pt] i\neq j}}{\int\cdots\int}\prod_{\substack{i=1\\[2pt] i \neq j}}^{k-1}t_i^{\frac{1}{k}-1}(1-t_1-\cdots-t_{k-1})^{\frac{1}{k}-1}dt_1\cdots dt_{k-1} \right).\end{align}

The first error term in (6·10) is

(6·11) \begin{align} \ll \sum_{j=1}^{k-1}\int_{0}^{\frac{1}{n}}t_j^{\frac{1}{k}-1}(1-t_j)^{-\frac{1}{k}}dt_j\ll n^{-\frac{1}{k}}.\end{align}

The second error term in (6·10) is

(6·12) \begin{align} \ll \sum_{j=1}^{k-1}\int_{u_j-\frac{1}{n}}^{u_j}t_j^{\frac{1}{k}-1}(1-t_j)^{-\frac{1}{k}}dt_j&\leqslant \sum_{j=1}^{k-1}\int_{0}^{\frac{1}{n}}t_j^{\frac{1}{k}-1}(1-t_j)^{-\frac{1}{k}}dt_j \nonumber\\[3pt] &\ll n^{-\frac{1}{k}}.\end{align}

By Taylor’s theorem, for $(t_1,\dots,t_{k-1}) \in\left[{m_1}/{n}, ({m_1+1})/{n} \right]\times \cdots \times [({m_{k-1}})/{n}, ({m_{k-1}+1})/{n}],$ we have

\begin{align*}t_1^{\frac{1}{k}-1}\cdots t_{k-1}^{\frac{1}{k}-1}(1-t_1-\cdots-t_{k-1})^{\frac{1}{k}-1}=\prod_{i=1}^k\left(\frac{m_i}{n}\right)^{\frac{1}{k}-1}+O\!\left(\!\frac{1}{n}\sum_{j=1}^{k}\left(\frac{m_j}{n}\right)^{\frac{1}{k}-2}\prod_{\substack{i=1\\[2pt] i \neq j}}^k\!\left(\!\frac{m_i}{n}\right)^{\frac{1}{k}-1}\!\right)\!.\end{align*}

Using this approximation, we conclude from (6·10), (6·11) and (6·12) that

\begin{align*}F(u_1,\ldots,u_{k-1})=&\frac{1}{\Gamma \left(\frac{1}{k} \right)^k}\underset{\substack{1 \leqslant m_i \leqslant nu_i-1 \\[2pt] i=1,\dots,k-1}}{\sum\cdots\sum}\prod_{i=1}^k \left(\frac{m_i}{n}\right)^{\frac{1}{k}-1}\frac{1}{n^{k-1}}+O\left(n^{-\frac{1}{k}} \right)\\[5pt] &+O\left(\frac{1}{n}\underset{\substack{1 \leqslant m_i \leqslant nu_i-1 \\[2pt] i=1,\dots,k-1}}{\sum\cdots\sum}\sum_{j=1}^{k}\left(\frac{m_j}{n}\right)^{\frac{1}{k}-2}\prod_{\substack{i=1\\[2pt] i \neq j}}^k\left(\frac{m_i}{n}\right)^{\frac{1}{k}-1}\frac{1}{n^{k-1}} \right),\end{align*}

and the last error term here is exactly the same as (6·9), which is again $\ll n^{-\frac{1}{k}}.$

7. Factorisation into k parts in the general setting

With a view to modelling the Dirichlet distribution with arbitrary parameters, we further explore the factorisation of integers into k parts in the general setting, using the multiplicative functions of several variables defined below.

Definition 7·1. An arithmetic function of k variables $F \colon \mathbb{N}^k \to \mathbb{C}$ is said to be multiplicative if it satisfies the condition $F(1,\ldots,1)=1$ and the functional equation

\begin{align*}F(m_1n_1,\ldots,m_kn_k)=F(m_1,\ldots,m_k)F(n_1,\ldots,n_k)\end{align*}

whenever $(m_1\cdots m_k, n_1\cdots n_k)=1,$ or equivalently,

\begin{align*}F(n_1,\ldots,n_k)=\prod_{p}F(p^{v_p(n_1)},\ldots,p^{v_p(n_k)}),\end{align*}

where $v_p(n)\;:\!=\;\max\{k \geqslant 0 \, : \, p^k | n\}.$
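For example, $F(n_1,\ldots,n_k)=\tau_k(n_1\cdots n_k)^{-1}$ satisfies this functional equation, since $\tau_k$ is multiplicative. A small sketch checking it on sample coprime arguments (outside the paper; the naive divisor-sum recursion for $\tau_k$ is for illustration only):

```python
from fractions import Fraction
from math import gcd, prod

def tau(n, k):
    """tau_k(n) = number of ordered k-tuples of positive integers with product n,
    via the recursion tau_k(n) = sum over d | n of tau_{k-1}(d)."""
    if k == 1:
        return 1
    return sum(tau(d, k - 1) for d in range(1, n + 1) if n % d == 0)

def F(*ns):
    """F(n_1, ..., n_k) = tau_k(n_1 ... n_k)^{-1}."""
    return Fraction(1, tau(prod(ns), len(ns)))

# functional equation on coprime blocks: here (m_1 m_2 m_3, n_1 n_2 n_3) = 1
m, n = (4, 3, 5), (7, 11, 1)
assert gcd(prod(m), prod(n)) == 1
assert F(*(a * b for a, b in zip(m, n))) == F(*m) * F(*n)
assert F(1, 1, 1) == 1
```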

Remark 7·1. Multiplicative functions of several variables, such as the “GCD function” and the “LCM function”, are interesting in their own right. See [ Reference Tóth22 ] for further discussion.

To adapt the proof of Theorem 1·1, we consider the following class of multiplicative functions.

Definition 7·2. Let $\boldsymbol{\alpha}=(\alpha_1,\ldots,\alpha_k), \boldsymbol{\beta}=(\beta_1,\ldots,\beta_k), \boldsymbol{c}= (c_1,\ldots,c_k),\boldsymbol{\delta}=(\delta_1,\ldots,\delta_k)$ with $\alpha_j, \beta_j, c_j>0, \delta_j\geqslant 0$ for $j=1,\ldots,k.$ We denote by $\mathcal{M}(\boldsymbol{\alpha};\boldsymbol{\beta},\boldsymbol{c}, \boldsymbol{\delta})$ the class of non-negative multiplicative functions of k variables $F \colon \mathbb{N}^k \to \mathbb{C}$ satisfying the following conditions:

  (a) (divisor bound) for $j=1,\ldots,k,$ we have $|F(1,\ldots,\overbrace{n}^{\boldsymbol{j}-th},\ldots,1)| \leqslant \tau_{\beta_j}(n),$ where

    \begin{align*}\tau_{\beta}(n)\;:\!=\;\prod_{p}\left(\substack{v_p(n)+\beta-1 \\[2pt] v_p(n)}\right)\end{align*}
    is the generalised divisor function;
  (b) (analytic continuation) let $s=\sigma+it \in \mathbb{C}.$ For $j=1,\ldots,k,$ the Dirichlet series

    defined for $\sigma>1$ can be continued analytically to the domain where $\sigma>1-{c_j}/{\log (2+|t|)};$
  (c) (growth rate) for $j=1,\ldots,k,$ in the domain above we have the bound

    \begin{align*}\mathcal{P}_F(s;\boldsymbol{\alpha},j) \leqslant \delta_j\log (2+|t|).\end{align*}

For instance, the multiplicative function $F(n_1,\ldots,n_k)=\tau_k(n_1\cdots n_k)^{-1}$ belongs to the class

Applying the Mellin transform to (higher derivatives of) the multiple Dirichlet series

\begin{align*}\mathcal{D}_F(s_1,\ldots,s_k)\;:\!=\;\sum_{n_1=1}^{\infty}\cdots\sum_{n_k=1}^{\infty}\frac{F(n_1,\ldots,n_k)}{n_1^{s_1}\cdots n_k^{s_k}}\end{align*}

as before, one can prove the following generalisation of Lemma 4·3.

Lemma 7·1. Let $F \in \mathcal{M}(\boldsymbol{\alpha};\boldsymbol{\beta}, \boldsymbol{c}, \boldsymbol{\delta})$ be a multiplicative function of k variables, let $m \geqslant 2$ be an integer and let $x_1,\dots,x_k \geqslant e.$ We denote by $S_F(x_1,\dots,x_k;m)$ the weighted sum

\begin{align*}\sum_{d_1\leqslant x_1}\cdots\sum_{d_k\leqslant x_k}(\!\log d_1)^m \cdots (\!\log d_k)^m F(d_1,\ldots,d_k).\end{align*}

Then there exists $m_0=m_0(\boldsymbol{\alpha},\boldsymbol{\beta}, \boldsymbol{c}, \boldsymbol{\delta})$ such that for any integer $m\geqslant m_0,$ we have

\begin{align*}S_F(x_1,\ldots,x_k;m) = \prod_{j=1}^k\frac{1}{\Gamma(\alpha_j)}\int_{1}^{x_j}(\!\log y_j)^{\alpha_j+m-1} dy_j + R_F(x_1,\ldots,x_k;m)\end{align*}

with

\begin{align*}R_F(x_1,\dots,x_k;m) \ll x_1\cdots x_k\sum_{j=1}^k \left(\prod_{\substack{i=1\\[2pt] i \neq j}}^k (\!\log x_i)^{\alpha_i+m-1}\right) (\!\log x_j)^{\alpha_j+m-2}.\end{align*}

To model the Dirichlet distribution by factorising integers into k parts, we consider the following class of pairs of multiplicative functions.

Definition 7·3. Let $\theta>0$ and $\boldsymbol{\alpha}$ be a positive k-tuple. We denote by $\mathcal{M}_{\theta}(\boldsymbol{\alpha})$ the class of pairs of multiplicative functions $(f; G) $ satisfying the following conditions:

  (a) for $n \geqslant 1,$ we have

    \begin{align*}\sum_{n=d_1\cdots d_k}G(d_1,\ldots,d_k)>0;\end{align*}
  (b) the multiplicative function f belongs to the class $\mathcal{M}(\theta;\beta',c', \delta' )$ for some $\beta', c', \delta';$

  (c) the multiplicative function of k variables

    \begin{align*}F(d_1,\ldots,d_k)\;:\!=\;f(n)\cdot\frac{G(d_1,\ldots,d_k)}{\sum_{n=e_1\cdots e_k}G(e_1,\ldots,e_k)}\end{align*}
    belongs to the class $\mathcal{M}(\boldsymbol{\alpha};\boldsymbol{\beta}, \boldsymbol{c}, \boldsymbol{\delta})$ for some $\boldsymbol{\beta}, \boldsymbol{c}, \boldsymbol{\delta},$ where $n=d_1\cdots d_k.$

Remark 7·2. By definition, we must have $\theta=\alpha_1+\cdots+\alpha_k.$

Then, applying Lemma 7·1 followed by partial summation as before, one can prove the following generalisation of Theorem 1·1.

Theorem 7·1. Let $(f;G)$ be a pair of multiplicative functions belonging to the class $\mathcal{M}_{\theta}(\boldsymbol{\alpha}).$ Then uniformly for $x \geqslant 2$ and $u_1,\ldots,u_{k-1} \geqslant 0$ satisfying $u_1+\cdots+u_{k-1}\leqslant 1,$ we have

\begin{align*}\left(\sum_{m \leqslant x}f(m)\right)^{-1}&\sum_{n\leqslant x}f(n)\left( \sum_{n=e_1\cdots e_k}G(e_1,\ldots,e_k) \right)^{-1}\underset{n=d_1\cdots d_{k}}{\sum_{d_1 \leqslant n^{u_1}}\cdots\sum_{d_{k-1} \leqslant n^{u_{k-1}}}\sum_{d_k \leqslant n}}G(d_1,\ldots,d_k)\\[5pt] =&F_{\boldsymbol{\alpha}}(u_1,\ldots, u_{k-1})+O\left(\frac{1}{(\!\log x)^{\min \{1,\alpha_1,\ldots,\alpha_k\}}} \right).\end{align*}

Finally, we conclude with the following generalisation of Corollary 1·1.

Corollary 7·1. Let $(f;G)$ be a pair of multiplicative functions belonging to the class $\mathcal{M}_{\theta}(\boldsymbol{\alpha}).$ For $x\geqslant 1,$ let n be a random integer chosen from [1, x] with probability $\left(\sum_{m \leqslant x}f(m)\right)^{-1}f(n)$ and let $(d_1,\ldots,d_k)$ be a random k-tuple chosen from the set of all possible factorisations $\{(m_1,\ldots,m_k) \in \mathbb{N}^k\, : \, n=m_1\cdots m_k\}$ with probability $\left( \sum_{n=e_1\cdots e_k}G(e_1,\ldots,e_k) \right)^{-1}G(d_1,\ldots,d_k).$ Then as $x \to \infty,$ we have the convergence in distribution

\[\left(\frac{\log d_1}{\log n}, \ldots, \frac{\log d_k}{\log n}\right) \xrightarrow[]{\;\;\;\;d\;\;\;\;}\mathrm{Dir}\left(\alpha_1,\ldots,\alpha_k \right).\]

Remark 7·3. See [ Reference Bareikis and Mac̆iulis2, Reference Bareikis and Mac̆iulis3 ] for the cases $k=2$ and $k=3$ respectively, in which $G(d_1,\ldots,d_k)$ is of the form $(f_1\ast \cdots \ast f_{k-1} \ast 1)(d_1\cdots d_k)$ for some multiplicative functions $f_1,\ldots,f_{k-1}\;:\;\mathbb{N} \to \mathbb{C}.$

Example 7·1. For $k\geqslant 2,$ let $\theta, \lambda_1, \ldots, \lambda_k>0.$ We consider the pair of multiplicative functions

\begin{align*}f(n)=\tau_{\theta}(n);\quad G(d_1,\ldots,d_k)= \tau_{\lambda_1}(d_1) \cdots\tau_{\lambda_k}(d_k).\end{align*}

Then the Dirichlet distribution of dimension k

\begin{align*}\mathrm{Dir}\left(\frac{\theta\lambda_1}{\lambda_1+\cdots+\lambda_k},\ldots,\frac{\theta\lambda_k}{\lambda_1+\cdots+\lambda_k} \right)\end{align*}

can be modelled in the sense of Corollary 7·1. In particular, when $\theta=\lambda_1=\cdots=\lambda_k=1,$ it reduces to Theorem 1·1.
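For $\theta=\lambda_1=\lambda_2=1$ and $k=2$, this is the Deshouillers–Dress–Tenenbaum arcsine law recalled in the introduction. The sketch below (a numerical illustration only; the cutoff $x=3000$ is an arbitrary choice, not from the paper) checks two features of the mean distribution function: the symmetry coming from the pairing $d \leftrightarrow n/d$, and the value close to $\frac{2}{\pi}\arcsin\sqrt{1/2}=\frac{1}{2}$ at $u=\frac{1}{2}$:

```python
import math

def divisor_cdf_mean(x, u):
    """(1/x) * sum_{2 <= n <= x} P(D_n <= u), where D_n = log d / log n for a
    uniformly random divisor d of n (n = 1 is skipped: D_1 is undefined)."""
    divs = [[] for _ in range(x + 1)]
    for d in range(1, x + 1):               # sieve the divisor lists of 1..x
        for multiple in range(d, x + 1, d):
            divs[multiple].append(d)
    total = 0.0
    for n in range(2, x + 1):
        bound = n ** u + 1e-9               # nudge so that d = n^u counts as d <= n^u
        total += sum(1 for d in divs[n] if d <= bound) / len(divs[n])
    return total / x

x = 3000
h = {u: divisor_cdf_mean(x, u) for u in (0.3, 0.5, 0.7)}
# limiting values: (2/pi) * arcsin(sqrt(u)), approached at rate O(1/sqrt(log x))
limits = {u: 2 / math.pi * math.asin(math.sqrt(u)) for u in (0.3, 0.5, 0.7)}

# pairing d <-> n/d gives P(D_n <= u) = P(D_n >= 1 - u), hence near-symmetry
assert abs(h[0.3] + h[0.7] - 1) < 1e-3
assert h[0.3] < h[0.5] < h[0.7]             # monotone in u
assert 0.499 < h[0.5] < 0.503               # essentially exact at u = 1/2
```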

Example 7·2. For $q\geqslant 3,$ let $\{a_1,\ldots,a_{\varphi(q)}\}$ be a reduced residue system $\ (\mathrm{mod}\ q)$ . We consider the pair of multiplicative functions

Then the Dirichlet distribution of dimension $\varphi(q)$

\begin{align*}\mathrm{Dir}\left(\frac{1}{\varphi(q)},\ldots,\frac{1}{\varphi(q)} \right)\end{align*}

can be modelled in the sense of Corollary 7·1. In particular, when $q=4,$ it reduces to [ Reference Montgomery and Vaughan15 , exercise 6·2·22].

Example 7·3. For $k\geqslant 2,$ we consider the pair of multiplicative functions

Then the Dirichlet distribution of dimension k

\begin{align*}\mathrm{Dir}\left(\frac{1}{2k},\ldots,\frac{1}{2k} \right)\end{align*}

can be modelled in the sense of Corollary 7·1. In particular, when $k=2,$ it reduces to [ Reference Daoud, Hidri and Naimi6 , theorem 2].

Example 7·4. For $k\geqslant 2,$ we consider the pair of multiplicative functions

Then the Dirichlet distribution of dimension k

\begin{align*}\mathrm{Dir}\left(\frac{1}{k},\ldots,\frac{1}{k} \right)\end{align*}

can be modelled in the sense of Corollary 7·1. In particular, when $k=2,$ it reduces to [ Reference Feng and Cui8 , theorem 2] with $y=x.$

Example 7·5. For $k\geqslant 2$ , let $\mathcal{R}$ be a subset of $\{\{i,j\}\,:\, 1 \leqslant i \neq j \leqslant k\}.$ We consider the pair of multiplicative functions

\begin{align*}f(n) \equiv 1;\quad G(d_1,\ldots,d_k)=\begin{cases}1 & \mbox{if $(d_i,d_j)=1$ whenever $\{i,j\}\notin \mathcal{R},$}\\[5pt] 0 & \mbox{otherwise.}\end{cases}\end{align*}

Then the Dirichlet distribution of dimension k

\begin{align*}\mathrm{Dir}\left( \frac{1}{k},\ldots,\frac{1}{k} \right)\end{align*}

can be modelled in the sense of Corollary 7·1. In particular, when $k=2^r$ for $r \geqslant 2,$ it reduces to [ Reference de la Bretèche and Tenenbaum4 , théorème 1·1] with a suitable subset $\mathcal{R}$ via total decomposition sets (see [ Reference Hall12 , theorem 0·20]), which is itself a generalisation of [ Reference Bareikis and Mac̆iulis1 , theorem 2·1] for $r=2.$
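In the simplest instance $k=2$ and $\mathcal{R}=\emptyset$, the weight $G$ selects the unitary divisors of $n$, i.e. those $d$ with $(d, n/d)=1$, and the averaged distribution of $\log d/\log n$ again follows the arcsine law. A brief numerical sketch (illustrative only; the names are chosen here):

```python
import math

def unitary_divisors(n):
    """Divisors d of n with gcd(d, n/d) = 1."""
    return [d for d in range(1, n + 1) if n % d == 0 and math.gcd(d, n // d) == 1]

def mean_cdf_unitary(u, x=1000):
    """Average over 2 <= n <= x of P(log d / log n <= u),
    with d uniform on the unitary divisors of n."""
    total = 0.0
    for n in range(2, x + 1):
        uds = unitary_divisors(n)
        total += sum(1 for d in uds if math.log(d) <= u * math.log(n)) / len(uds)
    return total / (x - 1)

print(mean_cdf_unitary(0.5), 2 / math.pi * math.asin(math.sqrt(0.5)))
```

Since unitary divisors pair up as $d \leftrightarrow n/d$ and never equal $\sqrt{n}$ for $n \geqslant 2$, the averaged value at $u={1}/{2}$ is exactly ${1}/{2}$, as the arcsine law predicts.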

Example 7·6. For $k\geqslant 3$ , we consider the pair of multiplicative functions

\begin{align*}f(n) \equiv 1;\quad G(d_1,\ldots,d_k)=\prod_{j=1}^{k-1}\frac{1}{\tau(d_j\cdots d_k)}.\end{align*}

Then the Dirichlet distribution of dimension k

\begin{align*}\mathrm{Dir}\left(\frac{1}{2},\frac{1}{4},\ldots,\frac{1}{2^{k-2}},\frac{1}{2^{k-1}},\frac{1}{2^{k-1}} \right)\end{align*}

can be modelled in the sense of Corollary 7·1. In particular, when $k=3,$ it reduces to [ Reference de la Bretèche and Tenenbaum4 , théorème 1·2].

Unsurprisingly, we expect Theorem 7·1 to hold for polynomials and permutations as well. Specifically, in the realm of permutations, the counterpart to multiplicative functions is the generalised Ewens measure (see [ Reference Elboim and Gorodetsky7 ]). Detailed proofs will be provided in the author’s doctoral thesis.
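For a flavour of the permutation analogue in the simplest case ($k=2$ under the uniform measure, i.e. Ewens parameter $\theta=1$): sample the cycle type of a uniform permutation of $[n]$ and assign each cycle independently to one of two parts; the proportion of elements in the first part should approximately follow $\mathrm{Dir}({1}/{2},{1}/{2})$, the arcsine law. The sketch below is a heuristic illustration only, not taken from the paper; it uses the standard fact that the cycle through the smallest unused element of a uniform permutation has length uniform on the remaining elements:

```python
import random

def cycle_type(n, rng):
    """Cycle lengths of a uniform random permutation of [n]:
    the cycle through the smallest unused element has length
    uniform on {1, ..., remaining}."""
    lengths = []
    remaining = n
    while remaining > 0:
        L = rng.randint(1, remaining)
        lengths.append(L)
        remaining -= L
    return lengths

def split_fraction(n, rng):
    """Put each cycle in part 1 or part 2 with probability 1/2 each;
    return the fraction of elements landing in part 1."""
    return sum(L for L in cycle_type(n, rng) if rng.random() < 0.5) / n

rng = random.Random(0)
samples = [split_fraction(1000, rng) for _ in range(2000)]
print(sum(samples) / len(samples))  # close to 1/2 by symmetry
```

The sample mean and variance should be close to ${1}/{2}$ and ${1}/{8}$ respectively, matching $\mathrm{Beta}({1}/{2},{1}/{2})$.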

Acknowledgements

The author is grateful to Andrew Granville and Dimitris Koukoulopoulos for their suggestions and encouragement. He would also like to thank Sary Drappeau for pointing out relevant papers, and the anonymous referee for helpful comments and corrections.

References

Bareikis, G. and Mac̆iulis, A. Cesàro means related to the square of the divisor function. Acta Arith. 156(1) (2012), 83–99.
Bareikis, G. and Mac̆iulis, A. Modeling the beta distribution using multiplicative functions. Lith. Math. J. 57(2) (2017), 171–182.
Bareikis, G. and Mac̆iulis, A. Bivariate beta distribution and multiplicative functions. Eur. J. Math. 7(4) (2021), 1668–1688.
de la Bretèche, R. and Tenenbaum, G. Sur les processus arithmétiques liés aux diviseurs. Adv. in Appl. Probab. 48(A) (2016), 63–76.
Deshouillers, J.-M., Dress, F. and Tenenbaum, G. Lois de répartition des diviseurs. I. Acta Arith. 34(4) (1979), 273–285.
Daoud, M. S., Hidri, A. and Naimi, M. The distribution law of divisors on a sequence of integers. Lith. Math. J. 55(4) (2015), 474–488.
Elboim, D. and Gorodetsky, O. Multiplicative arithmetic functions and the generalised Ewens measure. ArXiv: 1909.00601 (2022).
Feng, B. and Cui, Z. DDT theorem over square-free numbers in short interval. Front. Math. China 12(2) (2017), 367–375.
Granville, A. The anatomy of integers and permutations. Preprint (2008), available at: https://dms.umontreal.ca/andrew/PDF/Anatomy.pdf.
Granville, A. and Granville, J. Prime suspects. The anatomy of integers and permutations, illustrated by Robert J. Lewis (Princeton University Press, Princeton, NJ, 2019).
Granville, A. and Koukoulopoulos, D. Beyond the LSD method for the partial sums of multiplicative functions. Ramanujan J. 49(2) (2019), 287–319.
Hall, R. R. Sets of multiples. Cambridge Tracts in Math. 118 (Cambridge University Press, Cambridge, 1996).
Koukoulopoulos, D. Localised factorisations of integers. Proc. Lond. Math. Soc. (3) 101(2) (2010), 392–426.
Koukoulopoulos, D. The distribution of prime numbers. Grad. Stud. Math. 203 (Amer. Math. Soc., Providence, RI, 2019).
Montgomery, H. L. and Vaughan, R. C. Multiplicative number theory. I. Classical theory. Cambridge Stud. Adv. Math. 97 (Cambridge University Press, Cambridge, 2007).
Nyandwi, S. and Smati, A. Distribution laws of pairs of divisors. Integers 13 (2013), paper no. A13, 13 pp.
Nyandwi, S. and Smati, A. Distribution laws of smooth divisors. ArXiv: 1806.05955 (2018).
Rosen, M. Number theory in function fields. Grad. Texts in Math. 210 (Springer-Verlag, New York, 2002).
Stanley, R. P. Enumerative combinatorics. Vol. 2. Cambridge Stud. Adv. Math. 62 (Cambridge University Press, Cambridge, 1999). With a foreword by Gian–Carlo Rota and appendix 1 by Sergey Fomin.
Tenenbaum, G. Introduction to analytic and probabilistic number theory, third ed. Grad. Stud. Math. 163 (Amer. Math. Soc., Providence, RI, 2015). Translated from the 2008 French edition by Patrick D. F. Ion.
Titchmarsh, E. C. The theory of the Riemann zeta-function, second ed. (The Clarendon Press, Oxford University Press, New York, 1986). Edited and with a preface by D. R. Heath–Brown.
Tóth, L. Multiplicative arithmetic functions of several variables: a survey. Mathematics without boundaries (Springer, New York, 2014), pp. 483–514.