
ON AN AVERAGE GOLDBACH REPRESENTATION FORMULA OF FUJII

Published online by Cambridge University Press:  17 January 2023

DANIEL A. GOLDSTON
Affiliation:
Department of Mathematics and Statistics, San Jose State University, San Jose, California, USA; [email protected]
ADE IRMA SURIAJAYA*
Affiliation:
Faculty of Mathematics, Kyushu University, Fukuoka, Japan

Abstract

Fujii obtained a formula for the average number of Goldbach representations with lower-order terms expressed as a sum over the zeros of the Riemann zeta function and a smaller error term. This assumed the Riemann Hypothesis. We obtain an unconditional version of this result and obtain applications conditional on various conjectures on zeros of the Riemann zeta function.

Type: Article

Copyright: © The Author(s), 2023. Published by Cambridge University Press on behalf of Foundation Nagoya Mathematical Journal

1 Introduction and statement of results

Let

(1) $$ \begin{align} \psi_2(n) = \sum_{m+m'=n} \Lambda(m)\Lambda(m'), \end{align} $$

where $\Lambda $ is the von Mangoldt function, defined by $\Lambda (n)=\log p$ if $n=p^m$ , p a prime and $m\ge 1$ , and $\Lambda (n)= 0$ otherwise. Thus, $\psi _2(n)$ counts Goldbach representations of n as sums of two primes or prime powers, weighted so that the primes have density 1 in the integers. Fujii [F1], [F2], [F3] in 1991 proved the following theorem concerning the average number of Goldbach representations.

Theorem (Fujii)

Assuming the Riemann Hypothesis, we have

(2) $$ \begin{align} \sum_{n\le N} \psi_2(n) = \frac{N^2}{2} - 2\sum_{\rho} \frac{N^{\rho+1}}{\rho(\rho +1)} + O(N^{4/3}(\log N)^{4/3}), \end{align} $$

where the sum is over the complex zeros $\rho =\beta +i\gamma $ of the Riemann zeta function $\zeta (s)$ , and the Riemann Hypothesis is $\beta = 1/2$ .
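The definitions above can be made concrete with a short numerical sketch (ours, not from the paper; the function names `mangoldt_table` and `psi2` are ours): a sieve tabulating $\Lambda(n)$ and a direct computation of $\psi_2(n)$. For example, $n=4$ has the single representation $2+2$, so $\psi_2(4)=(\log 2)^2$.

```python
import math

def mangoldt_table(N):
    # lam[n] = Lambda(n): log p if n = p^k for a prime p and k >= 1, else 0
    lam = [0.0] * (N + 1)
    composite = [False] * (N + 1)
    for p in range(2, N + 1):
        if not composite[p]:
            for m in range(2 * p, N + 1, p):
                composite[m] = True
            pk = p
            while pk <= N:          # Lambda(p^k) = log p for every power of p
                lam[pk] = math.log(p)
                pk *= p
    return lam

def psi2(n, lam):
    # psi_2(n) = sum over m + m' = n of Lambda(m) Lambda(m')
    return sum(lam[m] * lam[n - m] for m in range(1, n))

lam = mangoldt_table(100)
val4 = psi2(4, lam)   # (log 2)^2, from the single representation 2 + 2
```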

Thus, the average number of Goldbach representations is connected to the zeros of the Riemann zeta function, and as we will see later, is also closely connected to the error in the prime number theorem. The sum over zeros appears in the asymptotic formula even without the assumption of the Riemann Hypothesis, but the Riemann Hypothesis is needed to estimate the error term. With regard to the sum over zeros, it is useful to keep in mind the Riemann–von Mangoldt formula

(3) $$ \begin{align} N(T) := \sum_{0 < \gamma \le T} 1= \frac{T}{2\pi} \log \frac{T}{2 \pi} - \frac{T}{2\pi} + O(\log T) \end{align} $$

(see [I, Th. 25] or [T, Th. 9.4]). Thus, $N(T) \sim \frac {T}{2\pi }\log T$ , and we also obtain

(4) $$ \begin{align} N(T+1) - N(T) = \sum_{T<\gamma \le T+1} 1 \ll \log T. \end{align} $$

This estimate is very useful, and it shows that the sum over zeros in (2) is absolutely convergent. Hence, the Riemann Hypothesis implies that

$$ \begin{align*}\sum_{n\le N} \psi_2(n) = \frac{N^2}{2} + O(N^{3/2}).\end{align*} $$

This was shown by Fujii in [F1]. Unconditionally, Bhowmik and Ruzsa [BR] showed that the estimate

$$ \begin{align*}\sum_{n\le N} \psi_2(n) = \frac{N^2}{2} + O(N^{2-\delta})\end{align*} $$

implies that for all complex zeros $\rho =\beta +i\gamma $ of the Riemann zeta function, we have $\beta <1-\delta /6$ . We are interested, however, in investigating the error in (2), that is, the error estimate not including the sum over zeros.
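The size of the main term $N^2/2$ is easy to observe numerically (a heuristic illustration of ours, not from the paper): summing $\psi_2(n)$ up to a moderate $N$ via the convolution identity $\sum_{n\le N}\psi_2(n) = \sum_{m<N}\Lambda(m)\,\psi(N-m)$ already gives a ratio close to $1$.

```python
import math

N = 1000
# sieve Lambda(n) for n <= N
lam = [0.0] * (N + 1)
composite = [False] * (N + 1)
for p in range(2, N + 1):
    if not composite[p]:
        for m in range(2 * p, N + 1, p):
            composite[m] = True
        pk = p
        while pk <= N:
            lam[pk] = math.log(p)
            pk *= p

# prefix sums: psi[k] = sum_{n <= k} Lambda(n) = psi(k)
psi = [0.0] * (N + 1)
for n in range(1, N + 1):
    psi[n] = psi[n - 1] + lam[n]

# sum_{n <= N} psi_2(n) = sum_{m + m' <= N} Lambda(m) Lambda(m')
#                       = sum_m Lambda(m) * psi(N - m)
total = sum(lam[m] * psi[N - m] for m in range(1, N))
ratio = total / (N * N / 2)   # should be close to 1
```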

It was conjectured by Egami and Matsumoto [EM] in 2007 that the error term in (2) can be improved to $O(N^{1+\varepsilon })$ for any $\varepsilon>0$ . That error bound was finally achieved, assuming the Riemann Hypothesis, by Bhowmik and Schlage-Puchta [BS], who obtained $O(N\log ^5\!N)$ , and this was refined by Languasco and Zaccagnini [LZ1], who obtained the following result.

Theorem (Languasco–Zaccagnini)

Assuming the Riemann Hypothesis, we have

(5) $$ \begin{align} \sum_{n\le N} \psi_2(n) = \frac{ N^2}{2} -2 \sum_{\rho} \frac{N^{\rho+1} }{\rho(\rho +1)}+O( N\log^3\!N).\end{align} $$

A different proof of this theorem was given in [GY] along the same lines as [BS]. It was proved in [BS] that unconditionally,

(6) $$ \begin{align} \sum_{n\le N} \psi_2(n) = \frac{ N^2}{2} -2 \sum_{\rho} \frac{N^{\rho+1} }{\rho(\rho +1)}+\Omega( N\log\log N),\end{align} $$

and therefore the error term in (5) is close to best possible.

In this paper, we combine and hopefully simplify the methods of [BS] and [LZ1]. Our method is based on an exact form of Fujii’s formula (2) in which the error term is explicitly given. We state this as Theorem 1. We will relate this error term to the distribution of primes, which can be estimated using the variance of primes in short intervals.

We follow the notation and methods of Montgomery and Vaughan [MV1] fairly closely in what follows. Sums of the form $\sum _{\rho }$ or $\sum _{\gamma }$ run over nontrivial zeros $\rho =\beta +i\gamma $ of the Riemann zeta function, and all other sums run over the positive integers unless specified otherwise, so that $\sum _n = \sum _{n\ge 1}$ and $\sum _{n\le N} = \sum _{1\le n\le N}$ . We use the power series generating function

(7) $$ \begin{align} \Psi(z) = \sum_{n} \Lambda(n) z^n, \end{align} $$

which converges for $|z|<1$ , and obtain the generating function for $\psi _2(n)$ in (1) directly since

(8) $$ \begin{align} \Psi(z)^2 = \sum_{m,m'}\Lambda(m)\Lambda(m') z^{m+m'} = \sum_n \left(\sum_{m+m'=n}\Lambda(m)\Lambda(m')\right) z^{n}= \sum_{n} \psi_2(n) z^n. \end{align} $$

Our goal is to use properties of $\Psi (z)$ to study averages of the coefficients $\psi _2(n)$ of $\Psi (z)^2$ .
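The identity (8) can be checked numerically by truncating both series at a level where the tails are negligible (an illustration of ours, with $z=0.3$ and truncation level $M=100$; for $n\le M$ the coefficients of the truncated square agree exactly with $\psi_2(n)$, and the surplus terms of degree above $M$ are of size $O(z^M)$).

```python
import math

M = 100
z = 0.3
lam = [0.0] * (M + 1)
composite = [False] * (M + 1)
for p in range(2, M + 1):
    if not composite[p]:
        for m in range(2 * p, M + 1, p):
            composite[m] = True
        pk = p
        while pk <= M:
            lam[pk] = math.log(p)
            pk *= p

# left side of (8): (sum_{n <= M} Lambda(n) z^n)^2
Psi = sum(lam[n] * z ** n for n in range(1, M + 1))
lhs = Psi ** 2

# right side of (8): sum_{n <= M} psi_2(n) z^n, psi_2 computed by convolution
rhs = sum(
    sum(lam[m] * lam[n - m] for m in range(1, n)) * z ** n
    for n in range(2, M + 1)
)
# the two sides agree up to a tail of size O(z^M)
```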

Our version of Fujii’s theorem is as follows. We take

(9) $$ \begin{align} z=re(\alpha), \qquad e(u)= e^{2\pi i u},\end{align} $$

where $0\le r < 1$ , and define

(10) $$ \begin{align} I(r,\alpha) := \sum_n r^ne(\alpha n), \qquad I_N(r,\alpha) := \sum_{n\le N} r^ne(\alpha n). \end{align} $$

We also, accordingly, write $\Psi (z) = \sum _{n} \Lambda (n) r^ne(\alpha n)$ as $\Psi (r,\alpha )$ .

Theorem 1. For $N\ge 2$ , we have

(11) $$ \begin{align} \sum_{n\le N} \psi_2(n) = \frac{N^2}{2} - 2\sum_{\rho} \frac{N^{\rho +1}}{\rho(\rho +1)} -\Big( 2\log 2\pi-\frac12\Big)N + 2\frac{\zeta '}{\zeta}(-1) - \sum_k \frac{N^{1-2k}}{k(2k-1)} + E(N) , \end{align} $$

where, for $0<r<1$ ,

(12) $$ \begin{align} E(N) := \int_0^1 (\Psi(r,\alpha)-I(r,\alpha))^2 I_N(1/r,-\alpha)\, d\alpha. \end{align} $$

In particular, we have

(13) $$ \begin{align} \sum_{n\le N} \psi_2(n) = \frac{N^2}{2} - 2\sum_{\rho} \frac{N^{\rho +1}}{\rho(\rho +1)} + E(N) +O(N). \end{align} $$

The prime number theorem is equivalent to

(14) $$ \begin{align} \psi(x) := \sum_{n\le x}\Lambda (n) \sim x ,\qquad \text{as} \quad x\to \infty. \end{align} $$
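For instance (a quick check of ours), $\psi(x)/x$ is already close to $1$ for moderate $x$:

```python
import math

X = 10000
lam = [0.0] * (X + 1)
composite = [False] * (X + 1)
for p in range(2, X + 1):
    if not composite[p]:
        for m in range(2 * p, X + 1, p):
            composite[m] = True
        pk = p
        while pk <= X:
            lam[pk] = math.log(p)
            pk *= p

psi_X = sum(lam)    # psi(10000) = sum_{n <= 10000} Lambda(n)
ratio = psi_X / X   # close to 1 by the prime number theorem
```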

It was shown in [BS] that the error term $E(N)$ in Theorem 1 can be estimated using the functions

(15) $$ \begin{align} H(x) := \int_0^x (\psi(t) -t)^2 \, dt \qquad \text{and} \quad J(x,h) := \int_0^x (\psi(t+h) -\psi(t) -h)^2 \, dt. \end{align} $$

The integral $H(x)$ was studied by Cramér [C] in 1921; he proved, assuming the Riemann Hypothesis, that

(16) $$ \begin{align} H(x) \ll x^2. \end{align} $$

Selberg [S] was the first to study variances like $J(x,h)$ and obtain results on primes from this type of variance, both unconditionally and on the Riemann Hypothesis. The estimate we need is a refinement of Selberg’s result, first obtained by Saffari and Vaughan [SV]; they proved, assuming the Riemann Hypothesis, that for $1\le h \le x$ ,

(17) $$ \begin{align} J(x,h) \ll hx\log^2\Big(\frac{2x}{h}\Big). \end{align} $$
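For integer $x$ and $h$, the variance $J(x,h)$ is exactly computable, since $\psi$ is a step function with jumps only at integers, so the integrand is constant on each interval $[n,n+1)$. Here is a small numerical sketch of ours comparing $J(x,h)$ with the shape of the bound (17); the test bound on the ratio is deliberately generous, since (17) carries an unspecified constant.

```python
import math

x, h = 500, 10
top = x + h
lam = [0.0] * (top + 1)
composite = [False] * (top + 1)
for p in range(2, top + 1):
    if not composite[p]:
        for m in range(2 * p, top + 1, p):
            composite[m] = True
        pk = p
        while pk <= top:
            lam[pk] = math.log(p)
            pk *= p

psi = [0.0] * (top + 1)
for n in range(1, top + 1):
    psi[n] = psi[n - 1] + lam[n]

# psi(t) is constant on [n, n+1), so for integer h the integrand of J(x,h)
# is constant on each such interval and the integral collapses to a finite sum
J = sum((psi[n + h] - psi[n] - h) ** 2 for n in range(x))
ratio = J / (h * x * math.log(2 * x / h) ** 2)   # should be bounded
```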

By a standard argument using Gallagher’s lemma [M1, Lem. 1.9], we obtain the following unconditional bound for $E(N)$ .

Theorem 2. Let $\log _2\!N$ denote the logarithm base 2 of N. Then, for $N\ge 2$ , we have

(18) $$ \begin{align} \begin{aligned} |E(N)| \le \mathcal{E}(N) &:= \int_0^1 |\Psi(r,\alpha)-I(r,\alpha)|^2 |I_N(1/r,-\alpha)|\, d\alpha \\ &\ll \sum_{0\le k < \log_2\!N} \frac{N}{2^k} \mathcal{W}\left(N,\frac{N}{2^{k+2}}\right), \end{aligned} \end{align} $$

where, for $1\le h \le N$ ,

(19) $$ \begin{align} \begin{aligned} \mathcal{W}(N,h) &:= \int_0^{1/2h} \left| \sum_n ( \Lambda(n)-1) e^{-n/N} e(n\alpha) \right|^2 \, d\alpha \\ &\ll \frac1{h^2} H(h) + \frac{1}{N^2}\sum_j \frac1{2^j}H(jN) + \frac1{h^2}\sum_j \frac1{2^j}J(jN,h) + \frac{N}{h^2}. \end{aligned} \end{align} $$

Not only can Theorem 2 be applied to the error term in Fujii’s theorem, but it can also be used in bounding the error in the prime number theorem.

Theorem 3. For $N\ge 2$ and $0<r<1$ , we have

(20) $$ \begin{align} \psi(N) - N = \int_0^1 (\Psi(r,\alpha)-I(r,\alpha)) I_N(1/r,-\alpha)\, d\alpha , \end{align} $$

and

(21) $$ \begin{align} \psi(N) - N \ll \sqrt{\mathcal{E}(N)\log N}. \end{align} $$

The formula (20) has already been used for a related problem concerning averages of Goldbach representations [BR, (7)], and similar formulas are well known [Kou, (5.20)].

For our first application, we assume the Riemann Hypothesis and use (16) and (17) to recover (5), and also obtain the classical Riemann Hypothesis error in the prime number theorem due to von Koch in 1901 [Koc].

Theorem 4. Assume the Riemann Hypothesis. Then $\mathcal {E}(N) \ll N\log ^3\!N$ and $\psi (N) = N +O(N^{1/2}\log ^2\!N)$ .

For our second application, we will strengthen the results in Theorem 4 by assuming conjectures on bounds for $J(x,h)$ , which we will prove to be consequences of conjectured bounds related to Montgomery’s pair correlation conjecture. Montgomery introduced the function, for $x>0$ and $T\ge 3$ ,

(22) $$ \begin{align} F(x,T) := \sum_{0<\gamma,\gamma'\le T} x^{i(\gamma -\gamma')} w(\gamma -\gamma') , \qquad w(u) = \frac{4}{4+u^2},\end{align} $$

where the sum is over the imaginary parts $\gamma $ and $\gamma '$ of zeta-function zeros. By [GM, Lem. 8], $F(x,T)$ is real, $F(x,T)\ge 0$ , $F(x,T)=F(1/x,T)$ , and assuming the Riemann Hypothesis,

(23) $$ \begin{align} F(x,T) = \frac{T}{2\pi} \left( x^{-2}\log^2 T + \log x\right) \left( 1 + O\left(\sqrt{\frac{\log \log T}{\log T}}\right)\right)\end{align} $$

uniformly for $1\le x\le T$ . For larger x, Montgomery [M2] conjectured that, for every fixed $A>1$ ,

(24) $$ \begin{align} F(x,T) \sim \frac{T}{2\pi}\log T \end{align} $$

holds uniformly for $T\le x \le T^A$ . This conjecture implies the pair correlation conjecture for zeros of the zeta function [Go]. For all $x>0$ , we have the trivial (and unconditional) estimate

(25) $$ \begin{align} F(x,T)\le F(1,T) \ll T\log^2 T, \end{align} $$

which follows from (4).
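The basic properties of $F(x,T)$ listed above (real, nonnegative, symmetric under $x\mapsto 1/x$) can be observed directly from the definition (22) in a small computation (a sketch of ours, using the first ten ordinates $\gamma$, which are standard published values rounded to six decimals):

```python
import cmath
import math

# first ten ordinates of zeros of zeta on the critical line (rounded)
gammas = [14.134725, 21.022040, 25.010858, 30.424876, 32.935062,
          37.586178, 40.918719, 43.327073, 48.005151, 49.773832]

def F(x):
    # F(x,T) of (22) for T = 50, i.e., summed over the ordinates above
    s = 0.0 + 0.0j
    for g in gammas:
        for gp in gammas:
            d = g - gp
            s += cmath.exp(1j * d * math.log(x)) * 4.0 / (4.0 + d * d)
    return s

val = F(2.0)
# F is real, nonnegative, and satisfies F(x,T) = F(1/x,T)
```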

The main result in [GM] is the following theorem connecting $F(x,T)$ and $J(x,h)$ .

Theorem (Goldston–Montgomery)

Assume the Riemann Hypothesis. Then the $F(x,T)$ conjecture (24) is equivalent to the conjecture that for every fixed $\epsilon>0$ ,

(26) $$ \begin{align} J(x,h) \sim h x \log\Big(\frac{x}{h}\Big) \end{align} $$

holds uniformly for $1\le h\le x^{1-\epsilon } $ .

Adapting the proof of this last theorem, we obtain the following results.

Theorem 5. Assume the Riemann Hypothesis.

  • (A) If for any $A>1$ and $T\ge 2$ we have

    (27) $$ \begin{align} F(x,T) = o( T\log^2 T) \qquad \text{ uniformly for} \quad T\le x \le T^{A}, \end{align} $$
    then this implies, for $x\ge 2$ ,
    (28) $$ \begin{align} J(x,h) = o(h x \log^2\!x),\qquad \text{for} \quad 1\le h\le x,\end{align} $$
    and this bound implies $\mathcal {E}(N) = o(N\log ^3\!N)$ and $\psi (N) = N +o(N^{1/2}\log ^2\!N)$ .
  • (B) If for $T\ge 2$

    (29) $$ \begin{align} F(x,T) \ll T\log x \qquad \text{holds uniformly for} \quad T\le x \le T^{\log T}, \end{align} $$
    then we have, for $x\ge 2$ ,
    (30) $$ \begin{align} J(x,h) \ll h x \log x,\qquad \text{for} \quad 1\le h\le x,\end{align} $$
    and this bound implies $\mathcal {E}(N) \ll N\log ^2\!N$ and $\psi (N) = N +O(N^{1/2}(\log N)^{3/2})$ .

Montgomery’s $F(x,T)$ Conjecture (24) immediately implies (27), so we are using a weaker conjecture on $F(x,T)$ in Theorem 5(A). For Theorem 5(B), Montgomery’s $F(x,T)$ Conjecture (24) only implies (29) for $T\le x \le T^A$ , and this is a new conjecture for the range $T^A\le x \le T^{\log T}$ . Theorems 2 and 5 show that with either assumption, we obtain an improvement on the bound $E(N)\ll N\log ^3\!N$ in (5), which we state as the following corollary.

Corollary 1. Assume the Riemann Hypothesis. If either (27) or (28) is true, then we have

(31) $$ \begin{align} \sum_{n\le N} \psi_2(n) = \frac{ N^2}{2} -2 \sum_{\rho} \frac{N^{\rho+1} }{\rho(\rho +1)}+o( N\log^3\!N),\end{align} $$

whereas if either (29) or (30) is true, we have

(32) $$ \begin{align} \sum_{n\le N} \psi_2(n) = \frac{ N^2}{2} -2 \sum_{\rho} \frac{N^{\rho+1} }{\rho(\rho +1)}+O( N\log^2\!N).\end{align} $$

We will prove a more general form of the implication that a bound on $F(x,T)$ implies a corresponding bound on $J(x,h)$ , but Theorem 5 contains the most interesting special cases. We make crucial use of the Riemann Hypothesis bound (17) for h very close to x. The conjectures (29) and (30) are weaker than what the Goldston–Montgomery theorem suggests is true, but they suffice in Theorem 5, and the use of stronger bounds does not improve the results on $\mathcal {E}(N)$ . The result on the prime number theorem in Theorem 5(A) is due to Heath-Brown [H, Th. 1]. The bound in (29) is trivially true by (25) if $x\ge T^{\log T}$ .

For our third application, we consider the situation where there can be zeros of the Riemann zeta function off the half-line, but the supremum of the real parts of the zeros satisfies $1/2<\Theta < 1$ , where

$$\begin{align*}\Theta := \sup\{ \beta : \zeta(\beta +i \gamma) =0\}. \end{align*}$$

The following is a special case of a more general result recently obtained in [BHM+].

Theorem 6 (Bhowmik, Halupczok, Matsumoto, and Suzuki)

Assume $1/2<\Theta < 1$ . For $N\ge 2$ and $1\le h\le N$ , we have

(33) $$ \begin{align} J(N,h) \ll h N^{2\Theta}\log^4\!N \qquad \text{and}\quad \mathcal{E}(N) \ll N^{2\Theta}\log^5\!N.\end{align} $$

Weaker results of this type were first obtained by Granville [Gra1], [Gra2]. To prove this theorem, we only need to adjust a few details of the proof in [BHM+] to match our earlier theorems. With more work, the power of $\log N$ can be improved, but that makes no difference in applications. We will not deal with the situation when $\Theta =1$ , which depends on the width of the zero-free region to the left of the line $\sigma =1$ and the unconditional error in the prime number theorem. The converse question of using a bound for $E(N)$ to obtain a zero-free region has been solved in an interesting recent paper of Bhowmik and Ruzsa [BR]. As a corollary, we see that the terms in Fujii’s formula down to size $\beta \ge 1/2$ are main terms assuming the conjecture $\Theta < 3/4$ instead of the Riemann Hypothesis.

Corollary 2. Suppose $1/2 \le \Theta < 3/4$ . Then

(34) $$ \begin{align} \sum_{n\le N} \psi_2(n) = \frac{ N^2}{2} -2 \sum_{\substack{\rho\\ \beta\ge 1/2}} \frac{N^{\rho+1}}{\rho(\rho+1)}+o( N^{3/2}).\end{align} $$

The term $E(N)$ in Fujii’s formula will give main terms $\gg N^{3/2}$ from zeros with $\beta \ge 3/4$ . For weighted versions of Fujii’s theorem, there are formulas where the error term corresponding to $E(N)$ is explicitly given in terms of sums over zeros (see [BKP], [LZ2]). In principle, one could use an explicit formula for $\Psi (z)$ in Theorem 1 to obtain a complicated formula for $E(N)$ in terms of zeros, but we have not pursued this.

We conclude with a slight variation on Fujii’s formula. We have been counting Goldbach representations using the von Mangoldt function $\Lambda (n)$ , so that we include prime powers and also representations of odd integers. Doing this leads to clean formulas such as Fujii’s formula, both because of the weighting by $\log p$ and because the complicated lower-order terms coming from prime powers are all absorbed into the sum over $\rho $ in Fujii’s formula. The reason for this can be seen in Landau’s formula [L], which states that for fixed x and $T\to \infty $ ,

(35) $$ \begin{align} \sum_{0<\gamma \le T} x^{\rho} =- \frac{T\Lambda(x)}{2\pi} + O(\log T),\end{align} $$

where we define $\Lambda (x)$ to be zero for real noninteger x. In the following easily proven theorem, we remove the Goldbach representations counted by the von Mangoldt function for odd numbers.

Theorem 7. We have

(36) $$ \begin{align} \sum_{\substack{n\le N \\ n \ \mathrm{odd}}} \psi_2(n) = 2 N \log N +O(N),\end{align} $$

and therefore, by (13),

(37) $$ \begin{align} \sum_{\substack{n\le N \\ n \ \mathrm{even}}} \psi_2(n) = \frac{N^2}{2} - 2\sum_{\rho} \frac{N^{\rho +1}}{\rho(\rho +1)} - 2 N \log N + E(N) +O(N).\end{align} $$

The interesting aspect of (37) is that a new main term $-2N\log N$ has been introduced into Fujii’s formula, and this term comes from not allowing representations where the von Mangoldt function is evaluated at the prime 2 and its powers. If we denote the error term $E_{\mathrm {even}}(N) := - 2N\log N+E(N)$ in (37), then we see that at least one or both of $E(N)$ and $E_{\mathrm {even}}(N)$ are $\Omega (N\log N)$ . The simplest answer for which possibility occurs would be that the error term in Fujii’s formula is smaller than any term generated by altering the support of the von Mangoldt function, in which case $E_{\mathrm {even}}(N) = - 2N\log N +o(N\log N)$ and $E(N) = o(N\log N)$ . Whether this is true or not seems to be a very difficult question.
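The main term in (36) can be observed numerically (a sketch of ours): for odd $n$, one summand in each representation must be a power of $2$, and the total grows like $2N\log N$, with the ratio approaching $1$ only slowly because of the $O(N)$ term.

```python
import math

N = 2000
lam = [0.0] * (N + 1)
composite = [False] * (N + 1)
for p in range(2, N + 1):
    if not composite[p]:
        for m in range(2 * p, N + 1, p):
            composite[m] = True
        pk = p
        while pk <= N:
            lam[pk] = math.log(p)
            pk *= p

# sum of psi_2(n) over odd n <= N; each representation of an odd n
# pairs a power of 2 with an odd prime power
total_odd = sum(
    lam[m] * lam[n - m]
    for n in range(3, N + 1, 2)
    for m in range(1, n)
)
ratio = total_odd / (2 * N * math.log(N))   # tends to 1 slowly
```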

2 Proofs of Theorems 1–4

Proof of Theorem 1

We first obtain a weighted version of Fujii’s theorem. By (14), we see $\Lambda (n)$ is on average 1 over long intervals, and therefore as a first-order average approximation to $\Psi (z)$ , we use

(38) $$ \begin{align} I(z) := \sum_{n} z^n = \frac{z}{1-z} \end{align} $$

for $|z|<1$ . Observe that, on letting $n=m+m'$ in the calculations below,

$$\begin{align*}I(z) \sum_{m} a_m z^m = \sum_{m,m'} a_m z^{m+m'} =\sum_{n\ge 2} \left(\sum_{ m\le n-1}a_m\right) z^n, \end{align*}$$

and therefore

$$\begin{align*}I(z)^2 = I(z) I(z) = \sum_{n\ge 2} \left(\sum_{ m\le n-1}1\right) z^n = \sum_{n} (n-1)z^n ,\end{align*}$$

and

$$\begin{align*}I(z)\Psi(z) =\sum_{n\ge 2}\left(\sum_{ m\le n-1}\Lambda(m)\right) z^n = \sum_n\psi(n-1)z^n.\end{align*}$$

Thus,

$$\begin{align*}(\Psi(z)-I(z))^2 = \Psi(z)^2 - 2\Psi(z)I(z) + I(z)^2 = \sum_n \left( \psi_2(n) -2\psi(n-1) + (n-1)\right)z^n ,\end{align*}$$

and we conclude that

(39) $$ \begin{align} \sum_n \psi_2(n) z^n = \sum_n \left(2\psi(n-1) -(n-1)\right)z^n +(\Psi(z)-I(z))^2 , \end{align} $$

which is a weighted version of Theorem 1.
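Identity (39) can be verified numerically at a real point $z\in(0,1)$, truncating all series at a level where the tails are negligible (a check of ours, with $z=0.3$ and truncation level $M=100$; $I(z)=z/(1-z)$ is evaluated exactly).

```python
import math

M = 100
z = 0.3
lam = [0.0] * (M + 1)
composite = [False] * (M + 1)
for p in range(2, M + 1):
    if not composite[p]:
        for m in range(2 * p, M + 1, p):
            composite[m] = True
        pk = p
        while pk <= M:
            lam[pk] = math.log(p)
            pk *= p

psi = [0.0] * (M + 1)
for n in range(1, M + 1):
    psi[n] = psi[n - 1] + lam[n]

# left side of (39): sum_n psi_2(n) z^n
lhs = sum(
    sum(lam[m] * lam[n - m] for m in range(1, n)) * z ** n
    for n in range(2, M + 1)
)
# right side of (39): sum_n (2 psi(n-1) - (n-1)) z^n + (Psi(z) - I(z))^2
Psi = sum(lam[n] * z ** n for n in range(1, M + 1))
I = z / (1 - z)   # I(z) = sum_n z^n, summed exactly
rhs = sum((2 * psi[n - 1] - (n - 1)) * z ** n for n in range(1, M + 1)) \
      + (Psi - I) ** 2
```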

To remove the weighting, we take $ z=re(\alpha )$ with $0 < r < 1$ , and recalling (10), we have

(40) $$ \begin{align} \int_0^1 (\sum_{n} a_n r^ne(\alpha n)) I_N(1/r,-\alpha) \, d\alpha = \sum_{n}\sum_{n'\le N} a_n r^{n - n'}\int_0^1 e((n-n')\alpha)\, d\alpha = \sum_{n\le N} a_n. \end{align} $$

Thus, using (39) with $z=re(\alpha )$ and $0 < r < 1$ in (40), we obtain

$$ \begin{align*} \sum_{n\le N} \psi_2(n) &= \sum_{n\le N}(2\psi(n-1)-(n-1)) + \int_0^1(\Psi(r,\alpha)-I(r,\alpha))^2 I_N(1/r,-\alpha)\, d\alpha \\ &= \sum_{n\le N}(2\psi(n-1)-(n-1)) + E(N). \end{align*} $$

Utilizing [I, Chap. 2, (13)], $\psi _1(x) := \int _0^x \psi (t)\, dt$ , and $\psi _1(N) = \sum _{n\le N}\psi (n-1)$ , we have

(41) $$ \begin{align} \sum_{n\le N} \psi_2(n) = 2\psi_1(N) - \frac12 (N-1)N + E(N). \end{align} $$

To complete the proof, we use the explicit formula, for $x\ge 1$ ,

(42) $$ \begin{align} \psi_1(x) = \frac{x^2}{2} -\sum_{\rho} \frac{x^{\rho +1}}{\rho(\rho +1)} - (\log 2\pi)x +\frac{\zeta '}{\zeta}(-1) - \sum_k \frac{x^{1-2k}}{2k(2k-1)} \end{align} $$

(see [I, Th. 28] or [MV2, §12.1.1, Exer. 6]). Substituting (42) into (41), we obtain Theorem 1.
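The explicit formula (42) can be checked numerically at moderate $x$ (an illustration of ours): the zero sum and the remaining terms are small compared with $x^2/2 - (\log 2\pi)x$, so we compare $\psi_1(N)$ against those two terms only, with a generous tolerance to absorb everything else.

```python
import math

N = 1000
lam = [0.0] * (N + 1)
composite = [False] * (N + 1)
for p in range(2, N + 1):
    if not composite[p]:
        for m in range(2 * p, N + 1, p):
            composite[m] = True
        pk = p
        while pk <= N:
            lam[pk] = math.log(p)
            pk *= p

psi = [0.0] * (N + 1)
for n in range(1, N + 1):
    psi[n] = psi[n - 1] + lam[n]

# psi_1(N) = integral of psi over [0, N] = sum_{n <= N} psi(n - 1),
# since psi is constant on each interval [n-1, n)
psi1 = sum(psi[n - 1] for n in range(1, N + 1))
main = N * N / 2 - math.log(2 * math.pi) * N
```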

Proof of Theorem 2

Letting

(43) $$ \begin{align} \Lambda_0(n) := \Lambda(n) - 1, \end{align} $$

we have

$$ \begin{align*} \mathcal{E}(N) &:= \int_0^1 |\Psi(r,\alpha)-I(r,\alpha)|^2 |I_N(1/r,-\alpha)|\, d\alpha \\ &= 2\int_0^{1/2} \left| \sum_n \Lambda_0(n)r^n e(n\alpha) \right|^2 |I_N(1/r,-\alpha)| \,d\alpha. \end{align*} $$

We will now choose r in terms of N by setting

(44) $$ \begin{align} r=e^{-1/N}, \end{align} $$

and with this choice, it is easy to see that for $|\alpha | \le 1/2$ , we have

(45) $$ \begin{align} |I_N(1/r,-\alpha)| = \left| \frac{e-e(\alpha N)}{r - e(\alpha)}\right| \ll \min\Bigg(N, \frac1{|\alpha|}\Bigg).\end{align} $$

Therefore, we obtain

(46) $$ \begin{align} \mathcal{E}(N) \ll \int_0^{1/2} \left| \sum_n \Lambda_0(n)r^n e(n\alpha) \right|^2 \min\Bigg(N, \frac1{\alpha}\Bigg) \, d\alpha. \end{align} $$

Letting

$$\begin{align*} h_N(\alpha) := N \quad \text{for} \quad 0\le \alpha < \frac1N, \qquad h_N(\alpha) := \frac{N}{2^k} \quad \text{for} \quad \frac{2^k}{N}\le \alpha < \frac{2^{k+1}}{N}, \quad k\ge 0, \end{align*}$$

then $ \frac 12 h_N(\alpha ) \le \min (N, \frac 1{\alpha }) \le h_N(\alpha )$ in the range $0\le \alpha \le 1/2$ . Now, if we put

$$\begin{align*} H_N(\alpha) := \sum_{0\le k < \log_2\!N} \frac{N}{2^k}\, \mathbb{1}_{[0,\, 2^{k+1}/N)}(\alpha), \end{align*}$$

then $h_N(\alpha )\le H_N(\alpha ) \le 2 h_N(\alpha )$ and therefore $ \frac 14 H_N(\alpha ) \le \min (N, \frac 1{\alpha }) \le H_N(\alpha ) $ in the range $0\le \alpha \le 1/2$ . We conclude

$$\begin{align*}\min\Big(N, \frac1{\alpha}\Big) \asymp h_N(\alpha) \asymp H_N(\alpha), \qquad \text{for} \quad 0\le \alpha \le 1/2. \end{align*}$$

Thus,

(47) $$ \begin{align} \mathcal{E}(N) \ll \sum_{0\le k< \log_2\!N} \frac{N}{2^k} \int_0^{2^{k+1}/N} \left| \sum_n \Lambda_0(n) r^n e(n\alpha) \right|^2 \, d\alpha = \sum_{0\le k< \log_2\!N} \frac{N}{2^k} \mathcal{W}\left(N,\frac{N}{2^{k+2}}\right). \end{align} $$

Gallagher’s lemma gives the bound (see [M1, Lem. 1.9] or [GY, §4])

$$\begin{align*}\int_0^{1/2h}\left|\sum_n a_n e(n\alpha) \right|^2 \, d\alpha \ll \frac1{h^2} \int_{-\infty}^{\infty} \left| \sum_{ x<n\le x+h} a_n \right|^2 \, dx\end{align*}$$

provided $\sum _n|a_n| <\infty $ , and therefore we have

$$ \begin{align*} \mathcal{W}(N,h) &= \int_0^{1/2h} \left| \sum_n \Lambda_0(n)r^n e(n\alpha) \right|^2 \, d\alpha \\ &\ll \frac1{h^2} \int_{-\infty}^{\infty} \left| \sum_{ x<n\le x+h} \Lambda_0(n)r^n \right|^2 \, dx \\ &= \frac1{h^2} \int_{-h}^0 \left| \sum_{ n\le x+h} \Lambda_0(n)r^n \right|^2 \, dx + \frac1{h^2} \int_{0}^{\infty} \left| \sum_{ x<n\le x+h} \Lambda_0(n)r^n \right|^2 \, dx. \end{align*} $$

We conclude that

(48) $$ \begin{align} \mathcal{W}(N,h) \ll \frac1{h^2} \left( I_1(N,h) +I_2(N,h)\right), \end{align} $$

where

(49) $$ \begin{align} I_1(N,h) := \int_0^h \left| \sum_{ n\le x} \Lambda_0(n)e^{-n/N} \right|^2 \, dx, \qquad I_2(N,h) := \int_0^{\infty} \left|\sum_{x< n\le x+h} \Lambda_0(n)e^{-n/N} \right|^2 \, dx. \end{align} $$

To bound $I_1(N,h)$ and $I_2(N,h)$ , we will use partial summation on the integrands with the counting function

(50) $$ \begin{align} R(u) := \sum_{n\le u}\Lambda_0(n) = \psi(u) - \lfloor u \rfloor .\end{align} $$

Recalling $H(x)$ and $J(x,h)$ from (15), and making use here and later of the inequality $(a+b)^2\le 2(a^2+b^2)$ , we have, for $x\ge 1$ ,

(51) $$ \begin{align} \begin{aligned} \int_0^x R(t)^2\, dt &= \int_0^x \left( \psi(t) - \lfloor t \rfloor \right)^2\, dt = \int_0^x \left( \psi(t)-t + t - \lfloor t \rfloor \right)^2\, dt \\ &\ll \int_0^x \left( \psi(t)-t \right)^2\, dt + \int_0^x \left( t - \lfloor t \rfloor \right)^2\, dt \\ &\ll H(x) + x \end{aligned} \end{align} $$

and

(52) $$ \begin{align} \begin{aligned} \int_0^x \left(R(t+h)-R(t)\right)^2\, dt &\ll J(x,h) + \int_0^x (\lfloor t+h\rfloor - \lfloor t\rfloor -h)^2 dt \\ & \ll J(x,h) + x. \end{aligned}\end{align} $$

Next, by partial summation,

(53) $$ \begin{align} \sum_{n\le x}\Lambda_0(n) e^{-n/N} = \int_0^x e^{-u/N} dR(u) = R(x)e^{-x/N} + \frac1N\int_0^xR(u) e^{-u/N} \, du, \end{align} $$

and therefore, for $1\le h \le N$ , we have, using Cauchy–Schwarz and (51),

(54) $$ \begin{align} \begin{aligned} I_1(N,h) & \ll \int_0^h\left( R(x)^2e^{-2x/N} + \frac1{N^2}\left(\int_0^x|R(u)| e^{-u/N} \, du\right)^2 \right)\, dx \\& \ll \int_0^h\left( R(x)^2e^{-2x/N} + \frac1{N^2}\left(\int_0^xR(u)^2 e^{-u/N} \, du\right) \left(\int_0^x e^{-u/N}\,du\right)\right)\, dx \\& \ll \left(1+ \frac{h^2}{N^2}\right) \int_0^h R(x)^2\, dx \ll \int_0^h R(x)^2\, dx\ll H(h) + h. \end{aligned} \end{align} $$

For $I_2(N,h)$ , we replace x by $x+h$ in (53) and take their difference to obtain

(55) $$ \begin{align}\begin{aligned} \sum_{x<n\le x+h}\Lambda_0(n) e^{-n/N} &= R(x+h)e^{-(x+h)/N} - R(x)e^{-x/N} + \frac1N\int_x^{x+h}R(u) e^{-u/N} \, du \\ &= \left(R(x+h)-R(x)\right) e^{-x/N} + O\left( \frac{h}{N}|R(x+h)|e^{-x/N} \right) \\ &\qquad+O\left(\frac1N\int_x^{x+h}|R(u)|e^{-u/N}\, du\right). \end{aligned} \end{align} $$

Thus, we have

$$ \begin{align*} I_2(N,h) &\ll \int_0^{\infty} (R(x+h)-R(x))^2 e^{-2x/N}\, dx + \frac{h^2}{N^2} \int_0^{\infty} R(x+h)^2 e^{-2x/N}\, dx \\ &\qquad+ \frac1{N^2} \int_0^{\infty} \left(\int_x^{x+h} |R(u)|e^{-u/N}\, du \right)^2\, dx. \end{align*} $$

Applying the Cauchy–Schwarz inequality to the double integral above and changing the order of integration, we bound this term by

$$\begin{align*}\le \frac{h}{N^2} \int_0^{\infty} \int_x^{x+h} R(u)^2e^{-2u/N}\, du \, dx = \frac{h^2}{N^2} \int_0^{\infty} R(u)^2e^{-2u/N}\, du, \end{align*}$$

and therefore, for $1\le h \le N$ ,

$$\begin{align*}I_2(N,h) \ll \frac{h^2}{N^2} \int_0^{\infty} R(x)^2 e^{-2x/N}\, dx +\int_0^{\infty} (R(x+h)-R(x))^2 e^{-2x/N}\, dx.\end{align*}$$

Now, for any integrable function $f(x)\ge 0$ , we have

$$ \begin{align*} \int_0^{\infty} f(x)e^{-2x/N}\, dx &\le \int_0^N f(x)\, dx + \sum_{j=1}^{\infty} e^{-2j} \int_{jN}^{(j+1)N} f(x)\, dx \\ &\le \sum_{j=1}^{\infty} \frac{1}{2^{j-1}} \int_0^{jN} f(x)\, dx, \end{align*} $$

and therefore

(56) $$ \begin{align}\begin{aligned} I_2(N,h) &\ll \sum_{j=1}^{\infty}\frac{1}{2^{j}} \int_0^{jN}\left( \frac{h^2}{N^2} R(x)^2+ (R(x+h)-R(x))^2 \right)\, dx \\& \ll \sum_{j=1}^{\infty}\frac{1}{2^{j}}\left( \frac{h^2}{N^2}H(jN)+ \frac{jh^2}{N} +J(jN,h) + jN \right) \\& \ll \sum_{j=1}^{\infty}\frac{1}{2^{j}}\left(\frac{h^2}{N^2}H(jN)+ J(jN,h) \right) +N .\end{aligned}\end{align} $$

Thus, by (48), (54), and (56), we conclude that, for $1\le h\le N$ ,

(57) $$ \begin{align} \mathcal{W}(N,h) \ll \frac1{h^2} H(h) + \frac{1}{N^2}\sum_{j=1}^{\infty} \frac{1}{2^{j}}H(jN) + \frac1{h^2}\sum_{j=1}^{\infty} \frac{1}{2^{j}}J(jN,h) + \frac{N}{h^2}. \end{align} $$

Proof of Theorem 3

From (43), we have $ \Psi (z) - I(z) = \sum _n \Lambda _0(n)z^n$ , and (20) follows from (40). To obtain (21), we apply the Cauchy–Schwarz inequality to (20) and use (45) to obtain

$$\begin{align*}\psi(N) - N \ll \sqrt{\mathcal{E}(N) \int_0^{1/2}\min(N,\frac1{\alpha})\, d\alpha } \ll \sqrt{\mathcal{E}(N) \log N}.\end{align*}$$

Proof of Theorem 4

Assuming the Riemann Hypothesis, from (16) and (17), we have $H(x)\ll x^2$ and $J(x,h) \ll hx\log ^2\!x$ for $1\le h \le x$ . Therefore, by (19), we have $\mathcal {W}(N,h)\ll \frac {N}{h}\log ^2\!N $ for $1\le h\le N$ , so that by (18) we obtain $\mathcal {E}(N) \ll N\log ^3\!N$ , which is the first bound in Theorem 4, and Theorem 3 gives the second bound.

3 Theorem 8 and the proof of Theorem 5

We will prove the following more general form of Theorem 5. In addition to $J(x,h)$ from (15), we also make use of the related variance

(58) $$ \begin{align} \mathcal{J}(x,\delta) := \int_0^{x} \left( \psi((1+\delta)t) - \psi(t) - \delta t \right)^2\, dt, \qquad \text{ for} \quad 0<\delta \le 1. \end{align} $$

The variables h in $J(x,h)$ and $\delta $ in $\mathcal {J}(x,\delta )$ are roughly related by $h\asymp \delta x$ . Saffari and Vaughan [SV] proved, in addition to (17), that, assuming the Riemann Hypothesis, we have, for $0< \delta \le 1$ ,

(59) $$ \begin{align} \mathcal{J}(x,\delta) \ll \delta x^2 \log^2(2/\delta). \end{align} $$

Theorem 8. Assume the Riemann Hypothesis. Let $x\ge 2$ , and let $\mathcal {L}(x)$ be a continuous increasing function satisfying

(60) $$ \begin{align} \log x \le \mathcal{L}(x) \le \log^2\!x. \end{align} $$

Then the assertion

(61) $$ \begin{align} F(x,T) \ll T\mathcal{L}(x) \qquad \text{ uniformly for } \quad e^{\sqrt{\mathcal{L}(x)}} \le T \le x \end{align} $$

implies the assertion

(62) $$ \begin{align} \mathcal{J}(x,\delta) \ll \delta x^2\mathcal{L}(x) \qquad \text{ uniformly for } \quad 1/x \le \delta \le 2e^{-\sqrt{\mathcal{L}(x)}} , \end{align} $$

which implies the assertion

(63) $$ \begin{align} J(x,h) \ll hx\mathcal{L}(x) \qquad \text{ uniformly for } \quad 1 \le h \le 2xe^{-\sqrt{\mathcal{L}(x)}}, \end{align} $$

which implies $\mathcal {E}(N)\ll N\mathcal {L}(N) \log N$ and $\psi (N) = N +O(N^{1/2}\sqrt {\mathcal {L}(N)}\log N)$ .

Montgomery’s function $F(x,T)$ is used for applications to both zeros and primes. For applications to zeros, it is natural to first fix a large T, consider zeros up to height T, and then pick x to be a function of T that varies in some range depending on T. This is how Montgomery’s conjecture is stated in (24) and also how the conjectures (27) and (29) in Theorem 5 are stated. However, in applications to primes, following Heath-Brown [H], it is more convenient to fix a large x, consider the primes up to x, and then pick T as a function of x that varies in some range depending on x. This is what we have done in Theorem 8.

The ranges we have used in our conjectures on $F(x,T)$ , $\mathcal {J}(x,\delta )$ , and $J(x,h)$ in Theorem 8 are where these conjectures are needed. In proving Theorem 8, however, it is convenient to extend these ranges to include where the bounds are known to be true on the Riemann Hypothesis.

Lemma 1. Assume the Riemann Hypothesis. Let $x\ge 2$ . Then the assertion (61) implies, for any bounded $A\ge 1$ ,

(64) $$ \begin{align} F(x,T) \ll T\mathcal{L}(x) \qquad \text{ uniformly for } \quad 2 \le T \le x^A; \end{align} $$

the assertion (62) implies

(65) $$ \begin{align} \mathcal{J}(x,\delta) \ll \delta x^2\mathcal{L}(x) \qquad \text{ uniformly for } \quad 0 \le \delta \le 1; \end{align} $$

and the assertion (63) implies

(66) $$ \begin{align} J(x,h) \ll hx\mathcal{L}(x) \qquad \text{ uniformly for } \quad 0 \le h \le x. \end{align} $$

Proof of Lemma 1

We first prove (64). By the trivial bound (25), we have unconditionally in the range $2\le T\le e^{\sqrt {\mathcal {L}(x)}}$ that

$$\begin{align*}F(x, T)\ll T \log^2T\ll T\mathcal{L}(x), \end{align*}$$

which with (61) proves (64) for the range $2\le T\le x$ . For the range $x\le T \le x^A$ , we have $T^{1/A}\le x \le T$ , and therefore on the Riemann Hypothesis (23) implies $F(x,T) \sim T\log x \ll T\mathcal {L}(x) $ by (60).

Next, for (65) in the range $ 2e^{-\sqrt {\mathcal {L}(x)}}\le \delta \le 1$ , we have $\log ^2(2/\delta ) \le \mathcal {L}(x) $ , and therefore on the Riemann Hypothesis by (59), we have $\mathcal {J}(x,\delta ) \ll \delta x^2 \log ^2(2/\delta )\ll \delta x^2 \mathcal {L}(x) $ in this range. For the range $0\le \delta < 1/x$ , we have $0\le \delta x < 1$ and therefore there is at most one integer in the interval $(t, t+\delta t]$ for $0\le t \le x$ . Hence,

(67) $$ \begin{align} \begin{aligned} \mathcal{J}(x, \delta) & \ll \int_0^{x} \left( \psi((1+\delta)t) - \psi(t) \right)^2\, dt + \int_0^{x} (\delta t )^2\, dt \\& \ll \sum_{n\le x+1}\Lambda(n)^2\int_{n/(1+\delta)}^n \, dt +\delta^2x^3 \\& \ll \delta \sum_{n\le x+1}\Lambda(n)^2 n +\delta^2x^3 \\& \ll \delta x^2\log x + \delta^2x^3 \\& \ll \delta x^2( \log x + \delta x) \ll \delta x^2\log x \ll \delta x^2\mathcal{L}(x). \end{aligned} \end{align} $$

By this and (62), we obtain (65). A nearly identical proof for $J(x,h)$ shows that (63) implies (66).
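The elementary bound in (67) can be sanity-checked numerically. The sketch below (pure Python; the test values $x=2000$ and $\delta = 1/(2x)$ and the constant 3 are illustrative choices, not taken from the paper) approximates $\mathcal {J}(x,\delta )$ by a Riemann sum and compares it with $\delta x^2\log x$ ; it is an illustration, not part of the proof.

```python
import math

def mangoldt(n):
    # von Mangoldt function: log p if n = p^k for a prime p, else 0
    if n < 2:
        return 0.0
    for p in range(2, math.isqrt(n) + 1):
        if n % p == 0:
            while n % p == 0:
                n //= p
            return math.log(p) if n == 1 else 0.0
    return math.log(n)  # n itself is prime

x = 2000
delta = 1.0 / (2 * x)  # a delta in the "trivial" range 0 <= delta < 1/x
psi_cum = [0.0]
for n in range(1, x + 2):
    psi_cum.append(psi_cum[-1] + mangoldt(n))

def psi(t):
    # Chebyshev function psi(t) = sum of Lambda(n) over n <= t
    return psi_cum[min(int(t), len(psi_cum) - 1)] if t >= 0 else 0.0

# Riemann-sum approximation of J(x, delta) = int_0^x (psi((1+d)t) - psi(t) - d t)^2 dt
step, J, t = 0.01, 0.0, 0.0
while t < x:
    d = psi((1 + delta) * t) - psi(t) - delta * t
    J += d * d * step
    t += step

assert 0 < J < 3 * delta * x**2 * math.log(x)  # consistent with (67)
```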

Proof of Theorem 5 from Theorem 8

To prove (A), choose $\mathcal {L}(x) = \epsilon \log ^2\!x$ . The assumption (61) becomes $F(x,T) \ll \epsilon T\log ^2\!x$ for $x^{\sqrt {\epsilon }} \le T \le x$ , or equivalently $T \le x \le T^{1/\sqrt {\epsilon }}$ . Letting $A=1/\sqrt {\epsilon }$ , we have $F(x,T) \ll A^{-2} T\log ^2\!x$ for $T \le x \le T^A$ . The assertion (27) implies that this bound for $F(x,T)$ holds since

$$\begin{align*}F(x,T) = o(T\log^2T) = o(T\log^2\!x) \ll A^{-2} T\log^2\!x\end{align*}$$

if $A\to \infty $ sufficiently slowly. Thus, (63) holds, which by Lemma 1 implies $J(x,h) \ll \epsilon h x \log ^2\!x$ for $1\le h\le x$ . Moreover, by Theorem 8, $\mathcal {E}(N)\ll \epsilon N \log ^3\!N$ and $\psi (N) = N +O(\sqrt {\epsilon }N^{1/2}\log ^2\!N)$ , where $\epsilon $ can be taken as small as we wish.

To prove Theorem 5(B), choose $\mathcal {L}(x) = \log x$ . The assumption (61) becomes $F(x,T) \ll T\log x$ for $e^{\sqrt {\log x}} \le T \le x$ , or equivalently $T \le x \le T^{\log T}$ , and this is satisfied when (29) holds. Thus, by (63) and Lemma 1, $J(x,h) \ll h x \log x$ for $1\le h\le x$ , and by Theorem 8, $\mathcal {E}(N)\ll N \log ^2\!N$ and $\psi (N) = N +O(N^{1/2}\log ^{3/2} N)$ .

Proof of Theorem 8, (61) implies (62)

We start from the easily verified identity

$$\begin{align*}F(x,T) = \frac2{\pi}\int_{-\infty}^{\infty} \left|\sum_{0<\gamma\le T}\frac{x^{i\gamma}}{1+(t-\gamma)^2}\right|^2\, dt, \end{align*}$$

which is implicit in [M2] (see also [GM, (26)] and [Go, §4]). Next, Montgomery showed, using (4),

(68) $$ \begin{align} F(x,T) = \frac2{\pi}\int_0^T \left|\sum_{\gamma}\frac{x^{i\gamma}}{1+(t-\gamma)^2}\right|^2\, dt +O(\log^3 T). \end{align} $$

The main tool in proving this part of Theorem 8 is the following result.

Lemma 2. For $0<\delta \le 1$ and $e^{2\kappa } = 1+\delta $ , let

(69) $$ \begin{align} G(x,\delta):= \int_0^{\infty} \left(\frac{\sin \kappa t}{t}\right)^2 \left|\sum_{\gamma}\frac{x^{i\gamma}}{1+(t-\gamma)^2}\right|^2\, dt. \end{align} $$

Assuming the Riemann Hypothesis, we have, for $1/x \le \delta \le 1$ ,

(70) $$ \begin{align} \mathcal{J}(x,\delta) \ll x^2 G(x,\delta) + O(\delta x^2). \end{align} $$

We will prove Lemma 2 after completing the proof of Theorem 8.

Since $\kappa = \frac 12\log (1+\delta ) \asymp \delta $ for $0<\delta \le 1$ , we have

$$\begin{align*}0\le \left(\frac{\sin \kappa t}{t}\right)^2 \ll \min(\kappa^2 , 1/t^2)\ll \min( \delta^2, 1/t^2),\end{align*}$$

and hence

$$\begin{align*}G(x,\delta) \ll \int_0^{\infty} \min\left( \delta^2, \frac{1}{t^2}\right) \left|\sum_{\gamma}\frac{x^{i\gamma}}{1+(t-\gamma)^2}\right|^2\, dt. \end{align*}$$

For $U\ge 1/\delta $ , we have, since by (4) $\sum _{\gamma } 1/(1+(t-\gamma )^2)\ll \log (|t|+2)$ ,

$$\begin{align*}\int_U^{\infty} \frac{1}{t^2} \left|\sum_{\gamma}\frac{x^{i\gamma}}{1+(t-\gamma)^2}\right|^2\, dt \ll \int_U^{\infty} \frac{\log^2 t}{t^2} \, dt \ll \frac{\log^2U}{U}. \end{align*}$$

On taking $U=2\log ^2(2/\delta )/\delta $ , we have by (68)

$$\begin{align*} G(x,\delta) &\ll \int_0^U \min\left( \delta^2, \frac{1}{t^2}\right) \left|\sum_{\gamma}\frac{x^{i\gamma}}{1+(t-\gamma)^2}\right|^2\, dt +O(\delta)\\ & \ll \delta^2\int_0^{2/\delta}\left|\sum_{\gamma}\frac{x^{i\gamma}}{1+(t-\gamma)^2}\right|^2\, dt \\ & \qquad+ \sum_{ k\ll \log\log (4/\delta)}\frac{\delta^2}{2^{2k}}\int_{2^k/\delta}^{2^{k+1}/\delta}\left|\sum_{\gamma}\frac{x^{i\gamma}}{1+(t-\gamma)^2}\right|^2\, dt +O(\delta) \\ & \ll \sum_{ k\ll\log\log(4/\delta)}\frac{\delta^2}{2^{2k}}\left( F(x, 2^k/\delta)+\log^3(2^k/\delta)\right) +O(\delta)\\ & \ll \delta^2 \sum_{ k\ll \log\log(4/\delta) }\frac{1}{2^{2k}} F(x, 2^k/\delta)\ + O(\delta). \end{align*}$$

We now assume (61) holds, and therefore by Lemma 1, (64) also holds. Taking $ 1/x \le \delta \le 1$ , we see that for $1\le k \ll \log \log (4/\delta )$ we have $2\le 2^k/\delta \ll x\log ^Cx$ , for a constant C. Hence $F(x,2^k/\delta )$ is in the range where (64) applies, and therefore

$$\begin{align*}\begin{aligned} G(x,\delta) & \ll \delta^2 \sum_{ k\ll \log\log(4/\delta) }\frac{1}{2^{2k}} (2^k/\delta)\mathcal{L}(x) \ +O(\delta) \\ & \ll \delta \mathcal{L}(x), \end{aligned} \end{align*}$$

which by (70) proves (62) over a wider range of $\delta $ than required.

Proof of Theorem 8, (62) implies (63), and the remaining results

To complete the proof of Theorem 8, we need the following lemma of Saffari–Vaughan [SV, (6.21)] (see also [GV, pp. 126–127]), which we will prove later.

Lemma 3 (Saffari–Vaughan)

For any $1\le h \le x/4$ and any integrable function $f$ , we have

(71) $$ \begin{align} \int_{x/2}^x (f(t+h)-f(t))^2\, dt \le \frac{2x}{h} \int_0^{8h/x}\left(\int_0^x(f(y+\delta y)- f(y) )^2 \, dy\right) \, d\delta. \end{align} $$

Taking $f(t) = \psi (t) - t$ in Lemma 3, and assuming $1\le h \le x/8$ so that we may apply (65), we have

$$\begin{align*}\begin{aligned} J(x,h) - J(x/2, h) & \ll \frac{x}{h} \int_0^{8h/x} \mathcal{J}(x,\delta) \, d\delta \\ & \ll \frac{x^3\mathcal{L}(x)}{h} \int_0^{8h/x} \delta \, d\delta \\& \ll h x \mathcal{L}(x). \end{aligned} \end{align*}$$

Replacing x by $x/2, x/4, \ldots , x/2^{k-1}$ and adding, we obtain

$$\begin{align*}\begin{aligned} J(x,h) - J(x/2^k, h) &= \sum_{j\le k}\left(J(x/2^{j-1},h) - J(x/2^j, h)\right) \\& \ll h \sum_{j\le k} (x/2^{j-1}) \mathcal{L}(x/2^{j-1}) \\ & \ll h x \mathcal{L}(x), \end{aligned}\end{align*}$$

where we used $\mathcal {L}(x/2^{j-1})\le \mathcal {L}(x)$ , and note that here we need $h\le \frac {x}{2^{k+2}}$ . Choosing k so that $\log ^2\!x \le 2^{k} \le 2\log ^2\!x$ , we have by (17)

$$\begin{align*}J(x/2^{k},h) \le J(x/\log^2\!x,h) \ll hx. \end{align*}$$

Thus,

$$\begin{align*}J(x,h) \ll hx\mathcal{L}(x), \end{align*}$$

for $1\le h\le \frac {x}{8\log ^2\!x}\le \frac {x}{2^{k+2}}$ . This proves (63) over a larger range of h than required.
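At small scale, the bound just proved can be compared with a direct computation. The sketch below (pure Python; the values $x=2000$ , $h=50$ and the constant 5 are illustrative, with $\mathcal {L}(x)=\log ^2\!x$ taken as a concrete admissible choice) approximates $J(x,h)$ by a Riemann sum.

```python
import math

def mangoldt(n):
    # von Mangoldt function: log p if n = p^k for a prime p, else 0
    if n < 2:
        return 0.0
    for p in range(2, math.isqrt(n) + 1):
        if n % p == 0:
            while n % p == 0:
                n //= p
            return math.log(p) if n == 1 else 0.0
    return math.log(n)

x, h = 2000, 50
psi_cum = [0.0]
for n in range(1, x + h + 1):
    psi_cum.append(psi_cum[-1] + mangoldt(n))

def psi(t):
    # Chebyshev function psi(t) = sum of Lambda(n) over n <= t
    return psi_cum[min(int(t), len(psi_cum) - 1)] if t >= 0 else 0.0

# Riemann-sum approximation of J(x, h) = int_0^x (psi(t+h) - psi(t) - h)^2 dt
step, J, t = 0.01, 0.0, 0.0
while t < x:
    d = psi(t + h) - psi(t) - h
    J += d * d * step
    t += step

# consistent with J(x, h) << h x L(x) for the choice L(x) = log^2 x
assert 0 < J < 5 * h * x * math.log(x) ** 2
```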

Now, applying to (19) the estimates (16) and (66) (which is implied by (63)), we obtain

$$\begin{align*}\mathcal{W}(N,h) \ll \frac{N\mathcal{L}(N)}{h}, \qquad 1\le h\le N, \end{align*}$$

which by (18) gives the bound $\mathcal {E}(N)\ll N\mathcal {L}(N) \log N$ , and this with (21) gives

$$\begin{align*}\psi(N) = N +O(N^{1/2}\sqrt{\mathcal{L}(N)}\log N). \end{align*}$$

Proof of Lemma 2

Let $e^{2\kappa } = 1+\delta $ with $0<\delta \le 1$ , and define

(72) $$ \begin{align} a(s) := \frac{(1+\delta)^s-1}{s}. \end{align} $$

Thus,

$$\begin{align*}|a(it)|^2 = 4\left(\frac{\sin \kappa t}{t}\right)^2,\end{align*}$$

and

$$\begin{align*}\begin{aligned} G(x,\delta) &= \frac14 \int_0 ^{\infty} |a(it)|^2 \left|\sum_{\gamma}\frac{x^{i\gamma}}{1+(t-\gamma)^2}\right|^2\, dt \\ & = \frac18 \int_{-\infty} ^{\infty} |a(it)|^2 \left|\sum_{\gamma}\frac{x^{i\gamma}}{1+(t-\gamma)^2}\right|^2\, dt, \end{aligned} \end{align*}$$

on noting that the integrand is an even function, since for every $\gamma $ in the sum there is a corresponding $-\gamma $ . The next step is to bring $|a(it)|^2$ into the sum over zeros using [GM, Lem. 10], from which we immediately obtain, for $Z\ge 1/\delta $ ,

$$\begin{align*}G(x, \delta) = \frac18\int_{-\infty}^{\infty} \left|\sum_{|\gamma|\le Z}\frac{a(1/2+i\gamma)x^{i\gamma}}{1+(t-\gamma)^2}\right|^2\, dt +O\left(\delta^2\log^3(2/\delta)\right) +O\left(\frac{\log^3\!Z}{Z}\right).\end{align*}$$

We comment that the proof of Lemma 10 in [GM] is an elementary argument making use of (4). We have not made use of the Riemann Hypothesis yet, but henceforth we assume it and write $\rho = 1/2+i\gamma $ and $a(\rho ) = a(1/2+i\gamma )$ . In order to keep our error terms small, we now choose

(73) $$ \begin{align} Z = x\log^3\!x, \quad \text{and} \quad 1/x \le \delta \le 1. \end{align} $$

Thus,

(74) $$ \begin{align} G(x, \delta) + O(\delta) = \frac18\int_{-\infty}^{\infty} \left|\sum_{|\gamma|\le Z}\frac{a(1/2+i\gamma)x^{i\gamma}}{1+(t-\gamma)^2}\right|^2\, dt .\end{align} $$

Define the Fourier transform by

$$\begin{align*}\widehat{f}(u) := \int_{-\infty}^{\infty} f(t) e(-tu)\, dt. \end{align*}$$

Plancherel’s theorem says that if $f(t)$ is in $L^1 \cap L^2$ , then $\widehat {f}(u)$ is in $L^2$ , and we have

$$\begin{align*}\int_{-\infty}^{\infty} |f(t)|^2 \, dt = \int_{-\infty}^{\infty} |\widehat{f}(u)|^2 \, du.\end{align*}$$
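For intuition, the discrete analogue of this identity (Parseval’s theorem for the finite Fourier transform) can be verified directly. The sketch below (pure Python, with a naive $O(N^2)$ transform and arbitrary random test data) is an illustration only.

```python
import cmath, math, random

random.seed(1)
N = 64
x = [complex(random.random() - 0.5, random.random() - 0.5) for _ in range(N)]

# naive discrete Fourier transform: X_k = sum_n x_n exp(-2 pi i k n / N)
X = [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
     for k in range(N)]

time_energy = sum(abs(v) ** 2 for v in x)
freq_energy = sum(abs(v) ** 2 for v in X) / N  # factor 1/N from the discrete measure
assert abs(time_energy - freq_energy) < 1e-9   # discrete Plancherel identity
```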

An easy calculation gives the Fourier transform pair

$$\begin{align*}g(t) = \sum_{|\gamma|\le Z}\frac{a(\rho)x^{i\gamma}}{1+(t-\gamma)^2}, \qquad \widehat{g}(u) = \pi \sum_{|\gamma|\le Z}a(\rho)x^{i\gamma} e(-\gamma u)e^{-2\pi |u|} , \end{align*}$$

and therefore by Plancherel’s theorem we have, with $y= 2\pi u$ in the third line below,

(75) $$ \begin{align} \begin{aligned} G(x,\delta) +O(\delta) &= \frac{\pi^2}{8}\int_{-\infty}^{\infty} \bigg|\sum_{|\gamma|\le Z} a(\rho)x^{i\gamma}e(-\gamma u)\bigg|^2e^{-4\pi |u|}\, du \\& = \frac{\pi^2}{8} \int_{-\infty}^{\infty} \bigg|\sum_{|\gamma|\le Z} a(\rho)(xe^{-2\pi u})^{i\gamma}\bigg|^2e^{-4\pi |u|}\, du \\ & = \frac{\pi}{16} \int_{-\infty}^{\infty} \bigg|\sum_{|\gamma|\le Z} a(\rho)(xe^{-y})^{i\gamma}\bigg|^2e^{-2 |y|}\, dy \\ & \ge \frac{\pi}{16} \int_0^{\infty} \bigg|\sum_{|\gamma|\le Z} a(\rho)(xe^{-y})^{i\gamma}\bigg|^2e^{-2 y}\, dy. \end{aligned} \end{align} $$

On letting $t=xe^{-y}$ in the last integral, we obtain

$$\begin{align*}\int_0^{\infty} \bigg|\sum_{|\gamma|\le Z} a(\rho)(xe^{-y})^{i\gamma}\bigg|^2e^{-2 y}\, dy = \frac{1}{x^2} \int_0^{x} \bigg|\sum_{|\gamma|\le Z} a(\rho)t^{\rho}\bigg|^2\, dt ,\end{align*}$$

and we conclude that

(76) $$ \begin{align} \int_0^{x} \bigg|\sum_{|\gamma|\le Z} a(\rho)t^{\rho}\bigg|^2\, dt \ll x^2 G(x,\delta) + O(\delta x^2). \end{align} $$

We now complete the proof of Lemma 2 by proving that

(77) $$ \begin{align} \int_0^{x} \bigg|\sum_{|\gamma|\le Z} a(\rho)t^{\rho}\bigg|^2\, dt =\mathcal{J}(x,\delta)+O(\delta x^2). \end{align} $$

By the standard truncated explicit formula [MV2, Chap. 12, Th. 12.5] (see also [GM, (34)]), we have, for $2\le t\le x$ and $Z\ge x$ ,

(78) $$ \begin{align} - \sum_{|\gamma|\le Z} a(\rho) t^{\rho} = \psi((1+\delta)t) - \psi(t) - \delta t + E(t,Z), \end{align} $$

where

(79) $$ \begin{align} E(t,Z) \ll \frac{t\log^2(tZ)}{Z} + \log t \min\Bigg(1,\frac{t}{Z\lVert t\rVert}\Bigg) + \log t \min\Bigg(1,\frac{t}{Z\lVert (1+\delta)t\rVert}\Bigg) , \end{align} $$

and $\lVert u\rVert $ is the distance from u to the nearest integer. Using the trivial estimate

$$\begin{align*}\psi((1+\delta)t) - \psi(t)\ll \delta t \log t , \end{align*}$$

we have

$$\begin{align*}\left|\sum_{|\gamma|\le Z} a(\rho) t^{\rho}\right|^2 = ( \psi((1+\delta)t) - \psi(t) - \delta t)^2 + O( \delta t (\log t) |E(t,Z)| ) + O(|E(t,Z)|^2). \end{align*}$$

There is a small complication at this point since we want to integrate both sides of this equation over $0\le t\le x$ , but (78) requires $2\le t\le x$ . Therefore, integrating over $2\le t \le x$ , we obtain

(80) $$ \begin{align} \int_0^{x} \bigg|\sum_{|\gamma|\le Z} a(\rho)t^{\rho}\bigg|^2\, dt = \mathcal{J}(x,\delta) - \mathcal{J}(2,\delta) +\int_0^2\bigg|\sum_{|\gamma|\le Z} a(\rho)t^{\rho}\bigg|^2\, dt + O(E^*), \end{align} $$

where

(81) $$ \begin{align} E^*= \delta x \log x\int_2^x|E(t,Z)|\, dt + \int_2^x|E(t,Z)|^2\, dt. \end{align} $$

We note first that, by (67),

$$\begin{align*}\mathcal{J}(2,\delta) \ll \delta.\end{align*}$$

Next, for $|\text {Re}(s)|\ll 1$ , we have $|a(s)|\ll \min (1,1/|s|)$ , and by (4), we obtain

$$ \begin{align*}\sum_{|\gamma|\le Z} |a(\rho)|\ll \log^2\!Z.\end{align*} $$

Thus,

$$\begin{align*}\int_0^{2} \bigg|\sum_{|\gamma|\le Z} a(\rho)t^{\rho}\bigg|^2\, dt \ll \log^4\!Z \ll \log^4\!x .\end{align*}$$

Both of these errors are negligible compared to the error term $\delta x^2 \gg x$ . It remains to estimate $E^*$ . First, for $j=1$ or $2$ ,

$$\begin{align*}\begin{aligned} \int_2^x \min\Bigg(1, \frac{t}{Z\lVert t\rVert}\Bigg)^j \, dt & \ll \sum_{n\le 2x } \int_{n}^{n+1/2} \min\Bigg(1, \frac{n}{Z(t-n)}\Bigg)^j \, dt \\& \ll \sum_{n\le 2x } \left(\frac{n}{Z} + \Big(\frac{n}{Z}\Big)^j\int_{n+n/Z}^{n+1/2} \frac{1}{(t-n)^j} \, dt\right) \\& \ll \frac{x^2\log^{2-j}\!Z}{Z} \ll \frac{x}{\log^{j+1}\!x}. \end{aligned}\end{align*}$$

The same estimate holds for the term in (79) with $\lVert (1+\delta )t\rVert $ by a linear change of variable in the integral. Thus,

$$\begin{align*}\int_2^x|E(t,Z)|\, dt \ll \frac{x^2\log^2(xZ)}{Z} + \frac{x}{\log x} \ll \frac{x}{\log x} \end{align*}$$

and

$$\begin{align*}\int_2^x|E(t,Z)|^2\, dt \ll \frac{x^3\log^4(xZ)}{Z^2} + \frac{x}{\log x} \ll \frac{x}{\log x}. \end{align*}$$

We conclude from (81) that

$$\begin{align*}E^* \ll \delta x^2 + \frac{x}{\log x} \ll \delta x^2, \end{align*}$$

since, by (73), $x \ll \delta x^2$ . Substituting these estimates into (80) proves (77).

Proof of Lemma 3

This proof comes from [GV, pp. 126–127]. We take $1\le h \le x/4$ . On letting $t=y+u$ with $0\le u\le h$ , we have

$$\begin{align*}\begin{aligned} J :&=\int_{x/2}^x (f(t+h)-f(t))^2\, dt = \int_{x/2-u}^{x-u} (f(y+u+h)-f(y+u))^2\, dy \\ & = \frac1h\int_0^h \left(\int_{x/2-u}^{x-u} (f(y+u+h)-f(y+u))^2\, dy \right) du \\ & \le \frac1h\int_0^h \left(\int_{x/2-u}^{x-u} \left( |f(y+u+h)-f(y)| +|f(y+u)-f(y)| \right)^2\, dy \right) du \\ & \le \frac2h\int_0^h \left(\int_{x/2-u}^{x-u} (f(y+u+h)-f(y))^2 +(f(y+u)-f(y))^2 \, dy \right) du. \end{aligned} \end{align*}$$

Since the integration range of the inner integral always lies in the interval $[x/4,x]$ , we have

$$\begin{align*}J \le \frac2h\int_0^{2h} \left(\int_{x/4}^{x} (f(y+u)-f(y))^2 \, dy \right) du = \frac2h \iint\limits_{\mathcal{R}} (f(y+u)-f(y))^2 dA,\end{align*}$$

where $\mathcal {R}$ is the region defined by $x/4\le y \le x$ and $0\le u\le 2h$ . After the change of variable $u=\delta y$ , the region $\mathcal {R}$ is described by $x/4\le y \le x$ and $0\le \delta \le 2h/y$ , and changing the order of integration gives

$$\begin{align*}\begin{aligned} J &\le \frac2h \int_{x/4}^{x} \left(\int_0^{2h/y} (f(y+\delta y)-f(y))^2 y \, d\delta \right) \, dy \\ & \le \frac{2x}h \int_{x/4}^{x} \left(\int_0^{8h/x} (f(y+\delta y)-f(y))^2 \, d\delta \right) \, dy. \end{aligned} \end{align*}$$

Inverting the order of integration again, we conclude that

$$\begin{align*}J \le \frac{2x}{h} \int_0^{8h/x}\left(\int_0^x(f(y+\delta y)- f(y) )^2 \, dy\right) \, d\delta. \end{align*}$$
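The inequality (71) can also be checked numerically for a concrete function. The sketch below (pure Python) uses the illustrative choices $f=\sin $ , $x=100$ , $h=10$ , which satisfy $1\le h\le x/4$ , and crude midpoint Riemann sums for the integrals.

```python
import math

def riemann(g, a, b, m):
    # midpoint Riemann-sum approximation of the integral of g over [a, b]
    step = (b - a) / m
    return sum(g(a + (i + 0.5) * step) for i in range(m)) * step

f, x, h = math.sin, 100.0, 10.0

# left side of (71)
lhs = riemann(lambda t: (f(t + h) - f(t)) ** 2, x / 2, x, 4000)

# right side of (71)
inner = lambda d: riemann(lambda y: (f(y + d * y) - f(y)) ** 2, 0.0, x, 2000)
rhs = (2 * x / h) * riemann(inner, 0.0, 8 * h / x, 200)

assert lhs < rhs  # (71) holds with plenty of room for this choice of f
```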

4 Proofs of Theorem 6, Corollary 2, and Theorem 7

Proof of Theorem 6

From [I, Th. 30], it is well known that

$$\begin{align*}\psi(x) - x \ll x^{\Theta}\log^2\!x , \end{align*}$$

but it seems less well known that, in 1965, Grosswald [Gro] refined this result by proving that, for $1/2<\Theta <1$ ,

(82) $$ \begin{align} \psi(x) - x \ll x^{\Theta}, \end{align} $$

from which we immediately obtain

(83) $$ \begin{align} H(x) := \int_0^x ( \psi(t) - t)^2\, dt \ll x^{2\Theta+1 }. \end{align} $$
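A small computation is at least consistent with (82) and (83). The sketch below (pure Python; the cutoffs $10^5$ and $10^4$ , the exponent $\Theta =0.6$ , and the implied constants are all illustrative choices, not claims) checks $|\psi (n)-n| \le \tfrac 12 n^{1/2}\log ^2 n$ for $10\le n\le 10^5$ and the corresponding bound $H(10^4) \le (10^4)^{2.2}$ .

```python
import math

N = 10**5
lam = [0.0] * (N + 1)
is_prime = [True] * (N + 1)
for p in range(2, N + 1):
    if is_prime[p]:
        for q in range(2 * p, N + 1, p):
            is_prime[q] = False
        pk = p
        while pk <= N:          # Lambda(p^k) = log p
            lam[pk] = math.log(p)
            pk *= p

# check |psi(n) - n| <= (1/2) sqrt(n) log^2 n for 10 <= n <= 10^5
psi = 0.0
ok = True
for n in range(2, N + 1):
    psi += lam[n]
    if n >= 10 and abs(psi - n) > 0.5 * math.sqrt(n) * math.log(n) ** 2:
        ok = False
assert ok

# exact piecewise evaluation of H(x) for x = 10^4, compared with x^{2*0.6+1}
X = 10**4
H, psi = 0.0, 0.0
for n in range(0, X):
    psi += lam[n]
    a, b = n - psi, n + 1 - psi   # psi(t) is constant on [n, n+1)
    H += (b**3 - a**3) / 3
assert 0 < H < X ** 2.2
```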

From [BHM+, Lem. 8], we have from the case $q=1$ that, for $x\ge 2$ and $1\le h\le x$ ,

(84) $$ \begin{align} \int_x^{2x} ( \psi(t+h) -\psi(t) - h)^2\, dt \ll h x^{2\Theta}\log^4\!x.\end{align} $$

We first need to prove that the same bound holds for $J(x,h)$ . We have

$$\begin{align*}\begin{aligned} J(x,h) &= \int_0^{h} ( \psi(t+h) -\psi(t) - h)^2\, dt + \int_h^{x} ( \psi(t+h) -\psi(t) - h)^2\, dt \\ & := J_1(h) +J_2(x,h). \end{aligned}\end{align*}$$

For $J_1(h)$ , we use (83) to see that, for $1\le h\le x$ ,

$$\begin{align*}J_1(h) \ll \int_0^h (\psi(t+h) - (t+h))^2 \, dt + \int_0^h (\psi(t) -t)^2\, dt \ll H(2h) \ll h^{2\Theta +1} \ll h x^{2\Theta}. \end{align*}$$

For $J_2(x,h)$ , we apply (84) and find, for any interval $(x/2^{k+1}, x/2^{k}]$ contained in $[h/2,x]$ , that

$$\begin{align*}\int_{x/2^{k+1}}^{x/2^{k}}( \psi(t+h) -\psi(t) - h)^2\, dt \ll \frac{h x^{2\Theta} \log^4\!x}{2^{2k\Theta}},\end{align*}$$

and summing over $k\ge 0$ to cover the interval $[h,x]$ , we obtain $ J_2(x,h) \ll h x^{2\Theta }\log ^4\!x.$ Combining these estimates, we conclude, as desired,

(85) $$ \begin{align} J(x,h) \ll h x^{2\Theta}\log^4\!x , \end{align} $$

and using (83) and (85) in Theorem 2 gives $\mathcal {E}(N) \ll N^{2\Theta }\log ^5\!N$ .

Proof of Corollary 2

Using (13) of Theorem 1, we see that, since the sum over zeros is absolutely convergent, the contribution from zeros with $\beta <1/2$ is $o(x^{3/2})$ .

Proof of Theorem 7

If n is odd, then we have

$$\begin{align*}\psi_2(n) = \sum_{m+m'=n}\Lambda(m)\Lambda(m') = 2\log 2\sum_{\substack{n=2^j +m \\ j\ge 1}}\Lambda(m) = 2\log 2 \sum_{j\leq \log_2\!n}\Lambda(n-2^j)\end{align*}$$

and therefore

$$\begin{align*}\sum_{\substack{n\le N \\ n \ \mathrm{odd}}} \psi_2(n) = 2\log 2 \sum_{j\le \log_2\!N}\sum_{\substack{2^j< n\le N \\ n \ \mathrm{odd}}}\Lambda(n-2^j).\end{align*}$$

We may drop the condition that n is odd in the last sum with an error of $O(\log ^2\!N)$ , since the only nonzero terms in the sum with n even occur when n is a sum of two powers of 2, and there are only $O(\log ^2\!N)$ such terms.

$$\begin{align*}\sum_{\substack{n\le N \\ n \ \mathrm{odd}}} \psi_2(n) = 2\log 2 \sum_{j\le \log_2\!N}\left(\psi(N-2^j) +O(\log^2\!N)\right).\end{align*}$$

Applying the prime number theorem with a modest error term, we have

$$\begin{align*}\sum_{\substack{n\le N \\ n \ \mathrm{odd}}} \psi_2(n) = 2\log 2 \sum_{j\le \log_2\!N}\left((N-2^j) +O(N/\log N)\right) = 2N\log N +O(N).\end{align*}$$
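The identity used in the first display of this proof (valid for odd n) can be verified exactly in small cases. The sketch below (pure Python, with arbitrary small odd test values) computes $\psi _2(n)$ from its definition and compares it with $2\log 2 \sum _{j\le \log _2\!n}\Lambda (n-2^j)$ .

```python
import math

def mangoldt(n):
    # von Mangoldt function: log p if n = p^k for a prime p, else 0
    if n < 2:
        return 0.0
    for p in range(2, math.isqrt(n) + 1):
        if n % p == 0:
            while n % p == 0:
                n //= p
            return math.log(p) if n == 1 else 0.0
    return math.log(n)

def psi2(n):
    # definition: sum of Lambda(m) Lambda(m') over m + m' = n
    return sum(mangoldt(m) * mangoldt(n - m) for m in range(1, n))

for n in (9, 15, 21, 35, 101):  # odd test values
    rhs = 2 * math.log(2) * sum(mangoldt(n - 2**j)
                                for j in range(1, n.bit_length()) if 2**j < n)
    assert abs(psi2(n) - rhs) < 1e-9
```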

Added in proof

Languasco, Perelli, and Zaccagnini [LPZ1], [LPZ2], [LPZ3] have obtained many results connecting conjectures related to pair correlation of zeros of the Riemann zeta function to conjectures on primes. It has been brought to our attention that the main result in our follow-up paper [GS] to this paper on the error in the prime number theorem has already been obtained in [LPZ2]. Our method is based on a generalization $F_{\beta }(x,T)$ of $F(x,T)$ from (22), where $w(u)$ is replaced with $w_{\beta }(u) = \frac {4\beta ^2}{4\beta ^2+u^2}$ . In [LPZ2], they used $F_{\beta }(x,T)$ with a change of variable $\beta = 1/\tau $ . The results we obtained are analogous to some of their results. In the paper [LPZ3], it is shown how this method can be applied to generalizations of $\mathcal {J}(x,\delta )$ in (58). We direct interested readers to these papers of Languasco, Perelli, and Zaccagnini.

Footnotes

Suriajaya was supported by Japan Society for the Promotion of Science Grants-in-Aid for Scientific Research Grant Numbers 18K13400 and 22K13895, and also by the Ministry of Education, Culture, Sports, Science and Technology Initiative for Realizing Diversity in the Research Environment.

1 In [BS] and [GY], there is a third integral error term that is also needed, but the method of [LZ1] and here avoids this term.

References

Bhowmik, G., Halupczok, K., Matsumoto, K., and Suzuki, Y., Goldbach representations in arithmetic progressions and zeros of Dirichlet L-functions, Mathematika 65 (2019), 57–97.
Bhowmik, G. and Ruzsa, I. Z., Average Goldbach and the quasi-Riemann hypothesis, Anal. Math. 44 (2018), 51–56.
Bhowmik, G. and Schlage-Puchta, J.-C., Mean representation number of integers as the sum of primes, Nagoya Math. J. 200 (2010), 27–33.
Brüdern, J., Kaczorowski, J., and Perelli, A., Explicit formulae for averages of Goldbach representations, Trans. Amer. Math. Soc. 372 (2019), no. 10, 6981–6999.
Cramér, H., Some theorems concerning prime numbers, Ark. Mat. Astr. Fys. 15 (1921), 1–33.
Egami, S. and Matsumoto, K., “Convolutions of the von Mangoldt function and related Dirichlet series” in Proceedings of the 4th China–Japan Seminar Held at Shandong (eds., Kanemitsu, S. and Liu, J.-Y.), Ser. Number Theory Appl. 2, World Sci., Hackensack, NJ, 2007.
Fujii, A., An additive problem of prime numbers, Acta Arith. 58 (1991), 173–179.
Fujii, A., An additive problem of prime numbers. II, Proc. Japan Acad. Ser. A Math. Sci. 67 (1991), 248–252.
Fujii, A., An additive problem of prime numbers. III, Proc. Japan Acad. Ser. A Math. Sci. 67 (1991), 278–283.
Goldston, D. A., “Notes on pair correlation of zeros and prime numbers” in Recent Perspectives in Random Matrix Theory and Number Theory (eds., Mezzadri, F. and Snaith, N. C.), London Math. Soc. Lecture Note Ser. 322, Cambridge Univ. Press, Cambridge, 2005, 79–110.
Goldston, D. A. and Montgomery, H. L., “Pair correlation of zeros and primes in short intervals” in Analytic Number Theory and Diophantine Problems: Proceedings of a Conference at Oklahoma State University (1984) (eds., Adolphson, A. C., Conrey, J. B., Ghosh, A., and Yager, R. I.), Birkhäuser, Boston, MA, 1987, 183–203.
Goldston, D. A. and Suriajaya, A. I., The prime number theorem and pair correlation of zeros of the Riemann zeta-function, Res. Number Theory 8 (2022), article ID 71. doi:10.1007/s40993-022-00371-4.
Goldston, D. A. and Vaughan, R. C., “On the Montgomery–Hooley asymptotic formula” in Sieve Methods, Exponential Sums, and their Application in Number Theory (eds., Greaves, G. R. H., Harman, G., and Huxley, M. N.), Cambridge Univ. Press, Cambridge, 1996, 117–142.
Goldston, D. A. and Yang, L., “The average number of Goldbach representations” in Prime Numbers and Representation Theory (eds., Tian, Y. and Ye, Y.), Science Press, Beijing, 2017, 1–12.
Granville, A., Refinements of Goldbach’s conjecture, and the generalized Riemann hypothesis, Funct. Approx. Comment. Math. 37 (2007), 159–173.
Granville, A., Corrigendum to “Refinements of Goldbach’s conjecture, and the generalized Riemann hypothesis”, Funct. Approx. Comment. Math. 38 (2008), 235–237.
Grosswald, É., Sur l’ordre de grandeur des différences $\psi(x)-x$ et $\pi(x)-\mathrm{li}\,x$, C. R. Acad. Sci. Paris 260 (1965), 3813–3816.
Heath-Brown, D. R., Gaps between primes, and the pair correlation of zeros of the zeta-function, Acta Arith. 41 (1982), 85–99.
Ingham, A. E., The Distribution of Prime Numbers, Cambridge Math. Libr., Cambridge Univ. Press, Cambridge, 1990. Reprint of the 1932 original; with a foreword by R. C. Vaughan.
Koukoulopoulos, D., The Distribution of Prime Numbers, Grad. Stud. Math. 203, Amer. Math. Soc., Providence, RI, 2019.
Landau, E., Über die Nullstellen der Zetafunktion, Math. Ann. 71 (1912), 548–564.
Languasco, A., Perelli, A., and Zaccagnini, A., Explicit relations between pair correlation of zeros and primes in short intervals, J. Math. Anal. Appl. 394 (2012), 761–771.
Languasco, A., Perelli, A., and Zaccagnini, A., An extension of the pair-correlation conjecture and applications, Math. Res. Lett. 23 (2016), 201–220.
Languasco, A., Perelli, A., and Zaccagnini, A., An extended pair-correlation conjecture and primes in short intervals, Trans. Amer. Math. Soc. 369 (2017), no. 6, 4235–4250.
Languasco, A. and Zaccagnini, A., The number of Goldbach representations of an integer, Proc. Amer. Math. Soc. 140 (2012), 795–804.
Languasco, A. and Zaccagnini, A., A Cesàro average of Goldbach numbers, Forum Math. 27 (2015), 1945–1960.
Montgomery, H. L., Topics in Multiplicative Number Theory, Lecture Notes in Math. 227, Springer, Berlin–New York, 1971.
Montgomery, H. L., “The pair correlation of zeros of the zeta function” in Analytic Number Theory, Proc. Sympos. Pure Math. XXIV, St. Louis Univ., St. Louis, MO, 1972, 181–193.
Montgomery, H. L. and Vaughan, R. C., Error terms in additive prime number theory, Quart. J. Math. Oxford (2) 24 (1973), 207–216.
Montgomery, H. L. and Vaughan, R. C., Multiplicative Number Theory, Cambridge Stud. Adv. Math. 97, Cambridge Univ. Press, Cambridge, 2007.
Saffari, B. and Vaughan, R. C., On the fractional parts of $x/n$ and related sequences II, Ann. Inst. Fourier (Grenoble) 27 (1977), 1–30.
Selberg, A., On the normal density of primes in small intervals, and the difference between consecutive primes, Arch. Math. Naturvid. 47 (1943), 87–105.
Titchmarsh, E. C., The Theory of the Riemann Zeta-Function, 2nd ed., Clarendon, Oxford, 1986. Revised by D. R. Heath-Brown.
von Koch, H., Sur la distribution des nombres premiers, Acta Math. 24 (1901), 159–182.