Numerous learning tasks can be described as the process of extrapolating patterns from observed data. One of the driving intuitions behind the theory of algorithmic randomness is that randomness amounts to the absence of any effectively detectable patterns: it is thus natural to regard randomness as antithetical to inductive learning. Osherson and Weinstein [11] draw upon the identification of randomness with unlearnability to introduce a learning-theoretic framework (in the spirit of formal learning theory) for modelling algorithmic randomness. They define two success criteria—specifying under what conditions a pattern may be said to have been detected by a computable learning function—and prove that the collections of data sequences on which these criteria cannot be satisfied correspond to the set of weak 1-randoms and the set of weak 2-randoms, respectively. This learning-theoretic approach affords an intuitive perspective on algorithmic randomness, and it invites the question of whether restricting attention to learning-theoretic success criteria comes at an expressivity cost. In other words, is the framework expressive enough to capture most core algorithmic randomness notions and, in particular, Martin-Löf randomness—arguably, the most prominent algorithmic randomness notion in the literature? In this article, we answer the latter question in the affirmative by providing a learning-theoretic characterisation of Martin-Löf randomness. We then show that Schnorr randomness, another central algorithmic randomness notion, also admits a learning-theoretic characterisation in this setting.
For a rumour spreading protocol, the spread time is defined as the first time everyone learns the rumour. We compare the synchronous push&pull rumour spreading protocol with its asynchronous variant, and show that for any n-vertex graph and any starting vertex, the ratio between their expected spread times is bounded by $O(n^{1/3}\log^{2/3} n)$. This improves the $O(\sqrt{n})$ upper bound of Giakkoupis, Nazari and Woelfel (2016). Our bound is tight up to a factor of $O(\log n)$, as illustrated by the string of diamonds graph. We also show that if, for a pair $\alpha, \beta$ of real numbers, there exist infinitely many graphs for which the two spread times are $n^{\alpha}$ and $n^{\beta}$ in expectation, then $0 \le \alpha \le 1$ and $\alpha \le \beta \le \tfrac{1}{3} + \tfrac{2}{3}\alpha$; and we show each such pair $\alpha, \beta$ is achievable.
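For concreteness, here is a minimal Python sketch of one run of the synchronous push&pull protocol; the round structure (every informed vertex pushes to a uniformly random neighbour, every uninformed vertex pulls from one) is the standard model and is stated here as an assumption, since the abstract does not spell it out. The graph is an adjacency list and is assumed connected; the asynchronous variant is not modelled.

    import random

    def sync_push_pull(adj, start, rng=random):
        """adj: dict mapping each vertex to a non-empty list of neighbours."""
        informed = {start}
        rounds = 0
        while len(informed) < len(adj):
            rounds += 1
            newly = set()
            for v in adj:
                u = rng.choice(adj[v])       # v contacts a uniformly random neighbour
                if v in informed:            # push: v tells u the rumour
                    newly.add(u)
                elif u in informed:          # pull: v learns the rumour from u
                    newly.add(v)
            informed |= newly
        return rounds                        # spread time of this run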
This paper investigates the computational complexity of deciding if a given finite idempotent algebra has a ternary term operation $m$ that satisfies the minority equations $m(y,x,x)\approx m(x,y,x)\approx m(x,x,y)\approx y$. We show that a common polynomial-time approach to testing for this type of condition will not work in this case and that this decision problem lies in the class NP.
Let r ⩾ 2 be a fixed constant and let $ {\cal H} $ be an r-uniform, D-regular hypergraph on N vertices. Assume further that D → ∞ as N → ∞ and that the degrees of pairs of vertices in $ {\cal H} $ are at most L, where $L = D/(\log N)^{\omega(1)}$. We consider the random greedy algorithm for forming a matching in $ {\cal H} $: we choose a matching at random by iteratively choosing edges uniformly at random to be in the matching and deleting all edges that share at least one vertex with a chosen edge before moving on to the next choice. This process terminates when no edges remain in the hypergraph. We show that with high probability the proportion of vertices of $ {\cal H} $ that are not saturated by the final matching is at most $(L/D)^{1/(2(r-1))+o(1)}$. This point is a natural barrier in the analysis of the random greedy hypergraph matching process.
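The random greedy process described above is straightforward to simulate. In the Python sketch below, a hypergraph is given as a list of edges (each a frozenset of vertices); the function names and input format are ours, not the paper's.

    import random

    def random_greedy_matching(edges):
        """Iteratively pick a uniformly random remaining edge, add it to the
        matching, and delete every edge sharing a vertex with it."""
        edges = list(edges)
        matching = []
        while edges:
            e = random.choice(edges)
            matching.append(e)
            edges = [f for f in edges if not (f & e)]
        return matching

    def unsaturated(vertices, matching):
        """Vertices not covered by the final matching."""
        covered = set().union(*matching) if matching else set()
        return set(vertices) - covered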
Given complex numbers $w_1,\ldots,w_n$, we define the weight w(X) of a set X of 0–1 vectors as the sum of $w_1^{x_1} \cdots w_n^{x_n}$ over all vectors $(x_1,\ldots,x_n)$ in X. We present an algorithm which, for a set X defined by a system of homogeneous linear equations with at most r variables per equation and at most c equations per variable, computes w(X) within relative error $\epsilon > 0$ in $(rc)^{O(\ln n - \ln \epsilon)}$ time provided $|w_j| \leq \beta (r \sqrt{c})^{-1}$ for an absolute constant β > 0 and all j = 1,…,n. A similar algorithm is constructed for computing the weight of a linear code over ${\mathbb F}_p$. Applications include counting weighted perfect matchings in hypergraphs, counting weighted graph homomorphisms, computing weight enumerators of linear codes with sparse code generating matrices, and computing the partition functions of the ferromagnetic Potts model at low temperatures and of the hard-core model at high fugacity on biregular bipartite graphs.
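As an illustration of the definition only (not of the algorithm), the brute-force sketch below computes w(X) when X is the set of 0–1 vectors satisfying a system of homogeneous linear equations A x = 0; reading the equations over the rationals is our assumption, and the enumeration is of course exponential in n.

    from itertools import product

    def weight(A, w):
        """A: list of equation rows (length-n coefficient lists); w: list of the
        complex weights w_1, ..., w_n."""
        n = len(w)
        total = 0
        for x in product((0, 1), repeat=n):
            if all(sum(row[j] * x[j] for j in range(n)) == 0 for row in A):
                term = 1
                for wj, xj in zip(w, x):
                    term *= wj ** xj         # w_1^{x_1} ... w_n^{x_n}
                total += term
        return total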
An abelian processor is an automaton whose output is independent of the order of its inputs. Bond and Levine have proved that a network of abelian processors performs the same computation regardless of processing order (subject only to a halting condition). We prove that any finite abelian processor can be emulated by a network of certain very simple abelian processors, which we call gates. The most fundamental gate is a toppler, which absorbs input particles until their number exceeds some given threshold, at which point it topples, emitting one particle and returning to its initial state. With the exception of an adder gate, which simply combines two streams of particles, each of our gates has only one input wire, which sends letters (‘particles’) from a unary alphabet. Our results can be reformulated in terms of the functions computed by processors, and one consequence is that any increasing function from $\mathbb{N}^{k}$ to $\mathbb{N}^{\ell}$ that is the sum of a linear function and a periodic function can be expressed in terms of (possibly nested) sums of floors of quotients by integers.
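A toppler as described above admits a very short sketch. The interface below (a feed() method that delivers one particle and returns the number of particles emitted) is our own illustration, not the paper's formalism; the adder gate is included only as the trivial merge of two streams.

    class Toppler:
        """Unary-input gate: absorbs particles until their number exceeds the
        threshold, then topples, emitting one particle and resetting."""
        def __init__(self, threshold):
            self.threshold = threshold
            self.count = 0                   # particles absorbed since the last topple

        def feed(self):
            self.count += 1
            if self.count > self.threshold:  # absorbed particles exceed the threshold
                self.count = 0               # return to the initial state
                return 1                     # emit one particle
            return 0

    class Adder:
        """Two-input gate that simply combines its input streams: every incoming
        particle, on either wire, is emitted immediately."""
        def feed(self, wire):
            return 1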
This paper provides short proofs of two fundamental theorems of finite semigroup theory whose previous proofs were significantly longer, namely the two-sided Krohn-Rhodes decomposition theorem and Henckell’s aperiodic pointlike theorem. We use a new algebraic technique that we call the merge decomposition. A prototypical application of this technique decomposes a semigroup $T$ into a two-sided semidirect product whose components are built from two subsemigroups $T_{1}$, $T_{2}$, which together generate $T$, and the subsemigroup generated by their setwise product $T_{1}T_{2}$. In this sense we decompose $T$ by merging the subsemigroups $T_{1}$ and $T_{2}$. More generally, our technique merges semigroup homomorphisms from free semigroups.
We improve some previously known deterministic algorithms for finding integer solutions $x,y$ to exponential equations of the form $af^{x}+bg^{y}=c$ over finite fields.
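As a point of reference only (and not the improved algorithms of the paper), a naive baseline over a prime field $\mathbb{F}_q$ tabulates the values $af^{x}$ and looks up $c-bg^{y}$; it runs in time roughly q and is a sketch under the assumption that q is prime.

    def solve_exponential(a, f, b, g, c, q):
        """Return (x, y) with a*f^x + b*g^y = c (mod q), or None."""
        table = {}
        for x in range(q - 1):
            table.setdefault(a * pow(f, x, q) % q, x)   # first x attaining this value
        for y in range(q - 1):
            x = table.get((c - b * pow(g, y, q)) % q)
            if x is not None:
                return x, y
        return None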
We give the first polynomial upper bound on the mixing time of the edge-flip Markov chain for unbiased dyadic tilings, resolving an open problem originally posed by Janson, Randall and Spencer in 2002 [14]. A dyadic tiling of size n is a tiling of the unit square by n non-overlapping dyadic rectangles, each of area 1/n, where a dyadic rectangle is any rectangle that can be written in the form $[a2^{-s}, (a + 1)2^{-s}] \times [b2^{-t}, (b + 1)2^{-t}]$ for $a, b, s, t \in \mathbb{Z}_{\geq 0}$. The edge-flip Markov chain selects a random edge of the tiling and replaces it with its perpendicular bisector if doing so yields a valid dyadic tiling. Specifically, we show that the relaxation time of the edge-flip Markov chain for dyadic tilings is at most $O(n^{4.09})$, which implies that the mixing time is at most $O(n^{5.09})$. We complement this by showing that the relaxation time is at least $\Omega(n^{1.38})$, improving upon the previously best lower bound of $\Omega(n \log n)$ coming from the diameter of the chain.
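The flip move has a simple coordinate description: two tiles of the same dimensions whose union is again a dyadic rectangle have their shared edge replaced by its perpendicular bisector. The sketch below stores each tile as (a, s, b, t) and, for simplicity, proposes a uniformly random pair of tiles rather than a uniformly random edge, so it illustrates the move set rather than the exact transition probabilities of the chain.

    import random

    def flip_pair(r1, r2):
        """Re-split the union of two sibling tiles the other way, or return None
        if they are not a flippable pair."""
        a1, s1, b1, t1 = r1
        a2, s2, b2, t2 = r2
        if (s1, t1) != (s2, t2):
            return None
        s, t = s1, t1
        # horizontally adjacent siblings: shared vertical edge -> horizontal bisector
        if b1 == b2 and abs(a1 - a2) == 1 and min(a1, a2) % 2 == 0:
            a, b = min(a1, a2) // 2, b1
            return (a, s - 1, 2 * b, t + 1), (a, s - 1, 2 * b + 1, t + 1)
        # vertically adjacent siblings: shared horizontal edge -> vertical bisector
        if a1 == a2 and abs(b1 - b2) == 1 and min(b1, b2) % 2 == 0:
            a, b = a1, min(b1, b2) // 2
            return (2 * a, s + 1, b, t - 1), (2 * a + 1, s + 1, b, t - 1)
        return None

    def step(tiling):
        """One lazy step of the simplified flip dynamics on a list of tiles."""
        i, j = random.sample(range(len(tiling)), 2)
        new = flip_pair(tiling[i], tiling[j])
        if new is None:
            return tiling
        return [r for k, r in enumerate(tiling) if k not in (i, j)] + list(new)

    # start from the tiling of the unit square by four vertical strips of width 1/4
    tiling = [(a, 2, 0, 0) for a in range(4)]
    for _ in range(100):
        tiling = step(tiling)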
We present an average-case analysis of a variant of dual-pivot quicksort. We show that the partitioning strategy used by the algorithm is optimal, that is, it minimizes the expected number of key comparisons. For the analysis, we calculate the expected number of comparisons exactly as well as asymptotically; in particular, we provide exact expressions for the linear, logarithmic and constant terms.
An essential step is the analysis of zeros of lattice paths in a certain probability model. Along the way a combinatorial identity is proved.
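For readers unfamiliar with dual-pivot quicksort, the sketch below shows one common three-way partitioning scheme; it is a generic variant for illustration and not necessarily the comparison-optimal strategy analysed in the paper.

    def dual_pivot_quicksort(a, lo=0, hi=None):
        if hi is None:
            hi = len(a) - 1
        if lo >= hi:
            return
        if a[lo] > a[hi]:
            a[lo], a[hi] = a[hi], a[lo]
        p, q = a[lo], a[hi]                  # the two pivots, p <= q
        lt, gt, i = lo + 1, hi - 1, lo + 1
        while i <= gt:
            if a[i] < p:                     # goes to the left part
                a[i], a[lt] = a[lt], a[i]
                lt += 1
                i += 1
            elif a[i] > q:                   # goes to the right part
                a[i], a[gt] = a[gt], a[i]
                gt -= 1                      # the swapped-in element is re-examined
            else:                            # stays in the middle part
                i += 1
        lt -= 1
        gt += 1
        a[lo], a[lt] = a[lt], a[lo]          # place pivot p
        a[hi], a[gt] = a[gt], a[hi]          # place pivot q
        dual_pivot_quicksort(a, lo, lt - 1)
        dual_pivot_quicksort(a, lt + 1, gt - 1)
        dual_pivot_quicksort(a, gt + 1, hi)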
The paper introduces a graph theory variation of the general position problem: given a graph $G$, determine a largest set $S$ of vertices of $G$ such that no three vertices of $S$ lie on a common geodesic. Such a set is a max-gp-set of $G$ and its size is the gp-number $\text{gp}(G)$ of $G$. Upper bounds on $\text{gp}(G)$ in terms of different isometric covers are given and used to determine the gp-number of several classes of graphs. Connections between general position sets and packings are investigated and used to give lower bounds on the gp-number. It is also proved that the general position problem is NP-complete.
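The definition translates directly into an exponential-time brute-force check: a set S is in general position exactly when $d(u,w)+d(w,v)\neq d(u,v)$ for all distinct $u,w,v\in S$. The sketch below uses networkx for shortest-path distances, assumes a connected graph, and is meant only to illustrate the definition.

    from itertools import combinations
    import networkx as nx

    def is_general_position(S, d):
        """No vertex of S lies on a geodesic between two others."""
        return all(d[u][w] + d[w][v] != d[u][v]
                   for u, v in combinations(S, 2)
                   for w in S if w not in (u, v))

    def gp_number(G):
        d = dict(nx.all_pairs_shortest_path_length(G))
        for k in range(G.number_of_nodes(), 0, -1):
            if any(is_general_position(S, d) for S in combinations(G.nodes, k)):
                return k
        return 0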
In 1960 Rényi, in his Michigan State University lectures, asked for the number of random queries necessary to recover a hidden bijective labelling of n distinct objects. In each query one selects a random subset of labels and asks which objects have these labels. We consider here an asymmetric version of the problem in which in every query an object is chosen with probability p > 1/2 and we ignore ‘inconclusive’ queries. We study the number of queries needed to recover the labelling in its entirety ($H_n$), before at least one element is recovered ($F_n$), and to recover a randomly chosen element ($D_n$). This problem exhibits several remarkable behaviours: $D_n$ converges in probability but not almost surely; $H_n$ and $F_n$ exhibit phase transitions with respect to p in the second term. We prove that for p > 1/2 with high probability we need
$$H_n=\log_{1/p} n +{\tfrac{1}{2}} \log_{p/(1-p)}\log n +o(\log \log n)$$
queries to recover the entire bijection. This should be compared to its symmetric (p = 1/2) counterpart established by Pittel and Rubin, who proved that in this case one requires
$$ H_n=\log_{2} n +\sqrt{2 \log_{2} n} +o(\sqrt{\log n})$$
queries. As a bonus, our analysis implies novel results for random PATRICIA tries, as the problem is probabilistically equivalent to that of the height, fillup level, and typical depth of a PATRICIA trie built from n independent binary sequences generated by a biased(p) memoryless source.
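Under one reading of the asymmetric model (each label enters a query independently with probability p, and the labelling is recovered once every object's answer string is distinct), $H_n$ can be simulated as follows; the treatment of ‘inconclusive’ queries is omitted, so this is an illustration of the model rather than a faithful reimplementation.

    import random

    def simulate_H_n(n, p, rng=random):
        """Number of random queries until all n answer strings are distinct."""
        signatures = [""] * n                 # answer string of each object so far
        t = 0
        while len(set(signatures)) < n:
            t += 1
            query = [rng.random() < p for _ in range(n)]   # is label i in the query?
            signatures = [sig + ("1" if query[i] else "0")
                          for i, sig in enumerate(signatures)]
        return t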
Fix a finite semigroup $S$ and let $a_{1},\ldots ,a_{k},b$ be tuples in a direct power $S^{n}$. The subpower membership problem (SMP) for $S$ asks whether $b$ can be generated by $a_{1},\ldots ,a_{k}$. For combinatorial Rees matrix semigroups we establish a dichotomy result: if the corresponding matrix is of a certain form, then the SMP is in P; otherwise it is NP-complete. For combinatorial Rees matrix semigroups with adjoined identity, we obtain a trichotomy: the SMP is either in P, NP-complete, or PSPACE-complete. This result yields various semigroups with PSPACE-complete SMP including the six-element Brandt monoid, the full transformation semigroup on three or more letters, and semigroups of all $n$ by $n$ matrices over a field for $n\geq 2$.
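The SMP itself has an obvious exponential-time algorithm: close the given tuples under the coordinatewise product and test whether b appears. The sketch below takes the multiplication table of S as input; it illustrates the problem statement only, not the polynomial-time cases of the dichotomy.

    def smp(generators, b, mul):
        """generators: list of tuples over {0, ..., |S|-1}; b: target tuple;
        mul: multiplication table with mul[x][y] the product xy in S."""
        closed = {tuple(g) for g in generators}
        frontier = set(closed)
        while frontier:
            new = set()
            for x in frontier:
                for y in closed:
                    for prod in (tuple(mul[u][v] for u, v in zip(x, y)),
                                 tuple(mul[u][v] for u, v in zip(y, x))):
                        if prod not in closed:
                            new.add(prod)
            closed |= new
            frontier = new
        return tuple(b) in closed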
We give complexity analysis for the class of short generating functions. Assuming #P$\not \subseteq$FP/poly, we show that this class is not closed under taking many intersections, unions or projections of generating functions, in the sense that these operations can increase the bit length of coefficients of generating functions by a super-polynomial factor. We also prove that truncated theta functions are hard for this class.
Finding a hidden partition in a random environment is a general and important problem which contains as subproblems many well-studied questions, such as finding a hidden clique, finding a hidden colouring, finding a hidden bipartition, etc.
In this paper we provide a simple SVD algorithm for this purpose, addressing a question of McSherry. This algorithm is easy to implement and works for sparse graphs under optimal density assumptions. We also consider an approximation algorithm which, on the one hand, works under very mild assumptions, but on the other hand can sometimes be upgraded to give the exact solution.
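To fix ideas, a generic SVD-based recovery heuristic embeds the vertices via the top-k singular directions of the adjacency matrix and clusters the embedded rows. The sketch below (using numpy and scikit-learn, with the thresholds and clean-up steps of the actual algorithm omitted) is our illustration of this general approach, not the algorithm of the paper.

    import numpy as np
    from sklearn.cluster import KMeans

    def spectral_partition(A, k):
        """A: symmetric 0/1 adjacency matrix as a numpy array; k: number of parts."""
        U, s, _ = np.linalg.svd(A)
        P = U[:, :k] * s[:k]                 # rows embedded via the top-k singular directions
        return KMeans(n_clusters=k, n_init=10).fit_predict(P)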
We provide a quantum algorithm for simulating the dynamics of sparse Hamiltonians with complexity sublogarithmic in the inverse error, an exponential improvement over previous methods. Specifically, we show that a $d$-sparse Hamiltonian $H$ acting on $n$ qubits can be simulated for time $t$ with precision $\epsilon$ using $O(\tau (\log (\tau /\epsilon )/\log \log (\tau /\epsilon )))$ queries and $O(\tau (\log ^{2}(\tau /\epsilon )/\log \log (\tau /\epsilon ))n)$ additional 2-qubit gates, where $\tau =d^{2}\Vert H\Vert _{\max }t$. Unlike previous approaches based on product formulas, the query complexity is independent of the number of qubits acted on, and for time-varying Hamiltonians, the gate complexity is logarithmic in the norm of the derivative of the Hamiltonian. Our algorithm is based on a significantly improved simulation of the continuous- and fractional-query models using discrete quantum queries, showing that the former models are not much more powerful than the discrete model even for very small error. We also simplify the analysis of this conversion, avoiding the need for a complex fault-correction procedure. Our simplification relies on a new form of ‘oblivious amplitude amplification’ that can be applied even though the reflection about the input state is unavailable. Finally, we prove new lower bounds showing that our algorithms are optimal as a function of the error.
We consider a sequence of devices, called ‘bits’, indexed by the positive integers, each of which can be in one of two states: 0 (idle) and 1 (active). The system starts in the ‘ground state’, in which all bits are in state 0. In our first model, the binary flipping (BF) model, the system evolves as follows: at each time step one bit is chosen according to a given distribution P on the positive integers, independently of everything else, and its state is flipped. In our second model, the damaged bits (DB) model, a ‘damaged’ state is added: a selected idle bit becomes active, but a selected active bit becomes damaged and stays so forever. In both models we analyse the recurrence of the ground state, in which no bits are active. We present sufficient conditions for both the BF and DB models to show recurrent or transient behaviour, depending on the properties of the distribution P. We provide a bound for fractional moments of the return time to the ground state for the BF model, and prove a central limit theorem for the number of active bits for both models.
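Both dynamics are easy to simulate. In the sketch below, sample_P is an assumed callback drawing a bit index from the distribution P, and the function counts returns to the ground state over a fixed horizon.

    import random

    def run(sample_P, steps, damaged_model=False):
        """States: 0 = idle (default), 1 = active, 2 = damaged (DB model only)."""
        state = {}                           # indices not present are idle
        active = 0
        returns_to_ground = 0
        for _ in range(steps):
            i = sample_P()
            s = state.get(i, 0)
            if s == 0:                       # idle bit becomes active
                state[i] = 1
                active += 1
            elif s == 1:                     # active bit: flipped (BF) or damaged forever (DB)
                state[i] = 2 if damaged_model else 0
                active -= 1
            if active == 0:
                returns_to_ground += 1
        return returns_to_ground

    # example call with a geometric-like P on {1, 2, ...}:
    # run(lambda: 1 + int(random.expovariate(1.0)), 10_000, damaged_model=True)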
NTRU is a public-key cryptosystem introduced at ANTS-III. The two most used techniques in attacking the NTRU private key are meet-in-the-middle attacks and lattice-basis reduction attacks. Howgrave-Graham combined both techniques in 2007 and pointed out that the largest obstacle to attacks is the memory capacity that is required for the meet-in-the-middle phase. In the present paper an algorithm is presented that applies low-memory techniques to find ‘golden’ collisions to Odlyzko’s meet-in-the-middle attack against the NTRU private key. Several aspects of NTRU secret keys and the algorithm are analysed. The running time of the algorithm with a maximum storage capacity of $w$ is estimated and experimentally verified. Experiments indicate that decreasing the storage capacity $w$ by a factor $1<c<\sqrt{w}$ increases the running time by a factor $\sqrt{c}$.
Let $\mathbf{f}$ and $\mathbf{g}$ be polynomials of a bounded Euclidean norm in the ring $\mathbb{Z}[X]/\langle X^{n}+1\rangle$. Given the polynomial $[\mathbf{f}/\mathbf{g}]_{q}\in \mathbb{Z}_{q}[X]/\langle X^{n}+1\rangle$, the NTRU problem is to find $\mathbf{a},\mathbf{b}\in \mathbb{Z}[X]/\langle X^{n}+1\rangle$ with a small Euclidean norm such that $[\mathbf{a}/\mathbf{b}]_{q}=[\mathbf{f}/\mathbf{g}]_{q}$. We propose an algorithm to solve the NTRU problem, which runs in $2^{O(\log ^{2}\lambda )}$ time when $\Vert \mathbf{g}\Vert ,\Vert \mathbf{f}\Vert$, and $\Vert \mathbf{g}^{-1}\Vert$ are within some range. The main technique of our algorithm is the reduction of a problem on a field to one on a subfield. The GGH scheme, the first candidate for an (approximate) multilinear map, was recently found to be insecure by the Hu–Jia attack using low-level encodings of zero, but no polynomial-time attack was known without them. In the GGH scheme without low-level encodings of zero, our algorithm can be applied directly to attack this scheme if we have some top-level encodings of zero and a known pair of plaintext and ciphertext: we can then construct a level-$0$ encoding of zero and use it to attack the security of this scheme in time quasi-polynomial in its security parameter, using the parameters suggested by Garg, Gentry and Halevi [‘Candidate multilinear maps from ideal lattices’, Advances in cryptology — EUROCRYPT 2013 (Springer, 2013) 1–17].