Numerous learning tasks can be described as the process of extrapolating patterns from observed data. One of the driving intuitions behind the theory of algorithmic randomness is that randomness amounts to the absence of any effectively detectable patterns: it is thus natural to regard randomness as antithetical to inductive learning. Osherson and Weinstein [11] draw upon the identification of randomness with unlearnability to introduce a learning-theoretic framework (in the spirit of formal learning theory) for modelling algorithmic randomness. They define two success criteria, each specifying under what conditions a pattern may be said to have been detected by a computable learning function, and prove that the collections of data sequences on which these criteria cannot be satisfied correspond to the set of weak 1-randoms and the set of weak 2-randoms, respectively. This learning-theoretic approach affords an intuitive perspective on algorithmic randomness, and it invites the question of whether restricting attention to learning-theoretic success criteria comes at an expressivity cost. In other words, is the framework expressive enough to capture the core algorithmic randomness notions and, in particular, Martin-Löf randomness, arguably the most prominent such notion in the literature? In this article, we answer this question in the affirmative by providing a learning-theoretic characterisation of Martin-Löf randomness. We then show that Schnorr randomness, another central algorithmic randomness notion, also admits a learning-theoretic characterisation in this setting.
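For readers who want the test-based definitions behind the randomness notions named above, the following standard formulations from the algorithmic randomness literature may help fix ideas; they are background material, not the learning-theoretic framework of the article itself. A Martin-Löf test is a uniformly c.e. sequence $(U_n)_{n \in \omega}$ of effectively open subsets of Cantor space $2^{\omega}$ such that $\mu(U_n) \leq 2^{-n}$ for all $n$, where $\mu$ denotes the uniform (Lebesgue) measure. A sequence $X \in 2^{\omega}$ passes the test if
\[
X \notin \bigcap_{n \in \omega} U_n,
\]
and $X$ is Martin-Löf random if it passes every such test. A Schnorr test is a Martin-Löf test whose levels have uniformly computable measure (equivalently, one may require $\mu(U_n) = 2^{-n}$ exactly), and $X$ is Schnorr random if it passes every Schnorr test. Weak 2-randomness is obtained by dropping the measure bound and requiring only that $\mu\bigl(\bigcap_{n} U_n\bigr) = 0$, while weak 1-randomness (Kurtz randomness) requires that $X$ belong to every c.e. open subset of $2^{\omega}$ of measure 1.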