We report on some recent work centered on attempts to understand when one set is more random than another. We look at various methods of calibration by initial segment complexity, such as those introduced by Solovay [125], Downey, Hirschfeldt, and Nies [39], Downey, Hirschfeldt, and LaForte [36], and Downey [31]; as well as other methods, such as the lowness notions of Kučera and Terwijn [71], Terwijn and Zambella [133], Nies [100, 101], and Downey, Griffiths, and Reid [34]; higher-level randomness notions going back to the work of Kurtz [73], Kautz [61], and Solovay [125]; and other calibrations of randomness based on definitions along the lines of Schnorr [117].
These notions have complex interrelationships, as well as connections to classical notions from computability theory such as relative computability and enumerability. Computability figures in obvious ways in definitions of effective randomness, but there are also applications of notions related to randomness within computability theory itself. For instance, an exciting by-product of the program we describe is a more or less natural, requirement-free solution to Post's Problem, much along the lines of the Dekker deficiency set.