Let $A_1, A_2, \ldots, A_n$ be any $n$ objects, such as variables, categories, people, social groups, ideas, physical objects, or any other entities. The empirical data to be analyzed are coefficients of similarity or distance within pairs $(A_i, A_j)$, such as correlation coefficients, conditional probabilities or likelihoods, psychological choice or confusion data, etc. It is desired to represent these data parsimoniously in a coordinate space, by calculating $m$ coordinates $\{x_{ia}\}$ for each $A_i$ for a semi-metric $d$ of preassigned form $d_{ij} = d(|x_{i1} - x_{j1}|, |x_{i2} - x_{j2}|, \ldots, |x_{im} - x_{jm}|)$. The dimensionality $m$ is sought to be as small as possible, yet still satisfy the monotonicity condition that $d_{ij} < d_{kl}$ whenever the observed data indicate that $A_i$ is “closer” to $A_j$ than $A_k$ is to $A_l$. Minkowski and Euclidean spaces are special metric examples of $d$. A general coefficient of monotonicity $\mu$ is defined, whose maximization is equivalent to optimal satisfaction of the monotonicity condition, and which allows various options both for the treatment of ties and for the weighting of error-of-fit. A general rationale for algorithm construction is derived for maximizing $\mu$ by gradient-guided iterations; this provides a unified mathematical solution to the basic operational problems of norming the gradient to assure proper convergence, of trading between speed and robustness against undesired stationary values, and of choosing a rational first approximation. A distinction is made between single-phase (quadratic) and two-phase (bilinear) strategies for algorithm construction, and between “hard-squeeze” and “soft-squeeze” tactics within these strategies. Special reference is made to the rank-image and related transformational principles, as executed by the current Guttman-Lingoes families of computer programs.
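The abstract does not reproduce the definition of $\mu$. As a point of reference, a commonly cited form of a monotonicity coefficient in the Guttman-Lingoes literature, built on rank-image values $d^{*}_{ij}$, is the following (an assumption here, not necessarily the paper's exact definition):

$$
\mu \;=\; \frac{\sum_{i<j} d_{ij}\, d^{*}_{ij}}{\sqrt{\Bigl(\sum_{i<j} d_{ij}^{2}\Bigr)\Bigl(\sum_{i<j} d^{*2}_{ij}\Bigr)}},
$$

where the rank-image values $d^{*}_{ij}$ are the fitted distances $d_{ij}$ permuted so that their rank order agrees with that of the observed proximities. Under this form, $\mu = 1$ exactly when the monotonicity condition is fully satisfied, and $\mu$ is invariant under uniform rescaling of the configuration.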
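To make the gradient-guided, two-phase (bilinear) strategy concrete, the following Python sketch illustrates the idea under stated assumptions: a Euclidean $d$, a rank-image-style target update, a squared-error surrogate in place of the exact $\mu$-gradient, and RMS gradient norming with a decaying step standing in for the paper's norming rule. The function `nonmetric_mds` and all parameter choices are this sketch's inventions, not the Guttman-Lingoes programs' implementation.

```python
import numpy as np

def nonmetric_mds(delta, m=2, iters=300, step=0.2, seed=0):
    """Two-phase sketch: alternate a rank-image update of the target
    distances with a normed gradient step on the coordinates."""
    rng = np.random.default_rng(seed)
    n = delta.shape[0]
    iu = np.triu_indices(n, k=1)
    order = np.argsort(delta[iu])       # rank order of the observed proximities
    X = rng.standard_normal((n, m))     # crude first approximation (random start)
    for t in range(iters):
        diff = X[:, None, :] - X[None, :, :]      # pairwise coordinate differences
        d = np.sqrt((diff ** 2).sum(-1))[iu]      # fitted Euclidean distances
        # Phase 1 (rank image): d_star reuses d's values in delta's rank order,
        # so the error below vanishes exactly when monotonicity holds.
        d_star = np.empty_like(d)
        d_star[order] = np.sort(d)
        # Phase 2: gradient of sum((d - d_star)**2) with respect to X.
        R = np.zeros((n, n))
        R[iu] = (d - d_star) / np.maximum(d, 1e-12)
        R = R + R.T
        grad = (R[:, :, None] * diff).sum(axis=1)
        # Norm the gradient (RMS) and decay the step so iterations settle;
        # a stand-in for the paper's gradient-norming rule.
        rms = np.sqrt((grad ** 2).mean() + 1e-12)
        X -= (step / (1.0 + 0.05 * t)) * grad / rms
    return X

# Usage: recover a planar configuration from its own Euclidean distances.
pts = np.random.default_rng(1).random((12, 2))
delta = np.sqrt(((pts[:, None] - pts[None, :]) ** 2).sum(-1))
X = nonmetric_mds(delta)
```

The alternation (fix the coordinates and update the rank-image targets, then fix the targets and move the coordinates downhill) is the two-phase pattern named in the abstract; a single-phase strategy would instead fold both steps into a single objective.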