The object of sampling surveys is to evaluate variables characterizing an aggregate (the “whole”) through the observation of only a fraction of that whole, the “sample.” Although such surveys typically focus on sociological or economic variables, it is opinion polls (often referred to as Gallup polls, after the American businessman who first applied them to election forecasts and market surveys) that brought the investigative techniques of sampling surveys their fame.

The identification with this particular type of survey has become so strong that the French word for sampling survey, sondage, which for statisticians designates the survey method that substitutes the “part” for the “whole” (sampling), has become, for the general public, synonymous with the term enquête d’opinion (opinion poll). As a result, controversy over sondages now focuses on the scientific validity of the concept of “opinion” rather than on the legitimacy of extrapolating “from the part to the whole.”

This technique, now well codified thanks to probabilistic methods and the computation of “confidence intervals,” nonetheless has a complex history that predates the Gallup method of the 1930s. The probabilistic justification of the method’s legitimacy emerged from developments in several survey techniques, which focused first on “typical cases” and “examples,” and later on “purposive” sampling, as opposed to “random” sampling. This probabilistic legitimization has not, however, gained universal acceptance: even today, the so-called “quota” method does not follow the canons of random selection and confidence intervals.
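The extrapolation “from the part to the whole” under random sampling is quantified by a confidence interval around the sample estimate. A minimal sketch using the standard normal approximation for a sample proportion; the figures (a sample of 1,000 respondents, 520 favourable answers) are hypothetical, chosen only for illustration:

```python
import math

def proportion_confidence_interval(successes, n, z=1.96):
    """Normal-approximation confidence interval for a proportion.

    z = 1.96 corresponds to a 95% confidence level. The sample
    figures used below are hypothetical, not taken from the text.
    """
    p = successes / n
    # Standard error of the sample proportion under random sampling
    margin = z * math.sqrt(p * (1 - p) / n)
    return p - margin, p + margin

# Hypothetical poll: 520 of 1,000 randomly sampled respondents in favour
low, high = proportion_confidence_interval(520, 1000)
print(f"Estimate: 52.0%, 95% CI: [{low:.1%}, {high:.1%}]")
```

The interval (roughly 48.9% to 55.1% here) is what licenses the inference from sample to population; the quota method mentioned above forgoes this probabilistic guarantee because its sample is not randomly drawn.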