The Historical Experiment as a Research Strategy in the Study of World Politics
Published online by Cambridge University Press: 04 January 2016
Extract
As one looks back on the important developments in political science over the past two decades, there is much to be applauded. While it may be premature to say that we have come “of age” as a scientific discipline, the field is clearly in better shape today than it was in the early 1950s. One indicator is the ratio between mere speculation and observed empirical regularities reported in our journal articles. Another is the decline in the percentage (if not in absolute numbers) of our colleagues who insist that political phenomena are just not amenable to scientific examination. A third might be the dramatic increase in the number of political scientists who have been exposed to training in the techniques and rationale of data making and data analysis. The list could be extended, but we need not do so here.
On the other hand, a stance of comfortable complacency would be very premature. Not only have we fared badly in coming to grips with the knowledge-action relationship in the abstract, but we have by and large done a poor job of shaping the policies of our respective national, provincial, and local governments. Since others as well as I have dealt—if not definitively—with these issues before, let me eschew further discussion of the knowledge application question for the moment, and go on to matters of basic research. Of the more serious flaws to date, two stand out particularly. One is the lack of balance between a concern for cumulativeness on the one hand and the need for innovation on the other. My impression is that students of national politics (at least those who work in the vineyard of empirical regularities) have been more than conscientious in staying with one set of problems, such as the relationship between political attitudes and voting behavior. But students of international politics have, conversely, tended to move all too quickly from one problem to another, long before cumulative evidence has been generated and before our findings are integrated into coherent wholes.
- Type: Research Article
- Copyright: © Social Science History Association 1977
References
Notes
1 Singer, J. David, "Knowledge, Practice, and the Social Sciences in International Politics," in Palmer, Norman, ed., A Design for International Relations Research, Monograph 10 of the American Academy of Political and Social Science (October 1970), 137-49; and "The Peace Researcher and Foreign Policy Prediction," Peace Science Society Papers, 21 (1973), 1-13.
2 Singer, J. David, "Cumulativeness in the Social Sciences: Some Counter-Prescriptions," PS, 8:1 (Fall 1975), 19-21.
3 Pearson, Karl, The Grammar of Science (New York, 1957) [originally published in 1892]; Fisher, Ronald A., Statistical Methods for Research Workers (London, 1925); Fisher, Ronald A., The Design of Experiments (London, 1935); Stouffer, Samuel A., Social Research to Test Ideas (New York, 1962); Blalock, Hubert M., Jr., Causal Inferences in Nonexperimental Research (Chapel Hill, 1961); and Campbell, Donald T. and Stanley, Julian, Experimental and Quasi-Experimental Designs for Research (Chicago, 1966).
4 For example, Eldersveld, Samuel J., "Experimental Propaganda Techniques and Voting Behavior," American Political Science Review, 50 (March 1956), 154-65; Simon, Herbert and Stern, Frederick, "The Effect of Television upon Voting Behavior in Iowa in the 1952 Presidential Election," American Political Science Review, 49 (June 1955), 470-77; and Eulau, Heinz, "Policy Making in American Cities: Comparisons in a Quasi-Longitudinal, Quasi-Experimental Design" (New York, 1971).
5 Representative of that view is Etzioni, Amitai, Political Unification (New York, 1965), 88: "There probably will never be a science of international relations as there is one of physics or chemistry, if for no other reason than experiments are practically impossible and the number of cases is too small for a rigorous statistical analysis." And even so creative and catholic a scholar as Paul Lazarsfeld, in "The American Soldier—An Expository Review," Public Opinion Quarterly, 13 (Fall 1949), 377-404, alleges on page 378 that survey methods "do not use experimental techniques." On the other hand, on page 155 of "The History of Human Conflict," in McNeil, Elton B., ed., The Nature of Human Conflict (Englewood Cliffs, 1965), Ole R. Holsti and Robert C. North actually allude to the possibility of transforming history "into something approaching a laboratory of international behavior." A similar point is made by Snyder, Richard C. in "Some Perspectives on the Use of Experimental Techniques in the Study of International Relations," in Guetzkow, Harold et al., eds., Simulation in International Relations (Englewood Cliffs, 1963), 1-23. In sociology, two of the early proponents of the experimental mode were Greenwood, Ernest, Experimental Sociology: A Study in Method (New York, 1945) and Chapin, F. Stuart, Experimental Designs in Sociological Research (New York, 1947).
6 Elsewhere, I have spelled out in greater detail my views on the more promising strategies for explaining war in general (Singer, J. David, "Modern International War: From Conjecture to Explanation," in Lepawsky, Albert et al., eds., The Search for World Order [New York, 1971], 47-71), as well as the strategy being pursued in the Correlates of War project in particular (Singer, J. David, "The Correlates of War Project," World Politics, 24 [January 1972], 243-70).
7 Rapoport, Anatol, "Methodology in the Physical, Biological, and Social Science," General Systems, 14 (1969), 179-86.
8 Some would differentiate between these two versions of the field experiment. When the researcher merely waits for a condition or event that is probably coming anyway, we refer to a “natural” field experiment; illustrative is the Herbert Simon and Frederick Stern study on the effects of TV upon voter turnout. When the researcher consciously injects the stimulus condition, we refer to a “contrived” field experiment.
9 Campbell and Stanley, Experimental and Quasi-Experimental Designs.
10 Webb, Eugene et al., Unobtrusive Measures: Non-Reactive Research in the Social Sciences (Chicago, 1966).
11 Laponce, Jean, "Experimenting: A Two-Person Game between Man and Nature," in Laponce, Jean and Smoker, Paul, eds., Experimentation and Simulation in Political Science (Toronto, 1972), 4-5.
12 Laponce, Jean, "An Experimental Method to Measure the Tendency to Equibalance in a Political System," American Political Science Review, 60 (December 1966), 982-93.
13 Anatol Rapoport and Melvin Guyer, “The Psychology of Conflict Involving Mixed-Motive Decisions,” Final Research Report NIH-MH 12880-02, 1969.
14 Bales, Robert Freed, "A Set of Categories for the Analysis of Small Group Interaction," American Sociological Review, 15 (1950), 257-63; and Leary, Timothy, Interpersonal Diagnosis of Personality (New York, 1957).
15 Hutt, S. J. and Hutt, Corinne, Direct Observation and Measurement of Behavior (Springfield, Illinois, 1970).
16 Eldersveld, “Experimental Propaganda Techniques.”
17 On the detailed procedures of observation, measurement, and index construction in the Correlates of War project, see: Small, Melvin and Singer, J. David, "Formal Alliances, 1816-1965: An Extension of the Basic Data," Journal of Peace Research, 3 (1969), 257-82; Small, Melvin and Singer, J. David, "Diplomatic Importance of States, 1816-1970: An Extension and Refinement of the Indicator," World Politics, 25 (July 1973), 577-99; Singer, J. David and Small, Melvin, The Wages of War, 1816-1965: A Statistical Handbook (New York, 1972); Wallace, Michael D. and Singer, J. David, "Inter-Governmental Organization in the Global System, 1816-1964: A Quantitative Description," International Organization, 24 (Spring 1970), 239-87; and Ray, James Lee and Singer, J. David, "Measuring the Concentration of Power in the International System," Sociological Methods and Research, 1 (May 1973), 403-37.
18 As a matter of fact, the laboratory experimenter must face several problems that need not concern those of us who conduct historical experiments. Perhaps foremost among these is the problem of experimenter bias. As Robert Rosenthal ("On the Social Psychology of the Psychological Experiment," American Scientist, 51 [June 1963], 268-83; and Experimenter Effects in Behavioral Research [New York, 1966]) and others have demonstrated, the theoretical biases of the researcher—or laboratory assistants—constitute a recurrent source of distortion in the experimental work of physical and biological, as well as social, scientists. In small group experiments, for example, the way in which the stimulus is presented can induce systematic bias in the behavior of the subjects. Similarly, the observation and measurement of human or animal responses to a stimulus can be systematically distorted by the expectations of the observer.
19 Kort, Fred, "Regression Analysis and Discriminant Analysis," American Political Science Review, 67 (June 1973), 555-59.
20 Campbell and Stanley, Experimental and Quasi-Experimental Designs.
21 Alker, Hayward, "Causal Inference and Political Analysis," in Bernd, Joseph, ed., Mathematical Applications in Political Science 2 (Dallas, 1966), 7-43.
22 Campbell and Stanley, Experimental and Quasi-Experimental Designs, 5.
23 These are: "History, the specific events occurring between the first and second measurement in addition to the experimental variable; Maturation, processes within the respondents operating as a function of the passage of time per se (not specific to the particular events), including growing older, growing hungrier, growing more tired, and the like; Testing, the effects of taking a test upon the scores of a second testing; Instrumentation, in which changes in the observers or scorers used may produce changes in the obtained measurements; Statistical regression, operating where groups have been selected on the basis of their extreme scores; Biases resulting in differential selection of respondents for the comparison groups; Experimental mortality, or differential loss of respondents from the comparison groups; Selection-maturation interaction, etc., which in certain of the multiple-group quasi-experimental designs, such as Design 10, is confounded with, i.e., might be mistaken for, the effect of the experimental variable."
24 A reasonable position on this issue is that taken by Blalock (Causal Inferences, 5): "There appears to be an inherent gap between the languages of theory and research which can never be bridged in a completely satisfactory way. One thinks in terms of … causes … but one's tests are made in terms of covariations, operations, and pointer readings."
25 Smelser, Neil J., Essays in Sociological Explanation (Englewood Cliffs, N.J., 1968) distinguishes between "parameter variables" and "operative variables" to emphasize this distinction.
26 We should, of course, make clear that History, like all disciplines, contains a wide range of epistemological viewpoints. Illustrative of the growing trend toward greater rigor in that discipline are: Rowney, Don K. and Graham, James Q., Jr., eds., Quantitative History (Homewood, Illinois, 1969); Lorwin, Val R. and Price, Jacob M., eds., The Dimensions of the Past (New Haven, 1972); and Dollar, Charles M. and Jensen, Richard J., Historian's Guide to Statistics (New York, 1971).
27 One problem with this comparative case method, however, is the likelihood of ending up with too few cases and too many variables, giving us a poor N:K ratio; see Deutsch, Karl W., Singer, J. David, and Smith, Keith, "The Organizing Efficiency of Theories," American Behavioral Scientist, 9 (October 1965), 30-33.
28 The theoretical model is articulated in considerable detail in Deutsch, Karl W. and Singer, J. David, "Multipolar Power Systems and International Stability," World Politics, 16 (April 1964), 390-406.
29 Beer, Samuel H., "The Comparative Method and the Study of British Politics," Comparative Politics, 1 (October 1968), 19.
30 Lijphart, Arend, "Comparative Politics and the Comparative Method," American Political Science Review, 65 (September 1971), 684.
31 Nagel, Ernest, The Structure of Science (New York, 1961), 452.
32 While this is an imaginary experiment, simplified for illustrative purposes, a fair number of "real" ones have been conducted within the Correlates of War project at Michigan. See, for example, Singer, J. David and Small, Melvin, "Alliance Aggregation and the Onset of War, 1815-1945," in Singer, J. David, ed., Quantitative International Politics (New York, 1968), 247-86; Singer, J. David, Bremer, Stuart, and Stuckey, John, "Capability Distribution, Uncertainty, and Major Power War, 1816-1965," in Russett, Bruce, ed., Peace, War, and Numbers (Beverly Hills, 1972), 19-48; and Wallace, Michael D., War and Rank Among Nations (Lexington, Massachusetts, 1973). For another imaginary one, see Singer, J. David, On the Scientific Study of Politics (New York, 1972). I am indebted to John Stuckey for the illustrations used here.
33 Platt, John R., "Strong Inference," Science, 146 (October 1964), 347-53.
34 Guetzkow et al., Simulation in International Relations.
35 Naylor, Thomas H., Computer Simulation Experiments with Models of Economic Systems (New York, 1971).
36 For an excellent example of a simulation that combines empirical and logical components, see Bremer, Stuart A., Simulated Worlds: A Computer Model of National Decision Making (Princeton, 1977).