
Editorial: Methodology in judgment and decision making research

Published online by Cambridge University Press:  01 January 2023

Andreas Glöckner
Affiliation: Max Planck Institute for Research on Collective Goods, Kurt-Schumacher-Str. 10, D-53113 Bonn, Germany

Benjamin E. Hilbig
Affiliation: School of Social Sciences, University of Mannheim, 68131 Mannheim, Germany

Abstract

In this introduction to the special issue on methodology, we provide background on its original motivation and a systematic overview of the contributions. The contributions are discussed in relation to the phase of the scientific process to which they (most strongly) refer: theory construction, design, data analysis, and cumulative development of scientific knowledge. Several contributions propose novel measurement techniques and paradigms that will allow for new insights and can thus benefit researchers in JDM and beyond. Another set of contributions centers on how models can best be tested and/or compared. Especially when viewed in combination, the papers on this topic spell out vital necessities for model comparisons and provide approaches that solve noteworthy problems faced by prior work.

Type
Research Article
The authors license this article under the terms of the Creative Commons Attribution 3.0 License.
Copyright
Copyright © The Authors [2011]. This is an Open Access article, distributed under the terms of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.

1 Introduction

Methodology is one of the vital pillars of all science. Indeed, the question of how we go about our scientific quests—rather than what exactly we are investigating—has stimulated numerous debates and controversies over the past centuries. Mostly, this debate has served the common purpose of establishing certain standards which serve as a road map for scientists. Though disciplines and subfields vary greatly in their specific methodological standards, all share some degree of concern for such matters.

The field of psychology certainly is no exception. On the contrary, “[o]ne of the hallmarks of modern academic psychology is its methodological sophistication” (Rozin, 2009, p. 436). Methodological issues play a prominent role in the ongoing exchange, and a growing number of contributions have recently addressed potential methodological problems inherent in the behavioral sciences (e.g., see the recent special issues in Perspectives on Psychological Science by De Houwer, Fiedler, & Moors, 2011; and Kruschke, 2011). Doubts have been raised concerning the subjects on which findings are typically based (Henrich, Heine, & Norenzayan, 2010), the approaches taken in theory development and testing (Gigerenzer, 1998; Henderson, 1991; Trafimow, 2003, 2009; Wallach & Wallach, 1994), the nature of the behavior assessed (Baumeister, Vohs, & Funder, 2007), or specific practices of data collection and questionable standards in data analysis (Dienes, 2011; Simmons, Nelson, & Simonsohn, in press; Wagenmakers, 2007; Wagenmakers, Wetzels, Borsboom, & van der Maas, 2011; Wetzels et al., 2011), to name but a few examples (see footnote 1).

Also, there are several projects in development that attempt to coordinate collective action for solving fundamental methodological problems. The Filedrawer Project (www.psychfiledrawer.org) provides an online archive of replication attempts to address the problem that “[m]ost journals […] are rarely willing to publish even carefully conducted non-replications that question the validity of findings that they have published” (www.psychfiledrawer.org/about.php); this problem, in turn, can lead to publication biases (see Renkewitz, Fuchs, & Fiedler, 2011). In a similar vein, the Reproducibility Project (http://openscienceframework.org) aims to estimate the reproducibility of findings published in top psychological journals by conducting a collective, distributed attempt to replicate findings from a large sample of recently published papers. Needless to say, still other examples of papers and projects highlight methodological challenges and provide potential solutions. In a nutshell, all essentially hint at the continuous struggle for increasingly conclusive, robust, and general knowledge.

In our view, this struggle also goes on in the field of Judgment and Decision Making (JDM) research. Often enough, important advances in this area are motivated by methodological criticism. For example, of the many reactions to the recently proposed priority heuristic for risky choice (Brandstätter, Gigerenzer, & Hertwig, 2006), a substantial number raise methodological concerns pertaining to research strategy in general, the diagnosticity of tasks used, or the data analyses applied (e.g., Andersen, Harrison, Lau, & Rutström, 2010; Birnbaum, 2008a, 2008b; Birnbaum & LaCroix, 2008; Fiedler, 2010; Glöckner & Betsch, 2008; Hilbig, 2008; Regenwetter, Dana, & Davis-Stober, 2011; Regenwetter, Ho, & Tsetlin, 2007). Other theoretical controversies have similarly stimulated debate that largely centers on methodological issues (e.g., Brighton & Gigerenzer, 2011; Camilleri & Newell, 2011; Hilbig, 2010; Hilbig & Richter, 2011; Marewski, Schooler, & Gigerenzer, 2010; Pachur, 2011).

These examples and others demonstrate a need for an explicit and focused exchange of methodological arguments in JDM and potentially some room for improving common practices in this field. This assertion provided the main motivation for setting up a call for papers on methodology in JDM research. Aiming to keep our own agendas out of the early stages of development, we kept the initial call for papers deliberately broad. The gratifying upshot was an unexpectedly large number of interesting and important submissions.

Despite the breadth of the initial call, however, an early observation was that relatively few (if any) contributions dealt with issues in the philosophy of science or concerned methodological issues of theory formation and revision. Instead, the vast majority of manuscripts addressed issues of design and data analysis. This unequal distribution will become obvious in what follows: In this introduction to the special issue, we briefly discuss all contributions ordered by the stages of scientific discovery to which they (mostly) refer (see Figure 1).

Figure 1: Overview of contributions.

2 Overview of papers

In this overview, we commence with issues of theory construction before turning to experimental design and measurement. Next, we discuss the papers pertaining to the steps that follow data collection, namely data analysis and the cumulative development of knowledge. Note, however, that several of the papers speak to more than one of these matters. As such, ordering and grouping the contributions in the current way should not be taken to imply that each paper relates to only one of the phases of scientific progress.

2.1 Theory construction

Two papers in this special issue discuss theory construction and theory development in the field of JDM (Glöckner & Betsch, 2011; Katsikopoulos & Lan, 2011). Following Popper's approach of critical rationalism, Glöckner and Betsch argue that scientific progress crucially requires theories to be formulated so as to comprise high empirical content while remaining falsifiable. The authors point out some common drawbacks in corresponding theory formulation in JDM—especially a tendency towards formulating weak theories. For certain classes of JDM models, some remedies are suggested. More generally, the observable shortcomings are partially attributed to a social dilemma structure (i.e., strictly maximizing personal interests would harm the collective interest of achieving scientific progress). It is suggested that the scientific community should agree upon a change in publication policies to overcome this dilemma structure.

Katsikopoulos and Lan take a historical perspective and discuss general developments in the field of JDM by investigating Herbert Simon’s influence on current work. In a review of recent articles in the field, the authors demonstrate the strong influence that Simon’s ideas have had on today’s thinking in JDM. Katsikopoulos and Lan also critically assess the way in which these ideas are treated in current work. In particular, the authors argue that integrative approaches for research on descriptive and prescriptive models are sought too seldom.

2.2 Design

Many of the contributions in this special issue focus on the steps between theory construction and data collection. That is, they concern the design stage, including the use of measurement methods, as well as the selection of appropriate tasks and stimuli.

2.2.1 Measurement methods

Schulte-Mecklenbeck, Kühberger, and Ranyard (2011) discuss classic and more recently developed process tracing methods and present examples of how these techniques can strongly aid the development and testing of JDM process models. In a similar vein, Franco-Watkins and Johnson (2011) suggest applying a decision moving window technique (i.e., an information board in which information is revealed while it is being looked at). They argue that this information board variant allows for combining the advantages of classic Mouselab techniques and eye-tracking; specifically, this method should allow for fast and effortless information acquisition while ensuring that the researcher gains full insight into which information was looked up, for how long, and when.

A third paper proposing a new method to gain insight into cognitive processes was contributed by Koop and Johnson (2011). They suggest applying a measure of response dynamics that is based on analyzing different aspects of mouse trajectories between a starting position and the option chosen. The underlying idea is that the attraction exerted by the non-chosen option will manifest itself in these trajectories (e.g., Spivey & Dale, 2006) and thus provide insight into the online formation of preferences. Overall, these contributions jointly signify that the application and combination of classic and new methods will provide important insights concerning the processes underlying judgment and decision making.
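
To make the logic of such trajectory-based measures concrete, the following minimal sketch (ours, not Koop and Johnson's implementation; all coordinates are fabricated) computes one common response-dynamics index, the maximum deviation of a mouse path from the straight line between its start and end points:

    import numpy as np

    def max_deviation(xs, ys):
        """Largest perpendicular distance of a recorded trajectory from the
        straight start-to-end line; larger values suggest stronger attraction
        toward the non-chosen option."""
        points = np.column_stack([xs, ys]).astype(float)
        start, end = points[0], points[-1]
        dx, dy = end - start
        length = np.hypot(dx, dy)
        if length == 0:
            return 0.0
        # perpendicular distance of each sample from the start-to-end line
        dist = np.abs(dx * (points[:, 1] - start[1]) - dy * (points[:, 0] - start[0])) / length
        return float(dist.max())

    # Fabricated samples of a curved movement toward the chosen option
    print(max_deviation([0, 10, 30, 60, 100], [0, 40, 70, 85, 100]))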

2.2.2 Diagnostic task selection

Another issue of research design discussed in several papers is the selection of tasks that allow for actually discriminating between theories or hypotheses. Doyle, Chen, and Savani (2011) provide a method (using Excel Solver) for selecting tasks that differentiate optimally between theoretical models of temporal discounting. They show how to construct tasks that make the rate parameters of prominent theories orthogonal or even inversely related.
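
The underlying logic can be illustrated with a small sketch (ours, not the authors' Excel Solver set-up; all task values are fabricated): for candidate indifference tasks of the form "a now versus A after t days", one can compute the discount rates implied by an exponential and a hyperbolic model and check how strongly the two sets of rates covary across tasks; a near-zero or negative correlation marks a diagnostic task set.

    import numpy as np

    # (immediate amount a, delayed amount A, delay t in days) -- fabricated
    tasks = [(45, 50, 10), (40, 60, 30), (25, 55, 90), (30, 80, 180)]

    # a = A * exp(-r * t)  ->  r = ln(A / a) / t   (exponential discounting)
    exp_rates = [np.log(A / a) / t for a, A, t in tasks]
    # a = A / (1 + k * t)  ->  k = (A / a - 1) / t (hyperbolic discounting)
    hyp_rates = [(A / a - 1) / t for a, A, t in tasks]

    # Redundant tasks yield strongly positive correlations between implied
    # parameters; diagnostic task sets decouple (or invert) this relation.
    print(np.corrcoef(exp_rates, hyp_rates)[0, 1])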

In a rather different domain, Murphy, Ackermann, and Handgraaf (2011) provide a method to measure social value orientation (Van Lange, 1999) by using a few highly diagnostic tasks in which participants distribute money between themselves and others. The innovative method is based on a slider format which—combined with diagnostic tasks—makes data collection very efficient. Indeed, both approaches, by Doyle et al. and Murphy et al., also seem promising in that they can probably be extended to other concepts relevant in JDM, such as loss aversion, risk aversion, and the like.
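
As an illustration of how such slider allocations are commonly scored, the following sketch computes a social value orientation angle from a participant's mean allocations (a simplified rendering with fabricated numbers; see Murphy et al., 2011, for the actual measure and its scoring):

    import math

    mean_to_self = 85.0    # fabricated mean allocation to oneself
    mean_to_other = 55.0   # fabricated mean allocation to the other person

    # Angle of the mean allocation vector relative to the scale midpoint (50, 50);
    # larger angles indicate a more prosocial orientation.
    svo_angle = math.degrees(math.atan2(mean_to_other - 50.0, mean_to_self - 50.0))
    print(round(svo_angle, 1))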

Another contribution addresses the issue of diagnostic task selection from a somewhat different angle. Jekel, Fiedler, and Glöckner (2011) provide a standard method for diagnostic task selection in probabilistic inference tasks. The suggested Euclidean Diagnostic Task Selection method increases the efficiency of research design and reduces the degree of subjectivity in task selection. Jekel et al. also provide a ready-made tool programmed in R that makes it easy to use the method in future research (see also Jekel et al., 2010). Overall, there is agreement that diagnostic task selection is crucial for model comparison and model testing.
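
The core intuition, ranking candidate tasks by how far apart the competing models' predictions lie in Euclidean space, can be sketched as follows (a simplified illustration with fabricated prediction values, not the authors' algorithm or their R tool):

    import numpy as np

    # Predicted probabilities of choosing option A under two competing models,
    # for four candidate tasks (values fabricated).
    predictions = {
        "task1": (0.90, 0.88),
        "task2": (0.75, 0.20),
        "task3": (0.60, 0.55),
        "task4": (0.30, 0.95),
    }

    def diagnosticity(preds):
        """Euclidean distance between the models' prediction vectors."""
        a, b = np.atleast_1d(preds[0]), np.atleast_1d(preds[1])
        return float(np.linalg.norm(a - b))

    # Most diagnostic tasks (largest distance between predictions) come first.
    print(sorted(predictions, key=lambda t: diagnosticity(predictions[t]), reverse=True))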

2.3 Data analysis

The majority of papers in the special issue are concerned with core issues of data analysis, including contributions suggesting improved methods for model comparisons, demonstrating the advantages of Bayesian methods, or pointing to the advantages of mixed-model approaches.

2.3.1 Model comparisons

Several papers focus on methods for model comparisons. Davis-Stober and Brown (2011) describe how to apply a normalized maximum likelihood (NML) approach to strategy classification in probabilistic inference and risky choice. One of the crucial advantages is that NML takes into account models’ overall flexibility instead of correcting for the number of free parameters only. The paper also illustrates how to test models assuming that decision makers do not stick to single strategies, but rather use a mixture of these.
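
For readers less familiar with this framework, the NML distribution in its generic minimum description length form (standard notation, not necessarily the exact formulation Davis-Stober and Brown use) is

$$ p_{\mathrm{NML}}(x) \;=\; \frac{p\bigl(x \mid \hat{\theta}(x)\bigr)}{\sum_{y} p\bigl(y \mid \hat{\theta}(y)\bigr)}, $$

where the numerator is the model's best fit to the observed data x, and the denominator sums the best achievable fit over all possible data sets y. The denominator thus penalizes a model for its overall flexibility, not merely for its number of free parameters.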

Moshagen and Hilbig (2011) connect to the ideas discussed by Glöckner and Betsch, though focusing more on the importance of falsification. They show that comparing the fit of competing models can easily lead to entirely false conclusions when the true data-generating model is not actually among those considered (Bröder & Schiffer, 2003). As a remedy, they suggest including a test of absolute model fit, which provides a chance of refuting false models.
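
In its simplest form (a generic sketch with fabricated frequencies, not Moshagen and Hilbig's specific procedure), such an absolute-fit check compares observed choice frequencies against the frequencies implied by the fitted model; a clearly significant misfit argues for refuting the model even if it outperforms its competitors:

    from scipy.stats import chisquare

    observed = [52, 30, 18]              # observed choices across three options (fabricated)
    model_probs = [0.60, 0.30, 0.10]     # choice probabilities implied by the fitted model
    expected = [p * sum(observed) for p in model_probs]

    # Pearson chi-square test of absolute fit; in practice the degrees of
    # freedom would be adjusted for estimated parameters (ddof argument).
    stat, p_value = chisquare(f_obs=observed, f_exp=expected)
    print(stat, p_value)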

Broomell, Budescu, and Por (2011) show that the problem of overlapping model predictions (see also the contribution by Jekel et al.) can lead to biased conclusions in model comparisons and model competitions (see Erev et al., 2010). The reason is that global measures of fit can hide the level of agreement between the predictions of various models. Broomell et al. propose the use of more informative pair-wise model comparisons and demonstrate the advantages of such an approach. The contribution by Jekel et al., discussed in the previous section, adds insight on this matter by suggesting certain improvements in model comparisons. The same holds true for the hierarchical Bayesian approach put forward by Lee and Newell (2011), which is discussed in the next section.
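
The kind of overlap that global fit indices can mask is easy to make visible with a pair-wise agreement table (a toy sketch with fabricated predictions, in the spirit of, but not identical to, the analyses Broomell et al. propose):

    import itertools

    predicted_choice = {                 # predicted option per task (fabricated)
        "model1": ["A", "A", "B", "A", "B"],
        "model2": ["A", "A", "B", "A", "A"],
        "model3": ["B", "A", "A", "B", "A"],
    }

    for m1, m2 in itertools.combinations(predicted_choice, 2):
        pairs = list(zip(predicted_choice[m1], predicted_choice[m2]))
        agreement = sum(a == b for a, b in pairs) / len(pairs)
        print(f"{m1} vs {m2}: {agreement:.0%} identical predictions")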

2.3.2 Bayesian approaches

Another prominent issue concerns the application of Bayesian approaches and the replacement of classic methods of hypothesis testing by corresponding Bayesian methods. Lee and Newell (2011) demonstrate the advantages of using hierarchical Bayesian methods for modeling the search and stopping rules of decision strategies at the level of individuals. One of the core advantages over the strategy classification methods discussed above is that the hierarchical structure uses what has been learned about one subject to assist inference about another (“shrinkage”). Lee and Newell further show that their method can provide new insight into the nature of individual differences (e.g., in information search), which might also help to resolve the debate between multi-strategy and unified models of decision making (e.g., Newell, 2005).
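
The source of this shrinkage is the hierarchical prior itself; in generic notation (a schematic sketch, not Lee and Newell's specific model), each subject's parameter is drawn from a group-level distribution whose parameters are estimated jointly with the individual ones:

$$ \theta_i \sim \mathrm{Normal}(\mu, \sigma^2), \qquad y_{ij} \sim p\bigl(y \mid \theta_i\bigr), $$

where θ_i is subject i's parameter (e.g., a search or stopping threshold) and y_ij denotes that subject's observed decisions. Because each θ_i is informed by the group-level posterior over μ and σ, individual estimates are pulled toward the group mean, most strongly for subjects who contribute few or noisy observations.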

In another paper on Bayesian methods, Matthews (2011) discusses the potential advantages of replacing classic Fisherian and Neyman-Pearson hypothesis testing. He shows that reanalyzing previous studies with Bayesian t-tests (Rouder, Speckman, Sun, Morey, & Iverson, 2009) in place of classic t-tests can lead to strikingly different conclusions. The Bayesian approach allows for comparing mutually exclusive hypotheses on the same footing, thus avoiding the problems of p-values and allowing for evidence in favor of the null hypothesis. From a long-term perspective, the Bayesian approach would also aid knowledge accumulation by considering the sum of previous research findings when setting the priors for later analyses. We hope that the paper inspires further constructive discussion concerning the clear advantages but also the remaining drawbacks of Bayesian statistics.
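
To convey the flavor of such a reanalysis without reproducing the JZS Bayesian t-test that Matthews applies, the following sketch uses the rough BIC approximation to the Bayes factor discussed by Wagenmakers (2007); the summary statistics are fabricated:

    import math

    n = 40        # sample size (fabricated)
    t = 2.1       # classic t statistic, df = n - 1 (fabricated)

    # Proportion of variance explained by the effect, derived from t
    r2 = t**2 / (t**2 + (n - 1))

    # BIC(H1) - BIC(H0): one extra parameter, penalized by ln(n)
    delta_bic = n * math.log(1 - r2) + math.log(n)

    # BF01 > 1 favors the null, BF01 < 1 favors the alternative; values near 1
    # indicate ambiguous evidence even when the classic p-value is below .05.
    bf01 = math.exp(delta_bic / 2)
    print(round(bf01, 2))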

2.3.3 Mixed-model approaches

Budescu and Johnson (2011) suggest a model-based approach to improve the analysis of the calibration of probability judgments. In calibration research, judgments must be compared against event probabilities. However, event probabilities are often unknown. The authors show that aggregating over observations can lead to wrong conclusions and suggest using a model-based approach instead. Specifically, they put forward a mixed-model regression approach (simultaneously taking into account effects between and within subjects) to estimate event probabilities, which are then compared against probability judgments to determine calibration. Similar to the hierarchical approach by Lee and Newell, one crucial advantage of this mixed-model-based approach is that estimates of within- and between-subjects effects are more stable because they benefit from the larger underlying body of data.
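
A generic mixed-model regression of this kind (a schematic linear sketch with a fabricated data frame, not Budescu and Johnson's specific calibration model) can be written, for instance, with statsmodels:

    import pandas as pd
    import statsmodels.formula.api as smf

    # Fabricated data: whether the event occurred and the judged probability,
    # for several judgments nested within subjects.
    df = pd.DataFrame({
        "outcome":  [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0],
        "judgment": [.9, .2, .7, .8, .3, .6, .4, .1, .8, .9, .2, .7, .6, .3, .8, .4],
        "subject":  [1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4],
    })

    # Random intercepts per subject let within- and between-subject information
    # jointly stabilize the estimated relation between judgments and outcomes.
    fit = smf.mixedlm("outcome ~ judgment", df, groups=df["subject"]).fit()
    print(fit.summary())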

2.4 Cumulative development of knowledge

There are two contributions in the special issue that—besides touching on questions of data analysis—also speak to the matter of cumulative development of knowledge. One is the above-mentioned paper by Matthews (2011) on using Bayesian approaches. As mentioned above, replacing (or complementing) classic hypothesis testing with the Bayesian approach aids knowledge accumulation. In a second contribution, Renkewitz, Fuchs, and Fiedler (2011) address the important issue of publication biases. By re-analyzing two JDM-specific meta-analyses as examples, they demonstrate that publication biases are also present in JDM research. Such biases, in turn, will hinder the appropriate cumulative development of knowledge. Indeed, severely distorted overall estimations of effect size—or even premature acceptance of the existence and stability of effects—can be the consequences. The authors discuss specific methods to identify publication biases (in meta-analyses) and further provide recommendations on how changes in the overall standards and publication practices might counteract the problem identified.
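
One standard tool in this family of detection methods (not necessarily among the specific procedures Renkewitz et al. apply) is Egger's regression test for funnel-plot asymmetry, sketched here with fabricated effect sizes and standard errors:

    import numpy as np
    import statsmodels.api as sm

    effects = np.array([0.55, 0.48, 0.30, 0.62, 0.25, 0.41])   # fabricated effect sizes
    se = np.array([0.30, 0.25, 0.12, 0.35, 0.10, 0.20])        # fabricated standard errors

    # Regress standardized effects on precision; an intercept clearly different
    # from zero signals small-study effects consistent with publication bias.
    design = sm.add_constant(1 / se)
    fit = sm.OLS(effects / se, design).fit()
    print(fit.params[0], fit.pvalues[0])   # Egger intercept and its p-value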

3 Summary and conclusions

We are pleased to say that the 15 papers contained in this special issue provide many important insights into JDM methodology and offer helpful tools and suggestions which—in our view—will further improve the confidence we may have in our findings. Although these 15 contributions are motivated by some methodological weaknesses in the field of JDM, it is also important to highlight that many of the problems tackled speak for the methodological sophistication of JDM research that is already in place. Of course, some of the points raised in this special issue are more controversial than others. Indeed, our experience in handling these papers throughout the review process showed that some papers have more potential for debate than others. Nonetheless, the constructive way in which all contributions describe ways to overcome methodological weaknesses makes us optimistic that this issue might inspire further positive developments.

It seems that the techniques and policies for improving our methodological standards are available. One of the foremost aims of the special issue was to inspire a more intense debate concerning these issues in order to improve the degree to which standards are shared within the community, which is the basic requirement for their comprehensive enforcement. This, in turn, is necessary for achieving scientific progress and overcoming the social dilemma structures inherent in joint scientific discovery.

Footnotes

We thank all authors for their stimulating contributions, the many reviewers who provided timely and vital feedback, and the editor-in-chief, Jon Baron, for making this special issue possible.

1 For methodological debates in medicine that also apply to psychological research, see Ioannidis (2005) and Ioannidis, Tatsioni, and Karassa (2010).

References

Andersen, S., Harrison, G. W., Lau, M. I., & Rutström, E. E. (2010). Behavioral econometrics for psychologists. Journal of Economic Psychology, 31, 553–576.
Baumeister, R. F., Vohs, K. D., & Funder, D. C. (2007). Psychology as the science of self-reports and finger movements: Whatever happened to actual behavior? Perspectives on Psychological Science, 2, 396–403.
Birnbaum, M. H. (2008a). Evaluation of the priority heuristic as a descriptive model of risky decision making: Comment on Brandstätter, Gigerenzer, and Hertwig (2006). Psychological Review, 115, 253–260.
Birnbaum, M. H. (2008b). New tests of cumulative prospect theory and the priority heuristic: Probability-outcome tradeoff with branch splitting. Judgment and Decision Making, 3, 304–316.
Birnbaum, M. H., & LaCroix, A. R. (2008). Dimension integration: Testing models without trade-offs. Organizational Behavior and Human Decision Processes, 105, 122–133.
Brandstätter, E., Gigerenzer, G., & Hertwig, R. (2006). Making choices without trade-offs: The priority heuristic. Psychological Review, 113, 409–432.
Brighton, H., & Gigerenzer, G. (2011). Towards competitive instead of biased testing of heuristics: A reply to Hilbig and Richter (2011). Topics in Cognitive Science, 3, 197–205.
Bröder, A., & Schiffer, S. (2003). Bayesian strategy assessment in multi-attribute decision making. Journal of Behavioral Decision Making, 16, 193–213.
Broomell, S. B., Budescu, D. V., & Por, H.-H. (2011). Pair-wise comparisons of multiple models. Judgment and Decision Making, 6, 821–831.
Budescu, D. V., & Johnson, T. R. (2011). A model-based approach for the analysis of the calibration of probability judgments. Judgment and Decision Making, 6, 857–869.
Camilleri, A. R., & Newell, B. R. (2011). When and why rare events are underweighted: A direct comparison of the sampling, partial feedback, full feedback and description choice paradigms. Psychonomic Bulletin & Review, 18, 377–384.
Davis-Stober, C. P., & Brown, N. (2011). A shift in strategy or “error”? Strategy classification over multiple stochastic specifications. Judgment and Decision Making, 6, 800–813.
De Houwer, J., Fiedler, K., & Moors, A. (2011). Strengths and limitations of theoretical explanations in psychology: Introduction to the special section. Perspectives on Psychological Science, 6, 161–162.
Dienes, Z. (2011). Bayesian versus orthodox statistics: Which side are you on? Perspectives on Psychological Science, 6, 274–290.
Doyle, J. R., Chen, C. H., & Savani, K. (2011). New designs for research in delay discounting. Judgment and Decision Making, 6, 759–770.
Erev, I., Ert, E., Roth, A. E., Haruvy, E., Herzog, S. M., Hau, R., et al. (2010). A choice prediction competition: Choices from experience and from description. Journal of Behavioral Decision Making, 23, 15–47.
Fiedler, K. (2010). How to study cognitive decision algorithms: The case of the priority heuristic. Judgment and Decision Making, 5, 21–32.
Franco-Watkins, A. M., & Johnson, J. G. (2011). Applying the decision moving window to risky choice: Comparison of eye-tracking and mousetracing methods. Judgment and Decision Making, 6, 740–749.
Gigerenzer, G. (1998). Surrogates for theories. Theory & Psychology, 8, 195–204.
Glöckner, A., & Betsch, T. (2008). Do people make decisions under risk based on ignorance? An empirical test of the priority heuristic against cumulative prospect theory. Organizational Behavior and Human Decision Processes, 107, 75–95.
Glöckner, A., & Betsch, T. (2011). The empirical content of theories in judgment and decision making: Shortcomings and remedies. Judgment and Decision Making, 6, 711–721.
Henderson, D. K. (1991). On the testability of psychological generalizations (psychological testability). Philosophy of Science, 58, 586–606.
Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33, 61–83.
Hilbig, B. E. (2008). One-reason decision making in risky choice? A closer look at the priority heuristic. Judgment and Decision Making, 3, 457–462.
Hilbig, B. E. (2010). Reconsidering “evidence” for fast-and-frugal heuristics. Psychonomic Bulletin & Review, 17, 923–930.
Hilbig, B. E., & Richter, T. (2011). Homo heuristicus outnumbered: Comment on Gigerenzer and Brighton (2009). Topics in Cognitive Science, 3, 187–196.
Ioannidis, J. P. (2005). Why most published research findings are false. PLoS Medicine, 2, 697–701.
Ioannidis, J. P., Tatsioni, A., & Karassa, F. B. (2010). Who is afraid of reviewers’ comments? Or, why anything can be published and anything can be cited. European Journal of Clinical Investigation, 40, 285–287.
Jekel, M., Fiedler, S., & Glöckner, A. (2011). Diagnostic task selection for strategy classification in judgment and decision making. Judgment and Decision Making, 6, 782–799.
Jekel, M., Nicklisch, A., & Glöckner, A. (2010). Implementation of the Multiple-Measure Maximum Likelihood strategy classification method in R: Addendum to Glöckner (2009) and practical guide for application. Judgment and Decision Making, 5, 54–63.
Katsikopoulos, K. V., & Lan, C.-H. (2011). Herbert Simon’s spell on judgment and decision making. Judgment and Decision Making, 6, 722–732.
Koop, G. J., & Johnson, J. G. (2011). Continuous process tracing and the Iowa Gambling Task: Extending response dynamics to multialternative choice. Judgment and Decision Making, 6, 750–758.
Kruschke, J. K. (2011). Introduction to special section on Bayesian data analysis. Perspectives on Psychological Science, 6, 272–273.
Lee, M. D., & Newell, B. R. (2011). Using hierarchical Bayesian methods to examine the tools of decision-making. Judgment and Decision Making, 6, 832–842.
Marewski, J. N., Schooler, L. J., & Gigerenzer, G. (2010). Five principles for studying people’s use of heuristics. Acta Psychologica Sinica, 42, 72–87.
Matthews, W. J. (2011). What would judgment and decision making research be like if we took a Bayesian approach to hypothesis testing? Judgment and Decision Making, 6, 843–856.
Moshagen, M., & Hilbig, B. E. (2011). Methodological notes on model comparisons and strategy classification: A falsificationist proposition. Judgment and Decision Making, 6, 814–820.
Murphy, R. O., Ackermann, K. A., & Handgraaf, M. J. J. (2011). Measuring social value orientation. Judgment and Decision Making, 6, 771–781.
Newell, B. R. (2005). Re-visions of rationality? Trends in Cognitive Sciences, 9, 11–15.
Pachur, T. (2011). The limited value of precise tests of the recognition heuristic. Judgment and Decision Making, 6, 413–422.
Regenwetter, M., Dana, J., & Davis-Stober, C. P. (2011). Transitivity of preferences. Psychological Review, 118, 42–56.
Regenwetter, M., Ho, M.-H. R., & Tsetlin, I. (2007). Sophisticated approval voting, ignorance priors, and plurality heuristics: A behavioral social choice analysis in a Thurstonian framework. Psychological Review, 114, 994–1014.
Renkewitz, F., Fuchs, H. M., & Fiedler, S. (2011). Is there evidence of publication biases in JDM research? Judgment and Decision Making, 6, 870–881.
Rouder, J. N., Speckman, P. L., Sun, D. C., Morey, R. D., & Iverson, G. (2009). Bayesian t tests for accepting and rejecting the null hypothesis. Psychonomic Bulletin & Review, 16, 225–237.
Rozin, P. (2009). What kind of empirical research should we publish, fund, and reward? A different perspective. Perspectives on Psychological Science, 4, 435–439.
Schulte-Mecklenbeck, M., Kühberger, A., & Ranyard, R. (2011). The role of process data in the development and testing of process models of judgment and decision making. Judgment and Decision Making, 6, 733–739.
Simmons, J. P., Nelson, L. D., & Simonsohn, U. (in press). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science.
Spivey, M. J., & Dale, R. (2006). Continuous dynamics in real-time cognition. Current Directions in Psychological Science, 15, 207–211.
Trafimow, D. (2003). Hypothesis testing and theory evaluation at the boundaries: Surprising insights from Bayes’s theorem. Psychological Review, 110, 526–535.
Trafimow, D. (2009). The theory of reasoned action: A case study of falsification in psychology. Theory & Psychology, 19, 501–518.
Van Lange, P. A. M. (1999). The pursuit of joint outcomes and equality in outcomes: An integrative model of social value orientation. Journal of Personality and Social Psychology, 77, 337–349.
Wagenmakers, E.-J. (2007). A practical solution to the pervasive problems of p values. Psychonomic Bulletin & Review, 14, 779–804.
Wagenmakers, E.-J., Wetzels, R., Borsboom, D., & van der Maas, H. L. J. (2011). Why psychologists must change the way they analyze their data: The case of psi: Comment on Bem (2011). Journal of Personality and Social Psychology, 100, 426–432.
Wallach, L., & Wallach, M. A. (1994). Gergen versus the mainstream: Are hypotheses in social psychology subject to empirical test? Journal of Personality and Social Psychology, 67, 233–242.
Wetzels, R., Matzke, D., Lee, M. D., Rouder, J. N., Iverson, G. J., & Wagenmakers, E.-J. (2011). Statistical evidence in experimental psychology: An empirical comparison using 855 t tests. Perspectives on Psychological Science, 6, 291–298.