
4 - Following the Scientific Path in Applied Psychology

from Part II - Beginning your Career

Published online by Cambridge University Press: 21 July 2022

Mitchell J. Prinstein
Affiliation:
University of North Carolina, Chapel Hill

Summary

We examine why science is important to applied psychology, even if one's motivation to be a psychologist is primarily practical. Helping others takes knowledge and skill, and applied psychologists often face situations that do not produce immediate or clear outcomes. In such situations experiential learning can only do so much, and science is needed to be effective in the long term. When the history of training models in applied psychology is reviewed from the inception of the field to the present day, it is clear that students of applied psychology need to learn how to do research that will inform practice, how to assimilate the research evidence as it emerges, and how to incorporate empiricism into practice itself. We argue that the kind of knowledge needed by practitioners requires a focus on the needs of those served by psychologists, a more personalized and process-based research approach, and a laser-like focus on issues of broad importance. A scientist-practitioner is a consumer of research, but is also able to identify, acquire, develop, and apply empirically supported treatments and assessments to those in need, and to think about their own work with an empirical mindset.

The Portable Mentor: Expert Guide to a Successful Career in Psychology, pp. 88–101. Cambridge University Press, 2022.

This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC-BY-NC 4.0 (https://creativecommons.org/cclicenses/).

If the average applied psychology student is asked confidentially why they are pursuing a career in their field, the most likely answer is "to help people." Although this answer is such a cliché that it sometimes causes graduate admissions committee members to wrinkle their noses, it is in fact perfectly appropriate. The ultimate purpose of applied psychology is to alleviate human suffering and promote human health and happiness. Unfortunately, good will does not necessarily imply good outcomes. If mere intention were enough, there would never have been a reason for psychology in the first place, because human beings have always desired a happy life and shown compassion for others. It is not enough for psychology students to want to help: they must also know how to help.

In most areas of human skill and competence, “know-how” comes in two forms, and psychology is no exception. Sometimes knowledge is acquired by actually doing a task, perhaps with guidance and shaping from others, and with a great deal of trial and error. This approach is especially helpful when the outcomes of action are immediate, clear, and limited to a specific range of events. Motor skills such as walking or shooting a basketball are actions of that kind. The baby trying to learn to walk stands and then falls hundreds of times before the skill of walking is acquired. The basketball goes through the hoop or it does not, providing just the feedback needed – even experienced players will shoot hundreds of times a day to keep this skill sharp. In areas such as these, “practice makes perfect,” or at least adequate.

Sometimes, however, knowledge is best acquired in part through verbal rules. This approach is especially helpful when a task is complex and the outcomes are probabilistic, delayed, subtle, and multifaceted. You could never learn to send a rocket to the moon or to build a skyscraper through direct experience alone. For rule-based learning to be effective, however, the rules themselves have to be carefully tested and systematized. One of the greatest inventions of human beings in the last 2,000 years has been the development of the scientific method as a means of generating and testing rules that work. Human "know-how" has advanced most quickly in areas that are most directly touched by science, as a glance around almost any modern living room will confirm.

The problem faced by students of applied psychology is that the desire to be of help immediately pushes in the direction of "learning by doing," even though the situations applied psychologists face often do not produce outcomes that are immediate, clear, or within a known range of options. Consider parents who want to know how to raise their children. There are times when poor advice can seem to produce good immediate outcomes at the expense of long-term success. For example, telling children they are doing wonderfully, no matter what, may feel good initially, but the children may grow up with a sense of entitlement and a poor understanding of how much hard work is needed to succeed. Similarly, a clinician in psychotherapy can do an infinite number of things. The immediate results are a weak guide to the acquisition of real clinical know-how because effects can be delayed, probabilistic, subtle, and multifaceted.

All of this would be admitted by everyone were it not for two things. First, some aspects of the clinical situation are, and need to be, responsive to directed shaping and trial-and-error learning. Experience alone may teach clinicians how to behave in the role of a helper, for example. As the role is acquired, the confidence of clinicians will almost always increase, because the clinician "knows what to do." Some of this kind of learning is truly important, such as learning to relate to another person in a genuine way, but trial and error does not necessarily lead to an increase in the ability to actually produce desired clinical outcomes. That brings us to the second feature of the situation that can mistakenly capture the actions of students in professional psychology. Clients change for many reasons, and what practitioners cannot see, without specific attempts to do so, is what would have happened had they done something different. Many medical practices (e.g., blood-letting, mud packs) survived for centuries due to the judgmental bias produced by this process. Many problems wax and wane regardless of intervention, and some features of professional interventions are reassuring and helpful almost regardless of the specifics. Thus, with experience, most practitioners feel not only confident but also competent, because in general it appears that good outcomes are being achieved. It is natural in these circumstances for the practitioner to respond based on their "clinical experience."

That is a mistake. Over more than half a century, in virtually every area in which clinical judgment has been pitted against statistical prediction, statistical prediction does a better job (Grove & Lloyd, 2006). Yet even when faced with clear clinical failures, practitioners are most likely to rely on clinical judgment rather than objective data to determine what to do next (Stewart & Chambless, 2008). This suggests that it can be psychologically difficult to integrate the rules that emerge from research with one's actual history of ongoing effort to be of help to others.
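
To make the contrast concrete, the following toy simulation illustrates one mechanism the judgment literature often points to: a statistical rule applies the same weights to the same cues on every case, whereas human judges tend to weight cues inconsistently from case to case. Everything here is invented for illustration; it is a minimal sketch in Python, not an analysis drawn from the sources cited above.

import numpy as np

rng = np.random.default_rng(0)
n_train, n_test = 500, 500
true_w = np.array([0.5, 0.3, 0.2])  # hypothetical cue weights

# Past and new cases: three cues per case, outcome = weighted cues + noise.
X_train = rng.normal(size=(n_train, 3))
y_train = X_train @ true_w + rng.normal(scale=0.5, size=n_train)
X_test = rng.normal(size=(n_test, 3))
y_test = X_test @ true_w + rng.normal(scale=0.5, size=n_test)

# Actuarial rule: fixed weights estimated once from past cases.
w_hat, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
pred_rule = X_test @ w_hat

# Simulated judge: attends to the right cues, but the weights wobble
# from case to case, as studies of human judgment repeatedly find.
wobble = rng.normal(scale=0.3, size=(n_test, 3))
pred_judge = ((true_w + wobble) * X_test).sum(axis=1)

for name, pred in [("statistical rule", pred_rule),
                   ("inconsistent judge", pred_judge)]:
    print(f"{name}: r with outcome = {np.corrcoef(pred, y_test)[0, 1]:.2f}")

Because the rule's weights never drift, its predictions track the outcome more closely; the judge's wobbling weights add noise that no amount of additional experience removes.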

Part of the problem is that science can suggest courses of action that are not personally preferred, which takes considerable psychological flexibility to overcome. Consider the use of exposure methods in anxiety disorders, which arguably have stronger scientific support than any other form of psychological intervention for any mental health problem (Abramowitz et al., 2019). Despite overwhelming empirical support, few clients receive this treatment, and when they do, it is often not delivered properly (Farrell et al., 2013). Dissemination research has helped explain this distressing fact. Meta-analyses show that training in exposure increases knowledge about it, but not its use (Trivasse et al., 2020). Instead, what most determines use of exposure is the psychological posture of clinicians themselves. When practitioners are unwilling to feel their own discomfort over causing discomfort in someone else, even if it will help them, they avoid using exposure methods or detune their delivery (Scherr et al., 2015). Problems of this kind abound in evidence-based care. As another example, drug and alcohol counselors need to learn to sit with their discomfort over "using drugs to treat the use of drugs" to encourage the use of methadone for clients addicted to heroin (Varra et al., 2008). Rules alone do not ensure use of evidence-based practices: practitioners themselves need to be open to the psychological difficulties of that scientific journey, and scientists need to think of practitioners more as people than as mere tools for dissemination (Hayes & Hofmann, 2018a).

In one sense, scientist-practitioners are those who have deliberately stepped into the ambiguity that lies between the two kinds of "know-how." They are willing to live with the conflict between the urgency of helping others and the sometimes slow pace of scientific knowledge. Fortunately, due to the past efforts of others, in most areas of applied psychology this is a road that fits with provider values: this openness to discomfort serves a larger purpose. There is considerable evidence that the use of empirically supported procedures increases positive outcomes (Hayes & Hofmann, 2018b). When agencies convert to the use of such methods, client outcomes are better, especially if practitioners are encouraged to fit specific methods to specific client needs (Weisz et al., 2012). Improvements tend to be longer-lasting (Cukrowicz et al., 2011), and staff turnover is reduced (Aarons et al., 2009).

But in other ways, this is a road with difficulties. Most patients given psychosocial treatment do not receive evidence-based care (Wolitzky-Taylor et al., 2015). There are some understandable reasons. Adherence to treatment manuals does not alone guarantee good outcomes (Shadish et al., 2000), and the important work of learning how to use scientifically supported methods in more flexible ways to fit individual needs is still in its infancy (Fisher & Boswell, 2016; Hayes, Hofmann, & Stanton, 2020). It is important to know the specific processes of change that account for the effects of these methods, but these are often not clear (Hayes, Hofmann, & Ciarrochi, 2020; La Greca et al., 2009). While there is considerable evidence that relationship factors are key to many clinical outcomes (Norcross & Wampold, 2011), there remains limited evidence about the specific variables that alter these factors while maintaining positive outcomes (Creed & Kendall, 2005; Hayes, Hofmann, & Ciarrochi, 2020).

What often drives the research of an applied scientist is the possibility of doing a greater amount of good by reaching a larger number of people than could be reached directly. Ultimately the idea that scientifically filtered processes and procedures will help more people more efficiently and effectively is the dream of applied science. Unfortunately, this dream is surprisingly hard to realize. It is difficult to produce research that will be consumed by others and that will make a difference in applied work. For the practitioner, a reliance on scientifically based procedures will not fully remove the tension between clinical experience and scientific forms of knowing, because virtually no technologies exist that are fully curative, and only a fraction of clients will respond fully and adequately based on what is now known.

This chapter is for students who are considering taking “the scientific path” in their applied careers. We will discuss how to be effective within the scientist-practitioner model, whether in the clinic or in the research laboratory. We will briefly examine its history, and then consider how to produce and consume research in a way that makes a difference.

1. History of the Scientist-Practitioner Model

From the early days of applied psychology, science and practice were thought of by many as inseparable. This is exemplified by Lightner Witmer's claim that:

The pure and the applied sciences advance in a single front. What retards the progress of one, retards the progress of the other; what fosters one, fosters the other. But in the final analysis the progress of psychology, as of every other science, will be determined by the value and amount of its contributions to the advancement of the human race. (Witmer, 1907/1996)

This vision began to be formalized in 1947 (Shakow et al., 1947), when the American Psychological Association adopted as standard policy the idea that professional psychology graduate students would be trained both as scientists and as practitioners. In August of 1948, a collection of professionals representing the spectrum of behavioral health care providers met in Boulder, Colorado with the intent of defining the content of graduate training in clinical psychology. One important outcome of this two-week-long conference was the unanimous recommendation to adopt the scientist-practitioner model of training. At the onset of the conference, not all attendees were in agreement on this issue; some doubted that a true realization of this model was even possible. Nevertheless, there were at least five general reasons for the unanimous decision.

The first reason was the understanding that specialization in one area versus the other tended to produce a narrowness of thinking, necessitating training programs that promoted flexibility in thinking and action. It was believed that when "persons within the same general field specialize in different aspects, as inevitably happens, cross-fertilization and breadth of approach are likely to characterize such a profession" (Raimy, 1950, p. 81).

The second reason was the belief that training in both practice and research could begin to remedy the lack of useful scientific information about effective practice then available. It was hoped that research conducted by those interested in practice would yield information useful in guiding applied decisions.

The third reason for the adoption of the scientist-practitioner model was the generally held belief that there would be no problem finding students capable of fulfilling the prescribed training. The final two reasons concerned the cooperative potential of merging the two roles: it was believed that a scientist who held at hand many clinical questions would be able to set forth a research agenda adequate for answering those questions, and could expect economic support for research agendas from clinical endeavors.

Despite the vision from the Boulder Conference, its earnest implementation was still very much in question. This sentiment was exemplified by Raimy (1950):

Too often, however, clinical psychologists have been trained in rigorous thinking about nonclinical subject matter and clinical problems have been dismissed as lacking in “scientific respectability.” As a result, many clinicians have been unable to bridge the gap between their formal training and scientific thinking on the one hand, and the demands of practice on the other. As time passes and their skills become more satisfying to themselves and to others, the task of thinking systematically and impartially becomes more difficult.

(p. 86)

The scientist-practitioner model was revisited in conference form quite frequently in the years that followed. While these conferences tended to reaffirm the belief in the strength of the model, they also revealed an undercurrent of dissatisfaction and disillusionment with the model as it was applied in practice. The scientist-practitioner split feared by the original participants in the Boulder Conference gradually became more and more of a reality. In 1961, a report published by the Joint Commission on Mental Health voiced concerns regarding this split. In 1965, a conference was held in Chicago where the participants displayed open disgruntlement about the process of adopting and applying the model (Hoch et al., 1966).

The late 1960s and 1970s brought a profound change in the degree of support for the scientist-practitioner model. Professional schools were created, at first within the university setting and then in free-standing form (Peterson, 1968, 1976). The Vail Conference went far beyond previous conferences in explicitly endorsing the creation of doctor of psychology degrees and downplaying the scientist-practitioner model as the appropriate model for professional training in psychology (Korman, 1976). The federal government, however, began to fund well-controlled and large-scale psychosocial research studies, providing a growing impetus for the creation of a research base relevant to practice.

The 1980s and 1990s saw contradictory trends. The split of the American Psychological Society (now the Association for Psychological Science) from the American Psychological Association, a process largely led by scientist-practitioners, reflected the growing discontent of scientist-practitioners with a professional psychology disconnected from science (Hayes, 1987). Professional schools, few of which adopted a scientist-practitioner model, proliferated but began to run into economic problems as the managed care revolution undermined the dominance of psychology as a form of independent practice (Hayes et al., 1995). The federal government began to actively promote evidence-based practice through a wide variety of funded initiatives in dissemination, diffusion, and research/practice collaboration. Research-based clinical practice guidelines began to appear (Hayes & Gregg, 2001), and the field of psychology began to launch formal efforts to summarize a maturing clinical research literature, such as the Division 12 initiative in developing a list of empirically supported treatments (Chambless et al., 1996). An outgrowth of APS, the Academy of Psychological Clinical Science (APCS), began with a 1994 conference on "Psychological Science in the 21st Century." In 1995, the APCS was formally established and began recognizing doctoral and internship programs that advocated science-based clinical training.

In the 2000s, the movement toward "evidence-based practice" began to take hold in psychology (Goodheart, 2011), but the definition of "evidence" was considerably broadened to give equal weight to the personal experiences of the clinician and to scientific evidence. The penetration of formal scientific evidence into psychological practice continued to be slow (Stewart & Chambless, 2007), a fact that began to receive national publicity. For example, Newsweek ran a story under the title "Ignoring the Evidence: Why do psychologists reject science?" (Begley, 2009). Practical concerns also began to be raised about the dominance of the individual psychotherapy model in comparison to web- and phone-based interventions, self-help approaches, and media-based methods (Kazdin & Blasé, 2011). Treatment guidelines (e.g., Hayes et al., 1995) began to be embraced even by leaders of mainstream psychology (Goodheart, 2011). Finally, more science-based organizations took stronger steps to accredit training programs that emphasize a "clinical scientist" model, and to advocate for these values in the public arena. In 2007, the APCS formally launched the Psychological Clinical Science Accreditation System; in 2011 there were about a dozen doctoral programs accredited by this process; a decade later there are over 60 accredited programs and 12 internships.

The last decade has seen what looks like retrenchment in many ways, but it is better understood as a revitalization and reformation of the scientist-practitioner model. A substantial body of evidence about what practices work best is now available, but the systems for disseminating that evidence are faltering. For example, the National Registry of Evidence-Based Programs and Practices maintained by the Substance Abuse and Mental Health Services Administration in the United States Department of Health and Human Services (www.nrepp.samhsa.gov/) has been shut down by the United States government, and the list of evidence-based intervention methods maintained by the Clinical Psychology Division of the American Psychological Association is being updated only irregularly. At the same time, professional training programs that eschew the importance of science to day-to-day professional practice continue to grow.

With the publication of the fifth edition of the Diagnostic and Statistical Manual of the American Psychiatric Association, the funders of research in mental illness appear to have abandoned hope that research focused on syndromes will ever lead to a deep understanding of mental health problems. In part in response to criticisms of the DSM-5, the National Institute of Mental Health (NIMH) established the Research Domain Criteria (RDoC) program, which aims to classify mental disorders based on processes of change linked to developmental neurobiological changes (Insel et al., 2010).

Meanwhile, psychology is turning in a more process-based direction as well (Hayes & Hofmann, 2018b), with a greater emphasis on theory-based, dynamic, progressive, contextually bound, modifiable, and multilevel changes or mechanisms that occur in predictable, empirically established sequences oriented toward desirable outcomes (Hofmann & Hayes, 2019). If this transition continues, trademarked protocols linked to syndromes will receive less attention as a model of evidence-based therapy. In their place, comprehensive models of evidence-based processes of change, linked to evidence-based intervention kernels that move those processes and help a specific client achieve their desired goals, will receive more attention.

The student of applied psychology needs to think through these issues and consider their implications for professional values. Professionals of tomorrow will face considerable pressures to adopt evidence-based practices. We would argue that this can be a good thing, if psychological professionals embrace their role in the future world of scientifically based professional psychology. Doing so requires learning how to do research that will inform practice, how to assimilate the research evidence as it emerges, and how to incorporate empiricism into practice itself. It is to those topics that we now turn.

2. Doing Research That Makes a Difference

The vast majority of psychological research makes little impact. The modal number of citations for published psychological research between 2005 and 2010 was only two (Kurilla, 2017), and most psychology faculty and researchers are little known outside of their immediate circle of students and colleagues. From this situation we can conclude the following: a psychology student who does what usually comes to mind, based on the typical research models, will make only a limited impact, because that is precisely what others who came to that same end have done. A more unusual approach is needed to do research that makes a difference.

Making a difference in psychological research can be facilitated by clarity about (a) the nature of science, and (b) the information needs of practitioners.

2.1 The Nature of Science

Science is a rule-generating enterprise whose goal is the development of increasingly organized statements of relations among events that allow analytic goals to be met with precision, scope, and depth, based on verifiable experience. There are two key aspects to this definition. First, the product of science is verbal rules based on experiences that can be shared with others. Agreements about scientific method within particular research paradigms tell us how and when certain things can be said: for example, conclusions can be reached when adequate controls are in place, or when adequate statistical analyses have been done. A great deal of emphasis is placed on these issues in psychology education (e.g., issues of "internal validity" and "scientific method"), and we have little more to offer in this chapter on those topics.

Second, these rules have five specific properties of importance: organization, analytic utility, precision, scope, and depth. Scientific products can be useful even when they are not organized (e.g., when a specific fact is discovered that is of considerable importance), but the ultimate goal is to organize these verbal products over time. That is why theories and models are so central to mature sciences.

The verbal products of science are meant to be useful in accomplishing analytic ends. These ends vary from domain to domain and from paradigm to paradigm. In applied psychology, however, the most important analytic ends are implied by the practical goal of the field itself – namely, the prediction and influence of psychological events of practical importance. Not all research practices are equal in producing particular analytic ends. For example, understanding or prediction is of little utility in actually influencing target phenomena if the important components of the theory cannot be manipulated directly. For that reason, it helps to start with the end goal and work backward to the scientific practices that could reach that goal. We will do so shortly by considering the research needs of practitioners.

Finally, we want theories that apply in highly specified ways to given phenomena (i.e., they are precise); apply to a broad range of phenomena (i.e., they have scope); and are coherent across different levels of analysis in science, such as across biology and psychology (i.e., they have depth). Of these, the easiest to achieve is precision, and perhaps for this reason the most emphasis in the early days of clinical science was on the development of manuals and technical descriptions that are precise and replicable. Perhaps the hardest dimension to achieve, however, is scope, and, as we will argue in a moment, that is the property most missing in our current approaches to applied psychology.

2.2 The Knowledge Needed by Practitioners

Over 50 years ago, Gordon Paul eloquently summarized the empirical question that arises for the practitioner: "what treatment, by whom, is most effective for this individual with that specific problem, and under which set of circumstances does that come about" (Paul, 1969). Clients have unique needs and unique problems. For that reason, practitioners need scientific knowledge that tells them what to do to be effective with the specific people with whom they work. It must explain how to change things that are accessible to the practitioner so that better outcomes are obtained. Practitioners also need scientifically established know-how that is broadly applicable to the practical situation and can be learned and flexibly applied with a reasonable amount of effort, in a fashion that is respectful of their professional role.

Clinical manuals have been a major step forward in developing scientific knowledge that focuses on things the clinician can manipulate directly in the practical situation, but not enough work has gone into developing manuals that are easy to master and capable of being flexibly applied to clients with unique combinations of needs (Kendall & Beidas, 2007). With the proliferation of empirically supported manuals, more needs to be done to develop processes that allow the field to synthesize and distill the essence of disparate technologies, and to combine the essential features of various technologies into coherent treatment plans for individuals with mixed needs.

That is a major reason that a focus on processes of change has grown. In essence, Paul's question is being reformulated as this one: "What core biopsychosocial processes should be targeted with this client given this goal in this situation, and how can they most efficiently and effectively be changed?" (Hofmann & Hayes, 2019, p. 38).

The only way that question can be answered is through models and theories that apply to the individual case. It is often said that practitioners avoid theory and philosophy in favor of actual clinical techniques, but an examination of popular psychology books read by practitioners shows that this is false. Practitioners need knowledge with scope, because they often face novel situations with unusual combinations of features. Popular books take advantage of this need by presenting fairly simplified models, often ones that can be expressed in a few acronyms, that claim to have broad applicability.

Broad models and theories are needed in the practice environment because they provide a basis for the use of knowledge when confronted with a new problem or situation, and suggest how to develop new kinds of practical techniques. In addition, because teaching based purely on techniques can become disorganized and incoherent as techniques proliferate, theory and models make scientific knowledge more teachable.

Book publishers, workshop organizers, and others in a position to know how practitioners usually react often cringe when researchers get too theoretical. That reaction makes sense given the kind of theories researchers often promulgate, which are typically complicated, narrow, limited, and arcane. Worse, many theories do not tell clinicians what to do because they do not focus primarily on how to change external variables. Clinical theory is not an end in itself, and thus should not be concerned primarily with "understanding" separated from prediction and influence, nor primarily with the unobservable or unmanipulable.

To be practically useful, psychological theories and models must also be progressive, meaning that they evolve over time to raise new, interesting, and empirically productive questions that generate coherent data. It is especially useful if the model can be developed and modified to fit a variety of applied and basic issues. They also need to be as simple as possible, both in the sense that they are easy to learn and in the sense that they simplify complexity where that can be done.

Finally, to be truly useful, applied research must fit the practical and personal realities of the practice environment. It does no good to create technologies that no one will pay for, that are too complicated for systems of care to adopt, that do not connect with the personal experiences of practitioners, that are focused on methods of delivery that cannot be mounted, or that focus on targets of change that are not of importance. For that reason, applied psychology researchers must be intimately aware of what is happening in the world of practice (e.g., What is managed care? How are practitioners paid? What problems are most costly to systems of care?). The growth of websites, apps, bibliotherapy, peer support, and other ways of delivering psychological help indirectly is exploding. The expansion of psychology from mental health to physical and behavioral health, as well as social health in areas such as prejudice and stigma, is obvious.

2.3 Research of Importance

Putting all of these factors together, applied research programs that make a difference tend to reach the practitioner with both a technology and an underlying theory or model that illuminates how processes of change apply to the individual case, and that is progressive, simplifying, fitted to the practical realities of applied work, learnable, flexible, appealing, effective, broadly applicable, and important. This is a challenging formula, because it demands a wide range of skills from psychological researchers who hope to make an applied impact. Anyone can create a treatment and try to test it. Anyone can develop a narrow "model" and examine a few empirical implications. What is more difficult is figuring out how to develop broadly applicable models that are conceptually simple and interesting and that have clear and unexpected technological implications. Doing so requires living in both worlds: science and practice. The need for this breadth of focus also helps make sense of the broad knowledge of psychological science that is often pursued in more scientifically based clinical programs.

3. The Practical Role of the Scientist-Practitioner

In the practical environment, the scientist-practitioner is an individual who performs three primary roles. First, the scientist-practitioner is a consumer of research, able to identify, acquire, and apply empirically supported treatments and assessments to those in need. This requires well-developed practical skills, but it also requires substantial empirical skills. The purpose of this consumption is to put empirically based procedures into actual practice.

Second, the scientist-practitioner evaluates his or her own program and practices. The modern-day scientist-practitioner "must not only be a superb clinician capable of supervising interventions, and intervening directly on difficult cases, but must also be intimately familiar with the process of evaluating the effectiveness of interventions … and must adapt the scientific method to practical settings" (Hayes et al., 1999, p. 1). This requires knowledge of time-series or "single case" research designs, clinical replication series, effectiveness research approaches, and idiographic analysis of change processes viewed as complex networks, among others. Additive-model group research methods, which use existing programs as a kind of baseline and thus raise far fewer ethical issues than group research protocols with no-treatment control groups, are also gaining in popularity in applied settings.
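
As a concrete illustration of the single-case side of this toolkit, the sketch below scores a hypothetical AB design (baseline phase, then intervention phase) with Nonoverlap of All Pairs (NAP), a simple single-case effect size. The weekly symptom scores are invented for the example; this is a minimal sketch in Python, not a procedure prescribed by the chapter.

# Hypothetical weekly symptom ratings for one client; lower is better.
baseline = [22, 20, 23, 21, 19]       # phase A: before intervention
treatment = [18, 16, 14, 15, 12, 11]  # phase B: during intervention

def nap(phase_a, phase_b, lower_is_better=True):
    """Nonoverlap of All Pairs: the share of (A, B) score pairs in which
    the phase-B score improves on the phase-A score, counting ties as
    half. Roughly 0.5 means chance-level overlap; 1.0 means complete
    separation of the phases."""
    wins = 0.0
    for a in phase_a:
        for b in phase_b:
            if b == a:
                wins += 0.5
            elif (b < a) if lower_is_better else (b > a):
                wins += 1.0
    return wins / (len(phase_a) * len(phase_b))

print(f"NAP = {nap(baseline, treatment):.2f}")  # 1.0 here: every B beats every A

In a real evaluation one would add replications across clients or behaviors, as in multiple-baseline designs, before drawing conclusions, but even this minimal quantification moves practice evaluation beyond impressionistic judgment.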

Third, the scientist-practitioner reports advances to applied and scientific communities, contributing both to greater understanding of applied problems and to the evolution of effective systems of care. In today’s landscape, a wide variety of contributions are possible from practical sites.

For example, clinical replication series and open effectiveness trials in applied settings are highly valued in the empirical clinical literature (e.g., Persons et al., 1999; Watkins et al., 2011). Clinical replication series are large collections of single-case experimental designs and empirical case studies using well-defined treatment approaches and intensive measurement. Their purpose is to determine rates of successes and failures, and the factors that contribute to these outcomes, in a defined patient group.

These kinds of contributions are essential to the overall goal of developing scientific know-how that will help alleviate human suffering. Clinical replication series provide an excellent example. For clinical research to be useful to practitioners, it must be known what kinds of clients are most likely to respond to what kinds of treatments in real-world settings. Indeed, sometimes methods that succeed in highly controlled efficacy trials fail in effectiveness trials when real-world issues are factored in (e.g., Hallfors et al., 2006). This question cannot be adequately answered purely based on data from major research centers, because the number and variety of clients needed to address such questions are much too large. Only practitioners have the client flow and practical interest that formal clinical replication series demand.

As processes of change have come to the fore, the role of idiographic research has also been increasingly emphasized (Hayes et al., 2019). That is true for several reasons, but a profound one is that behavioral science is realizing what the physical sciences concluded 90 years ago: processes of change identified in collections of individual units will apply to those individual units only if the units are "ergodic," that is, only if they are identical and unchanging (Molenaar & Campbell, 2009). People are neither. That means psychology will not be able to understand how change processes work unless it begins with idiographic findings, and only the practice base has adequate access to the number of cases needed, one at a time.
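
The ergodicity point can be made concrete with a small simulation (our illustration; the variables and numbers are invented). Below, a group-level analysis finds that people who practice a coping skill more report more distress, while the within-person process runs the other way: on days when a given person practices more than usual, that day's distress is lower. Averaging across people gets the sign of the individual process wrong.

import numpy as np

rng = np.random.default_rng(1)
n_people, n_days = 50, 60

# Trait severity drives both habitual practice and overall distress,
# creating a positive association *between* people.
severity = rng.normal(size=n_people)
practice = severity[:, None] + rng.normal(scale=0.5, size=(n_people, n_days))

# *Within* a person, practicing more than usual lowers that day's distress.
deviation = practice - severity[:, None]
distress = (2 * severity[:, None] - deviation
            + rng.normal(scale=0.5, size=(n_people, n_days)))

between_r = np.corrcoef(practice.mean(axis=1), distress.mean(axis=1))[0, 1]
within_r = np.mean([np.corrcoef(practice[i], distress[i])[0, 1]
                    for i in range(n_people)])

print(f"between-person r = {between_r:+.2f}")     # positive
print(f"mean within-person r = {within_r:+.2f}")  # negative

Nomothetic and idiographic analyses of the same data can disagree even in sign, which is why process-based work needs to begin at the level of the individual.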

3.1 The Scientist-Practitioner in Organized Healthcare Delivery Systems

The combination of roles embraced by scientist-practitioners gives them a special place in the healthcare marketplace as organized systems of care become more dominant. No one else is better prepared to triage clients into efficient methods of intervention, to train and supervise others in the delivery of cost-effective and empirically based approaches, to deliver these approaches themselves, to work with complicated or unresponsive cases and learn how to innovate new approaches, and to evaluate these delivery systems.

4. Looking Ahead

The history of science suggests that, in the long run, society will ultimately embrace scientific knowing over know-how that emerges from trial and error whenever substantial scientific evidence exists. That has happened in architectural and structural design, public health, physical medicine, food safety, and myriad other areas, presumably because scientific know-how is a better guide to effective practices. The same shift is beginning to occur in mental health and substance abuse areas. But while progress has been made in the identification of techniques that are effective with specific problems or in promoting specific goals, it is clear that we still have a long way to go. Today’s students will help decide how fast the transition to an empirically based profession will be.

If the trends seen in other fields are a good guide, ultimately applied psychology will be required to adopt an evidence-based model. In the present day, however, professional trends continue to pull the field in both directions. Some in the practice leadership have argued against embracing the movement toward empirically supported treatments, preferring instead the adoption of new forms of professional training (e.g., pharmacotherapy training).

Meanwhile, changes in the field itself make the scientist-practitioner model more viable than ever. For example, the skills needed to add value to organized behavioral healthcare delivery systems are precisely those emphasized by the scientist-practitioner model. Idiographic analysis of processes of change requires a vast network of evidence-based practitioners. Expansion from mental health to behavioral health and a positive social goal will require careful empirical thinking. The scientist-practitioner model may yet provide the common ground upon which psychology as a discipline can become more relevant to human society.

Students of professional psychology will have a large role in determining how these struggles for identity will ultimately work themselves out. The scientific path is not an easy one for applied psychology students to take, but for the sake of humanity, it seems to be the one worth taking.

References

Aarons, G. A., Sommerfeld, D. H., Hecht, D. B., Silovsky, J. F., & Chaffin, M. J. (2009). The impact of evidence-based practice implementation and fidelity monitoring on staff turnover: Evidence for a protective effect. Journal of Consulting and Clinical Psychology, 77, 270–280.
Abramowitz, J. S., Deacon, B. J., & Whiteside, S. P. H. (2019). Exposure therapy for anxiety: Principles and practice (2nd ed.). New York: Guilford.
Begley, S. (2009, October 12). Ignoring the evidence: Why do psychologists reject science? Newsweek, p. 30.
Chambless, D. L., Sanderson, W. C., Shoham, V., Johnson, S. B., Pope, K. S., Crits-Christoph, P., Baker, M., Johnson, B., Woody, S. R., Sue, S., Beutler, L., Williams, D. A., & McMurry, S. (1996). An update on empirically validated therapies. The Clinical Psychologist, 49, 5–18.
Creed, T., & Kendall, P. C. (2005). Empirically supported therapist relationship building behavior within a cognitive-behavioral treatment of anxiety in youth. Journal of Consulting and Clinical Psychology, 73, 498–505.
Cukrowicz, K. C., Timmons, K. A., Sawyer, K., Caron, K. M., Gummelt, H. D., & Joiner, T. R. (2011). Improved treatment outcome associated with the shift to empirically supported treatments in an outpatient clinic is maintained over a ten-year period. Professional Psychology: Research and Practice, 42, 145–152.
Farrell, N. R., Deacon, B. J., Kemp, J. J., Dixon, L. J., & Sy, J. T. (2013). Do negative beliefs about exposure therapy cause its suboptimal delivery? An experimental investigation. Journal of Anxiety Disorders, 27(8), 763–771. doi:10.1016/j.janxdis.2013.03.007
Fisher, A. J., & Boswell, J. F. (2016). Enhancing the personalization of psychotherapy with dynamic assessment and modeling. Assessment, 23(4), 496–506.
Goodheart, C. D. (2011). Psychology practice: Design for tomorrow. American Psychologist, 66, 339–347.
Grove, W. M., & Lloyd, M. (2006). Meehl's contribution to clinical versus statistical prediction. Journal of Abnormal Psychology, 115, 192–194.
Hallfors, D., Cho, H., Sanchez, V., Khatapoush, S., Kim, H., & Bauer, D. (2006). Efficacy vs effectiveness trial results of an indicated "model" substance abuse program: Implications for public health. American Journal of Public Health, 96, 2254–2259.
Hayes, S. C. (1987). The gathering storm. Behavior Analysis, 22, 41–47.
Hayes, S. C., & Gregg, J. (2001). Factors promoting and inhibiting the development and use of clinical practice guidelines. Behavior Therapy, 32, 211–217.
Hayes, S. C., & Hofmann, S. G. (2018a). A psychological model of the use of psychological intervention science: Seven rules for making a difference. Clinical Psychology: Science and Practice, 25(3), e12259. doi:10.1111/cpsp.12259
Hayes, S. C., & Hofmann, S. G. (Eds.) (2018b). Process-based CBT: The science and core clinical competencies of cognitive behavioral therapy. Oakland, CA: Context Press/New Harbinger Publications.
Hayes, S. C., Follette, V. M., Dawes, R. M., & Grady, K. E. (Eds.) (1995). Scientific standards of psychological practice: Issues and recommendations. Reno, NV: Context Press.
Hayes, S. C., Barlow, D. H., & Nelson-Gray, R. O. (1999). The scientist-practitioner: Research and accountability in the age of managed care. Boston, MA: Allyn and Bacon.
Hayes, S. C., Hofmann, S. G., Stanton, C. E., Carpenter, J. K., Sanford, B. T., Curtiss, J. E., & Ciarrochi, J. (2019). The role of the individual in the coming era of process-based therapy. Behaviour Research and Therapy, 117, 40–53. doi:10.1016/j.brat.2018.10.005
Hayes, S. C., Hofmann, S. G., & Ciarrochi, J. (2020). A process-based approach to psychological diagnosis and treatment: The conceptual and treatment utility of an extended evolutionary model. Clinical Psychology Review, 82, 101908. doi:10.1016/j.cpr.2020.101908
Hayes, S. C., Hofmann, S. G., & Stanton, C. E. (2020). Process-based functional analysis can help behavioral science step up to the challenges of novelty: COVID-19 as an example. Journal of Contextual Behavioral Science, 18, 128–145. doi:10.1016/j.jcbs.2020.08.009
Hoch, E. L., Ross, A. O., & Winder, C. L. (Eds.) (1966). Professional education of clinical psychologists. Washington, DC: American Psychological Association.
Hofmann, S. G., & Hayes, S. C. (2019). The future of intervention science: Process-based therapy. Clinical Psychological Science, 7(1), 37–50. doi:10.1177/2167702618772296
Insel, T., Cuthbert, B., Garvey, M., Heinssen, R., Pine, D. S., Quinn, K., Sanislow, C., & Wang, P. (2010). Research Domain Criteria (RDoC): Toward a new classification framework for research on mental disorders. American Journal of Psychiatry, 167(7), 748–751. doi:10.1176/appi.ajp.2010.09091379
Kazdin, A. E., & Blasé, S. L. (2011). Rebooting psychotherapy research and practice to reduce the burden of mental illness. Perspectives on Psychological Science, 6, 21–37.
Kendall, P. C., & Beidas, R. S. (2007). Smoothing the trail for dissemination of evidence-based practices for youth: Flexibility within fidelity. Professional Psychology: Research and Practice, 38, 13–20.
Korman, M. (Ed.) (1976). Levels and patterns of professional training in psychology. Washington, DC: American Psychological Association.
Kurilla, B. (2017). How many citations does a typical research paper in psychology receive? Geek Psychologist Blog. Retrieved October 18, 2020 from http://geekpsychologist.com
La Greca, A. M., Silverman, W. K., & Lochman, J. E. (2009). Moving beyond efficacy and effectiveness in child and adolescent intervention research. Journal of Consulting and Clinical Psychology, 77, 373–382.
Molenaar, P. C. M., & Campbell, C. G. (2009). The new person-specific paradigm in psychology. Current Directions in Psychological Science, 18(2), 112–117.
Norcross, J. C., & Wampold, B. E. (2011). Evidence-based therapy relationships: Research conclusions and clinical practices. Psychotherapy, 48, 98–102.
Paul, G. L. (1969). Behavior modification research: Design and tactics. In Franks, C. M. (Ed.), Behavior therapy: Appraisal and status. New York: McGraw Hill.
Persons, J. B., Bostrom, A., & Bertagnolli, A. (1999). Results of randomized controlled trials of cognitive therapy for depression generalize to private practice. Cognitive Therapy and Research, 23, 535–548.
Peterson, D. R. (1968). The clinical study of social behavior. New York: Appleton-Century-Crofts.
Peterson, D. R. (1976). Is psychology a profession? American Psychologist, 31, 572–581.
Raimy, V. C. (Ed.) (1950). Training in clinical psychology (Boulder Conference). New York: Prentice-Hall.
Scherr, S. R., Herbert, J. D., & Forman, E. M. (2015). The role of therapist experiential avoidance in predicting therapist preference for exposure treatment for OCD. Journal of Contextual Behavioral Science, 4(1), 21–29.
Shadish, W. R., Matt, G. E., Navarro, A. M., & Phillips, G. (2000). The effects of psychological therapies under clinically representative conditions: A meta-analysis. Psychological Bulletin, 126, 512–529.
Shakow, D., Hilgard, E. R., Kelly, E. L., Luckey, B., Sanford, R. N., & Shaffer, L. F. (1947). Recommended graduate training program in clinical psychology. American Psychologist, 2, 539–558.
Stewart, R. E., & Chambless, D. L. (2007). Does psychotherapy research inform treatment decisions in private practice? Journal of Clinical Psychology, 63, 267–281.
Stewart, R. E., & Chambless, D. L. (2008). Treatment failures in private practice: How do psychologists proceed? Professional Psychology: Research and Practice, 39(2), 176–181.
Trivasse, H., Webb, T. L., & Waller, G. (2020). A meta-analysis of the effects of training clinicians in exposure therapy on knowledge, attitudes, intentions, and behavior. Clinical Psychology Review, 80, 101887.
Varra, A. A., Hayes, S. C., Roget, N., & Fisher, G. (2008). A randomized control trial examining the effect of Acceptance and Commitment Training on clinician willingness to use evidence-based pharmacotherapy. Journal of Consulting and Clinical Psychology, 76, 449–458.
Watkins, K. E., Hunter, S. B., Hepner, K. A., Paddock, S. M., de la Cruz, E., Zhou, A. J., & Gilmore, J. (2011). An effectiveness trial of group cognitive behavioral therapy for patients with persistent depressive symptoms in substance abuse treatment. Archives of General Psychiatry, 68, 577–584.
Weisz, J. R., Chorpita, B. F., Palinkas, L. A., Schoenwald, S. K., Miranda, J., Bearman, S. K., Daleiden, E. L., Ugueto, A. M., Ho, A., Martin, J., Gray, J., Alleyne, A., Langer, D. A., Southam-Gerow, M. A., Gibbons, R. D., & Research Network on Youth Mental Health (2012). Testing standard and modular designs for psychotherapy treating depression, anxiety, and conduct problems in youth: A randomized effectiveness trial. Archives of General Psychiatry, 69, 274–282. doi:10.1001/archgenpsychiatry.2011.147
Witmer, L. (1907/1996). Clinical psychology. American Psychologist, 51, 248–251. (Original article from The Psychological Clinic, 1907, 1, 1–9.)
Wolitzky-Taylor, K., Zimmermann, M., Arch, J. J., De Guzman, E., & Lagomasino, I. (2015). Has evidence-based psychosocial treatment for anxiety disorders permeated usual care in community mental health settings? Behaviour Research and Therapy, 72, 9–17.
