Copious evidence suggests that many clinical psychologists today, perhaps the majority, are deeply ambivalent about the role of science in informing their practice. For instance, they value personal clinical experience over research evidence (Groopman, 2007), tend to use assessment practices that have dubious psychometric support (Garb, Wood, Lilienfeld, & Nezworski, 2005), and tend not to use procedures for which there is the strongest evidence of efficacy (Barlow et al., 1999; Crow et al., 1999; Haas & Clopton, 2003; Hollon et al., 2002; Motta, Little, & Tobin, 1993; Phillips & Brandon, 2004; Thomas & Jolley, 1998; T. C. Wade & Baker, 1977). Thus, the current practices and views of clinical psychologists are very similar to those of physicians in the early 1900s.
At that point in its developmental trajectory, medicine was at equipoise between an intuitive enterprise that largely depended on personal experience and clinical folklore and an enterprise founded on the rational application of scientific evidence. In fact, for much of its history, medicine resembled clinical psychology as it currently exists—that is, experiencing spirited debate about and resistance to the idea of accepting scientific research and theory as the preeminent arbiter of psychological practice (reflecting a schism dating back at least to the conflict between Empiricists and Rationalists in the first century BC; e.g., Porter, 1997). There are other similarities between clinical psychology and prescientific medicine. Like clinical psychology, for much of its evolution, medicine intended to apply research findings to the resolution of exigent problems that the individual clinician encountered. The clinician (be it a barber, surgeon, or physician) presented himself or herself as having the knowledge, skills, and responsibility to ameliorate or treat a host of problems, but in fact, the clinician often did not have any specialized knowledge or tools that would be effective in this regard (see the arguments for nonspecific effects of psychotherapy below). Similarly, at various points in the past, we clinical psychologists have presented ourselves as having the knowledge and skills to treat conditions such as schizophrenia, bipolar illness, and autism when, in fact, we had no scientific basis for entering the fray.
What else does the history of medicine reveal about clinical psychology's plight? For much of its existence, medical training occurred in free-standing programs outside of universities. Notable tension existed between those who believed that medical decisions should be based on science and those who valued traditional empiricism (i.e., informal individual observation), personal clinical experience, and tradition. This debate regarding the proper basis of medical practice played out for much of the last 2,000 years.
In its earliest incarnations, medicine was viewed as a craft or an art (see Numbers, in press, for a fascinating review of the conflict between scientific and nonscientific approaches to medicine, which informed the review below). For instance, both Aristotle and the Hippocratic writers labeled medical practice as techne—that is, an art. However, by the time of the Roman Empire, the dispute over the proper role of science was ongoing, with Pliny regarding medicine as part of natural history and Galen viewing it as akin to archery, benefiting more from practice than from reasoning (French, 1994; Talbot, 1978). In the 12th century, the esteemed "medicus" William of Malmesbury endorsed practice, not "scientia," as the basis of his skills. Conversely, Taddeo Alderotti, a highly respected 13th-century physician from Bologna, stressed that the proper basis of medicine was theoretically inspired science; without such grounding, medicine could not be distinguished from "the usual practice that old women carry on" (Siraisi, 1977, p. 30). Similarly, the 14th-century French physician Guy de Chauliac observed, "If the doctors have not learned geometry, astronomy, dialectics, nor any other good discipline, soon the leather workers, carpenters, and furriers will quit their own occupations and become doctors" (Bullough, 1966; quoted in Numbers, in press). This point–counterpoint reverberated well into recent times. Auguste Comte rejected the proposition that clinical decisions should be based on empirical, probabilistic grounds. As recently as the 1930s, the eminent historian of science Henry Sigerist proclaimed that medicine was neither an "applied science" nor a "branch of science" (Sigerist, 1936). Thus, throughout much of its history, medicine was beset by debate about whether science or clinical experience and intuition should guide practice (Numbers, in press). Those who championed clinical experience often noted that probabilistic science could not be applied successfully because each person is unique, the clinical encounter is too complex to be captured by formulas, and sufficient scientific evidence did not exist.
Accepting that the current scientific grounding of medicine is a virtue, it seems instructive to identify those events that secured its current status. A historical review reveals that proclaiming that medicine should be scientific counted for very little. Indeed, these proclamations had negligible impact despite their repetition over the ages. Moreover, an individual's own avowals that his or her approach to medicine was scientific seem to be similarly inert; calling something scientific does not make it so. In fact, the cloak of scientific respectability has been so appealing that even such "healers" as Mary Baker Eddy, the founder of Christian Science, stressed repeatedly that her healing system was scientific in nature (Glover, 1875). Similarly, Palmer, who advocated "the science of magnetic healing" prior to ultimately founding chiropractic, proclaimed, "I ascertained these truths, acquired instruction, heretofore unrecognized, regarding the performance of function in health and disease. I systematized and correlated these principles, made them practical. By doing so I created, brought into existence, originated a science, which I named Chiropractic; therefore, I am a scientist" (Peterson & Wiese, 1995; quoted in Numbers, in press). Thus, throughout the evolution of scientific medicine, many doctors or "healers" paid lip service to science but failed to base their work on science, or they generated ad hoc explanations for why their practices were valuable despite an absence of scientific support.
If not lip service and public proclamations, what did foster a more scientific approach to medicine? A crucial element in the evolution of medicine, certainly in the United States, was the transfer of medical training from free-standing proprietary schools to ones formally housed within universities (Bullough, 1966). Until the early 1900s, the majority of doctors trained in the United States were trained in proprietary medical schools that emphasized practice and tradition and deemphasized basic science. As one observer noted, "it is vain to expect that medicine, as a science, can be widely known and diffused, when it is not taught as a science in the schools" (Jackson, 1849, p. 361). It is not surprising, therefore, that doctors trained in free-standing, for-profit schools contributed little to scientific knowledge and also continued the practices of bleeding, blistering, purging, and puking (Numbers, in press), despite no scientific evidence of efficacy.
What actually brought about this radical transformation in medical education starting in the early 1900s—a shift from nonempirical training in free-standing, proprietary medical schools to science-based training within established universities? The change often is attributed to a single event: the publication of the Flexner report in 1910 (Flexner, 1910). However, the full story is more complex and illuminating. Prior to the Flexner report, the American Medical Association (AMA), at the urging of high-profile academic physicians, already had launched a campaign to transform medical education from arts-and-crafts training into formal training in applied science. The AMA appointed five prominent clinical scientists to its Council of Medical Education and asked the Council to review and evaluate medical education. In 1906, there were 162 medical schools in the United States; the Council examined all the medical schools and found only 82 to be acceptable. The AMA chose not to publish these findings, however, choosing instead to ask an outside agency—the Carnegie Foundation for the Advancement of Teaching—to conduct a similar, independent review. This review, which yielded similar results, culminated in the publication of the influential Flexner report.
Even prior to the Flexner report, however, state medical licensing boards, with the AMA's backing, had begun to increase their licensing requirements, asking applicants to document that they had been trained adequately in the basic sciences. In addition, the AMA Council began grading medical schools on a clear, quantitative outcome criterion: their graduates' scores on state licensing examinations. This grading system made it difficult for most free-standing, proprietary, tuition-driven medical schools to compete and survive. Not only did their students earn lower scores on the exams, but the programs did not provide their students with the required training and resources specified in the licensing requirements—that is, a science-based curriculum, adequate faculty, high admission standards, and essential facilities (e.g., libraries, laboratories, and physical resources). According to Starr (1982), "proprietary medical colleges faced a Hobson's choice" (p. 119). Complying with the new requirements meant higher admission standards, which meant fewer tuition-paying students, higher costs per trainee, and lower profits. However, disregarding the requirements meant being stigmatized publicly, which meant fewer applicants, hence lower profits. Some proprietary schools simply went out of business or merged with university-based programs. Others, however, attempted to survive by pretending to comply with the higher requirements. As a result, when Flexner visited programs during his review, he uncovered a host of misrepresentations, such as "libraries" with no science books, ghost "faculty members" who spent most of their time away from the program pursuing their private practice, "laboratories" that amounted to little more than a few test tubes, and "admission standards" that would be waived for any student able to pay the fee. The combination of higher licensing requirements and the Council's grading system ultimately led to a dramatic reduction in the number of medical schools, from 162 in 1906 to 95 in 1915. Starr (1982) concluded that "changing economic realities, rather than the Flexner report, were what killed so many medical schools in the years after 1906" (p. 118).
Still, reform did not occur overnight. Inferior medical schools survived, and charlatans continued to practice. (Of course, medicine is not entirely free from such problems even today.) In the 1920s and well into the 1930s, for instance, Morris Fishbein, editor of the Journal of the American Medical Association, pursued an aggressive and persistent campaign of attacks on unfounded medical practices and fraudulent practitioners. He filed charges against unscrupulous practitioners with state licensing boards and even testified in person, urging boards to revoke the licenses of individuals he regarded as charlatans, hucksters, and flimflam artists (e.g., see Brock, 2008, for a lively account of Fishbein's pursuit over many years of one colorful, high-profile charlatan, John R. Brinkley).
All of these events, in combination, contributed to the dramatic reform of medicine—a reform that promoted a scientific approach to education and practice. All medical students were expected to receive broad training in science, not just training in the application of interventions or narrow training in "medical sciences." Moreover, most medical training was expected to take place in university-based medical schools that had high admission standards and adequate resources. A cornerstone of university-based medical school training was a curriculum comprising scientific training in biology, chemistry, physiology, anatomy, and so on. Today much of this training occurs in the undergraduate curriculum that precedes medical school, and considerable additional basic science training continues to be offered in the first years of medical school.
Finally, a critical element in the development of medicine as a scientific enterprise was the demonstration of notable successes that were widely and clearly attributed to the scientific study of disease and its treatment. That is, the value of the scientific approach to medicine was demonstrated by success stories such as those produced by Virchow, Bernard, Fleming, Lister, Pasteur, Koch, and others. Although the particular discoveries made by these pioneers (related to germ theory, penicillin, inoculation, etc.) were highly significant, even more significant was the vindication of the scientific approach. These discoveries were revolutionary, but not because they rendered a significant proportion of disorders tractable. Rather, such discoveries changed the face of medicine because they illuminated the route to cumulative progress.
Medicine, like any human enterprise, is not perfect; occasionally, mindless tradition, human error, fear of lawsuits, ignorance, and cost factors negatively influence medical decisions. Physicians often practice in a manner that is inconsistent with research evidence, and they often are lax in the application of clinical practice guidelines (Hepner et al., 2007; McKinlay, McLeod, Dowell, & Marshall, 2004; Spranger, Ries, Berge, Radford, & Victor, 2004). In addition, physicians frequently use medications for off-label indications, and when they do so, there typically is scant research evidence to support such use (Radley, Finkelstein, & Stafford, 2006).
However, there are important differences between physicians and clinical psychologists in regard to empirically supported practice. For instance, when physicians diverge from empirically based medicine or guideline recommendations, it often is because of factors such as treatment costs, treatment availability, strong patient resistance to recommended treatments, and uncertainty about how to apply guidelines (Farquhar, Kofa, & Slutsky, 2002; Grol, 2001; Rello et al., 2002). Fundamental conflict with the value or appropriateness of evidence-based practice, built on rigorous randomized controlled trials (RCTs), tends not to be an important factor. In fact, physicians see guidelines and other initiatives based on experimental medicine as appropriate and clearly consistent with the intended nature of practice (Malacco et al., 2005; Shea, DePuy, Allen, & Weinfurt, 2007). In one survey, only 3% of family practice physicians disagreed in principle with evidence- or guideline-based practice and indicated resistance to such practice (Wolfe, Sharp, & Wang, 2004). In summary, physicians have positive views regarding experimental evidence and recognize that it constitutes the preeminent touchstone regarding practice (e.g., Farquhar et al., 2002; Schaafsma, Hulshof, van Dijk, & Verbeek, 2004). This fact may explain why physician adherence to evidence-based practice recommendations is often high. In one study, approximately 85% of patients seen at an internal medicine clinic were receiving care that constituted good evidence-based practice (Lucas et al., 2004; also Grol, 2001).
Physicians' openness to scientific evidence also may explain why they are relatively responsive to new research evidence or corrective feedback. This responsiveness can be seen in changes in practice that follow the publication of new data (Bush et al., 2007) and new findings from health task forces (Asano, Toma, Stern, & McLeod, 2004). Prompting physicians to conduct literature searches prior to making care decisions also leads to significant change in practice patterns (Lucas et al., 2004). Certainly some of physicians' tractability can be attributed to the fact that their practice increasingly is monitored (e.g., via electronic medical records), which provides contingent feedback and incentives for adherence. Nevertheless, there is considerable evidence that physicians highly value scientific evidence regarding practice and generally are open to altering their practice in reaction to evidence.
One way to appreciate this evolution in medicine is to understand it as a transformation from credential-based practice to procedure-based practice. The former characterized early medicine and still describes contemporary practice in clinical psychology. In the credential-based model, once individuals earn the critical diploma (MD or PhD) and are granted state licenses to practice, it is assumed that they are competent to (a) diagnose clients’ problems accurately, (b) decide on the most appropriate and effective interventions for these problems, and (c) deliver these interventions faithfully and efficiently. On the basis of the assumption that “credentials equal competence,” the practitioners, in this model, have nearly complete autonomy; essentially, they are free to do whatever they think best, are not accountable to anyone, and are unconstrained by procedural guidelines or practice standards (except, perhaps, for the prohibition regarding sexual relations with a patient). In the procedure-based model, in contrast, credentials alone do not give practitioners the freedom to operate without constraint; rather, practitioners are expected to know and follow scientifically based practice guidelines, are expected to be trained in the specific procedures they undertake, and often have their practice monitored to ensure their adherence to good standards of practice. In short, the procedure-based model uses scientific evidence as an ongoing yardstick for the evaluation of practice, whereas the credential-based model does not.
Psychology’s Ambivalent Relationship With Science
Consider the situation of the individual who needs psychological clinical services. In most cases, the individual does not know the odds that his or her psychological disorder will improve with treatment as opposed to without it. The individual does not know the extent to which treatment will produce relief that goes beyond that produced by a placebo or a credible ritual. In most cases, the average clinical psychologist cannot enlighten the person because the clinician himself or herself does not know. In some cases, the clinician’s ignorance is due to a lack of information (the data simply do not exist), but certainly in many cases, if not most, the average clinician is not motivated or trained to seek such information.
The typical clinical psychologist also is unlikely, or unable, to tell the patient (or health care decision makers or payers) how the treatment she or he favors compares with others on the bases of efficacy and cost–benefit (with cost being defined on the basis of either patient or institutional costs). In fact, the individual seeking help does not even know whether a clinician she or he sees in therapy views scientific data or evidence as relevant to assessment and treatment. Indeed, considerable evidence indicates that many, if not most, clinicians view science or research as having relatively little relevance to their practice activities and decisions (e.g., Elbogen, Mercado, Scalora, & Tomkins, 2002; Lucock, Hall, & Noble, 2006; Nunez, Poole, & Memon, 2003). That is, they privilege their intuition and informal problem solving over what the research literature has to offer (e.g., Silver, 2001). For instance, over the past 30 to 40 years, surveys have found consistently that clinicians value experiential factors over research in guiding their assessment activities and decisions, and their assessment practices often conflict with the best available research information (Motta et al., 1993; Thomas & Jolley, 1998; T. C. Wade & Baker, 1977). Similarly, most clinicians give more weight to their personal experiences than to science in making decisions about intervention (e.g., Stewart & Chambless, 2007). Thus, although it is patent that impressionistic, clinical judgments are prey to numerous biases and clearly are inferior to more systematized decision-making strategies, clinicians continue to use the former and eschew the latter (Garb, 1998). The upshot is that the person seeking psychological services from a clinical psychologist cannot assume that his or her treatment will be informed by the fruits of the inferential, deductive discipline known as science. In summary, the consumer of medicine and the consumer of applied psychological clinical science most likely will encounter clinicians at very different stages of scientific evolution: The medical consumer is much more likely to receive care that is guided by the best available science.
Clinicians’ devaluing of available scientific evidence, and their refractoriness to new findings, is so well known that this schism between scientists and clinicians has been the focus of numerous books and articles over the past half century (e.g., Cook, 1958; Kimble, 1984; Lilienfeld, Fowler, Lohr, & Lynn, 2005; Lilienfeld, Lynn, & Lohr, 2003; Rice, 1997; Tavris, 2003). Clinical psychologists often practice in a manner that conflicts with considerable research evidence or at least is not clearly supported by research evidence (Faust & Ziskin, 1988; Hollon et al., 2002). Furthermore, practitioners often say they do not care, because they consider the available scientific evidence to be relatively uninformative or irrelevant to their practice decisions (Palmiter, 2004; T. C. Wade & Baker, 1977).
It is easy to be transfixed by the many issues that have served as foci of the science–practitioner debate. The debate has been played out over such issues as whether prediction should be intuitive (i.e., clinical) or based on statistical formulae (see Dawes, Faust, & Meehl, 1989; Holt, 1970), the validity of clinicians’ expert judgment and its proper role in court testimony (Faust & Ziskin, 1988; Matarazzo, 1992), and the use of particular psychological tests (e.g., Draw-a-Person, early Rorschach test use; Silver, 2001). Throughout these debates over the years, clinicians repeatedly have made the same sorts of arguments as to why their practices are valid despite little research support: for example, that the subject matter is too complex, that science has not yet “caught up” to the clinicians’ insights, that each patient or prediction problem is unique, that clinical experience is the most valuable source of information, and so on. The striking similarity in the arguments made over the years, however, suggests that the identified issues are superficial manifestations of a more fundamental conflict: specifically, clinical psychologists’ struggle to justify practices that they rightfully acknowledge do not arise from science or research. Moreover, these arguments are eerily reminiscent of those of nonscientific physicians who defended the practice of medicine as a craft.
The most recent issue that illuminates clinicians’ ambivalence about science is their reaction to the effort to identify empirically supported treatments (ESTs). We review this debate about ESTs because it reflects psychology’s latest attempt to strengthen the science base of clinical psychology, and it shows that the schism between science- and practice-oriented psychologists is very much alive at the start of this new millennium. This issue also shows how far we are from building a clinical psychology that can address today’s mental and behavioral health needs in an optimal manner.
The EST Debate: Beyond the Confines of Science
We tend to agree with EST critics that there are many cases in which we still do not have a very good database for informing policy and decision makers, for guiding clinicians, and for intervening optimally in mental and physical health conditions in which psychological interventions might be helpful. In other words, the debate has helped expose inadequacies in the evidence base for current psychotherapeutic practice. As noted above, there certainly is strong evidence that particular therapeutic strategies are highly efficacious (e.g., Franklin & Foa, 2002) and that their beneficial effects translate well into the real world (e.g., Franklin & DeRubeis, 2006). However, the EST critics are correct in saying that there are gaping holes in the evidence base for much of what we do in the applied context (Norcross & Lambert, 2006).
A review of evidence highlighted by this debate yields the following: (a) The critics are correct that the field needs additional therapeutic techniques that consistently produce strong effects over and above those produced by general relationship-based interventions or other sorts of generic therapeutic strategies (e.g., Wampold, 2001; Wampold, Ollendick, & King, 2006; Westen, Novotny, & Thompson-Brenner, 2004). In other words, in some cases, the evidence for the relative effectiveness of ESTs is not clearly established. (b) Clinical psychologists are faced with some clinical disorders or sets of problems for which the extant research base does not provide proven strategies; thus, the clinician either turns to clinical intuition and surmises how to address these challenges or does nothing (Reed, 2006; Westen et al., 2004). (c) The critics are correct that nonspecific or general factors such as features of the clinician and the nature of the patient–clinician relationship are meaningfully related to outcomes, although these factors probably account for a relatively small percentage of variance in change (Bourgeois, Sabourin, & Wright, 1990; Horvath & Symonds, 1991; Martin, Garske, & Davis, 2000). (d) For some disorders, we do not have definitive knowledge about dose–response relations for treatment and outcomes, about how therapeutic effects (including nonspecific effects) can be produced so as to be optimally cost-effective relative to competing interventions, about how to enhance the reach (population penetration) of our interventions, and so on. (e) The critics do not contend that ESTs are ineffective but rather question the extent to which ESTs are effective due to unique mechanisms or procedures. However, our view is that if an EST performs well relative to other competitors for the health care dollar (e.g., pharmacotherapy), this finding retains public health and clinical significance. If there are other interventions that produce similar effects, then it would be important to learn how clinicians can achieve those effects reliably, cheaply, and quickly—so that these interventions can also be designated as ESTs. These might also become strong competitors for the nation’s health care dollars. It makes no sense to beggar effective interventions simply because others may also work.
The limits to our knowledge have profound implications for our field. However, not one of the limitations noted above challenges the notion that the greatest benefits from psychological intervention will occur if that intervention is based on the best available science rather than on hunch or surmise. Moreover, these concerns do not undercut the fact that there currently are numerous psychological interventions that are strongly supported by research and yet greatly underused. In other words, clinicians have numerous opportunities to apply experimentally supported interventions, but many choose not to do so.
Some clinicians might take solace in findings that nonspecific effects often are correlated with outcomes; they may be tempted to use such effects to justify an eclectic or nonspecific approach to therapy, one that is based on no specific techniques, hypotheses, or putative mechanisms. Research on nonspecific effects provides little support for the current practices of psychology, however. Legitimate and important issues surround nonspecific effects, but the resolution of the debate about nonspecific effects has little potential to validate a science-based practice of clinical psychology. In theory, some aspects of nonspecific effects are malleable or teachable: for example, behaviors that contribute to the therapeutic alliance (the therapist–patient relationship). Even these hold little promise of representing special opportunities for clinical psychology, however.
Before becoming too enamored of nonspecific or therapeutic alliance factors, it is important to note the marginal scientific status of those constructs. An appraisal of the extant literature on the therapeutic alliance leaves unanswered a host of fundamental questions: (a) whether observed relations with outcomes in uncontrolled studies reflect a causal effect on outcome (Castonguay, Constantino, & Holtforth, 2006; Crits-Christoph, Gibbons, & Hearon, 2006); (b) whether the major sources of variance in these factors reflect enduring person variables that are not affected by concentrated scientific training or even therapy training (Hardy et al., 2001; Hilliard, Henry, & Strupp, 2000; Muran, Segal, Samstag, & Crawford, 1994; Zuroff et al., 2000; although cf. Klein et al., 2003); (c) whether contributory skills can be isolated, and if so, to what extent they can be trained or enhanced effectively via practice so that they are disseminable and cost-effective relative to brief behavioral interventions (e.g., Andres-Hyman, Strauss, & Davidson, 2007; Blatt, Sanislow, Zuroff, & Pilkonis, 1996; Castonguay et al., 2006; Crits-Christoph et al., 2006; Stein & Lambert, 1995; Yalom, 1980); and (d) whether intense, science-based training, or even prolonged graduate training, is helpful or relevant to skill acquisition or delivery (Stein & Lambert, 1995). Indeed, the evidence regarding therapeutic alliance and nonspecific effects is sufficiently ineffable that no set of procedures can be distilled into specific therapeutic techniques and thereby earn EST status. At present, there is little basis for assuming that the induction of nonspecific effects will constitute a special province of scientifically trained psychologists or a central basis of psychological practice. However, it may constitute a basis of practice for low-cost providers who do not need intensive training or a complex skill set.2
It also is important to note that nonspecific factors are central to all sorts of professional functions, not just psychotherapy, yet they hardly constitute a sufficient basis for science-based intervention. The doctor–patient relationship is very important to the practice of medicine. However, the status and perceived value of medicine are not based primarily on the physician’s ability to listen sympathetically, be nonauthoritarian, and so on (although there is recognition of the importance of a good doctor–patient relationship). The role of medicine and its stature would be very different if it involved all bedside manner and no procedures. The rigorous standards used to select medical students and the challenging and extensive training required are based on the notion that complex, science-based procedures are essential.
To the extent that the debate surrounding ESTs focuses on what it is about therapy that is effective, the debate is interesting and probably helpful. However, the debate, or its resolution, holds little prospect for salvaging the field as a practice discipline. Even if EST supporters mount effective, cogent arguments, as they already have (e.g., Franklin & DeRubeis, 2006; Hollon, 2006; Sher, 2006), a great many clinicians will not be receptive, as suggested by their resistance to scientific evidence and by the fact that most clinicians are not using the interventions that currently are supported most powerfully by research (e.g., Barlow et al., 1999; Crow et al., 1999; Haas & Clopton, 2003; Hollon et al., 2002; Phillips & Brandon, 2004). The open resistance to research evidence, and the frank acknowledgment that much of practice is ascientific, is not a good basis for asking society to support the practice of clinical psychology as it currently exists (Nathan, 2000).
So, our review of the EST controversy suggests the following: (a) By clinicians’ own admission, much of what they do is little informed by scientific evidence; (b) many leading proponents of psychotherapy doubt whether much of the extant scientific evidence is valid or relevant; (c) although there are specific interventions that have relatively strong research support, these are seldom used; and (d) the factors that many practitioners point to as constituting the core of their therapeutic armamentarium (i.e., nonspecific factors) are poorly understood, may not be teachable, and almost certainly do not require extensive science-based training or highly privileged status for their delivery. All these things are occurring in a societal context of growing mental health needs, unprecedented constraints on health care resources, and a growing recognition that health care decisions must be informed by the best available research and economic evidence.