
Beyond outcomes monitoring: Measurement Feedback Systems (MFS) in child and adolescent clinical practice

Abstract

Purpose of review

To review literature published during the past year relevant to identifying the best measures for monitoring progress in the treatment of child and adolescent clients and their families.

Recent findings

The current literature shows an increasing focus on clinical utility in measure development, as demonstrated by the recent emphasis on evidence-based assessment. However, there is very little research on how the inclusion of monitoring might enhance clinician practice and, ultimately, youth and family outcomes. There is great promise in expanding our thinking beyond mere outcome measurement to a Measurement Feedback System (MFS) that is comprehensive, concurrent with treatment, and provides timely feedback.

Summary

Investment in the development of MFSs is needed to enhance clinical judgment and increase the effectiveness of treatment. Clinical utility and consumer appeal need to be key considerations for measures intended to be used in everyday clinical practice. Most importantly, we must harness the power of technology and advances in measurement to provide clinicians with the tools to make effective use of the systematic data that MFSs provide through frequent measurement.

Keywords: Treatment outcomes, monitoring, clinical practice, child and adolescent, feedback

Introduction

Much has been written about the importance of using outcome data in clinical practice, but such advocacy is outmoded and of limited clinical usefulness. We believe that what is necessary for effective treatment is a Measurement Feedback System (MFS) that is comprehensive, is administered frequently and concurrently with treatment, and provides timely feedback to clinicians [1**].

Real-world mental health treatment requires attention to the complexities inherent in the interaction among treatments, settings, therapists, and clients. To make appropriate decisions concerning treatment, clinicians must not only be able to evaluate client progress in meeting outcome goals but also have valid knowledge of key clinical processes. Yet the available evidence indicates that clinicians are not accurate in making these judgments based solely on their own observations and experience [1**, 2, 3*].

MFSs can transform practice not only by supporting clinical decision making but also by helping to assure that treatments labeled as evidence-based treatments (EBTs) actually are effective when brought into community settings [4*]. A practical MFS must contain measures that are short, psychometrically sound, and useful to clinicians in everyday practice. Further, MFSs should assess several domains, reported by multiple informants, including treatment progress (e.g., youth and family outcomes) and treatment processes (e.g., therapeutic alliance, treatment activities).

Enhancing Clinician Judgment Through MFSs

In most settings, information concerning client progress resides completely within the clinician’s purview. Even supervision, when provided, is based upon the therapist’s report, not an independent assessment. While supervision during training may include direct observation, outside of training it typically relies on the clinician’s self-report. Moreover, it is rare for graduate students to receive instruction in decision-making skills. This dependence on clinical judgment alone persists despite the plethora of research over the last 60 years demonstrating critical flaws in clinicians’ intuition and observations of the therapeutic process [2, 3*]. While we believe the clinician’s training, experience, and education are central to guiding the ongoing therapeutic process, they are not sufficient. Clinical decision-making is typically based on multiple imperfect cues, and errors in judgment will always occur [5*]. The addition of an MFS will enhance clinical decision-making by providing systematic feedback the clinician can use to make incremental adjustments to the treatment plan.

A study of clinician- and research-based diagnostic agreement provides compelling evidence that agreement predicts better therapy engagement and treatment outcomes [6**]. We agree with the authors’ suggestion to directly measure therapeutic alliance and engagement in addition to outcomes. Incorporating standardized measures into clinical practice may improve youth outcomes, but the costs of instituting structured diagnostic interviewing in clinical practice settings may present a significant barrier. However, a recent study found the use of semi-structured diagnostic interviews both feasible and effective in identifying youth in need of mental health services in the school setting [7]. An innovative approach to applying the principles of structured and semi-structured interviewing in real-world mental health settings is the use of Bayesian logic to create a dynamic system that uses diagnostic base rate information to select interview content [8*]. In an initial feasibility study, these authors found that the dynamic system reduced administration time yet maintained accuracy.
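The specific algorithm used in the cited dynamic system is not reproduced here, but the underlying idea can be illustrated with a minimal sketch: a clinic base rate serves as the prior probability of a disorder, a screening item with known sensitivity and specificity updates that probability via Bayes’ rule, and the full interview module is administered only when the posterior probability crosses a threshold. All numbers, names, and the cutoff below are hypothetical.

```python
# Illustrative sketch (not the cited authors' actual algorithm): sequential
# Bayesian updating of a diagnostic probability from a base rate and a single
# screen item, used to decide whether a full interview module is administered.

def update_probability(prior: float, screen_positive: bool,
                       sensitivity: float, specificity: float) -> float:
    """Apply Bayes' rule to one screening item result."""
    if screen_positive:
        true_pos = sensitivity * prior
        false_pos = (1 - specificity) * (1 - prior)
        return true_pos / (true_pos + false_pos)
    false_neg = (1 - sensitivity) * prior
    true_neg = specificity * (1 - prior)
    return false_neg / (false_neg + true_neg)

# Hypothetical numbers: a disorder with a 10% clinic base rate and a screen
# item with 85% sensitivity and 75% specificity.
posterior = update_probability(prior=0.10, screen_positive=True,
                               sensitivity=0.85, specificity=0.75)

ADMINISTER_THRESHOLD = 0.25  # arbitrary cutoff chosen for this illustration
if posterior >= ADMINISTER_THRESHOLD:
    print(f"Posterior {posterior:.2f}: administer the full module")
else:
    print(f"Posterior {posterior:.2f}: skip the full module")
```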

Use of MFSs: Benefits for the field

Few mental health clinicians are accountable for the quality or outcomes of their services, since payment for services is usually based on treatment type, length, and location. One of the main purposes of MFSs is to bring such accountability to the provision of mental health services. The “stick” of accountability can also be accompanied by the “carrot” of incentives for providing effective services. For example, the United States Medicare system provides financial incentives for physicians to participate in a quality reporting initiative [9]. Of note, there is a need for continued discussion of how quality indicators, such as outcomes, should be used to rate performance and determine funding criteria. Client and agency characteristics that may affect outcomes should be considered in any ‘pay for performance’ system [10*].
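The kind of case-mix adjustment advocated for such comparisons [10*] can be illustrated, in deliberately simplified form, by modeling expected outcomes from client characteristics and then comparing agencies on residuals (observed minus expected scores) rather than raw means. The data, variables, and agencies below are hypothetical placeholders, not the models used in the cited study.

```python
# Highly simplified illustration of case-mix adjustment with hypothetical data:
# regress client outcomes on client characteristics, then compare agencies on
# residual (better- or worse-than-expected) outcomes rather than raw means.
import numpy as np

# Hypothetical clients: columns = intercept, baseline severity, age
X = np.array([[1, 62.0, 12], [1, 70.0, 15], [1, 55.0, 9],
              [1, 68.0, 14], [1, 58.0, 11], [1, 73.0, 16]])
y = np.array([48.0, 60.0, 40.0, 52.0, 45.0, 58.0])   # post-treatment scores
agency = np.array(["A", "A", "A", "B", "B", "B"])

beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # expected-outcome model
residuals = y - X @ beta                      # deviation from expectation

for site in ("A", "B"):
    print(site, round(residuals[agency == site].mean(), 2))
```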

With support from the United States Substance Abuse and Mental Health Services Administration, several states, including Massachusetts, Utah, and Hawaii, are implementing some form of MFS [1**], although states vary widely in their decisions about what data to collect, how to provide feedback, and how to make the feedback useful [11*, 12**]. A recent case study details the process of implementing evidence-based assessment for adolescent substance abuse treatment at the state level [13**], providing a thorough review of issues faced at the external, organizational, and individual staff levels. It has been suggested that both state and federal governments should take a more active role in fostering accountability through the promotion of outcomes management systems to support clinical decision making [14].

It is unlikely that traditional EBT research will be able to identify which of the exceedingly complex combinations of treatment variations will be most advantageous for which clients in which settings. A modular approach to identifying common elements or specific strategies across EBTs is a relatively new focus in the field [15**, 16**], with significant implications for research and practice. Identifying effective strategies within and across EBTs can lead to more flexibility in individually tailoring treatment. MFSs offer an additional approach to individualizing treatment. Systematic and frequent measurement of treatment progress and processes is a key component of promoting practice-based evidence within a continuous quality improvement framework [1**]. Kazdin [17**] echoes this argument in a recent review of the continuing gap between research and practice, noting that clinical work can contribute directly to scientific knowledge through the large amount of information that can be gathered in an MFS. He argues strongly for the use of systematic measures of client progress as a way to promote high quality care by individualizing treatment, monitoring treatment effects on an ongoing basis, and complementing clinical judgment with systematic evaluation.

Criteria for Evaluating MFSs

Measures intended for use in busy clinical settings must meet standard scientific psychometric criteria as well as several practical considerations. In addition to being reliable and valid, measures should be sensitive to change and provide interpretable change indicators. They should be clinically useful and acceptable to consumers. To make routine assessment feasible, measures should be short [1**, 17–20].

For over a decade there has been an increasing emphasis on evidence-based assessment (EBA), focusing on the clinical utility of measures to guide interventions rather than just their use in research [21**, 22**]. A recent special issue of the Journal of Pediatric Psychology (2008) reviewed the evidence base for assessment of medical treatment adherence, pain, coping and stress, psychosocial adjustment and psychopathology, and other problems related to pediatric psychology. While some measures were identified as meeting criteria as well-established assessments [23], there were several suggestions for improvement in the areas of assessing cultural differences, multidimensional assessment, and integration into clinical practice (e.g., comprehensive treatment planning) [24–27].

It is important to note that there are different, though not incompatible, criteria for determining the quality of EBA. For example, the approach used in the special issue mentioned above was similar to that used to evaluate empirically supported treatments (ranging from promising to well-established). Hunsley and Mash [21**, 28] advocate an approach that specifies levels of reliability, including norms, and several forms of validity, including treatment sensitivity and clinical utility. For example, clinical utility is rated not only by consideration of costs, ease of use, and related factors, but also by published evidence of demonstrable clinical benefit.

In contrast to outmoded pre-post outcomes measurement, monitoring client progress through frequent administration of measures requires measures that are sensitive to change over time in addition to being valid and reliable. It is difficult for a practicing clinician to determine what is clinically meaningful by simply ‘eyeballing’ scores. Bost and colleagues [29*] propose four statistical strategies to assist clinicians in determining whether a change in scores is meaningful: the Reliable Change Index (RCI), RCI adjusted for systematic bias (RCIadj), bivariate regression (BIV), and multivariate regression (MIV). The authors note that RCI is the simplest and most straightforward of the four approaches, assisting the practitioner in determining whether a change in client scores is due to chance. Because RCI does not account for systematic biases such as client demographic characteristics or practice effects, the latter three strategies are proposed as additional indices.
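As a concrete illustration of the simplest of these indices, the sketch below computes an RCI in the commonly used Jacobson-Truax form (the difference score divided by the standard error of the difference). It is not drawn from Bost and colleagues’ online tools, and the reliability, standard deviation, and scores are hypothetical.

```python
# Minimal sketch of the Reliable Change Index (RCI), following the commonly
# used Jacobson-Truax formulation. The reliability, SD, and scores below are
# hypothetical placeholders, not values from any particular measure.
import math

def reliable_change_index(pre: float, post: float,
                          sd_baseline: float, reliability: float) -> float:
    sem = sd_baseline * math.sqrt(1 - reliability)  # standard error of measurement
    s_diff = math.sqrt(2 * sem ** 2)                # SE of the difference score
    return (post - pre) / s_diff

rci = reliable_change_index(pre=65.0, post=54.0, sd_baseline=10.0, reliability=0.85)
# |RCI| > 1.96 suggests the change is unlikely to be due to measurement error alone.
print(f"RCI = {rci:.2f}, reliable change: {abs(rci) > 1.96}")
```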

While these are not new statistical techniques, their application in clinical practice is relatively new. Few measures are designed to be administered frequently, and even fewer published studies include consideration of sensitivity to change. Providing clinicians with reliable and valid indices of clinically meaningful change may well promote the perceived usefulness of an MFS. Practically, however, this would require computerized technology to make such information rapidly available and feasible in a busy clinical setting. Further, sensitivity to change implies sensitivity to change caused by treatment. The most direct way to assess this is to compare scores from individuals who have received a known effective treatment in the real world to those who received no treatment or treatment as usual. Unfortunately, few treatments for children and adolescents have been shown to be consistently effective [30]. Finally, it is important to attend to the clinical significance of any change rather than just its statistical significance [31**].

Multidimensional Monitoring

In addition to the above criteria, MFSs should measure broadly; in other words, assess several domains relevant to youth outcomes through a multidimensional battery with multiple reporters. Domains should include not only traditional standards of treatment progress (e.g., symptoms and functioning) but also indices of treatment process (e.g., therapeutic alliance, session engagement, therapeutic activities).
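One way to picture such a multidimensional, multi-reporter battery is as a simple data structure that an MFS might store and summarize for the clinician. The domains, reporters, and score conventions below are illustrative only; they are not a prescribed battery.

```python
# Illustrative sketch of a multidimensional, multi-reporter MFS record.
# Domain and reporter names are examples only, not a prescribed battery.
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class MeasureResult:
    domain: str      # e.g., "symptoms", "functioning", "alliance", "session content"
    reporter: str    # e.g., "youth", "caregiver", "clinician"
    score: float
    administered: date

@dataclass
class SessionRecord:
    client_id: str
    session_number: int
    results: List[MeasureResult] = field(default_factory=list)

record = SessionRecord(client_id="C-001", session_number=4, results=[
    MeasureResult("symptoms", "youth", 61.0, date(2009, 2, 5)),
    MeasureResult("symptoms", "caregiver", 58.0, date(2009, 2, 5)),
    MeasureResult("alliance", "youth", 3.4, date(2009, 2, 5)),
])

# Feedback to the clinician could, for example, flag discrepancies across reporters:
youth = [r.score for r in record.results if r.domain == "symptoms" and r.reporter == "youth"]
caregiver = [r.score for r in record.results if r.domain == "symptoms" and r.reporter == "caregiver"]
print("Youth-caregiver symptom gap:", youth[0] - caregiver[0])
```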

Recent studies have provided intriguing evidence that factors common to any treatment (as compared to those specific to an EBT) are associated with outcomes in both youth and adult psychotherapy. For example, the finding that a substantial amount of youth improvement occurs very early in treatment supports this common factors perspective, since specific treatment effects would not have had sufficient time to operate [32]. Similarly, studies using more sophisticated analytic techniques have confirmed the existence of a therapist effect [33]. However, the field lacks good studies that link process with outcomes.

Research on therapeutic alliance, the most popular process construct with over 6,000 studies (almost all with adults), contains fewer than a handful of experimental studies. However, alliance has been shown to be modestly but robustly related to youth outcomes in correlational studies, despite continued lack of consensus on how to conceptualize and measure it [34*]. One study of therapeutic alliance in youth psychotherapy found that the link between alliance and youth outcomes varied by type of treatment, with a weak correlation in nondirective supportive therapy and a stronger correlation in more structured cognitive behavioral treatment [35]. A second study found low consistency among reporters of alliance [36], consistent with our own as yet unpublished research finding that clinicians are not accurate at estimating youth and parent alliance. These two studies (and our own research) highlight why therapeutic alliance is important to include in any ongoing measurement, given the discrepancies between reporters and the potential influence of different kinds of treatment on alliance.

While measures of session content are usually part of an EBT’s treatment adherence monitoring, typical treatment in community settings is a “black box” of unknown dimensions. Attempting to describe non-EBT treatment is a daunting task, but one of the few examples can be found in a study by Bearsley-Smith and colleagues [37**]. Using a very careful developmental approach, they adapted a measure describing techniques utilized in treatment, which was reported to be a valuable tool for viewing themes and trends in individual treatment. Of particular note in the development process was the inclusion of mental health practitioners who would be typical users of the measure. Our approach to measuring session content differs, focusing on topics addressed during treatment rather than specific strategies or techniques [38*]. A great deal of development and refinement is clearly necessary to measure what happens in typical treatment. Inclusion of such a measure is an important part of an MFS, particularly in meeting the increasing demand for documentation and in identifying aspects of treatment as usual as potentially promising interventions [39*].

Conclusion: Outcomes Monitoring through A Measurement Feedback System (MFS)

Systematic feedback that uses valid, reliable, and standardized measures has been found to have substantial and replicable benefits in adult mental health treatment [17**, 40, 41], and is currently being tested in youth mental health practice settings [1**, 17**]. The widespread availability of computers in practice settings makes it feasible to take advantage of the advances in psychometrics that make timely feedback possible.

Although the use of an MFS potentially carries great clinical benefits, these benefits cannot be realized if it is not used. Currently, the field knows very little about how best to implement and sustain such interventions, particularly how to incorporate feedback on individual client progress and change over time. The success of outcomes management and feedback hinges on a complex mix of organizational- and clinician-level factors. The few studies to date have identified significant barriers to implementation that contribute not only to a lack of adherence to protocols (e.g., completion of measures) but also to a general lack of utilization in supervision or clinical decision-making [1**]. Comprehensive and collaborative efforts are needed from policy makers, researchers, and clinicians to promote the widespread use of MFSs to improve mental health care for children and adolescents. Further research is needed to develop and refine measures of treatment process and outcomes for routine clinical use and to learn how best to implement MFSs sustainably. Mental health policy reform should make resources available for harnessing technological advances to support the day-to-day use of MFSs, with careful consideration of how MFS data are used in determining indicators of quality and performance. Clinicians must be willing to change the way they practice by actively engaging in the development and testing of MFSs to ensure clinical utility. Stakeholders need to collaborate in an epistemological shift in what we consider appropriate approaches to assessing the quality of mental health care—beyond outcomes monitoring—to the active use of systematic data to inform ongoing clinical decision-making.

Acknowledgments

Preparation of this article was partially supported by grants from NIMH (MH068589-01) and the Lowenstein Foundation. Drs. Bickman and Kelley report that they and Vanderbilt University have a financial agreement with Qualifacts Systems Inc. for the online development of CFIT.

References

1.** Bickman L. A measurement feedback system (MFS) is necessary to improve mental health outcomes. Journal of the American Academy of Child & Adolescent Psychiatry. 2008;47(10):1114–9. Review of Measurement Feedback Systems (MFSs) with a focus on barriers to adoption.
2. Block J. The Q-Sort in Character Appraisal: Encoding Subjective Impressions of Persons Quantitatively. Washington, DC, US: American Psychological Association; 2008. Subjective impressions in clinical psychology; pp. 93–104.
3.* Garb HN, Lilienfeld SO, Fowler KA. Psychological assessment and clinical judgment. In: Maddux JE, Winstead BA, editors. Psychopathology: Foundations for a contemporary understanding. 2. New York, NY, US: Routledge/Taylor & Francis Group; 2008. pp. 103–24. Chapter that reviews assessment from a broad perspective, including both structured methods (e.g., self-reports, structured interviews) and clinical judgment.
4.* Bickman L. Why Don’t We Have Effective Mental Health Services? Administration and Policy in Mental Health and Mental Health Services Research. 2008;35(6):437–9. Editorial on the need for MFSs in mental health care reform.
5.* Wigton RS. What do the theories of Egon Brunswik have to say to medical education? Advances in Health Sciences Education. 2008;13(1):109–21. While this article was written for health care practitioners, the author’s findings are relevant to educating mental health clinicians through the discussion of judgment and decision-making research.
6.** Jensen-Doss A, Weisz JR. Diagnostic agreement predicts treatment process and outcomes in youth mental health clinics. Journal of Consulting and Clinical Psychology. 2008;76(5):711–22. An empirical study of diagnostic agreement between clinician-rated diagnosis and structured diagnostic interviews. Of note are the analyses of mediating factors that influence the relationship between agreement and outcomes. In addition, the authors emphasize the importance of directly measuring treatment process, such as therapeutic alliance and engagement.
7. Nemeroff R, Levitt JM, Faul L, Wonpat-Borja A, Bufferd S, Setterberg S, et al. Establishing ongoing, early identification programs for mental health problems in our schools: A feasibility study. Journal of the American Academy of Child & Adolescent Psychiatry. 2008;47(3):328–38.
8.* Chorpita BF, Nakamura BJ. Dynamic structure in diagnostic structured interviewing: A comparative test of accuracy and efficiency. Journal of Psychopathology and Behavioral Assessment. 2008;30(1):52–60. An excellent example of a simple strategy to enhance clinical diagnostic skills while minimizing administrative and personnel costs of integrating additional strategies into regular clinical care.
9. Centers for Medicare and Medicaid Services. Physician Quality Reporting Initiative (PQRI) Provisions of the 2008 Medicare Physician Fee Schedule Final Rule. 2008. Retrieved February 5, 2009 from http://www.cms.hhs.gov/PQRI/Downloads/2008PQRIMPFSSummary.pdf.
10.* Ogles BM, Carlson B, Hatfield D, Karpenko V. Models of case mix adjustment for Ohio mental health consumer outcomes among children and adolescents. Administration and Policy in Mental Health and Mental Health Services Research. 2008;35(4):295–304. To promote widespread use of outcome management systems in mental health practice, further sophistication of statistical models for comparing outcomes by clinician or agency is needed. This study uses case mix adjustment as a potential tool to account for individual client characteristics that may influence treatment outcomes.
11.* Bruns EJ, Hoagwood KE. State implementation of evidence-based practice for youths, part I: Responses to the state of the evidence. Journal of the American Academy of Child & Adolescent Psychiatry. 2008;47(4):369–73. Part I of a two-part column, this article briefly reviews current efforts in several states to implement evidence-based practice for youth, with the intent to describe the diversity of approaches despite the common goal of improving youth services.
12.** Bruns EJ, Hoagwood KE, Rivard JC, Wotring J, Marsenich L, Carter B. State implementation of evidence-based practice for youths, part II: Recommendations for research and policy. Journal of the American Academy of Child & Adolescent Psychiatry. 2008;47(5):499–504. Part II of a two-part column, this article describes lessons learned from the diverse state efforts to reform youth mental health care, with some important recommendations for research and policy.
13.** Gotham HJ, White MK, Bergethon HS, Feeney T, Cho DW, Keehn B. An implementation story: Moving the GAIN from pilot project to statewide use. Journal of Psychoactive Drugs. 2008;40(1):97–107. This case study of a statewide implementation of EBA is notable as it applies current research on implementation process to describe the challenges and solutions faced in moving EBA to wide scale.
14. Cooper JL, Aratani Y, Knitzer J, Douglas-Hall A, Masi R, Banghart P, Dababhah S. Unclaimed Children Revisited: The Status of Children’s Mental Health Policy in the United States. 2008. Retrieved February 5, 2009 from http://www.nccp.org/publications/pub_853.html.
15.** Chorpita BF, Bernstein A, Daleiden EL. Driving with roadmaps and dashboards: Using information resources to structure the decision models in service organizations. Administration and Policy in Mental Health and Mental Health Services Research, Special Issue: Improving mental health services. 2008;35(1–2):114–23. Of note, the authors review research on the use of information technology and decision-making models in the design of their decision-making tool, which emphasizes the role of integrating multiple domains of information for regular feedback concurrent with treatment.
16.** Garland AF, Hawley KM, Brookman-Frazee L, Hurlburt MS. Identifying common elements of evidence-based psychosocial treatments for children’s disruptive behavior problems. Journal of the American Academy of Child & Adolescent Psychiatry. 2008;47(5):505–14. The authors present an alternative to the distillation and matching model for identifying common elements of EBTs, preceded by a thorough review of the literature in this area.
17.** Kazdin AE. Evidence-based treatment and practice: New opportunities to bridge clinical research and practice, enhance the knowledge base, and improve patient care. American Psychologist. 2008;63(3):146–59. The author reviews the ongoing divide between research and practice in mental health care, emphasizing the importance of systematic and frequent outcome monitoring in regular clinical settings as a key strategy for improving quality of care.
18. Shelef K, Diamond GM. Short form of the revised Vanderbilt Therapeutic Alliance Scale: Development, reliability, and validity. Psychotherapy Research. 2008;18(4):433–43.
19. Titus JC, Dennis ML, Lennox R, Scott CK. Development and validation of short versions of the internal mental distress and behavior complexity scales in the Global Appraisal of Individual Needs (GAIN). The Journal of Behavioral Health Services & Research. 2008;35(2):195–214.
20. Zwirs BWC, Burger H, Schulpen TWJ, Buitelaar JK. Developing a brief cross-culturally validated screening tool for externalizing disorders in children. Journal of the American Academy of Child & Adolescent Psychiatry. 2008;47(3):309–16.
21.** Hunsley J, Mash EJ. Developing criteria for evidence-based assessment: An introduction to assessments that work. In: Hunsley J, Mash EJ, editors. A guide to assessments that work. Oxford series in clinical psychology. New York, NY, US: Oxford University Press; 2008. pp. 3–14. This chapter clearly lays out the criteria for evaluating assessments from both scientific rigor and practical standpoints. It also serves as the introduction to the book (of which the authors are editors) that reviews evidence-based assessments by disorder (both child and adult).
22.** Phares V, Curley J. Evidence-based assessment for children and adolescents. In: Steele RG, Elkin TD, Roberts MC, editors. Handbook of evidence-based therapies for children and adolescents: Bridging science and practice. Issues in clinical child psychology. New York, NY, US: Springer Science + Business Media; 2008. pp. 537–49. An introductory chapter to an important book that discusses EBA for specific child and adolescent mental health disorders.
23. Cohen LL, La Greca AM, Blount RL, Kazak AE, Holmbeck GN, Lemanek KL. Introduction to special issue: Evidence-based assessment in pediatric psychology. Journal of Pediatric Psychology. 2008;33(9):911–5.
24. Blount RL, Simons LE, Devine KA, Jaaniste T, Cohen LL, Chambers CT, et al. Evidence-based assessment of coping and stress in pediatric psychology. Journal of Pediatric Psychology. 2008;33(9):1021–45.
25. Cohen LL, Lemanek K, Blount RL, Dahlquist LM, Lim CS, Palermo TM, et al. Evidence-based assessment of pediatric pain. Journal of Pediatric Psychology. 2008;33(9):939–55.
26. Holmbeck GN, Thill AW, Bachanas P, Garber J, Miller KB, Abad M, et al. Evidence-based assessment in pediatric psychology: Measures of psychosocial adjustment and psychopathology. Journal of Pediatric Psychology. 2008;33(9):958–80.
27. Quittner AL, Modi AC, Lemanek KL, Levers-Landis CE, Rapoff MA. Evidence-based assessment of adherence to medical treatments in pediatric psychology. Journal of Pediatric Psychology. 2008;33(9):916–36.
28. Mash EJ, Hunsley J. Commentary: Evidence-based assessment--Strength in numbers. Journal of Pediatric Psychology. 2008;33(9):981–2.
29.* Bost RH, Wen FK, Basso MR, Cates GR. Online tools for evaluating patient change: Statistical foundations, clinical applications, research relevance. Rehabilitation Psychology. 2008;53(3):313–20. Using a clinical case illustration, this article provides a review of statistical strategies for measuring change over time and discusses both the clinical and research potential of these tools.
30. Silverman WK, Hinshaw SP. The second special issue on evidence-based psychosocial treatments for children and adolescents: A 10-year update. Journal of Clinical Child and Adolescent Psychology, Special Issue: Evidence-based psychosocial treatments for children and adolescents: A ten year update. 2008;37(1):1–7.
31.** Lambert MJ, Hansen NB, Bauer S. Assessing the clinical significance of outcome results. In: Nezu AM, Nezu CM, editors. Evidence-based outcome research: A practical guide to conducting randomized controlled trials for psychosocial interventions. New York, NY, US: Oxford University Press; 2008. pp. 359–78. Lambert is a well-known researcher in the field of MFSs in adult psychotherapy. He and his colleagues present a useful review of clinical versus statistical significance.
32. Cromley T, Lavigne JV. Predictors and consequences of early gains in child psychotherapy. Psychotherapy: Theory, Research, Practice, Training. 2008;45(1):42–60.
33. Dinger U, Strack M, Leichsenring F, Wilmers F, Schauenburg H. Therapist effects on outcome and alliance in inpatient psychotherapy. Journal of Clinical Psychology. 2008;64(3):344–54.
34.* Elvins R, Green J. The conceptualization and measurement of therapeutic alliance: An empirical review. Clinical Psychology Review. 2008;28(7):1167–87. A comprehensive review of currently available youth and adult alliance measures.
35. Karver M, Shirk S, Handelsman JB, Fields S, Crisp H, Gudmundsen G, et al. Relationship processes in youth psychotherapy: Measuring alliance, alliance-building behaviors, and client involvement. Journal of Emotional and Behavioral Disorders. 2008;16(1):15–28.
36. Hawley KM, Garland AF. Working alliance in adolescent outpatient therapy: Youth, parent and therapist reports and associations with therapy outcomes. Child & Youth Care Forum. 2008;37(2):59–74.
37.** Bearsley-Smith C, Sellick K, Chesters J, Francis K. Treatment content in child and adolescent mental health services: Development of the treatment recording sheet. Administration and Policy in Mental Health and Mental Health Services Research. 2008;35(5):423–35. A key strength of this study was the use of action research techniques, involving the mental health care practitioners in the development of a measure of treatment processes.
38.* Bickman L, Lambert EW, Kelley SD, Breda C, Brannan AM, Vides de Andrade AR, et al., editors. Manual of the Peabody Treatment Progress Battery (PTPB). 2. Vanderbilt University; 2009. Unpublished manuscript. The manual includes a thorough review of the psychometric issues to be addressed in developing measures for frequent and routine clinical use. The battery includes several measures of treatment process and progress for administration to youth, their caregivers, and clinicians.
39.* Weisz JR, Gray JS. Evidence-based psychotherapy for children and adolescents: Data from the present and a model for the future. Child and Adolescent Mental Health. 2008;13(2):54–65. Weisz, a highly respected researcher in the field of youth EBT, and Gray reflect on recent meta-analytic findings with several proposals for improving research on usual care.
40. Davies DR, Burlingame GM, Johnson JE, Gleave RL, Barlow SH. The effects of a feedback intervention on group process and outcome. Group Dynamics: Theory, Research, and Practice. 2008;12(2):141–54.
41. Slade K, Lambert MJ, Harmon SC, Smart DW, Bailey R. Improving psychotherapy outcome: The use of immediate electronic feedback and revised clinical support tools. Clinical Psychology & Psychotherapy. 2008;15(5):287–303.