Administration and Policy in Mental Health
 
Adm Policy Ment Health. 2016; 43: 316–324.
Published online 2016 February 19. doi:  10.1007/s10488-016-0725-6
PMCID: PMC4832002

When is Sessional Monitoring More Likely in Child and Adolescent Mental Health Services?

Abstract

Sessional monitoring of patient progress or experience of therapy is an evidence-based intervention recommended by healthcare systems internationally. It is being rolled out across child and adolescent mental health services (CAMHS) in England to inform clinical practice and service evaluation. We explored whether patient demographic and case characteristics were associated with the likelihood of using sessional monitoring. Multilevel regressions were conducted on N = 2690 youths from a routinely collected dataset from 10 CAMHS. Girls (odds ratio, OR 1.26), older youths (OR 1.10), White youths (OR 1.35), and youths presenting with mood (OR 1.46) or anxiety problems (OR 1.59) were more likely to have sessional monitoring. In contrast, youths under state care (OR 0.20) or in need of social service input (OR 0.39) were less likely to have sessional monitoring. Findings of the present research may suggest that sessional monitoring is more likely with common problems such as mood and anxiety problems but less likely with more complex cases, such as those involving youths under state care or those in need of social service input.

Keywords: Sessional monitoring, CAMHS, Child, Adolescent, Case complexity

Sessional monitoring of treatment progress during psychological therapy involves the regular review of feedback from measures of symptoms, functioning, or common factors such as therapeutic alliance reported by patients or therapists (Carlier et al. 2012). It is an evidence-based intervention recommended by healthcare systems internationally (SAMHSA’s National Registry of Evidence-Based Programs and Practices 2015). Sessional monitoring is being rolled out across child and adolescent mental health services (CAMHS) in England as a means of supporting clinical practice and underpinning evaluation of service provision and benchmarking between services (Department of Health 2011). Sessional monitoring may promote communication between patients and therapists, helping to identify when patients may not be responding to therapy as expected and, consequently, may be more likely to disengage from therapy (Carlier et al. 2012; Chen et al. 2013; Wolpert et al. 2012). Evidence suggests that sessional monitoring may be associated with higher levels of treatment effectiveness, treatment efficiency, and collaborative practice (Bickman et al. 2011; Gondek et al. 2016; Knaup et al. 2009). Recent evidence demonstrated a dose–response effect, with higher levels of treatment effectiveness when feedback was used more often (Bickman et al. 2015). Sessional monitoring can also provide useful information for teams and services to reflect on how their patients are experiencing and responding to therapy (Fleming et al. 2014).

There are a number of barriers to implementing and sustaining sessional monitoring (Boswell et al. 2013; Douglas et al. 2014; Mellor-Clark et al. 2014). Little is known about how it is actually used in routine practice. Sessional monitoring may be less likely with youths with certain demographic and case characteristics. For example, research evidence suggests that measures involving goal formulation at the outset of treatment were more likely to be used with younger youths and youths presenting with emotional difficulties or learning disabilities (Jacob et al. submitted). Therefore, it is important to examine whether demographic and case characteristics are also associated with the use of sessional monitoring.

Published research evidence from qualitative studies suggests that one of the barriers to routine outcome and sessional monitoring may be the view that the measures do not capture the full complexity of issues (Moran et al. 2011; Wolpert et al. 2014). Due to this perception of sessional monitoring, it may be less likely in complex cases, such as those involving a greater number of complexity factors, for instance youths experiencing serious physical health issues, being a victim of abuse or neglect, or living in financial difficulty. In addition, certain complexity factors, such as involvement with social services or youths being under state care, may cause challenges to establishing a therapeutic alliance, which has been suggested as important to facilitate the use of measures in therapy (Stasiak et al. 2012). Hence, sessional monitoring may be less likely in cases where such factors are present.

To the best of our knowledge, there is no existing evidence regarding when sessional monitoring is more likely in CAMHS. Differences in when sessional monitoring is used may have implications for both clinical practice and also the meaningful comparison of services, as more data may be available for certain youths than for others. Therefore, the aim of the present research was to explore whether patient demographic (i.e., age, gender, and ethnicity) and case (i.e., presenting problems and complexity factors) characteristics were associated with the likelihood of using sessional monitoring.

Method

Participants and Procedure

As part of the children and young people’s improving access to psychological therapies (CYP IAPT) programme (Wolpert et al. 2011), staff routinely collect demographic, outcome, and experience measures completed by the therapist, youth, and/or carer at assessment, on a session-by-session basis, and at case closure (Law and Wolpert 2014). Data from 12 purposively sampled services were collated as part of an internal audit of the CYP IAPT programme. Favourable ethical approval was received from the University College London Research Ethics Committee (project ID: 6087/001), and the project was registered with local Trusts.

Overall, the total dataset included N = 6801 youths, with data collected from 2011 to 2014. Of these youths, 40 % had attended at least three sessions,1 resulting in a final retained sample of N = 2690 youths (level-1) from ten services (level-2). Demographic characteristics are shown in Table 1. There were a number of significant differences between the wider sample of youths attending fewer than three sessions (n = 4111) and the included sample attending at least three sessions. However, the magnitude of these differences suggests that the two samples were broadly comparable, as all odds ratios or effect sizes were small; the exception was that youths from low frequency ethnic groups were more likely to attend at least three sessions, with a medium odds ratio (Cohen 1988).

Table 1
Demographic characteristics

Measures

Demographic Characteristics

Age, gender, and ethnicity were recorded by services as part of routine data recording. Ethnicity was captured using the categories from the 2001 Census and was generally based on self-report by the parent or the youth. These were grouped for analysis as follows: White (including White British, Irish, and Other White background), Mixed (including Mixed White and Black Caribbean, Mixed White and Black African, Mixed White and Asian, and any other mixed background), Asian (including Indian, Pakistani, Bangladeshi, and Other), Black or Black British (including Caribbean, African, and Other), and other ethnic groups (including Chinese and Other). Ethnicities occurring with a frequency of <5 % were then grouped into “low frequency groups” to avoid including under-powered groups in the main analysis (i.e., Mixed, Asian, and other).
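The <5 % grouping rule described above can be sketched as a simple data-preparation step. The sketch below uses pandas with hypothetical counts; the `collapse_rare_categories` helper and its threshold default are illustrative, not the authors' actual procedure.

```python
import pandas as pd

def collapse_rare_categories(series, threshold=0.05, label="low frequency groups"):
    """Pool categories occurring in fewer than `threshold` of cases into one label."""
    freqs = series.value_counts(normalize=True)
    rare = freqs[freqs < threshold].index
    # Keep common categories as-is; replace rare ones with the pooled label.
    return series.where(~series.isin(rare), label)

# Hypothetical ethnicity records: Mixed, Asian, and Other each fall below 5 %.
ethnicity = pd.Series(
    ["White"] * 80 + ["Black"] * 10 + ["Mixed"] * 4 + ["Asian"] * 3 + ["Other"] * 3
)
grouped = collapse_rare_categories(ethnicity)
print(grouped.value_counts())
```

With these counts, White (80 %) and Black (10 %) survive, while the three rare categories are pooled into a single "low frequency groups" category of 10 youths.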

Case Characteristics

To measure case characteristics, 44 items of the Current View questionnaire (Jones et al. 2013) were used to capture presenting problems and complexity factors. The Current View questionnaire is completed by therapists during an initial assessment appointment, with guidance and training available for scoring. In particular, 30 items capture presenting problems (e.g., “Anxious away from caregivers (Separation anxiety)”). Presenting problems occurring with a frequency of <5 % were grouped into “other presenting problems” to avoid including under-powered groups in the main analysis (i.e., psychosis, elimination problems, mutism, gender discomfort, and adjustment to a physical health problem). In addition, 14 items capture complexity factors (e.g., “Young carer status”). Complexity factors occurring with a frequency of <5 % were grouped into “other complexity factors” (i.e., young carer, learning disability, physical health condition, neurological disorder, child protection plan, refugee or asylum seeker, experience of war, and involvement with youth justice system). Therapists responded to the presenting problem items on a four-point scale from none (0) to severe (3); responses were recoded so that none (0) indicated absent (0) and mild to severe (1–3) indicated present (1).2 Therapists responded to the complexity items as no (0) or yes (1). Number of sessions attended was captured as part of routine data recording (M = 12.31, SD = 13.58, range 1–151).
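The recoding of the Current View presenting problem items can be illustrated as follows. The data and column names (`mood`, `anxiety`) are hypothetical; treating incomplete items as absent follows the coding rule described in footnote 2.

```python
import numpy as np
import pandas as pd

# Hypothetical Current View responses for three youths:
# 0 = none ... 3 = severe; NaN = item left incomplete.
items = pd.DataFrame({
    "mood": [0, 2, np.nan],
    "anxiety": [1, 0, 3],
})

# Recode: none (0) -> absent (0); mild/moderate/severe (1-3) -> present (1).
# Incomplete items are treated as absent, per the footnoted coding rule.
binary = items.fillna(0).gt(0).astype(int)
print(binary)
```

The second youth, for example, ends up coded present (1) for mood and absent (0) for anxiety.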

Sessional Monitoring

The number of sessions in which sessional measures were used was captured as part of routine data recording. The use of routine measures in at least two sessions was implemented as part of the CYP IAPT programme (Law and Wolpert 2014). Therefore, sessional monitoring was coded as 1 (any sessional measure used in at least two sessions) or 0 (no sessional measure used in any session). Overall, 49 % (1322) of youths had sessional monitoring and 51 % (1368) did not. Sessional measures included, for example, the Revised Children’s Anxiety and Depression Scale (Weiss and Chorpita 2011), the Goal Based Outcomes tool (Law 2011), and the Session Rating Scale (Duncan and Miller 2003). Therapists receive training in selecting and using sessional measures as clinically relevant and appropriate; see Law and Wolpert (2014) for further information.
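The binary coding of sessional monitoring can be sketched as below. The youth identifiers and counts are hypothetical, and coding a single session of measure use as 0 is an assumption consistent with the reported sample split (1322 + 1368 = 2690).

```python
# Hypothetical per-youth counts of sessions in which any sessional
# measure was completed.
sessions_with_measures = {"youth_a": 0, "youth_b": 1, "youth_c": 5}

# Monitoring is coded 1 when measures were used in at least two sessions.
sessional_monitoring = {
    youth: int(n >= 2) for youth, n in sessions_with_measures.items()
}
print(sessional_monitoring)
```

Only `youth_c`, with measures in five sessions, is coded as having sessional monitoring.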

Analytic Strategy

To examine the relationship of demographic and case characteristics with sessional monitoring, multilevel logistic regressions were conducted in STATA 12 (StataCorp 2011). Three multilevel logistic regressions were performed predicting sessional monitoring. In Model 1 (the null model), the intraclass correlation coefficient (ICC) was computed to examine the variance explained at the service level. In Model 2, the patient-level demographic characteristics were entered: gender (coded 1 for female); grand mean centred age, in line with recommendations (Hox 2010); and White, Black, and low frequency ethnic groups (each dummy coded 1, with not stated as the reference category). In Model 3, the patient-level case characteristics were entered: the 17 presenting problem variables described above (see “Case Characteristics” section), each dummy coded 1 for present; the six complexity factors; and grand mean centred number of sessions attended, to control for the expected relationship between a greater number of sessions attended and a greater likelihood of sessional monitoring. The likelihood ratio test was used to compare the fit of subsequent models.
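Two components of the analytic strategy, grand mean centring and the likelihood ratio test, can be sketched numerically. The ages and log-likelihood values below are illustrative, chosen so that the test statistic matches the χ2(5) = 35.11 reported in the Results.

```python
import numpy as np
from scipy import stats

# Grand mean centring: subtract the sample mean so that the model intercept
# refers to a youth of average age (or average session count).
age = np.array([8, 10, 12, 14, 16], dtype=float)
age_centred = age - age.mean()
print(age_centred)  # [-4. -2.  0.  2.  4.]

# Likelihood ratio test between nested models: 2 * (llf_full - llf_null),
# compared to a chi-squared distribution with df = number of added parameters.
# The log-likelihoods here are hypothetical.
llf_null, llf_full, added_params = -1800.0, -1782.445, 5
lr_stat = 2 * (llf_full - llf_null)
p_value = stats.chi2.sf(lr_stat, df=added_params)
print(round(lr_stat, 2), p_value < 0.001)
```

A statistic of 35.11 on 5 degrees of freedom is far into the tail of the chi-squared distribution, hence p < 0.001.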

Results

Results of analyses are shown in Table 2. In Model 1, the ICC revealed that 35 % of the variance in sessional monitoring was explained at the service level with 65 % residual or unexplained variance in sessional monitoring, indicating that multilevel regression was appropriate. Moreover, the amount of service-level variation was relatively large compared to previous research showing therapist effects of 6–9 % in treatment outcome and duration (Lutz et al. 2015).
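For a two-level logistic model, the ICC is conventionally computed with the level-1 residual variance fixed at π²/3, the variance of the standard logistic distribution. A minimal sketch, using a hypothetical between-service variance chosen to reproduce the reported 35 %:

```python
import math

def logistic_icc(service_level_variance):
    """ICC for a two-level logistic model: between-service variance over
    total variance, with the level-1 residual fixed at pi^2 / 3."""
    residual = math.pi ** 2 / 3  # ~3.29
    return service_level_variance / (service_level_variance + residual)

# A between-service variance of ~1.77 yields roughly the 35 % reported in Model 1.
print(round(logistic_icc(1.77), 2))
```

The actual between-service variance estimate is not reported in this excerpt; 1.77 is only an illustrative value that recovers ICC ≈ 0.35.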

Table 2
Multilevel logistic regression models with demographic and case characteristics predicting sessional monitoring

Adding demographic characteristics in Model 2 significantly improved the model fit compared to the null model but the ICC remained 35 %; likelihood ratio test χ2(5) = 35.11, p < 0.001. In particular, girls were more likely [odds ratio (OR) 1.26] to have sessional monitoring data than boys, youths were more likely (OR 1.10) to have sessional monitoring data with each additional year in age, and White youths were more likely (OR 1.35) to have sessional monitoring data than youths with unstated or missing ethnic identifiers.

Adding case characteristics in Model 3 significantly improved the model fit compared to Model 2 but the ICC increased to 40 %, suggesting that case characteristics explained individual-level, not service-level, variance; likelihood ratio test χ2(24) = 223.89, p < 0.001. In particular, irrespective of other presenting problems, youths presenting with mood or anxiety problems were more likely (OR 1.46 and 1.59, respectively) to have sessional monitoring data than youths presenting without these problems. In contrast, youths under state care or those in need of social service input were less likely to have sessional monitoring data than youths without these complexity factors (OR 0.20 and 0.39, respectively). Finally, in line with expectations, youths attending a greater number of sessions were more likely (OR 1.09) to have sessional monitoring than youths attending fewer sessions.
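The reported odds ratios are the exponentiated coefficients of the logistic model. A small sketch showing the correspondence between log-odds coefficients and the ORs reported above:

```python
import math

# An odds ratio is the exponentiated logistic regression coefficient.
# OR 1.46 for mood problems corresponds to a positive log-odds coefficient
# of ln(1.46); OR 0.20 for state care corresponds to a negative coefficient.
coef_mood = math.log(1.46)        # ~0.378
coef_state_care = math.log(0.20)  # ~-1.609

print(round(math.exp(coef_mood), 2))        # recovers 1.46
print(round(math.exp(coef_state_care), 2))  # recovers 0.2
```

ORs above 1 thus indicate a higher likelihood of sessional monitoring relative to the reference group, and ORs below 1 a lower likelihood.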

Discussion

The aim of the present research was to explore whether patient demographic (i.e., age, gender, and ethnicity) and case characteristics (i.e., presenting problems and complexity factors) were associated with the likelihood of using sessional monitoring in CAMHS. The findings suggest that there may be differences in the likelihood of sessional monitoring data being available for different groups of youths and families. Although the present research was not able to examine the mechanisms underlying these differences, possible explanations are discussed below.

In terms of demographic characteristics, girls were more likely to have sessional monitoring data than boys, which may be in line with evidence that females are more likely than males to seek mental health treatment (Oliver et al. 2005) and to complete questionnaires in general (McCarty 2006). Older youths were more likely to have sessional monitoring data than younger youths. This may reflect the fact that older youths may be more verbal, so therapists may feel more able to include their views, though the exact mechanism is not clear. The fact that youths with White ethnicities recorded were more likely to have sessional monitoring than youths with unstated or missing ethnic identifiers may reflect poorer data recording in general for the latter group, such that unstated or missing ethnic identifiers were also associated with missing information on sessional monitoring.

In terms of case characteristics, irrespective of other presenting problems, youths presenting with mood or anxiety problems were more likely to have sessional monitoring data than youths presenting without these problems. In contrast, youths under state care or those in need of social service input were less likely to have sessional monitoring data than youths without these complexity factors. These findings are consistent with evidence from qualitative studies suggesting that routine outcome and sessional monitoring may feel more acceptable and relevant to both therapists and service users in cases with more prevalent presenting problems, and may be less likely in complex cases (Moran et al. 2011; Wolpert et al. 2014). Similarly, as establishing a therapeutic alliance is an important facilitator of using measures in therapy (Stasiak et al. 2012), perhaps sessional monitoring was less likely when there were challenges to this alliance, such as in cases involving social services or youths under state care. Relatedly, therapists may have perceived that available sessional measures were not well suited to monitoring progress in more complex cases. As sessional monitoring may help to identify when patients are not responding to therapy and may therefore be likely to disengage from therapy (Gondek et al. 2016; Kluger and De Nisi 1996), it may be of particular importance to use sessional monitoring with complex cases. Future research should examine whether a wider range of sessional measures targeted at complex cases is needed, or whether training would help therapists select sessional measures for complex cases.

Limitations should be considered when interpreting the findings of the present research. First, we used naturalistic, routinely collected data as opposed to data collected under controlled conditions. Therefore, limitations of confounding variables and selection bias may apply (Gilbody et al. 2002), and future research is needed to replicate the findings presented here, particularly to explore which factors, such as therapist-level factors, explain the large amount of residual or unexplained variation. Second, given the inclusion criterion of having attended at least three sessions, only 40 % of the wider sample was included in this study, meaning systematic differences between the two samples may have influenced the present findings (see the “Participants and Procedure” section for a discussion of these differences). Third, we examined the presence versus absence of at least one sessional measure, and future research should examine whether demographic and case characteristics are associated with the frequency or dosage of sessional monitoring. Finally, the reasons for non-use of sessional monitoring were not available in the dataset; future research should interview therapists, youths, and carers when sessional monitoring has not been used in order to understand these reasons.

Notwithstanding the above limitations, the present research is the first to examine when sessional monitoring is more likely in CAMHS. The findings suggest that sessional monitoring is more likely when cases present with more common problems such as mood or anxiety problems but may be less likely when cases present with more complex problems, such as when youths are under state care or in need of social service input. These differences may relate to the likelihood of therapists choosing to use these measures with different populations of services users, or may relate to the likelihood of measures being completed by these different groups. Nevertheless, these findings may have important implications for service comparison, especially in light of the mounting drive to consider the impact and quality of service provision across healthcare through outcome measurement in order to demonstrate transparency and accountability (NHS England 2015). In child mental health, evidence is still needed to inform risk adjustment, or how to adjust for differences in expected treatment outcomes between services with different patient populations. If data quality comparisons in terms of sessional monitoring are considered when comparing services, this may advantage those services who see more youths with less complex difficulties and disadvantage those seeing more complex cases, suggesting that these case characteristics need to be taken into account when considering risk adjustment.

Acknowledgments

This is an independent report commissioned and funded by the Policy Research Programme in the Department of Health. The views expressed are not necessarily those of the Department. The Child Policy Research Unit (CPRU) is funded by the Department of Health Policy Research Programme. The authors would like to thank members of CPRU: Terence Stephenson, Catherine Law, Amanda Edwards, Ruth Gilbert, Steve Morris, Helen Roberts, Cathy Street, and Russell Viner. The authors would also like to thank all members of CORC, its committee at the time of writing (including M.W.): Ashley Wyatt, Duncan Law, Alison Towndrow, Tamsin Ford, Evette Girgis, Julie Elliott, Ann York, Mick Atkinson, and Alan Ovenden; and the CORC central team at the time of writing (including M.W.): Matthew Barnard, Jenna Jacob, Andy Whale, Elisa Napoleone, Victoria Zamperoni, Charlotte Payne, Kallum Rogers, Kate Dalzell, Craig Hamilton, Sally Wilson, Mark Garbett, Deborah Sheppard, Alison Ford, Slavi Savic, and Jeni Page.

Compliance with Ethical Standards

Conflict of interest

No conflicts to declare.

Footnotes

1Assuming the first session was assessment and the last session was discharge or case closure, there would be at least one treatment session in which a sessional measure could have been used (Law and Wolpert 2014).

2If at least one case characteristic item was completed, incomplete items were coded as absent.

References

  • Bickman L, Douglas SR, De Andrade ARV, Tomlinson M, Gleacher A, Olin S, Hoagwood K. Implementing a measurement feedback system: A tale of two sites. Administration and Policy in Mental Health and Mental Health Services Research. 2015 [PMC free article] [PubMed]
  • Bickman L, Kelley SD, Breda C, de Andrade AR, Reimer M. Effects of routine feedback to clinicians on mental health outcomes of youths: Results of a randomized trial. Psychiatric Services. 2011;62(12):1423–1429. doi: 10.1176/appi.ps.002052011. [PubMed] [Cross Ref]
  • Boswell JF, Kraus DR, Miller SD, Lambert MJ. Implementing routine outcome monitoring in clinical practice: Benefits, challenges, and solutions. Psychotherapy Research. 2013 [PubMed]
  • Carlier IVE, Meuldijk D, Van Vliet IM, Van Fenema E, Van der Wee NJA, Zitman FG. Routine outcome monitoring and feedback on physical or mental health status: Evidence and theory. Journal of Evaluation in Clinical Practice. 2012;18(1):104–110. doi: 10.1111/j.1365-2753.2010.01543.x. [PubMed] [Cross Ref]
  • Chen J, Ou L, Hollis S. A systematic review of the impact of routine collection of patient reported outcome measures on patients, providers and health organisations in an oncologic setting. BMC Health Services Research. 2013;13(1):211. doi: 10.1186/1472-6963-13-211. [PMC free article] [PubMed] [Cross Ref]
  • Cohen J. Statistical power analysis for the behavioral sciences. 2. New York: Erlbaum; 1988.
  • Department of Health . Talking therapies: A four-year plan of action. London: Personal Social Services Research Unit; 2011.
  • Douglas, S., Button, S., & Casey, S. E. (2014). Implementing for sustainability: Promoting use of a Measurement Feedback System for innovation and quality improvement. Administration and Policy in Mental Health and Mental Health Services Research. [PubMed]
  • Duncan BL, Miller SD. The Session Rating Scale: Preliminary psychometric properties of a “working” alliance measure. Journal of Brief Therapy. 2003;3:3–12.
  • NHS England. Future in mind: Promoting, protecting and improving our children and young people’s mental health and wellbeing. London: Department of Health; 2015.
  • Fleming I, Jones M, Bradley J, Wolpert M. Learning from a learning collaboration: The CORC approach to combining research, evaluation and practice in child mental health. Administration and Policy in Mental Health and Mental Health Services Research. 2014 [PubMed]
  • Gilbody SM, House AO, Sheldon TA. Outcomes research in mental health. The British Journal of Psychiatry. 2002;181:8–16. doi: 10.1192/bjp.181.1.8. [PubMed] [Cross Ref]
  • Gondek D, Edbrooke-Childs J, Fink E, Deighton J, Wolpert M. Routine outcome monitoring and treatment effectiveness, treatment efficiency, and collaborative practice: A systematic review. Administration and Policy in Mental Health and Mental Health Services Research. 2016 [PMC free article] [PubMed]
  • Hox J. Multilevel analysis: Techniques and applications. 2. Sussex: Routledge; 2010.
  • Jacob, J., Edbrooke-Childs, J., De Francesco, D., Deighton, J., Law, D., & Wolpert, M. (submitted). Goal formulation in child mental health settings: When is it more likely and is it associated with satisfaction with care? Administration and Policy in Mental Health and Mental Health Services Research.
  • Jones M, Hopkins K, Kyrke-Smith R, Davies R, Vostanis P, Wolpert M. Current view tool: Completion guide. London: CAMHS Press; 2013.
  • Kluger AN, De Nisi A. The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin. 1996;119:254–284. doi: 10.1037/0033-2909.119.2.254. [Cross Ref]
  • Knaup C, Koesters M, Schoefer D, Becker T, Puschner B. Effect of feedback of treatment outcome in specialist mental healthcare: Meta-analysis. The British Journal of Psychiatry. 2009;195(1):15–22. doi: 10.1192/bjp.bp.108.053967. [PubMed] [Cross Ref]
  • Law D. Goals and goal based outcomes (GBOs): Some useful information. London: CAMHS Press; 2011.
  • Law D, Wolpert M, editors. Guide to using outcomes and feedback tools with children, young people and families. 2. London: CAMHS Press; 2014.
  • Lutz W, Rubel J, Schiefele A-K, Zimmermann D, Böhnke JR, Wittmann WW. Feedback and therapist effects in the context of treatment outcome and treatment length. Psychotherapy Research. 2015;25(6):647–660. doi: 10.1080/10503307.2015.1053553. [PubMed] [Cross Ref]
  • McCarty C. Effort in phone survey response rates: The effects of vendor and client-controlled factors. Field Methods. 2006;18(2):172–188. doi: 10.1177/1525822X05282259. [Cross Ref]
  • Mellor-Clark J, Cross S, Macdonald J, Skjulsvik T. Leading horses to water: Lessons from a decade of helping psychological therapy services use routine outcome measurement to improve practice. Administration and Policy in Mental Health and Mental Health Services Research. 2014 [PubMed]
  • Moran P, Kelesidi K, Guglani S, Davidson S, Ford T. What do parents and carers think about routine outcome measures and their use? A focus group study of CAMHS attenders. Clinical Child Psychology and Psychiatry. 2011;17(1):65–79. doi: 10.1177/1359104510391859. [PubMed] [Cross Ref]
  • Oliver M, Pearson N, Coe N, Gunnell D. Help-seeking behaviour in men and women with common mental health problems: Cross-sectional study. The British Journal of Psychiatry. 2005;186(4):297–301. doi: 10.1192/bjp.186.4.297. [PubMed] [Cross Ref]
  • SAMHSA’s National Registry of Evidence-Based Programs and Practices. (2015). Partners for change outcome management system (PCOMS): International Centre for Clinical Excellence. Retrieved June 2015 from http://www.nrepp.samhsa.gov/ViewIntervention.aspx?id=249.
  • Stasiak K, Parkin A, Seymour F, Lambie I, Crengle S, Pasene-Mizziebo E, Merry S. Measuring outcome in child and adolescent mental health services: Consumers’ views of measures. Clinical Child Psychology and Psychiatry. 2012;18(4):519–535. doi: 10.1177/1359104512460860. [PubMed] [Cross Ref]
  • StataCorp . Stata statistical software: Release 12. College Station: StataCorp LP; 2011.
  • Weiss, D., & Chorpita, B. (2011). Revised children’s anxiety and depression scale: User’s guide. Retrieved from http://www.childfirst.ucla.edu/RCADSGuide20110202.pdf.
  • Wolpert M, Curtis-Tyler K, Edbrooke-Childs J. A qualitative exploration of clinician and service user views on patient reported outcome measures in child mental health and diabetes services. Administration and Policy in Mental Health and Mental Health Services Research. 2014 [PMC free article] [PubMed]
  • Wolpert, M., Deighton, J., Patalay, P., Martin, A., Fitzgerald-Yau, N., Demir, E., et al. (2011). Me and my school: Findings from the National Evaluation of Targeted Mental Health in Schools 2008–2011. Research Report DFE-RR177. London: Department for Education.
  • Wolpert M, Ford T, Trustam E, Law D, Deighton J, Flannery H, Fugard AJB. Patient-reported outcomes in child and adolescent mental health services (CAMHS): Use of idiographic and standardized measures. Journal of Mental Health. 2012;21(2):165–173. doi: 10.3109/09638237.2012.664304. [PubMed] [Cross Ref]
