
Development and Piloting of a Classroom-focused Measurement Feedback System

Abstract

The present study used a community-partnered research method to develop and pilot a classroom-focused measurement feedback system (MFS) for school mental health providers to support teachers’ use of effective universal and targeted classroom practices related to student emotional and behavioral issues. School personnel from seven urban elementary and middle school classrooms participated. Phase I involved development and refinement of the system through a baseline needs assessment and rapid-cycle feedback. Phase II involved detailed case study analysis of pre-to-post quantitative and implementation process data. Results suggest that teachers who used the dashboard along with consultation showed improvement in observed classroom organization and emotional support. Results also suggest that MFS use was tied closely to consultation dose, and that broader support at the school level was critical. Classroom-focused MFSs are a promising tool to support classroom improvement, and warrant future research focused on their effectiveness and broad applicability.

Keywords: school mental health, measurement feedback system, classroom practices

The past decade has seen increased attention to the use of quality improvement and accountability tools as a means toward improving practice and outcomes for children (e.g., APA Presidential Task Force on Evidence-Based Practice, 2006; Bickman, Kelley, Breda, de Andrade, & Riemer, 2011; Jensen-Doss & Hawley, 2010; U.S. Department of Health and Human Services, 2003). Particular attention has been given to the use of measurement feedback systems (MFSs), tools that provide ongoing assessment and data feedback to practitioners and families. In line with typical continuous quality improvement methods used across disciplines (e.g., Bambrick-Santoyo, 2010; Berwick, 1989; Deming, 1986; Juran, 1951; Park, Hironaka, Carver, & Nordstrum, 2013), MFSs allow practitioners to monitor progress during interventions, make course corrections, and track outcomes over time. The use of MFSs has become even more important with the passage of the Affordable Care Act (ACA), and increased requirements from states, counties, and other entities for providers to use evidence-based treatments and document outcomes (Bruns, Hoagwood, & Hamilton, 2008). MFSs also represent a health information technology that could be integrated into electronic health records, a component of the ACA (Buntin, Burke, Hoaglin, & Blumenthal, 2011).

There is growing empirical support for the use of MFSs in medical and mental health care (e.g., Bickman et al., 2011; Duncan & Pozehl, 2000; Lambert, Hansen, & Finch, 2001; Lambert, Harmon, Slade, Whipple, & Hawkins, 2005; Lambert, Whipple, Smart, Vermeersch, & Nielsen, 2001). Specifically, these studies have shown that when clinicians receive feedback routinely (e.g., alerts related to client deterioration, current symptom levels), clients have either demonstrated improved symptom outcomes (Bickman et al., 2011) or markedly less deterioration (Lambert et al., 2005) when compared to clients of clinicians who did not receive feedback. However, with a focus on weekly client contacts and individual-level interventions and outcomes, these systems have limited applicability for providers who work in the other main setting for child mental health intervention – schools. Although many school-based providers provide weekly individual treatment, they are increasingly aligning their services to support the learning goals of students and schools, thereby addressing mental health and school functioning. Moreover, because school interventions can involve a broad range of personnel – including teachers, social workers, counselors, and others (e.g., Owens et al., 2014) – the school context necessitates the development of MFSs that are (a) applicable to the range of providers in schools, (b) adaptable to school-related functional outcomes, and (c) usable in the classroom context.

The continuous improvement framework inherent to MFSs is well-aligned with reflective teaching and shared learning – two approaches to teacher professional development (Bryk, Gomez, & Grunow, 2010; Park et al., 2013). A classroom-focused MFS is a critical practice improvement tool that can be used as a stand-alone or in conjunction with broader social-emotional curricula, prevention models, or targeted intervention models for students’ emotional and behavioral needs (e.g., Domitrovich, Cortes, & Greenberg, 2007; Embry, 2002; Reddy, Newman, De Thomas, & Chun, 2009). MFSs in education settings have been used effectively to guide classroom instructional practice based on student performance data (e.g., Rose & Church, 1998; Stecker, Lembke, & Foegen, 2008; Tuckman & Yates, 1980; Ysseldyke & Tardrew, 2007), but are rarely used to guide classroom practices related to student behaviors. There is growing evidence that performance feedback to teachers regarding their behavioral and academic interventions improves teacher practices (DiGennaro, Martens, & Kleinmann, 2007; Duchaine, Jolivette, & Fredrick, 2011; Duhon, Mesmer, Gregerson, & Witt, 2009). Studies in which teachers are provided with visual performance feedback that graphically depicts teacher or student behavior also have demonstrated promise in improving teachers’ intervention integrity and fidelity as part of consultation programs for universal classroom behavioral strategies (Becker, Bradshaw, Domitrovich, & Ialongo, 2013; Becker, Darney, Domitrovich, Keperling, & Ialongo, 2013; Hawkins & Heflin, 2010; Reinke, Lewis-Palmer, & Martin, 2007). The MFS developed in the current study builds on this prior work in education and recent advancements in the mental health context in which MFSs promote ongoing practitioner self-monitoring (e.g., Chorpita, Bernstein, & Daleiden, 2008).

The present study describes the development and initial pilot of an MFS specifically designed for use by teachers in support of school district classroom practice improvement efforts. Similar to trends towards outcome monitoring in health care, recent educational policies promote accountability for teaching practices and outcomes, and many states are either promoting or mandating the use of teacher evaluation systems designed to identify teachers in need of support (National Council on Teacher Quality, 2012). MFSs focused on classroom practices and student behavior can be an important tool in this process. The MFS described in the current study was developed with two major goals in mind. First, it was paramount that the system be practically useful to teachers and aligned with progress indicators of interest to educators. Second, the system needed the flexibility to track both classroom-level and target (identified) student-level outcomes.

To accomplish this, the MFS was developed and piloted within the context of implementing an intervention – BRIDGE – designed to bridge mental health and educational practice around the needs of students with emotional and behavioral disorders. BRIDGE is a teacher consultation and support model (Cappella et al., 2012) that builds on prior research demonstrating that expert consultation or coaching within a continuous quality improvement framework shows promise in improving teacher practices (e.g., Becker, Bradshaw, et al., 2013; Brown, Jones, LaRusso, & Aber, 2010; Pianta, Mashburn, Downer, Hamre, & Justice, 2008; Reinke, Stormont, Webster-Stratton, Newcomer, & Herman, 2012). BRIDGE is distinguished by its use of providers indigenous to schools (e.g., lead teachers, clinical social workers, behavioral support specialists) to provide consultation to teachers, and its inclusion of both universal and targeted classroom intervention strategies (Cappella et al., 2012). BRIDGE was found to be effective in improving classroom and student outcomes in an RCT involving 36 urban elementary school classrooms (Cappella et al., 2012). The current study developed and piloted the MFS within the quality improvement component of BRIDGE, which included the use of coaching, feedback, and reflective teaching practice; however, the MFS was designed with the broad intent to support classroom practice improvement efforts beyond this component as well.

In both developing and piloting the MFS, the role of implementation variables was considered. Major theoretical models of implementation all emphasize the importance of the interdependent, multi-level factors that impact program implementation at the innovation, individual, organizational, and systems levels (e.g., Aarons, Hurlburt, & Horwitz, 2011; Domitrovich et al., 2008; Fixsen, Blase, Metz, & Van Dyke, 2013). Across models, critical factors include the characteristics of the innovation itself (ease of use, flexibility, adaptability, innovation-setting fit), individual practitioner factors (characteristics, attitudes, perceptions), organizational factors (climate and culture, mission/policy alignment, leadership, local expertise), and outer context or systems factors (policies, financing, organizational partnerships, relations with intervention developers, leadership). Factors thought to be particularly germane to the implementation of the MFS among school personnel included a supportive school administration, the presence of supportive colleagues (Gamoran, Secada, & Marrett, 2000), and teachers’ stress and job burnout (Evertson & Weinstein, 2006; Yoon, 2002). In the current study, we assessed teachers’ stress, burnout, and perceptions of professional community in order to characterize the implementation context. We also tracked implementation process variables (e.g., dosage of consultation, usage of the dashboard, consultant observations of teacher classroom practices) in order to understand the use of the MFS within the context of the broader BRIDGE program. Finally, we assessed classroom-level and student-level outcomes in order to gauge the promise of the combined MFS and intervention model.

The current paper describes two phases of the MFS development and piloting process. In Phase I, we describe the community-partnered, iterative research process involved in the design and refinement of a classroom-focused MFS. In Phase II, we present pilot data on student and classroom outcomes and the usage of the MFS in relation to implementation variables by examining detailed case study data on seven classrooms. The goal of the study was to develop a classroom-focused MFS that could be used within the context of implementation of a teacher consultation and support model, and meet the needs of both educators and mental health practitioners.

Phase I: Initial Development

Method

The goal of the Phase I portion of the study was to use a community-partnered, iterative research approach to develop and refine a classroom-focused MFS. The Phase I development process was conducted in line with community partnered research strategies articulated for the implementation context by Wells et al. (2006). In line with this model, the team operated by jointly identifying priority areas for research and action (e.g., teacher training related to classroom behavioral issues and students’ emotional needs); matching community needs, resources, and values with evidence-based intervention (e.g., BRIDGE, development of an MFS); and engaging in development, adaptation, and implementation through a collaborative process.

Setting and Participants

The study sample included school personnel working in seven classrooms serving students with behavioral and emotional problems in seven schools (4 elementary, 3 middle) in an urban school district. These classrooms were identified for this project by the district as being high priority for support services. Teachers were approached by research staff, and all agreed to participate in the study. There were three 6th–8th grade classrooms, three 3rd–5th grade classrooms, and one K–2nd grade classroom. Each classroom had six students and was housed within a general education school. The target classroom was typically the only self-contained classroom in the school for students classified with behavioral and emotional disturbances. One teacher and one aide from each classroom participated in the study (N = 14), along with 3 district-employed clinical social workers who served as behavioral consultants to the seven classroom teachers. Two social workers worked with three classrooms each and the third worked with one classroom. School personnel were an average of 38.7 years old (SD = 8.16) and 54% were male. Fifty-three percent of the sample identified as White, 11.7% as African American, 23.5% as Latino, and the remaining participants were of mixed or other backgrounds. De-identified data collected by the district on students’ emotional and behavioral functioning were available for 38 of 42 students across the seven classrooms (76% male; 82% free or reduced lunch; 65.8% Black, 23.7% Latino, 7.9% White, 2.6% mixed or other backgrounds). Approximately 7% of the students scored in the proficient range for language arts and 13% scored as proficient in math on their state standardized tests.

Procedures

In Phase I, data collection included classroom, teacher, and student measures that comprised an assessment of baseline needs and implementation context. The measures reflected issues the district identified as most concerning (e.g., student behavior and exposure to trauma, teacher stress, and classroom dynamics). School personnel provided consent for participation in the study. Student-level data were gathered by the school district and provided to the research team in a de-identified format that could not be linked to individual students. Data also included research field notes that were used to refine the MFS. All procedures were approved through the university institutional review board and the school district’s research review board.

The BRIDGE teacher consultation model

The MFS was developed within the context of the implementation of the BRIDGE teacher consultation model (Cappella et al., 2012). As part of the community-partnered research process used for the current project, we conducted informal focus groups with teachers and social workers and held a series of meetings with district leadership to assess needs. The focus groups were not formally recorded and coded. Thorough notes were taken and summarized according to the major issues identified. The information was communicated back to school personnel orally and in writing to ensure that it accurately captured their perspectives. Synthesis of this information by the research team revealed three primary concerns: 1) teachers felt stress related to the overwhelming academic and social-emotional needs of their students and wanted direct support for the classroom, 2) teachers and district leaders identified a need for a systematic and consistent approach to classroom behavior management and climate across classrooms, and 3) teachers and district leaders felt that teachers needed additional training in how to teach students with intense behavioral needs and high levels of exposure to trauma and chronic stress. Because of its focus on universal and targeted student needs, its use of in-district consultants, and its alignment with school district policies related to teacher practice improvement, BRIDGE was selected for implementation. The MFS was thought to be an important tool to promote the use of effective classroom practices, and a decision was made to develop and field it in this context.

As noted above, BRIDGE deploys in-district consultants embedded within schools to work directly with teachers to support students identified with mental health issues (targeted support) and promote the creation of effective classrooms more broadly (universal support). Effective classrooms are defined via the Classroom Assessment Scoring System (CLASS; Pianta, La Paro, & Hamre, 2008), a standardized and validated tool for assessing the quality of teacher-student interactions. In BRIDGE, teachers are supported to use evidence-based universal and targeted strategies that fit the identified needs of their classrooms and target students and align with CLASS dimensions of Emotional Support (e.g., positive climate, teacher sensitivity) and Classroom Organization (e.g., behavior management, productivity). School personnel are provided with a roadmap for choosing these evidence-based strategies (or practice “kernels”; Embry, 2002), and in-district consultants (i.e., clinical social workers or lead teachers) coach teachers to implement strategies and monitor their effects on classrooms and target students.

The continuous improvement process that underlies BRIDGE is the specific context for the development of the MFS. In the initial BRIDGE meeting between teachers and consultants, one or two CLASS dimensions are identified as focal dimensions to target in the continuous improvement process. Then, dyads engage in improvement cycles focused on (a) assessment of classroom and student needs (aligned with the CLASS dimensions), (b) reflection (collaborative problem-solving, selection of BRIDGE strategies), and (c) action (implementation of strategies, monitoring of progress). The MFS was designed to support this continuous improvement process after the initial BRIDGE meeting and assessment step. The MFS was created as a self-monitoring tool to facilitate faithful implementation of empirically-supported strategies and to determine when to implement additional strategies. Moreover, although the MFS was built to fit with BRIDGE, the school district wanted a system that could apply across a range of classroom improvement efforts. BRIDGE was a useful platform to support the development of such a system. In addition, it was well-aligned with state policies mandating a teacher evaluation system to identify teachers in need of support, feedback, and professional development (State of New Jersey Department of Education).

During the study period, it was expected that BRIDGE consultants would observe a classroom 3 to 5 times (20–30 minutes each time) and meet with teachers 3 to 5 times (20–30 minutes each time). Observations could involve watching the teacher work with students or implement strategies, and supporting the teacher by modeling techniques and observing the teacher’s use of techniques. Meetings included reviewing videos of effective practices, discussion of classroom observations and implementation of strategies, and/or assessing progress. The MFS would serve as a self-monitoring tool for teachers during the continuous improvement process and a guide for consultation meetings. Initial BRIDGE training was conducted jointly for teachers and in-district clinical social workers – the BRIDGE consultants – in a one-day session. In addition, the study principal investigator (PI) met with the BRIDGE consultants for two hours on a bi-weekly basis. Additional phone meetings were conducted as needed. BRIDGE was implemented over a 14-week time period (March to June 2012). The MFS was both developed and pilot tested within this time frame.

Baseline Needs Assessment

Measures used to assess baseline classroom, student, and teacher needs and the implementation context are described below.

Classroom Assessment Scoring System (CLASS)

The CLASS (Pianta, La Paro, et al., 2008) is an observational measure that assesses classroom quality in three domains: Emotional Support (positive climate, negative climate, teacher sensitivity, regard for student perspectives), Classroom Organization (behavior management, productivity, instructional learning formats), and Instructional Support (concept development, quality of feedback, and language modeling). The ten dimensions are scored on a 7-point scale ranging from 1 (low) to 7 (high). Each dimension contains a detailed overall description, behaviorally anchored scale points, and behavioral indicators (Mashburn et al., 2008). Each dimension is coded four times per teacher during one observational period. Reliabilities in prior research on BRIDGE ranged from .79 to .86 (Cappella et al., 2012). CLASS observations were conducted by a trained and reliable independent observer at baseline and at the end of the school year. The CLASS observer also took qualitative notes to contextualize and complement CLASS coding, identifying areas of strength and challenge for each classroom for the CLASS domains as well as any contextual factors that could have influenced the ratings. The CLASS domains of Emotional Support and Classroom Organization were used in the current study.
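To make the domain scoring concrete, here is a minimal sketch of how domain scores could be computed from cycle-level dimension ratings. It is illustrative only: the ratings are invented, and the reverse-scoring of negative climate (8 minus the rating) is an assumption based on standard CLASS scoring conventions rather than a detail reported in this paper.

```python
# Illustrative sketch (not the authors' scoring code): CLASS domain scores
# computed by averaging dimension ratings across the four coding cycles.
from statistics import mean

# Invented ratings on the 7-point CLASS scale, one value per coding cycle.
cycles = {
    "positive_climate":        [4, 5, 4, 4],
    "negative_climate":        [2, 3, 2, 2],   # reverse-scored below (assumption)
    "teacher_sensitivity":     [4, 4, 5, 4],
    "regard_for_perspectives": [3, 4, 3, 4],
    "behavior_management":     [4, 4, 4, 5],
    "productivity":            [5, 5, 4, 5],
    "instructional_formats":   [4, 3, 4, 4],
}

def dimension_score(name: str) -> float:
    score = mean(cycles[name])
    # Negative climate contributes reverse-scored to Emotional Support.
    return 8 - score if name == "negative_climate" else score

emotional_support = mean(dimension_score(d) for d in (
    "positive_climate", "negative_climate",
    "teacher_sensitivity", "regard_for_perspectives"))
classroom_organization = mean(dimension_score(d) for d in (
    "behavior_management", "productivity", "instructional_formats"))

print(f"Emotional Support: {emotional_support:.2f}")             # 4.44
print(f"Classroom Organization: {classroom_organization:.2f}")   # 4.25
```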

Strengths and Difficulties Questionnaire (SDQ)

The teacher-report version of the SDQ was used to assess student baseline needs. The SDQ contains 25 items, 20 assessing problem areas (emotional symptoms, conduct problems, hyperactivity/inattention, and peer relationship problems), and 5 assessing prosocial behavior (Goodman, 1999; Goodman, Meltzer, & Bailey, 1998). The measure compares favorably to the Child Behavior Checklist (Goodman & Scott, 1999), and distinguishes between clinical and non-clinical samples. In the present study, analysis focused on the total difficulties, conduct problems, and emotional distress subscales.

Exposure to Violence Scale

For students eight years and older, lifetime exposure to violence was assessed via a modified version of the Exposure to Violence Scale (Singer, Anglin, Song, & Lunghofer, 1995). The scale included items related to witnessing violence (3 items assessing witnessing threats of harm, others being beaten, and others being slapped/hit/or punched), personal victimization (3 items assessing personal threats of harm, being beaten, and being slapped/hit/or punched), and exposure to weapon-related victimization (2 items related to being shot with a gun or stabbed with a knife or witnessing such violence). Summed total scores were used as part of the baseline needs assessment.

Professional Quality of Life Scale (ProQOL)

The Professional Quality of Life Scale (ProQOL) (Stamm, 2010) is a 30-item self-report measure that assesses school personnel’s compassion satisfaction (e.g., caregiving as a positive experience, self-efficacy), secondary traumatic stress, and risk of burnout (e.g., hopelessness, helplessness). Alphas for subscales range from .72 to .87, indicating adequate internal consistency (Stamm, 2010). The ProQOL categorizes scores as high, average, or low. The measure was used at baseline to identify possible school personnel support needs.

Professional Community Index (PCI)

The PCI (Bryk & Schneider, 2002) includes 29 items to assess the extent to which six elements of professional community (i.e., deprivatized practice, collaboration, reflective dialogue, focus on student learning, collective responsibility, and teacher socialization) are present in schools. PCI total scores were used to assess teachers’ perceived support in their schools. Given the importance of administrative support, we separately examined the PCI item assessing principal-teacher collaboration, “The principal, teachers, and staff collaborate to make this school run effectively.”

Field notes

During project meetings with school personnel, researchers took notes on multi-level implementation factors observed to be related to the development and implementation of the MFS. These notes were taken using a structured template in order to facilitate their immediate use in the field. Specifically, barriers and facilitators to use of the MFS as part of this project were categorized according to implementation level (e.g., school administrator, district, innovation, and teacher levels). This information was used to refine the MFS and to identify specific areas for implementation support.

Baseline Needs Assessment Findings

Table 1 summarizes the baseline CLASS scores as well as the primary challenges in each classroom identified from CLASS observer and BRIDGE consultant notes. At baseline, all classrooms scored in the mid-range (3–5) for the CLASS Emotional Support and Classroom Organization subscales, indicating moderate or inconsistent use of effective classroom practices (Office of Head Start, 2014). With respect to student needs, the total difficulties scores on the SDQ revealed that 68% of students were in the abnormal range and 24% of students were in the borderline range (M = 25.33, SD = 6.00). Baseline SDQ scores for emotional distress revealed that 13% of students were in the abnormal range and 13% were in the borderline range (M = 4.03, SD = 2.74). On the conduct problems subscale, 68% were in the abnormal range and 11% were in the borderline range (M = 6.33, SD = 2.63). All 25 students who completed the self-report trauma exposure measure reported exposure to at least one traumatic event, and 66% reported experiencing 9 or more lifetime traumatic events (M = 9.38, SD = 3.94).
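For readers unfamiliar with SDQ score banding, the sketch below shows how summed subscale scores are typically classified into the normal, borderline, and abnormal ranges reported above. The cutoffs and scores here are placeholders for illustration only; the actual thresholds come from the SDQ’s published teacher-report norms (e.g., Bourdon et al., 2005).

```python
# Hypothetical sketch of SDQ banding; cutoff values are placeholders,
# not figures from this study -- consult published SDQ norms for real ones.
def sdq_band(score: int, borderline_cut: int, abnormal_cut: int) -> str:
    """Classify a summed SDQ score into normal/borderline/abnormal bands."""
    if score >= abnormal_cut:
        return "abnormal"
    if score >= borderline_cut:
        return "borderline"
    return "normal"

# Band a set of invented total difficulties scores and report percentages,
# mirroring the way baseline needs were summarized.
scores = [25, 31, 18, 14, 27, 22]
bands = [sdq_band(s, borderline_cut=12, abnormal_cut=16) for s in scores]
for label in ("abnormal", "borderline", "normal"):
    print(f"{label}: {100 * bands.count(label) / len(scores):.0f}%")
```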

Table 1
CLASS Means from Baseline to Post-test

With respect to experiences of professional quality of life and professional community among school personnel, the overall means for secondary trauma exposure (M = 18.75, SD = 3.99), burnout (M = 23.22, SD = 2.91), and compassion satisfaction (M = 43.00, SD = 3.77) were in the normal range. Individual scores were also in the normal range with the exception of one teacher whose score suggested a risk of job burnout (i.e., score of 42 or more). Similarly, teachers tended to report high levels of professional community with their colleagues (M = 75.11, SD = 15.52). On the single item from the PCI addressing principal-teacher collaboration, there was a split, with four teachers reporting strong collaboration (agree or strongly agree), and three teachers either disagreeing or strongly disagreeing with the statement.

In addition to the CLASS, which was completed by an independent observer, BRIDGE consultants observed each classroom using the CLASS as a lens for understanding classroom interactions. In the BRIDGE consultation meetings that followed, teachers were asked to identify their top priority problems in the classroom (e.g., classroom-wide disruptions during transitions, individual student behaviors that derailed instruction). The dyad then collaboratively identified appropriate classroom-wide and targeted strategies from BRIDGE to address identified problems.

MFS Development

Within our overarching community-academic collaborative framework (e.g., Wells et al., 2006), the research team used a data-driven approach to develop and refine the MFS. The team jointly determined interest in developing an MFS that could be used as part of BRIDGE. Because of BRIDGE’s use of both targeted and universal strategies, it was thought that this context would prove a valuable testing ground that could lay the foundation for potential expansion of the MFS to other district practice improvement efforts. The next step involved selecting an MFS model that had prior success and could be adapted and piloted. The team decided to adapt the PracticeWise Clinical Dashboard (Chorpita et al., 2008), a Microsoft Excel-based tool that was developed in the child psychotherapy context to support clinical decision-making through ongoing progress monitoring. This dashboard has been used by school-based clinicians conducting individual therapy (Borntrager & Lyon, 2014) and can be configured to track educationally and clinically relevant progress indicators. Often used as a tool to support implementation of evidence-based psychotherapy elements through a modular framework (e.g., Chorpita et al., 2008; Chorpita & Daleiden, 2009), the PracticeWise dashboard provides a visual summary of individual client progress along with the history of clinical practices delivered in each session. Practitioners can configure the dashboard to track up to five measures of client progress at a time. These can include indicators that assess weekly progress outcomes, or standardized measures collected over longer intervals (PracticeWise, 2014).

Dashboard construction

Using the PracticeWise dashboard as a template, the initial version of the Classroom Practices Dashboard was constructed based on the BRIDGE model, including information gleaned from CLASS scores, student measures, observer notes, and consultation meetings between BRIDGE consultants and teachers. Instead of tracking individual psychotherapy practices, the dashboard included a list of empirically-supported classroom-wide and targeted student practices derived directly from BRIDGE. The practices were categorized according to the primary CLASS dimensions they were designed to address. The dashboard included a place for teachers to list the main classroom challenges, which were identified via synthesis of consultant observations, teacher report, and areas of difficulty identified on the CLASS. Teachers and consultants then selected universal and targeted progress indicators aligned to these specific classroom and target student needs.

Teachers could then check off the practices they implemented and the date(s) practices were used. In addition, teacher-consultant dyads were asked to select 3 to 5 indicators of progress in the classroom (e.g., number of disruptions) that mapped onto classroom needs or target student behaviors and were feasible to track. Although the research team provided examples, the consultants and teachers were encouraged to choose indicators of relevance to the identified needs in the classroom. Table 2 provides a list of CLASS domains and dimensions, sample BRIDGE practices, and sample progress outcomes.
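As a rough illustration of what dashboard setup captured, the sketch below models the configuration as plain data structures. All field names, practice names, and indicators are hypothetical examples of our own; the fielded tool itself was an Excel workbook built on the PracticeWise template.

```python
# Minimal, assumed data model for a Classroom Practices Dashboard setup.
from dataclasses import dataclass, field

@dataclass
class DashboardConfig:
    classroom: str
    main_challenges: list[str]        # teacher-identified priority problems
    focal_dimensions: list[str]       # CLASS dimensions targeted in BRIDGE
    practices: list[str]              # BRIDGE practices available to check off
    progress_indicators: list[str] = field(default_factory=list)  # 3 to 5 chosen

config = DashboardConfig(
    classroom="Classroom 2",
    main_challenges=["classroom-wide disruptions during transitions"],
    focal_dimensions=["Behavior Management"],
    practices=["precorrection before transitions", "behavior-specific praise"],
    progress_indicators=[
        "number of disruptions",
        "minutes to complete transition",
        "students on task at start of lesson",
    ],
)
assert 3 <= len(config.progress_indicators) <= 5  # dyads selected 3 to 5
```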

Table 2
List of BRIDGE Practices Aligned with CLASS Dimensions and Progress Indicators

Preliminary field-testing, feedback, and refinement

Once the Classroom Practices Dashboard template was distributed, teachers were given two weeks to complete their first dashboards, which were shared with the research team and consultants. With support from the study PI, consultants met with the teachers to assist with interpretation and obtain feedback on implementation issues at multiple levels (e.g., innovation-related issues, issues related to teacher comfort and skill, implementation supports). Notes were kept by the study PI using a guided template organized by multi-level implementation issues. Notes were updated at each point of contact with the consultants and teachers. The PI used these notes to identify the major emergent issues, which were then discussed with the team. This information was used to refine the dashboard with a focus on making rapid changes that could be implemented immediately. Due to this focus, formal qualitative analysis was not conducted. Rather, the PI noted the primary issues that emerged at each improvement cycle. These procedures are in line with continuous improvement methods such as the plan-do-study-act or reflective learning frameworks, in which practices are fielded, feedback is synthesized, and changes are made immediately (e.g., Deming, 1986; Park et al., 2013).

Table 3 summarizes the major implementation issues and resulting refinements to the dashboard or implementation support. The primary areas of feedback included improvements to the dashboard and teacher and consultant skills. Additional pieces of feedback related to school- and district-level supports. With respect to the dashboard, it became immediately clear that weekly reporting was not sufficient or useful for classroom teachers. With weekly tracking, for example, it was difficult to adequately interpret how often classroom practices were used, and the progress indicators were not sufficiently specific or concrete. As such, the team decided the dashboard would be most effective as a daily tool. Dashboards could then be updated to address pertinent issues as they evolved. However, due to concerns about feasibility and burden, it was determined that teachers would select one week to use the dashboard daily, take a week off, and then use it daily the following week. Additionally, because challenges in the classroom tended to follow predictable patterns (e.g., transitions between activities, behavior after lunch, disruptions during particular lessons), it was suggested that daily tracking focus on the specific times of the day that were most challenging to teachers (see Figure 1 for an example of the Classroom Practices Dashboard visual display).

Figure 1
Sample Progress Pane from the Classroom Practices Dashboard (copyright by PracticeWise, LLC; adapted and used with permission)
Table 3
Phase I Classroom Practices Dashboard during Rapid Refinement Process

A second implementation issue related to the development of skills in progress monitoring and in using Excel. For example, one teacher chose to track students’ behavioral problems on the bus. However, because the teacher could not directly observe behavior or intervene on the bus, it was not a feasible intervention target or progress indicator. To address such issues, the team generated a more comprehensive list of sample outcomes that could be tracked, and facilitated teachers’ cross-sharing of dashboards. In addition, some teachers did not feel comfortable with Excel. As an immediate solution, Excel skill-building was incorporated into BRIDGE consultation. However, because the team’s primary goal was supporting the use of effective classroom practices, at times teachers tracked outcomes and practices on paper, and consultants plotted these data on the dashboard for use in consultation meetings. BRIDGE consultants also varied in their comfort with Excel. To address this, the research team provided individual training to consultants on how to set up and interpret the dashboard with teachers (e.g., selecting outcomes to track, determining when and how to track, using the tool to make course corrections).

The remaining feedback related to broader implementation supports. These issues were not specific to the dashboard, and reflected implementation support needs relevant to a range of new practices. The teachers, BRIDGE consultants, and district staff all reported liking the dashboard, the range of practices and outcomes that could be included, and its broad applicability. Their feedback on implementation support was focused on three areas: 1) a desire for more frequent contact with their peers in different schools, 2) training for school administrators on the unique needs of students with emotional and behavioral disturbances, and 3) district-level support. Due to the time limits of the study, the resulting refinements were only partially implemented during the MFS pilot work; however, plans were made for support structures for future implementation efforts.

Discussion

Phase I focused on two primary tasks: (a) assessing baseline needs to inform construction of the dashboard, and (b) using rapid-cycle improvement methods to refine the dashboard for the BRIDGE context and to serve as a prototype for similar efforts in the district. The levels of student behavioral challenges in classrooms in this study are much higher than national norms (Bourdon, Goodman, Rae, Simpson, & Koretz, 2005), affirming both the tremendous need for behavioral and emotional support in these classrooms and the appropriateness of the empirically-supported classroom-wide and targeted strategies used in the intervention. Although there was variability in teachers’ perceptions of collaboration between teachers and principals, it is encouraging that the needs assessment revealed a workforce that felt interconnected despite being spread across seven schools. The needs assessment also alleviated some district concerns about levels of teacher stress. Together, this information was used to shape the initial version of the dashboard and implementation supports.

Feedback from research field notes revealed a number of critical improvements to the dashboard and how it was used, as well as multi-level implementation support needs that included training, consultation, and school and district-level support. When possible, this information was used for immediate improvements to the dashboard and how it was used. Of note, our finding that limited skills with the technology itself were a barrier is consistent with the literature on health information technology more broadly, which suggests the importance of skills training and the development of user-friendly technology (Buntin et al., 2011; Kellermann & Jones, 2013). Our Phase I study suggests that the rapid cycle feedback process was a feasible partnered research method to arrive at a dashboard tool that could be fielded immediately.

Phase II: Pilot Study

Final Classroom Practices Dashboard

Phase I of the current study yielded a classroom-focused MFS that was distributed for use throughout the rest of the BRIDGE implementation. The final Classroom Practices Dashboard had: (a) a full list of empirically-derived universal and targeted classroom practices that teachers could check off when they were used, and (b) open fields where teachers could select 3 to 5 universal classroom or target student progress indicators related to their primary identified problems (sample progress indicators were provided). It also included a sheet where teachers could make notes relevant to discussion with their consultants or interpretation of the dashboard. This construction gave the dashboard both the specificity it needed and the flexibility to allow its use outside of the BRIDGE context. Tracking was designed to occur daily for a week and to focus on a period of the day when teachers experienced challenges. Each day, teachers were expected to log the practices they used and the progress indicators they were tracking. Teachers had the option of using the dashboard on a one-week-on/one-week-off schedule. Figure 1 depicts an example of the output pane from the final dashboard.
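As a loose approximation of the progress pane shown in Figure 1, the snippet below plots one week of daily values for a single progress indicator. The data are invented and matplotlib stands in for the Excel chart; the point is simply the kind of visual summary a teacher and consultant would review.

```python
# Sketch of a progress-pane style display for one tracked indicator.
import matplotlib.pyplot as plt

days = ["Mon", "Tue", "Wed", "Thu", "Fri"]   # one "on" week of daily tracking
disruptions = [6, 5, 5, 3, 2]                # invented daily counts

plt.plot(days, disruptions, marker="o")
plt.title("Number of disruptions (transition after lunch)")
plt.xlabel("Day of tracking week")
plt.ylabel("Count")
plt.tight_layout()
plt.show()
```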

Use of the dashboard during the consultation process

The intent of the dashboard is to facilitate self-monitoring for teachers and the use of progress data to guide the consultation process. In the initial BRIDGE consultation meeting, teachers and consultants meet to reflect on baseline assessment information and teacher-identified classroom and student needs. The consultant-teacher dyad then selects intervention strategies aligned to these needs and engages in ongoing consultation using a continuous improvement lens. Dashboards are completed during these periods of action and reflection. Once teachers complete their dashboards for the week, they are able to review their tracking with the consultant. Discussion can focus on whether the practices that were planned were implemented, whether positive change has occurred in progress indicators, and how well the teacher perceives the process to have gone. Consultants can work with teachers to further support usage of the chosen practices or to add or shift practices based on practice usage and progress data. The intent of using the dashboard in consultation is that, over time, teachers might use the dashboard and classroom practices with increasing independence, and more routinely incorporate reflective teaching into their daily activities.

Method

In Phase II, we examine pilot data on classroom and student outcomes and implementation processes through a detailed case study analysis of seven classrooms.

Setting and Participants

The setting and participants for Phase II were the same as for Phase I noted above.

Procedures

Phase II assessment focused on feasibility, preliminary outcomes at classroom and student levels, and implementation processes related to using the dashboard within the BRIDGE intervention. At the classroom-level, we conducted detailed case studies of the seven classrooms using both qualitative and quantitative data sources (e.g., Yin, 2009). We also examined pre- to post-test differences in measures of student behavior and emotional distress.

Measures

Classroom and student outcome measures were gathered at post-test and involved a subset of Phase I measures. Classroom outcomes were assessed using the CLASS. Student outcomes were assessed using scales on the SDQ: total difficulties, conduct problems, and emotional distress. Implementation measures are described below.

Use of quality improvement tools

The frequency with which teachers completed the MFS tool was tallied across the 10-week period. As an indicator of consultation dosage, the frequency with which teachers interacted with their in-district BRIDGE consultants (either a coaching session or a classroom visit for observation) was tallied across the same time period.

Use of BRIDGE practices

Use of BRIDGE practices by teachers was abstracted from the BRIDGE consultants’ weekly structured notes. Consultants routinely visited all classrooms in order to work with or observe individual students for counseling. These visits often served as teacher observation and consultation sessions as well. The total number of BRIDGE practices observed each week (over 10 weeks) was used as an index of implementation dosage.

Research field notes

As in Phase I, researcher field notes were examined to catalog implementation factors at multiple levels and identify factors contributing to variations in implementation dosage.

Results

Classroom and student-level changes

Table 4 depicts the CLASS change scores from baseline to post-test for participating classrooms. As an indicator of meaningful improvement, we used a one-point increase. This is in line with prior research with BRIDGE and the CLASS, which reported standard deviations of about one point (Hamre et al., 2013). With respect to Emotional Support, 3 of the 7 classrooms made improvements of one point or more, and 3 classrooms showed trends in the positive direction. With respect to Classroom Organization, 4 classrooms showed improvement of about one point on the CLASS, and 2 classrooms had trends in a positive direction.
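Stated in code, the criterion flags a classroom as meaningfully improved on a domain when its post-test score exceeds baseline by at least one point. The scores below are invented for illustration and are not the values in Table 4.

```python
# One-point improvement criterion (~1 SD on the CLASS; Hamre et al., 2013).
baseline = {"Classroom A": 4.1, "Classroom B": 3.6, "Classroom C": 4.4}
posttest = {"Classroom A": 5.2, "Classroom B": 4.1, "Classroom C": 5.5}

for classroom in baseline:
    change = posttest[classroom] - baseline[classroom]
    improved = change >= 1.0
    print(f"{classroom}: change = {change:+.1f}, improved = {improved}")
```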

Table 4
Changes in CLASS Scores and Implementation Factors by Classroom

Paired samples t-tests revealed significant pre- to post-test reductions in teacher-reported total difficulties on the SDQ, t(35) = 2.21, p < .05 (baseline M = 25.33, SD = 6.00; post-test M = 22.17, SD = 5.47). Specifically, there were significant reductions in teacher-reported conduct problems on the SDQ, t(35) = 4.19, p < .001 (baseline M = 6.33, SD = 2.63; post-test M = 3.67, SD = 2.11). There were no changes in teacher-reported emotional distress on the SDQ.
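For replication, the analysis reduces to a paired-samples t-test on pre- and post-test scores. The sketch below shows the computation in scipy with placeholder data; applied to the study’s actual SDQ total difficulties scores (36 pairs), this is the test that yields the reported t(35) = 2.21.

```python
# Paired-samples t-test sketch; the arrays are placeholders, not study data.
from scipy import stats

pre = [25, 31, 18, 22, 27, 29]    # invented baseline SDQ totals
post = [22, 27, 19, 18, 24, 25]   # invented post-test SDQ totals

t_stat, p_value = stats.ttest_rel(pre, post)
df = len(pre) - 1
print(f"t({df}) = {t_stat:.2f}, p = {p_value:.3f}")
```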

Implementation factors

Table 4 depicts CLASS change scores and implementation process variables: MFS dosage, consultation dosage, and number of BRIDGE practices observed by consultants. A pattern emerged such that the teachers who used the dashboard most often (5 weeks of daily tracking) had more consultation sessions (9 or 10), more observed BRIDGE practices (35 to 38), and the most consistent improvement in Emotional Support and Classroom Organization. The other classrooms ranged from 4 to 7 consultation sessions and 1 to 3 completed dashboards. Levels of dashboard and consultation use were not confounded with consultant.

Examination of field notes related to multi-level implementation factors in Phase II focused on identifying factors that differentiated high and low dashboard users. The broad implementation issues identified in Phase II were similar to those observed in Phase I, most of which applied across the range of classrooms in the study. However, research field notes revealed varying levels of support from school administrations (consistent with Phase I teacher-report findings on the Professional Community Index). Each of the three classrooms with the highest dashboard use also had higher reported levels of support from school administration. However, one classroom with improvement in Classroom Organization was in a “low support” school (Classroom 3), and one classroom in a “high support” school did not show improvement on the CLASS (Classroom 7).

Discussion

Our Phase II results demonstrate pre- to post-test improvement in behavioral functioning for students in this study. Although there was no comparison group and results warrant cautious interpretation, they do provide some indication that the MFS-enhanced BRIDGE model has promise. There was also notable improvement in classroom functioning for four of the seven classrooms on Classroom Organization or on both Classroom Organization and Emotional Support, and positive trends for other classrooms. Again, with a small sample and no comparison group, causal interpretations are not warranted. Yet these trends are similar to those observed in a prior classroom-randomized study of BRIDGE (Cappella et al., 2012).

The results at the classroom level are also suggestive of classroom improvement trends with practical significance. According to guidelines for using the CLASS as a professional development tool, the classrooms in this study started and ended the study in the mid-range (scores of 3–5), which is indicative of moderate or inconsistent levels of effective classroom interactions (Office of Head Start, 2014). However, a score of 5 for Emotional Support has been identified as a critical threshold predicting positive student outcomes (Burchinal, Vandergrift, Pianta, & Mashburn, 2010). The three highest-implementation classrooms ended the study with scores of 5 or above on Emotional Support and Classroom Organization. Of note, Classroom 7, which did not show improvement, already had CLASS scores at the minimum threshold for predicting positive student outcomes at baseline (Burchinal et al., 2010). In previous research with BRIDGE, classrooms that began the study with higher CLASS scores showed less improvement than those with lower baseline scores (Cappella et al., 2012).

The results on implementation process suggest that within a short time frame, the team was able to develop and field an MFS that was used frequently by several teacher-consultant dyads. Notably, the teachers who used the dashboard the most also used consultation the most and tended to have supportive school administrations. Administrative support was thought to be non-specific to the dashboard or to BRIDGE; rather, it appeared to reflect a school environment supportive of the needs of students with severe emotional and behavioral issues. These findings suggest that, at least upon initial implementation, the classroom dashboard may not be ready to be used as a stand-alone self-monitoring tool by teachers. It did, however, appear to serve as a useful tool within the context of teacher consultation and support.

General Discussion and Implications

The current study represents an effort to develop and pilot an MFS to support teachers in addressing behavioral issues in the classroom. Overall findings across the two study phases indicate we were able to develop and field a classroom-focused MFS that was well-received by school personnel. This tool may be applicable to other school-based mental health interventions or classroom practice improvement efforts. Past efforts to use MFSs in classrooms have typically focused on singular practices or behaviors and relied on feedback from outside observers (Hawkins & Heflin, 2010; Reinke et al., 2007). Our experience suggests it is possible to construct an MFS in which teachers self-monitor and track classroom and target students’ responses to interventions. This sort of customizable MFS focused on tiered interventions and outcomes is important because it can meet the needs of both teachers and mental health staff in schools. Both groups are increasingly called upon to document outcomes for different – but complementary – purposes (e.g., APA Presidential Task Force on Evidence-Based Practice, 2006; Bruns & Hoagwood, 2008; National Council on Teacher Quality, 2012). In addition, the community-partnered research approach we used to rapidly develop, refine, and pilot the MFS could serve as a model for other real-world, community-based pilot efforts.

Importantly, our implementation data suggested the MFS was a useful tool within BRIDGE – an intervention model that includes ongoing consultation and continuous improvement. However, because MFS usage appeared to be tied closely to BRIDGE consultation and use of strategies, an important future direction for research is how such tools can be most effectively used to improve practice more broadly. It is an open question whether an MFS, like the one we developed, can be a stand-alone practice and outcome-monitoring tool, or whether such tools are more effective as part of broader intervention and improvement efforts. Our findings illuminate needs for implementation support at multiple levels, including individual training of the workforce (e.g., related to progress monitoring, assessment, and information technology) and broader school- and district-level supports. These findings are consistent with the broader literature on the multi-level barriers to implementing health information technology (Buntin et al., 2011; Cresswell, 2013; Kellermann & Jones, 2013), and suggest it may be challenging for school personnel to use MFSs separate from content-based intervention or improvement models. Future research to disaggregate the implementation and effects of MFSs is needed.

There are key limitations to the current study. Importantly, there was no control group and the sample size was small. As such, all interpretations of the pilot data must be made with caution. The detailed case studies, however, did suggest patterns of dashboard and consultation use that appear to be associated with positive changes in classroom and student outcomes. In addition, the study sample included the more behaviorally challenging classrooms in the school district. Although there was tremendous need in this sample for the types of interventions, consultation, and monitoring provided in this study, the sample may not be reflective of the general education population or even self-contained classrooms in other districts. For these reasons, further testing of this MFS and any associated consultation and intervention models should be conducted in other school contexts and with larger samples of classrooms.

In summary, the present study used a community-partnered approach to develop and field test an MFS for use in schools that can address both educational and clinical issues. The method demonstrates an iterative research process that could inform future university-community collaborative research. The MFS created in this study has the potential to be applied in a flexible manner by a range of school personnel to address student behavioral problems – an important contribution to current policy and practice efforts across child-service sectors.

Acknowledgments

Writing of this paper was supported by NIMH 5K01MH83694 (EN). The authors would like to acknowledge the contributions of all of our school district collaborators on this project, and the support of the PracticeWise, LLC team in the development of this manuscript.

Contributor Information

Erum Nadeem, New York University.

Elise Cappella, New York University.

Sibyl Holland, Harvard University.

Candace Coccaro, Jersey City Public Schools.

Gerard Crisonino, Jersey City Public Schools.

References

  • Aarons GA, Hurlburt M, Horwitz SMC. Advancing a conceptual model of evidence-based practice implementation in public service sectors. Administration and Policy in Mental Health and Mental Health Services Research. 2011;38(1):4–23. [PMC free article] [PubMed]
  • Allen JP, Pianta RC, Gregory A, Mikami AY, Lun J. An interaction-based approach to enhancing secondary school instruction and student achievement. Science. 2011;333(6045):1034–1037. [PMC free article] [PubMed]
  • APA Presidential Task Force on Evidence-Based Practice. Evidence-based practice in psychology. American Psychologist. 2006;61(4):271–285. [PubMed]
  • Bambrick-Santoyo B. Driven by Data: A practical guide to improve instruction. San Francisco: Jossey-Bass; 2010.
  • Becker KD, Bradshaw CP, Domitrovich C, Ialongo NS. Coaching teachers to improve implementation of the good behavior game. Administration and Policy in Mental Health and Mental Health Services Research. 2013;40(6):482–493. [PubMed]
  • Becker KD, Darney D, Domitrovich C, Keperling JP, Ialongo NS. Supporting universal prevention programs: A two-phased coaching model. Clinical Child and Family Psychology Review. 2013;16(2):213–228. [PMC free article] [PubMed]
  • Berwick DM. Continuous improvement as an ideal in health care. New England Journal of Medicine. 1989;320(1):53–56. [PubMed]
  • Bickman L, Kelley SD, Breda C, de Andrade AR, Riemer M. Effects of routine feedback to clinicians on mental health outcomes of youths: results of a randomized trial. Psychiatric Services. 2011;62(12):1423–1429. [PubMed]
  • Borntrager C, Lyon AR. Client progress monitoring and feedback in school-based mental health. Cognitive and Behavioral Practice 2014 [PMC free article] [PubMed]
  • Bourdon KH, Goodman R, Rae DS, Simpson G, Koretz DS. The Strengths and Difficulties Questionnaire: U.S. normative data and psychometric properties. Journal of the American Academy of Child & Adolescent Psychiatry. 2005;44:557–564. [PubMed]
  • Brown JL, Jones SM, LaRusso MD, Aber JL. Improving classroom quality: Teacher influences and experimental impacts of the 4Rs program. Journal of Educational Psychology. 2010;102(1):153.
  • Bruns EJ, Hoagwood KE. State implementation of evidence-based practice for youths, Part I: Responses to the state of the evidence. J Am Acad Child Adolesc Psychiatry. 2008;47(4):369–373. [PubMed]
  • Bruns EJ, Hoagwood KE, Hamilton JD. State implementation of evidence-based practice for youths, part I: Responses to the state of the evidence. Journal of the Academy of Child and Adolescent Psychiatry. 2008;47(4):369–373. [PubMed]
  • Bryk AS, Gomez L, Grunow A. Getting ideas into action: Building networked improvement communities in education. 2010 Retrieved August 21, 2013, from http://www.carnegiefoundation.org/spotlight/webinar-bryk-gomez-building-networkedimprovement-communities-in-education.
  • Bryk AS, Schneider BL. Trust in schools: A core resource for improvement. New York: Russell Sage; 2002.
  • Buntin MB, Burke MF, Hoaglin MC, Blumenthal D. The benefits of health information technology: a review of the recent literature shows predominantly positive results. Health Affairs. 2011;30(3):464–471. [PubMed]
  • Burchinal M, Vandergrift N, Pianta R, Mashburn A. Threshold analysis of association between child care quality and child outcomes for low-income children in pre-kindergarten programs. Early Childhood Research Quarterly. 2010;25(2):166–176.
  • Cappella E, Hamre BK, Kim HY, Henry DB, Frazier SL, Atkins MS, Schoenwald SK. Teacher consultation and coaching within mental health practice: Classroom and child effects in urban elementary schools. Journal of Consulting and Clinical Psychology. 2012;80(4):597–610. [PMC free article] [PubMed]
  • Chorpita BF, Bernstein A, Daleiden EL. Driving with roadmaps and dashboards: Using information resources to structure the decision models in service organizations. Administration and Policy in Mental Health and Mental Health Services Research. 2008;35(1–2):114–123. [PubMed]
  • Chorpita BF, Daleiden EL. Mapping evidence-based treatments for children and adolescents: Application of the distillation and matching model to 615 treatments from 322 randomized trials. Journal of Consulting and Clinical Psychology. 2009;77(3):566. [PubMed]
  • Deming WE. Out of the crisis. Cambridge, MA: MIT Press; 1986.
  • DiGennaro FD, Martens BK, Kleinmann AE. A comparison of performance feedback procedures on teachers’ treatment implementton integrity and students’ inappropriate behavior in special education classrooms. Journal of Applied Behavior Analysis. 2007;40(3):447–461. [PMC free article] [PubMed]
  • Domitrovich CE, Bradshaw CP, Poduska JM, Hoagwood K, Buckley JA, Olin S, Ialongo NS. Maximizing the implementation quality of evidence-based preventive interventions in schools: A conceptual framework. Advances in School Mental Health Promotion. 2008;1(3):6–27.
  • Domitrovich CE, Cortes RC, Greenberg MT. Improving young children’s social and emotional competence: A randomized trial of the preschool “PATHS” curriculum. The Journal of Primary Prevention. 2007;28(2):67–91. [PubMed]
  • Duchaine EL, Jolivette K, Fredrick LD. The effect of teacher coaching with performance feedback on behavior-specific praise in inclusion classrooms. Education and Treatment of Children. 2011;34(2):209–227.
  • Duhon GJ, Mesmer EM, Gregerson L, Witt JC. Effects of public feedback during RTI team meetings on teacher implementation integrity and student academic performance. Journal of School Psychology. 2009;47(1):19–37.
  • Duncan K, Pozehl B. Effects of performance feedback on patient pain outcomes. Clinical Nursing Research. 2000;9(4):379–397. [PubMed]
  • Embry DD. The Good Behavior Game: A best practice candidate as a universal behavioral vaccine. Clinical Child and Family Psychology Review. 2002;5(4):273–297. [PubMed]
  • Evertson CM, Weinstein CS. Handbook of classroom management: Research, practice, and contemporary issues. Mahwah, NJ: Lawrence Erlbaum Associates Publishers; 2006.
  • Fixsen D, Blase K, Metz A, Van Dyke M. Statewide implementation of evidence-based programs. Exceptional Children. 2013;79(2):213–230.
  • Gamoran A, Secada WG, Marrett CB. The organizational context of teaching and learning. In: Hallinan M, editor. Handbook of the Sociology of Education. Springer US; 2000. pp. 37–63.
  • Goodman R. The extended version of the Strengths and Difficulties Questionnaire as a guide to child psychiatric caseness and consequent burden. Journal of Child Psychology & Psychiatry. 1999;40:791–801. [PubMed]
  • Goodman R, Meltzer H, Bailey V. The Strengths and Difficulties Questionnaire: A pilot study on the validity of the self-report version. European Child and Adolescent Psychiatry. 1998;7:125–130. [PubMed]
  • Goodman R, Scott S. Comparing the Strengths and Difficulties Questionnaire and the Child Behavior Checklist: Is small beautiful? Journal of Abnormal Child Psychology. 1999;27:17–24. [PubMed]
  • Hamre BK, Pianta RC. Can instructional and emotional support in the first-grade classroom make a difference for children at risk of school failure? Child Development. 2005;76(5):949–967. [PubMed]
  • Hamre BK, Pianta RC, Downer JT, DeCoster J, Mashburn AJ, Jones SM, Hamagami A. Teaching through interactions: Testing a developmental framework of teacher effectiveness across 4000 classrooms. Elementary School Journal. 2013;113(4):461–487.
  • Hawkins SM, Heflin LJ. Increasing secondary teachers’ behavior-specific praise using a video self-modeling and visual performance feedback intervention. Journal of Positive Behavior Interventions. 2010;13(2):97–108.
  • Jensen-Doss A, Hawley KM. Understanding barriers to evidence-based assessment: Clinician attitudes toward standardized assessment tools. Journal of Clinical Child & Adolescent Psychology. 2010;39(6):885–896. [PMC free article] [PubMed]
  • Juran JM. Quality control handbook. New York: McGraw-Hill; 1951.
  • Kellermann AL, Jones SS. What it will take to achieve the as-yet-unfulfilled promises of health information technology. Health Affairs. 2013;32(1):63–68. [PubMed]
  • Lambert MJ, Hansen NB, Finch AE. Patient-focused research: Using patient outcome data to enhance treatment effects. Journal of Consulting and Clinical Psychology. 2001;69(2):159. [PubMed]
  • Lambert MJ, Harmon C, Slade K, Whipple JL, Hawkins EJ. Providing feedback to psychotherapists on their patients’ progress: clinical results and practice suggestions. Journal of Clinical Psychology. 2005;61(2):165–174. [PubMed]
  • Lambert MJ, Whipple JL, Smart DW, Vermeersch DA, Nielsen SL. The effects of providing therapists with feedback on patient progress during psychotherapy: Are outcomes enhanced? Psychotherapy Research. 2001;11(1):49–68. [PubMed]
  • Mashburn AJ, Pianta RC, Hamre BK, Downer JT, Barbarin OA, Bryant D, Howes C. Measures of classroom quality in prekindergarten and children’s development of academic, language, and social skills. Child Development. 2008;79(3):732–749. [PubMed]
  • Nastasi BK. Meeting the challenges of the future: Integrating public health and public education for mental health promotion. Journal of Educational and Psychological Consultation. 2004;15(3–4):295–312.
  • National Council on Teacher Quality. State of the States 2012: Teacher Effectiveness Policies. NCTQ State Teacher Policy Yearbook Brief. Washington, DC: National Council on Teacher Quality; 2012.
  • Office of Head Start. Use of the Classroom Assessment Scoring System (CLASS) in Head Start. 2014 Retrieved October 1, 2014, from http://eclkc.ohs.acf.hhs.gov/hslc/tta-system/teaching/eecd/Assessment/ClassroomAssessmentScoringSystem%28CLASS%29/ClassroomAssessm.htm.
  • Owens JS, Lyon AR, Brandt NE, Warner CM, Nadeem E, Spiel C, Wagner M. Implementation science in school mental health: Key constructs in a developing research Agenda. School Mental Health. 2014;6(2):99–111. [PMC free article] [PubMed]
  • Park S, Hironaka S, Carver P, Nordstrum L. Continuous improvement in education. Stanford, CA: Carnegie Foundation for the Advancement of Teaching; 2013.
  • Pianta RC, Allen JP. Building capacity for positive youth development in secondary school classrooms: Changing teachers’ interactions with students. Oxford Scholarship Online; 2008.
  • Pianta RC, La Paro KM, Hamre BK. Classroom Assessment Scoring System. Baltimore, MD: Paul H. Brookes Publishing Company; 2008.
  • Pianta RC, Mashburn AJ, Downer JT, Hamre BK, Justice L. Effects of web-mediated professional development resources on teacher–child interactions in pre-kindergarten classrooms. Early Childhood Research Quarterly. 2008;23(4):431–451. [PMC free article] [PubMed]
  • PracticeWise. PracticeWise. 2014 Retrieved September 15, 2014, from http://www.practicewise.com.
  • Reddy LA, Newman E, De Thomas CA, Chun V. Effectiveness of school-based prevention and intervention programs for children and adolescents with emotional disturbance: A meta-analysis. Journal of School Psychology. 2009;47(2):77–99. [PubMed]
  • Reinke WM, Lewis-Palmer T, Martin E. The effect of visual performance feedback on teacher use of behavior-specific praise. Behavior Modification. 2007;31(3):247–263. [PubMed]
  • Reinke WM, Stormont M, Webster-Stratton C, Newcomer LL, Herman KC. The Incredible Years teacher classroom management program: Using coaching to support generalization to real-world classroom settings. Psychology in the Schools. 2012;49:416–428.
  • Rose DJ, Church RJ. Learning to teach: The acquisition and maintenance of teaching skills. Journal of Behavioral Education. 1998;8(1):5–35.
  • Singer MI, Anglin TM, Song L, Lunghofer L. Adolescents’ exposure to violence and associated symptoms of psychological trauma. JAMA. 1995;273(6):477–482. [PubMed]
  • Skinner CH, Cashwell TH, Skinner AL. Increasing tootling: The effects of a peer-monitored group contingency program on students’ reports of peers’ prosocial behaviors. Psychology in the Schools. 2000;37(3):263–270.
  • Stamm BH. The concise ProQOL manual. 2nd ed. Pocatello, ID: ProQOL.org; 2010.
  • State of New Jersey Department of Education. AchieveNJ. 2013 May 7; Retrieved August 21, 2013, from http://www.state.nj.us/education/AchieveNJ/
  • Stecker PM, Lembke ES, Foegen A. Using progress-monitoring data to improve instructional decision making. Preventing School Failure: Alternative Education for Children and Youth. 2008;52(2):48–58.
  • Tuckman BW, Yates D. Evaluating the student feedback strategy for changing teacher style. The Journal of Educational Research. 1980:74–77.
  • U.S. Department of Health and Human Services. New Freedom Commission on Mental Health: Achieving the Promise: Transforming Mental Health Care in America. Final Report. Rockville, MD: Department of Health and Human Services; 2003.
  • Wells KB, Staunton A, Norris KC, Bluthenthal R, Chung B, Gelberg L, Wong M. Building an academic-community partnered network for clinical services research: the Community Health Improvement Collaborative (CHIC). Ethnicity and Disease. 2006;16(1 Suppl 1):S3–17. [PubMed]
  • Yin RK. Case Study Research: Design and Methods. 4th ed. Thousand Oaks, CA: Sage Publications; 2009.
  • Yoon JS. Teacher characteristics as predictors of teacher-student relationships: Stress, negative affect, and self-efficacy. Social Behavior and Personality: An International Journal. 2002;30(5):485–493.
  • Ysseldyke J, Tardrew S. Use of a progress monitoring system to enable teachers to differentiate mathematics instruction. Journal of Applied School Psychology. 2007;24(1):1–28.