J Natl Cancer Inst Monogr. 2012 May; 2012(44): 67–77.

Multilevel Interventions: Measurement and Measures

Abstract

Background

Multilevel intervention research holds the promise of more accurately representing real-life situations and, thus, with proper research design and measurement approaches, facilitating effective and efficient resolution of health-care system challenges. However, taking a multilevel approach to cancer care interventions creates both measurement challenges and opportunities.

Methods

One thousand seventy-two cancer care articles published from 2005 to 2010 were reviewed to examine the state of measurement in the multilevel intervention cancer care literature. Ultimately, 234 multilevel articles, 40 involving cancer care interventions, were identified. Additionally, literature from health services, social psychology, and organizational behavior was reviewed to identify measures that might be useful in multilevel intervention research.

Results

The vast majority of measures used in multilevel cancer intervention studies were individual level measures. Group-, organization-, and community-level measures were rarely used. Discussion of the independence, validity, and reliability of measures was scant.

Discussion

Measurement issues may be especially complex when conducting multilevel intervention research. Measurement considerations that are associated with multilevel intervention research include those related to independence, reliability, validity, sample size, and power. Furthermore, multilevel intervention research requires identification of key constructs and measures by level and consideration of interactions within and across levels. Thus, multilevel intervention research benefits from thoughtful theory-driven planning and design, an interdisciplinary approach, and mixed methods measurement and analysis.

Multilevel intervention research holds the promise of more accurately representing real-life situations than single-level intervention research and, thus, with the proper research design and measures, being more likely to facilitate the effective and efficient resolution of complex health-care systems problems (1–3). Taking a multilevel approach to cancer care interventions, however, creates measurement challenges and opportunities. Some measurement issues are similar to those arising in single-level interventions, whereas some are unique to multilevel interventions (4,5). The purpose of this article is to examine the state of measurement in multilevel cancer intervention research and to suggest how opportunities in the field may be addressed by drawing on the measurement literature in other fields.

One of the unique aspects of measurement in multilevel intervention research is that consideration of within- and between-level effects is needed. For multilevel interventions, it is not sufficient to measure effects at the different levels of intervention (eg, patient and health-care team); cross-level effects must also be taken into consideration (eg, the effect of patients on the team and the team on the patients). Furthermore, some measurement issues may be more complex when conducting multilevel intervention research, and some measurement rules of thumb may be uniquely associated with specific levels of analysis (eg, group or organization levels of analysis) (6,7). Because multilevel interventions are based on systems theory [whether general systems (8–10), social ecological systems (11,12), or complex adaptive systems (13) models], they require identification of key constructs and measures by level and consideration of interactions within and across levels (6). Thus, multilevel intervention research benefits from thoughtful planning and design, an interdisciplinary approach, mixed methods (eg, a combination of objective factors, surveys, interviews, observations) measurement and analysis, and careful attention to the congruence of theory, constructs, and measures.

Through a systematic review of the literature, this article examines the state of measurement in multilevel cancer care intervention research, identifies measurement opportunities in the field, and draws upon other fields to provide direction about measures and measurement to multilevel cancer care intervention researchers.

Literature Review

We conducted a systematic literature review focused on cancer care intervention research and measures published in the past 5 years. This contrasts with the review by Stange et al. in this monograph (14), which focuses on the broad spectrum of health care and prevention research over a 10-year period (2000–2010). Thus, our review provides a comprehensive overview of a more recent set of cancer-specific intervention studies.

Methods

Multiple bibliographic databases were searched to identify articles related to cancer care intervention research and measurement. Almost 1800 articles were identified, of which 1072 were sufficiently related to the topic to warrant further review. Ultimately, 234 multilevel articles were identified, 40 of which involved cancer care interventions. A level was defined by the degree or type of social aggregation associated with an intervention target. Thus, a single-level study targets a single type of social aggregation (eg, only individuals or only groups). Approaches to measurement and measures in these 40 studies were systematically analyzed. Supplementary Appendices A and B (available online) provide details about the search methodology and the article review and classification process.

Findings

Interventions and Levels of Analysis

Of 1072 articles reviewed, 78% (838 articles) concerned single-level studies. The distribution of studies across the cancer care continuum differed by level. Most of the single-level studies focused on treatment or survivorship or both, whereas the multilevel studies addressed mostly detection/screening, treatment, or survivorship.

Of the 234 multilevel articles, 40 (17%) involved interventions (randomized controlled trials or empirically evaluated nonrandomized interventions). These 40 articles were further classified by number and level of intervention targets and units of analysis. Supplementary Appendix C (available online) provides summaries of each article, including a description of the interventions, intervention targets, and measures. All 40 articles had multiple intervention targets and/or multiple measurement targets. However, for the majority of the studies, these targets were at a single level. The most common interventions (13 studies, 33%) focused on the individual patient and/or physician, with multiple intervention targets and multiple measurements at a single level. Interventions targeting groups, organizations, and communities were underrepresented.

Measures and Measurement in Multilevel Cancer Care Intervention Research

The 40 multilevel intervention articles represented 37 unique research studies. For each of the 37 studies, related articles were examined to identify the best source of information regarding each study’s measures and measurement approaches. This section summarizes findings regarding the types of measures used, their properties, and measurement approaches in the 37 studies.

The vast majority of measures used in the 37 studies were individual-level measures. Group-, organization-, and community-level measures were minimally present. Supplementary Appendix D (available online) provides a summary of measures and their properties by study. Objective measures of individual outcomes, such as completing a screening test (14–23,24), and measures of individual perceptions, knowledge, attitudes, or behavioral intentions (19,25–33) were the most common types of measures. Of the 37 studies, almost half (43%, 16 studies) relied upon reviews of patient charts or medical records to collect key outcome data, such as screening completion.

Two studies used group-level measures: one used a group consensus process in palliative care delivery teams to assess group satisfaction with elements of the program (34); another used quality circle discussions to identify barriers to screening (35). Two studies aggregated data to conduct analyses at the community level (32,36). Two studies analyzed care processes as part of their investigations (37,38).

The main outcome measures for randomized controlled trials were notably different from those of other types of empirical intervention studies. All of the randomized controlled trials used an individual-level objective measure (eg, patient-completed screening test) as the primary outcome measure, whereas only one-quarter of the other empirical intervention studies did. The latter studies were more likely to use changes in knowledge, perceptions, or intentions or changes in prescribing rates or abnormality detection as their primary outcome measure.

Of the 37 studies, 32% (12 studies) used some type of established measurement scale; 14% (five studies) mentioned creating a measurement scale. Although 29 different established measurement scales were used in the 12 studies, the most frequently used scales were health-related quality-of-life instruments, such as the European Organization for Research and Treatment of Cancer or Functional Assessment of Cancer Therapy instruments (35,38–41); depression instruments, such as the Hospital Anxiety and Depression Scale (38,40,41) or Center for Epidemiological Studies Depression Scale (37,40,42); and fatigue scales, such as the Brief Fatigue Inventory or Multidimensional Fatigue Inventory (21,28). Supplementary Appendix E (available online) has a complete list of these measures. However, many studies simply referred to pre- and post-intervention questionnaires and the general topics respondents were asked about, providing no details about the measures or instruments used (16,19,26,31,32,43–47). Others noted that they developed measures or adapted measures from another instrument but provided no details regarding the specifics of the adapted measures.

Discussion of measurement validity and reliability was scant in the 37 studies. Only 14% (five studies) discussed validity at all, 11% (four studies) made specific reference to the reliability of at least one measure, and another 11% (four studies) did not provide an assessment of measure reliability but noted that such information was available elsewhere.

In three of the 37 studies, a measure of an individual-level phenomenon was analyzed at another level (25,32,36). For example, in the study by Ganz et al. (25), individual patient screening data were aggregated to assess the organizational rate of cancer screening.

Furthermore, measures were used both as interventions and as outcomes (38,39,48,49). For example, in the study by Hilarius et al. (39), patients completed a health-related quality-of-life survey by computer, which generated a graphic summary for the patient and the nurse (the intervention); health-related quality-of-life data also were used to assess outcomes related to the intervention. In other examples (48,49), performance metric reports (eg, detection rate of abnormalities, the number of surgical procedures performed to standard) were used as part of the interventions (ie, feedback to people targeted) and also as intervention outcome measures. This approach was used in interventions designed to achieve quality improvements, process improvements, or organizational changes.

Two other approaches to measurement were assessments of process (37,38) and economic/cost–benefit analyses (17,21,34,37,39). For example, Tamminga et al. (37) asked patients and nurses to periodically evaluate the care process, and Velikova et al. (38) analyzed charts to assess the process of care. Shankaran et al. (17) calculated an incremental cost-effectiveness ratio.
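
An incremental cost-effectiveness ratio of the kind Shankaran et al. (17) report is the difference in mean costs between intervention and comparison conditions divided by the difference in mean effects. The minimal sketch below shows only the arithmetic; the per-patient figures are invented and are not drawn from any of the reviewed studies.

```python
# Incremental cost-effectiveness ratio (ICER): incremental cost divided by
# incremental effect. All figures below are hypothetical.
def icer(cost_intervention, cost_control, effect_intervention, effect_control):
    """ICER = (C1 - C0) / (E1 - E0), eg, cost per additional screening completed."""
    return (cost_intervention - cost_control) / (effect_intervention - effect_control)

# Hypothetical arms: $48 vs $15 mean cost per patient; 0.42 vs 0.33 screenings
# completed per patient.
print(icer(48.0, 15.0, 0.42, 0.33))  # ~$367 per additional screening completed
```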

Studies that focused on quality improvements tended to have multiple intervention targets, multiple approaches to measurement, and multiple measures. Furthermore, they often used a model or checklist to guide design and evaluation of the intervention (eg, Deming's continuous quality improvement practices, Put Prevention Into Practice [PPIP], force field analysis, PRECEDE, Agency for Healthcare Research and Quality [AHRQ] checklist for improving delivery of preventive services). Ten (27%) of the studies fell into this category (18,19,21,22,24–27,35,37,48). For example, Ganz et al. (25) delivered a quality improvement intervention to organizations over 2 years. Interventions targeted patients, physicians, provider organizations, staff, and medical directors using an iterative change management process and multiple approaches to data collection (chart reviews, patient interviews, staff meetings, etc.). Fitzgibbon et al. (18,24) used a similar continuous quality improvement approach. Wei et al. (22) and Ling et al. (19) used the PPIP model to design process improvement interventions with multiple targets (eg, patient, physician, office manager) and multiple measures and measurement approaches (eg, medical records, patient surveys, organizational informants).

Discussion

Themes from the Literature Review

A number of themes emerged in this review of measures in multilevel intervention studies in cancer care. First, the majority of “multilevel” intervention studies are actually multitarget interventions that focus on a single level, the individual level, for both interventions and measurement. The patient–physician dyad is the primary intervention and measurement target. This occurs despite the fact that relatively few processes across the cancer care continuum involve interaction with only a physician. The most overlooked and understudied levels, from both intervention and measurement perspectives, are the group, organization, and community levels.

Second, many studies lack an explicit consideration of the levels involved in an intervention, leading to a failure to consider the measurement implications of the levels of the intervention. A number of studies intervened with groups or organizations. However, they did not identify groups or organizations as intervention targets, nor did they measure any group- or organization-level phenomena (18,19,24,29,30,32,40,49,50); they merely measured patient outcomes, sometimes aggregated to the group or organization level.

Third, studies that did attempt to intervene at multiple levels in addition to the patient more often used a theory, conceptual model, or checklist to guide their intervention. The most frequently cited conceptual approach to multilevel intervention design was quality improvement (18,24,25,35,48). The most common models or checklists used for multilevel intervention design were PPIP (19,22); PRECEDE (51); the AHRQ checklist of practice items to improve delivery of care (26); strengths, weaknesses, opportunities, and threats analysis (26); and force field analysis (35).

Fourth, scant attention was paid to measurement issues such as independence, reliability, or validity. This may not be of concern when studies use single-item objective measures (such as screening completion ascertained through review of a medical record) or measurement instruments with well-established psychometric properties; however, when multi-item scales are created for a study (see examples noted above), these concerns become significant.

This review indicates a need to think about measurement and the measure selection process in a comprehensive and thoughtful manner when conducting multilevel intervention research. The nature of the interventions, the intervention targets, and the levels of the targets must be explicitly considered in research design and when developing the measurement and analysis plan. Standard measurement issues, such as the independence, reliability, and validity of measures, sample size, and power, require special attention in multilevel intervention research. Key constructs and measures should be defined by level, so that within-level and cross-level interactions may be assessed, with careful attention paid to the congruence of theory, constructs, and measures. Interdisciplinary and mixed methods research approaches may help identify expected and unexpected effects and interactions of multilevel interventions.

Challenges and Opportunities Associated with Multilevel Measures and Measurement

This section discusses some of the challenges and opportunities associated with designing and implementing multilevel interventions that were highlighted in the literature review, along with best practices and exemplar approaches for addressing them. Specific techniques that have evolved to deal with multilevel research issues, such as the lack of independence, the need for congruence, aggregation, reliability, validity, and research design, are discussed.

In single-level intervention research, researchers need to confirm the independence of measures, assess the reliability and validity of measures, and address sample size and power requirements. Multilevel intervention research also must address these issues and may require a different or more complex approach than single-level research. For example, in a single-level study, the independence of measures is typically established using statistical tests to justify the use of standard analytical procedures. However, a multilevel study may have to accommodate the expected lack of independence among measures through the use of special analytical or aggregation procedures (eg, hierarchical linear modeling, structural equation modeling, or aggregating respondent data to the group or organization level).
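
To make the contrast concrete, the sketch below fits a random-intercept (hierarchical linear) model to simulated data in which patients are nested within clinics. All variable names and data are invented, and statsmodels is only one of several libraries that could be used; this is a minimal illustration, not a recommended analysis plan.

```python
# Random-intercept model accommodating non-independence of patients within
# clinics. Data are simulated; all variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_clinics, n_per_clinic = 30, 25
clinic = np.repeat(np.arange(n_clinics), n_per_clinic)
clinic_effect = rng.normal(0, 0.5, n_clinics)[clinic]  # shared clinic-level variation
exposure = rng.normal(size=clinic.size)                # individual-level predictor
outcome = 1.0 + 0.3 * exposure + clinic_effect + rng.normal(size=clinic.size)
df = pd.DataFrame({"outcome": outcome, "exposure": exposure, "clinic": clinic})

# The random intercept for clinic absorbs the within-clinic correlation that an
# ordinary single-level regression would wrongly treat as independent noise.
result = smf.mixedlm("outcome ~ exposure", df, groups=df["clinic"]).fit()
print(result.summary())
```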

A key concept in multilevel measurement is that theories, constructs, and measures must be congruent. The introduction of interventions at multiple levels requires that the level of theory, level of measurement, and level/type of statistical analysis be consistent (6,7). For example, collecting a measure at an organization or group level and claiming that it represents an individual-level concept would violate this principle of congruency and lack internal consistency and intellectual rigor. Measurements are conceptually distinguished by the level at which they are collected and analyzed (52). For example, a physician education intervention may be conceptualized and measured in a number of ways. Conceptualized as an individual-level intervention, one might ask a series of questions about individual knowledge, opinions, or actions. Conceptualized as a group-level intervention, one might ask a series of questions about group knowledge, group opinions, or group actions. Conceptualized as a multilevel intervention, one might ask a series of questions about both individual and group knowledge, opinions, or actions. To justify the aggregation of individual responses to the group level and make cross-group comparisons, one would have to demonstrate adequate agreement among group members and adequate between-group variance. Typically, this is accomplished by calculating rwg and intraclass correlation coefficients (ICCs) (6,7). The individual-level data also might be aggregated to the organization level to assess organizational impact, typically without requiring any special justification. If the purpose is to compare aggregated results across organizations, within-organization agreement and across-organization variance generally need to be established. To understand cross-level interactions, more sophisticated statistical modeling techniques are required.

The rationale for aggregating variables is typically conceptualized using one of three models (the consensus, additive, and dispersion models) and statistically justified with measures of reliability, agreement, and predictive validity (53,54). Consensus models conceptualize aggregate means of individual ratings as indications of a common construct perceived by the group, organization, or community. For example, patients may rate reactions to an intervention, and consensus among respondents supports the hypothesis that ratings represent a shared phenomenon. Ratings may be conceptualized with an individual-, group-, organization-, or community-level referent (eg, I am more likely to get the test; My team is more likely to use the procedure; My organization is supportive of the test) (53). In the example of a physician education intervention, responses to questions about individual beliefs vs responses to questions about group beliefs are examples of individual-level and group-level referents, respectively. When using a consensus model, it is important to conceptualize and frame the measure at the appropriate level and to test empirically whether there is agreement among respondents. This is typically accomplished by computing the rwg statistic (54). If individual-level constructs and measures are used, aggregation is not theoretically justified, even if it may be statistically supported.
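
For a single item rated on an A-point scale, rwg compares the observed within-group variance with the variance expected if members answered at random from a uniform distribution, (A^2 - 1)/12 (54). A minimal sketch with invented ratings:

```python
# Single-item rwg agreement index (see LeBreton and Senter, ref. 54).
# Ratings below are invented for illustration.
import numpy as np

def rwg(ratings, n_options):
    """rwg = 1 - (observed within-group variance / uniform null variance)."""
    null_variance = (n_options**2 - 1) / 12.0  # uniform null for an A-point scale
    observed_variance = np.var(ratings, ddof=1)
    return 1.0 - observed_variance / null_variance

team_ratings = [4, 4, 5, 4, 3]         # five members, 5-point scale
print(rwg(team_ratings, n_options=5))  # 0.75; values near 1 indicate agreement
```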

Additive models do not require agreement statistics, but aggregation should be theoretically well grounded and statistically justified. For example, it may not be necessary for all members of an organization to agree that coworkers cooperate, but statistically there needs to be more variation between organizations than within organizations to have the power to detect relationships across organizations. ICCs are used to provide this statistical evidence (54).
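
ICC(1), the statistic typically used for this evidence, can be obtained from a one-way ANOVA of the rating on unit membership: ICC(1) = (MSB - MSW) / (MSB + (k - 1) x MSW), where k is the unit size. A minimal sketch with simulated, equal-sized organizations:

```python
# ICC(1): proportion of variance in individual ratings attributable to
# organization membership. Data are simulated for illustration.
import numpy as np

def icc1(units):
    """units: list of equal-length arrays of member ratings, one per organization."""
    k = len(units[0])                                        # members per unit
    grand_mean = np.concatenate(units).mean()
    unit_means = np.array([np.mean(u) for u in units])
    ms_between = k * np.sum((unit_means - grand_mean) ** 2) / (len(units) - 1)
    ms_within = np.mean([np.var(u, ddof=1) for u in units])  # pooled within-unit MS
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

rng = np.random.default_rng(1)
orgs = [rng.normal(loc=mu, scale=1.0, size=10) for mu in rng.normal(0, 0.7, size=40)]
print(icc1(orgs))  # more between-organization variation yields a larger ICC(1)
```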

Finally, dispersion models conceptualize the variation among ratings as an important variable in understanding group-, organization-, or community-level constructs. For example, the variation in perceived openness to change within an organization may be an important consideration when assessing intervention effectiveness.
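
In practice, a dispersion-model measure is simply the within-unit spread carried forward as a unit-level variable. A one-line sketch with invented ratings:

```python
# Dispersion model: the within-organization standard deviation of ratings is
# itself the organization-level measure. Ratings are invented.
import numpy as np

openness = {"org_a": [4, 4, 5, 4], "org_b": [1, 5, 2, 5]}
dispersion = {org: np.std(r, ddof=1) for org, r in openness.items()}
print(dispersion)  # org_b's members disagree more, so it scores higher
```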

Just as in single-level studies, researchers conducting multilevel studies must be concerned with common method variance, particularly because many measurement approaches use surveys to collect data from individuals. This problem occurs when both independent and dependent variables are collected with the same instrument. For example, a survey may ask about a group process, such as coordination, and an outcome, such as satisfaction. The correlation between these measures often is inflated by the tendency of individual respondents to score responses similarly no matter the content of the questions. The strongest studies use different data collection methods for the independent and dependent variables (eg, a survey of individual team members to collect independent variable data and interviews with team managers to collect dependent variable data).

For measurements requiring aggregation of individual responses, particular attention should be paid to sample size and power issues. A sample size that might seem quite adequate for an individual-level analysis (eg, 250 respondents) may be insufficient for a group- or organization-level analysis (eg, 250 respondents may account for only 20 groups or one organization). This also has implications for the number of measures that may be included in a study (eg, in the previous example, if the group is the level of analysis, the appropriate number of measures will be much lower than if the individual is the level of analysis). Generally, the higher the level of analysis, the larger the number of individual respondents needed to achieve an equivalent power and the more parsimonious the inclusion of measures in the study should be.
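
One way to quantify this penalty is the design effect, DEFF = 1 + (m - 1) x ICC for clusters of average size m; dividing the nominal sample size by DEFF gives an effective sample size. The sketch below uses the numbers from the example above, with an ICC value assumed purely for illustration:

```python
# Effective sample size under clustering: n / DEFF, where DEFF = 1 + (m - 1) * ICC.
# The ICC of 0.10 is an assumption for illustration only.
def effective_n(n, cluster_size, icc):
    design_effect = 1 + (cluster_size - 1) * icc
    return n / design_effect

# 250 respondents in 20 groups (12.5 per group on average):
print(effective_n(n=250, cluster_size=250 / 20, icc=0.10))  # ~116 effective respondents
```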

Compared with single-level intervention studies, multilevel intervention studies also may require a more complex approach to establishing measurement reliability. Reliability may have to be assessed for each measure at each level. See Wageman et al. (55) for a discussion of this and examples. Validity also may be approached somewhat differently in multilevel intervention studies; external validity may be emphasized over internal validity. See Shapiro et al. (40) for an example in which the focus was on external validity in a study of intervention services as routinely delivered within a community.

Because multilevel interventions are based on systems theory (6), they require conceptualization of key constructs and measures by level and consideration of within- and cross-level effects. Thus, multilevel intervention research benefits from thoughtful theory-based planning and design, interdisciplinary approaches, and mixed methods measurement.

One of the key challenges in designing multilevel interventions and the subsequent measurement plan is deciding which levels should be targeted for intervention and which factors should be measured at the various levels to understand the effects of the chosen intervention(s). The use of models, frameworks, and/or theories may help guide the planning and design of multilevel interventions and ensure that potential influences on an outcome are systematically identified and assessed.

The implementation science literature has examples of multilevel models and associated measures that may be useful when designing multilevel interventions. One model is the organization transformation model (56), which identifies five drivers of change to improve quality of care. Central to this model are improvement initiatives conducted by teams (group level). Other constructs in the model that span levels are leadership from the executive level to frontline supervisors; alignment of strategy, rewards, evaluation, resource allocation, and other systems; and integrating mechanisms across departments and projects. The model suggests that an impetus, from either an internal source (organization level) or an outside source (community level), is needed to motivate change. The drivers of change operate on an organization's culture, mission, strategy, and work processes (organization level), and on individuals in its workforce to develop needed skills and to engage in the change effort. Thus, the organization transformation model addresses interventions and measures at four levels (community, organization, group, and individual). Measures of the constructs in the model are obtained through quantitative and qualitative methods.

The Promoting Action on Research Implementation in Health Services (57) model, another implementation model, addresses three concepts at two levels: quality of the evidence being implemented, organizational readiness for change, and facilitation, which includes group process. These concepts can be measured with the Organizational Readiness to Change Assessment instrument (58) administered to organization members. Many investigators also assess individual attitudes about the particular practice being implemented and evidence-based practice in general, as well as provider demographic characteristics.

The complex effects of multilevel interventions are probably best understood using interdisciplinary and mixed methods approaches. Interdisciplinary approaches can capture knowledge and insights from different fields. When that knowledge is effectively integrated and focused on solving problems, it may yield significant innovation. Designs and measures that assess the full complexity of factors operating within and across levels are more likely to effectively advance our understanding. Thus, design and measurement approaches that facilitate data collection in complex environments, such as sophisticated statistical designs (1–3), quasi-experimental designs, and mixed methods approaches that include both quantitative and qualitative data, are likely to be useful in multilevel intervention research.

Insights About Multilevel Measures and Measurement From Other Literatures

Based upon the literature review conducted for this article, the least developed domains in multilevel cancer care interventions are the group, organization, and community levels. To address this issue and provide readers with a starting point for further exploration, this section draws upon literature from the health services, social psychology, and organizational behavior fields to present commonly used measures at the group, organization, and community levels and to discuss how measurement is approached at each of these levels, highlighting differences across levels. Examples of key constructs and measures by level are summarized in Tables 1–3. These tables are not an exhaustive list of all constructs and measures by level, but they do summarize important constructs that may need to be taken into consideration when designing multilevel interventions. In addition, they provide examples of or references for the measures. The tables are organized by level: group (Table 1), organization (Table 2), and community (Table 3).

Table 1
Approaches to measurement for the group level
Table 2
Approaches to measurement for the organizational level*
Table 3
Approaches to measurement for the community level

Group-Level Measurement and Measures

Many models of team dynamics exist (112,113), along with a rich body of research about the conditions that influence group outcomes (114,115). According to this literature, certain aspects of groups (the environment, characteristics, context, processes, and emergent states) are related to group outcomes. For each of these categories of conditions, key constructs and references for the associated established measures are summarized in Table 1. For an overview of the field, see Stewart (116,117) and Mathieu et al. (118). For an overview of the field from a health-care perspective, see Lemieux-Charles and McGuire (119), West et al. (96), and Poole and Real (120). See Wageman et al. (55), Kirkman et al. (63), and the articles by Campion et al. (61,62) for discussion of a wide variety of group measures and their psychometric properties.

The most common approach to measurement at the group level is to use a group-level construct and an associated group-level measure to collect information through a survey of the individual members of the group(s). Then various statistical tests (ie, rwg, ICC) are conducted to justify aggregation of individual responses to the group level, address the lack of independence of these answers, and establish variance between groups. This approach is warranted only if group-level constructs and group-level measures are used; if individual-level constructs and measures are used, aggregation is not theoretically justified, even if it may be statistically supported.

Another approach to measurement at the group level, although not used as frequently, is to use group constructs and related group measures to collect data directly from the group(s) through consensus (eg, the group together fills out a survey). This approach may be conducted with or without group facilitation. The advantage of this approach is that there is no need to deal with aggregation issues, as the group provides only one response. Disadvantages of this approach include more complicated data collection logistics and the possibility that group dynamics may bias the responses. For an example of this approach, see how Grady and Travers (34) used the consensus technique to generate group-level measures of satisfaction with a home hospice program.

Both of these approaches to group-level measurement have been found to be reliable and valid and to yield comparable results. To avoid common method bias when conducting group-level research, data are usually collected not only from group members but also from third parties such as patients or other medical personnel who interact with the group. For example, patients may be asked to provide feedback on the performance of a caregiver group (eg, if the desired outcome was achieved).

Organization-Level Measurement and Measures

Although organization-level measures are rare in the cancer care intervention literature, the use of aggregated individual-level data, informants, metrics/scorecards, records, and other organizational documents/data is common in studies of organizations. Researchers often use the American Hospital Association database of hospitals and delivery systems for objective information such as size. Aspects of organization structure, such as degree of decentralization, are typically obtained through key informants in an organization. For example, see Wei et al. (22), who assessed practice characteristics at baseline and after the intervention, as reported by physician informants. Measures of organization culture are typically obtained through surveys. Culture is a good example of a measure in which the dispersion of responses has meaning: high agreement among organization members indicates a “strong culture,” whereas low agreement indicates a “weak culture.”

A broad array of constructs and measures characterizes the organizational literature (Table 2), and in determining what constructs are most relevant, the researcher should be guided by theory and logic. For example, if an intervention requires coordination among providers, it would be wise to include measures of structures and processes that facilitate or hinder coordination [see (90,95)] as well as measures of the processes of coordination [see (94)]. As a further example, the organization literature argues that structuring an oncology practice by multidisciplinary service lines facilitates coordination across disciplines, but this does not ensure such coordination. Thus, it is important also to measure actual coordination among these providers.

Community-Level Measurement and Measures

Community-level measures include the context, resources, and norms of populations. Measures are often aggregated at varying levels, including the social network, neighborhood, catchment area, region, state, or country. Table 3 summarizes the types of constructs under each of these domains that are used most frequently in the literature. We have restricted our summary to a few illustrative measures.

Community-level measures often are derived by aggregating individual-level measures (eg, socioeconomic status) from population-based surveys and administrative datasets to measure different constructs (eg, socioeconomic deprivation) at the population level. As noted previously, to distinguish the independent effects of community-level constructs, hierarchical modeling techniques that control for individual-level variation while estimating community-level effects are needed when using these measures. Numerous prior studies have documented that these aggregated measures of community context have effects on outcomes that operate independently from the individual-level measures from which they are derived [see (121)].

The most common contextual measures used in community-level research are probably those that describe the socioeconomic conditions of a community defined by geographic boundaries. Most typically derived from regionally aggregated census data, these measures may summarize the income or education distribution of a population or combine several indicators of socioeconomic status into a single summary index (102). In multilevel intervention research, these measures would most likely be used as control or stratifying measures but also may be conceptualized as potential moderating factors for interventions directed toward individuals or communities.
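
One common construction, in the spirit of the Carstairs-type index cited above (102), standardizes several census indicators and sums the z scores into a single deprivation score per area. A minimal sketch with invented area-level data; the indicator names are hypothetical:

```python
# Composite area-level deprivation index: z-score each census indicator and
# sum across indicators. All figures are invented.
import pandas as pd

areas = pd.DataFrame({
    "pct_low_income":   [12.0, 30.5, 8.2, 22.1],
    "pct_no_diploma":   [9.1, 18.3, 5.5, 14.0],
    "pct_unemployed":   [4.2, 9.8, 3.1, 7.5],
})
z_scores = (areas - areas.mean()) / areas.std()    # standardize each indicator
areas["deprivation_index"] = z_scores.sum(axis=1)  # higher = more deprived
print(areas)
```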

Exposure to information or media is a commonly used cancer intervention strategy to enhance population knowledge and awareness, but it also may be an important moderator or control measure in multilevel cancer intervention research. For example, Han et al. (103) found that exposures to mass media (television, radio, Internet, health news) were associated with perceived ambiguity about cancer prevention recommendations.

Economic measures of market behavior and economic capacity are another type of contextual measure that may be used in multilevel intervention research. One type of measure characterizes the economics of markets in ways that enable inferences about spillover effects of competitive organizational or physician behavior on clinical outcomes. Typical measures include the concentration of specific delivery systems in a market (eg, penetration of managed care enrollment in a community) or more direct measurement of the competitive character of the health-care marketplace (eg, the Herfindahl–Hirschman index) (105). For example, Keating et al. (104) used a measure of managed care penetration to draw inferences about the quality of cancer care.
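
The Herfindahl–Hirschman index referenced above is the sum of squared market shares across the organizations in a market; it approaches 1 as the market concentrates in a single provider. A sketch with invented hospital admission counts:

```python
# Herfindahl-Hirschman index (HHI) of market concentration.
# Admission counts are invented for illustration.
def hhi(volumes):
    total = sum(volumes)
    return sum((v / total) ** 2 for v in volumes)

admissions = [5000, 3000, 1500, 500]  # four hospitals in one market
print(hhi(admissions))                # 0.365; an even four-way split would give 0.25
```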

Health policy also is a contextual factor that can be important to include in multilevel intervention research, either as an intervention or as a mediating or moderating influence on cancer interventions. Cancer intervention research has used state policy measures in studies of tobacco cessation (eg, state policy adoption of system strategies), cancer screening (eg, state insurance coverage mandates), and, to a lesser extent, treatment (eg, insurance coverage mandates and minority population access to clinical trials). Often this policy domain is measured by the presence or absence of a policy, but more complex measures have been used to characterize the leniency or restrictiveness of existing policies. Government policies are not the only policies that have been measured in cancer intervention research. Professional standards and guidelines are other important areas of policy suited to measurement, usually as adherence to process-oriented clinical guidelines measured at the individual level and then aggregated to the population level.

Measures of community-level resources that may be important to consider in multilevel cancer intervention research include health services delivery capacity, community resources, and social support. Measures of health service delivery capacity could include community-level measures of health-related infrastructure, workforce, and technology. A final community-level resource construct to consider in multilevel cancer intervention research is social support. Cancer support groups have been widely used as a strategy for enhancing coping skills among cancer survivors [see (110)], and a recent study of colorectal cancer screening behavior examined the effects of tangible and emotional sources of social support on adherence to screening guidelines (122).

Professional and social norms also are important to consider as community-level constructs, as they affect individual behavior. Professional norms include procedure utilization, prescribing patterns, and attitudes related to use of guidelines. Social norms include those related to health behaviors, end-of-life care, and use of narcotic medications.

Conclusions

A number of important conclusions emerge from our review of the literature on multilevel measures and interventions in cancer. First, explicit consideration or mention of levels is generally lacking in intervention studies. Second, few studies consider the group, organization, or community levels. Third, many studies are not guided by conceptual frameworks. Fourth, insufficient attention has been paid to measurement issues, such as scale construction, reliability, validity, sample size, or power. The special issues related to measurement of multilevel interventions also are underdeveloped (eg, dealing with lack of independence, aggregation, the need for alignment of theories, constructs, and measures, and the complexity of analyzing cross-level interactions). Finally, multilevel measurement needs to be considered in a comprehensive and complex manner, with attention given to effects within and across levels.

Funding

This material is based upon work supported in part by the Health Services Research and Development Service, Office of Research and Development, Veterans Health Administration, Department of Veterans Affairs. The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of the Department of Veterans Affairs or the US government.

References

1. Koselka R. The new mantra: MVT. Forbes. March 11, 1996:114–118.
2. Jones FG, Moore CH. Designing and executing experiments in care—a data driven, scientific approach to quality improvement. In: Schoenbaum SC, editor. Measuring Clinical Care: A Guide for Physician Executives. Tampa, FL: American College of Physician Executives; 1995. pp. 115–125.
3. Moore CH. Experimental design in health care. Qual Manag Health Care. 1994;2(2):13–26. [PubMed]
4. Kerlinger FN, Lee HB. Foundations of Behavioral Research. 4th ed. Singapore: Wadsworth Thomson Learning; 2000.
5. Hair JF, Black WC, Babin BJ, Anderson RE, Tatham RL. Multivariate Data Analysis. Upper Saddle River, NJ: Pearson Prentice Hall; 2006.
6. Klein KJ, Kozlowski SWJ. Multilevel Theory, Research, and Methods in Organizations: Foundations, Extensions, and New Directions. San Francisco, CA: Josey-Bass; 2000.
7. Klein KJ, Kozlowski SWJ. From micro to meso: critical steps in conceptualizing and conducting multilevel research. Organ Res Methods. 2000;3(3):211–236.
8. von Bertalanffy L. The history and status of general systems theory. Acad Manag J. 1972;15(4):407–426.
9. von Bertalanffy L. An outline of general system theory. Emergence: Complexity Organ. 2008;10(2):103–123.
10. Boulding KE. General systems theory—the skeleton of science. Manag Sci. 1956;2(3):197–208.
11. Schneider M, Stokols D. Multilevel theories of behavior change: a social ecological framework. In: Shumaker SA, Ockene JK, Reickert K, editors. The Handbook of Health Behavior Change. 3rd ed. New York, NY: Springer; 2008. pp. 85–105.
12. Bronfenbrenner U. Toward an experimental ecology of human development. Am Psychol. 1977;32(7):513–531.
13. Snowden DJ, Boone ME. A leader's framework for decision making. Harv Bus Rev. 2007;85(11):68–76. [PubMed]
14. Stange KC, Breslau ES, Dietrich AJ, Glasgow RE. State-of-the-art and future directions in multilevel interventions across the cancer control continuum. J Natl Cancer Inst Monogr. 2012;44:20–31. [PMC free article] [PubMed]
15. Jandorf L, Gutierrez Y, Lopez J, Christie J, Itzkowitz SH. Use of a patient navigator to increase colorectal cancer screening in an urban neighborhood health clinic. J Urban Health. 2005;82(2):216–224. [PMC free article] [PubMed]
16. Walsh JM, Salazar R, Terdiman JP, Gildengorin G, Perez-Stable EJ. Promoting use of colorectal cancer screening tests. Can we change physician behavior? J Gen Intern Med. 2005;20(12):1097–1101. [PMC free article] [PubMed]
17. Shankaran V, Luu TH, Nonzee N, et al. Costs and cost effectiveness of a health care provider-directed intervention to promote colorectal cancer screening. J Clin Oncol. 2009;27(32):5370–5375. [PMC free article] [PubMed]
18. Fitzgibbon ML, Ferreira MR, Dolan NC, et al. Process evaluation in an intervention designed to improve rates of colorectal cancer screening in a VA medical center. Health Promot Pract. 2007;8(3):273–281. [PubMed]
19. Ling BS, Schoen RE, Trauth JM, et al. Physicians encouraging colorectal screening: a randomized controlled trial of enhanced office and patient management on compliance with colorectal cancer screening. Arch Intern Med. 2009;169(1):47–55. [PubMed]
20. Aragones A, Schwartz MD, Shah NR, Gany FM. A randomized controlled trial of a multilevel intervention to increase colorectal cancer screening among Latino immigrants in a primary care facility. J Gen Intern Med. 2010;25(6):564–567. [PMC free article] [PubMed]
21. Khankari K, Eder M, Osborn CY, et al. Improving colorectal cancer screening among the medically underserved: a pilot study within a federally qualified health center. J Gen Intern Med. 2007;22(10):1410–1414. [PMC free article] [PubMed]
22. Wei EK, Ryan CT, Dietrich AJ, Colditz GA. Improving colorectal cancer screening by targeting office systems in primary care practices: disseminating research results into clinical practice. Arch Intern Med. 2005;165(6):661–666. [PubMed]
23. Lasser KE, Murillo J, Medlin E, et al. A multilevel intervention to promote colorectal cancer screening among community health center patients: results of a pilot study. BMC Fam Pract. 2009;10:37. [PMC free article] [PubMed]
24. Ferreira MR, Dolan NC, Fitzgibbon ML, et al. Health care provider-directed intervention to increase colorectal cancer screening among veterans: results of a randomized controlled trial. J Clin Oncol. 2005;23(7):1548–1554. [PubMed]
25. Ganz PA, Farmer MM, Belman MJ, et al. Results of a randomized controlled trial to increase colorectal cancer screening in a managed care health plan. Cancer. 2005;104(10):2072–2083. [PubMed]
26. Lane DS, Messina CR, Cavanagh MF, Chen JJ. A provider intervention to improve colorectal cancer screening in county health centers. Med Care. 2008;46(9) suppl 1:S109–S116. [PubMed]
27. Lewis C, Pignone M, Schild LA, et al. Effectiveness of a patient- and practice-level colorectal cancer screening intervention in health plan members: design and baseline findings of the CHOICE trial. Cancer. 2010;116(7):1664–1673. [PMC free article] [PubMed]
28. Taylor VM, Jackson JC, Yasui Y, et al. Evaluation of an outreach intervention to promote cervical cancer screening among Cambodian American women. Cancer Detect Prev. 2002;26(4):320–327. [PMC free article] [PubMed]
29. Smith MY, DuHamel KN, Egert J, Winkel G. Impact of a brief intervention on patient communication and barriers to pain management: results from a randomized controlled trial. Patient Educ Couns. 2010;81(1):79–86. [PubMed]
30. Milne E, Johnston R, Cross D, Giles-Corti B, English DR. Effect of a school-based sun-protection intervention on the development of melanocytic nevi in children. Am J Epidemiol. 2002;155(8):739–745. [PubMed]
31. Mokuau N, Braun KL, Wong LK, Higuchi P, Gotay C. Development of a family intervention for Native Hawaiian women with cancer: a pilot study. Soc Work. 2008;53(1):9–19. [PubMed]
32. Driscoll DL, Rupert DJ, Golin CE, et al. Promoting prostate-specific antigen informed decision-making. Evaluating two community-level interventions. Am J Prev Med. 2008;35(2):87–94. [PubMed]
33. Christie J, Itzkowitz S, Lihau-Nkanza I, et al. A randomized controlled trial using patient navigation to increase colonoscopy screening among low-income minorities. J Natl Med Assoc. 2008;100(3):278–284. [PubMed]
34. Grady A, Travers E. Hospice at home 2: evaluating a crisis intervention service. Int J Palliat Nurs. 2003;9(8):326–335. [PubMed]
35. Albert US, Koller M, Lorenz W, et al. Quality of life profile: from measurement to clinical application. Breast. 2002;11(4):324–334. [PubMed]
36. Hoffmann W, Munzinger H, Horstkotte E, Greiser E. A population-based evaluation of an intervention to improve advanced stage cancer pain management. J Pain Symptom Manage. 2004;28(4):342–350. [PubMed]
37. Tamminga SJ, de Boer AG, Verbeek JH, Taskila T, Frings-Dresen MH. Enhancing return-to-work in cancer patients, development of an intervention and design of a randomised controlled trial. BMC Cancer. 2010;10:345. [PMC free article] [PubMed]
38. Velikova G, Booth L, Smith AB, et al. Measuring quality of life in routine oncology practice improves communication and patient well-being: a randomized controlled trial. J Clin Oncol. 2004;22(4):714–724. [PubMed]
39. Hilarius DL, Kloeg PH, Gundy CM, Aaronson NK. Use of health-related quality-of-life assessments in daily clinical oncology nursing practice: a community hospital-based intervention study. Cancer. 2008;113(3):628–637. [PubMed]
40. Shapiro JP, McCue K, Heyman EN, Dey T, Haller HS. A naturalistic evaluation of psychosocial interventions for cancer patients in a community setting. J Psychosoc Oncol. 2010;28(1):23–42. [PubMed]
41. Waller A, Girgis A, Johnson C, et al. Facilitating needs based cancer care for people with a chronic disease: evaluation of an intervention using a multi-centre interrupted time series design. BMC Palliat Care. 2010;9:2. [PMC free article] [PubMed]
42. McCorkle R, Strumpf NE, Nuamah IF, et al. A specialized home care intervention improves survival among older post-surgical cancer patients. J Am Geriatr Soc. 2000;48(12):1707–1713. [PubMed]
43. Campbell C, Craig J, Eggert J, Bailey-Dorton C. Implementing and measuring the impact of patient navigation at a comprehensive community cancer center. Oncol Nurs Forum. 2010;37(1):61–68. [PubMed]
44. Evans BC, Crogan NL, Bendel R. Storytelling intervention for patients with cancer: part 1—development and implementation. Oncol Nurs Forum. 2008;35(2):257–264. [PubMed]
45. Petereit DG, Molloy K, Reiner ML, et al. Establishing a patient navigator program to reduce cancer disparities in the American Indian communities of Western South Dakota: initial observations and results. Cancer Control. 2008;15(3):254–259. [PMC free article] [PubMed]
46. Myers RE, Turner B, Weinberg D, et al. Impact of a physician-oriented intervention on follow-up in colorectal cancer screening. Prev Med. 2004;38(4):375–381. [PubMed]
47. Braun KL, Fong M, Kaanoi ME, Kamaka ML, Gotay CC. Testing a culturally appropriate, theory-based intervention to improve colorectal cancer screening among Native Hawaiians. Prev Med. 2005;40(6):619–627. [PMC free article] [PubMed]
48. Tarkkanen J, Geagea A, Nieminen P, Anttila A. Quality improvement project in cervical cancer screening: practical measures for monitoring laboratory performance. Acta Obstet Gynecol Scand. 2003;82(1):82–88. [PubMed]
49. Mor V, Laliberte LL, Petrisek AC, et al. Impact of breast cancer treatment guidelines on surgeon practice patterns: results of a hospital-based intervention. Surgery. 2000;128(5):847–861. [PubMed]
50. English DR, Milne E, Jacoby P, et al. The effect of a school-based sun protection intervention on the development of melanocytic nevi in children: 6-year follow-up. Cancer Epidemiol Biomarkers Prev. 2005;14(4):977–980. [PubMed]
51. Geller BM, Skelly JM, Dorwaldt AL, et al. Increasing patient/physician communications about colorectal cancer screening in rural primary care practices. Med Care. 2008;46(9) suppl 1:S36–S43. [PMC free article] [PubMed]
52. Rousseau D. Issues of Level in Organizational Research: Multi-level and Cross-level Perspectives. Vol. 7. Greenwich, CT: JAI Press; 1985.
53. Chan D. Functional relations among constructs in the same content domain at different levels of analysis: a typology of composition models. J Appl Psychol. 1998;83(2):234–246.
54. LeBreton J, Senter J. Answers to 20 questions about interrater reliability and interrater agreement. Organ Res Methods. 2008;11(4):815–852.
55. Wageman R, Hackman J, Lehman E. Team diagnostic survey. J Appl Behav Sci. 2005;41(4):373–398.
56. Lukas CV, Holmes SK, Cohen AB, et al. Transformational change in health care systems: an organizational model. Health Care Manage Rev. 2007;32(4):309–320. [PubMed]
57. Kitson A, Harvey G, McCormack B. Enabling the implementation of evidence based practice: a conceptual framework. Qual Health Care. 1998;7(3):149–158. [PMC free article] [PubMed]
58. Helfrich CD, Li YF, Sharp ND, Sales AE. Organizational readiness to change assessment (ORCA): development of an instrument based on the Promoting Action on Research in Health Services (PARIHS) framework. Implement Sci. 2009;4:38–50. [PMC free article] [PubMed]
59. Wageman R, Baker G. Incentives and cooperation: the joint effects of task and reward interdependence on group performance. J Organ Behav. 1997;18(2):139–158.
60. Wageman R. Interdependence and group effectiveness. Adm Sci Q. 1995;40(1):145–180.
61. Campion MA, Medsker GJ, Higgs AC. Relations between work group characteristics and effectiveness: implications for designing effective work groups. Pers Psychol. 1993;46(4):823.
62. Campion MA, Papper EM, Medsker GJ. Relations between work team characteristics and effectiveness: a replication and extension. Pers Psychol. 1996;49(2):429.
63. Kirkman BL, Rosen B, Tesluk PE, Gibson CB. The impact of team empowerment on virtual team performance: the moderating role of face-to-face interaction. Acad Manag J. 2004;47(2):175–192.
64. Jehn KA, Northcraft GB, Neale MA. Why differences make a difference: a field study of diversity, conflict, and performance in workgroups. Adm Sci Q. 1999;44(4):741–763.
65. Hackman JR, Oldham GR. Work Redesign. Reading, MA: Addison-Wesley Publishing Company; 1980.
66. Gibson CB, Gibbs J. Unpacking the concept of virtuality: the effects of geographic dispersion, electronic dependence, dynamic structure, and national diversity on team innovation. Adm Sci Q. 2006;51(3):451–495.
67. Marrone JA. Team boundary spanning: a multilevel review of past research and proposals for the future. J Manag. 2010;36(4):911–940.
68. Hoegl M, Gemuenden HG. Teamwork quality and the success of innovative projects: a theoretical concept of empirical evidence. Organ Sci. 2001;12(4):435–449.
69. Pinto MB, Pinto JK, Prescott JE. Antecedents and consequences of project team cross-functional cooperation. Manag Sci. 1993;39(10):1281–1297.
70. Edmondson A. Psychological safety and learning behavior in work teams. Adm Sci Q. 1999;44(2):350–383.
71. Ancona DG, Caldwell DF. Bridging the boundary: external activity and performance in organizational teams. Adm Sci Q. 1992;37(4):634–665.
72. Montoya-Weiss MM, Massey AP, Song M. Getting it together: temporal coordination and conflict management in global virtual teams. Acad Manag J. 2001;44(6):1251–1262.
73. Morgeson FP, DeRue DS, Karam EP. Leadership in teams: a functional approach to understanding leadership structures and processes. J Manag. 2010;36(1):5–39.
74. Gully SM, Incalcaterra KA, Joshi A, Beaubien JM. A meta-analysis of team efficacy, potency, and performance: interdependence and level of analysis as moderators of observed relationships. J Appl Psychol. 2002;87(5):819–832. [PubMed]
75. Guzzo RA, Yost PR, Campbell RJ, Shea GP. Potency in groups: articulating a construct. Br J Soc Psychol. 1993;32(1):87–106. [PubMed]
76. Mohammed S, Ferzandi L, Hamilton K. Metaphor no more: a 15-year review of the team mental model construct. J Manag. 2010;36(4):876–910.
77. Webber SS, Chen G, Payne SC, Marsh SM, Zaccaro SJ. Enhancing team mental model measurement with performance appraisal practices. Organ Res Methods. 2000;3(4):307–322.
78. Lewis K. Measuring transactive memory systems in the field: scale development and validation. J Appl Psychol. 2003;88(4):587–604. [PubMed]
79. Choi SY, Lee H, Yoo Y. The impact of information technology and transactive memory systems on knowledge sharing, application, and team performance: a field study. MIS Q. 2010;34(4):855–870.
80. Zhang ZX, Hempel PS, Han YL, Tjosvold D. Transactive memory system links work team characteristics and performance. J Appl Psychol. 2007;92(6):1722–1730. [PubMed]
81. Anderson NR, West MA. Measuring climate for work group innovation: development and validation of the team climate inventory. J Organ Behav. 1998;19(3):235–258.
82. Kivimaki M, Elovainio M. A short version of the Team Climate Inventory: development and psychometric properties. J Occup Organ Psychol. 1999;72(2):241–246.
83. Cohen SG, Ledford GE, Spreitzer GM. A predictive model of self-managing work team effectiveness. Hum Relations. 1996;49(5):643–676.
84. Pescosolido AT. Group efficacy and group effectiveness: the effects of group efficacy over time on group performance and development. Small Group Res. 2003;34(1):20–42.
85. Hardin AM, Fuller MA, Davison RM. I know I can, but can we? Culture and efficacy beliefs in global virtual teams. Small Group Res. 2007;38(1):130–155.
86. O’Leary MB, Cummings JN. The spatial, temporal, and configurational characteristics of geographic dispersion. MIS Q. 2007;31(3):433–452.
87. Yano EM, Soban L, Parkerton P, Etzioni D. Primary care practice organization influences colorectal cancer screening performance. Health Serv Res. 2007;42(3):1130–1149. [PMC free article] [PubMed]
88. March JG, Simon HA. Organizations. New York, NY: John Wiley & Sons; 1958.
89. Charns M, Tewksbury L. Collaborative Management in Health Care: Implementing the Integrative Organization. San Francisco: Jossey-Bass Publishing; 1993.
90. Parker VA, Charns M, Young G. Clinical service lines in integrated delivery systems: an initial framework and exploration. J Healthcare Manag. 2001;46(4):261–275. [PubMed]
91. Charns M, Young G. Organization design and coordination. In: Burns LR, Bradley EH, Weiner BJ, editors. Shortell and Kaluzny's Healthcare Management: Organizational Theory and Behavior. 6th ed. Clifton Park, NY: Thomson Delmar Learning; 2011. pp. 64–90.
92. Hage J, Aiken M. Relationship of centralization to other structural properties. Adm Sci Q. 1967;12(1):72–92.
93. Bazzoli GJ, Shortell S, Dubbs N, Chan C, Kralovec P. A taxonomy of health networks and systems: bringing order out of chaos. Health Serv Res. 1999;33(6):1683–1717. [PMC free article] [PubMed]
94. Gittell JH, Fairfield KM, Bierbaum B, et al. Impact of relational coordination on quality of care, postoperative pain and functioning, and length of stay: a nine-hospital study of surgical patients. Med Care. 2000;38(8):807–819. [PubMed]
95. Young GJ, Charns MP, Daley J, Forbes M, Henderson W, Khuri S. Patterns of coordination and clinical outcomes: a study of surgical services. Health Serv Res. 1998;33(5, pt 1):1211–1236. [PMC free article] [PubMed]
96. West MA, Guthrie JP, Dawson JF, Borrill CS, Carter M. Reducing patient mortality in hospitals: the role of human resource management. J Organ Behav. 2006;27(7):983–1002.
97. Huselid M. The impact of human resource management practices on turnover, productivity, and corporate financial performance. Acad Manag J. 1995;38(3):635–672.
98. Benzer JK, Young G, Stolzmann K, et al. The relationship between organizational climate and quality of chronic disease management. Health Serv Res. 2011;46(3):691–711. [PMC free article] [PubMed]
99. Patterson MG, West MA, Shackleton VJ, et al. Validating the organizational climate measure: links to managerial practices, productivity and innovation. J Organ Behav. 2005;26(4):379–408.
100. Nembhard IM, Edmondson A. Making it safe: the effects of leader inclusiveness and professional status on psychological safety and improvement efforts in healthcare teams. J Organ Behav. 2006;27(7):941–966.
101. Ramirez M, Guy F, Beale D. Contested resources: unions, employers, and the adoption of new work practices in US and UK telecommunications. Br J Ind Relat. 2007;45(3):495–517.
102. Carstairs V, Morris R. Deprivation and health in Scotland. Health Bull (Edinb). 1990;48(4):162–175. [PubMed]
103. Han PK, Moser RP, Klein WM, et al. Predictors of perceived ambiguity about cancer prevention recommendations: sociodemographic factors and mass media exposures. Health Commun. 2009;24(8):764–772. [PubMed]
104. Keating NL, Landrum MB, Meara E, Ganz PA, Guadagnoli E. Do increases in the market share of managed care influence quality of cancer care in the fee-for-service sector? J Natl Cancer Inst. 2005;97(4):257–264. [PubMed]
105. Lin CY, Farrell MH, Lave JR, Angus DC, Barnato AE. Organizational determinants of hospital end-of-life treatment intensity. Med Care. 2009;47(5):524–530. [PMC free article] [PubMed]
106. Morden NE, Zerzan JT, Rue TC, et al. Medicaid prior authorization and controlled-release oxycodone. Med Care. 2008;46(6):573–580. [PubMed]
107. Mobley LR, Kuo TM, Urato M, Subramanian S. Community contextual predictors of endoscopic colorectal cancer screening in the USA: spatial multilevel regression analysis. Int J Health Geogr. 2010;9:44. [PMC free article] [PubMed]
108. Ell K, Vourlekis B, Xie B, et al. Cancer treatment adherence among low-income women with breast or gynecologic cancer: a randomized controlled trial of patient navigation. Cancer. 2009;115(19):4606–4615. [PMC free article] [PubMed]
109. Sherbourne CD, Stewart AL. The MOS social support survey. Soc Sci Med. 1991;32(6):705–714. [PubMed]
110. Gottlieb BH, Wachala ED. Cancer support groups: a critical review of empirical studies. Psychooncology. 2007;16(5):379–400. [PubMed]
111. Lukwago SN, Kreuter MW, Bucholtz DC, Holt CL, Clark EM. Development and validation of brief scales to measure collectivism, religiosity, racial pride, and time orientation in urban African American women. Fam Community Health. 2001;24(3):63–71. [PubMed]
112. Ilgen DR, Hollenbeck JR, Johnson M, Jundt D. Teams in organizations: from input-process-output models to IMOI models. Annu Rev Psychol. 2005;56:517–543. [PubMed]
113. Arrow H, McGrath JE, Berdahl JL. Small Groups as Complex Systems: Formation, Coordination, Development, and Adaptation. Thousand Oaks, CA: Sage Publications, Inc.; 2000.
114. Salas E, Sims D, Burke C. Is there a “big five” in teamwork? Small Group Res. 2005;36(5):555–599.
115. Rousseau V, Aube C, Savoie A. Teamwork behaviors: a review and an integration of frameworks. Small Group Res. 2006;37(5):540–570.
116. Stewart GL. A meta-analytic review of relationships between team design features and team performance. J Manag. 2006;32(1):29–55.
117. Stewart GL. The past twenty years: teams research is alive and well at the Journal of Management. J Manag. 2010;36(4):801–805.
118. Mathieu J, Maynard MT, Rapp T, Gilson L. Team effectiveness 1997-2007: a review of recent advancements and a glimpse into the future. J Manag. 2008;34(3):410–476.
119. Lemieux-Charles L, McGuire WL. What do we know about health care team effectiveness? A review of the literature. Med Care Res Rev. 2006;63(3):263–300. [PubMed]
120. Poole MS, Real K. Groups and teams in health care: communication and effectiveness. In: Thompson TL, Dorsey AM, Miller KI, Parrott R, editors. Handbook of Health Communication. Mahwah, NJ: Lawrence Erlbaum Associates; 2003. pp. 369–402.
121. Pickett KE, Pearl M. Multilevel analyses of neighbourhood socioeconomic context and health outcomes: a critical review. J Epidemiol Community Health. 2001;55(2):111–122. [PMC free article] [PubMed]
122. Partin MR, Noorbaloochi S, Grill J, et al. The interrelationships between and contributions of background, cognitive, and environmental factors to colorectal cancer screening adherence. Cancer Causes Control. 2010;21(9):1357–1368. [PubMed]
