J Natl Cancer Inst. Author manuscript; available in PMC 2010 March 2.
PMCID: PMC2830858
NIHMSID: NIHMS176925

Promoting Regular Mammography Screening II. Results From a Randomized Controlled Trial in US Women Veterans

Abstract

Background

Few health promotion trials have evaluated strategies to increase regular mammography screening. We conducted a randomized controlled trial of two theory-based interventions in a population-based, nationally representative sample of women veterans.

Methods

Study candidates 52 years and older were randomly sampled from the National Registry of Women Veterans and randomly assigned to three groups. Groups 1 and 2 received interventions that varied in the extent of personalization (tailored and targeted vs targeted-only, respectively); group 3 was a survey-only control group. Postintervention follow-up surveys were mailed to all women after 1 and 2 years. Outcome measures were self-reported mammography coverage (completion of one postintervention mammogram) and compliance (completion of two postintervention mammograms). In decreasingly conservative analyses (intention-to-treat [ITT], modified intention-to-treat [MITT], and per-protocol [PP]), we examined crude coverage and compliance estimates and adjusted for covariates and variable follow-up time across study groups using Cox proportional hazards regression. For the PP analyses, we also used logistic regression.

Results

None of the among-group differences in the crude incidence estimates for mammography coverage was statistically significant in ITT, MITT, or PP analyses. Crude estimates of compliance differed at statistically significant levels in the PP analyses and at levels approaching statistical significance in the ITT and MITT analyses. Absolute differences favoring the intervention over the control groups were 1%–3% for ITT analysis, 1%–5% for MITT analysis, and 2%–6% for the PP analysis. Results from Cox modeling showed no statistically significant effect of the interventions on coverage or compliance in the ITT, MITT, or PP analyses, although hazard rate ratios (HRRs) for coverage were consistently slightly higher in the intervention groups than the control group (range for HRRs = 1.05–1.09). A PP analysis using logistic regression produced odds ratios (ORs) that were consistently higher than the corresponding hazard rate ratios for both coverage and compliance (range for ORs = 1.15–1.29).

Conclusions

In none of our primary analyses did the tailored and targeted intervention result in higher mammography rates than the targeted-only intervention, and there was limited support for either intervention being more effective than the baseline survey alone. We found that adjustment for variable follow-up time produced more conservative (less favorable) intervention effect estimates.

Breast cancer is the second leading cause of cancer deaths in women in the United States (1). Evidence from randomized controlled trials shows that regular screening with mammography reduces mortality from breast cancer in women aged 50–74 years by approximately 23% (2) and that women older than 74 years benefit as well (3). To maximize the population benefit in terms of mortality reduction, women need to be screened every 1–2 years. According to National Health Interview Survey estimates, the prevalence of recent mammography use (within the past 2 years, by self-report) among women aged 40 years and older increased from 30% to 70% between 1987 and 2000 (4). However, the prevalence of regular mammography—that is, consecutive, on-schedule mammograms—is lower than the prevalence of recent mammography. For example, a review of regional studies reported summary estimates of repeat mammography in women aged 50 years and older (using an interval of ≤15 months between mammograms); rates ranged from 30.7% (95% confidence interval [CI] = 17.5 to 43.9) in studies conducted before 1991 to 43.6% (95% CI = 35.0 to 52.5) in studies conducted from 1995 through 2001 (5). Thus, although the pattern of increase for repeat mammography was similar to that for recent mammography, the prevalence was lower.

CONTEXT AND CAVEATS

Prior knowledge

Some behavioral interventions, especially those that include individually tailored messages, have been found to increase rates of one-time mammography screening. However, fewer studies have analyzed interventions to promote ongoing regular mammography.

Study design

The Project Healthy Outlook on the Mammography Experience trial compared rates of completion of two or more mammograms (“compliance”) among women randomly assigned to a tailored and targeted intervention, to a targeted intervention, and to a survey-only control group. Outcomes were evaluated by three decreasingly conservative analytic methods.

Contributions

An analytic approach that takes losses to follow-up into account may produce more conservative (less favorable) estimates of intervention effects. Only the least conservative analysis provided evidence that either intervention improved compliance compared with the survey-only control group. The absolute between-group difference ranged from 3% to 6%, depending on the analysis.

Implications

An intervention targeted to a broad group of women (in this case, women veterans) may work as well as one that is both tailored and targeted, although neither intervention was clearly superior to the control condition.

Limitations

The appropriate analytic method for a study of this type is not clear. The study did not have the statistical power to evaluate whether the modest intervention effects that were observed were statistically significant. It was possible to assess the validity of results based on self-reported mammography by reviewing medical records only for the subgroup of women veterans known to have ever used Veterans Health Administration facilities. A longer follow-up may be necessary to evaluate compliance among women who have regular mammograms at intervals of more than 2 years.

Throughout the 1980s and 1990s, many intervention trials were conducted to promote one-time mammography screening. Systematic reviews and meta-analyses have concluded that a variety of strategies are efficacious at increasing one-time mammography screening (6–15). Although the classification schemes varied, minimal interventions directed at patients (6,9–11,13) or providers (12–14) have been shown to increase screening. Meta-analyses also suggest that more intensive patient-directed interventions, including those using multiple strategies (eg, letter plus phone call), those tailored to an individual’s characteristics, and those based on theory, produced larger intervention effects than usual care or minimal interventions (6,10,11,16). In contrast, fewer studies have been conducted to evaluate strategies to increase regular screening (17).

To evaluate two theory-based interventions to promote regular mammography screening, we performed Project HOME (Healthy Outlook on the Mammography Experience), a randomized controlled trial. At the time our trial was conducted, no published studies had examined the effects of an intervention on completion of two or more postintervention mammograms, although several studies had measured completion of a second on-schedule mammogram among women who had a recent preintervention mammogram [see, eg, (18–20)]. Since 2000, only five published studies have examined whether women completed two postintervention mammograms (21–25), and they are limited in several ways. All five studies were based on regional samples in the northeast United States. Only one study (22) was community based; the other four were conducted in health-care settings. As with most cancer screening intervention trials, these studies evaluated intervention efficacy—that is, they attempted to maximize internal validity, with less attention to external validity (26,27). All study samples were drawn from defined sampling frames; however, no comparisons were made between participants and eligible nonparticipants (eg, refusers and nonrespondents). Further, although most tested equivalence of study groups at baseline, only two (22,25) compared the final analyzed sample with study withdrawals. None analyzed the data using intention-to-treat (ITT) principles; only participants who completed the study and provided all relevant follow-up data were included in the outcome analyses. Although all studies tested theory-based personalized strategies such as telephone counseling (22,25) and/or tailored print materials (21,23,24), only two (23,24) compared interventions of different degrees of personalization and none compared targeted (designed for population subgroups) and tailored (designed for individual-level feedback) approaches to motivate women to obtain a mammogram.

Project HOME, in addition to comparing a tailored and targeted intervention with a targeted-only intervention, goes beyond these earlier studies by attending to external validity, as well as internal validity, as described in the accompanying article by del Junco et al. (28). Specifically, we used a well-defined national sampling frame, the target population was randomly sampled, the study sample was tracked throughout the course of the study, and the data were analyzed using an ITT analysis in addition to more typical but decreasingly conservative analyses (29).

Subjects and Methods

Selection and Recruitment of the Study Population

Sample selection, recruitment, follow-up, participant flow, and group equivalence at each stage of the study are described in del Junco et al. (28). Briefly, the study population was composed of a random sample of women veterans 52 years and older on June 1, 2000 (ie, born on or before June 1, 1948), drawn from the US National Registry of Women Veterans (NRWV). The NRWV is a comprehensive sampling frame that contains records for 1.4 million of the estimated 1.6 million women veterans who separated from active duty after January 1, 1942 (30). Women veterans are similar to the US female population in terms of demographics (31–33), geographic distribution, mammography screening rates (34,35), and patterns of health-care use (whether private- or government-sponsored) (31,33). Eligibility criteria included previous active duty service in the US Armed Forces but not current active service, no prior diagnosis of breast cancer, physical and mental ability to participate, valid social security number, and a current mailing address in the United States or Puerto Rico.

Two random samples were selected 9 months apart, as described in del Junco et al. (28). For sampling rounds 1 and 2, respectively, recruitment began on September 4, 2000, and June 1, 2001, and follow-up ended on December 17, 2003, and October 1, 2004. Recruitment began with a letter of introduction and an eligibility survey. A pencil, a prepaid return envelope, and a small gift incentive (a bumper sticker that read, “Women are veterans, too”) were included with the mailing.

Study Objectives and Design

The primary aim of the study was to develop and evaluate two interventions based on the transtheoretical (ie, stages of change) model (36,37) and other relevant behavior change constructs to increase the completion of one postintervention mammogram (which we refer to as “mammography coverage”) and the completion of two postintervention mammograms 6–15 months apart (“mammography compliance”). Study candidates were randomly assigned to one of three groups approximately 2 months after the eligibility survey was mailed. Groups 1 and 2 received baseline and follow-up surveys and behavior change interventions that encouraged annual mammography screening and that differed in the extent of personalization. Group 3 was a survey-only control group. Based on American Cancer Society recommendations in effect at the time of the study (38), the intervention material included a recommendation that women 50 years and older be screened annually with mammography.

Because the study was designed to maximize external validity, we randomly assigned not only survey respondents with known eligibility status but also nonrespondents with unknown eligibility status to allow nonrespondents a chance to enroll, report ineligibility, or refuse participation at any time during the 3.25-year study. Randomization was stratified by sampling round and respondent or nonrespondent status to ensure proportional distribution across study groups. The study data manager (S. P. Coan) used Stata Statistical Software (Release 9; StataCorp LP, College Station, TX) to generate random numbers for all selection and allocation procedures without knowledge of study candidates’ NRWV characteristics or eligibility survey responses other than those used to establish stratum membership, that is, respondent or nonrespondent status and sampling round.
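
For illustration, the stratified allocation described above can be sketched in Python roughly as follows (the study itself used Stata to generate the random numbers); the column names, toy data, and fixed seed are assumptions for the example, not the project's actual code.

```python
# Illustrative sketch of stratified random assignment to three study groups.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2000)  # fixed seed, purely illustrative

def assign_groups(candidates: pd.DataFrame) -> pd.DataFrame:
    """Allocate candidates 1:1:1 to groups 1-3 within each stratum.

    Strata are defined by sampling round and eligibility-survey respondent
    status, mirroring the stratification described in the text.
    """
    out = candidates.copy()
    out["group"] = 0
    for _, idx in out.groupby(["sampling_round", "respondent"]).groups.items():
        shuffled = rng.permutation(np.asarray(idx))
        # Deal shuffled stratum members into groups 1, 2, 3 in turn so the
        # three groups stay proportionally balanced within every stratum.
        for position, row_label in enumerate(shuffled):
            out.loc[row_label, "group"] = position % 3 + 1
    return out

# Example usage with a toy candidate file
candidates = pd.DataFrame({
    "sampling_round": [1, 1, 1, 2, 2, 2],
    "respondent":     [True, True, False, True, False, False],
})
print(assign_groups(candidates))
```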

The study was funded by the National Cancer Institute (R01 CA76330) and was conducted through The University of Texas Health Science Center at Houston School of Public Health and the Veterans Administration Medical Center (VAMC) in Durham, North Carolina. The study protocol was approved by the institutional review boards of both institutions.

Survey and Intervention Development and Implementation

Timing of the Surveys and Interventions

We mailed a baseline survey to all women within 1 month of random assignment. On completion and return of the baseline survey or 3 months after the baseline survey mailing date (whichever occurred first), women in groups 1 and 2 were sent the first round of intervention materials. The first follow-up (year 1) survey was sent to all three groups approximately 1 year after the baseline survey. About 2 months after receipt of the year 1 survey, a second round of intervention materials was mailed to groups 1 and 2. The final follow-up (year 2) survey was mailed to women in all three groups approximately 1 year after the year 1 survey. All mailings included the names of the project director (JAT) and principal investigator (SWV) as well as their contact information and a toll-free telephone number for respondents to call and ask questions or to decline participation.

Surveys

The baseline survey (Supplementary data, available online) included questions on mammography screening history, demographics, history of military service, and psychosocial constructs drawn from theories of behavior change. The year 1 and 2 follow-up surveys were similar to the baseline survey but did not include demographic questions or measures of some of the psychosocial constructs. All surveys included questions to assess eligibility (eg, breast cancer diagnosis). As described in detail in del Junco et al. (28), we used generally recommended procedures to recruit and retain study participants, including the use of incentives, multiple mailings, and multiple modes of contact (39,40). Staff members who conducted the mailings and telephone follow-up were blind to study candidates’ intervention group status.

Planning and Conceptual Frameworks for the Intervention

We used intervention mapping as the planning framework (41). Intervention mapping is a systematic process for using theory and evidence for intervention design, implementation, and evaluation. The transtheoretical (stages of change) model (36,37) was our primary conceptual framework for developing surveys and intervention materials because the goal of the intervention was to move women through stages of change or readiness—that is, precontemplation, contemplation, preparation, and action—toward maintenance or regular mammography screening. We used the following constructs from the transtheoretical model: stage of change, pros and cons (decisional balance), and processes of change. Using the intervention mapping process, we identified additional constructs from the literature that are relevant to repeat breast cancer screening, including perceived risk and barriers, from the health belief model (42); self-efficacy, from social cognitive theory (43); and subjective norms, from the theory of reasoned action/planned behavior (44). We measured some of these constructs with questions that were developed and validated by one of the coinvestigators on the study (WR) in his prior work on mammography screening interventions (45–47). We also developed or modified items and scales used by other investigators in mammography studies that we identified in the literature and through our qualitative research with focus groups of women veterans. We validated these measures using baseline data from our study (48).

The more personalized intervention was both tailored and targeted, and the less personalized intervention was targeted-only. We defined tailoring as the use of personalized materials or strategies based on characteristics derived from an individual assessment (49,50). A tailored message is written specifically for an individual based on his or her responses to survey questions and attempts to mimic interpersonal communication (49,51). Following Kreuter and Skinner (52), we defined targeting as the development of materials for a specific population subgroup that takes into account certain characteristics shared by members of the group. A targeted message is written for a group whose similarity in characteristics (eg, veterans) is seen to represent a communication advantage (49). Intervention mapping guided the goals of tailoring for each theoretic construct selected and the form of tailored feedback provided (eg, graphical vs written).

Women in group 1 were sent both the targeted and tailored intervention components. Women in group 2 were sent the targeted component and a generic cover letter that conveyed general messages about breast cancer and mammography screening and encouraged annual screening.

Targeted Component

The targeted component consisted of a folder containing 1) a set of four educational booklets, 2) a letter for the woman to use to discuss mammography with her health-care provider, and 3) a pamphlet about mammography screening services available through the Veterans Administration (VA) (Supplementary data, available online). The materials were developed based on focus groups with women veterans and were designed to target the study population’s veteran status by including graphics of and testimonials and quotes from women veterans along with a pamphlet describing how to access mammography through the VA. Each of the four booklets addressed a different stage of change or readiness to have a screening mammogram, that is, precontemplation, contemplation/preparation, action, and maintenance. Text messages in the booklets were written by the investigators and were based on the constructs identified as being relevant to repeat mammography screening (pros and cons, processes of change, self-efficacy, subjective norms, and perceived risk). Messages reflected stage-relevant thoughts, feelings, and actions and suggested strategies to help participants move to the next stage of change. Each booklet contained a brief introduction, a section on frequently asked questions, a testimonial about mammography from a fictitious woman veteran that reflected the relevant stage of readiness to have a mammogram and an exercise (eg, list your favorite source of information). The six-page booklets were written at an eighth-grade educational level. A short self-assessment was printed on the inside pocket of the folder to allow a woman to categorize herself into one of the stages of change and to select the most relevant booklet.

Tailored Component

The tailored component of the intervention consisted of a letter with messages that addressed each participant’s responses to the theoretic constructs measured on the surveys and up to three of 14 possible bookmarks suggesting solutions to mammography screening barriers that the participant had identified on the survey (eg, pain, cost, or transportation difficulties) (Supplementary data, available online). The four-page tailored letter was produced in a color, newsletter-style format (53). The introduction and conclusion sections emphasized that the letter was created for the woman based on her survey responses. The letter included feedback on her recent mammography behavior and intention, gave information and motivational messages, and where appropriate, suggested activities designed to move her into the next stage of change. The body of the letter was divided into six sections: 1) a description of the participant’s current stage of change; 2) feedback regarding her decisional balance (pros outweighed cons or vice versa), positive reinforcement of the pros she endorsed, and information to counter the cons she reported; 3) graphical illustrations of her objective and perceived risks for breast cancer along with messages designed to help reconcile her perceptions with her actual risk (first intervention letter only); 4) feedback on her self-efficacy to get a mammogram and strategies to increase her confidence and overcome identified barriers (eg, ways to minimize discomfort during the procedure); 5) review of her use of the processes of change and a list of activities she could do that were appropriate to her stage of change; and 6) a reminder about her next mammogram due date. Additional details about the development and pilot testing of the tailored intervention and about the cost of developing the intervention have been reported elsewhere (53,54).

In the first round of the intervention, the contents of the tailored letter were based on the responses to the baseline survey. In the second intervention round, the letter included feedback based on responses to both the baseline and year 1 surveys (Supplementary data, available online). If a woman returned the baseline but not the year 1 survey, the letter sent in the year 1 intervention package described three scenarios of stage movement based on the stage of readiness the woman had reported on the baseline survey. Women in group 1 who did not return either the baseline or year 1 survey were sent the generic letter that had been developed for the targeted-only group (group 2). Nonrespondents and women who did not identify any barriers were sent bookmarks for the top three barriers to repeat mammography screening based on all survey respondents (ie, fear of pain, no family history of breast cancer, and forgetting to schedule an appointment).
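
As a rough illustration of the round-two mailing rules described above, the following Python sketch encodes the decision logic; the function name, data representation, and default-barrier handling are assumptions made for the example, not the project's actual implementation.

```python
# Sketch of the year 1 (round two) mailing decision rules described in the text.
from typing import Optional

DEFAULT_BARRIERS = ["fear of pain", "no family history of breast cancer",
                    "forgetting to schedule an appointment"]

def year1_mailing(group: int,
                  baseline: Optional[dict],
                  year1: Optional[dict]) -> dict:
    """Choose the letter type and bookmarks for one woman in round two."""
    if group == 3:
        return {"letter": None, "bookmarks": []}            # survey-only control
    if group == 2:
        return {"letter": "generic", "bookmarks": []}       # targeted-only group
    # group 1: tailored and targeted
    if year1 is not None:
        letter = "tailored: feedback on baseline and year 1 responses"
        barriers = year1.get("barriers", [])
    elif baseline is not None:
        letter = "tailored: three stage-movement scenarios from baseline stage"
        barriers = baseline.get("barriers", [])
    else:
        # neither survey returned: generic letter plus the top three barriers
        return {"letter": "generic", "bookmarks": DEFAULT_BARRIERS}
    return {"letter": letter, "bookmarks": (barriers or DEFAULT_BARRIERS)[:3]}

print(year1_mailing(1, {"barriers": ["cost"]}, None))
```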

Outcomes

On all surveys, women were asked to give the month and year of their two most recent mammograms (“If you have had one or more mammograms, when were your last two mammograms?”). The primary outcomes—that is, coverage (the first postintervention mammogram) and compliance (two postintervention mammograms 6–15 months apart)—were measured by self-report on the year 1 and year 2 follow-up surveys. Mammograms up to 15 months apart, rather than strictly 12, were considered to be on-time, “annual” mammograms, to allow for scheduling exigencies and the constraints of insurance coverage based on a 12-month interval (5,55).
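
A minimal sketch of how these two outcomes could be derived from month/year self-reports is shown below; the variable names and date handling are assumptions for illustration only.

```python
# Sketch of classifying self-reported mammography dates into coverage and compliance.
from datetime import date

def months_between(earlier: date, later: date) -> int:
    """Whole-month difference; month/year self-reports support no finer unit."""
    return (later.year - earlier.year) * 12 + (later.month - earlier.month)

def classify_outcomes(intervention_start: date, mammograms: list) -> dict:
    """Return coverage and compliance flags for one participant.

    Coverage: at least one mammogram on or after the start of follow-up.
    Compliance: a second postintervention mammogram 6-15 months after the
    coverage (first postintervention) mammogram.
    """
    post = sorted(m for m in mammograms if m >= intervention_start)
    coverage = len(post) >= 1
    compliance = coverage and any(
        6 <= months_between(post[0], later) <= 15 for later in post[1:]
    )
    return {"coverage": coverage, "compliance": compliance}

# A woman with one preintervention mammogram, a coverage mammogram, and a
# second mammogram 12 months later meets both outcome definitions.
print(classify_outcomes(date(2001, 1, 1),
                        [date(2000, 2, 1), date(2001, 4, 1), date(2002, 4, 1)]))
```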

Covariates

The primary independent variable was study group (1, 2, or 3). Covariates were age group (52–64 or ≥65 years) and VA health-care services user status (yes or no). A VA user was defined as a woman who reported on any survey that her last mammogram was at a VA or VA-sponsored facility, who was listed in either the Veterans Health Administration (VHA) inpatient or outpatient database, who reported having VA health insurance, or who had a record of a VA-sponsored mammogram in the VHA inpatient or outpatient file. These covariates were assessed for potential confounding and modifying effects in all multivariable analyses.

In some analyses, we also assessed whether being overdue for a mammogram modified intervention effects. Overdue was defined as an interval of more than 15 months since the last reported mammogram. Women who reported no previous mammograms and women who did not report at least the year of the most recent mammogram were considered to be overdue at the start of the follow-up period.
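
A minimal sketch of the overdue rule, under the same month-level date assumption as the outcome sketch above:

```python
# Sketch of the overdue definition: >15 months since the last reported
# mammogram, or no usable mammogram date at the start of follow-up.
from datetime import date
from typing import Optional

def is_overdue(followup_start: date, last_mammogram: Optional[date]) -> bool:
    if last_mammogram is None:
        # no prior mammogram reported, or the year was not reported
        return True
    months = (followup_start.year - last_mammogram.year) * 12 \
             + (followup_start.month - last_mammogram.month)
    return months > 15

print(is_overdue(date(2001, 1, 1), date(1999, 6, 1)))  # True: 19 months elapsed
```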

Statistical Analysis

Hypotheses

Our null hypotheses were that coverage and compliance were no different among study groups. Our alternative hypotheses were that coverage and compliance would be highest in group 1 (tailored and targeted), intermediate in group 2 (targeted-only), and lowest in group 3 (survey-only control group).

Sample Size

We used data from other mammography intervention trials (56–58) to calculate the sample size required for a 35%–65% range of coverage estimates and a 15%–45% range of compliance estimates (in control subjects). A minimum sample size of 600 respondents per group by study’s end was estimated to enable the detection of a 7%–12% absolute between-group difference (at P < .05) in coverage or compliance with 80% power and a two-sided test of statistical significance. The sample size calculation included Bonferroni corrections for six pairwise comparisons between study groups (59,60).
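
The sample-size reasoning can be approximated with a two-proportion power calculation, sketched below with statsmodels; because the authors' exact input assumptions are not reported, the scenarios shown are illustrative and will not necessarily reproduce the figure of 600 women per group.

```python
# Approximate per-group sample sizes for detecting an absolute difference in
# two proportions with 80% power, two-sided tests, and a Bonferroni-adjusted
# alpha of 0.05/6 for six pairwise comparisons.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

solver = NormalIndPower()
alpha = 0.05 / 6  # Bonferroni adjustment for six pairwise comparisons

# Illustrative control-group rates and detectable differences spanning the
# ranges quoted in the text (35%-65% coverage, 15%-45% compliance; 7%-12%).
scenarios = [(0.35, 0.12), (0.65, 0.12), (0.15, 0.07), (0.45, 0.07)]
for p_control, delta in scenarios:
    h = proportion_effectsize(p_control + delta, p_control)  # Cohen's h
    n_per_group = solver.solve_power(effect_size=h, alpha=alpha,
                                     power=0.80, alternative="two-sided")
    print(f"control rate {p_control:.0%}, difference {delta:.0%}: "
          f"about {n_per_group:.0f} women per group")
```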

Intention-To-Treat, Modified Intention-To-Treat, and Per-Protocol Analyses

To test our hypotheses, we conducted three decreasingly conservative analyses: ITT, modified intention-to-treat (MITT), and per-protocol (PP) analyses (29). We adapted terminology used by Le Henanff et al. (29), who defined ITT, MITT, and PP in terms of whether or not patients received or completed treatment. We defined these terms to indicate the extent to which study candidates participated in the study by returning project surveys or completing telephone follow-up, rather than the extent to which they were exposed to our mailed intervention materials (ie, the “treatment”), because we had no feasible way to assess whether women read or used the booklets or tailored letter. The ITT analyses included women who responded to at least one follow-up survey, women who never responded to any survey, those who actively refused to participate after random assignment, those who withdrew during follow-up due to ineligibility, and respondents with missing values for mammography dates on both follow-up surveys (Figure 1). Enumeration of ineligibles and reasons for ineligibility are provided in figure 3 of del Junco et al. (28), and response patterns to the baseline and follow-up surveys are provided in table 3 of del Junco et al. (28).

Figure 1
Eligibility for intention-to-treat (ITT), modified intention-to-treat (MITT), and per-protocol (PP) analyses of mammography coverage and compliance. Coverage is at least one mammogram following the first intervention and before the end of the study. Compliance ...
Table 3
Per-protocol analysis: univariate ORs and 95% confidence intervals (CIs) from logistic regression for mammography coverage (n = 2681) and compliance (n = 2065)*

For the purposes of the ITT analyses, we coded mammography status as “0” for missing date values (61). In the MITT analysis, we excluded women who refused after randomization, nonrespondents to both year 1 and year 2 follow-up surveys, and women with unknown mammography status (Figure 1). In the PP analyses, we excluded women from the MITT analyses who did not complete a baseline survey (Figure 1). Only 7.4% of the women in group 1 who were eligible for the MITT analysis were not mailed a tailored letter in either intervention round, and 19% were not sent a tailored letter in one of the two rounds. In the PP analysis, all women received a tailored letter in at least one of the two rounds.

In the initial data analyses, we examined crude coverage and compliance by computing cumulative incidence rates and tested for overall and pairwise group differences using chi-square statistics. We then used Cox proportional hazards regression to estimate intervention effects before and after adjusting for covariates. We used Cox regression instead of logistic regression because Cox regression adjusts for potential differences in follow-up time across study groups and has been shown to yield more precise risk estimates than logistic regression for analyzing longitudinal data (62,63). We defined the start of the study follow-up period as the date of the first intervention mail-out for groups 1 and 2. For group 3 (control group), the start date of the follow-up period was assigned as the midpoint of all intervention mail-out dates. The end of the follow-up period was defined as the date of the first mammogram reported after the start date (ie, the coverage mammogram), the date of the most recent follow-up survey received for those with no eligible mammograms, the date we learned the respondent became ineligible, or the end of the study for refusers and nonrespondents, whichever occurred first.
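
A minimal sketch of such a univariate Cox model for coverage, using the lifelines library on toy data, is shown below; the data layout (one row per woman, dummy-coded study group with the control group as reference) is an assumption for illustration and not the study's actual dataset.

```python
# Sketch of a univariate Cox proportional hazards model for mammography coverage.
import pandas as pd
from lifelines import CoxPHFitter

# Toy per-woman data: follow-up time in days from the start of follow-up to
# the coverage mammogram or censoring, an event indicator, and dummy-coded
# study group (group 3, the survey-only control, is the reference level).
df = pd.DataFrame({
    "followup_days": [200, 420, 365, 150, 600, 90, 310, 275, 500],
    "coverage":      [1,   0,   1,   1,   0,   1,  1,   0,   1],
    "group1":        [1,   1,   1,   0,   0,   0,  0,   0,   0],
    "group2":        [0,   0,   0,   1,   1,   1,  0,   0,   0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_days", event_col="coverage")
# The exp(coef) column holds the hazard rate ratios for each intervention
# group relative to the control group.
print(cph.summary[["coef", "exp(coef)", "p"]])
```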

To enable us to detect intervention effects on compliance (two postintervention mammograms) separately from intervention effects on coverage (the first postintervention mammogram), conditional Cox regression models for compliance were restricted to participants with a coverage mammogram. We were unable to evaluate compliance in the ITT analysis using Cox regression because, by definition, a woman had to self-report a coverage mammogram and have a time interval of at least 6 months between the date of her coverage mammogram and the date of her final follow-up survey. For the compliance Cox regression analyses, the start of the follow-up period was defined as the date of a participant’s first postintervention mammogram (ie, coverage). Study candidates lacking a coverage mammogram and participants lacking at least 6 months of follow-up time after the date of their coverage mammogram were excluded from the analyses for compliance because those women could not become due for a second postintervention mammogram. The end of the follow-up period for the compliance regression analyses was defined as the date of the first self-reported mammogram that occurred 6–15 months after the date of the coverage mammogram (ie, the “compliance” mammogram) or, for those without a compliance mammogram, the date of the most recent follow-up survey received, the date we learned the respondent became ineligible, or the date exactly 15 months following the date of the respondent’s coverage mammogram, whichever occurred first. Follow-up was truncated at 15 months.
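
The construction of the compliance follow-up interval might be sketched as follows; the month-level granularity, variable names, and simplified handling of edge cases are assumptions for illustration.

```python
# Sketch of building the compliance follow-up interval, conditional on a
# coverage mammogram and truncated at 15 months.
from datetime import date
from typing import Optional, Tuple

def months_between(earlier: date, later: date) -> int:
    return (later.year - earlier.year) * 12 + (later.month - earlier.month)

def compliance_interval(coverage_date: date,
                        compliance_date: Optional[date],
                        censor_date: date) -> Optional[Tuple[int, int]]:
    """Return (follow-up months, event indicator), or None if not evaluable."""
    if compliance_date is not None and \
            6 <= months_between(coverage_date, compliance_date) <= 15:
        return months_between(coverage_date, compliance_date), 1  # compliance event
    if months_between(coverage_date, censor_date) < 6:
        return None  # fewer than 6 months of follow-up after coverage: excluded
    # censored at the survey/ineligibility date or at exactly 15 months,
    # whichever comes first
    return min(months_between(coverage_date, censor_date), 15), 0

print(compliance_interval(date(2001, 3, 1), date(2002, 2, 1), date(2002, 10, 1)))
```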

In the Cox models for the MITT and PP analyses of coverage mammograms, we included overdue status as a time-dependent covariate along with the appropriate interaction terms (eg, group 1 × overdue). This approach allowed us to examine potential differential intervention effects depending on whether participants were overdue for an annual mammogram at the start or sometime during the follow-up period. We were unable to assess overdue status in the ITT analyses because nonrespondents and refusers had unknown pre- and postintervention mammography dates. Our substitution of a “yes” (or a code of 1) for missing overdue status and a “no” (or a code of 0) for missing postintervention mammography status would render the interaction term coefficients uninterpretable because the modifying effect of being overdue would be confounded with the artificial effect from the imputations.

Results were summarized using hazard rate ratios (HRRs) and 95% confidence intervals. Group effect estimates from the multivariable and univariate Cox models differed by less than 15% (64); therefore, we show only univariate results. The assumptions of the proportional hazards model were tested with diagnostic statistical and graphical methods based on Schoenfeld residuals (65,66). There were no statistically significant departures from model assumptions. Stata Statistical Software version 10 was used for all analyses.
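
Continuing the toy Cox example sketched earlier (`cph` and `df` refer to that fit), a Schoenfeld-residual check of the proportional hazards assumption could be run in lifelines roughly as follows; this is an illustrative substitute for the Stata diagnostics actually used.

```python
# Sketch of testing the proportional hazards assumption via scaled
# Schoenfeld residuals with lifelines.
from lifelines.statistics import proportional_hazard_test

ph_test = proportional_hazard_test(cph, df, time_transform="rank")
print(ph_test.summary)     # per-covariate test statistics and p-values
cph.check_assumptions(df)  # prints additional diagnostics and advice
```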

Ancillary Per-Protocol Analyses Comparing Estimates From Cox and Logistic Regression

All previous mammography trials that measured the percentage of women who obtained two postintervention mammograms used logistic regression to adjust for covariates and also restricted their outcome analyses to women who provided all baseline and follow-up data (our PP definition). Therefore, to directly compare our results with those of other studies, we performed PP analyses for coverage and compliance using logistic regression and compared the estimates with those from the Cox PP analyses (Figure 1). Results for logistic regression were summarized using odds ratios (ORs) and 95% confidence intervals.
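
A minimal sketch of the corresponding logistic-regression analysis on toy per-protocol data, using statsmodels, might look like the following; the dummy coding mirrors the Cox sketches above and is an assumption for illustration, not the study's dataset.

```python
# Sketch of estimating odds ratios for coverage by study group with logistic regression.
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Toy per-protocol data with the same dummy coding as in the Cox sketches.
pp = pd.DataFrame({
    "coverage": [1, 0, 1, 1, 0, 1, 1, 0, 1],
    "group1":   [1, 1, 1, 0, 0, 0, 0, 0, 0],
    "group2":   [0, 0, 0, 1, 1, 1, 0, 0, 0],
})
X = sm.add_constant(pp[["group1", "group2"]])
fit = sm.Logit(pp["coverage"], X).fit(disp=0)
print(np.exp(fit.params))      # odds ratios versus the control group
print(np.exp(fit.conf_int()))  # 95% confidence intervals for the ORs
```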

Ancillary Analyses of VA Users

Linkage of NRWV records with the VA’s administrative patient databases (henceforth referred to as VA records) enabled us to assess intervention effects independent of survey response in the subset of women who had ever used the VA health-care system. For this subset of women, VA records were available for eligible nonparticipants (those who never responded to any survey and those who refused to participate after they were randomly assigned) as well as for study participants (respondents to the year 1 or year 2 follow-up surveys). Figure 2 shows the number of VA users eligible for ITT, MITT, and PP analyses using VA records and self-report survey data.

Figure 2
Eligibility of Veterans Administration (VA) users for intention-to-treat (ITT), modified intention-to-treat (MITT), and per-protocol (PP) analyses of mammography coverage and compliance. Coverage is at least one mammogram following the first intervention ...

For the ITT analyses, comparing effect estimates based on VA records with those based on self-report enabled us to assess the influence of imputing zeros for self-reported mammograms among presumed eligible nonparticipants. For both the MITT and PP analyses, comparing effect estimates based on VA records with those based on self-report enabled us to examine the influence of different methods of ascertaining the study outcome. Across the ITT, MITT, and PP analyses, comparing effect estimates based on VA records enabled us to determine whether intervention effects varied with the level of study participation. We used Cox regression for all ancillary analyses within the subgroup of VA users.

Results

Participant Flow, Recruitment, and Baseline Data

Details about the selection and flow of study candidates throughout the 3.25-year study, including analyses assessing between-group equivalence and comparing nonparticipant with participant characteristics, are described in del Junco et al. (28). By the end of the study, we were able to establish eligibility status for 7997 of 8444 (95%) study candidates whose records were drawn from the NRWV and who were randomly assigned to one of the three study groups. Of the 8444 study candidates, 2944 were identified as ineligible before the first intervention mailing, leaving 5500 respondents, never respondents, and refusers for the ITT analyses. As described in del Junco et al. (28), the three study groups were statistically equivalent on all factors measured at randomization, at the time of the baseline surveys, and at the end of the study. In addition, there were no between-group differences in the characteristics of presumably eligible nonparticipants (ie, nonrespondents and those who refused to participate, either before the intervention or during follow-up) and those of participants retained through the end of the study.

Mammography Coverage

Differences in the crude estimates of coverage among groups were not statistically significant in ITT, MITT, or PP analyses (Table 1). The only pairwise comparisons for coverage that were statistically significant at P less than .05 were between group 2 (targeted-only) and group 3 (control group) for the MITT and PP analyses. Estimates showed favorable but modest absolute differences between the control group and the two intervention groups of 1%–2% for ITT, 1%–3% for MITT, and 2%–3% for PP.

Table 1
Crude cumulative incidence of mammography coverage and compliance for intention-to-treat, modified intention-to-treat, and per-protocol analyses*

Results from Cox modeling (Table 2) showed no statistically significant effect of the interventions on self-reported mammography coverage in the ITT, MITT, or PP analyses, although hazard rate ratios were slightly higher in the intervention groups than the control group (range for HRRs = 1.05 – 1.09). Adjustment for age group and VA user status slightly affected the precision but not the magnitude of the estimates (Supplementary Table 1, available online). Tests for first-order interactions between study group and age group and between study group and VA user status showed no evidence of effect modification (Supplementary Table 1, available online). In the time-dependent Cox models for coverage, overdue status did not statistically significantly modify intervention effects in either the MITT or PP analyses.

Table 2
Univariate Cox HRRs and 95% CIs for coverage and compliance for intention-to-treat, modified intention-to-treat, and per-protocol analyses*

There were statistically significant main effects of being in the younger age group in all analyses, and the association with age was stronger in the ITT than in the MITT or PP analyses (Table 2). Being a VA user was statistically significantly associated with coverage in the ITT analysis but not in the MITT or the PP analyses.

Mammography Compliance

Differences in the crude estimates of compliance among groups were statistically significant in the PP analyses and approached statistical significance in the ITT and MITT analyses (Table 1). In all three analyses, pairwise comparisons showed that women in each of the two intervention groups were more likely to report receiving two postintervention mammograms 6–15 months apart than women in the control group. Estimates showed favorable but modest absolute differences between the control group and the two intervention groups of 3%–4% for ITT and MITT and around 6% for PP.

However, results from Cox modeling showed no statistically significant effect of the interventions on self-reported mammography compliance in either the MITT or the PP analysis (Table 2). There were no statistically significant main effects of age or VA user status on compliance. Adjustment for age group and VA user status slightly affected the precision but not the magnitude of the estimates (Supplementary Table 2, available online). Tests for first-order interactions between study group and age group and study group and VA user status showed no evidence of effect modification (Supplementary Table 2, available online).

Ancillary Per-Protocol Analyses Using Logistic Regression

The odds ratios in the PP analyses for coverage and compliance using logistic regression were consistently higher than the hazard rate ratios based on Cox regression, although the confidence intervals overlapped (range for ORs = 1.15–1.29) (Table 3). Compared with group 3, group 2 had statistically significantly higher mammography coverage and group 1 had statistically significantly higher mammography compliance (Table 3). Inclusion of age group and VA user status in the logistic models and tests for first-order interactions between study group and age group and between study group and VA user status showed no evidence of confounding or effect modification. As in the PP analyses based on Cox regression, younger age had a statistically significant positive association with mammography coverage but not with mammography compliance, and VA user status was not associated with either outcome.

Ancillary Analyses of VA Users

In the subset of VA users, the hazard rate ratios of mammography coverage were similar for VA records and self-report in the ITT analyses, regardless of study group assignment (Table 4). In the MITT and PP analyses, the hazard rate ratios of mammography coverage based on VA records were slightly higher than those based on self-report whereas the hazard rate ratios of mammography compliance showed the opposite pattern. In VA record–based analyses, hazard rate ratios for mammography coverage increased slightly across the three decreasingly conservative analyses (ie, from the ITT to the MITT and then to the PP analyses).

Table 4
Univariate Cox HRRs and 95% CIs for coverage and compliance among Veterans Administration users for intention-to-treat, modified intention-to-treat, and per-protocol analyses by data source used to ascertain mammography status*

Discussion

To our knowledge, this is the first cancer screening intervention trial in the United States that used a well-defined, nationally representative study population and that provided evidence of both internal and external validity (28). In addition, few intervention trials have conducted ITT analyses or compared intervention effects using different analytic approaches. An exception is a recent study of a substance abuse program (67) that analyzed data using both ITT and PP approaches and found that positive findings from the PP analysis were not replicated using ITT.

In none of our primary analyses did the tailored and targeted intervention (group 1) result in higher mammography rates than the targeted-only intervention (group 2), and there was limited support for either intervention being more effective than the baseline survey alone (group 3). Our study had adequate statistical power to test hypotheses of 7%–12% absolute between-group differences for mammography coverage and compliance, assuming that rates in the control group were in the range of 35%–65% for coverage and of 15%–45% for compliance. Mammography rates in the control group were within these ranges for all primary analyses except coverage in the MITT and PP analyses, for which the cumulative incidence estimates were around 80%. With such high rates, it may be difficult for any intervention to increase coverage (ie, a ceiling effect). Follow-up time was adequate for women to complete two postintervention mammograms before the end of the study, even assuming that they did not receive their first postintervention mammograms until the end of the first follow-up period. However, some women may not follow an annual mammography schedule or may be on a biennial schedule at the recommendation of their physician. Studies measuring multiple mammograms after an intervention need to allow adequate time to assess these contingencies.

We also found that the PP effect estimates from logistic regression were of greater magnitude than the corresponding effect estimates from Cox modeling. This finding is consistent with data from an occupational cohort and simulation study that compared mortality risk estimates using proportional hazards, Poisson, and logistic regression. In that study, Callas et al. (63) found that logistic regression overestimates mortality risk associated with the exposure and is less precise than estimates based on proportional hazards models, which are considered to be the “gold” standard for prospective studies. Further, the extensive simulations of Callas et al. (63) revealed that risk estimates differed most when the outcome (eg, mammography use) was common and when the relative risk was less than 2.0, circumstances that pertained to our data.

Our estimates based on Cox regression were also lower than those from the five other studies (21–25) that evaluated tailored or personalized approaches and measured two postintervention mammograms, all of which used logistic regression to estimate intervention effects. However, when we analyzed our data using logistic regression and imposed similar restrictions on the study sample, our effect sizes increased and were within the range of estimates reported in those studies. Collectively, these findings raise a question about the appropriate analytic method to use in analyzing outcome data in cancer screening intervention trials. Trials that use logistic regression may overestimate favorable intervention effects in the target population. If so, our findings have important implications for the dissemination of cancer screening interventions that appear favorable based on the results of efficacy trials using analyses that ignore losses to follow-up.

Our study has several limitations. Because our study was designed to assess the effectiveness of the interventions under “real-world” conditions, we retained women in our ITT analyses for whom we did not have data on mammography status. Our decision to code women who did not respond, who refused to participate, or who were missing self-reported mammography data as having “no mammogram” is a “worst-case” imputation strategy (61). However, the hazard rate ratios based on decreasingly conservative analyses were very similar to one another. Moreover, despite the artificially low rates of coverage and compliance produced by imputing zeros for missing mammography dates in the ITT analyses that were based on self-report, the ITT analyses based on VA records—in which mammography was ascertained independent of study participation—were corroborative.

Another limitation relates to the use of self-report to measure mammography status. There is some evidence that survey respondents may overreport socially desirable behaviors such as mammography screening when self-reports are compared with more objective data sources, such as medical records (68–70). In a recent study by Paskett et al. (71), women receiving the intervention were more likely than those in the control group to report mammograms that were not documented in their medical records. In our analyses of VA users, in which we compared medical record with self-report data, there was no consistent pattern suggestive of such bias in the hazard rate ratio estimates. In addition, volunteers willing to participate in health promotion and prevention trials may be a self-selected subgroup that is more likely than the target population to engage in health-related behaviors. For example, people who complete study questionnaires are more likely to undergo colorectal cancer screening than those who do not (72–76). Such a phenomenon could explain the slight increases in hazard rate ratios we observed when comparing results from the ITT with the less conservative MITT and PP analyses based on VA records.

At the time our trial was initiated, there were no published studies that evaluated the effect of a tailored intervention on completion of two postintervention mammograms or that compared the effects of tailored and targeted approaches. Our hypothesis regarding mammography compliance (ie, the completion of two postintervention mammograms 6–15 months apart) was based on theory (36,37) and on some empiric evidence from trials of interventions to promote one postintervention mammogram (57,77–79). We expected that a tailored and targeted intervention would be more effective than one that was targeted-only and that both interventions would be more effective than a survey-only or a no-contact condition. After our trial began, findings were published from two other studies that used a three-group design, delivered two rounds of intervention that varied in the extent of personalization, and measured completion of two postintervention mammograms (23,24). Our findings are generally consistent with those studies. Rimer et al. (24) found no statistically significant difference between either intervention condition (standard care plus a mailed tailored print booklet or standard care plus telephone counseling to address barriers) and standard care alone (patient reminder and physician prompt), whereas Lipkus et al. (23) reported greater mammography use after telephone counseling than after standard care (multiple mailed reminders), but only in the first year of the trial. Collectively, these findings and ours provide little support for an additional benefit of using tailored interventions, either mailed or by telephone, to increase regular mammography screening.

We did not have adequate power to detect whether the modest intervention effects we observed were statistically significant. Crude coverage and compliance estimates showed favorable absolute differences between the control group and the two intervention groups of 1%–3% for ITT analysis, of 1%–5% for MITT analysis, and of 2%–6% for the PP analysis. The absolute differences that we observed were larger than those reported in a population-based mammography intervention trial that tested a direct mail strategy using a commercial mailing list to reach low-income women eligible for free screening (80). That study reported statistically significant differences for the mail-only group (1.06%) and the mail plus incentive group (1.58%) compared with a control group (0.83%). Although most efficacy trials of cancer screening interventions aim to detect 10% or greater absolute between-group differences, there is no consensus about what constitutes an important difference from a public health perspective when an intervention is delivered at a population level.

Our findings suggest that a targeted-only intervention may be as effective as one that is both targeted and tailored, although there was limited support for either intervention being more effective than the baseline survey alone. We also found that using an analytic approach that adjusts for variable follow-up time produced more conservative (less favorable) intervention effect estimates.

Acknowledgments

Funding

National Institutes of Health (R01 NCI CA76330 to L.A.B., W.C., D.J.d.J., A.H., D.R.L., W.R., J.A.T., S.W.V., C.W., S.P.C.; R03 NCI CA103512 to D.J.d.J., S.W.V.; K07 CA79759 to M.E.F.; R25 CA057712 to A.M.).

The authors take sole responsibility for the study design, data collection and analyses, interpretation of the data, and the preparation of the manuscript. The authors are indebted to the US women veterans who participated in the study and to the members of the external advisory committee who reviewed study materials and assisted in various other ways throughout the study. We thank Amy Jo Harzke for her comments on an earlier draft of the manuscript.

References

1. Jemal A, Siegal R, Ward E, Murray T, Xu J, Thun MJ. Cancer Statistics, 2007. CA Cancer J Clin. 2007;57(1):43–66. [PubMed]
2. Kerlikowske K, Grady D, Rubin SM, Sandrock C, Ernster VL. Efficacy of screening mammography: a meta-analysis. JAMA. 1995;273(2):149–154. [PubMed]
3. Galit W, Green MS, Lital KB. Routine screening mammography in women older than 74 years: a review of the available data. Maturitas. 2007;57(2):109–119. [PubMed]
4. Swan J, Breen NL, Coates RJ, Rimer BK, Lee NC. Progress in cancer screening practices in the United States: results from the 2000 National Health Interview Survey. Cancer. 2003;97:1528–1540. [PubMed]
5. Clark MA, Rakowski W, Bonacore LB. Repeat mammography: prevalence estimates and considerations for assessment. Ann Behav Med. 2003;26(3):201–211. [PubMed]
6. Bonfill X, Marzo M, Pladevall M, Marti J, Emparanza J. Strategies for increasing the participation of women in community breast cancer screening (Cochrane Review). In: The Cochrane Library. Issue 4. Chichester, UK: John Wiley & Sons, Ltd; 2003.
7. Legler JM, Meissner HI, Coyne C, Breen NL, Chollette V, Rimer BK. The effectiveness of interventions to promote mammography among women with historically lower rates of screening. Cancer Epidemiol Biomarkers Prev. 2002;11(1):59–71. [PubMed]
8. Ratner PA, Bottorff JL, Johnson JL, Cook R, Lovato CY. A meta-analysis of mammography screening promotion. Cancer Detect Prev. 2001;25(3):147–160. [PubMed]
9. Wagner TH. The effectiveness of mailed patient reminders on mammography screening: a meta-analysis. Am J Prev Med. 1998;14(1):64–70. [PubMed]
10. Yabroff KR, Mandelblatt JS. Interventions targeted toward patients to increase mammography use. Cancer Epidemiol Biomarkers Prev. 1999;8:749–757. [PubMed]
11. Yabroff KR, O’Malley AS, Mangan P, Mandelblatt JS. Inreach and outreach interventions to improve mammography use. J Am Med Womens Assoc. 2001;56(4):166–174. [PubMed]
12. Mandelblatt JS, Yabroff KR. Effectiveness of interventions designed to increase mammography use: a meta-analysis of provider-targeted strategies. Cancer Epidemiol Biomarkers Prev. 1999;8:759–767. [PubMed]
13. Stone EG, Morton SC, Hulscher MEJL, et al. Interventions that increase use of adult immunization and cancer screening services: a meta-analysis. Ann Intern Med. 2002;136(9):641–651. [PubMed]
14. Snell JL, Buck EL. Increasing cancer screening: a meta-analysis. Prev Med. 1996;25:702–707. [PubMed]
15. Centers for Disease Control and Prevention. Guide to Community Preventive Services. 2007. [Accessed July 12, 2007]. http://www.thecommunityguide.org/cancer/screening/default.htm.
16. Sohl SJ, Moyer A. Tailored interventions to promote mammography screening: a meta-analysis review. Prev Med. 2007;45:252–261. [PMC free article] [PubMed]
17. Vernon SW, Tiro JA, Meissner HI. Behavioral research in cancer screening. In: Miller SM, Bowen DJ, Croyle RT, editors. Handbook of Behavioral Science and Cancer. Washington, DC: APA; 2008.
18. Partin MR, Malone M, Winnett M, Slater J, Bar-Cohen A, Caplan LS. The impact of survey non-response bias on conclusions drawn from a mammography intervention trial. J Clin Epidemiol. 2003;56(9):867–873. [PubMed]
19. Mayer JA, Lewis EC, Slymen DJ, et al. Patient reminder letters to promote annual mammograms: a randomized controlled trial. Prev Med. 2000;31:315–322. [PubMed]
20. Rakowski W, Lipkus IM, Clark MA, et al. Reminder letter, tailored stepped-care, and self-choice comparison for repeat mammography. Am J Prev Med. 2003;25(4):308–314. [PubMed]
21. Clark MA, Rakowski W, Ehrich B, et al. The effect of a stage-matched and tailored intervention on repeat mammography. Am J Prev Med. 2002;22(1):1–7. [PubMed]
22. Messina CR, Lane DS, Grimson R. Effectiveness of women’s telephone counseling and physician education to improve mammography screening among women who underuse mammography. Ann Behav Med. 2002;24(4):279–289. [PubMed]
23. Lipkus IM, Rimer BK, Halabi S, Strigo TS. Can tailored interventions increase mammography use among HMO women? Am J Prev Med. 2000;18(1):1–10. [PubMed]
24. Rimer BK, Halabi S, Skinner CS, et al. Effects of a mammography decision-making intervention at 12 and 24 months. Am J Prev Med. 2002;22(4):247–257. [PubMed]
25. Costanza ME, Stoddard AM, Luckmann R, White MJ, Spitz-Avrunin J, Clemow L. Promoting mammography: results of a randomized trial of telephone counseling and a medical practice intervention. Am J Prev Med. 2000;19(1):39–46. [PubMed]
26. Glasgow RE, Marcus AC, Bull SS, Wilson KM. Disseminating effective cancer screening interventions. Cancer. 2004;101:1239–1250. [PubMed]
27. Glasgow RE, Davidson KW, Dobkin PL, Ockene JK, Spring B. Practical behavioral trials to advance evidence-based behavioral medicine. Ann Behav Med. 2006;31(1):5–13. [PubMed]
28. del Junco DJ, Vernon SW, Coan SP, et al. Promoting regular mammography screening I. A systematic assessment of validity in a randomized trial. J Natl Cancer Inst. 2008;100(5):333–346. [PMC free article] [PubMed]
29. Le Henanff A, Giraudeau B, Baron G, Ravaud P. Quality of reporting of non-inferiority and equivalence randomized trials. JAMA. 2006;295:1147–1151. [PubMed]
30. Richardson C, Waldrop J. Veterans: 2000. Census 2000 Brief. Washington, DC: U.S. Census Bureau, U.S. Department of Commerce; 2003. [Accessed May 25, 2007]. www.census.gov/prod/2003pubs/c2kbr-22.pdf.
31. Boyle JM. Survey of Female Veterans: A Study of the Needs, Attitudes and Experiences of Women Veterans [study conducted for the Veterans Administration]. New York, NY: Louis Harris and Associates, Inc; 1985. pp. 1–299. Report 843002.
32. Estimates and Projections of the Veteran Population, 1990–2030, Vetpop 2001. Washington, DC: Department of Veterans Affairs, Office of the Actuary; 2001. [Accessed August 1, 2007]. http://www1.va.gov/vetdata/docs/5l.xls.
33. 2001 National Survey of Veterans (NSV): Final Report. [Accessed August 1, 2007]. http://www1.va.gov/vetdata/docs/survey_final.htm.
34. Hynes DM, Bastian LA, Rimer BK, Sloane R, Feussner JR. Predictors of mammography use among women veterans. J Womens Health. 1998;7(2):239–247. [PubMed]
35. Goldzweig CL, Parkerton PH, Washington DL, Lanto AB, Yano EM. Primary care practice and facility quality orientation: influence on breast and cervical cancer screening rates. Am J Manag Care. 2004;10(4):265–272. [PubMed]
36. Diclemente CC, Prochaska JO. Toward a comprehensive transtheoretical model of change. In: Miller WR, Heather N, editors. Treating Addictive Behaviors. New York, NY: Plenum Press; 2002. pp. 3–24.
37. Prochaska JO, Diclemente CC. Stages of behavior change in the modification of problem behaviors. In: Hersen M, Eisler RM, Miller PM, editors. Progress in Behavior Modification. Sycamore, IL: Sycamore Publishing Company; 1992. pp. 184–206.
38. Greenlee RT, Murray T, Bolden S, Wingo PA. Cancer statistics, 2000. CA Cancer J Clin. 2000;50(1):7–33. [PubMed]
39. Dillman DA. Mail and Internet Surveys: The Tailored Design Method. 2nd ed. Hoboken, NJ: John Wiley & Sons; 1999.
40. Aday LA. Designing and Conducting Health Surveys: A Comprehensive Guide. San Francisco, CA: Jossey-Bass; 1996.
41. Bartholomew LK, Parcel GS, Kok G, Gottlieb NH. Planning Health Promotion Programs: An Intervention Mapping Approach. 2nd ed. San Francisco, CA: Jossey-Bass; 2006.
42. Janz NK, Becker MH. The health belief model: a decade later. Health Educ Q. 1984;11(1):1–47. [PubMed]
43. Bandura A. Health promotion from the perspective of Social Cognitive Theory. In: Norman P, Abraham C, Conner M, editors. Understanding and Changing Health Behaviour: From Health Beliefs to Self-regulation. Amsterdam, The Netherlands: Harwood Academic Publishers; 2000. pp. 299–339.
44. Ajzen I. The theory of planned behavior. Organ Behav Hum Decis Process. 1991;50(2):179–211.
45. Rakowski W, Ehrich B, Dube CE, et al. Screening mammography and constructs from the transtheoretical model: associations using two definitions of the stages-of-adoption. Ann Behav Med. 1996;18(2):91–100. [PubMed]
46. Rakowski W, Fulton JP, Feldman JP. Women’s decision making about mammography: a replication of the relationship between stages of adoption and decisional balance. Health Psychol. 1993;12(3):209–214. [PubMed]
47. Rakowski W, Andersen MR, Stoddard AM, et al. Confirmatory analysis of opinions regarding the pros and cons of mammography. Health Psychol. 1997;16(5):433–441. [PubMed]
48. Tiro JA, Diamond P, Perz CA, et al. Validation of scales measuring attitudes and norms related to mammography screening in women veterans. Health Psychol. 2005;24(6):555–566. [PubMed]
49. Kreuter MW, Strecher VJ, Glassman B. One size does not fit all: the case for tailoring print materials. Ann Behav Med. 1999;21(4):276–283. [PubMed]
50. Kreuter MW, Skinner CS. Tailoring: what’s in a name? [letter] Health Educ Res. 2000;15(1):1–4. [PubMed]
51. Oenema A, Brug J, Lechner L. Web-based tailored nutritional education: results of a randomized controlled trial. Health Educ Res. 2001;16(6):647–660. [PubMed]
52. Kreuter MW, Skinner CS. Response to Pasick [letter] Health Educ Res. 2001;16(4):507–508.
53. Halder AK, Tiro JA, Glassman B, et al. Lessons learned from developing a print tailored intervention: a guide for practitioners and researchers new to tailoring. Health Promot Pract. 2008 [published online ahead of print July 7, 2006]. [Accessed January 15]. http://hpp.sagepub.com/cgi/rapidpdf/1524839906289042v1. doi:10.1177/1524839906289042. [PubMed]
54. Lairson DR, Newmark GR, Rakowski W, Tiro JA, Vernon SW. Development costs of a computer-generated tailored intervention. Eval Program Plann. 2004;27(2):161–169.
55. Partin MR, Casey-Paal AL, Slater JS, Korn JE. Measuring mammography compliance: lessons learned from a survival analysis of screening behavior. Cancer Epidemiol Biomarkers Prev. 1998;7(8):681–687. [PubMed]
56. Michels TC, Carter WB, Taplin SH, Kugler JP. Barriers to screening: the theory of reasoned action applied to mammography use in a military beneficiary population. Mil Med. 1995;160(9):431–437. [PubMed]
57. Rakowski W, Ehrich B, Goldstein MG, et al. Increasing mammography among women aged 40–74 by use of a stage-matched, tailored intervention. Prev Med. 1998;27:748–756. [PubMed]
58. Rakowski W, Ehrich B, Goldstein MG, et al. Encouraging repeat screening mammography with a stage-matched, tailored intervention. 1998 [PubMed]
59. Fleiss JL. Statistical Methods for Rates and Proportions. 2nd ed. New York, NY: John Wiley & Sons; 1981.
60. Schlesselman JJ. Sample size requirements in cohort and case-control studies of disease. Am J Epidemiol. 1974;99(6):381–384. [PubMed]
61. Wood AM, White IR, Thompson SG. Are missing outcome data adequately handled? A review of published randomized controlled trials in major medical journals. Clin Trials. 2004;1(4):368–376. [PubMed]
62. Hosmer DW, Lemeshow S. Applied Survival Analysis: Regression Modeling of Time to Event Data. New York, NY: John Wiley & Sons, Inc; 1999.
63. Callas PW, Pastides H, Hosmer DW. Empirical comparisons of proportional hazards, Poisson, and logistic regression modeling of occupational cohort data. Am J Ind Med. 1998;33:33–47. [PubMed]
64. Kleinbaum DG, Sullivan K, Barker N. A Pocket Guide to Epidemiology. New York, NY: Springer; 2007.
65. Garrett JM. Graphical assessment of the Cox model proportional hazards assumption. Stata Tech Bull. 1997;35:9–14 (gr23).
66. Schoenfeld D. Partial residuals for the proportional hazards regression model. Biometrika. 1982;69(1):239–241.
67. Hallfors D, Cho H, Sanchez V, Khatapoush S, Kim HM, Bauer D. Efficacy vs effectiveness trial results of an indicated “model” substance abuse program: implications for public health. Am J Public Health. 2006;96(12):2254–2259. [PubMed]
68. Johnson TP, O’Rourke DP, Burris JE, Warnecke RB. An investigation of the effects of social desirability on the validity of self-reports of cancer screening behaviors. Med Care. 2005;43(6):565–573. [PubMed]
69. Vernon SW, Briss PA, Tiro JA, Warnecke RB. Some methodologic lessons learned from cancer screening research. Cancer. 2004;101:1131–1145. [PubMed]
70. Warnecke RB, Sudman S, Johnson TP, O’Rourke DP, Davis AM, Jobe JB. Cognitive aspects of recalling and reporting health-related events: Papanicolaou smears, clinical breast examinations, and mammograms. Am J Epidemiol. 1997;146:982–992. [PubMed]
71. Paskett E, Tatum C, Rushing J, et al. Randomized trial of an intervention to improve mammography utilization among a triracial rural population of women. J Natl Cancer Inst. 2006;98(17):1226–1237. [PubMed]
72. Hoogewerf PE, Hislop TG, Morrison BJ, Burns SD, Sizto R. Health belief and compliance with screening for fecal occult blood. Soc Sci Med. 1990;30(6):721–726. [PubMed]
73. Hunter W, Farmer A, Mant D, Verne J, Northover J, Fitzpatrick R. The effect of self-administered fecal occult blood tests on compliance with screening for colorectal cancer: results of a survey of those invited. Fam Pract. 1991;8(4):367–372. [PubMed]
74. Kelly RB, Shank JC. Adherence to screening flexible sigmoidoscopy in asymptomatic patients. Med Care. 1992;30(11):1029–1042. [PubMed]
75. Lindholm E, Berglund B, Haglind E, Kewenter J. Factors associated with participation in screening for colorectal cancer with faecal occult blood testing. Scand J Gastroenterol. 1995;30(2):171–176. [PubMed]
76. Vernon SW, Acquavella JF, Yarborough CM, Hughes JI, Thar WE. Reasons for participation and nonparticipation in a colorectal cancer screening program for a cohort of high risk polypropylene workers. J Occup Med. 1990;32(1):46–51. [PubMed]
77. Skinner CS, Strecher VJ, Hospers H. Physicians’ recommendations for mammography: do tailored messages make a difference? Am J Public Health. 1994;84(1):43–49. [PubMed]
78. Champion VL, Ray DW, Heilman DK, Springston JK. A tailored intervention for mammography among low-income African-American women. J Psychosoc Oncol. 2000;18(4):1–13.
79. Janz NK, Schottenfeld D, Doerr KM, et al. A two-step intervention to increase mammography among women aged 65 and older. Am J Public Health. 1997;87(10):1683–1686. [PubMed]
80. Slater JS, Henly GA, Ha CN, et al. Effect of direct mail as a population-based strategy to increase mammography use among low-income under-insured women ages 40 to 64 years. Cancer Epidemiol Biomarkers Prev. 2005;14(10):2346–2352. [PubMed]