Am J Health Behav. Author manuscript; available in PMC 2011 November 1.
PMCID: PMC2963446
NIHMSID: NIHMS236618

Standard Definitions of Adherence for Infrequent yet Repeated Health Behaviors

Abstract

Objective

To present common language for defining adherence of infrequent yet repeated health behaviors.

Methods

We illustrate methodological and conceptual issues using human papillomavirus (HPV) vaccination and screening mammography study data as examples.

Results

Adherence rates of infrequent, repeated behaviors varied widely depending on how adherence was defined and measured. We advocate use of 3 standard definitions of adherence: initiation of behavior (initiation), adherence to most recent opportunity (on-schedule), and timely adherence across multiple opportunities (maintenance or completion).

Conclusions

The proposed framework has cross-cutting implications for research and practice. Standardizing adherence metrics may facilitate comparisons across studies of health behaviors practiced at infrequent yet repeated intervals.

Keywords: adherence, diffusion, maintenance, mammography, HPV vaccination

Introduction

Researchers have studied adherence extensively in the context of behaviors that many people practice frequently (eg, daily), such as flossing teeth, eating a low-fat diet, exercising, smoking cessation, and taking oral birth control medications.1–3 For these behaviors, studying adherence can yield voluminous data. Even when using standardized instruments, researchers often must categorize people as “adherent” and “nonadherent” to make measurement practicable, analyses tractable, and study findings more easily communicated.

Adherence in the context of infrequent health behaviors has received relatively less attention. By infrequent, we refer to behaviors that people typically perform monthly or less often, such as receiving A1C tests every 3 months to monitor blood glucose levels, having Pap tests every one to 3 years, or getting flu shots annually. These infrequent behaviors must be repeated to gain the full health benefits they offer. Although studying infrequent, repeated behaviors often yields fewer data points than does study of frequently performed behaviors, it has inherent complexities that deserve attention.

Researchers have lamented the absence of a gold standard for measuring adherence for most health behaviors. For example, medication adherence can be assessed using patients’ self-reports, electronic medication monitors, unannounced pill counts, pharmacy records, biological assays, and so on.4 Our focus here is on providing standard definitions of adherence (ie, using data to identify people who are adherent), rather than standardizing measurement instruments (ie, how to collect data). To define adherence for infrequent behaviors, we advocate adopting common language. This suggestion is consistent with recommendations from a conference on cancer screening, sponsored by the National Cancer Institute, Centers for Disease Control and Prevention, and American Cancer Society, that called for standardization of screening-adherence measures.5–7

We propose 3 broad categories: initiation, on-schedule, and maintenance. Initiation refers to a person’s having ever engaged in the behavior. An example is whether a person has ever received flu vaccine. On-schedule refers to adhering to guidelines at the most recent opportunity to engage in the behavior, such as whether a person had a flu shot during the last flu season. Cutoffs for defining on-schedule use should come from evidence-based recommendations regarding how often the behavior should be practiced, such as those from the US Preventive Services Task Force8 or the American Cancer Society.9 Maintenance (or completion) refers to adherence across multiple, previous opportunities, again, based on evidence-based recommendations. An example is whether a person had flu shots during the last 5 flu seasons. Assessing maintenance (or completion) should incorporate assessments of behavior at several time points as well as their timeliness. Some of these adherence distinctions are stated in stage theories of health behavior and diffusion of innovations theory.10–12
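
As a minimal sketch (the function name classify_adherence, the use of a single fixed recommended interval expressed in days, and the flu-shot example dates are all hypothetical), the 3 definitions could be applied to one person’s history of dated events as follows:

from datetime import date, timedelta

def classify_adherence(event_dates, assessment_date, recommended_interval_days):
    """Classify one person's history under the 3 proposed adherence definitions."""
    events = sorted(event_dates)
    window = timedelta(days=recommended_interval_days)

    # Initiation: has the person ever engaged in the behavior?
    initiation = len(events) > 0

    # On-schedule: does the most recent event fall within one recommended
    # interval of the assessment date?
    on_schedule = initiation and (assessment_date - events[-1]) <= window

    # Maintenance: are all successive events, and the assessment date itself,
    # separated by no more than the recommended interval?
    gaps = [later - earlier for earlier, later in zip(events, events[1:])]
    if events:
        gaps.append(assessment_date - events[-1])
    maintenance = initiation and all(gap <= window for gap in gaps)

    return {"initiation": initiation, "on_schedule": on_schedule, "maintenance": maintenance}

# Example: annual flu shots assessed on January 1, 2010, with a 14-month window.
shots = [date(2007, 10, 1), date(2008, 10, 15), date(2009, 10, 20)]
print(classify_adherence(shots, date(2010, 1, 1), recommended_interval_days=14 * 30))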

Our adherence categories reflect the shift that occurs as health behaviors are introduced and subsequently diffused over time (Figure 1). By diffusion, we refer to the processes by which innovations become adopted in populations over time.12 For new health behaviors in early stages of diffusion and, thus, not yet adopted by most people, studying initiation may be the most appropriate focus for research. For example, studies of the relatively new nasal flu spray vaccine have assessed initiation. As a behavior becomes more established, researchers should begin studying on-schedule adherence to guidelines. Maintenance can be studied only when a behavior is well established in a population. Most likely, there always will be population pockets that have not yet adopted the behavior.

Figure 1
Recommended Definition of Adherence Depends on How Widely People Have Adopted the Health Behavior

To illustrate implications of our proposed definitions of adherence in greater depth, we use examples of human papillomavirus (HPV) vaccination and mammography screening, using data from our research. These 2 behaviors represent different regions of the diffusion curve; HPV vaccination is a new health behavior whereas mammography is an established health behavior. HPV vaccination may prevent infection with HPV whereas mammograms are a screening test to find breast cancers early. To achieve potential benefits of early detection, women need regular mammograms. Thus, maintenance of the behavior is the desired end state. In contrast, HPV vaccination is complete when people have obtained the recommended doses. Although we focus on these 2 examples, the issues are relevant to other infrequent, repeated behaviors.

Example 1: HPV Vaccination

HPV vaccination may reduce the incidence of cervical cancer, perhaps by as much as 77% with widespread vaccine coverage,13 while also reducing genital warts and perhaps other HPV-related cancers.14 The US Advisory Committee on Immunization Practices (ACIP) recommends that females aged 11 to 12 years routinely receive the vaccine, with catch-up vaccinations for females aged 13 to 26.15 The quadrivalent HPV vaccine requires 3 doses over 6 months, administered at months 0, 2, and 6, and may confer lifelong protection.15 This report focuses on quadrivalent HPV vaccine for females, but the definitions we propose can be extended to other ACIP recommendations (eg, bivalent HPV vaccine for females and quadrivalent HPV vaccine for males).16

Once people have completed the 3-dose series, over a relatively brief period, it is likely they will not have to be vaccinated again for a long time, if ever. In this regard, the HPV vaccine regimen is similar to other vaccine regimens (eg, hepatitis B vaccine). It also is similar to regimens for some prophylactic medications (eg, prophylactic malaria medications taken weekly, beginning a week before arriving in a malarial area, continuing while there, and for 4 weeks after leaving).

The main challenges for defining HPV vaccine adherence are accounting for the number and timing of vaccine doses received. We present several approaches to defining HPV vaccine adherence that shift in relevance as diffusion of vaccination increases: initiation, on-schedule, and completion. We illustrate the issues using HPV vaccination data from the Carolina HPV Immunization Measurement and Evaluation (CHIME) Project. One component of CHIME was a population-based longitudinal study to investigate HPV vaccine decision making by parents of adolescent girls in an area where women are at relatively high risk of cervical cancer.

Interviewers completed baseline telephone surveys with 889 (73%) of 1220 eligible parents between July and October 2007.17 Of 873 baseline respondents eligible for follow-up, 650 (74%) completed follow-up telephone interviews between October and November 2008. Sampling and data collection methods are reported in detail elsewhere.17 Because some parents may have been unfamiliar with HPV vaccine, we provided all parents with informative statements about HPV and HPV vaccine during baseline interviews. Percentages reported below are weighted to account for the study’s complex sampling design. The Institutional Review Board for the University of North Carolina approved this research.

HPV vaccine initiation

Studying initiation is a logical place to begin research for a new behavior like HPV vaccination. HPV vaccine initiation compares people who have received one or more doses to those who have received none at a certain time point (but met eligibility criteria for vaccination). Because HPV vaccination only recently became available, most published studies reporting HPV vaccine uptake have focused on initiation.18–29

Assessing HPV vaccine initiation has numerous advantages. First, collecting data on initiation minimizes both interviewer and respondent burden because initiation can be assessed with a single item. A recent statewide survey of parents in North Carolina used such an item: “Has [child] had any shots of the HPV vaccine?”30 Second, assessing vaccine initiation may be less prone to errors than other adherence definitions because it does not rely on the exact number of doses received or their timing. Parental reports of their daughters’ HPV vaccination histories may be especially susceptible to errors, as parents may not recall the number of doses their child has received,31 especially if they have more than one child in the approved age range for vaccination. Last, HPV vaccine initiation allows for straightforward analyses and parameter interpretation because vaccination status is treated as a dichotomous variable.

The main limitation of assessing vaccine adherence as initiation is the loss of information that occurs when all vaccinated individuals are grouped together. A female who has received 2 doses of HPV vaccine, with her second dose a month prior to data collection (ie, she is on-schedule), might be grouped with a female who has received 2 doses, with her second dose a year before data collection (off-schedule). Thus, detail about timeliness is lost, although a simple population-level characterization is gained.

In the CHIME baseline survey conducted a little over a year after HPV vaccine became available in the United States, 12% (83/650) of parents said their daughters had initiated the HPV vaccine series. Of parents whose daughters had not been vaccinated at baseline, an additional 27% (149/567) reported vaccination during follow-up interviews the next year. Thus, 36% (232/650) of parents said their daughters had initiated the 3-dose HPV vaccine series by the end of the study. If we examined vaccine initiation using these data, the 232 parents who reported their daughters had received one or more doses would be compared to the remaining 418 parents whose daughters were classified as not vaccinated.

On-schedule for HPV vaccine regimen completion

Although HPV vaccine initiation is a good place to begin research on this new behavior, it does not address vaccination timeliness. For unvaccinated people, timely adherence means getting a first dose. For those with one or 2 doses, it means getting the next dose within some period of months. On-schedule adherence includes 2 categories: (a) on-schedule, meaning a person has initiated the regimen and is within the recommended time guidelines to complete it or has completed the regimen (all 3 doses received); and (b) off-schedule, meaning a person has not initiated the vaccine regimen (no doses received) or has initiated it but is outside the recommended time guidelines to complete it. Guidelines for timeliness should be based on the ACIP guidelines in the United States.15

Although the ACIP recommendations address the timing of doses, they do not specify how long past these milestones people can wait before they should be considered off-schedule. In CHIME, Brewer and colleagues classified adolescent girls who had received one or 2 doses as within guidelines if they were not more than 2 months past the time recommended for receiving their next dose at follow-up interviews (ie, within 4 months after the first shot or 6 months after the second shot at the time of follow-up interviews) (Figure 2). Allowing a window beyond a due date is similar to the way other repeat health behaviors are treated for classification purposes.32–34

Figure 2
Proposed Guidelines for Determining On-Schedule Adherence to Quadrivalent HPV Vaccine Regimen

One advantage of categorizing use as on-schedule is that it accounts for adherence to guidelines about dose timing. Assessing vaccine initiation evaluates only whether people had the first dose. On-schedule adherence may provide a more accurate depiction of participants’ vaccination status, especially if many people who initiate do not complete the vaccination series. Similar to assessing vaccine initiation, this approach results in straightforward analyses and parameter interpretation because it treats vaccination as a dichotomous outcome.

A disadvantage to this approach is the additional burden on survey participants to accurately recall the number and timing of doses received. Many participants will be unable to recall this information, leading to concerns about how to classify them. Failure to accurately recall this information may be especially problematic for assessing HPV vaccine adherence, as some individuals may be unaware of the recommended number and timing of doses. Moreover, failure to recall does not necessarily mean the person was not vaccinated. This is especially challenging when relying on proxy respondents. In CHIME, the inability of participants to recall when their daughters received HPV vaccine doses was a common problem; parents were unable to provide even the year of administration for 42% (232/553) of reported doses. Individuals with unknown vaccination dates for their most recent doses were classified as off-schedule unless (a) they reported receiving all 3 doses (because regimen completion is considered on-schedule regardless of vaccination schedule) or (b) they reported receiving 2 doses and gave a vaccination date for their first dose that was within 6 months of the follow-up interview. Electronic health records, physicians’ reports, and insurance data may provide viable alternatives (or validation sources) to self- or parent-reported vaccination histories. Such objective sources may reduce missing or inaccurate data. Some of these sources were used in past studies reporting HPV vaccination levels, though levels of missing data were not provided.19,22,35 Objective sources also have limitations, eg, delays in receiving reports from physicians’ offices, which may render electronic data incomplete.32
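
As an illustrative sketch only (the function name, the 30-days-per-month approximation, and the example dates are assumptions rather than the CHIME analysis code), the on-schedule rules described above and in Figure 2 could be expressed as:

from datetime import date, timedelta

MONTH = timedelta(days=30)  # rough approximation of one month

def hpv_on_schedule(n_doses, last_dose_date, first_dose_date, interview_date):
    """Classify one adolescent's quadrivalent HPV vaccine status as on- or off-schedule."""
    if n_doses >= 3:
        return "on-schedule"   # regimen completion counts regardless of dose timing
    if n_doses == 0:
        return "off-schedule"  # vaccine series not initiated

    # Recommended time to the next dose plus a 2-month grace window:
    # within 4 months after dose 1, or within 6 months after dose 2.
    window = 4 * MONTH if n_doses == 1 else 6 * MONTH

    if last_dose_date is not None:
        return "on-schedule" if interview_date - last_dose_date <= window else "off-schedule"

    # Unknown date for the most recent dose: off-schedule unless 2 doses were
    # reported and dose 1 was within 6 months of the interview (which implies
    # that dose 2, given later, was as well).
    if (n_doses == 2 and first_dose_date is not None
            and interview_date - first_dose_date <= 6 * MONTH):
        return "on-schedule"
    return "off-schedule"

# Example: 2 doses, with the second dose 3 months before the follow-up interview.
print(hpv_on_schedule(2, date(2008, 7, 1), date(2008, 4, 1), date(2008, 10, 1)))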

Using the proposed classification scheme, 31% (197/650) of females would be classified as on-schedule [parents reported their daughters had completed the HPV vaccine regimen (21%, 137/650) or had received one or 2 doses but were on-schedule to complete the regimen (10%, 60/650)], and 69% (453/650) would be considered off-schedule [had received one or 2 doses but were off-schedule to complete the regimen (5%, 35/650) or had not initiated the HPV vaccine regimen (64%, 418/650)].

HPV vaccine regimen completion

HPV vaccine may be most effective when people receive all 3 doses in a timely manner.15 Thus, completion of all 3 doses most clearly addresses public health goals. Analyses compare people who have completed all doses to others who have not initiated vaccination or who have initiated but not completed it. People who achieve only initiation or on-schedule adherence do not fully meet public health guidelines. However, at this time, it is premature to expect HPV vaccine completion as the outcome in most studies. Series completion is relatively rare (only 21% in the CHIME Project), a reflection of difficulties in the availability of HPV vaccine and insurance coverage for it, as well as the relatively large number of people initiating the vaccine who have not yet received all 3 doses.25,35 As barriers to HPV vaccination decrease over time, it will become increasingly relevant to examine regimen completion as the appropriate outcome for behavioral interventions.

Example 2: Mammography Screening

Secondary prevention via mammography is the most effective way to reduce breast cancer morbidity and mortality.36 Regular use of mammography can lead to early diagnosis of breast cancer, when tumors are smaller, the cancer is potentially curable, and patients may have more treatment options.36 Mammography use provides a contrast to HPV vaccination. First, it represents screening rather than primary preventive action. Second, it is at a very different stage of diffusion compared to HPV vaccination.

Mammography screening requires women to repeat the same behavior at regular but infrequent intervals, potentially over decades. In this way, mammography screening is similar to other types of cancer screening (eg, colon, cervical) and wellness visits (eg, eye exams, dental cleanings). That is, they all are based on the premise that people will practice these behaviors on regular schedules that may range from months to years apart. The issues discussed here regarding mammography screening may apply to these other health behaviors.

Defining mammography adherence poses considerable challenges in incorporating the number and timing of mammograms.37 Defining adherence is further complicated by differing guidelines across organizations and the manner in which those guidelines change over time. Although some national organizations recommend that women receive screening every one to 2 years (eg, National Cancer Institute), other organizations recommend every year (eg, American Cancer Society, American College of Radiology). Despite a recent push to standardize operational definitions of mammography use, none has been widely adopted at this time.33,34 It is not yet clear what effect the recent US Preventive Services Task Force statement about biennial screening mammography will have on the guidelines of different organizations or on the practices of women and their physicians.37

We present several approaches to defining adherence that reflect diffusion of mammography over time: initiation, on-schedule use, and maintenance. To illustrate these definitions, we use mammography screening data from PRISM (Personally Relevant Information on Screening Mammography). PRISM is a 4-year National Cancer Institute-funded intervention study of mammography adherence conducted as part of the Health Maintenance Consortium. The eligible sample frame included North Carolina women aged 40 to 75 who were enrolled in the North Carolina State Health Plan for Teachers and State Employees, had mammograms 8 to 9 months before baseline surveys, and were due for subsequent mammograms 2 to 3 months after recruitment. PRISM enrolled 3547 women in 2003 and followed them annually until 2008. Details of the study can be found elsewhere.32 Institutional review boards for the University of North Carolina and Duke University approved this research.

For these analyses, we use the 2003 American Cancer Society guidelines in effect at the time of the study, which recommended annual screening for women aged 40 and older.9 In defining adherence, we allow a 14-month window, as is typical in this research, to account for potential scheduling difficulties and other reasonable delays.32–34 We use 12-, 24-, and 36-month telephone survey data and claims data across 3 annual screening cycles.

Mammography initiation

As women in the United States began adopting screening mammography, a research priority was to identify the proportion of women who had ever received screening (initiation) and to compare women who had been screened to those who had not. Although initiation of mammography use is still a concern among certain subgroups (eg, recent immigrants, women in certain economic strata, countries with underdeveloped medical infrastructure), mammography has been disseminated widely in the United States.38 In the late 1970s, less than 10% of women reported ever having had a mammogram.39 By the early 1990s, almost 90% of age-appropriate women reported having had at least one mammogram.39,40 The successful diffusion of mammography requires a shift in focus from initiation to timely use, especially for populations with insurance. On-schedule use and maintenance provide better markers of potential population benefit than does initiation. Indeed, PRISM recruited only women who had their last screening mammograms between September 2003 and September 2004, to ensure that all were adherent at study entry. This was consistent with the NIH Health Maintenance Consortium’s interest in behavioral maintenance (http://hmcrc.srph.tamhsc.edu/default.aspx).

On-schedule mammography use

A natural next step from assessing initiation is to examine whether women are on-schedule for mammography screening. On-schedule mammography use compares women who have had recent mammograms to those who have not. Many national surveys (eg, the Behavioral Risk Factor Surveillance System [BRFSS], the Health Information National Trends Survey) and intervention trials define mammography adherence as on-schedule use, assessed as having had mammograms within a defined time.41–43 One item can efficiently measure on-schedule use, reducing participant burden. For example, BRFSS uses “How long has it been since your last mammogram?”44

Assessing on-schedule use is valuable in evaluating the immediate impact of interventions. For national studies, on-schedule use is an indicator, in cross-section, of progress toward meeting national guidelines. In clinical settings, measuring on-schedule use alerts health care providers whether patients are up-to-date on screening tests. This is a critical quality indicator for Healthcare Effectiveness Data and Information Set (HEDIS) and other quality assessments. Finally, on-schedule use is computationally simple and easy to interpret via parameter estimates of a dichotomous outcome (on-schedule vs off-schedule).

However, defining mammography as on-schedule use has several limitations. First, on-schedule use does not necessarily take into account past screening behavior. Thus, women classified as on-schedule may include women with different patterns of prior mammography use.7 Similarly, the off-schedule group may include never screeners and lapsed screeners. Each type of mammography use is conceptually different, may have unique behavioral correlates, and may require different behavioral interventions. Combining these groups could mask important differences. Appropriate interventions to encourage women who have lapsed from regular mammography use may be different from those needed to motivate women to initiate use. Last, self-reports of use over long periods of time can cause forward telescoping of screening dates, resulting in overestimations of on-schedule use.45 Claims data may be a viable alternative to self-reported use for women with insurance coverage. However, these data often are incomplete due to lag time in claims processing, women’s paying out of pocket, or filing claims with other insurance plans not captured in the study (eg, spouse/partner insurance).

In PRISM, 90% (3049/3406), 91% (3096/3406), and 89% (3040/3406) of participants confirmed they had mammograms in the 14 months prior to the 12-, 24- and 36-month surveys, respectively, and would be classified as on-schedule. (These proportions exclude 141 women with missing outcome data.) At each survey, women who had recent mammograms (on-schedule) would be compared to women who had mammograms more than 14 months ago (off-schedule). These proportions are likely to overestimate how many women are getting mammograms at regular intervals, as on-schedule use does not take into account the interval between recent and prior screenings.

Mammography maintenance

Like most screening tests, mammography is most effective when at-risk populations are screened at regular intervals. Accordingly, it is important to know what proportion of age-eligible women have received mammograms at regular intervals over time (maintenance). This is the most compelling public health outcome. Below we illustrate 3 ways to define mammography maintenance and some of the associated strengths and weaknesses of each approach.

Maintenance as number of mammograms

One approach to operationalizing maintenance counts how often women receive screening mammograms at recommended intervals over time. For example, maintenance may be defined as having completed 3 screening mammograms at 14-month intervals. Women then are categorized as having achieved maintenance (3 mammograms) or not (0, 1, or 2 mammograms). Many studies have used count measures of mammography screening to assess maintenance33,34; however, few studies have included more than 2 on-schedule mammograms in the definition of maintenance. The counts approach reflects the interval between screenings. For this reason, it is preferable to assessments of on-schedule use for measuring widely diffused behaviors.

Mammography-adherence definitions that do not incorporate the interval between screenings address recurring instances of on-schedule use, not maintenance. For example, women could be on-schedule (defined as having had a mammogram in the previous 14 months) at interviews a year apart. However, some of these women may not be in maintenance, because mammograms that were classified as on-schedule with respect to interview dates were more than 14 months apart from one another.
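
As a minimal sketch (the function names, the 420-day approximation of the 14-month window, and the example dates are hypothetical), counting consecutive on-interval mammograms and classifying maintenance could look like this:

from datetime import date, timedelta

WINDOW = timedelta(days=14 * 30)  # 14-month adherence window, approximated in days

def count_consecutive_on_interval(mammogram_dates, index_date):
    """Count consecutive screening mammograms each received within 14 months
    of the previous one, starting from the index (study-entry) mammogram."""
    count = 0
    previous = index_date
    for current in sorted(d for d in mammogram_dates if d > index_date):
        if current - previous <= WINDOW:
            count += 1
            previous = current
        else:
            break  # interval exceeded; later mammograms no longer count toward maintenance
    return count

def achieved_maintenance(mammogram_dates, index_date, required=3):
    return count_consecutive_on_interval(mammogram_dates, index_date) >= required

# Example: an index mammogram followed by 3 mammograms roughly a year apart.
follow_ups = [date(2005, 1, 10), date(2006, 1, 20), date(2007, 2, 1)]
print(achieved_maintenance(follow_ups, index_date=date(2004, 1, 5)))  # True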

In PRISM, 52% (1782/3406) of participants received 3 consecutive mammograms at 14-month intervals and, thus, would be classified as having achieved maintenance. This estimate is similar to other reported estimates of screening mammography use over multiple screening opportunities.33,34 On average, PRISM participants received 2.2 screening mammograms over the study. Ideally, they would have received 3 mammograms.

Defining maintenance as having achieved some number of screening mammograms has limitations. First, assessment of maintenance requires detailed information about timing of mammograms so researchers can calculate if women received mammograms at recommended intervals. Women can get off-schedule for many reasons, and not all reflect nonadherence. For example, some abnormal test results may cause women to obtain their next mammograms sooner than would have occurred following a normal test result. Second, many women will have missing data, which requires decisions about how to deal with these data. Count measures may not account for variability in predictors of use over time. Lastly, defining adherence as a binary outcome can lose important information. For example, PRISM participants who received 3 screening mammograms (n = 1782) would be compared to a group comprising women who received no (n = 366), one (n = 475) or 2 mammograms (n = 783), which implicitly assumes the latter 3 groups are homogeneous.

Maintenance as time off-schedule

Time off-schedule is another count method for defining mammography maintenance. Off-schedule time can be calculated by counting the number of days of nonadherence. Each day beyond the defined screening interval contributes an additional day to the nonadherence tally until a participant receives a subsequent mammogram. Compared with counting the number of completed mammograms, this approach is more sensitive to small variations in adherence.
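
The tally could be computed as in the following sketch (the function name, the censoring of the final interval at an end-of-observation date, and the 420-day approximation of 14 months are assumptions):

from datetime import date, timedelta

WINDOW = timedelta(days=14 * 30)  # 14-month adherence window, approximated in days

def days_off_schedule(mammogram_dates, index_date, end_of_observation):
    """Tally days of nonadherence: every day beyond the 14-month window adds one
    day until the next mammogram (or, as assumed here, the end of observation)."""
    events = sorted(d for d in mammogram_dates if d > index_date)
    events.append(end_of_observation)
    total_days = 0
    previous = index_date
    for current in events:
        total_days += max(0, (current - previous - WINDOW).days)
        previous = current
    return total_days

# Example: the first follow-up mammogram arrives about 4 months past the window,
# and the next one is back on schedule.
print(days_off_schedule([date(2005, 7, 1), date(2006, 6, 1)],
                        index_date=date(2004, 1, 1),
                        end_of_observation=date(2007, 1, 1)))  # 127 days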

Time off-schedule suffers from some of the same limitations as defining adherence as the number of screening mammograms, such as attrition and more complex data collection. Data may be highly skewed, with many cases having the same number of days nonadherent. Time off-schedule may classify some women with different use patterns as having the same level of nonadherence. A woman with 365 days off-schedule could represent someone who skipped a year of screening, delayed each of her mammograms by 4 months, or some other permutation. Thus, the summary measure may mask differences that are clinically meaningful.

Using PRISM data, we calculated the number of days women were nonadherent (based on a 14-month window between screenings). On average, women experienced 80 days nonadherent over the study (standard deviation [SD] = 148, range 0 – 672). However, 58% of women did not have any days nonadherent, which resulted in right-skewed data.

Modeling maintenance as event onset

Another approach is to define maintenance over time as event onset, such as not receiving the next consecutive screening mammogram when it should have been received. Survival analysis includes statistical methods to assess the occurrence and timing of events.46,47 In survival analysis, an event is a change identified as happening at a specific time. Survival analysis is commonly used to model onset of disease or recovery after treatment. As with other ways of defining mammography use, different approaches to operationalization can have considerable effects on the inferred rate of mammography use,48 and computations require detailed information on the timing of events.49 Survival analysis offers some advantages over other methods of modeling mammography maintenance. Attrition is handled by censoring at the point of withdrawal from the study. In addition, survival models produce estimates of the effect of time on the outcome, which may be important for behaviors repeated over decades.
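
As a sketch of the general approach rather than the PRISM analysis itself (the toy data, variable names, and use of the statsmodels package are assumptions), a discrete-time hazard model can be fit by expanding each woman into one record per screening interval and regressing onset of nonadherence on the interval number:

import statsmodels.api as sm

# Toy person-level records: (woman_id, interval at which nonadherence began,
# or None if she stayed adherent or was censored, number of intervals observed).
women = [(1, 2, 3), (2, None, 3), (3, 1, 1), (4, None, 2)]

# Expand into one row per woman per screening interval, up to onset or censoring.
rows, outcomes = [], []
for _, onset, observed in women:
    last_interval = onset if onset is not None else observed
    for interval in range(1, last_interval + 1):
        rows.append([interval])                         # predictor: interval number (time)
        outcomes.append(1 if interval == onset else 0)  # 1 = onset of nonadherence

X = sm.add_constant(rows)             # intercept plus interval number
model = sm.Logit(outcomes, X).fit(disp=0)
print(model.params)                   # change in log-odds of onset per additional interval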

Using a form of survival analysis called discrete event history analysis,49 we can model the onset of nonadherence to sustained mammography screening over 3 consecutive screening opportunities or intervals. Estimates include information from the 141 women with missing outcome data up to the point at which they withdrew from the study or their data became missing (ie, were censored). In PRISM, 26% (n = 921) were nonadherent in the first interval; in interval 2, an additional 15% were not adherent (n = 538); and in the last interval, another 9% became nonadherent (n = 306). Thus, only 50% (n = 1782) had sustained use over 3 years (ie, completed 3 consecutive screening mammograms). When we modeled the effect of time on mammography maintenance, we observed a strong effect for number of years (odds ratio [OR] = 0.71, 95% confidence interval [CI]: 0.66–0.76). That is, women who remained adherent into later intervals were less likely to become nonadherent.

Discussion

In the early stages of diffusion, people may be unaware of a new behavior, or barriers to access may prevent action. Therefore, the emphasis of research should be on motivating people to try evidence-based recommended health innovations. Over time, as awareness increases, barriers fall, and more people adopt the health behavior, focus should shift to assessing timely, repeated use. We suggest 3 standard adherence definitions for infrequent yet repeated behaviors: initiation, on-schedule, and maintenance or completion. As new health behaviors are found to have public health benefit, it is appropriate to start with measures of initiation and later focus on assessing maintenance. Similarly, when well-accepted technologies are being introduced to other populations, there should be a natural progression of adherence measures. Although we have focused on mammography screening and HPV vaccination, our 3 adherence definitions may be useful for many other behaviors.

Each of our proposed adherence categories has strengths and weaknesses. Although assessing behavioral initiation is useful in early stages of behavioral diffusion, it lacks detail on the timeliness of the behavior. On-schedule use takes into account some of the timeliness needed to accurately evaluate adherence. However, it oversimplifies past use by accounting for only the most recent opportunity to engage in the behavior and not the intervals between behaviors. Maintenance addresses these concerns, but it requires detailed data on patterns of use. This can create substantial burden on participants, analysts, and budgets due to such considerations as repeated assessments or multiple data sources. Nevertheless, as more nonobtrusive measures, such as electronic medical records, become available, data collection will be less of a problem. Each of the 3 adherence definitions risks errors of misclassification. Measures that incorporate timeliness, however, may be less susceptible to some forms of misclassification than adherence defined by an arbitrary cutoff. Lastly, not all health behaviors fall neatly into the categories of frequent or infrequent. Condom use is frequent for some people and rare for others, depending on how often sex occurs.

The diffusion perspective can guide decisions about appropriate interventions. For example, mass media interventions may be efficient and effective when an innovation is new, knowledge of the behavior is low, and few people have adopted the innovation, because mass media can reach large portions of the public with health information.50 As innovations like mammography become standard practice and more widely accepted, people who have not yet adopted screening may need more intensive personal approaches to become motivated or overcome barriers. More research is needed on how barriers differ across initiation, on-schedule use, and maintenance or completion so that we can personalize intervention approaches.

Our proposed adherence definitions have cross-cutting implications for research and practice. Adopting uniform definitions may be useful in fostering uptake of common metrics of adherence. Common adherence metrics may facilitate comparison across studies of health behaviors that occur at infrequent yet repeated intervals.34 Research reports, however, should provide detailed information about how adherence is defined, measured, and modeled, because these choices can considerably affect estimates.47 Common adherence definitions also can guide the field, potentially prompting research on on-schedule use, maintenance, and completion as diffusion progresses and these adherence definitions become more relevant.

Acknowledgments

PRISM was funded by the National Cancer Institute (5R01-CA105786). The CHIME study was funded by Centers for Disease Control and Prevention (S3715-25/25). At the time of this research, Dr Gierisch was funded through an AHRQ NRSA postdoctoral traineeship at Duke University Medical Center (T-32-HS000079). Dr Reiter was funded through the Cancer Control Education Program at UNC Lineberger Comprehensive Cancer Center (R25 CA57726), and Dr Brewer received additional support through an American Cancer Society career development award (MSRG-06-259-01-CPPB).

References

1. Velicer WF, Prochaska JO, Rossi JS, Snow MG. Assessing outcome in smoking cessation studies. Psychol Bull. 1992;111:23–41. [PubMed]
2. DiMatteo MR. Variations in patients' adherence to medical recommendations: a quantitative review of 50 years of research. Med Care. 2004;42:200–209. [PubMed]
3. Jeffery RW, Drewnowski A, Epstein LH, et al. Long-term maintenance of weight loss: current status. Health Psychol. 2000;19:5–16. [PubMed]
4. Osterberg L, Blaschke T. Adherence to medication. N Engl J Med. 2005;353:487–497. [PubMed]
5. Meissner HI, Smith RA, Rimer BK, et al. Promoting cancer screening: Learning from experience. Cancer. 2004;101:1107–1117. [PubMed]
6. Vernon SW, Briss PA, Tiro JA, Warnecke RB. Some methodologic lessons learned from cancer screening research. Cancer. 2004;101:1131–1145. [PubMed]
7. Rakowski W, Breslau ES. Perspectives on behavioral and social science research on cancer screening. Cancer. 2004;101:1118–1130. [PubMed]
8. Screening for colorectal cancer: U.S. Preventive Services Task Force recommendation statement. Ann Intern Med. 2008;149:627–637. [PubMed]
9. Smith RA, Saslow D, Sawyer KA, et al. American Cancer Society guidelines for breast cancer screening: update 2003. CA Cancer J Clin. 2003;53:141–169. [PubMed]
10. Prochaska JO, DiClemente CC. Stages and processes of self-change of smoking: toward an integrative model of change. J Consult Clin Psychol. 1983;51:390–395. [PubMed]
11. Weinstein ND. The precaution adoption process. Health Psychol. 1988;7:355–386. [PubMed]
12. Rogers EM. Diffusion of Innovations. New York: The Free Press; 1995.
13. Smith JS, Lindsay L, Hoots B, et al. Human papillomavirus type distribution in invasive cervical cancer and high-grade cervical lesions: a meta-analysis update. Int J Cancer. 2007;121:621–632. [PubMed]
14. Gillison ML, Chaturvedi AK, Lowy DR. HPV prophylactic vaccines and the potential prevention of noncervical cancers in both men and women. Cancer. 2008;113:3036–3046. [PubMed]
15. Markowitz LE, Dunne EF, Saraiya M, et al. Quadrivalent human papillomavirus vaccine: recommendations of the Advisory Committee on Immunization Practices (ACIP). MMWR Recomm Rep. 2007;56:1–24. [PubMed]
16. Centers for Disease Control and Prevention. ACIP provisional recommendations for HPV vaccine-December 2009 (on-line) [Accessed on January 22, 2010]. Available at: http://www.cdc.gov/vaccines/recs/provisional/downloads/hpv-vac-dec2009-508.pdf.
17. Hughes J, Cates JR, Liddon N, et al. Disparities in how parents are learning about the human papillomavirus vaccine. Cancer Epidemiol Biomarkers Prev. 2009;18:363–372. [PubMed]
18. Reiter PL, Brewer NT, Gottlieb SL, et al. Parents' health beliefs and HPV vaccination of their adolescent daughters. Soc Sci Med. 2009;69:475–480. [PubMed]
19. Vaccination coverage among adolescents aged 13–17 years - United States, 2007. MMWR Morb Mortal Wkly Rep. 2008;57:1100–1103. [PubMed]
20. Grant D, Kravitz-Wirtz N, Breen N, et al. One in four California adolescent girls have had human papillomavirus vaccination. Policy Brief UCLA Cent Health Policy Res. 2009:1–6. [PubMed]
21. Kahn JA, Rosenthal SL, Jin Y, et al. Rates of human papillomavirus vaccination, attitudes about vaccination, and human papillomavirus prevalence in young women. Obstet Gynecol. 2008;111:1103–1110. [PubMed]
22. Chao C, Slezak JM, Coleman KJ, Jacobsen SJ. Papanicolaou screening behavior in mothers and human papillomavirus vaccine uptake in adolescent girls. Am J Public Health. 2009;99:1137–1142. [PubMed]
23. Brabin L, Roberts SA, Stretch R, et al. Uptake of first two doses of human papillomavirus vaccine by adolescent schoolgirls in Manchester: prospective cohort study. BMJ. 2008;336:1056–1058. [PMC free article] [PubMed]
24. Jain N, Euler GL, Shefer A, et al. Human papillomavirus (HPV) awareness and vaccination initiation among women in the United States, National Immunization Survey-Adult 2007. Prev Med. 2009;48:426–431. [PubMed]
25. National, state, and local area vaccination coverage among adolescents aged 13–17 years--United States, 2008. MMWR Morb Mortal Wkly Rep. 2009;58:997–1001. [PubMed]
26. Gottlieb SL, Brewer NT, Sternberg MR, et al. Human papillomavirus vaccine initiation in an area with elevated rates of cervical cancer. J Adolesc Health. 2009;45:430–437. [PubMed]
27. Conroy K, Rosenthal SL, Zimet GD, et al. Human papillomavirus vaccine uptake, predictors of vaccination, and self-reported barriers to vaccination. J Womens Health (Larchmt) 2009;18:1679–1686. [PubMed]
28. Gerend MA, Weibley E, Bland H. Parental response to human papillomavirus vaccine availability: uptake and intentions. J Adolesc Health. 2009;45:528–531. [PubMed]
29. Caskey R, Lindau ST, Alexander GC. Knowledge and early adoption of the HPV vaccine among girls and young women: results of a national survey. J Adolesc Health. 2009;45:453–462. [PubMed]
30. State Center for Health Statistics. North Carolina Child Health Assessment and Monitoring Program (CHAMP) Survey. Raleigh, NC: 2009.
31. Suarez L, Simpson DM, Smith DR. Errors and correlates in parental recall of child immunizations: effects on vaccination coverage estimates. Pediatrics. 1997;99:E3. [PubMed]
32. DeFrank JT, Rimer BK, Gierisch JM, et al. Impact of mailed and automated telephone reminders on receipt of repeat mammograms: a randomized controlled trial. Am J Prev Med. 2009;36:459–467. [PMC free article] [PubMed]
33. Boudreau DM, Luce CL, Ludman E, et al. Concordance of population-based estimates of mammography screening. Prev Med. 2007;45:262–266. [PMC free article] [PubMed]
34. Clark MA, Rakowski W, Bonacore LB. Repeat mammography: prevalence estimates and considerations for assessment. Ann Behav Med. 2003;26:201–211. [PubMed]
35. Chao C, Velicer C, Slezak JM, Jacobsen SJ. Correlates for completion of 3-dose regimen of HPV vaccine in female members of a managed care organization. Mayo Clin Proc. 2009;84:864–870. [PMC free article] [PubMed]
36. Humphrey LL, Helfand M, Chan BK, Woolf SH. Breast cancer screening: a summary of the evidence for the U.S. Preventive Services Task Force. Ann Intern Med. 2002;137:347–360. [PubMed]
37. Screening for Breast Cancer: U.S. Preventive Services Task Force Recommendation Statement. Ann Intern Med. 2009;151:716–726. [PubMed]
38. Breen N, Wagener DK, Brown ML, et al. Progress in cancer screening over a decade: results of cancer screening from the 1987, 1992, and 1998 National Health Interview Surveys. J Natl Cancer Inst. 2001;93:1704–1713. [PubMed]
39. Cronin KA, Yu B, Krapcho M, et al. Modeling the dissemination of mammography in the United States. Cancer Causes Control. 2005;16:701–712. [PubMed]
40. Ahluwalia IB, Mack KA, Murphy W, et al. State-specific prevalence of selected chronic disease-related characteristics--Behavioral Risk Factor Surveillance System, 2001. MMWR Surveill Summ. 2003;52:1–80. [PubMed]
41. Champion V, Maraj M, Hui S, et al. Comparison of tailored interventions to increase mammography screening in nonadherent older women. Prev Med. 2003;36:150–158. [PubMed]
42. Mayer JA, Lewis EC, Slymen DJ, et al. Patient reminder letters to promote annual mammograms: a randomized controlled trial. Prev Med. 2000;31:315–322. [PubMed]
43. Rauscher GH, Hawley ST, Earp JA. Baseline predictors of initiation vs. maintenance of regular mammography use among rural women. Prev Med. 2005;40:822–830. [PubMed]
44. Centers for Disease Control and Prevention. Behavioral Risk Factor Surveillance System Survey Questionnaire. Atlanta, GA: U.S. Department of Health and Human Services, Centers for Disease Control and Prevention; 2007.
45. Rauscher GH, Johnson TP, Cho YI, Walk JA. Accuracy of self-reported cancer-screening histories: a meta-analysis. Cancer Epidemiol Biomarkers Prev. 2008;17:748–757. [PubMed]
46. Allison PD. Survival Analysis Using SAS: A Practical Guide. Cary, NC: SAS Institute Inc; 1995.
47. Partin MR, Slater JS, Caplan L. Randomized controlled trial of a repeat mammography intervention: effect of adherence definitions on results. Prev Med. 2005;41:734–740. [PubMed]
48. Partin MR, Casey-Paal AL, Slater JS, Korn JE. Measuring mammography compliance: lessons learned from a survival analysis of screening behavior. Cancer Epidemiol Biomarkers Prev. 1998;7:681–687. [PubMed]
49. Singer JD, Willett JB. It's about time: using discrete-time survival analysis to study duration and the timing of events. J Educ Stat. 1993;18:155–195.
50. Rimer BK, Gierisch JM. Public education and cancer control. Semin Oncol Nurs. 2005;21:286–295. [PubMed]