Study aims were 1) to document and examine associations between parent-report and electronic monitoring of pediatric antiepileptic drug adherence, 2) to determine the sensitivity and specificity of parent-reported adherence, and 3) to develop a correction factor for parent-reported adherence.
Participants included 111 consecutive children with new-onset epilepsy (Mage = 7.2 ± 2.0; 61.3% males; 75.8% Caucasian) and their primary caregivers. Antiepileptic drug adherence was electronically monitored for the three months prior to the 4-month clinic follow-up visit. Parent-reported adherence captured adherence during the one week prior to the clinic visit. For sensitivity/specificity analyses of parent-reported adherence, cut-points of 50%, 80%, and 90% were used, with electronically monitored adherence calculated for the one week prior to the clinic visit as the reference criterion.
Electronically monitored adherence (80.3%) was significantly lower than parent-reported adherence (96.5%; p < 0.0001) in the week prior to the clinic visit, although the two measures were significantly correlated (rho = 0.46, p < 0.001). The 90% parent-reported adherence cut-point demonstrated the best sensitivity and specificity relative to electronically monitored adherence; however, specificity was still only 28%. A correction factor of 0.83 was identified as a reliable adjustment for parent-reported adherence when compared to electronically monitored adherence.
Although electronic monitoring is the gold standard of adherence measurement for pediatric epilepsy, it is often not clinically feasible to integrate it into routine clinical care. Thus, use of a correction factor for interpreting parent-reported adherence holds promise as a reliable clinical tool. With reliable adherence measurement, clinicians can provide adherence interventions with the hope of optimizing health outcomes for children with epilepsy.
The success of antiepileptic drug (AED) therapy in the treatment of pediatric epilepsy is dependent upon a patient’s ability to take his/her prescribed treatment regimen. Medication adherence is defined as the extent to which a person’s medication-taking behaviors coincide with medical advice (Haynes, 1979). In adult epilepsy populations, poor AED adherence contributes to morbidity (i.e., persistent seizure activity) (Bassili et al., 2002; Cramer et al., 2002; Gopinath, Radhakrishnan et al., 2000; Jones et al., 2006; Manjunath et al., 2009), mortality (Faught et al., 2008), healthcare expenditures (Berg et al., 1993; DiMatteo et al., 2002), and reduced quality of life (Dew et al., 2001). While less is known about AED adherence in children, studies have demonstrated non-adherence rates ranging from 14–34% with self-report/parent-report methods (Al-Faris et al., 2002; Asadi-Pooya, 2005; Asato et al., 2009; Kanner, 2003; Kyngas, 2000a, 2000b; Mitchell, 2000) and 14–21% with electronic monitoring (EM; Modi et al., 2008). Multi-method assessment has likely contributed to variability in non-adherence rates. Lack of consistent measurement and empirically-derived parameters to define non-adherence (Cramer et al., 2008), as well as minimal knowledge of measurement agreement, limits our understanding of non-adherence within pediatric epilepsy. This information is critical to advance knowledge of pediatric AED adherence thresholds that detrimentally impact epilepsy outcomes.
Electronic monitoring has been termed the “gold standard” for valid adherence measurement given its precision in capturing actual medication dosing when compared to other assessment methods (Cramer, 1995; Rapoff, 1999). Specifically, EM examines adherence in “real-time” over extended, continuous periods of time (Quittner et al., 2008); yet, its high costs (e.g., $138 per patient) limit its clinical utility. In turn, many clinicians rely on self-report and parent-report to assess adherence. Subjective measures are easily administered and cost-effective, but often inflate adherence by approximately 20% (Haynes et al., 1980) secondary to reporting biases (e.g., high social desirability, poor recall, recency effects; Mitchell et al., 2000). However, no studies have compared these methods and determined whether the more clinically feasible subjective method could be better applied with a reliable correction factor based upon EM adherence data (Jasti et al., 2006; Quittner et al., 2008).
The aims of the current study were to 1) document and examine associations between EM and parent-reported adherence, 2) determine the sensitivity and specificity of parent-reported adherence at different adherence cut-points compared to EM, and 3) develop a reliable correction factor for interpreting parent-reported adherence. An exploratory aim was to ascertain whether agreement between parent-report and EM varied by sociodemographic and epilepsy-specific characteristics. Parent-reported adherence rates were hypothesized to be correlated with, yet inflated compared to, EM adherence. Sensitivity and specificity were hypothesized to increase with more rigorous adherence cut-points.
Children between the ages of 2 and 12 years diagnosed with epilepsy were recruited from the New Onset Seizure Clinic at Cincinnati Children’s Hospital Medical Center (CCHMC). Families participating in this study were part of a larger longitudinal study examining adherence to AED therapy over a two-year period following diagnosis. Study eligibility criteria for the larger study included (1) new diagnosis of epilepsy, (2) age between 2 and 12 years, (3) absence of a comorbid systemic illness (e.g., asthma, diabetes) or significant developmental disorder (e.g., autism, Down syndrome, intellectual disability), (4) no prior treatment for seizures, and (5) initiation of carbamazepine or valproic acid immediately after diagnosis. Monotherapy with one of these anti-epileptic medications is standard practice within the clinic. A consecutive cohort of 130 children and their primary caregivers were eligible and subsequently approached for study participation during routine clinical care. Informed consent was obtained from 125 families yielding a recruitment rate of 96%. Reasons for not participating included feeling overwhelmed by a new epilepsy diagnosis or being too busy. In addition, one family who originally consented was withdrawn due to an autism diagnosis not disclosed during the consent process. For purposes of the current study, 13 participants stopped attending clinic visits shortly after study enrollment; thus, the final cohort included 111 children and their caregivers.
Study protocol and consent forms were approved by the CCHMC Institutional Review Board prior to study implementation. Following informed consent, primary caregivers completed a demographics form and provided information regarding seizure history. Participants were provided a Medication Event Monitoring System (MEMS) TrackCap and bottle to begin EM of their child’s AED therapy at diagnosis. MEMS TrackCaps were downloaded at subsequent clinic visits, including both the 1-month and 4-month appointments. Adherence data at the 4-month clinic visit were chosen as the primary outcome measure, given that participants would be on a maintenance AED dose. Medical chart reviews were also conducted to extract information regarding seizure type, seizure history, and prescribed treatment regimen (e.g., AED and dosing frequency), which are collected at each clinic visit. Primary caregivers received a $10 gift card for completing the questionnaires and $10 for bringing back the MEMS TrackCaps at each clinic visit.
The Medication Event Monitoring System (MEMS® 6 TrackCap) made by AARDEX Corporation (Union City, CA) is an EM system that measures the dosing histories of oral medications. Each unit has two components: a standard plastic vial with a threaded opening and a closure for the vial that contains a microelectronic circuit to register the dates and times the bottle is opened and closed. The MEMS TrackCap stores times and dates for bottle openings over 36 months. For the current study, MEMS TrackCaps were used to monitor AED adherence over the 4-month study period.
Caregivers were asked to complete one ad-hoc question: “In the past week, my child has missed _____ doses of the medication.” This question was prefaced by non-judgmental language that normalized non-adherence to treatment for children and their caregivers. A time period of one-week was used to maximize respondent recall and minimize memory decay. This question has been found to highly correlate with subscales on the Pediatric Epilepsy Medication Self-Management Questionnaire (PEMSQ) (Modi et al., 2010).
Caregivers provided information on the child’s age, gender, ethnicity, relation to child, caregiver marital status, occupation, socioeconomic status (SES), and family composition at the initial clinic visit. A revised Duncan score (Mueller & Parcel, 1981; Nakao & Treas, 1992; Stevens & Featherman, 1981), which is an occupation-based, contemporary measure of SES (Hauser, 1994), was calculated for each family. Scores range from 15 to 97 with higher scores representing greater occupational attainment. For two-caregiver households, the higher Duncan score was used in analyses.
Adherence rates were calculated for parent-report and EM in the following manner: the number of treatments performed daily was divided by the number of prescribed daily treatments and then multiplied by 100 to determine adherence percentages for the allocated time periods. For example, a patient prescribed twice-daily dosing who takes only one dose per day would have an adherence rate of 50% (i.e., 1/2 × 100 = 50%). Adherence rates for EM were calculated both for the week prior to the clinic visit and for the 4 months since diagnosis. Truncated EM adherence rates (maximum of 100%) were used in analyses to reduce inflation resulting from overuse or extra openings that may have occurred due to prescription refills. This method has been used successfully in studies assessing EM adherence (Cramer et al., 2002; Jones et al., 2006). For the 3 participants without EM data at the 4-month clinic visit (due to either forgetting to take the MEMS bottle on vacation or AED discontinuation for a portion of the monitoring period because of an intolerable side effect), adherence data from the next follow-up clinic visit (i.e., the 7-month visit) were used to maximize complete adherence data. Parent-reported adherence was calculated for the 1 week prior to the 4-month clinic visit.
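The truncated adherence calculation described above can be sketched in Python; the function name and example values are ours, for illustration only:

```python
def adherence_rate(doses_taken, doses_prescribed):
    """Percent adherence, truncated at 100% to avoid inflation
    from extra bottle openings (e.g., prescription refills)."""
    if doses_prescribed <= 0:
        raise ValueError("doses_prescribed must be positive")
    return min(doses_taken / doses_prescribed * 100, 100.0)

# A patient prescribed twice-daily dosing who takes one dose per day:
# 7 of 14 prescribed doses over one week.
print(adherence_rate(7, 14))   # → 50.0
```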
Descriptive analyses, including means, medians, and standard deviations, were calculated for parent-reported and EM adherence. A Wilcoxon Signed Ranks Test was conducted to compare parent-reported adherence to EM adherence over a 1-week period of time. Given the skewness of adherence data, Spearman’s correlation coefficient (rho) was used to examine the association between parent-reported and EM adherence over the same period of time.
Based on prior literature (Phipps & DeCuir-Whalley, 1990) and the lack of empirical evidence regarding optimal levels of adherence within pediatric epilepsy, adherence cut-points of 50%, 80%, and 90% were used to examine nonadherence with both parent-reported and EM adherence. To ensure that adherence reflected the same period of time, both parent-reported and EM adherence captured adherence one-week prior to the clinic visit. Sensitivity (i.e., proportion of adherent participants as defined by EM correctly identified as adherent by parent-report), specificity (i.e., proportion of non-adherent participants as defined by EM correctly identified as non-adherent by parent-report) and test accuracy were calculated using receiver-operating characteristic (ROC) curve analyses. EM was employed as the “reference standard” and parent-reported adherence as the “test” to identify changes in sensitivity, specificity, and accuracy across the 3 adherence cut-points (50%, 80%, 90%). The area under a ROC curve quantifies the overall ability of the test to discriminate between parent-reported and EM adherence. For example, a test with poor sensitivity and specificity would have an area of 0.50, whereas a test with excellent sensitivity and specificity would have an area of 1.0.
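A minimal sketch of computing sensitivity and specificity at a given cut-point, following the definitions above (EM as the reference standard, parent-report as the test); the data values below are hypothetical, not study data:

```python
def sensitivity_specificity(em, reported, cut):
    """Classify each participant as adherent (>= cut) by EM (reference)
    and by parent report (test). Sensitivity = adherent-by-EM correctly
    reported as adherent; specificity = non-adherent-by-EM correctly
    reported as non-adherent."""
    em_adh = [e >= cut for e in em]
    pr_adh = [r >= cut for r in reported]
    tp = sum(e and p for e, p in zip(em_adh, pr_adh))
    fn = sum(e and not p for e, p in zip(em_adh, pr_adh))
    tn = sum(not e and not p for e, p in zip(em_adh, pr_adh))
    fp = sum(not e and p for e, p in zip(em_adh, pr_adh))
    sens = tp / (tp + fn) if tp + fn else float("nan")
    spec = tn / (tn + fp) if tn + fp else float("nan")
    return sens, spec

# Hypothetical adherence percentages for five participants:
em = [95, 60, 85, 40, 100]
pr = [100, 90, 95, 85, 100]
sens, spec = sensitivity_specificity(em, pr, 90)
# For these made-up data: sensitivity = 1.0, specificity = 1/3,
# illustrating how inflated parent report erodes specificity.
```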
To develop a reliable correction factor for parent-reported adherence, the sample was split into two samples (Sample 1: first 54 participants; Sample 2: last 54 participants). Within Sample 1, a correction factor for each participant was calculated by dividing EM adherence by parent-reported adherence. The mean correction factor from Sample 1 was then applied to Sample 2 (n = 54) by multiplying each participant’s parent-reported adherence by the correction factor. Using a one-sample t-test within Sample 2, the difference between Sample 2’s corrected parent-reported adherence and EM adherence was tested against zero (i.e., reference comparison).
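The split-sample correction-factor procedure can be illustrated as follows; the sample values are hypothetical, and the one-sample t statistic is computed by hand rather than with a statistics package:

```python
import math
import statistics

def derive_correction_factor(em1, pr1):
    """Mean of per-participant EM/parent-report ratios (Sample 1)."""
    return statistics.mean(e / p for e, p in zip(em1, pr1))

def one_sample_t(diffs):
    """t statistic for H0: mean difference = 0
    (corrected parent report vs. EM in Sample 2)."""
    n = len(diffs)
    return statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))

# Hypothetical Sample 1: derive the factor.
em1, pr1 = [80, 70, 90], [95, 90, 100]
factor = derive_correction_factor(em1, pr1)

# Hypothetical Sample 2: apply the factor and test the differences.
em2, pr2 = [85, 75], [100, 95]
corrected = [factor * p for p in pr2]
diffs = [c - e for c, e in zip(corrected, em2)]
t = one_sample_t(diffs)
# A non-significant t would indicate the corrected parent report
# does not differ reliably from EM, as in the study's Sample 2.
```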
To determine whether agreement between parent-reported and EM adherence varied by sociodemographic and disease-specific characteristics as an exploratory study aim, Spearman’s correlations were calculated across dichotomized sociodemographic subgroups. SES was dichotomized using a median split and child age was dichotomized into two developmental age groups: pre-school (2–6 years) and school-age (7–12 years). Fisher’s Z transformation (p < 0.05) was employed to evaluate the equality of the correlation coefficients utilizing normal approximation. Significance was identified as p < 0.05. Analyses were performed using SPSS 15.0 (SPSS Inc., Chicago, IL).
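The Fisher z comparison of two correlation coefficients under the normal approximation can be sketched as below; the subgroup correlations and sample sizes are hypothetical:

```python
import math

def fisher_z_compare(r1, n1, r2, n2):
    """Normal-approximation test statistic for equality of two
    correlation coefficients via Fisher's z transformation."""
    z1, z2 = math.atanh(r1), math.atanh(r2)
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    return (z1 - z2) / se

# Hypothetical subgroup correlations (e.g., pre-school vs. school-age):
z = fisher_z_compare(0.50, 55, 0.40, 56)
# |z| < 1.96 → the two correlations do not differ at p < 0.05 (two-tailed)
```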
Descriptive statistics regarding participant information and adherence are presented in Table 1. Mean parent-reported adherence in the week prior to the clinic visit was significantly higher (96.5%; SD = 7.6) than mean EM adherence data for the same time period (80.3%; SD = 27.0; z = −6.4, p < 0.0001), but both were significantly correlated (rho = 0.46, p < 0.001).
The percentage of participants meeting adherence cut-points decreased as the adherence cut-points became increasingly rigorous for both EM and parent-reported adherence (see Figure 1). Specifically with parent-reported adherence data, identification of non-adherence went from 1% (50% cut-point) to 13% (90% cut-point), whereas for EM adherence, non-adherence increased from 12% (50% cut-point) to 43% (90% cut-point). The ROC curve statistic identified the 90% cut-point of parent-reported adherence as having the highest sensitivity and specificity to EM adherence (ROC statistic = 0.63, p < 0.05, 95% CI = 0.52–0.75). Sensitivity and specificity were not significant for the 50% (ROC statistic = 0.54, p = NS, 95% CI = 0.36–0.73) and 80% (ROC statistic = 0.53, p = NS, 95% CI = 0.40–0.65) cut-points. Changes in sensitivity and specificity for the 50%, 80%, and 90% cut-points are outlined in Table 2.
A mean correction factor was generated for Sample 1 (Mcorrection = 0.83 ± 0.25). The correction factor was then applied to Sample 2, and the difference between corrected parent-reported and EM adherence (Mdifference = 2.63 ± 22.9) was calculated using a one-sample t-test. A significant difference was not detected when compared to the reference value of 0 (t(45) = 0.78, p = NS).
No significant differences were identified by sample subgroups (i.e., age, gender, marital status, SES, race, seizure activity and type) when examining the equality of Spearman’s correlation coefficients (see Table 3).
Given that EM is considered the “gold standard” criterion against which other adherence measurement methods are compared, data from the current study provide initial evidence on the reliability of a correction factor for parent-reported adherence in a pediatric epilepsy sample. Consistent with our hypothesis and the broader adherence literature (Hansen et al., 2009; Modi et al., 2006), parent-reported adherence was significantly correlated with, but inflated relative to, EM. Inflated ratings of parent-reported adherence could be due to multiple factors, including social desirability effects, poor memory recall, or individuals recalling global perceptions of adherence rather than specific time frames (e.g., doses missed within the past week) (Quittner et al., 2008). Although the factors influencing self/parent-reported adherence were not the focus of the current study, establishing the reliability of this clinically practical method is critical to the accurate assessment of adherence in clinic settings.
The cut-point of 80% has been broadly used to define non-adherence across a variety of pediatric and adult chronic conditions (Rapoff, 1999); however, it is quite arbitrary (Cramer et al., 2008) and no studies have examined the validity of this cut-point in pediatric epilepsy. Our data suggest that a cut-point of 90% adherence for parent-report measures has the best sensitivity and specificity compared to 50% or 80% cut-points; however, specificity is still extremely poor at only 28%. Thus, parent-reported adherence that is not corrected for inflation is unhelpful in identifying patients who are non-adherent. One potential reason for this is that caregivers in the current study may have been reluctant to report missing their child’s AED in fear of being judged by the healthcare team. Overall, these data suggest a need to develop self/parent-report measures that normalize non-adherence and allow families to be more comfortable reporting adherence difficulties (Quittner et al., 2008) or as our study has demonstrated, develop “correction” factors for parent-report.
Given the detrimental impact of non-adherence on both health outcomes (Bassili et al., 2002; Cramer et al., 2002; Gopinath et al., 2000; Jones et al., 2006; Manjunath et al., 2009) and unnecessary healthcare costs (Cutler & Everett, 2010), implementation of EM would be ideal for monitoring adherence in standard clinical practice. EM has the ability to better identify patients at risk for non-adherence who may benefit from adherence interventions. Although the primary drawback to EM is its high cost, EM is the most reliable adherence tool and routine use may off-set the cost of unnecessary healthcare expenditures that plague the current healthcare system (e.g., unwarranted hospitalizations, emergency room visits). However, in the absence of objective EM adherence data, a self/parent-report adherence measure with a correction factor of 0.83, which is consistent with the broader adult adherence literature (Haynes et al., 1980), could serve as a reliable assessment tool in clinical settings. For example, if a patient reported perfect adherence (i.e., 100%), a clinician could apply the correction factor and interpret this rate as approximately 83% based on our findings. This technique is particularly helpful if the clinician has other evidence to suggest that self/parent-reported adherence information may be unreliable (e.g., inconsistent reporting of adherence behaviors, zero serum blood levels).
Even when clinicians routinely assess for AED adherence and suspect adherence concerns, clinicians and patients often hesitate to openly discuss non-adherence, as it typically elicits strong reactions from both families and healthcare teams. Clinicians may not consider poor adherence when seizures occur or when families present well (i.e., high social desirability) regarding adherence. Lack of communication around adherence can result in unwarranted AED dosage increases or changes in AEDs (Koumoutsos et al., 2007). Open discussion of adherence barriers during clinic visits, along with consistent EM of adherence when possible, or use of a correction factor for self/parent-reported adherence, may hold promise for improving patient care. Nonetheless, patient perceptions of adherence can serve as a critical point for intervention. For example, families who believe they are highly adherent may benefit from examining their own EM adherence to provide objective data regarding their adherence behaviors. Given the lower rates of adherence identified in the current study, families experiencing adherence challenges may benefit from adherence-promotion interventions (Graves et al., 2010; Kahana et al., 2008).
Although this is the first study examining multi-method pediatric AED adherence, several limitations are noted that have direct implications for future research. A primary limitation is that the data only provide a snapshot of adherence during the first 4 months of AED therapy for a restricted age range (i.e., 2–12 years of age). Findings may not generalize for longer-term adherence or for adolescent populations for which adherence appears particularly problematic. Caregivers likely had primary responsibility for administering AEDs in the current study, whereas both the adolescent and caregiver play a role during adolescence. Second, ingestion of the medication is presumed with EM, but not confirmed. Finally, without a well-validated standardized self/parent-reported measure of adherence, we used one ad-hoc question representing caregiver-perceived adherence in the past week. Since the inception of this study, we have developed the Pediatric Epilepsy Medication Self-Management Questionnaire (Modi et al., 2010), which has an Adherence to Treatment and Clinic Appointments subscale that could further our understanding of adherence behaviors.
Data from the current study have direct implications for clinical care. Of primary importance is the need to routinely assess AED adherence, as it can have a direct impact on seizure management and healthcare costs (Cutler & Everett, 2010). Multi-method assessment can yield variable rates of non-adherence, and the limitations of each method need to be acknowledged. Although healthcare teams should consider how EM can be integrated into clinical care for their patients, in the interim, self/parent-reported adherence with a correction factor may serve as a useful clinical tool. With routine adherence monitoring, healthcare professionals can proactively identify patients at risk for non-adherence and then refer them for empirically-supported adherence interventions (Graves et al., 2010; Kahana et al., 2008) that can ultimately improve the health outcomes of children with epilepsy.
We confirm that we have read the Journal’s position on issues involved in ethical publication and affirm that this report is consistent with those guidelines.
This research was funded by a grant from the National Institutes of Health (K23HD057333) to the first author.
We would like to extend our deepest appreciation to the children with epilepsy and their families who participated in this study. We thank Julie Koumoutsos, Elizabeth Painter, Avnish Dhamija, and Samantha Gambill for recruiting participants and collecting data. We also thank the healthcare team involved in the medical and psychosocial care of children with new-onset epilepsy who facilitated the current research, including the nurse practitioners, nurses, and social workers.
None of the authors has any conflict of interest to disclose.