In the publication of physical activity clinical studies, more detail is necessary for researchers and practitioners to evaluate the methods used in data collection (i.e., the implementation of best practice recommendations). This paper provides information about integrating an accelerometer into the methodology of a physical activity clinical trial and demonstrates the application of best practice recommendations for accelerometry in an intervention study. Specifically, studies employing accelerometry need to follow and report on the following best practice recommendations to ensure the integrity of the objective data: 1) appropriate monitor selection and data quality; 2) detailed protocols for monitor use; 3) consistent calibration of monitors and checking of data quality; 4) use of validated equations for analysis; and 5) integration of other sources of physical activity measurement.
Based on the best practice recommendations cited above, the ActiGraph did detect the improvements in physical activity observed by other measures, specifically the PAR. The pattern of ActiGraph results from baseline to month 6 is also consistent with the results from the PAR, indicating that the ActiGraph is sensitive to detecting physical activity changes in an intervention study. These results also confirm the findings of the main trial at 6 months [29], with both treatment arms outperforming the control arm on physical activity behavior. The results were not consistent at 12 months: the ActiGraph data show no differences between treatment and control, while in the main trial, the print arm outperformed the control arm. It is notable that the ActiGraph, even with a 26% subsample and only 3 days of monitoring, detected similar effects at 6 months. However, given that the sample was small and captured only 3 days of monitoring, it is not surprising that the pattern did not hold at month 12.
In this study, the correlations between the ActiGraph and the interviewer-administered PAR were small-to-medium [47], ranging from 0.28–0.48. These results are comparable to other studies, in which correlations with the PAR range from 0.06–0.90 [25]. These findings show that the ActiGraph and the PAR, while not perfectly correlated, have enough overlap to indicate that the ActiGraph can be used as an appropriate measurement of activity.
When self-reported physical activity, rather than the interviewer-based method, is used as the comparison, the ActiGraph and the weekly self-report were significantly correlated at baseline and month 6, but not at month 12, with correlations ranging from 0.19 to 0.44; the monthly log had a significant correlation at month 6, but not at month 12. These values are similar to those from other studies, which found correlations ranging from 0.15–0.50 [26]. These small-to-moderate correlations show that, while some improvements in precision remain necessary, the measures can be used as complementary methods in physical activity trials.
ActiGraph and fitness [VO2 (ml/kg/min)] were significantly correlated at baseline, again with the range in the small-to-moderate category. These correlations are similar to those found in another study, with ranges of 0.49–0.54 for men and 0.14–0.47 for women [48]. However, even with best practices, there is error inherent in comparing fitness to activity behavior due to recall biases in self-report, cut-point errors with the accelerometer, and the fact that physical activity behavior is not a proxy for fitness.
At baseline and month 12, there appears to be a more consistent relationship between the ActiGraph and the other complementary self-report and interviewer-based measures. Post-hoc tests were undertaken to examine why the relationships were not significant at month 6. Specifically, we examined whether there were group differences based on treatment assignment. Interestingly, at month 6 there was a correlation of −0.20 between the PAR and ActiGraph for the delayed treatment/contact control group. This suggests that participants in the delayed treatment group, who received no physical activity advice or behavior change information, may have inaccurately reported their behavior due to social desirability or simply due to a lack of experiential awareness of what constitutes moderate intensity activity.
The relationship for vigorous intensity activity became more consistent over time. Research has found that individuals have difficulty differentiating between activities of moderate and vigorous intensities [50], which highlights a gap between the current exercise prescription and the understanding of the general public [50]. Vigorous activities tend to be well-defined (e.g., discrete events such as sports or jogging); thus, they may result in more accurate recall [48]. It may be that recall accuracy is driven by the type of activity, with moderate- and light-intensity activities recalled less accurately. In this study, our correlations ranged from 0.28–0.42 for vigorous intensity and showed consistent relationships at each time point. However, based on the PAR, only 9–17% of the activities recorded were of vigorous intensity. These low rates of vigorous intensity activity make it difficult to fully assess this supposition. It is also possible that improved self-report of physical activity at month 6 and month 12 may exaggerate the gains from participating in the intervention, irrespective of treatment group. This should be explored further in future studies [22].
When the scatterplots are examined, they demonstrate that there are participants who report no physical activity on the PAR yet register between 3 and 169 minutes of moderate intensity physical activity according to the ActiGraph. This shows a large degree of underreporting, which declines over time but is still present. There are a few hypothesized reasons for this finding. First, it is possible that as participants interacted more with the research team, they became better aware of what constituted moderate-intensity physical activity. Second, much of the activity recorded as light activity on the ActiGraph at baseline may actually have been misclassified. For data processing of the ActiGraph information, the Freedson equation was used [39]. However, some research has shown that the Freedson cut-points overestimate resting/light activity by 14% and underestimate moderate intensity by 60% [22]. Third, there may have been a response effect to being in a physical activity trial. For the purposes of the parent trial, the interviewer-administered 7-day PAR was used as the primary screening tool for physical activity behavior and was administered prior to the ActiGraph monitor assessment. More research is needed to determine whether an ActiGraph should be used as an entry criterion for study eligibility.
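Cut-point classification of this kind is straightforward to implement. The sketch below uses the counts-per-minute boundaries commonly attributed to the Freedson equation; the specific threshold values are illustrative and should be verified against the original calibration paper before use.

```python
# Illustrative Freedson-style cut-points (counts per minute); verify
# these boundaries against the original calibration paper before use.
LIGHT_MAX = 1951      # <= 1951 counts/min: sedentary/light
MODERATE_MAX = 5724   # 1952-5724 counts/min: moderate

def classify_minute(counts_per_min: int) -> str:
    """Assign an intensity category to one minute of activity counts."""
    if counts_per_min <= LIGHT_MAX:
        return "light"
    if counts_per_min <= MODERATE_MAX:
        return "moderate"
    return "vigorous"

def minutes_by_intensity(counts: list[int]) -> dict[str, int]:
    """Tally the minutes spent in each intensity band for a day of data."""
    totals = {"light": 0, "moderate": 0, "vigorous": 0}
    for c in counts:
        totals[classify_minute(c)] += 1
    return totals
```

Summing the moderate and vigorous tallies across valid wear days yields the minutes of moderate-to-vigorous activity that are compared against the self-report measures above.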
Additionally, research has found that accelerometer counts were higher during track locomotion than treadmill locomotion [51]. The Freedson values [39] were originally established for treadmill walking rather than walking in a free-living environment; therefore, there may have been misclassification from using them to establish the cutoff values for moderate intensity and above physical activity. Also, studies have found that the relationships between ActiGraph counts and actual activities were r=0.77 for walking but only r=0.59 for all activities combined [13]. This may indicate a problem with using accelerometers in free-living situations: if a participant was doing a type of activity other than walking, the monitor may not have captured it. Furthermore, accelerometers are known to be unable to detect upper body movement, the energy cost associated with it, or changes in surface or terrain [13]. Given the potential limitations of using accelerometers in free-living situations (which is the setting of home-based physical activity interventions), data reported in physical activity trials may contain inaccurate estimates and should be interpreted with caution.
There are other factors that investigators need to consider when incorporating accelerometers into clinical trials. First is the selection of the device itself. The ActiGraph is a uniaxial monitor that has been used in a number of research trials [e.g., 52]; other monitors that are triaxial (e.g., Tritrac) should also be considered depending on the needs of the study. Additionally, the number of monitors to be purchased is a consideration. Factors such as participant compliance with returning the monitors, the length of time required for data download, and the charging/recalibration process are important. For the current study, the investigative team chose to have a pool of 32 monitors, as we felt it was prudent to have an excess of monitors rather than miss an assessment window due to a lack of available equipment. This was particularly important in light of the rolling admission for this trial; at times we were conducting ActiGraph assessments for the baseline, month 6, and month 12 time points simultaneously. A metric (e.g., 1 monitor for every 3 participants) should be developed by each investigative team to ensure adequate device coverage for the study's needs.
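One way to derive such a metric is to work backward from the monitor turnaround cycle. The sketch below is a simplified model; the wear, turnaround, and enrollment figures in the example are hypothetical assumptions, not values from this trial.

```python
import math

def monitors_needed(participants: int, window_days: int,
                    wear_days: int, turnaround_days: int) -> int:
    """Estimate the monitor pool needed to assess `participants` within an
    assessment window, where each deployment ties up a monitor for the wear
    period plus turnaround (return, download, recharge/recalibrate)."""
    cycle_days = wear_days + turnaround_days
    # Number of participants a single monitor can serve within the window.
    uses_per_monitor = max(1, window_days // cycle_days)
    return math.ceil(participants / uses_per_monitor)

# Hypothetical example: 30 participants due in a 30-day window, 3 wear
# days, and an assumed 7-day turnaround, so each monitor serves 3
# participants and the pool needs 10 monitors.
pool = monitors_needed(participants=30, window_days=30,
                       wear_days=3, turnaround_days=7)
```

A model like this makes explicit why rolling admission inflates the pool: overlapping assessment windows shrink the number of reuses available per monitor.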
In terms of limitations, the sample size was small (n=63). Additionally, to minimize burden, participants were asked to wear the monitor for only three days, given that research has found that three to four days of monitoring are needed to achieve 80% reliability in the variance of activity [38]. Therefore, we were not able to capture the entire weekly pattern of participants' activity. This limits the ability to fully compare the complementary measures (i.e., monthly physical activity logs, weekly self-report data) from the main trial to this one. Future studies should have all participants wear a monitor and consider seven days versus three days of monitoring. We would recommend monitoring for a seven-day period, as this would best capture the patterns and duration of activity over a complete recording period.
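The "three to four days for 80% reliability" figure reflects a Spearman-Brown style projection of how many monitored days are needed to reach a target reliability. A minimal sketch of that arithmetic, assuming a hypothetical single-day reliability rather than a value estimated from this study:

```python
def days_for_reliability(single_day_reliability: float,
                         target_reliability: float) -> float:
    """Spearman-Brown prophecy formula solved for the number of days:
    k = [target / (1 - target)] * [(1 - r1) / r1], where r1 is the
    reliability (e.g., intraclass correlation) of a single day of wear."""
    r1 = single_day_reliability
    rt = target_reliability
    return (rt / (1 - rt)) * ((1 - r1) / r1)

# If one day of monitoring had a reliability of 0.50 (an assumption for
# illustration only), reaching 80% reliability would require 4 days.
days = days_for_reliability(0.50, 0.80)
```

The same formula shows why seven days of wear is a safer choice: it leaves headroom when the single-day reliability turns out lower than assumed.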
In this trial, moderate intensity physical activity was demonstrated for participants prior to each administration of the PAR by having them walk on a treadmill for one minute at moderate intensity. While we felt this one-minute demonstration would be effective for reminding participants of what moderate intensity “felt like”, further training of participants may be necessary to improve the validity of self-report, particularly for participants in a control or delayed treatment condition. Other methods for teaching participants to recognize how their body feels when exercising at moderate intensity and above include taking the participant on a one-minute walk in a free-living situation (e.g., hallway, sidewalk) [55] and longer simulated walks on a treadmill (i.e., 10 minutes) [e.g., 57]. These longer 10-minute walks demonstrate the minimum duration necessary to be considered a bout of activity [44].
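The 10-minute bout criterion can be operationalized directly on minute-level intensity classifications. A minimal sketch follows; this is a strict version that allows no brief interruptions, whereas some bout definitions permit short drops below the target intensity.

```python
def count_bouts(intensities: list[str], target: str = "moderate",
                min_len: int = 10) -> int:
    """Count strict bouts: runs of at least `min_len` consecutive minutes
    at the target intensity (no interruption allowance in this version)."""
    bouts = 0
    run = 0
    for level in intensities:
        if level == target:
            run += 1
        else:
            if run >= min_len:
                bouts += 1
            run = 0
    if run >= min_len:  # close out a bout that runs to the end of the data
        bouts += 1
    return bouts
```

Under this strict rule, a single light-intensity minute resets the run, which is one reason published bout definitions often tolerate one to two minutes of interruption.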
An interesting methodological question is whether the mere act of wearing an accelerometer influences participants’ behavior and reporting. The mean physical activity level obtained from the PAR for the participants who wore the monitor did not differ from the participants who did not wear the monitor. Additionally, there were no differences on change in the PAR at 6 or 12 months between those who wore a monitor and those who did not. This indicates that the addition of an accelerometer did not unduly influence the reporting of behavior and/or amount of activity.
The current study used a more traditional form of data capture, examining cut-points of activity for light, moderate, and vigorous intensity activity [39]. However, more recent studies have recommended more sophisticated data processing methodology to distinguish types of activity for specific behaviors previously difficult to capture (such as vacuuming or walking uphill) [59]. For example, Crouter, Clowers, and Bassett [60] suggested using the coefficient of variation in activity counts to distinguish between walking/running and lifestyle activities, and then applying the corresponding regression model equations. The authors found that their approach was better at predicting energy expenditure than previous single-regression models. Unfortunately, these methods could not be used for the current study because the ActiGraph data collected for the parent study were gathered under data collection parameters that preclude a reanalysis using the Crouter method [60]. It is possible that as the technology improves, the sensitivity of these methods for detecting bouts and types of activity will also improve.
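The branching logic of such a two-regression approach can be illustrated as follows. The coefficient-of-variation computation over 10-second epochs mirrors the structure of the published method, but the CV threshold and the regression stubs here are placeholders, not the published Crouter coefficients.

```python
import statistics

def coefficient_of_variation(epoch_counts: list[int]) -> float:
    """CV (%) of the 10-second epoch counts within one minute."""
    mean = statistics.mean(epoch_counts)
    if mean == 0:
        return 0.0
    return 100 * statistics.stdev(epoch_counts) / mean

def walk_run_regression(cpm: int) -> float:
    # Placeholder linear form; substitute the published equation.
    return 2.0 + 0.0006 * cpm

def lifestyle_regression(cpm: int) -> float:
    # Placeholder; the published model uses a different functional form.
    return 2.5 + 0.0004 * cpm

def estimate_mets(epoch_counts: list[int], cv_threshold: float = 10.0) -> float:
    """Two-regression branching: a low within-minute CV suggests steady
    locomotion (walking/running); a high CV suggests intermittent
    lifestyle activity. Threshold and equations are illustrative only."""
    cv = coefficient_of_variation(epoch_counts)
    counts_per_min = sum(epoch_counts)
    if counts_per_min <= 50:   # near-zero counts: treat as resting (1 MET)
        return 1.0
    if cv <= cv_threshold:
        return walk_run_regression(counts_per_min)
    return lifestyle_regression(counts_per_min)
```

Note that this approach requires counts stored in 10-second epochs; data collected and saved only in 60-second epochs, as in many parent trials, cannot be reprocessed this way, which is the constraint described above.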