Physical activity monitors have a number of overarching strengths. First, they provide objective measures of physical activity behaviors that are free of the random and systematic errors associated with self-report. As such, they are generally believed to provide better assessments for many activities, particularly activities that have proved difficult to measure by self-report (e.g., walking). In the past decade, physical activity monitors have been successfully used in many large-scale population-based studies (e.g., (10)). The instruments used in these studies, typically worn for 7-day periods, have proven to be acceptable to participants. For example, the National Health and Nutrition Examination Survey (NHANES) includes a US population-based sample of individuals aged 6 to 85 years. Among NHANES participants who wore the devices for at least 10 hours on at least one day, 79% provided 3 or more days of valid data (21), although compliance varied by age group (32). From a technical perspective, the instruments used in these studies have generally proven to be reliable and rugged on repeated use, and the number of instruments lost has been acceptable.
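The NHANES-style compliance criteria described above can be sketched in code. The 10-hour valid-day threshold and the 3-day reporting cutoff come from the text; the data layout (minutes of wear per calendar day, with wear time already determined) is an assumption for illustration.

```python
# Sketch: counting valid monitoring days under an NHANES-style rule.
# Assumption (not from the text): wear time has already been computed
# and is available as minutes of wear per calendar day.

VALID_DAY_MINUTES = 10 * 60  # >= 10 hours of wear makes a day "valid"

def count_valid_days(wear_minutes_per_day):
    """Return the number of days meeting the 10-hour wear criterion."""
    return sum(1 for m in wear_minutes_per_day if m >= VALID_DAY_MINUTES)

def is_compliant(wear_minutes_per_day, min_valid_days=3):
    """True if the participant provided 3 or more valid days."""
    return count_valid_days(wear_minutes_per_day) >= min_valid_days

# Example: a 7-day wear period with three short days.
week = [650, 700, 590, 300, 615, 120, 640]
count_valid_days(week)  # 4 days meet the 10-hour threshold
is_compliant(week)      # True
```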
Newer physical activity monitor methods that have employed direct measures of body posture have provided new insights into the relation between activity patterns and health. The amount of time adults spend standing upright can be as high as 5 to 8 hours per day (11), and the difference between time spent in sitting and standing behaviors is an important determinant of physical activity energy expenditure (17) and metabolic health (13). Furthermore, breaks in sedentary time appear to influence metabolic risk factors (12). We have noted similarly high levels of upright behaviors using the activPAL (PAL Technologies, LTD, Glasgow, UK) (9) among healthy middle-aged adults and cancer survivors entering an exercise intervention (). Of the approximately 16 hours of daily monitoring, these adults engaged in roughly 48 individual bouts of active and sedentary behavior (i.e., transitions between sitting and standing/stepping), spent an average of about 5 hours upright, and about 70% of upright time was spent standing still.
[Table: Distribution of time in upright/active and sedentary behaviors in healthy controls and cancer survivors (N=95)]
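The posture summaries reported above (sit/stand transitions, upright hours, percent of upright time spent standing) can be sketched from an event-level record. The event format used here (a posture label plus a duration in seconds) is an assumption for illustration, not the activPAL's actual export schema.

```python
# Sketch: summarizing posture data of the kind a posture-based monitor
# provides. The (posture, duration_s) event format is assumed for
# illustration.

def summarize_posture(events):
    """events: list of (posture, duration_s) with posture in
    {'sit', 'stand', 'step'}. Returns (transitions, upright_hours,
    pct_of_upright_time_standing)."""
    transitions = 0
    sit_s = stand_s = step_s = 0.0
    prev_upright = None
    for posture, duration_s in events:
        upright = posture in ('stand', 'step')
        if prev_upright is not None and upright != prev_upright:
            transitions += 1  # sit-to-upright or upright-to-sit change
        prev_upright = upright
        if posture == 'sit':
            sit_s += duration_s
        elif posture == 'stand':
            stand_s += duration_s
        else:
            step_s += duration_s
    upright_s = stand_s + step_s
    pct_standing = 100.0 * stand_s / upright_s if upright_s else 0.0
    return transitions, upright_s / 3600.0, pct_standing

events = [('sit', 3600), ('stand', 1200), ('step', 600),
          ('sit', 1800), ('stand', 900)]
summarize_posture(events)  # 3 transitions, 0.75 h upright, ~77.8% standing
```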
However, physical activity monitors also have at least three weaknesses that should be recognized. First, the accuracy and precision of instruments currently available in the marketplace (largely single-monitor devices worn on the waist or upper arm) can be limited for certain types of upright behaviors that have a low ambulatory component and that may involve upper body work. The amount of time spent in these behaviors is only now being described (e.g., ), and future calibration efforts should include common upright activities in their protocols. In recent years, considerable effort has been made to enhance the ability of physical activity monitors to capture these behaviors, and methods that use sophisticated treatment of densely sampled data (e.g., 1- to 10-second epochs) to select appropriate prediction equations (8), classify types of behavior (6), or derive expenditure (26) appear to be more successful than were initial efforts to develop a single regression equation to account for a wide range of behaviors (19).
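The two-stage idea described above (classify each short epoch's behavior type, then apply a behavior-specific prediction equation rather than one global regression) can be sketched as follows. All feature names, thresholds, and coefficients here are hypothetical placeholders, not any published calibration.

```python
# Sketch: epoch-level classification followed by class-specific
# prediction equations. Thresholds and coefficients are hypothetical.

def classify_epoch(mean_counts, count_variability):
    """Crude behavior classification from short-epoch features."""
    if mean_counts < 50:
        return 'sedentary'
    if count_variability > 0.8:
        return 'intermittent'   # e.g., household or upper-body work
    return 'locomotion'

# One prediction equation per behavior class, instead of a single
# global regression across all behaviors.
EQUATIONS = {
    'sedentary':    lambda c: 1.0,                # ~resting METs
    'intermittent': lambda c: 1.5 + 0.0015 * c,   # placeholder coefficients
    'locomotion':   lambda c: 1.2 + 0.0025 * c,
}

def estimate_mets(mean_counts, count_variability):
    label = classify_epoch(mean_counts, count_variability)
    return label, EQUATIONS[label](mean_counts)

estimate_mets(30, 0.1)    # ('sedentary', 1.0)
estimate_mets(400, 0.9)   # ('intermittent', 2.1)
```

The design point is that each equation only has to fit a narrow class of behaviors, which is why these approaches outperform a single regression stretched across the full activity range.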
Second, information about the location or purpose of individual activities is limited unless information from other sources is integrated with the monitor data. Lack of information about where and why behavior occurs may be a greater weakness for surveillance and intervention designs than for association studies, because surveillance studies need to classify behavior specifically. Technological solutions using GPS are being developed, but behavioral logs that allow the location and purpose of behavior to be integrated with physical activity monitor data may also be useful. However, the use of additional monitoring systems can add to the burden on participants and study staff.
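The integration step described above amounts to a timestamp join between monitor epochs and log entries. A minimal sketch, assuming simplified record layouts (real studies would also need clock synchronization between the two sources):

```python
# Sketch: annotating monitor epochs with location/purpose from a
# behavioral log by timestamp overlap. Record layouts are assumed
# for illustration.

def annotate_epochs(epochs, log):
    """epochs: list of (start_s, counts); log: list of
    (start_s, end_s, location, purpose), non-overlapping.
    Returns epochs tagged with the log entry covering their start."""
    annotated = []
    for start_s, counts in epochs:
        tag = ('unknown', 'unknown')
        for log_start, log_end, location, purpose in log:
            if log_start <= start_s < log_end:
                tag = (location, purpose)
                break
        annotated.append((start_s, counts, *tag))
    return annotated

epochs = [(0, 120), (60, 300), (120, 40)]
log = [(0, 100, 'home', 'chores'), (100, 200, 'park', 'walking')]
annotate_epochs(epochs, log)
# [(0, 120, 'home', 'chores'), (60, 300, 'home', 'chores'),
#  (120, 40, 'park', 'walking')]
```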
Third, a particular challenge for large population-based studies is that the infrastructure needed to deploy the instruments and to process the large volumes of field data with automated, high-throughput data reduction methods remains limited. The software available from monitor manufacturers generally does an excellent job of interacting with the device during initialization and downloading, and provides a means of visualizing results from individual data files. However, post-processing of the data requires staff to carry out a number of quality control checks and to determine participants' wearing time and compliance with the protocol. Standardized quality control procedures are needed to identify and flag bad data that may result from monitor malfunctions, participant tampering, unknown responses (e.g., out-of-range values), and human error (e.g., errors at the time of initialization). Integrating a core set of quality control indicators and estimates of wearing time into the data download (or export) process could simplify use of the devices in large population-based studies and further standardize the methods employed by different research groups.
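Two of the post-processing steps named above, a wearing-time estimate and an out-of-range flag, can be sketched as follows. Treating long runs of consecutive zero counts as non-wear is a common heuristic in accelerometer processing, but the 60-minute window and the count ceiling used here are assumptions, not a published standard.

```python
# Sketch: wear-time estimation and out-of-range flagging for
# minute-level accelerometer counts. Window length and ceiling
# are assumed values, not a standard.

NONWEAR_WINDOW_MIN = 60       # consecutive zero minutes treated as non-wear
MAX_PLAUSIBLE_COUNTS = 20000  # per-minute ceiling; above this, flag the value

def wear_minutes(counts_per_minute):
    """Total wear time after removing runs of >= 60 consecutive zero minutes."""
    wear = 0
    zero_run = 0  # length of the current run of zero-count minutes
    for c in counts_per_minute:
        if c == 0:
            zero_run += 1
        else:
            if zero_run < NONWEAR_WINDOW_MIN:
                wear += zero_run  # short zero runs still count as wear
            zero_run = 0
            wear += 1
    if zero_run < NONWEAR_WINDOW_MIN:
        wear += zero_run
    return wear

def out_of_range_flags(counts_per_minute):
    """Indices of implausible values (negative or above the ceiling)."""
    return [i for i, c in enumerate(counts_per_minute)
            if c < 0 or c > MAX_PLAUSIBLE_COUNTS]

day = [500] * 30 + [0] * 90 + [800] * 60  # 30 min active, 90 min zeros, 60 min active
wear_minutes(day)                     # 90: the zero run is dropped as non-wear
out_of_range_flags([100, -5, 25000])  # [1, 2]
```

Bundling checks like these into the download or export step, as the text suggests, would let different research groups apply identical wear-time and quality-control rules without custom post-processing.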