The goal of this pilot study of college students was to examine the agreement between two computer user observational assessments (the mRULA and the UC Computer Use Checklist) and compare their concurrent validity in describing upper extremity musculoskeletal symptoms. Overall, the two assessments were independent of each other, and their associations with symptoms were inconsistent and often depended upon the inclusion of various covariates in the model.
While we expected the Rapid Upper Limb Assessment and UC Computer Use Checklist findings to be somewhat related, there was essentially no agreement between the two. Furthermore, the scatterplots, combined with the r² values associated with the trendline, illustrate that the two posture assessments are distinct from each other. One potential reason for the lack of correspondence is that each instrument has different origins. The UC Computer Use Checklist was originally designed for use among computer users, whereas the modified RULA was adapted from the RULA, which was originally intended for industrial workplaces rather than office environments. The modified Rapid Upper Limb Assessment scores the posture of the upper extremities separately, while the UC Computer Use Checklist measures computing postures relevant to upper extremity and neck musculoskeletal health. It seems reasonable that two survey instruments with different origins would not agree.
We do not consider the lack of correspondence to be due to the use of a single rater. While using one rater does not prevent information bias from occurring, a single trained rater eliminates the potential bias resulting from low inter-rater reliability. We do recognize that the small sample size introduces methodological limitations. The frequency-based quartiles created for each survey allowed as direct a comparison as possible for describing musculoskeletal symptoms.
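The quartile-based comparison described above can be sketched briefly. This is a minimal illustration, not the study's analysis code, and the score vectors below are entirely hypothetical:

```python
from statistics import quantiles

def quartile_bins(scores):
    """Assign each score to a frequency-based quartile (1-4) using the
    distribution's own cut points."""
    q1, q2, q3 = quantiles(scores, n=4)  # three quartile cut points
    return [1 if s <= q1 else 2 if s <= q2 else 3 if s <= q3 else 4
            for s in scores]

def percent_agreement(a, b):
    """Fraction of subjects placed in the same quartile by both instruments."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

# Hypothetical posture scores for eight subjects (illustration only)
mrula = [3, 4, 5, 5, 6, 6, 7, 7]
ucc = [10, 14, 9, 16, 12, 18, 11, 20]

agree = percent_agreement(quartile_bins(mrula), quartile_bins(ucc))
print(agree)  # 0.25
```

Placing both instruments on the same quartile scale makes their agreement directly comparable even though their raw score ranges differ.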
Another interesting observation is the temporal variation in the overall effect of posture score on symptoms. In unadjusted models, almost all effects were essentially null when predicting any symptoms, whether or not they reached statistical significance. Most effects were negative or towards the null when predicting moderate or greater symptoms, an unexpected finding. After adjusting for temporal variation (days into the semester and time of day), the relationship between the UC Computer Use Checklist and symptoms changed to be significant and positive (from the null to OR = 1.4 for any symptoms and OR = 1.3 for moderate or greater symptoms).
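This shift from near-null unadjusted estimates to positive adjusted ones is the classical signature of confounding. A minimal sketch, using entirely hypothetical counts (not data from this study), shows how stratum-specific odds ratios can exceed a near-null crude odds ratio once a temporal covariate is accounted for:

```python
def odds_ratio(a, b, c, d):
    """OR for a 2x2 table: a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    return (a * d) / (b * c)

# Hypothetical counts stratified by a temporal covariate
# (e.g. early vs. late in the semester); illustration only.
early = (20, 80, 10, 90)
late = (45, 5, 80, 20)
crude = tuple(x + y for x, y in zip(early, late))  # (65, 85, 90, 110)

print(odds_ratio(*early))            # 2.25 within the early stratum
print(odds_ratio(*late))             # 2.25 within the late stratum
print(round(odds_ratio(*crude), 2))  # 0.93: near-null crude estimate
```

When exposure prevalence and symptom prevalence both shift with time, collapsing over the temporal strata pulls the crude estimate toward the null even though the within-stratum association is positive.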
The directly comparable transformation of mRULA did not change after adjustment when predicting any symptoms, but when time of day was adjusted for (to predict moderate or greater symptoms) the association became significant with a lower (more negative) point estimate. Using the Lueder cut points for mRULA, the adjusted effects were unexpected. The association with left mRULA became positive and statistically significant when predicting any symptoms, while the right mRULA remained at no effect. When predicting moderate or greater symptoms, the association with right mRULA was more negative (OR = 0.4 from 0.6), while that of the left mRULA experienced no change. The temporal variability of effect is an interesting finding, as Ortiz [16] states that a single measure of posture is enough when using goniometers. The developers of the UC Computer Use Checklist recommend that scoring be based on at least two observation periods (one morning and one afternoon); more may be needed if the subject uses different software applications. Temporal variation of these two postural assessment tools has not been reported in previous studies. Further research using these tools to predict symptoms should consider multiple measurements throughout the observation period.
A major finding is that the UC Computer Use Checklist, when adjusted for temporal variation, consistently predicted symptoms in the expected positive direction. This finding makes sense, as the instrument was originally designed for computing environments with the goal of incorporating both non-neutral postures and workspace characteristics when estimating risk for symptoms.
Overall, the direction of association of mRULA with symptoms is inconsistent. The direction of the association of both mRULA scores when modeled for moderate or greater symptoms is consistently negative when adjusted for covariates, with or without interaction terms. This is an unexpected finding. One explanation is that there is a limited range in age and postures relative to the population of workers and type of tasks performed for which the original RULA was developed. Although the activities assessed ranged from gaming to communicating to assignments, the variability of possible postures was small. One reason for the more constricted range of values for the right mRULA (3–7 when 1–7 is possible) is the lack of workstation adjustability in every dorm room that was observed. Based on these scores, all of the students are at risk. This coincides with past findings reported by Tullar et al., who identified 7 potential postural risk factors in a previous survey of college students [19]. Further research using this tool among computer users is needed to understand how it is associated with symptoms. In addition, RULA has specific definitions of neutral postures, which have been called into question recently, as Marcus et al. have demonstrated that postures previously thought to be “neutral” (such as 90 degree angles at the elbow) may not, in fact, be protective. RULA is also limited in that it does not take into consideration specific workstation design fundamentals such as arm supports and the position of devices within the computer workstation [10].
One study limitation is the absence of a comprehensive exposure assessment capturing frequency, duration and intensity [23]. Postural assessment tools provide a measure of exposure intensity, only one of the three dimensions, and our analysis and results are therefore limited to exploring the intensity measure. Specific methods of combining postural risk with duration and frequency have not been developed for the current instruments or by others. However, we did explore the interaction of intensity and duration via our last model. Here we saw stronger associations, suggesting that intensity alone is not powerful; however, when we corrected the models for the temporal parameters, the intensity parameters became important.
The findings presented here were not meant to be generalizable to either student or working populations and, therefore, are of limited generalizability. The current study is a pilot study limited to 30 participants for the purpose of exploring the agreement between two widely used posture surveys and comparing their associations with upper extremity musculoskeletal symptoms. Comparisons of the findings with the working population cannot be made, as the current study is the first to evaluate the association between upper extremity musculoskeletal symptoms and posture surveys designed to predict symptoms. While student exposure patterns experienced in dorm rooms with unadjustable furniture are expected to be similar to work environments with limited adjustability, it is impossible to comment on symptom patterns, as the current study was the first to evaluate symptoms over a semester. Previous studies on working populations found a lack of symptom growth over a workweek [2], as was observed with data from the current study. Our preliminary finding of symptoms remaining fairly steady throughout the day and then increasing after midnight is not incompatible with those observing symptom growth throughout a workday [2], especially since a “workday” for students can conceivably occur well into the early hours. Further evaluation of widely used posture surveys is planned, as we have just completed a prospective cohort study of just over 150 undergraduate students in a single dorm.
Both the modified Rapid Upper Limb Assessment and the UC Computer Use Checklist have been widely used in the field by public health practitioners and ergonomists, in addition to having been designed to assess postural risk for computing-related upper extremity musculoskeletal disorders. Inherent has been the belief that scores from the two surveys on a single observation are in agreement. However, the only data available comparing two posture assessments in the field found that they classified exposure differently [3]. Our data suggest the mRULA and the UC Computer Use Checklist characterize symptoms differently. The UC Computer Use Checklist appears to have a traditional relationship with both symptom outcomes: after adjusting for covariates, increasing scores correspond with increasing point estimates. The mRULA, when directly compared with the UC Computer Use Checklist, has a nontraditional relationship with symptoms: increasing scores are associated with roughly increasing (though non-significant) point estimates that plateau for experiencing any symptoms, but a negative (and statistically significant) association for moderate or greater symptoms. When the mRULA is categorized by the cut points recommended by Lueder and Corlett, for any symptoms both sides suggest a traditional relationship where increasing scores indicate greater risk of symptom occurrence (though significant for the left side and not the right side, which includes mousing postures). For moderate or greater symptoms, the left side mRULA suggests risk is decreasing with increasing scores, and the right side shows no change in risk as scores increase (statistically significant). The relationship between mRULA and symptoms will be re-examined with the longitudinal cohort study, where multiple posture assessments over a semester were recorded on a larger (n = 155) population.
The current work is preliminary, and more research on the temporal relationship between posture assessment and outcomes is needed to understand the roles of these risk assessment tools in predicting symptom occurrence.