The ability to determine athletic performance in varsity athletes using preseason measures has been established. However, the ability of preseason performance measures and athletes' exposure to predict the incidence of injuries is unclear. Thus, our purpose was to determine the ability of preseason measures of athletic performance to predict time to injury in varsity athletes.
Male and female varsity athletes competing in basketball, volleyball and ice hockey participated in this study. The main outcome measures were injury prevalence, time to injury (based on calculated exposure) and pre-season fitness measures as predictors of time to injury. Fitness measures were Apley’s range of motion, push-up, curl-ups, vertical jump, modified Illinois agility, and sit-and-reach. Cox regression models were used to identify which baseline fitness measures were predictors of time to injury.
Seventy-six percent of the athletes reported 1 or more injuries. Mean times to initial injury were significantly different for females and males (40.6% vs. 66.1% of the total season, respectively; p<0.05). A significant univariate correlation was observed between push-up performance and time to injury (Pearson's r=0.332, p<0.01). No preseason fitness measure affected the hazard of injury. Regardless of sport, female athletes had a significantly shorter time to injury than males (hazard ratio=2.2, p<0.01). Athletes playing volleyball had a significantly shorter time to injury (hazard ratio=4.2, p<0.01) than those playing hockey or basketball.
When accounting for exposure, gender, sport and fitness measures, prediction of time to injury was influenced most heavily by gender and sport.
Injuries among varsity athletes pose a significant challenge for both teams and individual players, as demonstrated by surveillance of injuries in the National Collegiate Athletic Association (NCAA) [1-6]. A reportable injury is defined as one resulting from participation in an organized practice or competition and associated with restriction of participation or performance. The opportunity for athletic injury increases with greater time spent participating. Thus, measuring exposure to the possibility of injury in the greatest detail possible, such as minutes played in both practices and games, is an important aspect of injury reporting. Nevertheless, most studies report only the occurrence of injury and do not quantify exposure or the time from baseline evaluation to injury occurrence. The NCAA defines exposure as 1 athlete participating in 1 practice, which allows a rate of injury to be determined (number of injuries / 1000 athlete exposures), but does not account for the different exposure encountered by different athletes during practices and games.
Fitness evaluation and pre-participation evaluations (PPE; medicals) are standard practice in university sport. The goal of these medical evaluations is to screen athletes for cardiac, musculoskeletal, and high-risk behaviour concerns. Current standards for the musculoskeletal (MSK) part of the PPE include both a medical history and a screening physical exam, with the history being particularly important for identifying previous injuries. Current evidence indicates that the efficacy of the physical screening exam in detecting pathology and abnormalities or predicting future injury is limited. However, some tests commonly used to evaluate preseason fitness may be prognostic factors for injury in varsity athletes [12,13].
Specifically, suboptimal musculoskeletal physical characteristics such as flexibility, muscle strength, muscle endurance, muscle power, and agility have been shown to be risk factors for injury in sports [1,13-18]. For example, flexibility is associated with an enhanced ability to dissipate and absorb forces and accommodate stress, thereby reducing injury, and strength/fatigue resistance, as well as agility, have been shown to prevent injury. In contrast, vertical jump ability, although beneficial to sport performance, may increase the risk of injury due to landing forces [22-24]. Thus, fitness and screening have been shown to be predictive of injury occurrence; however, it is unclear whether preseason fitness performance influences the time to injury.
Thus, the purpose of this study was to determine the utility of preseason fitness measures for predicting risk of injury in varsity athletes when accounting for estimates of athletes' exposure. We hypothesized that suboptimal preseason performance scores would be predictive of the time to injury during the competitive varsity season.
All participants (eligibility defined in Participants) completed a battery of preseason fitness tests during the month preceding the start of their competitive season. Exposure data was estimated retrospectively, based on available data and expert assessment, to determine the amount of time athletes were exposed to the possibility of injury in practice or competition. Injury data was obtained retrospectively using a self-administered injury report survey completed by participants at post-season team meetings, for which attendance of all athletes (injured or not) was mandatory.
Testers were trained exercise specialists with experience conducting fitness testing according to accepted protocols. A certified exercise physiologist with 10 years of experience in fitness testing was present at all test sessions to supervise the standard testing protocol. Fitness tests were administered in the same order for all athletes to control for the effects of accumulating fatigue on subsequent performance tests: vertical jump, sit and reach, modified Illinois agility, push-ups, curl-ups, and the Apley's range of motion test. Standardized guidelines were followed in the administration of all tests [21,25,26]. All fitness tests included have shown adequate reliability for drawing inferences about individuals (ICC>0.9) [21,27-30].
The sample consisted of 6 varsity teams: men's hockey (n=23), women's hockey (n=18), men's volleyball (n=14), women's volleyball (n=10), men's basketball (n=9) and women's basketball (n=12). This sample includes sports in which participation involves both the upper and lower extremities and in which both genders compete. Descriptive data is presented in Table 1. All 86 participants provided written informed consent to participate in accordance with guidelines from the Research Ethics Board for the Faculty, in the spirit of the Declaration of Helsinki. To be eligible for inclusion, an athlete must have participated in a varsity game or practice during the 2008-2009 season and have data for both the preseason fitness testing and the post-season injury survey. At the time of the preseason fitness testing, all athletes were medically cleared to participate as determined by their Pre-Participation Evaluation (PPE). Each PPE was completed by the athlete and referring physician, then submitted to the head therapist for review. The head therapist then either cleared each athlete for participation (no flagged issues on the PPE) or arranged additional referrals or evaluations for issues flagged on the PPE. Athletes requiring follow-up were cleared for participation only once those referrals and evaluations were completed.
The vertical jump test estimated instantaneous anaerobic power and lower body strength. No run-up, step-up, or pre-jump was permitted. One familiarization jump was allowed to ensure the participant executed the correct technique. Three testing trials were performed with a rest period of 10-15 seconds between each trial. A Vertec device was used to measure jump height. The trial with the greatest jump height was recorded to the nearest 0.5 inch. The maximal jump height was converted to centimetres, and the difference between the maximum jump height and the athlete's standing reach height was recorded as the vertical jump height (cm).
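As a concrete illustration, the scoring rule described above (best of three trials, converted from inches to centimetres, minus standing reach) can be sketched as follows; the function name and the example values are hypothetical:

```python
# Sketch of the vertical jump scoring described in the protocol above.
# Touch heights and standing reach are in inches (recorded to 0.5 in);
# the result is the jump height in centimetres.

INCH_TO_CM = 2.54

def vertical_jump_cm(trial_touch_heights_in, standing_reach_in):
    """Return vertical jump height in cm from trial touch heights (inches)."""
    best_touch_in = max(trial_touch_heights_in)   # best of three trials
    return (best_touch_in - standing_reach_in) * INCH_TO_CM

# Hypothetical example: touches of 112.0, 114.0 and 113.0 in
# with a standing reach of 96.0 in
print(round(vertical_jump_cm([112.0, 114.0, 113.0], 96.0), 1))  # → 45.7
```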
The sit and reach test was used to evaluate lower back and hip flexibility. Participants were encouraged to stretch their hamstrings prior to testing, although no formal warm-up was administered. Participants were asked to sit with their legs fully extended and ankles at 90°, and to reach forward as far as possible with outstretched arms and fingers by bending at the back and hips. The maximal position of flexion had to be held for 2 seconds; 2 trials were performed with a 10-15 second rest between them. The maximum sit and reach distance was recorded to the nearest 0.5 cm.
The modified Illinois agility test evaluated the participants' ability to accelerate, decelerate, turn in different directions, and run at different angles. The test was administered as described by Vescovi et al., where four cones were used to mark the rectangular agility area (10 meters long, 5 meters wide). Another 4 cones were placed along the centre of the area at a distance of 3.3 m apart. Participants began on their own time, and the tester started the stopwatch at the participant's initial movement. Participants ran straight to the second cone, zigzagged through the series of cones in the centre, and finished by running around the final 2 cones of the course. Timing stopped when the participant crossed the finish line. Two trials were permitted with a 1-2 minute rest period in between, and the best time was recorded with a handheld stopwatch to the closest hundredth of a second.
The push-up test was used to evaluate upper-body endurance, specifically of the pectoralis major, anterior deltoids, and triceps. No pause was allowed at elbow extension, and a self-selected tempo had to be maintained throughout the test. The number of repetitions was counted until fatigue or until the participant was unable to maintain proper technique over 2 consecutive repetitions.
The curl-up test evaluated the endurance of the abdominal musculature that provides a foundation for trunk and spine stability. The test was administered to a metronome beat set at 50 beats per minute (25 curl-ups per minute). One warning about improper form was permitted prior to final termination of the test. The number of curl-ups that met the proper technique criteria was recorded. The partial curl-up test was used because it produces the highest activation of the abdominal musculature (rectus abdominus, external obliques, and lower abdominal stabilizers) with minimal activation of the hip flexors.
The Apley's ROM test assessed shoulder flexibility, specifically medial rotation with adduction and lateral rotation with abduction. The test reflects functional capacity at the shoulder. Participants were required to remove their shirt to bare skin or a bra top. The tester held a measuring stick (cm) with the 20 cm mark at the level of the inferior angle of the scapula. Participants were first instructed to reach into lateral rotation and abduction with the palm facing anteriorly, and second to reach into medial rotation and adduction with the palm facing posteriorly. Participants repeated the same motions on the other side. Measurements were recorded as a positive (good range) or negative (limited range) value measured from the level of the inferior angle (20 cm mark) to the level of the 3rd digit, to the nearest 0.5 cm.
Exposure was quantified as the total amount of time participants were exposed to the possibility of injury in practice or competition prior to injury.
Practice exposure time was determined by totalling the amount of team practice time (in minutes) that individual athletes participated in prior to the occurrence of their first injury. Team practice time was calculated using each team's practice schedule throughout the season. Due to the retrospective nature of this analysis, and because practices are mandatory, the estimate of practice exposure is based on the assumption that athletes were exposed for the full duration of each scheduled practice up until the day on which injury occurred. Absence from practice due to illness or personal reasons was not documented.
Because game exposure was estimated retrospectively, it was determined differently for each sport using the best available data. Individual estimates of game exposure were calculated to account for the significant differences in playing time that a starter or first-line player might have compared to a substitute or fourth-line player. For ice hockey teams, game exposure was derived from coach-estimated average playing time per game per athlete at the end of the competitive season. Using this estimate, total game exposure time per participant was calculated up to the date of injury using the team game schedule. Game exposure time for basketball players was taken directly from game statistics, which recorded the number of minutes each individual played per game. For volleyball players, game exposure was calculated by multiplying the number of sets each athlete played per game by the number of rallies within those sets, the team's average rally time, and a coach-estimated average percentage of court time per set that each athlete played over the season. This last variable accounts for differences in court time not recorded by game statistics. Sets played and numbers of rallies were retrieved from game statistics. Average rally time per team was determined by reviewing 5 rallies from each of 3 randomly chosen videotaped games from both the men's and women's 2008-2009 season games.
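The volleyball estimate above is a simple product of four quantities. A minimal sketch, with hypothetical numbers (sets and rallies would come from game statistics, rally time from video review, and the court-time fraction from coach estimates):

```python
# Sketch of the volleyball game-exposure estimate described above.
# exposure (min) = sets x rallies/set x avg rally time (s) x court-time fraction / 60

def volleyball_exposure_min(sets_played, rallies_per_set, avg_rally_s,
                            court_time_fraction):
    """Estimated game exposure in minutes for one athlete."""
    total_rallies = sets_played * rallies_per_set
    return total_rallies * avg_rally_s * court_time_fraction / 60.0

# Hypothetical example: 40 sets, 45 rallies per set, 7 s average rally,
# on court for 80% of each set
print(round(volleyball_exposure_min(40, 45, 7.0, 0.8), 1))  # → 168.0
```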
The survey was modified from the National Collegiate Athletic Association (NCAA) Injury Surveillance System to be completed by the athletes, with data capture using the Cardiff Teleform system (New England Survey Systems, Brooklyn, MA). The survey was simplified and re-formatted so that participants could effectively report injuries retrospectively. Face validity of the survey was assessed with 2 men's league soccer teams and 2 women's recreational basketball teams; feedback on readability and time to complete the survey was collected, and appropriate changes were made. Injury data was not retrieved from the central care training room athletes' records, because central care only records injuries of those athletes who make a formal appointment to see an athletic therapist; using only central care data, we could have missed injuries that were treated outside of central care. Our strategy is supported by recent research showing that a self-report internet survey of injury reporting by parents of adolescent soccer players captured more injuries than athletic-therapist-reported injuries. In addition, calendars of each team's pre-season and competitive season were distributed at their respective meetings to help athletes determine the exact date at which the injury occurred. Furthermore, trainers were present to assist athletes in recalling the injuries that occurred, the type of each injury, and when it occurred.
Descriptive analysis consisted of means and standard deviations of subjects' preseason performance. Frequencies were estimated for each type and location of injury. Univariate analysis of the ability of the preseason measures to predict time without injury was completed using Pearson correlation coefficients.
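The univariate step is an ordinary Pearson correlation between each preseason score and time to injury. A self-contained sketch with hypothetical data (a positive r would mean higher preseason scores are associated with a longer time to injury):

```python
# Minimal Pearson correlation, as used in the univariate analysis above.
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical example: push-up counts vs. time to injury (% of season)
pushups = [20, 25, 30, 35, 40]
time_pct = [30.0, 45.0, 50.0, 60.0, 75.0]
print(round(pearson_r(pushups, time_pct), 3))
```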
Proportional-hazards (Cox) multivariate regression analysis was used to determine the predictive ability of each preseason measure for the hazard of injury while taking into account gender and sport. The primary dependent variable in this study was time to initial injury, or to the end of the season for athletes who were not injured. Time to initial injury was reported as the percentage of exposure in games and practices from the total season duration prior to injury. For athletes who were not injured, 100% exposure was assumed. Baseline, or time zero, was defined as the first official practice for each team. The independent variables in this study were the 5 preseason baseline measures (vertical jump, sit and reach, Apley's ROM test, push-up test and modified Illinois agility test). Hazard is defined as the probability of being injured at a given point in time, having remained uninjured up until that point. Cox regression accounts for different amounts of exposure to injury amongst participants. It enables the difference between survival times (or time remaining injury free) of particular groups of participants to be predicted while allowing other prognostic factors or covariates to be considered at the same time. The preseason scores were entered into the model as continuous variables. Upper and lower extremity range of motion data was entered in the analysis as a distance from the mean, reflecting the hypothesis that either too much or too little ROM is a risk for injury. Hazard ratios were reported with 95% confidence intervals (95% CI). Statistical analysis was completed using SPSS v18 (SPSS Inc.) with an alpha level of 0.05.
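The hazard ratios reported in the Results are exponentiated Cox model coefficients, HR = exp(β), with 95% CIs of exp(β ± 1.96·SE). A sketch of that relationship (the coefficient and standard error below are hypothetical, chosen to reproduce a ratio of roughly 2.2):

```python
# Hazard ratio and 95% CI from a Cox regression coefficient: HR = exp(beta).
import math

def hazard_ratio(beta, se):
    """Return (HR, CI lower bound, CI upper bound) for a coefficient and its SE."""
    hr = math.exp(beta)
    lo = math.exp(beta - 1.96 * se)
    hi = math.exp(beta + 1.96 * se)
    return hr, lo, hi

# Hypothetical example: beta = 0.788 for female gender gives HR ≈ 2.2,
# i.e. at any moment in the season an uninjured female athlete is about
# 2.2 times as likely as an uninjured male to be injured next.
hr, lo, hi = hazard_ratio(0.788, 0.25)
print(round(hr, 1))  # → 2.2
```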
A total of 46 male and 40 female athletes completed the preseason baseline measures (Table 2) and the post-season injury survey. Seventy-six percent of athletes reported one or more injuries during the 2008-2009 season (Table 3); over half of these were new injuries (63.1%), while 29% were recurrences of past injuries. Injuries to the lower extremity were the most common, with 33.8% of athletes reporting injury to this area. The most common injury type reported was incomplete muscle or tendon strain (23.1%).
Most injuries occurred during the regular season (61.5%); 33.8% occurred during the preseason, and 4.6% during the post-season (Table 4). The majority of athletes did not miss any games due to their injury (73.8%); however, most missed 1 or more practices (55.4%). All but 1 athlete sought some form of medical attention following injury (98.5%).
Injuries occurred in a greater percentage of games (11.8%) than practices (6.9%). Over half of practice injuries occurred during regular-season practice (57.9%); the remainder occurred in preseason practice (42.1%), and none occurred in post-season practice. Two-thirds of game injuries occurred during regular-season competition (66.6%), 22.2% occurred in preseason games, and 11.1% in post-season games (Table 4).
Mean times to initial injury were significantly shorter for female (40.6% of the total season) than for male athletes (66.1%). Mean times to initial injury, reported as a percentage of the total varsity participation, were significantly shorter for volleyball players (27.4%) than for hockey players (68.5%) or basketball players (57.6%) (Table 5). Kaplan-Meier survival curves for each team, showing the percentage of athletes not injured at different percentages of the total varsity participation, are presented in Figure 1. Female and male volleyball players had the shortest mean times to injury (19.3% and 35.7%, respectively), followed by women's hockey (48.9%) and women's basketball (45.9%). Men's basketball had a mean time to initial injury of 66.8%, and male hockey players had the longest mean time to initial injury at 83.8%.
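The survival curves in Figure 1 follow the standard Kaplan-Meier (product-limit) construction: at each injury the survival estimate is multiplied by the fraction of the remaining risk set left uninjured, while athletes who finish the season uninjured are censored at 100%. A minimal sketch with hypothetical data:

```python
# Minimal Kaplan-Meier estimator, as behind the survival curves in Figure 1.
# Times are percent of season; injured[i] = False marks a censored athlete
# (uninjured at season end). Ties are handled one event at a time, which is
# mathematically equivalent to the grouped (n - d)/n form.

def kaplan_meier(times, injured):
    """Return [(time, survival)] step points for each injury event."""
    at_risk = len(times)
    surv = 1.0
    steps = []
    for t, event in sorted(zip(times, injured)):
        if event:                      # an injury occurred at time t
            surv *= (at_risk - 1) / at_risk
            steps.append((t, round(surv, 3)))
        at_risk -= 1                   # injured or censored: leaves the risk set
    return steps

# Hypothetical example: five athletes, three injured, two censored at 100%
times = [20.0, 35.0, 60.0, 100.0, 100.0]
injured = [True, True, True, False, False]
print(kaplan_meier(times, injured))
```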
Only 1 significant univariate correlation was observed between preseason test scores and time to injury (Table 6): weaker upper extremity strength (fewer push-ups) was associated with a shorter time to injury (Pearson's r=0.33, p<0.01).
Proportional-hazards (Cox) regression was used to determine the effect of each independent variable on the hazard of injury. Cox regression analysis including gender and sport as covariates revealed that none of the preseason measures of strength, range of motion, and agility significantly impacted the hazard of injury. Only vertical jump performance showed a trend toward impacting injury hazard: as vertical jump scores increased, so did the hazard of injury (HR 1.04, p<0.10). The Cox regression model predicting time to injury using only gender and team as predictors is reported in Table 7. Regardless of sport, female athletes had a significantly shorter time to injury than males (HR=2.2, p<0.01). Athletes playing volleyball had a significantly shorter time to injury than those playing either basketball or hockey (HR=4.2, p<0.01).
This study sought to develop a predictive model for injury in varsity athletes, using preseason performance measures, in sports where both the upper and lower extremities are used. The study accounted for estimated exposure to the risk of injury for male and female basketball, volleyball, and hockey athletes. Overall, the results indicated that female gender is the most predictive factor in determining time to injury in games or practice, regardless of preseason performance. Further, volleyball players had a significantly shorter time to injury than athletes in the other sports studied.
Univariate associations suggested that athletes with limited upper extremity endurance, as demonstrated by low push-up performance, had a shorter time to injury. However, when accounting for the relationship between gender and push-up performance in the multivariate model, female gender better predicted shorter time to injury. Similarly, Augustsson et al. found that males performed significantly more push-ups than females and had 44% greater upper-body endurance strength. They suggested that females who train upper-body endurance may be more likely to avoid injury. In this study, push-up data alone was not sufficient to predict time to injury. However, our results provide some support for investigating the belief that strength and conditioning in athletes may be a good prevention strategy, not only for injury occurrence but also for time to injury.
Vertical jump was the only other preseason performance measure showing potential to predict time to injury: athletes with higher vertical jump scores showed a trend toward being injured earlier. This finding may appear counterintuitive, given that greater muscular or aerobic performance scores are typically desirable in sports. However, Martel et al. claim that plyometric training of high intensity and high impact to improve vertical jump height increases the possibility of muscle damage and injury. Additionally, Hewett et al. identified female athletes with higher vertical jump performance scores as having higher-risk landing profiles, and found that high-jumping females were more likely to incur injuries, especially at the knee. In volleyball and basketball, high vertical jumps are desirable; however, this research and the studies previously mentioned [22-24] suggest that this performance attribute may be associated with increased risk of injury. Additionally, athletes with the greatest vertical jump height are likely to play more games and thus have greater exposure to injury in competitive situations. Therefore, the combination of increased performance and exposure may put athletes in starting positions at higher risk of injury compared to other players.
Of the injuries collected over the 2008-2009 varsity season, a greater percentage occurred in games (11.8%) than in practices (6.9%). This finding is consistent with data from the NCAA, where athletes were 3.5 times more likely to sustain an injury during a game than during practice. Both our study and the NCAA study found that game injuries were greatest for athletes involved in contact sports, compared to sports traditionally considered “non-contact” such as volleyball. Contacts are more frequent in game situations.
In the present study, lower extremity injury was more prevalent than upper extremity injury in hockey, volleyball, and basketball, which is consistent with previous findings [40-43]. The most common types of injuries across all sports examined were muscle strain (23%), ligament sprain (20%), and tendonitis (12%). These findings are consistent with other studies reporting muscle sprain/strain as the most prevalent injury [41-43]. However, Hootman et al. report ligament sprain (14.8%) as the most prevalent injury. This discrepancy may be due to the difference in injury definition used: Hootman et al. only counted injuries that resulted in limitation of participation in competition or practice, while we defined an injury as one that could have resulted in limited participation, or in the athlete seeking medical attention. We may therefore have captured more injuries, as some participants may have incurred an injury and sought medical treatment but did not restrict themselves from practice.
Our study found that females were more likely to be injured and had a shorter mean time to injury than males. These results are consistent with the majority of studies in the literature [13,39,44-46]. For example, Murphy et al. found a discrepancy between male and female injury incidence among diverse populations, with females at greater risk than males when identifying risk factors for lower extremity injury. On the other hand, others have found that gender was not an important factor in sport injuries for athletes involved in volleyball, basketball, soccer, wrestling and running. One reason for this difference may be that men are less likely to report injuries. The increased likelihood of females to report injuries may be a cultural bias that should be considered in future studies relying on self-reporting of injuries. If real, this bias likely influenced only the injury prevalence reporting and not the time to injury in our study.
To our knowledge, no previous studies have examined gender differences with respect to mean time to injury; thus, our finding that females' time to injury was shorter than males' is novel. We are confident in this finding because, although our exposure estimate may not be exact, the error in estimating exposure should be the same for males and females. Others have identified additional risk factors for injury in females.
Our findings demonstrate that both men's and women's volleyball had the highest rate of injury and the shortest time to injury. Our rates are lower than the longitudinal NCAA data for women's volleyball, which reported 4 injuries per 1000 athlete exposures. The differences might be due to sample size, and further longitudinal data collection with Canadian varsity athletes might elucidate any real differences from the NCAA data. However, it is noteworthy that we cannot directly compare exposure, because the NCAA exposure data is a frequency measure defined as 1 athlete participating in 1 practice, whereas our exposure is a calculated estimate in minutes. A noted limitation of the NCAA data is the sensitivity of its exposure estimate, whereby “future authors should use a finer level of exposure measurement, such as player-minutes”. Our estimate of exposure in minutes allows time to injury to be investigated, with the finding that volleyball has a significantly shorter time to injury than basketball or hockey.
Accounting for and documenting practice and game exposure when predicting injury is a novel aspect of this study. The potential predictors of injury were measured before the exposure period. However, injury information was collected retrospectively, with a risk of recall bias. For feasibility reasons, we based the injury survey on the NCAA Injury Surveillance System, modified so that an athlete could complete it retrospectively. A member of the research team was present to answer questions or clarify items when the survey was administered. Furthermore, practice and competition schedules were used to help athletes remember injury dates and to reduce any recall bias. Others have shown that retrospective tracking of injuries is sufficiently accurate in athletes. Because we asked the athletes themselves, our injury survey also ensured that we captured all injuries, including those that might not have been reported to athletic therapy staff in a prospective injury tracking system.
We combined upper and lower extremity injuries when examining the relationship between preseason performance measures and injury occurrence. This may have reduced the predictive power of the preseason performance measures used in this study because some preseason measures may only be predictive of specific injuries. However, the intent was to assess the ability of the preseason measures to identify athletes at risk of any injuries in sports involving both upper and lower extremity activities.
Missing data may have influenced the results. Due to a scheduling conflict, men's hockey did not complete the modified Illinois agility test. Other missing fitness measures (<3 per measure) were due to either scheduling conflicts or exclusion of invalid results when instructions were not followed.
Future studies should continue to utilize a comprehensive approach to document time exposed to injury. Larger sample sizes would help confirm whether preseason measures are significant predictors of specific injuries in multivariate models that still account for gender and sport. Alternatively, because we found differences between genders and between sports, we recommend focussing on only one gender and one sport when planning studies on prediction of time to injury. Males and females practicing different sports may have different backgrounds in terms of training and previous injuries, and different sports involve different activities; poor scores on fitness measures may predict shorter time to injury when athletes are exposed to risky activities more prevalent in one sport than another. Nevertheless, our results show that the preseason measures selected in the present study were not strong predictors of short time to injury over and above gender and sport, in sports selected because they involve both upper and lower extremity use. Our results provide novel evidence that gender and sport are key predictors of time to injury in our selected varsity sports. To our knowledge, this information was lacking from the literature, as analyses predicting time to injury are rare in the varsity sport literature.
This likely means that a different predictor set will be required for each sport, even though, for simplicity, varsity programs may have benefited from using a common test battery for all sports and genders.
In the future, it may be beneficial to examine the predictive ability of additional performance measures; however, to ensure the feasibility of preseason screening, added measures should be quick to administer and require little equipment. Identifying more performance measures with the ability to predict injury is of utmost value for varsity athletics. Prospective collection of injury information and documentation of exposure in minutes should also be considered, as should the impact of other conditions that may influence participation in practices or games when estimating exposure. Finally, the modified varsity sport injury survey could be compared to prospective injury recording in order to confirm its validity.
Our study found the prevalence of injury types and location to be similar to those found in other studies. When accounting for exposure, gender and sport, performance measures were not found to be significant predictors of time to injury. Time to injury was influenced most heavily by gender and sport. Our study did not support the hypothesis that baseline performance measures would predict time to injury.
Michael D Kennedy, Robyn Fischer, Kristine Fairbanks, Lauren Lefaivre, Lauren Vickery, Janelle Molzan, Eric Parent contributed equally.
The author(s) declare that they have no competing interests.
MDK contributed to the design, analysis and drafting the manuscript; RF contributed to the design, created the injury surveillance instrument, analyzed and wrote results and helped draft manuscript; KF contributed to the overall design, data collection and manuscript preparation; LL assisted with the conception of the project, collected data and oversaw the development of the methods; LV assisted with the overall design, oversaw data collection and assisted in manuscript preparation; JM assisted in overall design and development of the methods and exposure data collection; EP oversaw the design development, the statistical analysis and assisted in the drafting of the manuscript. All authors read and approved the final manuscript.