Although it is common practice to administer pre-participation examinations (PPEs) to athletes before training, there is no clearly established format. Elements integral to the PPE fall within the scope of physical therapist practice and are often categorized as a form of primary prevention for musculoskeletal disorders, as defined in the Guide to Physical Therapist Practice.
The purpose of this study is to describe the design and implementation of a PPE for a women's professional (gridiron) football team. The findings from this PPE provide one of the first musculoskeletal profiles, together with selected physical characteristics, of members of a female professional football team.
Players from the Kentucky Karma women's football team, a member of the National Women's Football League (NWFA), volunteered to participate in a PPE. Of twenty-five eligible team members, thirteen consented to participate. The PPE consisted of a health history questionnaire, a musculoskeletal screening, and a series of physical performance and agility tests.
The players' average (± SD) age, height, weight, body mass index (BMI), and body fat percentage were 29.6 (± 5.6) yrs., 1.66 (± 0.05) m, 66.8 (± 12.6) kg, 24.1 (± 3.7) kg/m², and 27.4 (± 6.6) %, respectively. Commonly reported injuries were similar to those reported in men's collegiate football.
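As a quick plausibility check on these group means, BMI is weight in kilograms divided by height in metres squared; a minimal sketch using the means reported above (which need not reproduce the mean of the individual BMIs exactly):

```python
# BMI = mass (kg) / height (m)^2, here computed from the reported group means.
mean_height_m = 1.66
mean_weight_kg = 66.8

bmi_from_means = mean_weight_kg / mean_height_m ** 2
print(f"BMI from group means: {bmi_from_means:.1f}")  # ~24.2, close to the reported 24.1
```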
This is one of the first papers to report on a model PPE for a women's professional football team. Future research is needed to establish a standard PPE, recognize common injuries, and develop prevention strategies unique to women's professional football.
musculoskeletal screening; female athlete; performance testing
Context: Estimates suggest that more than 5.5 million youths play football annually, and 28% of youth football players (age range = 5 to 14 years) are injured each year, resulting in more than 187 000 emergency room visits.
Objective: To analyze time-loss (TL) and non–time-loss (NTL) injury patterns across age groups in youth football players.
Design: Two-year observational cohort.
Setting: Two midwestern communities, including players from the fourth through eighth grades and between the ages of 9 and 14 years.
Patients or Other Participants: A total of 779 players participated, including 296 in grades 4 and 5; 203 in grade 6; 188 in grade 7; and 92 in grade 8. (Players in the fourth and fifth grades participated on the same teams, so we considered them as a single group.)
Main Outcome Measure(s): Injury frequencies and exposures were collected by certified athletic trainers present at each practice and game and used to calculate injury rates with 95% confidence intervals (CIs) for both TL and NTL injuries across age groups.
Results: A total of 474 injuries and 26 565 exposures were identified. Injuries were reported by 36.5% of the players, with 14.4% reporting more than 1 injury in a season. The overall injury rate per 1000 athlete-exposures (A-Es) was 17.8 (95% CI = 16.3, 19.5). The injury rate increased with each succeeding grade from 14.3 per 1000 A-Es (95% CI = 12.1, 16.9) in grades 4 and 5 to 21.7 per 1000 A-Es (95% CI = 17.2, 27.3) in grade 8. A total of 58.6% of all injuries were NTL. Non–time-loss injuries accounted for 70.1% of the injuries reported by fourth and fifth graders, 55.1% by sixth graders, 64.0% by seventh graders, and 33.8% by eighth graders. The cumulative NTL injury rate was 10.5 per 1000 A-Es (95% CI = 9.3, 11.8), and the TL injury rate was 7.4 per 1000 A-Es (95% CI = 6.4, 8.5).
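The rates above are simple quotients of injury counts and athlete-exposures, scaled to 1000. A minimal sketch reproducing the overall figure, assuming a log-scale Wald interval for the 95% CI (the abstract does not state which interval method was used):

```python
import math

injuries = 474
exposures = 26_565  # athlete-exposures (A-Es)

rate = injuries / exposures * 1000  # injuries per 1000 A-Es
# Wald interval on the log-rate scale: rate * exp(+/- 1.96 * sqrt(1/count))
se_log = math.sqrt(1 / injuries)
lo, hi = rate * math.exp(-1.96 * se_log), rate * math.exp(1.96 * se_log)
print(f"{rate:.1f} per 1000 A-Es (95% CI = {lo:.1f}, {hi:.1f})")
# -> 17.8 per 1000 A-Es (95% CI = 16.3, 19.5), matching the reported overall rate
```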
Conclusions: Youth football players sustained more NTL injuries than TL injuries. We recommend that a first-aid–certified coach or league official be present at all games and practices.
youth sports; epidemiology; injury incidence
To compare the risk of injury in elite football played on artificial turf with that on natural grass.
Prospective two‐cohort study.
Male European elite football leagues.
290 players from 10 elite European clubs that had installed third‐generation artificial turf surfaces in 2003–4, and 202 players from the Swedish Premier League acting as a control group.
Main outcome measure: incidence of injury during training and match play.
The incidence of injury during training and match play did not differ between surfaces for the teams in the artificial turf cohort: 2.42 v 2.94 injuries/1000 training hours and 19.60 v 21.48 injuries/1000 match hours for artificial turf and grass respectively. The risk of ankle sprain was increased in matches on artificial turf compared with grass (4.83 v 2.66 injuries/1000 match hours; rate ratio 1.81, 95% confidence interval 1.00 to 3.28). No difference in injury severity was seen between surfaces. Compared with the control cohort who played home games on natural grass, teams in the artificial turf cohort had a lower injury incidence during match play (15.26 v 23.08 injuries/1000 match hours; rate ratio 0.66, 95% confidence interval 0.48 to 0.91).
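The rate ratios quoted here are ratios of two incidence rates, and their confidence intervals are conventionally computed on the log scale from the underlying event counts. A sketch of that standard calculation; the counts below are hypothetical, chosen only to mirror the reported ankle-sprain rates, so the interval differs slightly from the published one:

```python
import math

def rate_ratio_ci(events_a, hours_a, events_b, hours_b, z=1.96):
    """Rate ratio of two Poisson rates with a log-scale Wald 95% CI."""
    rr = (events_a / hours_a) / (events_b / hours_b)
    se_log = math.sqrt(1 / events_a + 1 / events_b)
    return rr, rr * math.exp(-z * se_log), rr * math.exp(z * se_log)

# Hypothetical counts and exposures that reproduce 4.83 v 2.66 per 1000 match hours.
rr, lo, hi = rate_ratio_ci(events_a=24, hours_a=4969, events_b=18, hours_b=6767)
print(f"rate ratio {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")  # ~1.82 (0.99 to 3.34)
```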
No evidence of a greater risk of injury was found when football was played on artificial turf compared with natural grass. The higher incidence of ankle sprain on artificial turf warrants further attention, although this result should be interpreted with caution as the number of ankle sprains was low.
injuries; football; surface properties; soccer; artificial turf
Methods: During the preseason, 10 Australian football clubs volunteered 23 teams to participate in a protective equipment randomised controlled trial, the Australian Football Injury Prevention Project (AFIPP). All players from these teams were invited to participate. Players who did not agree to participate in AFIPP were surveyed about their reasons for non-involvement.
Results: 110 football players (response rate 63.6%) completed the non-responder survey and cited the two main reasons behind non-involvement in the project as "I did not know about the project" (39.4%) and "I was not at training when the research team visited" (36.5%).
Conclusions and implications: Preseason may not be the best time for maximal player recruitment in community based sports safety research. Enhanced communication between researchers and players at community level football clubs during the recruitment phase is likely to improve response rates.
Background: The Australian football injury prevention project (AFIPP) was a randomised controlled trial examining the effects of protective equipment on injury rates in Australian Football.
Objective: To present the results of the AFIPP baseline survey of community football players' attitudes towards protective equipment.
Methods: Teams of players were recruited from the largest community football league in Victoria, Australia, during the 2001 playing season; 301 players were enrolled in the study and all were surveyed before the season began about their attitudes towards protective headgear and mouthguards.
Results: Almost three quarters of the players (73.6%) reported wearing mouthguards during the previous playing season (year 2000) compared with only 2.1% wearing headgear. The most common reasons for not wearing headgear and mouthguards (in non-users) were: "I don't like wearing it" (headgear: 44.8%; mouthguards: 30.6%), and "It is too uncomfortable" (headgear: 40.7%; mouthguards: 45.8%).
Conclusions: The higher mouthguard usage reflects the favourable attitudes towards mouthguards by Australian football players generally. Similarly, the low headgear usage reflects the low acceptance of this form of protection in this sport. Further research should be directed towards establishing the reasons why players seem to believe that headgear plays a role in injury prevention yet few wear it.
To determine the level of pre‐employment, pre‐season, and post‐injury medical evaluation of players undertaken within UK professional team sports.
A postal, whole population survey.
Elite professional sports teams in England.
Six groups comprising the following clubs: professional football (Premiership, 15 of 20; Championship, 22 of 24), rugby union (Premiership, 9 of 12; Division 1, 11 of 14), rugby league (Super League, 6 of 11) and cricket (County, 12 of 18).
Main outcome measures
Number (percentage) of clubs recording players' medical history and undertaking medical examinations of players' cardiovascular, respiratory, neurological, and musculoskeletal systems at pre‐employment, pre‐season and post‐injury.
The overall response to the survey was 74%, with a range from 55% to 92% among groups. Almost 90% of football (Premiership and Championship) and rugby union (Premiership) clubs took a pre‐employment history of players' general health, cardiovascular, respiratory, neurological, and musculoskeletal systems, but fewer than 50% of cricket and rugby union (Division 1) clubs recorded a history. The majority of football (Premiership and Championship) and rugby union (Premiership) clubs implemented both cardiovascular and musculoskeletal examinations of players before employment. Fewer than 25% of clubs in any of the groups implemented neurological examinations of players at pre‐employment, although 100% of rugby union (Premiership) and rugby league clubs implemented neurological testing during pre‐season.
None of the sports implemented best practice guidelines for the preparticipation evaluation of players at all stages of their employment. Departures from best practice guidelines and differences in practices between clubs within the same sport leave club physicians vulnerable if their players sustain injuries or ill health conditions that could have been identified and avoided through the implementation of a preparticipation examination.
preparticipation evaluation; professional sport; risk factors for injury; medical risk factors
Objectives: To undertake a prospective epidemiological study of the injuries sustained in English youth academy football over two competitive seasons.
Methods: Player injuries were recorded by medical staff at 38 English football club youth academies. A specific injury audit questionnaire was used together with a weekly return form that documented each club's current injury status.
Results: A total of 3805 injuries were reported over two complete seasons (June to May) with an average injury rate of 0.40 per player per season. The mean (SD) number of days absent for each injury was 21.9 (33.63), with an average of 2.31 (3.66) games missed per injury. The total amount of time absent through injury equated to about 6% of the player's development time. Players in the higher age groups (17–19 years) were more likely to receive an injury than those in the younger age groups (9–16 years). Injury incidence varied throughout the season, with training injuries peaking in January (p<0.05) and competition injuries peaking in October (p<0.05). Competition injuries accounted for 50.4% of the total, with 36% of these occurring in the last third of each half. Strains (31%) and sprains (20%) were the main injury types, predominantly affecting the lower limb, with a similar proportion of injuries affecting the thigh (19%), ankle (19%), and knee (18%). Growth related conditions, including Sever's disease and Osgood-Schlatter's disease, accounted for 5% of total injuries, peaking in the under 13 age group for Osgood-Schlatter's disease and the under 11 age group for Sever's disease. The rate of re-injury of exactly the same anatomical structure was 3%.
Conclusions: Footballers are at high risk of injury and there is a need to investigate ways of reducing this risk. Injury incidence at academy level is approximately half that of the professional game. Academy players probably have much less exposure to injury than their full time counterparts. Areas that warrant further attention include the link between musculoskeletal development and the onset of youth related conditions such as Sever's disease and Osgood-Schlatter's disease, the significant number of non-contact injuries that occur in academy football, and the increased rates of injury during preseason training and after the mid season break. This study has highlighted the nature and severity of injuries that occur at academy level, and the third part of the audit process now needs to be undertaken: the implementation of strategies to reduce the number of injuries encountered at this level.
Football (soccer) is endorsed as a health-promoting physical activity worldwide. When football programs are introduced as part of general health promotion programs, equal access and limitation of pre-participation disparities with regard to injury risk are important. The aim of this study was to explore whether parents' educational level, player body mass index (BMI), and self-reported health are determinants of football injury in community-based football programs, separately or in interaction with age or gender.
Four community football clubs with 1230 youth players agreed to participate in the cross-sectional study during the 2006 season. The study constructs (parents' educational level, player BMI, and self-reported health) were operationalized into questionnaire items. The 1-year prevalence of football injury was defined as the primary outcome measure. Data were collected via a postal survey and analyzed using a series of hierarchical statistical computations investigating associations with the primary outcome measure and interactions between the study variables. The survey was returned by 827 (67.2%) youth players. The 1-year injury prevalence increased with age. For youths with parents with higher formal education, boys reported more injuries and girls reported fewer injuries than expected; for youths whose parents had less formal education there was a tendency towards the opposite pattern. Youths reporting injuries had higher standardized BMI than youths not reporting injuries. Children not reporting full health were slightly overrepresented among those reporting injuries and underrepresented among those reporting no injury.
Pre-participation disparities in terms of parents' educational level, through interaction with gender, BMI, and self-reported general health, are associated with increased injury risk in community-based youth football. When football is introduced as part of general health promotion, football associations should adjust community-based youth programs to accommodate children and adolescents with increased pre-participation injury risk.
Objective: Head/orofacial (H/O) injuries are common in Australian rules football. Mouthguards are widely promoted to prevent these injuries, in spite of the lack of formal evidence for their effectiveness.
Design: The Australian football injury prevention project was a cluster randomized controlled trial to evaluate the effectiveness of mouthguards for preventing H/O injuries in these players.
Setting and subjects: Twenty three teams (301 players) were recruited from the largest community football league in Australia.
Intervention: Teams were randomly allocated to either the custom-made mouthguard (MG) or the control (C; usual mouthguard behaviours) study arm.
Main outcome measures: All injuries, participation in training and games, and mouthguard use were monitored over the 2001 playing season. Injury rates were calculated as the number of injuries per 1000 person hours of playing time. Adjusted incidence rate ratios were obtained from Poisson regression models.
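Deriving adjusted incidence rate ratios from Poisson regression, as described above, amounts to modelling injury counts with log person-hours as an offset, so that exponentiated coefficients are rate ratios. A minimal sketch on synthetic data (not the study's data), assuming the Python statsmodels library:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_teams = 23
mouthguard = rng.integers(0, 2, n_teams)        # 1 = custom-made mouthguard (MG) arm
hours = rng.uniform(500.0, 1500.0, n_teams)     # person-hours of play (synthetic)
true_rate = 0.003 * np.where(mouthguard == 1, 0.56, 1.0)  # built-in protective effect
injuries = rng.poisson(true_rate * hours)

# Poisson GLM with log(exposure) offset: exp(coefficient) is the incidence rate ratio.
X = sm.add_constant(mouthguard.astype(float))
fit = sm.GLM(injuries, X, family=sm.families.Poisson(), offset=np.log(hours)).fit()
print("IRR, MG vs control:", float(np.exp(fit.params[1])))  # should recover ~0.56
```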
Results: Players in both study arms wore mouthguards, though it is unlikely that many controls wore custom made ones. Wearing rates were higher during games than training. The overall rate of H/O injury was 2.7 injuries per 1000 exposure hours. The rate of H/O injury was higher during games than training. The adjusted H/O injury incidence rate ratio was 0.56 (95% CI 0.32 to 0.97) for MG versus C during games and training, combined.
Conclusions: There was a significant protective effect of custom made mouthguards, relative to usual mouthguard use, during games. However, the control players still wore mouthguards throughout the majority of games and this could have diluted the effect.
Data on the incidence, nature, severity and cause of match football injuries sustained on dirt fields are scarce. The objective of this study was to compare the incidence, nature, severity and cause of match injuries sustained on a dirt field and an artificial turf field by amateur male football players.
A prospective two-cohort design was employed. Participants were 252 male football players (mean age 27 years, range 18-43) in 14 teams who participated in a local championship played on a dirt field and 216 male football players (mean age 28 years, range 17-40) in 12 teams who participated in a local championship played on an artificial turf field in the same zone of the city. Injury definitions and recording procedures were compliant with the international consensus statement for epidemiological studies of injuries in football.
The overall incidence of match injuries for men was 36.9 injuries/1000 player hours on the dirt field and 19.5 on artificial turf (incidence rate ratio 1.88; 95% CI 1.19-3.05).
The most commonly injured body part was the ankle (26.7%) on the dirt field and the knee (24.3%) on artificial turf. The most common injury type on the dirt field was skin injury (abrasion and laceration); on artificial turf it was sprain and ligament injury, followed by haematoma/contusion/bruise.
Most injuries were acute (artificial turf 89%, dirt field 91%) and resulted from player-to-player contact (artificial turf 59.2%, dirt field 51.4%).
Most injuries in the dirt field cohort were slight or minimal, whereas most injuries in the artificial turf cohort were mild.
There were differences in the incidence and type of football match injuries sustained on dirt and artificial turf fields.
Despite the growing popularity of women's football and the increasing number of female players, there has been little research on injuries sustained by female football players.
Analysis of the incidence, characteristics and circumstances of injury in elite female football players in top‐level international tournaments.
Injuries incurred in seven international football tournaments were analysed using an established injury report system. Doctors of all participating teams reported all injuries after each match on a standardised injury reporting form. The mean response rate was 95%.
387 injuries were reported from 174 matches, equivalent to an incidence of 67.4 injuries/1000 player hours (95% CI 60.7 to 74.1) or 2.2 injuries/match (95% CI 2.0 to 2.4). Most injuries (84%; 317/378) were caused by contact with another player. The injuries most commonly involved the lower extremity (n = 248; 65%), followed by injuries of the head and neck (n = 67, 18%), trunk (n = 33, 9%) and upper extremity (n = 32, 8%). Contusions (n = 166; 45%) were the most frequent type of injury, followed by sprains or ligament rupture (n = 96; 26%) and strains or muscle fibre ruptures (n = 31; 8%). The most common diagnosis was an ankle sprain. There were 7 ligament ruptures and 15 sprains of the knee. On average 1 injury/match (95% CI 0.8 to 1.2) was expected to result in absence from a match or training.
The injury rate in women's top‐level tournaments was within the range reported previously for match injuries in elite male and female players. However, the diagnoses and mechanisms of injury among the female players differed substantially from those previously reported in male football players.
Concussion in the National Football League (NFL) remains an important issue. An initial description of the injury epidemiology covered the 6 years from 1996 to 2001.
The increased attention to concussions may have resulted in team physicians being more conservative in treating players in recent years.
Two consecutive 6-year periods (1996-2001 and 2002-2007) were compared to determine changes in the circumstances associated with the injury, the patterns of signs and symptoms, and the players’ time loss from participation in the NFL.
During 2002-2007, concussions were recorded by NFL team physicians and athletic trainers using the same standardized reporting form used from 1996 to 2001. Player position, type of play, concussion signs and symptoms, loss of consciousness, and medical action taken were recorded.
There were 0.38 documented concussions per NFL game in 2002-2007—7.6% lower than the 0.42 in the earlier period (1996-2001). The injury rate was lower in quarterbacks and wide receivers but significantly higher in tight ends during the second 6 years. The most frequent symptoms were headaches and dizziness; the most common signs were problems with information processing and immediate recall. During 2002-2007, a significantly lower fraction of concussed players returned to the same game, and more were removed from play. Most concussed players (83.5%) returned to play in < 7 days; the percentage decreased to 57.4% with loss of consciousness. The number of players returning in < 7 days was 8% lower during 2002-2007 and 25% lower for those with loss of consciousness.
The most recent 6 years of NFL concussion data show a remarkable similarity to the earlier period. However, there was a significant decrease in the percentage of players returning to the same game, and players were held out of play longer.
There was a more conservative management of concussion in NFL players from 2002 to 2007 even though the clinical signs and symptoms remained similar to the earlier 6-year period.
concussion; traumatic brain injury; injury epidemiology; sport injury prevention
Objective To examine the effect of a comprehensive warm-up programme designed to reduce the risk of injuries in female youth football.
Design Cluster randomised controlled trial with clubs as the unit of randomisation.
Setting 125 football clubs from the south, east, and middle of Norway (65 clusters in the intervention group; 60 in the control group) followed for one league season (eight months).
Participants 1892 female players aged 13-17 (1055 players in the intervention group; 837 players in the control group).
Intervention A comprehensive warm-up programme to improve strength, awareness, and neuromuscular control during static and dynamic movements.
Main outcome measure Injuries to the lower extremity (foot, ankle, lower leg, knee, thigh, groin, and hip).
Results During one season, 264 players had relevant injuries: 121 players in the intervention group and 143 in the control group (rate ratio 0.71, 95% confidence interval 0.49 to 1.03). In the intervention group there was a significantly lower risk of injuries overall (0.68, 0.48 to 0.98), overuse injuries (0.47, 0.26 to 0.85), and severe injuries (0.55, 0.36 to 0.83).
Conclusion Though the primary outcome of reduction in lower extremity injury did not reach significance, the risk of severe injuries, overuse injuries, and injuries overall was reduced. This indicates that a structured warm-up programme can prevent injuries in young female football players.
Trial registration ISRCTN10306290.
In a previous prospective study, the risk of concussion and all injury was more than threefold higher among Pee Wee ice hockey players (ages 11–12 years) in a league that allows bodychecking than among those in a league that does not. We examined whether two years of bodychecking experience in Pee Wee influenced the risk of concussion and other injury among players in a Bantam league (ages 13–14) compared with Bantam players introduced to bodychecking for the first time at age 13.
We conducted a prospective cohort study involving hockey players aged 13–14 years in the top 30% of divisions of play in their leagues. Sixty-eight teams from the province of Alberta (n = 995), whose players had two years of bodychecking experience in Pee Wee, and 62 teams from the province of Quebec (n = 976), whose players had no bodychecking experience in Pee Wee, participated. We estimated incidence rate ratios (IRRs) for injury and for concussion.
There were 272 injuries (51 concussions) among the Bantam hockey players who had bodychecking experience in Pee Wee and 244 injuries (49 concussions) among those without such experience. The adjusted IRRs for game-related injuries and concussion overall between players with bodychecking experience in Pee Wee and those without it were as follows: injury overall 0.85 (95% confidence interval [CI] 0.63 to 1.16); concussion overall 0.84 (95% CI 0.48 to 1.48); and injury resulting in more than seven days of time loss (i.e., time between injury and return to play) 0.67 (95% CI 0.46 to 0.99). The unadjusted IRR for concussion resulting in more than 10 days of time loss was 0.60 (95% CI 0.26 to 1.41).
The risk of injury resulting in more than seven days of time loss from play was reduced by 33% among Bantam hockey players in a league where bodychecking was allowed two years earlier in Pee Wee compared with Bantam players introduced to bodychecking for the first time at age 13. In light of the increased risk of concussion and other injury among Pee Wee players in a league where bodychecking is permitted, policy regarding the age at which hockey players are introduced to bodychecking requires further consideration.
Background: Cross-sectional studies have indicated that neurocognitive performance may be impaired among football players. Heading the ball has been suggested as the cause, but recent reviews state that the reported deficits are more likely to be the result of head injuries.
Objective: To examine the association between previous concussions and heading exposure with performance on computer based neuropsychological tests among professional Norwegian football players.
Methods: Players in the Norwegian professional football league (Tippeligaen) performed two consecutive baseline neuropsychological tests (Cogsport) before the 2004 season (90.3% participation, n = 271) and completed a questionnaire assessing previous concussions, match heading exposure (self-reported number of heading actions per match), player career, etc. Heading actions for 18 players observed in two to four matches were counted and correlated with their self-reported values.
Results: Neither match nor lifetime heading exposure was associated with neuropsychological test performance. Nineteen players scored below the 95% confidence interval for one or more subtasks, but they did not differ from the rest regarding the number of previous concussions or lifetime or match heading exposure. The number of previous concussions was positively associated with lifetime heading exposure (exp(B) = 1.97, 95% CI 1.03 to 3.75; p = 0.039), but there was no relation between previous concussions and test performance. Self-reported number of headings correlated well with the observed values (Spearman's ρ = 0.77, p<0.001).
Conclusion: Computerised neuropsychological testing revealed no evidence of neuropsychological impairment due to heading exposure or previous concussions in a cohort of Norwegian professional football players.
Objective: To describe the epidemiology of injuries in the Australian Football League (AFL) over four seasons.
Methods: An injury was defined as "any physical or medical condition that caused a player to miss a match in the regular season." The rationale for this definition was to eliminate a previously noted tendency of team recorders to interpret injury definitions subjectively. Administrative records of injury payments to players who did not play matches determined the occurrence of an injury.
Results: The seasonal incidence of new injuries was 39 per club (of 40 players) per season (of 22 matches). The match injury incidence for AFL games was 25.7 injuries per 1000 player hours. The injury prevalence (percentage of players missing through injury in an average week) was 16%. The recurrence rate of injuries was 17%. The most common and prevalent injury was hamstring strain (six injuries per club per season, resulting in 21 missed matches per club per season), followed in prevalence by anterior cruciate ligament and groin injuries.
Conclusions: The injury definition of this study does not produce incidence rates that are complete for all minor injuries. However, the determination of an injury is made by a single entity in exactly the same manner for all teams, which overcomes a significant methodological flaw present in other multiteam injury surveillance systems.
Background: No previous study on adult football involving several different countries has investigated the incidence and pattern of injuries at the highest club competitive level.
Objective: To investigate the risk exposure, risk of injury, and injury pattern of footballers involved in UEFA Champions League and international matches during a full football season.
Method: Eleven top clubs (266 players) in five European countries were followed prospectively throughout the 2001–2002 season. Time-loss injuries and individual exposure times were recorded during all club and national team training sessions and matches.
Results: A total of 658 injuries were recorded. The mean (SD) injury incidence was 9.4 (3.2) injuries per 1000 hours (30.5 (11.0) injuries per 1000 match hours and 5.8 (2.1) injuries per 1000 training hours). The risk of match injury was significantly higher in the English and Dutch teams than in the teams from France, Italy, and Spain (41.8 (3.3) v 24.0 (7.9) injuries per 1000 hours; p = 0.008). Major injuries (absence >4 weeks) constituted 15% of all injuries, and the risk of major injury was also significantly higher among the English and Dutch teams (p = 0.04). National team players had a higher match exposure, with a tendency towards a lower training injury incidence than the rest of the players (p = 0.051). Thigh strain was the most common injury (16%), with posterior strains being significantly more common than anterior ones (67 v 36; p<0.0001).
Conclusions: The risk of injury in European professional football is high. The most common injury is the thigh strain typically involving the hamstrings. The results suggest that regional differences may influence injury epidemiology and traumatology, but the factors involved are unclear. National team players have a higher match exposure, but no higher risk of injury than other top level players.
Objective: To investigate the risks and benefits of the use of local anaesthetic in a descriptive case series from three professional football (rugby league and Australian football) teams.
Methods: Cases of local anaesthetic use (both injection and topical routes) and complications over a six year period were recorded. Complications were assessed using clinical presentation and also by recording all cases of surgery, incidences of players missing games or leaving the field through injury, and causes of player retirement.
Results: There were 268 injuries for which local anaesthetic was used to allow early return to play. There were 11 minor and six major complications, although none of these were catastrophic or career ending. About 10% of players taking the field did so with the assistance of local anaesthetic. This rate should be considered in isolation and not seen to reflect standard practice by team doctors.
Conclusions: The use of local anaesthetic in professional football may reduce the rates of players missing matches through injury, but there is the risk of worsening the injury, which should be fully explained to players. A procedure should only be used when both the doctor and player consider that the benefits outweigh the risks.
Objectives: To describe the safety attitudes and beliefs of junior (aged 16–18 years) Australian football players.
Setting: Six Victorian Football League Under 18 (VFL U18) clubs in Victoria, Australia.
Methods: Cross sectional survey. Altogether 103 players completed a self report questionnaire about their safety beliefs and perceptions of support when injured, across three contexts in which they played: VFL U18 club, local club, and school.
Results: Although only 6% believed it was safe to play with injuries, 58% were willing to risk doing so. This increased to almost 80% when players perceived that their chances of being selected to play for a senior elite team would be adversely affected if they did not play. There were significant differences in the perceived level of support for injured players and in the ranking of safety as a high priority across the three settings. In general, the VFL U18 clubs were perceived as providing good support for injured players and giving a high priority to safety issues, but local clubs and particularly schools were perceived to address these issues less well.
Conclusions: Junior Australian football players have certain beliefs and perceptions in relation to injury risk that have the potential to increase injuries. These negative beliefs need to be addressed in any comprehensive injury prevention strategy aimed at these players.
Objective: To define the causes of injuries to players in English professional football during competition and training.
Method: Lost-time injuries to professional and youth players were prospectively recorded by physiotherapists at four English League clubs over the period 1994 to 1997. Data recorded included information related to the injury, date and place of occurrence, type of activity, and extrinsic playing factors.
Results: In all, 67% of all injuries occurred during competition. The overall injury frequency rate (IFR) was 8.5 injuries/1000 hours, with the IFR during competition (27.7) being significantly (p < 0.01) higher than that during training (3.5). The IFRs for youth players increased over the second half of the season, whereas those for professional players decreased. There were no significant differences between the IFRs of professional and youth players during training. There were significantly (p < 0.01) more injuries in competition in the 15 minute periods at the end of each half. Strains (41%), sprains (20%), and contusions (20%) represented the major types of injury. The thigh (23%), ankle (17%), knee (14%), and lower leg (13%) represented the major locations of injury, with significantly (p < 0.01) more injuries to the dominant body side. Reinjury accounted for 22% of all injuries. Only 12% of all injuries were caused by a breach of the rules of football, although player to player contact was involved in 41% of all injuries.
Conclusions: The overall level of injury to professional footballers has been shown to be around 1000 times higher than that in industrial occupations generally regarded as high risk. The high level of muscle strains in particular indicates possible weaknesses in fitness training programmes and in clubs' warming up and cooling down procedures, and the need for benchmarking players' levels of fitness and performance. The increasing level of injury to youth players as a season progresses emphasises the importance of controlling the exposure of young players to high levels of competition.
Positions, signs, symptoms, and medical management of National Football League players with concussions involving 7 or more days out (7+) from play were compared for two 6-year study periods (2002-2007 vs 1996-2001).
More players were held out 7+ days in the 2002-2007 period without significant difference in concussion signs and symptoms.
From 1996 through 2007, National Football League team physicians reported concussion signs and symptoms, medical action taken, and follow-up management.
During the 2002-2007 period, 143 (16.7%) and 33 (3.86%) concussed players were out 7+ days and 21+ days, respectively, compared with 73 (8.2%) and 7 (0.79%) in the 1996-2001 period, a significant difference (z = 5.39, P < .01). The positions with the highest fraction of 7+ days out were the quarterback (24.5% vs 16.1%), linebacker (19.7% vs 4.6%), and wide receiver (19.5% vs 8.2%) in the later versus earlier period. The player positions with the highest odds for being out 7+ days were quarterback (odds ratio = 1.80 vs 4.02), linebacker (odds ratio = 1.28 vs 0.65), and wide receiver (odds ratio = 1.25 vs 1.15). The highest incidence of 7+ days out occurred after passing plays (32.2% vs 37.0%), followed by kickoffs (18.9% vs 21.9%). The majority of players with 7+ days out were removed from the game on the day of injury (74.8% vs 72.6%); a smaller fraction were returned to play on the day of injury in the later 6 years (3.5% vs 6.8%).
The positions with the highest odds for being out 7+ days with concussion were quarterbacks, linebackers, and wide receivers. In the more recent 6-year period, more players were managed conservatively by being held out 7+ days, even though the signs and symptoms of their concussions were similar to those in the earlier period.
concussion; traumatic brain concussion; concussion surveillance; epidemiology; sport concussion; postconcussion syndrome; concussion guidelines
Context: Considerable controversy regarding fluid replacement during exercise currently exists.
Objective: To compare fluid turnover between National Football League (NFL) players who have constant fluid access and collegiate football players who replace fluids during water breaks in practices.
Setting: Respective preseason training camps of 1 National Collegiate Athletic Association Division II (DII) football team and 1 NFL football team. Both morning and afternoon practices for DII players were 2.25 hours in length, and NFL players practiced for 2.25 hours in the morning and 1 hour in the afternoon. Environmental conditions did not differ.
Patients or Other Participants: Eight NFL players (4 linemen, 4 backs) and 8 physically matched DII players (4 linemen, 4 backs) participated.
Intervention(s): All players drank fluids only from their predetermined individual containers. The NFL players could consume both water and sports drinks, and the DII players could only consume water.
Main Outcome Measure(s): We measured fluid consumption, sweat rate, total sweat loss, and percentage of sweat loss replaced. Sweat rate was calculated as change in mass adjusted for fluids consumed and urine produced.
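In mass-balance terms, total sweat loss is the change in body mass plus fluids consumed minus urine produced, and sweat rate divides that loss by practice time. A minimal worked sketch with illustrative numbers (not the study's measurements):

```python
# Mass-balance estimate of sweat loss; 1 kg of mass change is taken as ~1 L of water.
pre_mass_kg = 100.0      # illustrative values only
post_mass_kg = 98.2
fluid_consumed_l = 2.0
urine_produced_l = 0.3
practice_hours = 2.25

sweat_loss_l = (pre_mass_kg - post_mass_kg) + fluid_consumed_l - urine_produced_l
sweat_rate_l_per_h = sweat_loss_l / practice_hours
pct_replaced = 100 * fluid_consumed_l / sweat_loss_l

print(f"sweat loss {sweat_loss_l:.1f} L, sweat rate {sweat_rate_l_per_h:.2f} L/h, "
      f"{pct_replaced:.0f}% of sweat loss replaced")
```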
Results: Mean sweat rate was not different between NFL (2.1 ± 0.25 L/h) and DII (1.8 ± 0.15 L/h) players (F1,12 = 2, P = .18) but was different between linemen (2.3 ± 0.2 L/h) and backs (1.6 ± 0.2 L/h) (t14 = 3.14, P = .007). We found no differences between NFL and DII players in terms of percentage of weight loss (t7 = −0.03, P = .98) or rate of fluid consumption (t7 = −0.76, P = .47). Daily sweat loss was greater in DII (8.0 ± 2.0 L) than in NFL (6.4 ± 2.1 L) players (t7 = −3, P = .02), and fluid consumed was also greater in DII (5.0 ± 1.5 L) than in NFL (4.0 ± 1.1 L) players (t7 = −2.8, P = .026). We found a correlation between sweat loss and fluids consumed (r = 0.79, P < .001).
Conclusions: During preseason practices, the DII players drinking water at water breaks replaced the same proportion of their sweat loss (66% of weight lost) as NFL players with constant access to both water and sports drinks.
thermoregulation; sodium loss; dehydration; hydration; carbohydrate and electrolyte drinks
Objective—To examine the methods of appointment, experience, and qualifications of club doctors and physiotherapists in professional football.
Methods—Semistructured tape recorded interviews with 12 club doctors, 10 club physiotherapists, and 27 current and former players. A questionnaire was also sent to 90 club doctors; 58 were returned.
Results—In almost all clubs, methods of appointment of doctors are informal and reflect poor employment practice: posts are rarely advertised and many doctors are appointed on the basis of personal contacts and without interview. Few club doctors had prior experience or qualifications in sports medicine and very few have a written job description. The club doctor is often not consulted about the appointment of the physiotherapist; physiotherapists are usually appointed informally, often without interview, and often by the manager without involving anyone who is qualified in medicine or physiotherapy. Half of all clubs do not have a qualified (chartered) physiotherapist; such unqualified physiotherapists are in a weak position to resist threats to their clinical autonomy, particularly those arising from managers' attempts to influence clinical decisions.
Conclusions—Almost all aspects of the appointment of club doctors and physiotherapists need careful re-examination.
Key Words: football clubs; doctors; physiotherapists; qualifications
Anterior cruciate ligament (ACL) injury is a severe event for a footballer, but it is unclear if the knee injury rate is higher on returning to football after ACL injury.
To study the risk of knee injury in elite footballers with a history of ACL injury compared with those without.
The Swedish male professional league (310 players) was studied during 2001. Players with a history of ACL injury at the study start were identified. Exposure to football and all time loss injuries during the season were recorded prospectively.
Twenty four players (8%) had a history of 28 ACL injuries in 27 knees (one rerupture). These players had a higher incidence of new knee injury of any type than the players without ACL injury (mean (SD) 4.2 (3.7) v 1.0 (0.7) injuries per 1000 hours, p = 0.02). The risk of suffering a knee overuse injury was significantly higher regardless of whether the player (relative risk 4.8, 95% confidence interval 2.0 to 11.2) or the knee (relative risk 7.9, 95% confidence interval 3.4 to 18.5) was used as the unit of analysis. No interactive effects of age or any other anthropometric data were seen.
The risk of new knee injury, especially overuse injury, was significantly increased on return to elite football after ACL injury regardless of whether the player or the knee was used as the unit of analysis.
football; injury incidence; knee; prevalence; anterior cruciate ligament
To compare the incidence, nature, severity and cause of training injuries sustained on new generation artificial turf and grass by male and female footballers.
The National Collegiate Athletic Association Injury Surveillance System was used for a two‐season (August to December) prospective study involving American college and university football teams (2005 season: men 52 teams, women 64 teams; 2006 season: men 54 teams, women 72 teams). Injury definitions and recording procedures were compliant with the international consensus statement for epidemiological studies of injuries in football. Athletic trainers recorded details of the playing surface and the location, diagnosis, severity and cause of all training injuries. The number of days lost from training and match play was used to define the severity of an injury. Training exposures (player hours) were recorded on a team basis.
The overall incidence of training injuries for men was 3.34 injuries/1000 player hours on artificial turf and 3.01 on grass (incidence ratio 1.11; p = 0.21) and for women it was 2.60 injuries/1000 player hours on artificial turf and 2.79 on grass (incidence ratio 0.93; p = 0.46). For men, the mean severity of injuries that were not season ending injuries was 9.4 days (median 5) on artificial turf and 7.8 days (median 4) on grass and, for women, 10.5 days (median 4) on artificial turf and 10.0 days (median 5) on grass. Joint (non-bone)/ligament/cartilage and muscle/tendon injuries to the lower limbs were the most common general categories of injury on artificial turf and grass for both male and female players. Most training injuries were acute (men: artificial turf 2.92, grass 2.63 injuries/1000 player hours, p = 0.24; women: artificial turf 1.94, grass 2.23, p = 0.21) and resulted from player-to-player contact (men: artificial turf 1.08, grass 0.85, p = 0.10; women: artificial turf 0.47, grass 0.56, p = 0.45).
There were no major differences between the incidence, severity, nature or cause of training injuries sustained on new generation artificial turf and on grass by either men or women.
acute; gradual onset; contact; non‐contact; risk