Although it is common practice to administer pre-participation examinations (PPEs) to athletes before training, there is no clearly established format. Elements integral to the PPE fall within the scope of physical therapist practice and are often categorized as a form of primary prevention for musculoskeletal disorders, as defined in the Guide to Physical Therapist Practice.
The purpose of this study is to describe the design and implementation of a PPE for a women's professional (gridiron) football team. The findings from this PPE provide one of the first musculoskeletal profiles, together with selected physical characteristics, of members of a women's professional football team.
Players from the Kentucky Karma women's football team, a member of the National Women's Football League (NWFA), volunteered to participate in a PPE. Of twenty-five eligible team members, thirteen consented to participate. The PPE consisted of a health history questionnaire, a musculoskeletal screening, and a series of physical performance and agility tests.
The players' average (± SD) age, height, weight, body mass index (BMI), and body fat percentage were 29.6 (± 5.6) yrs., 1.66 (± 0.05) m, 66.8 (± 12.6) kg, 24.1 (± 3.7) kg/m², and 27.4 (± 6.6)%, respectively. Commonly reported injuries were similar to those reported in men's collegiate football.
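As a point of reference, BMI is weight in kilograms divided by the square of height in metres. A minimal Python sketch using the reported group means (note that the BMI of the mean height and weight need not equal the mean of the individual BMIs, so the small gap versus the reported 24.1 is not a discrepancy):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

# Group means reported above; the BMI of mean height/weight need not
# equal the mean of individual BMIs, hence the slight difference from 24.1.
print(round(bmi(66.8, 1.66), 1))  # → 24.2
```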
This is one of the first papers to report on a model PPE for a women's professional football team. Future research is needed to establish a standard PPE, recognize common injuries, and develop prevention strategies unique to women's professional football.
musculoskeletal screening; female athlete; performance testing
Context: Estimates suggest that more than 5.5 million youths play football annually, and 28% of youth football players (age range = 5 to 14 years) are injured each year, resulting in more than 187 000 emergency room visits.
Objective: To analyze time-loss (TL) and non–time-loss (NTL) injury patterns across age groups in youth football players.
Design: Two-year observational cohort.
Setting: Two midwestern communities, including players from the fourth through eighth grades and between the ages of 9 and 14 years.
Patients or Other Participants: A total of 779 players participated, including 296 in grades 4 and 5; 203 in grade 6; 188 in grade 7; and 92 in grade 8. (Players in the fourth and fifth grades participated on the same teams, so we considered them as a single group.)
Main Outcome Measure(s): Injury frequencies and exposures were collected by certified athletic trainers present at each practice and game and used to calculate injury rates with 95% confidence intervals (CIs) for both TL and NTL injuries across age groups.
Results: A total of 474 injuries and 26 565 exposures were identified. Injuries were reported by 36.5% of the players, with 14.4% reporting more than 1 injury in a season. The overall injury rate per 1000 athlete-exposures (A-Es) was 17.8 (95% CI = 16.3, 19.5). The injury rate increased with each succeeding grade from 14.3 per 1000 A-Es (95% CI = 12.1, 16.9) in grades 4 and 5 to 21.7 per 1000 A-Es (95% CI = 17.2, 27.3) in grade 8. A total of 58.6% of all injuries were NTL. Non–time-loss injuries accounted for 70.1% of the injuries reported by fourth and fifth graders, 55.1% by sixth graders, 64.0% by seventh graders, and 33.8% by eighth graders. The cumulative NTL injury rate was 10.5 per 1000 A-Es (95% CI = 9.3, 11.8), and the TL injury rate was 7.4 per 1000 A-Es (95% CI = 6.4, 8.5).
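The headline rate can be reproduced from the reported counts. A minimal Python sketch, assuming the 95% CI uses the common log-normal approximation for a Poisson rate (the abstract does not state which CI method was used):

```python
import math

def rate_per_1000(injuries: int, exposures: int):
    """Injury rate per 1000 athlete-exposures with an approximate 95% CI
    (log-normal approximation for a Poisson count)."""
    rate = injuries / exposures * 1000
    half_width = 1.96 / math.sqrt(injuries)  # SE of log(rate) is 1/sqrt(count)
    return rate, rate * math.exp(-half_width), rate * math.exp(half_width)

rate, lo, hi = rate_per_1000(474, 26565)
print(f"{rate:.1f} per 1000 A-Es (95% CI = {lo:.1f}, {hi:.1f})")
# → 17.8 per 1000 A-Es (95% CI = 16.3, 19.5)
```

Applied to the overall counts above, this reproduces the reported rate and interval; the same function applied to subgroup counts yields the grade-specific rates.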
Conclusions: Youth football players sustained more NTL injuries than TL injuries. We recommend that a first-aid–certified coach or league official be present at all games and practices.
youth sports; epidemiology; injury incidence
To compare injury risk in elite football played on artificial turf with that on natural grass.
Prospective two‐cohort study.
Male European elite football leagues.
290 players from 10 elite European clubs that had installed third‐generation artificial turf surfaces in 2003–4, and 202 players from the Swedish Premier League acting as a control group.
Main outcome measure
The incidence of injury during training and match play did not differ between surfaces for the teams in the artificial turf cohort: 2.42 v 2.94 injuries/1000 training hours and 19.60 v 21.48 injuries/1000 match hours for artificial turf and grass respectively. The risk of ankle sprain was increased in matches on artificial turf compared with grass (4.83 v 2.66 injuries/1000 match hours; rate ratio 1.81, 95% confidence interval 1.00 to 3.28). No difference in injury severity was seen between surfaces. Compared with the control cohort who played home games on natural grass, teams in the artificial turf cohort had a lower injury incidence during match play (15.26 v 23.08 injuries/1000 match hours; rate ratio 0.66, 95% confidence interval 0.48 to 0.91).
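Rate ratios like those above are straightforward to compute from event counts and exposure hours. A sketch using the standard log-normal approximation for the 95% CI; the counts below are hypothetical for illustration only, since the abstract reports rates rather than counts:

```python
import math

def rate_ratio(events_a: int, hours_a: float, events_b: int, hours_b: float):
    """Incidence rate ratio (cohort A vs cohort B) with an approximate
    95% CI via the log-normal method."""
    rr = (events_a / hours_a) / (events_b / hours_b)
    half_width = 1.96 * math.sqrt(1 / events_a + 1 / events_b)
    return rr, rr * math.exp(-half_width), rr * math.exp(half_width)

# Hypothetical counts for illustration only (not taken from the study)
rr, lo, hi = rate_ratio(24, 5000, 13, 4900)
print(f"rate ratio {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
# → rate ratio 1.81 (95% CI 0.92 to 3.55)
```

A CI that spans 1.00, as in the ankle sprain comparison above, indicates the difference is on the margin of statistical significance.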
No evidence of a greater risk of injury was found when football was played on artificial turf compared with natural grass. The higher incidence of ankle sprain on artificial turf warrants further attention, although this result should be interpreted with caution as the number of ankle sprains was low.
injuries; football; surface properties; soccer; artificial turf
Methods: During the preseason, 10 Australian football clubs volunteered 23 teams to participate in a protective equipment randomised controlled trial, the Australian Football Injury Prevention Project (AFIPP). All players from these teams were invited to participate. Players who did not agree to participate in AFIPP were surveyed about their reasons for non-involvement.
Results: 110 football players (response rate 63.6%) completed the non-responder survey and cited the two main reasons behind non-involvement in the project as "I did not know about the project" (39.4%) and "I was not at training when the research team visited" (36.5%).
Conclusions and implications: Preseason may not be the best time for maximal player recruitment in community based sports safety research. Enhanced communication between researchers and players at community level football clubs during the recruitment phase is likely to improve response rates.
The purpose of this study was to identify the existence of the Relative Age Effect (RAE) at youth level in both elite and amateur Spanish soccer clubs, and to analyse how this effect has evolved in recent years. We obtained information on the youth teams of the 20 clubs belonging to the Spanish Professional Football League (LFP) in two separate seasons (2005-2006 and 2008-2009), as well as data on five youth academies belonging to amateur clubs. The collected data revealed an over-representation of players born in the first months of the selection year in all groups of analysis (Elite 2005-2006, Elite 2008-2009 and Amateurs), although only the Elite groups showed significant variations in birth-date distribution relative to the Spanish population. The results showed a reduction in RAE from the 2005-2006 season to the 2008-2009 season. Playing position, the number of years each player had spent in their specific age group, and the category of the team at each club were shown to have no influence on the extent of RAE.
Key points: There was RAE in all groups analyzed, although only the Elite groups showed significant variations in birth-date distribution in relation to the general population. RAE is more evident in the Elite groups than in the Amateur groups, probably because of the detection process, which is more thorough in the Elite groups. Playing position, number of years in their specific age group and category of the team did not have any influence on the extent of RAE. Any attempt to prevent RAE should be based on a stable sport policy and the involvement of all the stakeholders in the system, all of whom should treat the development of a player as a long-term project.
Relative age effect; season-of-birth bias; selection processes; talent identification; youth soccer
This paper focuses on the contribution of Australian Football League (AFL) players to their team's on-field network by simulating player interactions within a chosen team list and estimating the net effect on final score margin. A Visual Basic computer program was written, firstly, to isolate the effective interactions between players from a particular team in all 2011 season matches and, secondly, to generate a symmetric interaction matrix for each match. Negative binomial distributions were fitted to each player pairing in the Geelong Football Club for the 2011 season, enabling an interactive match simulation model given the 22 chosen players. Dynamic player ratings were calculated from the simulated network using eigenvector centrality, a method that recognises and rewards interactions with more prominent players in the team network. The centrality ratings were recorded after every network simulation and then applied in final score margin predictions so that each player's match contribution (and, hence, an optimal team) could be estimated. The paper ultimately demonstrates that the presence of highly rated players, such as Geelong's Jimmy Bartel, provides the most utility within a simulated team network. It is anticipated that these findings will facilitate optimal AFL team selection and player substitutions, which are key areas of interest to coaches. Network simulations are also attractive for use within betting markets, specifically to provide information on the likelihood of a chosen AFL team list "covering the line".
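The centrality step described above can be illustrated in a few lines. A pure-Python sketch using power iteration on a hypothetical 4-player symmetric interaction matrix (the paper generates such matrices by simulation from fitted negative binomial distributions; power iteration stands in here for a full eigenvector routine):

```python
def eigenvector_centrality(matrix, iterations=100):
    """Power iteration on a non-negative symmetric interaction matrix;
    returns centrality ratings normalised to sum to 1."""
    n = len(matrix)
    v = [1.0 / n] * n
    for _ in range(iterations):
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        total = sum(w)
        v = [x / total for x in w]
    return v

# Hypothetical effective-interaction counts between 4 players (zero diagonal)
interactions = [
    [0, 9, 4, 2],
    [9, 0, 6, 1],
    [4, 6, 0, 3],
    [2, 1, 3, 0],
]
ratings = eigenvector_centrality(interactions)
# The best-connected player (index 1) receives the highest rating
print([round(r, 3) for r in ratings])
```

Because the rating rewards interactions with highly rated teammates, two players with equal interaction totals can receive different ratings, which is the property the paper exploits when estimating a player's net effect on margin.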
Key points: A simulated interaction matrix for Australian Rules football players is proposed. The simulations were carried out by fitting unique negative binomial distributions to each player pairing in a side. Eigenvector centrality was calculated for each player in a simulated matrix, then for the team. The team centrality measure adequately predicted the team's winning margin. A player's net effect on margin could hence be estimated by replacing him in the simulated side with another player.
Interaction matrix; negative binomial distribution; eigenvector centrality; player ratings
Objectives: To undertake a prospective epidemiological study of the injuries sustained in English youth academy football over two competitive seasons.
Methods: Player injuries were recorded by medical staff at 38 English football club youth academies. A specific injury audit questionnaire was used together with a weekly return form that documented each club's current injury status.
Results: A total of 3805 injuries were reported over two complete seasons (June to May) with an average injury rate of 0.40 per player per season. The mean (SD) number of days absent for each injury was 21.9 (33.63), with an average of 2.31 (3.66) games missed per injury. The total amount of time absent through injury equated to about 6% of the player's development time. Players in the higher age groups (17–19 years) were more likely to receive an injury than those in the younger age groups (9–16 years). Injury incidence varied throughout the season, with training injuries peaking in January (p<0.05) and competition injuries peaking in October (p<0.05). Competition injuries accounted for 50.4% of the total, with 36% of these occurring in the last third of each half. Strains (31%) and sprains (20%) were the main injury types, predominantly affecting the lower limb, with a similar proportion of injuries affecting the thigh (19%), ankle (19%), and knee (18%). Growth related conditions, including Sever's disease and Osgood-Schlatter's disease, accounted for 5% of total injuries, peaking in the under 13 age group for Osgood-Schlatter's disease and the under 11 age group for Sever's disease. The rate of re-injury of exactly the same anatomical structure was 3%.
Conclusions: Footballers are at high risk of injury and there is a need to investigate ways of reducing this risk. Injury incidence at academy level is approximately half that of the professional game, probably because academy players have much less exposure to injury than their full-time counterparts. Areas that warrant further attention include the link between musculoskeletal development and the onset of youth-related conditions such as Sever's disease and Osgood-Schlatter's disease, the significant number of non-contact injuries that occur in academy football, and the increased rates of injury during preseason training and after the mid-season break. This study has highlighted the nature and severity of injuries that occur at academy level, and the third part of the audit process now needs to be undertaken: the implementation of strategies to reduce the number of injuries encountered at this level.
Background: The Australian football injury prevention project (AFIPP) was a randomised controlled trial examining the effects of protective equipment on injury rates in Australian Football.
Objective: To present the results of the AFIPP baseline survey of community football players' attitudes towards protective equipment.
Methods: Teams of players were recruited from the largest community football league in Victoria, Australia, during the 2001 playing season; 301 players were enrolled in the study and all were surveyed before the season began about their attitudes towards protective headgear and mouthguards.
Results: Almost three quarters of the players (73.6%) reported wearing mouthguards during the previous playing season (year 2000) compared with only 2.1% wearing headgear. The most common reasons for not wearing headgear and mouthguards (in non-users) were: "I don't like wearing it" (headgear: 44.8%; mouthguards: 30.6%), and "It is too uncomfortable" (headgear: 40.7%; mouthguards: 45.8%).
Conclusions: The higher mouthguard usage reflects the favourable attitudes towards mouthguards by Australian football players generally. Similarly, the low headgear usage reflects the low acceptance of this form of protection in this sport. Further research should be directed towards establishing the reasons why players seem to believe that headgear plays a role in injury prevention yet few wear it.
To determine the level of pre‐employment, pre‐season, and post‐injury medical evaluation of players undertaken within UK professional team sports.
A postal, whole-population survey.
Elite professional sports teams in England.
Six groups comprising the following clubs: professional football (Premiership, 15 of 20; Championship, 22 of 24), rugby union (Premiership, 9 of 12; Division 1, 11 of 14), rugby league (Super League, 6 of 11) and cricket (County, 12 of 18).
Main outcome measures
Number (percentage) of clubs recording players' medical history and undertaking medical examinations of players' cardiovascular, respiratory, neurological, and musculoskeletal systems at pre‐employment, pre‐season and post‐injury.
The overall response to the survey was 74%, with a range from 55% to 92% among groups. Almost 90% of football (Premiership and Championship) and rugby union (Premiership) clubs took a pre‐employment history of players' general health, cardiovascular, respiratory, neurological, and musculoskeletal systems, but fewer than 50% of cricket and rugby union (Division 1) clubs recorded a history. The majority of football (Premiership and Championship) and rugby union (Premiership) clubs implemented both cardiovascular and musculoskeletal examinations of players before employment. Fewer than 25% of clubs in any of the groups implemented neurological examinations of players at pre‐employment, although 100% of rugby union (Premiership) and rugby league clubs implemented neurological testing during pre‐season.
None of the sports implemented best practice guidelines for the preparticipation evaluation of players at all stages of their employment. Departures from best practice guidelines and differences in practices between clubs within the same sport leave club physicians vulnerable if their players sustain injuries or ill health conditions that could have been identified and avoided through the implementation of a preparticipation examination.
preparticipation evaluation; professional sport; risk factors for injury; medical risk factors
International sports bodies should protect the health of their athletes, and injury surveillance is an important prerequisite for injury prevention. The Fédération Internationale de Football Association (FIFA) has systematically surveyed all football injuries in its tournaments since 1998.
Analysis of the incidence, characteristics and changes of football injury during international top-level tournaments 1998–2012.
All newly incurred football injuries during the FIFA tournaments and the Olympic Games were reported by the team physicians on a standardised injury report form after each match. The average response rate was 92%.
A total of 3944 injuries were reported from 1546 matches, equivalent to 2.6 injuries per match. The majority of injuries (80%) were caused by contact with another player, and 47% of these contact injuries resulted from foul play. The most frequently injured body parts were the ankle (19%), lower leg (16%) and head/neck (15%). Contusions (55%) were the most common type of injury, followed by sprains (17%) and strains (10%). On average, 1.1 injuries per match were expected to result in absence from a match or training. The incidence of time-loss injuries was highest in the FIFA World Cups and lowest in the FIFA U17 Women's World Cups. The injury rates in the various types of FIFA World Cups showed different trends over the past 14 years.
Changes in the incidence of injuries in top-level tournaments might be influenced by playing style, refereeing, and the extent and intensity of match play. Strict application of the Laws of the Game is an important means of injury prevention.
Soccer; Epidemiology; Sporting injuries; Injury Prevention
Previous studies have shown a likely link between increased shoe-surface traction and the risk of knee anterior cruciate ligament (ACL) injury. Portable natural grass systems are being used more often in sport, but no study to date has investigated their relative safety. By their nature, they must have high resistance to tearing apart, and newly laid systems may therefore create excessive shoe-surface traction. This study describes two clusters of knee injuries (particularly non-contact ACL injuries), each occurring to players of one professional football team at a single venue using portable grass, in a short space of time. The first series included two ACL injuries, one posterolateral complex disruption and one lateral ligament tear occurring in two rugby league games on a portable bermudagrass surface in Brisbane, Australia. The second series included four non-contact ACL injuries over a period of ten weeks in professional soccer games on a portable Kentucky bluegrass/perennial ryegrass surface in Barcelona, Spain. Possible intrinsic risk factors are discussed, but there was no common risk shared by the players. Although no measures of traction were made at the Brisbane venue, the average rotational traction measured towards the end of the injury cluster at Camp Nou, Barcelona, was 48 Nm. Chance undoubtedly had a part to play in these clusters, but the only obvious common risk factor was play on a portable natural grass surface soon after it was laid. Further study is required to determine whether portable natural grass systems exhibit high shoe-surface traction soon after being laid and whether this could be a risk factor for knee injury.
Key points: Excessive shoe-surface traction is a hypothesised risk factor for knee ligament injuries, including anterior cruciate ligament injuries. Portable natural grass systems (by their nature, to prevent grass rolls or squares from falling apart) will tend to exhibit high resistance to tearing when first laid, which may lead to excessive shoe-surface traction. This dual case series describes two clusters of non-contact knee ligament injuries which occurred in circumstances of newly laid portable turf. Further research is warranted to uncover any link between non-contact knee ligament injuries and ground surface conditions.
Anterior cruciate ligament; bermudagrass; perennial ryegrass; Kentucky bluegrass
Football (soccer) is endorsed as a health-promoting physical activity worldwide. When football programs are introduced as part of general health promotion programs, equal access and limitation of pre-participation disparities with regard to injury risk are important. The aim of this study was to explore if disparity with regard to parents’ educational level, player body mass index (BMI), and self-reported health are determinants of football injury in community-based football programs, separately or in interaction with age or gender.
Four community football clubs with 1230 youth players agreed to participate in the cross-sectional study during the 2006 season. The study constructs (parents’ educational level, player BMI, and self-reported health) were operationalized into questionnaire items. The 1-year prevalence of football injury was defined as the primary outcome measure. Data were collected via a postal survey and analyzed using a series of hierarchical statistical computations investigating associations with the primary outcome measure and interactions between the study variables. The survey was returned by 827 (67.2%) youth players. The 1-year injury prevalence increased with age. For youths with parents with higher formal education, boys reported more injuries and girls reported fewer injuries than expected; for youths with lower educated parents there was a tendency towards the opposite pattern. Youths reporting injuries had higher standardized BMI compared with youths not reporting injuries. Children not reporting full health were slightly overrepresented among those reporting injuries and underrepresented for those reporting no injury.
Pre-participation disparities in terms of parents’ educational level, through interaction with gender, BMI, and self-reported general health are associated with increased injury risk in community-based youth football. When introduced as a general health promotion, football associations should adjust community-based youth programs to accommodate children and adolescents with increased pre-participation injury risk.
There is a gap in knowledge about the mechanisms of sports-related brain injuries. The objective of this study was to determine the mechanisms of brain injuries among children and youth participating in team sports.
We conducted a retrospective case series of brain injuries suffered by children participating in team sports. The Canadian Hospitals Injury Reporting and Prevention Program (CHIRPP) database was searched for brain injury cases among 5–19 year-olds playing ice hockey, soccer, American football (football), basketball, baseball, or rugby between 1990 and 2009. Mechanisms of injury were classified as “struck by player,” “struck by object,” “struck by sport implement,” “struck surface,” and “other.” A descriptive analysis was performed.
There were 12,799 brain injuries related to six team sports (16.2% of all brain injuries registered in CHIRPP). Males represented 81% of injuries and the mean age was 13.2 years. Ice hockey accounted for the greatest number of brain injuries (44.3%), followed by soccer (19.0%) and football (12.9%). In ice hockey, rugby, and basketball, striking another player was the most common injury mechanism. Football, basketball, and soccer also demonstrated high proportions of injuries due to contact with an object (e.g., post) among younger players. In baseball, a common mechanism in the 5–9 year-old group was being hit with a bat as a result of standing too close to the batter (26.1% males, 28.3% females).
Many sports-related brain injury mechanisms are preventable. The results suggest that further efforts aimed at universal rule changes, safer playing environments, and the education of coaches, players, and parents should be pursued to maximize prevention of sport-related brain injury through a multifaceted approach.
Objective: Head/orofacial (H/O) injuries are common in Australian rules football. Mouthguards are widely promoted to prevent these injuries, in spite of the lack of formal evidence for their effectiveness.
Design: The Australian football injury prevention project was a cluster randomized controlled trial to evaluate the effectiveness of mouthguards for preventing H/O injuries in these players.
Setting and subjects: Twenty three teams (301 players) were recruited from the largest community football league in Australia.
Intervention: Teams were randomly allocated to either the MG (custom-made mouthguard) or C (control: usual mouthguard behaviours) study arm.
Main outcome measures: All injuries, participation in training and games, and mouthguard use were monitored over the 2001 playing season. Injury rates were calculated as the number of injuries per 1000 person hours of playing time. Adjusted incidence rate ratios were obtained from Poisson regression models.
Results: Players in both study arms wore mouthguards, though it is unlikely that many controls wore custom-made ones. Wearing rates were higher during games than training. The overall rate of H/O injury was 2.7 injuries per 1000 exposure hours and was higher during games than training. The adjusted H/O injury incidence rate ratio was 0.56 (95% CI 0.32 to 0.97) for MG versus C during games and training combined.
Conclusions: There was a significant protective effect of custom-made mouthguards, relative to usual mouthguard use, during games. However, the control players still wore mouthguards throughout the majority of games, which could have diluted the effect.
Despite the growing popularity of women's football and the increasing number of female players, there has been little research on injuries sustained by female football players.
Analysis of the incidence, characteristics and circumstances of injury in elite female football players in top‐level international tournaments.
Injuries incurred in seven international football tournaments were analysed using an established injury report system. Doctors of all participating teams reported all injuries after each match on a standardised injury reporting form. The mean response rate was 95%.
387 injuries were reported from 174 matches, equivalent to an incidence of 67.4 injuries/1000 player hours (95% CI 60.7 to 74.1) or 2.2 injuries/match (95% CI 2.0 to 2.4). Most injuries (84%; 317/378) were caused by contact with another player. The injuries most commonly involved the lower extremity (n = 248; 65%), followed by injuries of the head and neck (n = 67, 18%), trunk (n = 33, 9%) and upper extremity (n = 32, 8%). Contusions (n = 166; 45%) were the most frequent type of injury, followed by sprains or ligament rupture (n = 96; 26%) and strains or muscle fibre ruptures (n = 31; 8%). The most common diagnosis was an ankle sprain. There were 7 ligament ruptures and 15 sprains of the knee. On average 1 injury/match (95% CI 0.8 to 1.2) was expected to result in absence from a match or training.
The injury rate in women's top‐level tournaments was within the range reported previously for match injuries in elite male and female players. However, the diagnoses and mechanisms of injury among the female players differed substantially from those previously reported in male football players.
Data on the incidence, nature, severity and cause of football match injuries sustained on dirt fields are scarce. The objective of this study was to compare the incidence, nature, severity and cause of match injuries sustained on dirt and artificial turf fields by amateur male football players.
A prospective two-cohort design was employed. Participants were 252 male football players (mean age 27 years, range 18-43) in 14 teams who took part in a local championship played on a dirt field, and 216 male football players (mean age 28 years, range 17-40) in 12 teams who took part in a local championship played on an artificial turf field in the same zone of the city. Injury definitions and recording procedures complied with the international consensus statement for epidemiological studies of injuries in football.
The overall incidence of match injuries for men was 36.9 injuries/1000 player hours on dirt field and 19.5 on artificial turf (incidence rate ratio 1.88; 95% CI 1.19-3.05).
The most commonly injured body part was the ankle (26.7%) on the dirt field and the knee (24.3%) on the artificial turf. The most common injury types were skin injuries (abrasions and lacerations) on the dirt field, and sprains and ligament injuries, followed by haematomas/contusions/bruises, on the artificial turf.
Most injuries were acute (artificial turf 89%, dirt field 91%) and resulted from player-to-player contact (artificial turf 59.2%, dirt field 51.4%).
Most injuries in the dirt field cohort were slight or minimal, whereas most injuries in the artificial turf cohort were mild.
There were differences in the incidence and type of football match injuries sustained on dirt field and artificial turf.
Concussion in the National Football League (NFL) remains an important issue. An initial description of the injury epidemiology involved 6 years from 1996 to 2001.
The increased attention to concussions may have resulted in team physicians being more conservative in treating players in recent years.
Two consecutive 6-year periods (1996-2001 and 2002-2007) were compared to determine changes in the circumstances associated with the injury, the patterns of signs and symptoms, and the players’ time loss from participation in the NFL.
During 2002-2007, concussions were recorded by NFL team physicians and athletic trainers using the same standardized reporting form used from 1996 to 2001. Player position, type of play, concussion signs and symptoms, loss of consciousness, and medical action taken were recorded.
There were 0.38 documented concussions per NFL game in 2002-2007—7.6% lower than the 0.42 in the earlier period (1996-2001). The injury rate was lower in quarterbacks and wide receivers but significantly higher in tight ends during the second 6 years. The most frequent symptoms were headaches and dizziness; the most common signs were problems with information processing and immediate recall. During 2002-2007, a significantly lower fraction of concussed players returned to the same game, and more were removed from play. Most concussed players (83.5%) returned to play in < 7 days; the percentage decreased to 57.4% with loss of consciousness. The number of players returning in < 7 days was 8% lower during 2002-2007 and 25% lower for those with loss of consciousness.
The most recent 6 years of NFL concussion data show a remarkable similarity to the earlier period. However, there was a significant decrease in the percentage of players returning to the same game, and players were held out of play longer.
There was a more conservative management of concussion in NFL players from 2002 to 2007 even though the clinical signs and symptoms remained similar to the earlier 6-year period.
concussion; traumatic brain injury; injury epidemiology; sport injury prevention
In a previous prospective study, the risk of concussion and all injury was more than threefold higher among Pee Wee ice hockey players (ages 11–12 years) in a league that allows bodychecking than among those in a league that does not. We examined whether two years of bodychecking experience in Pee Wee influenced the risk of concussion and other injury among players in a Bantam league (ages 13–14) compared with Bantam players introduced to bodychecking for the first time at age 13.
We conducted a prospective cohort study involving hockey players aged 13–14 years in the top 30% of divisions of play in their leagues. Sixty-eight teams from the province of Alberta (n = 995), whose players had two years of bodychecking experience in Pee Wee, and 62 teams from the province of Quebec (n = 976), whose players had no bodychecking experience in Pee Wee, participated. We estimated incidence rate ratios (IRRs) for injury and for concussion.
There were 272 injuries (51 concussions) among the Bantam hockey players who had bodychecking experience in Pee Wee and 244 injuries (49 concussions) among those without such experience. The adjusted IRRs for game-related injuries and concussion overall between players with bodychecking experience in Pee Wee and those without it were as follows: injury overall 0.85 (95% confidence interval [CI] 0.63 to 1.16); concussion overall 0.84 (95% CI 0.48 to 1.48); and injury resulting in more than seven days of time loss (i.e., time between injury and return to play) 0.67 (95% CI 0.46 to 0.99). The unadjusted IRR for concussion resulting in more than 10 days of time loss was 0.60 (95% CI 0.26 to 1.41).
The risk of injury resulting in more than seven days of time loss from play was reduced by 33% among Bantam hockey players in a league where bodychecking was allowed two years earlier in Pee Wee compared with Bantam players introduced to bodychecking for the first time at age 13. In light of the increased risk of concussion and other injury among Pee Wee players in a league where bodychecking is permitted, policy regarding the age at which hockey players are introduced to bodychecking requires further consideration.
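The incidence rate ratios above compare injury counts per unit of player exposure between the two cohorts. As an illustration of the arithmetic (the counts and exposure hours below are hypothetical, not the study's raw data), an IRR and a Wald-type 95% confidence interval can be computed from counts and player-hours under a Poisson assumption:

```python
import math

def incidence_rate_ratio(events_a, hours_a, events_b, hours_b, z=1.96):
    """IRR of group A vs group B with a Wald 95% CI on the log scale.

    events_*: injury counts; hours_*: player-hours of exposure.
    Assumes Poisson-distributed counts (a common simplification;
    published IRRs are typically adjusted for clustering and covariates).
    """
    rate_a = events_a / hours_a
    rate_b = events_b / hours_b
    irr = rate_a / rate_b
    # Standard error of log(IRR) for independent Poisson counts
    se = math.sqrt(1 / events_a + 1 / events_b)
    lo = math.exp(math.log(irr) - z * se)
    hi = math.exp(math.log(irr) + z * se)
    return irr, lo, hi

# Hypothetical example: 51 concussions over 60 000 player-hours
# versus 49 concussions over 50 000 player-hours
irr, lo, hi = incidence_rate_ratio(51, 60_000, 49, 50_000)
```

A confidence interval that spans 1.0, as in several of the estimates reported above, indicates that the data are compatible with no difference in rates between groups.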
Objective To examine the effect of a comprehensive warm-up programme designed to reduce the risk of injuries in female youth football.
Design Cluster randomised controlled trial with clubs as the unit of randomisation.
Setting 125 football clubs from the south, east, and middle of Norway (65 clusters in the intervention group; 60 in the control group) followed for one league season (eight months).
Participants 1892 female players aged 13-17 (1055 players in the intervention group; 837 players in the control group).
Intervention A comprehensive warm-up programme to improve strength, awareness, and neuromuscular control during static and dynamic movements.
Main outcome measure Injuries to the lower extremity (foot, ankle, lower leg, knee, thigh, groin, and hip).
Results During one season, 264 players had relevant injuries: 121 players in the intervention group and 143 in the control group (rate ratio 0.71, 95% confidence interval 0.49 to 1.03). In the intervention group there was a significantly lower risk of injuries overall (0.68, 0.48 to 0.98), overuse injuries (0.47, 0.26 to 0.85), and severe injuries (0.55, 0.36 to 0.83).
Conclusion Though the primary outcome of reduction in lower extremity injury did not reach significance, the risk of severe injuries, overuse injuries, and injuries overall was reduced. This indicates that a structured warm-up programme can prevent injuries in young female football players.
Trial registration ISRCTN10306290.
Objective: To describe the epidemiology of injuries in the Australian Football League (AFL) over four seasons.
Methods: An injury was defined as "any physical or medical condition that caused a player to miss a match in the regular season." The rationale for this definition was to eliminate a previously noted tendency of team recorders to interpret injury definitions subjectively. Administrative records of injury payments to players who did not play matches determined the occurrence of an injury.
Results: The seasonal incidence of new injuries was 39 per club (of 40 players) per season (of 22 matches). The match injury incidence for AFL games was 25.7 injuries per 1000 player hours. The injury prevalence (percentage of players missing through injury in an average week) was 16%. The recurrence rate of injuries was 17%. The most common and prevalent injury was hamstring strain (six injuries per club per season, resulting in 21 missed matches per club per season), followed in prevalence by anterior cruciate ligament and groin injuries.
Conclusions: The injury definition of this study does not produce incidence rates that are complete for all minor injuries. However, the determination of an injury is made by a single entity in exactly the same manner for all teams, which overcomes a significant methodological flaw present in other multiteam injury surveillance systems.
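Rates such as the 25.7 match injuries per 1000 player hours reported above follow directly from an injury count and total exposure. A minimal sketch of that calculation (the roster size, match count, and injury count below are illustrative assumptions, not figures from the study):

```python
def injuries_per_1000_hours(injury_count, exposure_hours):
    """Injury incidence expressed per 1000 player-hours of exposure."""
    return 1000 * injury_count / exposure_hours

# Hypothetical club season: 18 players on the field x 22 matches
# x roughly 2 hours of match play each
exposure = 18 * 22 * 2              # 792 player-hours of match exposure
rate = injuries_per_1000_hours(20, exposure)  # assuming 20 match injuries
```

Expressing incidence per 1000 player-hours rather than as a raw count allows comparison across leagues with different season lengths and squad sizes.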
Background: Cross-sectional studies have indicated that neurocognitive performance may be impaired among football players. Heading the ball has been suggested as the cause, but recent reviews state that the reported deficits are more likely to be the result of head injuries.
Objective: To examine the association between previous concussions and heading exposure with performance on computer based neuropsychological tests among professional Norwegian football players.
Methods: Players in the Norwegian professional football league (Tippeligaen) performed two consecutive baseline neuropsychological tests (Cogsport) before the 2004 season (90.3% participation, n = 271) and completed a questionnaire assessing previous concussions, match heading exposure (self-reported number of heading actions per match), player career, etc. Heading actions for 18 players observed in two to four matches were counted and correlated with their self-reported values.
Results: Neither match nor lifetime heading exposure was associated with neuropsychological test performance. Nineteen players scored below the 95% confidence interval for one or more subtasks, but they did not differ from the rest regarding the number of previous concussions or lifetime or match heading exposure. The number of previous concussions was positively associated with lifetime heading exposure (exponent (B) = 1.97(1.03–3.75), p = 0.039), but there was no relation between previous concussions and test performance. Self-reported number of headings correlated well with the observed values (Spearman's ρ = 0.77, p<0.001).
Conclusion: Computerised neuropsychological testing revealed no evidence of neuropsychological impairment due to heading exposure or previous concussions in a cohort of Norwegian professional football players.
The aim of the study was to examine the relation between lower limb comfort scores and injury and to measure the responsiveness of a lower limb comfort index (LLCI) to changes over time in a cohort of professional footballers. Lower limb comfort was recorded for each individual using a comfort index that assessed the comfort status of five anatomical segments and footwear. Specifically, we tested the extent to which comfort zones as measured by the LLCI were related to injury measured as time loss events. The hypothesis for the study was that poor lower limb comfort is related to time loss events (training or match day). A total of 3524 player weeks of data was collected from 182 professional athletes encompassing three codes of football (Australian Rules, rugby league, rugby union). The study was conducted during the competition periods of the respective football leagues and included a period of pre-season training. Regression results indicated that poor lower limb comfort was highly correlated with injury (R2 = 0.77) and accounted for 43.5 time loss events per 1000 hours of football exposure. Although poor comfort was predictive of 47% of all time loss events, this relation was not statistically significant (R2 = 0.18). The results indicate that lower limb comfort can be used to assess the well-being of the lower limb; poor comfort is associated with injury, and the LLCI has good face validity and high criterion-related validity for the relationship between comfort and injury.
Key points: Comfort as a method to determine the well-being of athletes has a role in injury management. A lower limb comfort index is a mechanism by which lower limb comfort can be evaluated. Poor lower limb comfort is associated with injury in professional football. The use of comfort as a marker of athlete health has practical and clinical relevance to sports medicine professionals managing musculoskeletal injury.
Lower limb comfort; musculoskeletal; football; injury
Background: No previous study on adult football involving several different countries has investigated the incidence and pattern of injuries at the highest club competitive level.
Objective: To investigate the risk exposure, risk of injury, and injury pattern of footballers involved in UEFA Champions League and international matches during a full football season.
Method: Eleven top clubs (266 players) in five European countries were followed prospectively throughout the season of 2001–2002. Time-lost injuries and individual exposure times were recorded during all club and national team training sessions and matches.
Results: A total of 658 injuries were recorded. The mean (SD) injury incidence was 9.4 (3.2) injuries per 1000 hours (30.5 (11.0) injuries per 1000 match hours and 5.8 (2.1) injuries per 1000 training hours). The risk of match injury was significantly higher in the English and Dutch teams than in the teams from France, Italy, and Spain (41.8 (3.3) v 24.0 (7.9) injuries per 1000 hours; p = 0.008). Major injuries (absence >4 weeks) constituted 15% of all injuries, and the risk of major injury was also significantly higher among the English and Dutch teams (p = 0.04). National team players had a higher match exposure, with a tendency towards a lower training injury incidence than the rest of the players (p = 0.051). Thigh strain was the most common injury (16%), with posterior strains being significantly more common than anterior ones (67 v 36; p<0.0001).
Conclusions: The risk of injury in European professional football is high. The most common injury is the thigh strain typically involving the hamstrings. The results suggest that regional differences may influence injury epidemiology and traumatology, but the factors involved are unclear. National team players have a higher match exposure, but no higher risk of injury than other top level players.
Objective: To investigate the risks and benefits of the use of local anaesthetic in a descriptive case series from three professional football (rugby league and Australian football) teams.
Methods: Cases of local anaesthetic use (both injection and topical routes) and complications over a six year period were recorded. Complications were assessed using clinical presentation and also by recording all cases of surgery, incidences of players missing games or leaving the field through injury, and causes of player retirement.
Results: There were 268 injuries for which local anaesthetic was used to allow early return to play. There were 11 minor and six major complications, although none of these were catastrophic or career ending. About 10% of players taking the field did so with the assistance of local anaesthetic. This rate should be considered in isolation and not seen to reflect standard practice by team doctors.
Conclusions: The use of local anaesthetic in professional football may reduce the rates of players missing matches through injury, but there is the risk of worsening the injury, which should be fully explained to players. A procedure should only be used when both the doctor and player consider that the benefits outweigh the risks.
OBJECTIVE: To define the causes of injuries to players in English professional football during competition and training. METHOD: Lost time injuries to professional and youth players were prospectively recorded by physiotherapists at four English League clubs over the period 1994 to 1997. Data recorded included information related to the injury, date and place of occurrence, type of activity, and extrinsic playing factors. RESULTS: In all, 67% of all injuries occurred during competition. The overall injury frequency rate (IFR) was 8.5 injuries/1000 hours, with the IFR during competitions (27.7) being significantly (p < 0.01) higher than that during training (3.5). The IFRs for youth players were found to increase over the second half of the season, whereas they decreased for professional players. There were no significant differences in IFRs for professional and youth players during training. There were significantly (p < 0.01) more injuries in competition in the 15 minute periods at the end of each half. Strains (41%), sprains (20%), and contusions (20%) represented the major types of injury. The thigh (23%), ankle (17%), knee (14%), and lower leg (13%) represented the major locations of injury, with significantly (p < 0.01) more injuries to the dominant body side. Reinjury accounted for 22% of all injuries. Only 12% of all injuries were caused by a breach of the rules of football, although player to player contact was involved in 41% of all injuries. CONCLUSIONS: The overall level of injury to professional footballers has been shown to be around 1000 times higher than that for industrial occupations generally regarded as high risk. The high level of muscle strains in particular indicates possible weaknesses in clubs' fitness training programmes and their warming up and cooling down procedures, and the need for benchmarking players' levels of fitness and performance.
Increasing levels of injury to youth players as a season progresses emphasize the importance of controlling the exposure of young players to high levels of competition.