J Athl Train. 2006; 41(3): 231–232.

“I Can't Believe We Don't Know That!”

The April–June issue of the Journal of Athletic Training was bundled with this year's Supplement containing the abstracts of research presented in the Free Communications Program of the 2006 NATA Annual Meeting & Clinical Symposia in Atlanta, GA. The 2006 Supplement contained 247 abstracts, more than double the 117 abstracts we saw 10 years ago in our 1996 Supplement. In fact, the number of abstracts submitted this year is nearly triple the number submitted just 7 years ago.1 At no time in our history have we seen a greater quantity of athletic training research or a greater focus by athletic trainers on using research to guide clinical practice. This shift away from relying on clinical anecdotes to guide care and toward using demonstrated best practices and outcome-based evidence is critical to our continuing development as a profession and our efforts in the legislative and reimbursement arenas.

Although we are in the midst of a research revolution in sports medicine, there is still much we don't know. You might speculate that the unknown lies somewhere in the small details of advanced research questions, such as the supraspinal contribution to neuromuscular inhibition—and you would be partially correct. The reality, however, is that the unknown also lies at the very heart of many of our everyday practices, the very kinds of things that make my students say, “I can't believe we don't know that!” Using my own research area as an example, some fundamental questions for which we don't have answers include, “How soon do we need to apply cold?” “How long should we apply the cold?” and “How often should we apply the cold?” Although all 3 of these are questions for which instructors and textbooks attempt to provide answers, we don't have meaningful data to actually answer any of them.

This brings us to a major but often-ignored problem in our research practices. Instead of asking the most important questions, we tend to go after the “low-hanging fruit” and ask the questions that are easier to answer. The 2 areas in which we most need to focus our efforts are (1) randomized clinical trials to establish the effectiveness of our treatment practices and (2) studies that identify and explain mechanisms by which these treatments work and that build theories we can use to improve our treatments. Unfortunately, both types of studies are difficult to do. Clinical trials are our most important need and the medical community's gold standard for determining whether treatments are effective. They can also be expensive and time consuming and require large numbers of patients who are often difficult to find unless a multicentered approach is taken. Few athletic trainers have experience with clinical trials, and we need to train our students to both value them and perform them. Mechanistic studies usually involve advanced research techniques, well-equipped laboratories, and above all else, researchers who have specific expertise in the theoretic foundations of the research topic. We see numerous attempts at mechanistic studies that fall short in one or more of these requirements.

Although some excellent research is indeed being conducted, much of our research is more opportunity based than question driven. Many researchers ask, “What question can we answer with our current equipment or in our available timeframe?” instead of “What question should we be asking to make a difference?” We use convenience samples of student subjects because our students are far easier and faster to recruit than are the patients to whom we want to generalize our results. For example, we simply cannot continue to study uninjured college students without range-of-motion limitations to determine if ultrasound and diathermy are effective at improving range of motion in injured patients. Similarly, it is difficult to justify studying untrained young women with typical neuromuscular coordination to tell us why highly trained athletic women with well-developed neuromuscular coordination suffer anterior cruciate ligament injuries. In both cases, it is easier to study our readily available student volunteers, but without an appropriate subject population, the studies don't answer their intended questions.

Another barrier to performing our most needed research is our culture. We tend to think of people who do research as being a different group than clinical practitioners. The reality is that every clinical practitioner needs to think of himself or herself as a clinical researcher. As a profession, we clamor for outcomes research, yet as individuals, we are hesitant to participate in outcomes studies with our own patients. The usual excuses are that these studies are too time consuming for busy professionals or that they ask us to treat our patients in a standardized way that differs from what we normally do. We also have a misplaced belief that we will put our athletes in jeopardy by allowing them to be randomized into treatment groups. Because we need to get our athletes back on the field as quickly as possible, we think we must use every available means to treat them, regardless of whether there is any evidence that these treatments will actually improve their outcome. Even worse, the mere suggestion that one of our athletes could be randomized to a placebo group is nearly enough to cause a riot among many practitioners. We tend to put the interests of our individual patients above the greater good for all patients who would be served by participating in a clinical trial. Although caring for the interests of our individual patients is our first responsibility, the argument that we should exclude them from a clinical trial in order to provide them our “best care” is moot if we don't have the evidence to identify what the best care is.

In my role on the Biomedical Sciences Institutional Review Board at a large academic medical center, I see scores of clinical trials from other medical professionals that involve randomizing patients (not just subjects) into different treatment groups or even into treatment and placebo groups. These are accepted, standard, everyday medical research methods that are instilled in most medical professionals while they are students. They live these methods every day and value their importance. Our profession's leadership is enthusiastically pursuing these kinds of investigations, yet these studies remain antithetical to the culture we see evidenced in many individual practitioners. We need to understand that research extends down to every decision we make with every patient, and we must work to obtain the evidence to help us make these decisions. Until we are ready to ask the important questions and individually embrace participating in the right kinds of studies to answer these questions, we will not obtain the kind of data we most need for furthering the profession and improving the care of our individual patients.

Footnotes

Editor's Note: Mark A. Merrick, PhD, ATC, is the Director of the Athletic Training Division in the School of Allied Medical Professions at the Ohio State University and a Journal of Athletic Training Section Editor.

REFERENCE

1. Johns LD. [Letter]. J Athl Train. 2006;41(suppl 2):S-2.
