A rapid evolution is occurring in medicine and many allied health professions to establish the evidence that drives the clinical decision-making process used in treating patients. The time is certainly now for the athletic training profession to establish the evidence influencing the way we care for our patients, particularly as we continue to progress toward increasing third-party reimbursement recognition for the health care services we provide. At this point in our profession's history, the stakes couldn't be higher.
We have often learned, and perhaps even taught, that much of what we know and do in rehabilitating a patient or athlete after injury is “more of an art than a science.” Although this response may have been acceptable to the question of why a certain therapeutic modality was administered in treating a soft tissue injury 15 years ago, it is simply not acceptable today. To establish our evidence, we need to conduct research studies that ultimately will allow us to determine the efficacy of a treatment or series of treatments over time on specific clinical outcomes. Over the past 5 to 10 years, the athletic training profession has experienced rapid advancement in its research. This has been critical in continuing to build our profession's knowledge base. Although our research volume has increased tremendously (mostly in experimental, mechanistic-based studies), we need to increase the number of outcomes-based studies (clinical trials) that will allow us to establish the efficacy of our treatments. Further, as researchers, we need to think fundamentally differently about how we present our research data. We must illustrate our data in a way that is meaningful for clinicians, so that they can use them to improve their practice and standard of care.
As consumers of research literature, we are most familiar with interpreting study results derived from traditional hypothesis testing. In this approach, we look to determine whether groups differ on a dependent measure after different treatments, according to an a priori established probability level (typically .05). The problem with this approach is that dichotomous decisions are then made from the results. Whether or not the groups differ, very little information can be discerned from the P value alone, which only reflects statistical significance: it represents the probability of observing a difference at least as large as the one found if, in truth, the treatment had no effect. Unfortunately, what is missing from interpreting the P value alone is the clinical significance of the finding. Often, “significant results” include clinically irrelevant differences because a large sample was studied. Conversely, looking at the P value alone can also result in a clinically meaningful finding being ignored because the value was greater than .05. Collectively, the information provided by the P value itself is limiting: it fails to indicate the magnitude and certainty of the treatment effect, and it provides no estimate of what the treatment effect would be in the population being studied. This reduces the clinician's decision-making power and critical thinking.
As researchers, our obligation is to present data in a manner that allows the clinician to answer the following questions: How confident are we that the treatment investigated is effective? And, do the treatment benefits outweigh the potential risks in administering it? In disseminating research findings to clinicians to help them answer these questions, we need to focus on characterizing the clinical significance of our results, so that the magnitude and direction of the effects can be interpreted. The use of confidence intervals (CIs) provides great insight into the clinical significance of results and offers an alternative to traditional hypothesis testing and interpretation of the P value alone.1 Moreover, CIs can also be applied to analyses involving nonparametric data. A CI is an interval estimate: a range of values around the sample statistic that is likely to contain the true population value at a given level of confidence (eg, 95%). The main importance of the CI is in providing information about the variability of the observed statistic (eg, mean, odds ratio) and, therefore, its precision, as well as the accuracy with which the interval captures the population parameter (ie, the true value). The use of CIs is not new to biomedical or clinical trials research; wide use in medical journals dates back to the mid-1980s. Additionally, leading medical journals have issued authoritative calls for authors to report CIs in their results and to discourage reliance on P values alone.2,3
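To make the idea concrete, the following is a minimal sketch of how a 95% CI for the difference in means between two treatment groups can be computed. The data values, group names, and the critical t value are hypothetical illustrations, not taken from any study cited here:

```python
import math
from statistics import mean, stdev

def mean_diff_ci(group_a, group_b, t_crit):
    """95% CI for the difference in means of two independent groups.

    t_crit is the two-tailed critical t value for the desired
    confidence level and degrees of freedom (n_a + n_b - 2),
    looked up from a standard t table.
    """
    n_a, n_b = len(group_a), len(group_b)
    diff = mean(group_a) - mean(group_b)
    # Pooled variance, then standard error of the mean difference
    sp2 = ((n_a - 1) * stdev(group_a) ** 2 +
           (n_b - 1) * stdev(group_b) ** 2) / (n_a + n_b - 2)
    se = math.sqrt(sp2 * (1 / n_a + 1 / n_b))
    return diff - t_crit * se, diff + t_crit * se

# Hypothetical range-of-motion gains (degrees) under two treatments
treated = [12.1, 9.8, 11.4, 10.9, 12.6, 10.2]
control = [8.7, 9.1, 7.9, 9.5, 8.2, 8.8]
# Two-tailed critical t for 95% confidence, df = 10, is about 2.228
low, high = mean_diff_ci(treated, control, t_crit=2.228)
print(f"95% CI for mean difference: ({low:.2f}, {high:.2f})")
```

Because the entire interval lies above zero, a clinician can see not only that the treatments differ but by how much, in clinical units: the plausible benefit ranges from a modest to a substantial gain, which is exactly the magnitude information a P value alone cannot convey.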
Ultimately, the research we perform and disseminate needs to make its way into clinical practice; otherwise it is of little value. Let's begin to use CIs when presenting our research results in lieu of the traditional P value in JAT. Although this departure from our traditional statistical comfort zone may be difficult, we as researchers need to continue to do our part in establishing the knowledge base for our profession. Reporting CIs will maximize the usefulness of data published in our journal; more importantly, they will give our practitioners meaningful data that will provide the evidence needed to improve clinical practice and the standard of care provided by certified athletic trainers.
Editor's Note: Mitchell L. Cordova, PhD, ATC, FACSM, is Chairperson and Associate Professor in the Department of Kinesiology at the University of North Carolina at Charlotte. He is also a Section Editor for JAT.