J Athl Train. 2003 Jul-Sep; 38(3): 194–195.
PMCID: PMC233170

Letters to the Editor

Kristinn I. Heinrichs, PhD, PT, SCS, ATC, CSCS

I read with interest the commentary on curriculum development in the December 2002 Supplement (Starkey C. Curriculum development. J Athl Train. 2002;37[4 suppl]:S180–S181). The commentary highlights the paradigm shift that must occur in athletic training education as the educational expectations of faculty and students continue to move toward the professional educational standards of other health care professions, including medicine, physical therapy, and pharmacy. Unlike the dichotomies drawn between the roles of academic and clinical faculty or graduate and undergraduate learners, the professional expectations of entry-level athletic training education are the same regardless of whether the academic program is at the baccalaureate or master's degree level. Clinical faculty are an integral component of education in all health professions, and clinicians in other disciplines face challenges similar to those of clinical faculty in athletic training, including job responsibilities, time constraints, and expectations (eg, the expectations of academe, which are research, service, and teaching, versus those found in athletic departments or clinical practice). The challenge facing athletic training education programs is to take advantage of their clinicians' years of experience: a problem-based learning (PBL) curriculum allows the talents of clinicians and academicians to be utilized both in developing the clinical cases that form the core of the PBL curriculum and in serving as tutors.1,2

In my manuscript,2 I provided an athletic training example of an acute orthopaedic injury, the knowledge required to solve the problem, and the complexity of the knowledge links to highlight the integration of the basic sciences with clinical application. The education and socialization of the athletic training student do not take place in a vacuum, nor is one group responsible for one area of student development (“one group provides knowledge and skills and the other group is responsible for the student's introduction and socialization into the profession”); rather, the student's professional development is the result of student-centered learning and a collaborative effort of all the faculty, both academic and clinical, with whom the student interacts. Learning must take place in the context of clinical practice, rather than independent of the clinical-reasoning process. Differences in theory and technique between academic and clinical staff are to be expected (and are healthy) and do not necessarily cause adversarial relationships between the two groups, as suggested in the commentary. In reality, students will experience diversity in assessment and management approaches; differences of opinion force students to constantly evaluate the information they receive. By their nature, clinical problems are “messy,” ill-defined, and often characterized by more than one approach to the solution.1 The development of a PBL curriculum requires “real” clinical problems that have been carefully selected for their richness and depth of content; the reader is referred to the sample concept map2 in my manuscript for an example. Concept mapping encourages students to organize and systematize their knowledge, identify knowledge gaps, and be explicit about relationships among ideas.3

The commentary highlights the misunderstandings that often occur when the term PBL is used without a full understanding of the cognitive educational theory underlying the instructional-design methods. The PBL method emphasizes designing a learning environment, rather than instructional sequences, in which learners work together, using a variety of tools and information resources in their guided pursuit of learning.3 Cognitive theory focuses on the way information is stored in the brain: learning involves creating new links between pieces of information and grouping knowledge. This method moves away from the behaviorist theory of “traditional” instruction and focuses attention on learning and the learner.3

The PBL approach is not restricted to adult learners, as implied by the commentary. In fact, this approach has been successfully employed in the K through 12 educational levels4 and in other disciplines (A. Kelson, personal communication, 2001); perhaps the “problem” has been too narrowly defined in the commentary. Problems are not always negative situations requiring a remedy; problems can also be the search for a better way to do something or a “goal where the correct path to the solution is not known.”5 The commentary draws the incorrect conclusion that “adult learning styles and PBL are overemphasized during the student's undergraduate education.”

The literature was misinterpreted in the commentary: rather than overemphasizing “adult learning styles” and PBL methods, the traditional methods are soundly criticized by both Parsell and Bligh6 and Donaldson et al.7 The traditional teaching methods of subject-based information (eg, lectures; shallow cases with one clear solution; nonadaptive, static, Internet-based environments that are ignorant of the individual knowledge state of the user3,8) are increasingly recognized for their failure to prepare students for today's professional environment, in which value is placed on problem-solving skills, critical analysis, and decision-making skills.6

Subject-based information is transmitted to an audience of passive learners. At the undergraduate level, students find great difficulty understanding relationships among scientific concepts acquired in separately taught disciplines and relating them effectively to clinical practice. This situation is perpetuated by an examination and assessment system that stresses the need to memorize a large number of facts, forcing students to become dependent, rather than independent, learners. It is now widely accepted that this preparation is inadequate training for professional practice in a changing social and medical climate.6

Furthermore, the commentary implied that students' abilities to succeed with PBL are based on maturity, rather than on a well-designed, student-centered instructional curriculum. Students' abilities to understand the relationships are a function of an instructional design grounded in cognitive learning theory, not maturity. The commentary cited Donaldson et al7 in support of the “overemphasis of the importance of adult learning styles and problem-based learning.” However, a closer examination of the Donaldson et al7 paper reveals that the survey of 13 nontraditional undergraduate students (age greater than 27 years) demonstrated that the older undergraduate students recognized the difference between “making the grade” and “learning.” Learning was defined as having ownership of the material and being able to apply the learning to real-world problems—a hallmark of PBL. Paradoxically, the adult learners' success strategies included repetition and memorization of facts, cramming, and using mnemonic devices—all of which resulted in “good” test grades but did not focus on deeper understanding and improved retention through active learning. If information is retrieved in the same manner in which it is stored, memorization and cramming do not guarantee the student will be able to use the information to solve a problem. The older students focused exclusively on “success” in college as measured by success on testing, using assessment methods that are increasingly seen as inadequate in preparing tomorrow's professionals.7 This success is externally defined and imposed and is fundamentally different from the success that occurs when students own and internalize the learning in a meaningful way. Donaldson et al's interpretations are limited by the fact that the interviews were conducted only with nontraditional-aged students. 
There was no corresponding sample of traditional-aged undergraduate students to explore how their perceptions of learning differed from those of their older counterparts. Furthermore, students' perceptions of learning are likely also dependent on the learning environment (ie, traditional passive lecture versus active PBL). Therefore, the current lens through which we measure academic success may not be the correct one. Perhaps a new perspective should be used to explore how educational policy must change to prepare today's student for tomorrow.

As the commentary correctly pointed out, aviation training consists in part of learning the basic technical skills associated with piloting an aircraft. However, reducing the problem of piloting an aircraft to “how to avoid crashing” is far too narrow and simplistic a definition of the problem. Although it is true that memorization and training aids are important for learning basic behavior and discipline, the most difficult aspect of pilot training is learning how to behave and think in a manner that minimizes risk and the chance of fatal mistakes. The analogous training aids are as fundamental as basic ankle-taping skills: unless these basics (anatomy, psychomotor skills) are mastered, the learning level should not be advanced (assessment, decision making, rehabilitation principles). In the same way, unless the pilot masters the basics of flight, he or she cannot understand the importance of sophisticated fluid dynamics or “fly-by-wire” technology. Without PBL training, the pilot of the United Airlines DC-10 that lost its number 2 engine over Iowa in 1989 would never have been able to make the emergency landing. Perhaps with better education, training, and learned discipline, John F. Kennedy Jr might never have taken off for the last time. Piloting a small private plane is fundamentally different from piloting an Airbus A340, in which leadership, decision-making skills, teamwork, discipline, and crew resource management are critical. The managerial and leadership skills of the pilot are as important as the technical skills. The aviation industry has already turned to PBL, computer simulations, and simulator training to develop the problem-solving skills and teamwork required to successfully manage a long-haul flight (J.D. Rodriguez, personal communication, 2003). In fact, the concept of crew resource management is so powerful that medicine has adopted this paradigm for surgical training. 
Similarly, on-field management of sports trauma contains those same principles used in crew resource management.2 Memorizing checklists and rote knowledge is a very small, but important, part of solving dynamic problems encountered during flight; the problem-solving and decision-making skills are more crucial. The aviation example I used2 also highlighted the diverse use of PBL in learning environments in one discipline ranging from K through 12 to the most advanced aviation education research using PBL. The reader is encouraged to return to the manuscript to view the interactive Web sites cited in this example.

Cognitive theorists and educators are correct in recognizing that today's students, be they schoolchildren or adults, must acquire the generic skills and personal characteristics of independent and self-directed learners in order to become life-long learners.1,2,6,8 Learning is an active process of constructing, rather than acquiring, knowledge. Instruction is a process of supporting that construction, rather than communicating knowledge.9 Perhaps in modern society, where scientific advances outpace our ability to learn every single fact, we would be wise to remember the words of John Dewey, in the early 20th century, when he argued against an educational framework of memorization and recitation in order to meet the demands of the new industrial age, saying, “education is not a preparation for life, it is life itself.”9


1. Barrows HS. The Tutorial Process. Springfield, IL: Southern Illinois University Medical School; 1992.
2. Heinrichs KI. Problem-based learning in entry-level athletic training professional education programs: a model for developing critical-thinking and decision-making skills. J Athl Train. 2002;37(4 suppl):S189–S198. [PMC free article] [PubMed]
3. Eklund J, Woo R. A cognitive perspective for designing multimedia learning environments. In: Proceedings of the Australasian Society for Computers in Learning in Tertiary Education. Wollongong, New South Wales, Australia; 1998:181–190.
4. The NASA SCIence Files. Available at: Accessed April 30, 2003.
5. Southern Illinois University School of Medicine. Problem-based learning initiative. Available at: Accessed May 20, 2003.
6. Parsell G, Bligh J. Contract learning, clinical learning and clinicians. Postgrad Med J. 1996;72:284–289. [PMC free article] [PubMed]
7. Donaldson JF, Graham SW, Martindill W, Bradley S. Adult undergraduate students: how do they define their experience and success? J Contin Higher Educ. 2000;48:2–11.
8. Barzak MY, Ball PA, Ledger R. The rationale and efficacy of problem-based learning and computer-assisted learning in pharmaceutical education. Pharm Educ. 2001;1:105–113.
9. Lefoe G. Creating constructivist learning environments on the web: the challenge in higher education. In: Proceedings of the Australasian Society for Computers in Learning in Tertiary Education. Wollongong, New South Wales, Australia; 1998:453–464.
J Athl Train. 2003 Jul-Sep; 38(3): 195–196.

I read with pleasure the recent supplement to the Journal of Athletic Training (JAT) focusing on athletic training education. I also attended the 2003 Athletic Training Educators' Conference and was pleased to see that interest and production in athletic training educational research have grown substantially over the past decade.

Studies using survey research are now common in JAT; this is very evident in the recent supplement. A total of 8 articles using survey-research methods were published in JAT in 2002, 6 of which were in the education supplement. Overall, the quality of research in JAT, and of survey research specifically, has improved. However, one area that is consistently problematic is survey-sampling methods.

For example, in one article (Stradley SL, Buckley BD, Kaminski TW, et al. A nationwide learning-style assessment of undergraduate athletic training students in CAAHEP-accredited athletic training programs. J Athl Train. 2002;37[4 suppl]:141–146), the authors attempted to select a nationwide sample of undergraduate athletic training students at CAAHEP-accredited institutions. Theoretically, this is the appropriate underlying population of interest to determine the prevalence of learning styles and to test their research question regarding potential geographic differences in learning styles among athletic training students. However, several problems with the sample design and analysis likely make the final sample nonrepresentative of the target population (all CAAHEP-accredited undergraduate athletic training students) and also may be responsible for the nonsignificant findings regarding geographic differences.

The authors first chose 50 CAAHEP-accredited programs using a stratified (by NATA district), random-sample, proportionate-to-size (the number of programs in each district) method. Program directors at each selected institution then randomly selected 10 students to complete the survey instrument, with one exclusion criterion: the student had to have attended grades 6–12 in the same region as the institution he or she attended.

Several problems exist with this sample design. First, the method of randomization used to select programs and students is not described. This is a common problem in many publications; however, when the randomization methods are elucidated, the details reveal that most do not ensure an accurate random sample. It is not an easy endeavor to select a true random sample, and this is a major source of bias in survey and clinical research.1–3 Second, the exclusion criterion used by program directors to select students effectively negates the complex random-sampling methods because not all subjects will have a known probability of selection, a requirement for a representative sample.1,4 Third, disregarding the exclusion-criterion problem, the proposed sampling methods constitute a complex (multilevel: sampling programs first and students second), stratified (by NATA district), and randomly selected (proportionate-to-size) design. Complex sample designs such as this require special statistical manipulation to account for the unequal probability of selection, which is accomplished by computing statistical weights (inverse of the probability of selection corrected for nonsampling error, over- and undersampling, response rate, etc) and using these weights in the analysis.1,4 Special statistical software (eg, SUDAAN [RTI Intl, Research Triangle Park, NC] or Stata [Stata Corp, College Station, TX]) must be used to analyze complex survey data. A simple random-sample design is the only probability sampling method that does not require special statistical manipulation.
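To make the weighting concrete, the sketch below (hypothetical counts, not drawn from the study in question) computes a design weight as the inverse of the overall probability of selection for a two-stage sample of the kind described: programs sampled within a district, then students sampled within a program.

```python
# Illustrative sketch only (all counts are hypothetical): the design weight
# for a two-stage sample is the inverse of the overall selection probability.

def design_weight(programs_in_district, programs_sampled,
                  students_in_program, students_sampled):
    """Return 1 / P(selection) for a student in this district and program."""
    p_program = programs_sampled / programs_in_district   # stage 1: program
    p_student = students_sampled / students_in_program    # stage 2: student
    return 1.0 / (p_program * p_student)

# A student surveyed (10 of 20 eligible) at a program chosen in a district
# where 5 of 10 accredited programs were sampled:
print(design_weight(10, 5, 20, 10))  # -> 4.0
```

In practice, as the letter notes, these base weights would be further adjusted for nonresponse and over- or undersampling before analysis.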

The lack of statistical weighting most notably influences the standard errors, which then influence the accuracy of the statistical testing used to determine geographic differences. This may be one reason why the authors did not find any geographic differences in learning styles. The lack of statistical weighting also precludes accurate estimates of the distribution of learning styles among athletic training students, because districts with a large proportion of CAAHEP-accredited programs will be overrepresented when unweighted estimates are reported.
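A toy comparison (invented numbers, not the study's data) shows how an overrepresented district biases an unweighted prevalence estimate relative to the weighted one:

```python
# Hypothetical sketch: district A contributes 4 of 6 sampled students but a
# smaller share of the population (reflected in its lower design weight), so
# the unweighted prevalence of learning style X is pulled toward district A.

sample = [
    # (district, design_weight, has_style_x)
    ("A", 2.0, True), ("A", 2.0, True), ("A", 2.0, False), ("A", 2.0, True),
    ("B", 8.0, False), ("B", 8.0, True),
]

unweighted = sum(x for _, _, x in sample) / len(sample)
weighted = sum(w for _, w, x in sample if x) / sum(w for _, w, _ in sample)
print(round(unweighted, 3), round(weighted, 3))  # -> 0.667 0.583
```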

I applaud the authors for attempting to use complex designs in athletic training survey research. However, due to several fatal problems, the resulting sample in this study is merely a sample of convenience with limited generalizability beyond the actual sample and, therefore, is not a true nationwide sample as suggested in the title. Of note, the authors do acknowledge this limitation and promote cautious interpretation of their results due to the “low number of subjects in each region.” The actual geographic-region sample sizes are not reported; however, the authors used the chi-square test, which is usually robust when expected cell counts are greater than 5. Therefore, it is unlikely that low sample sizes had a major influence on their negative findings regarding regional differences.
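For readers unfamiliar with the test mentioned above, a minimal chi-square test of independence (invented counts; the learning-style-by-region table is purely illustrative) can be computed by hand as follows:

```python
# Toy illustration of a chi-square test of independence: learning style
# (rows) by region (columns). Counts are invented; every expected cell
# count here exceeds 5, the usual robustness rule of thumb.

table = [
    [12, 15, 9],   # style 1 by region
    [10, 8, 14],   # style 2 by region
]

row_totals = [sum(row) for row in table]
col_totals = [sum(col) for col in zip(*table)]
grand = sum(row_totals)

chi2 = 0.0
for i, row in enumerate(table):
    for j, observed in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand
        chi2 += (observed - expected) ** 2 / expected

df = (len(table) - 1) * (len(col_totals) - 1)
# The statistic is compared with the critical value 5.99 (df=2, alpha=.05).
print(f"chi2={chi2:.2f}, df={df}")
```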

Unfortunately, nonprobability samples (such as convenience samples) are common in athletic training-related survey research. In 2002, of the 8 survey research articles published in JAT, one set of authors used appropriate probability sampling and analytic methods,5 one set surveyed the entire population of interest,6 and one set may have used the appropriate methods, but not enough information was provided to make a complete determination.7 Although useful for pilot testing and survey development, nonprobability samples are often biased because the subjects selected are the easiest to find or are those subjects most likely to respond.1 Subsequently, the selected sample will not be representative of the population of interest, and the results will have limited applicability.

To improve the quality of survey research in athletic training, investigators need to use strong sampling methods and the appropriate analytic methods. This may require that researchers seek advanced training in survey methodology, solicit consultation from an experienced survey-sampling statistician in the early stages of study design, or both. In addition, the editors and reviewers of JAT should exercise critical judgment when reviewing manuscripts on survey research, especially with regard to the methods and the generalizability of the conclusions.


1. Rea LM, Parker RA. Designing and Conducting Survey Research: A Comprehensive Guide. San Francisco, CA: Jossey-Bass; 1997. pp. 128–144.
2. Schulz KF, Grimes DA. Generation of allocation sequences in randomized trials: chance, not choice. Lancet. 2002;359:515–519. [PubMed]
3. Begg C, Cho M, Eastwood S, et al. Improving the quality of reporting randomized controlled trials: the CONSORT Statement. JAMA. 1996;276:637–639. [PubMed]
4. Lohr SL. Sampling: Design and Analysis. Pacific Grove, CA: Duxbury Press; 1999. pp. 2–6, 23–58.
5. Laurent T, Weidner TG. Clinical-education-setting standards are helpful in the professional preparation of employed, entry-level certified athletic trainers. J Athl Train. 2002;37(suppl):S-248–S-254. [PMC free article] [PubMed]
6. Hawn KL, Visser MF, Sexton PJ. Enforcement of mouthguard use and athlete compliance in National Collegiate Athletic Association men's collegiate ice hockey competition. J Athl Train. 2002;37:204–208. [PMC free article] [PubMed]
7. Cuppett M, Latin RW. A survey of physical activity levels of certified athletic trainers. J Athl Train. 2002;37:281–285. [PMC free article] [PubMed]
J Athl Train. 2003 Jul-Sep; 38(3): 196–197.

Authors' Response

We would like to thank Dr Hootman for her insightful and critical review of our paper, “A Nationwide Learning-Style Assessment of Undergraduate Athletic Training Students in CAAHEP-Accredited Athletic Training Programs,” and welcome this opportunity to respond. Although we do not have answers to all the questions she has raised, we will address what we feel are the 2 primary issues. Such thought-provoking insights challenge us as researchers to develop even better survey-oriented research studies in the future. We feel this is especially important because an increasing number of scholars in our profession are actively engaged in survey research.

First, we acknowledge Dr Hootman's concerns regarding the manner in which we sampled our population and her suggestions for improving our methods. However, we stated several times in our discussion the limitations inherent to our particular study. Despite our unintentional use of a “sample of convenience,” we believe our study does contribute to the existing literature in this area. The methods certainly could have been improved. Nonetheless, we maintain that the methods we employed are acceptable given that investigations on this topic in athletic training education are (1) new and exploratory and (2) almost impossible to conduct without using methods similar to those we chose to incorporate. We agree that our use of the wording “random sample” may be a bit misleading. We did choose the CAAHEP-accredited programs by randomly drawing from a population of all accredited programs; however, as Dr Hootman pointed out, students at each school had to meet a specific criterion in order to be selected. Interestingly, several survey-research studies previously published in the Journal of Athletic Training failed to report how students were selected. It would be helpful if the Journal of Athletic Training would publish recommendations on how random selection should be handled in future submissions.

Dr Hootman also expressed concern regarding our statistical design and analysis. Our analyses were based on studies that used similar techniques and had been reported previously in the allied health literature. We agree with Dr Hootman that researchers should solicit expert advice from those extensively involved in survey research. We not only sought advice from experts in survey research but also put our manuscript forward for peer review.

As the profession of athletic training continues the transition from hour-based to competency-based clinical-education models, studies such as ours help to answer small questions. More importantly, they open up a broader dialogue by bringing forth more questions that need to be answered. We are confident that our work, despite its flaws, did make a contribution to the existing body of knowledge. We are optimistic that future research on this topic will enlighten us all even more.
