J Gen Intern Med. 2007 May; 22(5): 655–661.
Published online 2007 February 23. doi: 10.1007/s11606-007-0103-x
PMCID: PMC1852913

A Ten-Month Program in Curriculum Development for Medical Educators: 16 Years of Experience

Donna M. Windish, MD, MPH (corresponding author),1,2 Aysegul Gozu, MD, MPH,3 Eric B. Bass, MD, MPH,3 Patricia A. Thomas, MD,3 Stephen D. Sisson, MD,3 Donna M. Howard, DrPH,3 and David E. Kern, MD, MPH3



BACKGROUND: Despite increased demand for new curricula in medical education, most academic medical centers have few faculty with training in curriculum development.


OBJECTIVE: To describe and evaluate a longitudinal mentored faculty development program in curriculum development.


DESIGN: A 10-month curriculum development program operating one half-day per week of each academic year from 1987 through 2003. The program was designed to provide participants with the knowledge, attitudes, skills, and experience to design, implement, evaluate, and disseminate curricula in medical education using a 6-step model.


PARTICIPANTS: One hundred thirty-eight faculty and fellows from Johns Hopkins and other institutions and 63 matched nonparticipants.


MEASUREMENTS: Pre- and post-surveys from participants and nonparticipants assessed skills in curriculum development, implementation, and evaluation, as well as enjoyment in curriculum development and evaluation. Participants rated program quality, educational methods, and facilitation in a post-program survey.


RESULTS: Sixty-four curricula were produced addressing gaps in undergraduate, graduate, or postgraduate medical education. At least 54 curricula (84%) were implemented. Participant self-reported skills in curricular development, implementation, and evaluation improved from baseline (p < .0001), whereas no improvement occurred in the comparison group. In multivariable analyses, participants rated their skills and enjoyment at the end of the program significantly higher than nonparticipants (all p < .05). Eighty percent of participants felt that they would use the 6-step model again, and 80% would recommend the program highly to others.


CONCLUSIONS: This model for training in curriculum development has long-term sustainability and is associated with participant satisfaction, improvement in self-rated skills, and implementation of curricula on important topics.

KEY WORDS: curriculum development, faculty development


Medical education requires ongoing curriculum development (CD) to incorporate new knowledge and competencies.1 The Liaison Committee on Medical Education, the Accreditation Council for Graduate Medical Education (ACGME), and the American Academy of Continuing Medical Education have called for curricular changes to enhance the ability of physicians to fulfill their societal contract of providing quality medical care.2–11 Currently, medical schools and residency programs are required to define learning objectives and methods for their trainees.8,9 Soon, they will be expected to demonstrate the attainment of program objectives and trainee competence.11

Despite demands for curricular change, most faculty in academic medical centers have no instruction in education or CD.12,13 Curricula produced are often suboptimal and do not follow CD principles.14 Faculty development programs that have arisen to address this need vary in intensity, duration, and focus.12,13,15–22 Most of them aim to improve the teaching skills of their faculty.13 Few publications describe faculty training in CD. Those that do focus only partially on CD16,19,20 or include few participants.20–22 This paper describes and evaluates a 10-month mentored program, in existence since 1987, that trains faculty and fellows in CD skills.


PROGRAM DESCRIPTION

The Longitudinal Program in Curriculum Development (CD) was established in 1987 as part of the Johns Hopkins Faculty Development Program at the Johns Hopkins Bayview Medical Center. The Faculty Development Program also includes teaching skills,17,18 consultation, and facilitator training programs.

Targeted Learners

The CD program serves faculty and fellows from Johns Hopkins and other medical institutions within the geographic region. Applicants self-select to participate. In general, no specific criteria are used for accepting applicants other than involvement in medical education and developing or joining a curricular project deemed acceptable to program facilitators. Participation is required for fellows in the clinician-educator track of the general internal medicine fellowship. Due to grant stipulations, the program trained mainly general internal medicine faculty and fellows during the years of our study. The number of participants from other specialties has increased in recent years, a trend facilitated by the initiation of program tuition and the program becoming recognized as a resource for the entire School of Medicine.

Educational Goals

The goals of the program are for participants to: (1) develop the knowledge, attitudes, and skills to design, implement, evaluate, and disseminate a curriculum in medical education; and (2) design, pilot, implement, evaluate, write up, and present a curriculum.

Educational Strategies

The program operates one half-day per week from early September to mid June of each academic year. Components include:

  1. Workshops and readings on curricular development steps and related issues. A 6-step model of CD23 is presented in a series of workshops. The 6 steps include: (1) problem identification and general needs assessment, (2) needs assessment of targeted learners, (3) goals and specific measurable objectives, (4) educational strategies, (5) implementation, and (6) evaluation and feedback. Sessions added through the years include writing for publication, Internet resources for CD, and finding and applying for funding (added in 2002), and searching educational databases and obtaining institutional review board (IRB) approval (added in 2004).
  2. A mentored CD project. Participants work alone or in teams to design, pilot, and implement curricula of their choosing. Deadlines are established for outlines and written drafts of each CD step. Participants meet with facilitators for 45–60 minutes every 4–6 weeks to discuss progress and receive written feedback on their work. Participants generate a final paper describing their curriculum and present their work before an invited audience at the end of the program. Approximately 50% of the scheduled time is protected for independent work.
  3. Work-in-progress sessions. Participants present their needs assessment instruments, curricular segments, and evaluation instruments to co-participants and facilitators for feedback.


Resources

The Bureau of Health Professions, Health Resources and Services Administration, U.S. Public Health Service provided grant support in all but 1 year from 1987 to 2006. In 1993, tuition was initiated to partly cover expenses. In all years, the Johns Hopkins Bayview Medical Center has provided meeting space and approximately 30% full-time equivalent (FTE) administrative support. Beginning in 2006, the program became financially independent, and we adjusted tuition to cover faculty and food expenses and a 15% continuing education office charge. Two or three faculty members experienced in CD, teaching skills, and mentoring facilitate the program each year, depending on the number of curricular projects. Salary support is based on the activities provided by faculty members: 5% FTE each for facilitating group sessions, 1.5% FTE for each curriculum mentored, and 4% FTE for one faculty member to administer the program.
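To make the salary-support formula concrete, the short Python sketch below works through a hypothetical staffing scenario (two facilitators, eight mentored curricula, one of the two acting as administrator); these staffing numbers are our illustration's assumptions, not figures reported by the program.

    # Hypothetical illustration of the salary-support formula described above.
    FACILITATION_FTE = 0.05     # per facilitator, for leading group sessions
    PER_CURRICULUM_FTE = 0.015  # per curriculum mentored
    ADMIN_FTE = 0.04            # one faculty member administers the program

    curricula_mentored = {"faculty_A": 4, "faculty_B": 4}  # assumed split

    total_fte = 0.0
    for name, n_curricula in curricula_mentored.items():
        fte = FACILITATION_FTE + PER_CURRICULUM_FTE * n_curricula
        if name == "faculty_A":  # assumed program administrator
            fte += ADMIN_FTE
        print(f"{name}: {fte:.3f} FTE")
        total_fte += fte

    print(f"Total faculty salary support: {total_fte:.3f} FTE")  # 0.260 here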


METHODS

Study Population

Five of the 152 participants who enrolled in the program from 1988 through 2003 (cohorts 1–16) dropped out, and two participated twice, leaving 145 individuals who completed the course. Each of the 64 program completers in cohorts 2–9 named a colleague they knew during the first session who was similar to themselves in age, gender, professional training, and professional status. These nonparticipants served as a comparison group for self-assessed CD skills and enjoyment. We recruited nonparticipants until 64 were obtained, providing 80% power to detect a difference of 0.5 on Likert-scaled questions (α = 0.05).
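As a rough check on this sample-size statement, the sketch below (Python, statsmodels) computes the power of a two-sample comparison with 64 subjects per group. The standard deviation of about 1 point on the Likert scale is our assumption for illustration, not a value reported in the study.

    # Illustrative power check (not from the paper): two-sample t test,
    # 64 subjects per group, alpha = 0.05, and an assumed SD of ~1 point
    # on the Likert scale, so a 0.5-point difference corresponds to d = 0.5.
    from statsmodels.stats.power import TTestIndPower

    assumed_sd = 1.0        # assumption for illustration only
    raw_difference = 0.5    # difference the study aimed to detect
    effect_size = raw_difference / assumed_sd

    power = TTestIndPower().solve_power(effect_size=effect_size,
                                        nobs1=64,       # per group
                                        ratio=1.0,      # equal group sizes
                                        alpha=0.05,
                                        alternative='two-sided')
    print(f"Estimated power: {power:.2f}")  # ~0.80 under these assumptions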

Evaluation Design and Methodology

A preprogram survey asked the participants (cohorts 1–16) and nonparticipants (cohorts 2–9) to indicate their age, gender, professional training, work setting, academic activities, and prior experience in CD. Pre- and post-surveys ascertained participant and nonparticipant self-assessments of their skills in CD, implementation, and evaluation, as well as their enjoyment in CD and evaluation. Participants completed questionnaires during the first and last workshop sessions. Nonparticipants received mailed questionnaires within 3 months of their matched participant’s first and last curricular sessions. Skills were rated on a 6-point scale (0 = none, 5 = excellent) and enjoyment on a 4-point scale (0 = none, 3 = a lot).

In a post-program survey, participants in all 16 cohorts rated the program’s quality, methods, and facilitation on 5-point scales. Open-ended questions asked the participants to assess the program’s strengths and weaknesses and to comment on constraints that might inhibit their curricular work.

We determined whether curricular projects for all 16 cohorts had been implemented through contact with participants or direct knowledge of the facilitators. We obtained characteristics of curricula (topics, targeted learners) from project titles, facilitators, and written products of the project teams. We confirmed all publications in medical journals by PubMed searches.

The Johns Hopkins IRB determined that this project qualified for exemption from review under guidelines regarding education program evaluation.

Statistical Analysis

We summarized baseline characteristics as percentages. To assess whether participants and nonparticipants in cohorts 2–9 differed in baseline characteristics or in pre- or post-program self-reported skills and enjoyment, we performed Student’s t tests, Fisher’s exact tests, or chi-squared analyses. We performed multiple linear regression analyses to determine whether participants differed from nonparticipants in baseline and end-of-program self-assessment of skills and enjoyment in CD activities while controlling for baseline characteristics that were statistically different between groups. Baseline covariates included type of fellowship training; academic appointment; time spent in hospital-based practice, community-based practice, teaching, and administration; research salary support; and previous experience in evaluating curricula. We used analysis of covariance (ANCOVA) to compare change scores from baseline between participants and nonparticipants while controlling for baseline characteristics that differed between groups.
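For readers who want to see how analyses of this kind can be run, the sketch below uses Python with scipy and statsmodels. The data file and all variable names (group, female, skill_pre, skill_post, fellowship, academic_appt, pct_teaching) are hypothetical placeholders, not the study's actual dataset or variable list.

    # Sketch of the reported analysis types under assumed column names;
    # none of these names or data come from the study itself.
    import pandas as pd
    import scipy.stats as stats
    import statsmodels.formula.api as smf

    df = pd.read_csv("cd_program_survey.csv")   # hypothetical data file

    # Baseline comparison of a binary characteristic: Fisher's exact test
    # on a 2x2 table of group (participant/nonparticipant) by gender.
    table = pd.crosstab(df["group"], df["female"])
    odds_ratio, p_fisher = stats.fisher_exact(table)

    # Baseline comparison of a continuous characteristic: Student's t test.
    t_stat, p_ttest = stats.ttest_ind(
        df.loc[df["group"] == "participant", "skill_pre"].dropna(),
        df.loc[df["group"] == "nonparticipant", "skill_pre"].dropna())

    # Multiple linear regression: end-of-program skill rating as a function
    # of group, adjusting for baseline covariates that differed between groups.
    adjusted = smf.ols(
        "skill_post ~ group + fellowship + academic_appt + pct_teaching",
        data=df).fit()

    # ANCOVA-style comparison of change scores with the same covariates.
    df["skill_change"] = df["skill_post"] - df["skill_pre"]
    ancova = smf.ols(
        "skill_change ~ group + fellowship + academic_appt + pct_teaching",
        data=df).fit()
    print(ancova.summary())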

Qualitative Analysis

Two study investigators (AG, DK) independently analyzed responses to open-ended questions, marking comments representing discrete thoughts. The resulting comments were separated into categories based on the actual words used by participants. Consensus was reached by discussion.


RESULTS

Response Rate and Baseline Characteristics

Of the 145 participants in cohorts 1–16, 138 (95.2%) completed the pre- and post-questionnaires. For cohorts 2–9, 63 (98.4%) participants and 63 (98.4%) nonparticipants completed pre- and post-questionnaires. Baseline characteristics for cohort 1–16 and cohort 2–9 participants were similar except that cohort 1–16 participants were more likely to be women and to have associate professor/professor academic appointments. Baseline characteristics for cohort 2–9 participants versus nonparticipants are displayed in Table 1. Participants were more likely than nonparticipants to have fellowship training in general internal medicine, to practice medicine in a community-based setting, and to have salary supported by grant funding. Participants were less likely to have an academic appointment, to have teaching and administrative responsibilities, and to report past evaluation of a curriculum.

Table 1
Baseline Characteristics and Experience in Curriculum Development of Program Participants and Nonparticipants, Cohorts 2–9*

Pre/Post-assessment of Skills and Enjoyment in Curriculum Development

At baseline, as many as 54% (range 40–54%) of participants and 68% (range 27–68%) of nonparticipants in cohorts 2–9 indicated that they did not perform CD, implementation, or evaluation activities and therefore did not rate their skills and enjoyment in these categories. On the post-program survey, seven participants and seven nonparticipants indicated that they had received training in curriculum/program development beyond our CD Program during the prior 10 months. Participants (cohorts 2–9) rated their pre-program skills in CD and implementation less favorably than nonparticipants and their post-program skills in CD, implementation, and evaluation more favorably (Table 2). Self-reported skills improved significantly from baseline for participants in curricular development, implementation, and evaluation. Ratings decreased for nonparticipants in CD and did not change in curricular implementation or evaluation (Table 3). Participants enjoyed CD more than nonparticipants (Table 2), and enjoyment did not change after the program (Table 3). These findings remained after controlling for baseline characteristics. Pre–post changes for participants were similar in each area described when data from all 16 cohorts were analyzed.

Table 2
Comparison Between Participants and Nonparticipants’ Baseline and End-of-program Self-assessment of Skills and Level of Enjoyment in Curriculum Development Activities, Cohorts 2–9*
Table 3
Pre–post-changes in Self-assessed Skills and Enjoyment in Curriculum Development Activities, Cohorts 2–9*

Assessment of Program

Nearly 90% of the participants indicated that the program offered a good balance between faculty input and self-directed learning. Most felt confident that their curriculum would be piloted and implemented at their respective workplaces and that they could develop curricula in the future (Table 4). Eighty percent of the participants indicated that they would use the 6-step model in the next 2 years, and 80% would recommend this program to others as a good or outstanding experience.

Table 4
Program Evaluation: Assessment of Program Quality, Educational Methods, and Facilitation, Cohorts 1–16, N = 138*

Qualitative Assessment

Over 90% of the participants responded to ≥1 open-ended question. Two hundred two comments related to program strengths: 31% focused on organization, structure, and content of the program; 29% on the expertise, support, and facilitation of program faculty; 14% on experiential learning and development of a curriculum; 12% on working in and interactions among groups; and 5% on quality of course materials. There were 217 responses on barriers to curriculum implementation and evaluation: 24% mentioned time constraints; 22.6% finances/funding; 12.4% institutional constraints; 9.7% learner interest; 8.3% logistics/feasibility; 7.4% lack of faculty continuity; and 6.9% personal constraints such as time management or energy. Of the 70 comments on program weaknesses, the most common related to heavy workload and time commitment.

Characteristics of Curricula

Cohort 1–16 participants produced 64 curricula. At least 54 (84%) were implemented in total or in part. Thirteen curricular projects (20.3%) led to the publication of 18 articles in peer-reviewed journals and 2 books related to the project work. Curricula targeted residents (n = 48), practicing physicians (n = 13), students (n = 7), fellows (n = 3), and nonphysicians (n = 2) (some addressed >1 level of learner). Curricula were typically directed toward trainees or practitioners in general internal medicine (n = 49), but several targeted physicians in other specialties (n = 4), physicians in mixed specialties (n = 3), or nonphysicians (n = 2).

Curricula mostly addressed gaps in existing education. Topics included: psychosocial or behavioral subjects [19 projects (30%), e.g., communication skills, substance abuse, ethics]; general clinical issues [16 (25%), e.g., evidence-based practice, introduction to clinical medicine and clinical reasoning, physical examination skills]; internal medicine topics directed toward generalists [13 (20%), e.g., ambulatory medicine, HIV care, hospitalist care]; other topics for internists [9 (14%), e.g., musculoskeletal disorders, women’s health, dermatology]; preventive medicine topics [6 (9%), e.g., colorectal cancer screening, smoking cessation counseling, obesity management]; and educational and research skills training [3 (5%)].

Most participants (86%) worked in pairs or groups and rated this highly in terms of productivity, developing collegial relationships, and cooperativeness (all mean ≥4.0, where 1 = poor, 5 = excellent). Of those who worked in groups, 86% felt it moderately to very important to work collaboratively. Of those who worked alone, 75% would have preferred to work with others.


DISCUSSION

This paper describes a 10-month mentored faculty development program in CD in existence since 1987. Our results show improvement in self-assessed CD, implementation, and evaluation skills of participants but not in those of nonparticipants. Participants’ end-of-program self-assessed skills were higher than those of nonparticipants, whereas some of their pre-program self-assessed skills were lower, even after controlling for differences in baseline characteristics. The program’s success was also seen in the number of curricular products implemented.

Institutions have recognized the need for faculty development of clinician-educators. The focus of this training has often centered on teaching skills.12,17,21,24–29 Although skilled teachers remain critical for role modeling30,31 and the dissemination of knowledge and skills to future physicians, new curricula are required to meet changing educational demands. Some institutions have recognized this need and instituted programs to train faculty in CD.16,19–22 Most programs have integrated CD training into a more comprehensive program and report only in part on this aspect of the training.16,19,20 Evaluations of training have usually been post-only,16,19–22 involved small numbers,20–22 or did not include a comparison group.16,19–22 We add to the literature by reporting a pre–post evaluation (with a comparison group) of a long-standing mentored faculty development program focused solely on CD.

The methods used in our CD program mirror those used in research training and other CD programs15,19–22,32: skills training sessions, independent projects, and regular meetings with, and feedback from, mentors. We also include periodic deadlines, work-in-progress presentations, and the oral and written presentation of a final project. Participants valued each of these components highly, reported that they had improved their skills, and felt confident that they could develop other curricula in the future. Although published guides are available to provide independent direction for curriculum developers,23,33 it is unlikely that these alone, or even short 1- to 2-day workshops, could have accomplished these objectives. Other faculty development programs that have led to successful project implementation have also used a longitudinal approach with at least some of the methods described above.15,16,19–22

At least 84% of curricula developed were fully or partly implemented. Possible explanations for the high implementation rate include: (a) the structure and mentorship provided by the program; (b) most curricula were group projects, so that if a participant moved after completion of the program, another participant was likely to continue the work; and (c) as years passed, facilitators focused increasingly on the implementation step of CD. In 2003, facilitators began screening the proposed projects of potential applicants to the program before accepting them to ensure support within the participants’ institutions.

Success in implementing curricula can help clinician-educators meet the criteria for academic advancement. Previous surveys have demonstrated that promotion committees and department chairs assign high importance to a clinician-educator’s performance in the design, implementation, and evaluation of educational programs.34,35 Successes in CD can be included in the educational portfolios that are increasingly used to support applications for promotion.36–38

Despite successful implementation, only 20% of the projects led to publication. Because publication continues to be an important promotion criterion,34,39 and because the steps of well-conducted CD projects offer multiple opportunities for publication,23 program faculty recently began to emphasize this prospect. Starting with cohort 16, a session was added on publishing and disseminating curricular work. For cohort 18, the program addressed searching educational databases and obtaining IRB approval. Most participants now identify dissemination and/or publication as a goal.

Our work has several limitations. First, we used self-reports rather than objective measures of CD skills. Second, although participants produced and implemented a large number of curricula, we do not have measures of the quality of the products produced. We do know from final reports that the curricula included a needs assessment, objectives, educational strategies based on the objectives, and an evaluation plan. Third, our participants were self-selected. Fourth, our comparison group was selected by the participants themselves. Because we did not have a randomized control group, baseline characteristics were not completely identical for participants and nonparticipants; nevertheless, we were able to adjust for differences in our multivariable analyses. Fifth, many participants and nonparticipants did not evaluate their skills and enjoyment of curricular development activities, especially at baseline, which limited our ability to make comparisons between and among groups. Sixth, we present data only through the end of the program; a separate long-term follow-up study is under way. Seventh, we report on a program from one institution that served predominantly general internal medicine faculty.

This report presents a model for training in CD that has long-term sustainability, has resulted in implementation of many new curricula on important topics, and is associated with participant satisfaction and improvement in self-rated skills. Overall, the outcomes are positive enough to demonstrate that longitudinal training in CD can be successful. Given the needs for improved medical curricula and for academic medical institutions to have a talented cadre of medical educators who can accomplish this work and be promoted, other institutions may want to institute similar programs.


Acknowledgments

We thank Ken Kolodner, Sc.D., for critically reviewing the statistical methodology used in the paper.

Dr. Windish was a General Internal Medicine Fellow at Johns Hopkins University School of Medicine, Baltimore, MD, at the time of project initiation. Dr. Howard, now retired, was previously affiliated with the Division of General Internal Medicine, Johns Hopkins University School of Medicine.

Potential Financial Conflicts of Interest None disclosed.


REFERENCES

1. Epstein R, Hundert E. Defining and assessing professional competence. JAMA. 2002;287(2):226–35. [PubMed]
2. Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, D.C.: National Academy Press; 2001.
3. Report of the Ad Hoc Committee of Deans. Educating Doctors to Provide High Quality Medical Care: A Vision for Medical Education in the United States. American Association of Medical Colleges; 2004.
4. Institute of Medicine. Health Professions Education. A Bridge to Quality. Washington, D.C.: National Academy Press; 2003.
5. Institute of Medicine. Academic Health Centers: Leading Change in the 21st Century. Washington D.C.: National Academy Press; 2004.
6. The Blue Ridge Academic Health Group. Reforming Medical Education: Urgent Priority for Academic Health Centers in the New Century. Atlanta, GA: The Robert W. Woodruff Health Sciences Center; 2003.
7. Accreditation Council for Graduate Medical Education. Common Program Requirements, February 2004. Available at Accessed May 27, 2006.
8. Liaison Committee on Medical Education. Functions and Structure of a Medical School: Standards for Accreditation of Medical Education Programs Leading to an M.D. Degree. Available at Accessed May 27, 2006.
9. American Academy of Continuing Medical Education. AACME Accreditation Standards and Policies. Available at Accessed May 27, 2006.
10. Britton CV. American Academy of Pediatrics Committee on Pediatric Workforce. Ensuring culturally effective pediatric care: implications for education and health policy. Pediatrics. 2004;114(6):1677–85. [PubMed]
11. Accreditation Council for Graduate Medical Education. ACGME outcome project: enhancing residency education through outcomes assessment. Chicago (IL): Accreditation Council for Graduate Medical Education; 2000. Available at Accessed May 27, 2006.
12. Wilkerson L, Irby DM. Strategies for improving teaching practices: a comprehensive approach to faculty development. Acad Med. 1998;73(4):387–96. [PubMed]
13. Clark JM, Houston TK, Kolodner K, Branch WT Jr, Levine RB, Kern DE. Teaching the teachers: national survey of faculty development in departments of medicine of U.S. teaching hospitals. J Gen Intern Med. 2004;19(3):205–14. [PMC free article] [PubMed]
14. Sheets KJ, Anderson WA. The reporting of curriculum development activities in the health professions. Teach Learn Med. 1991;3(4):221–6.
15. Gruppen LD, Frohna AZ, Anderson RM, Lowe KD. Faculty development for educational leadership and scholarship. Acad Med. 2003;78(2):137–41. [PubMed]
16. Anderson WA, Stritter FT, Mygdal WK, Arndt JE, Reid A. Outcomes of three part-time faculty development programs. Fam Med. 1997;29(3):204–8. [PubMed]
17. Cole KA, Barker LR, Kolodner K, Williamson P, Wright SM, Kern DE. Faculty development in teaching skills: an intensive longitudinal model. Acad Med. 2004;79(5):469–80. [PubMed]
18. Knight AM, Cole KA, Kern DE, Barker LR, Kolodner K, Wright SM. Long-term follow-up of a longitudinal faculty development program in teaching skills. J Gen Intern Med. 2005;20(8):721–5. [PMC free article] [PubMed]
19. Armstrong EG, Doyle J, Bennett NL. Transformative professional development of physicians as educators: assessment of a model. Acad Med. 2003;78(7):702–8. [PubMed]
20. Steinert Y, Nasmith L, McLeod PJ, Conochie L. A teaching scholars program to develop leaders in medical education. Acad Med. 2003;78(2):142–9. [PubMed]
21. Rosenbaum ME, Lenoch S, Ferguson KJ. Outcomes of a teaching scholars program to promote leadership in faculty development. Teach Learn Med. 2005;17(3):247–52. [PubMed]
22. Snyder S. A program to teach curriculum development to junior faculty. Fam Med. 2001;33(5):382–7. [PubMed]
23. Kern DE, Thomas PA, Howard DM, Bass EB. Curriculum Development for Medical Education: A Six-Step Approach. Baltimore (MD): Johns Hopkins University Press; 1998.
24. Skeff KM, Stratos GA, Mygdal W, Bland CJ, et al. Faculty development. A resource for clinical teachers. J Gen Intern Med. 1997;12(Suppl 2):S56–63. [PMC free article] [PubMed]
25. Skeff KM, Stratos GA, Berman J, Bergen MR. Improving clinical teaching. Evaluation of a national dissemination program. Arch Intern Med. 1992;152(6):1156–61. [PubMed]
26. Skeff KM, Stratos GA, Bergen MR, Sampson K, Deutsch SL. Regional teaching improvement programs for community-based teachers. Am J Med. 1999;106(1):76–80. [PubMed]
27. Houston TK, Clark JM, Levine RB, et al. Outcomes of a national faculty development program in teaching skills: prospective follow-up of 110 medicine faculty development teams. J Gen Intern Med. 2004;19(12):1220–7. [PMC free article] [PubMed]
28. Levine SA, Caruso LB, Vanderschmidt H, Silliman RA, Barry PP. Faculty development in geriatrics for clinician educators: a unique model for skills acquisition and academic achievement. J Am Geriatr Soc. 2005;53(3):516–21. [PubMed]
29. McLaughlin SA. Faculty development. Acad Emerg Med. 2005;12(4):302e1–e5. [PubMed]
30. Barondess J. The GPEP report: III. Faculty involvement. Ann Intern Med. 1985;103(3):454–5. [PubMed]
31. Branch WT, Kroenke K, Levinson W. The clinician-educator—present and future roles. J Gen Intern Med. 1997;12(Suppl 2):S1–4. [PMC free article] [PubMed]
32. Saha S, Christakis DA, Saint S, Whooley MA, Simon SR. A survival guide for generalist physicians in academic fellowships part 1: getting started. J Gen Intern Med. 1999;14(12):745–9. [PMC free article] [PubMed]
33. Green ML. Identifying, appraising, and implementing medical education curricula: a guide for medical educators. Ann Intern Med. 2001;135(10):889–96. [PubMed]
34. Atasoylu AA, Wright SM, Beasley BW, et al. Promotion criteria for clinician-educators. J Gen Intern Med. 2003;18(9):711–6. [PMC free article] [PubMed]
35. Beasley BW, Wright SM, Cofrancesco J Jr, Babbott SF, Thomas PA, Bass EB. Promotion criteria for clinician-educators in the United States and Canada. A survey of promotion committee chairpersons. JAMA. 1997;278(9):723–8. [PubMed]
36. Fleming VM, Schindler N, Martin GJ, DaRosa DA. Separate and equitable promotion tracks for clinician-educators. JAMA. 2005;294(9):1101–4. [PubMed]
37. Hafler JP, Lovejoy FH Jr. Scholarly activities recorded in the portfolios of teacher-clinician faculty. Acad Med. 2000;75(6):649–52. [PubMed]
38. Simpson D, Hafler J, Brown D, Wilkerson L. Documentation systems for educators seeking academic promotion in US medical schools. Acad Med. 2004;79(8):783–90. [PubMed]
39. Beasley BW, Wright SM. Looking forward to promotion: characteristics of participants in the prospective study of promotion in academia. J Gen Intern Med. 2003;18(9):705–10. [PMC free article] [PubMed]
