Am J Pharm Educ. 2012 October 12; 76(8): 148.
PMCID: PMC3475777

Assessment of Full-time Faculty Preceptors By Colleges and Schools of Pharmacy in the United States and Puerto Rico

Harold L. Kirschenbaum, PharmD, MS (corresponding author) and Tina Zerilli, PharmD


Objective. To identify the manner in which colleges and schools of pharmacy in the United States and Puerto Rico assess full-time faculty preceptors.

Methods. Directors of pharmacy practice (or equivalent title) were invited to complete an online, self-administered questionnaire.

Results. Seventy of the 75 respondents (93.3%) confirmed that their college or school assessed full-time pharmacy faculty members based on activities related to precepting students at a practice site. The most commonly reported assessment components were summative student evaluations (98.5%), type of professional service provided (92.3%), scholarly accomplishments (86.2%), and community service (72.3%). Approximately 42% of respondents indicated that a letter of evaluation provided by a site-based supervisor was included in their assessment process. Some colleges and schools also conducted onsite assessment of faculty members.

Conclusions. Most colleges and schools of pharmacy assess full-time faculty-member preceptors via summative student assessments, although other strategies are used. Given the important role of preceptors in ensuring students are prepared for pharmacy practice, colleges and schools of pharmacy should review their assessment strategies for full-time faculty preceptors, keeping in mind the methodologies used by other institutions.

Keywords: assessment, faculty, preceptors


The 2007 ACPE Accreditation Standards and Guidelines 2.0 for doctor of pharmacy (PharmD) programs emphasize the importance of programmatic assessment and evaluation.1 One integral component of this process is the assessment of the college’s or school’s faculty members. As noted in Standard 26, faculty members should be evaluated regularly on their ability to teach effectively, produce scholarly work, engage in ongoing professional development, provide patient-care activities, and contribute to the pharmacy program, the community at large, the profession of pharmacy, and the development of students.1 This evaluation should involve self-assessment and input from multiple individuals including students, peers, and supervisors.

While all these factors should be evaluated, the literature regarding pharmacy faculty evaluation has focused largely on classroom teaching,2-12 with the most common evaluation strategies being peer assessment of classroom teaching5-8 and student evaluations (print and Web-based).9-13 There is a paucity of published data regarding the assessment/evaluation of faculty members in experiential or clinical teaching settings. In a survey documenting teaching evaluation practices in colleges and schools of pharmacy, Barnett and Matthews found that 100% of respondents used student evaluations, and 18% also used peer evaluation to evaluate experiential teaching on advanced pharmacy practice experiences (APPEs).14 Assessment methods used for evaluating teaching in introductory pharmacy practice experiences (IPPEs) were not described in the manuscript.

The report by Barnett and Matthews focused solely on teaching and did not evaluate other aspects of a faculty member’s responsibilities, such as service and scholarship. A search of the literature yielded no reports of evaluation methods used by colleges and schools of pharmacy to assess overall performance of clinical pharmacy faculty members at their practice sites. The purpose of this study was to identify the manner in which colleges and schools of pharmacy in the United States and Puerto Rico assess full-time faculty members who serve as preceptors and practice at experiential sites. Specifically, the purpose was to identify the types of assessments used, the frequency with which each is conducted, and those responsible for conducting the assessment. Methods for assessing classroom teaching were not determined.


A draft questionnaire was designed and reviewed by persons at 4 different colleges and schools of pharmacy with knowledge of the subject area and/or expertise in study design or curricular assessment. These individuals were asked to assess the proposed survey instrument for ease of completion, clarity, comprehensiveness, and overall suitability. Following feedback, the questionnaire was modified and sent to 3 different individuals with similar expertise for comments. After the second revision, the instrument and a cover letter were submitted to the institutional review board at Long Island University, and the research project was granted exempt status.

The final questionnaire contained 102 questions; however, because the instrument used branching logic in which a given response led to specific follow-up questions and omitted others, no respondent needed to answer all items. We estimated that the average respondent would need about 15 minutes to complete the survey instrument. The survey instrument included questions dealing with the demographics of the respondent’s college or school, the abilities and attributes assessed for practice-based full-time faculty members, the various types of assessments used, and the frequency of conducting each assessment. Respondents were asked to focus on full-time faculty members (and preceptors who were paid partially by the college/school and partially by the practice site but were treated as full-time faculty members) who were assigned to experiential sites to precept students and, perhaps, provide service to the site. (A copy of the survey instrument is available from the authors upon request.)

In December 2010, a list of 124 directors of pharmacy practice (or equivalent title) at colleges and schools of pharmacy was obtained from the American Association of Colleges of Pharmacy. The investigators checked each college and school Web site to verify the names of directors. If the Web site did not provide this information, the institution was contacted by telephone. Four institutions outside the United States and Puerto Rico were eliminated from the contact list, and 1 institution was added to the list for a total of 121 colleges and schools.

The survey instrument was made available online through Student Voice (Fundamentals, Version 4, StudentVoice, Buffalo, NY), a commercial vendor. Invitations to participate along with a link to the survey instrument were sent to directors of pharmacy practice at the 121 colleges and schools in February 2011. Duplicate e-mail messages were sent to nonresponders approximately 3 and 6 weeks later, and telephone calls were made to persons who did not respond to the second e-mail reminder. In some instances, an alternate person was identified at a college or school (such as an assistant/associate dean for assessment or the director of experiential education) and they were contacted via telephone and/or e-mail.

StudentVoice was used to collect and collate data, as well as to generate descriptive statistics. R software (R Foundation for Statistical Computing, Vienna, Austria) was used for all inferential statistical analyses and to confirm descriptive statistics, including frequencies, percentages, and means ± SD.15 Each question was analyzed for significant differences between public and private institutions. The Fisher exact test was used to compare percentages, and a 2-sample t test was used to compare means. Sets of related questions with binary yes/no responses (eg, the abilities/attributes assessed by the college or school) were analyzed for differences in marginal probabilities using the Cochran Q test. For significant results (p<0.05), post hoc analysis for differences between pairs of responses was carried out using the McNemar test with Bonferroni correction for all pairwise comparisons, excluding “other” responses; an adjusted p value was obtained for each response.
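The omnibus-plus-post-hoc procedure described above can be illustrated with a small sketch. The response data below are invented for illustration only (not the study’s dataset), and the helper functions are hypothetical names, not part of any cited software; the study itself used R.

```python
from itertools import combinations
from math import comb

def cochrans_q(data):
    """Cochran's Q omnibus test for k related binary (0/1) items.

    data: one row per respondent, each row holding k yes(1)/no(0) answers.
    Returns (Q, df); Q is compared against a chi-square with df = k - 1.
    """
    k = len(data[0])
    col = [sum(row[j] for row in data) for j in range(k)]   # per-item totals
    row_tot = [sum(row) for row in data]                    # per-respondent totals
    T = sum(col)
    q = (k - 1) * (k * sum(c * c for c in col) - T * T) / (
        k * T - sum(r * r for r in row_tot))
    return q, k - 1

def mcnemar_exact(x, y):
    """Two-sided exact (binomial) McNemar p value for paired 0/1 vectors."""
    b = sum(1 for xi, yi in zip(x, y) if xi == 1 and yi == 0)
    c = sum(1 for xi, yi in zip(x, y) if xi == 0 and yi == 1)
    n = b + c
    if n == 0:
        return 1.0
    m = min(b, c)
    # double the smaller binomial tail (p = 0.5 under H0), capped at 1
    tail = sum(comb(n, i) for i in range(m + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Illustrative data: 4 respondents x 3 related yes/no items
responses = [[1, 1, 0], [1, 0, 0], [1, 1, 1], [1, 0, 0]]

q_stat, df = cochrans_q(responses)
pairs = list(combinations(range(len(responses[0])), 2))
# Bonferroni correction: multiply each pairwise p by the number of comparisons
adj = {(i, j): min(1.0, mcnemar_exact([r[i] for r in responses],
                                      [r[j] for r in responses]) * len(pairs))
       for i, j in pairs}
```

The exact binomial form of the McNemar test is used here because survey subgroups can leave few discordant pairs, where the chi-square approximation is unreliable.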

Pairwise comparisons of overall responses revealed 3 distinct categories of reported frequencies for most questions: most commonly reported, intermediate, and least commonly reported. A finding was assigned to 1 of the 3 categories so as to maximize the combined number of responses in the most- and least-commonly reported categories; ties were broken by assigning the finding to the category that yielded the most balanced grouping. Responses assigned to neither extreme were placed in the intermediate category. For all items, the overall responses (the total number of colleges and schools) in the most commonly reported category were significantly (p<0.05) more common than those in the least commonly reported category. There were no significant differences between responses within any category, between the most commonly reported and intermediate categories, or between the intermediate and least commonly reported categories.


Of the 121 colleges and schools of pharmacy that received an invitation to participate in the survey, 2 institutions in Canada had been inadvertently included and were eliminated from the study results. In addition, 6 colleges and schools in precandidate status were eliminated because they did not yet have students enrolled in IPPEs or APPEs and thus would not have any full-time faculty members actively serving as preceptors. After these adjustments, the total sample size was 113 institutions. Seventy-five responses were received, for a 66.4% response rate. Based on a formula for small sample sizes,16 the target number of responses for a sample of 113 corresponds to a response rate of approximately 77%. Nevertheless, this study’s response rate of 66.4% exceeded the threshold considered “very good” for a survey conducted via e-mail.17
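The cited small-sample formula16 (Krejcie and Morgan, 1970) can be worked through as follows. The χ² value of 3.841 (df = 1, α = .05), P = 0.5, and d = 0.05 are the conventional defaults for that formula; their use here is an assumption about how the approximately 77% target was derived, since the article does not state the parameters.

```python
def krejcie_morgan(N, chi2=3.841, P=0.5, d=0.05):
    """Required sample size s for a finite population of size N
    (Krejcie & Morgan, 1970).

    chi2: chi-square critical value at df = 1 (3.841 for alpha = .05)
    P:    assumed population proportion (0.5 maximizes the result)
    d:    desired margin of error
    """
    return chi2 * N * P * (1 - P) / (d * d * (N - 1) + chi2 * P * (1 - P))

# For the 113 eligible institutions, the target is roughly 87-88 responses,
# ie, a target response rate of about 77% (87.5 / 113)
s = krejcie_morgan(113)
```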

Of the 75 respondents, 39 (52%) were from private and 36 (48%) were from public institutions (Table 1). The percentage of respondents from private and public institutions was not significantly different than the percentage of private (46%) and public (54%) colleges and schools of pharmacy nationwide at the time the survey was conducted. In addition, the geographical region of respondents was not significantly different than the geographical distribution of all colleges and schools surveyed. Therefore, the sample appeared to be representative of colleges and schools of pharmacy nationwide at the time the survey instrument was administered.

Table 1.
Characteristics of Survey Respondents and All Colleges and Schools of Pharmacy with Full or Candidate Status in the United States and Puerto Rico, No. (%)

Seventy (93.3%) of the 75 institutions assessed full-time faculty members based on activities related to precepting students at a practice site. The number of full-time faculty preceptors varied widely, and ranged from 4 to 100, with a mean of 22.5 ± 14.1. Four (5.3%) institutions did not assess faculty members based on activities at an experiential site, but indicated that a process for doing so was being developed. One respondent (1.3%) indicated that students were only placed at off-campus sites with volunteer and/or adjunct faculty members; thus, the respondent did not complete the remainder of the questionnaire.

The abilities and attributes that institutions assess for full-time faculty member preceptors are listed in Table 2. The most commonly reported attribute was the ability to provide a positive learning experience. Other than the ability to provide students with an appropriate balance of guidance and autonomy, there were no significant differences between private and public institutions.

Table 2.
Assessed Abilities/Attributes of Full-time Faculty Member Preceptors

The most commonly reported assessment methods used by colleges and schools of pharmacy were summative student evaluations, compilation of the type of professional service provided by the faculty member, scholarly accomplishments of the preceptor, and community service provided (Table 3). The number of colleges and schools assessing community service was the only significantly different item between private and public institutions. Table 4 provides a breakdown of the frequency for conducting each assessment.

Table 3.
Methods Used to Assess Full-time Faculty Member Preceptors at the Practice Site, No. (%)
Table 4.
Frequency for Conducting Various Assessments for Full-time Faculty Member Preceptors at the Practice Site

Onsite evaluations were conducted by division/department chairs, teams of faculty members and/or administrators, as well as others. Of 14 respondents who reported that a team was sent to the site, 12 provided data on the membership of the team (Table 5). Twenty-eight out of 33 (85%) respondents reported that when college or school personnel were sent to a site to evaluate a full-time faculty member preceptor, the college or school provided guidance on the types of items to be considered. Fifteen of the 27 (56%) respondents (1 did not respond to the question) noted that a “grading tool” such as a rubric, rating scale, or checklist was provided by the institution. Table 6 delineates the criteria considered by college and school personnel when completing a letter of evaluation, report, rubric, etc, following a site visit.

Table 5.
Team Members Sent to the Site to Assess a Full-time Faculty Member Preceptor
Table 6.
Evaluation Criteria Considered by College Personnel in a Letter of Evaluation/Report/Grading Tool for Assessing Full-time Faculty Member Preceptors, No. (%)

Twenty-two respondents provided input on the criteria considered in a letter of evaluation or a grading tool (eg, rubric) used by a site-based supervisor in the assessment process (Table 7). The most commonly reported criteria considered were professional demeanor of the faculty member and comments from other members of the health care team.

Table 7.
Criteria Considered by the Site-based Director or Supervisor in a Letter of Evaluation and/or Grading Tool, No. (%)


In the current study, all responding colleges and schools of pharmacy that used full-time faculty members to precept students either already assessed preceptors or were in the process of developing a method to assess them. In addition, all but 1 respondent indicated that summative student evaluations during experiential education were used in the assessment process. Our results mirror those of Barnett and Matthews, who reported that all of the 89 colleges and schools of pharmacy that responded to their survey instrument indicated that students completed evaluations of teaching during APPEs, although it is not clear whether each APPE was assessed by every student.14 Though not explicitly noted, these were likely summative because 99% of respondents reported that student evaluations were conducted at the conclusion of the APPE. Thus, this assessment strategy remains the most commonly used one. Evaluation of teaching during IPPEs was not determined in the article by Barnett and Matthews. Interestingly, respondents to the current survey instrument indicated that full-time faculty preceptors are commonly evaluated for their activities during IPPEs as well as APPEs, an expected evolution.

Evaluation by a pharmacy supervisor at the site, whether by submitting a letter of evaluation or by completing an assessment rubric, appears to be a slightly (but not significantly) more frequent assessment methodology than onsite assessment by college or school personnel. As the number of experiential sites expands and colleges and schools use more sites outside their general region, this practice is likely to increase. Regardless, the percentage of institutions that use pharmacy supervisors or college and school personnel to assess practice-based faculty members is significantly lower than the percentage that use summative student evaluations. There are, however, several concerns surrounding the use of course evaluations by students.3,10,11 For example, Kidd and Latif noted that students’ grade expectations for a course were highly correlated with the mean course evaluation score.10 Therefore, we suggest that colleges and schools of pharmacy make greater use of additional assessment methodologies, such as letters of evaluation by pharmacy supervisors and peer assessment.

The article by Barnett and Matthews focused on teaching evaluations and did not delve into other areas of faculty involvement such as scholarship and service.14 The results from the current survey illustrate that when evaluating full-time faculty preceptors, responding colleges and schools of pharmacy most commonly consider the standard “three-legged stool” of an academic appointment: teaching (usually assessed via summative student evaluations), service (professional and community, perhaps in lieu of or in addition to university/college/school activities), and scholarship. Although there may be a quantitative difference in the emphasis placed on each category, practice-based full-time faculty preceptors appear to be viewed in a manner similar to that for campus-based faculty members. The current study, however, did not seek information on criteria for reappointment, promotion, and/or tenure. Nevertheless, whether practice-based full-time faculty preceptors should be assessed in the same manner as campus-based faculty members should be examined. An alternative approach has been advocated18,19 and is commonly followed in the medical model, whereby “clinician-educators” or “clinical teachers” are assessed in a manner that, for example, does not emphasize original research.20,21

The current study identified the abilities and attributes sought in full-time faculty preceptors. The most commonly reported were the ability to provide a positive learning experience, to interact with students, to function as a role model, and to clearly explain the objectives and/or expectations of the practice experience. Unless these items are included in the student evaluations and their assessment is emphasized, it is unclear how colleges and schools assessed them. Colleges and schools of pharmacy should consider including the desired abilities and attributes in rubrics and other assessment tools provided to site-based and/or campus-based persons responsible for assessing full-time faculty preceptors. This might require greater delineation of the optimal abilities and attributes.

As with all self-administered survey instruments, it is possible that some respondents misinterpreted 1 or more questions. There also is a potential for introducing bias into a study when data are self-reported. The questionnaire was lengthy and not all respondents answered every question. The study methodology did not include collecting summative and formative student evaluations, copies of rubrics used to assess preceptors, and sample letters prepared by site-based supervisors so no conclusions may be drawn regarding the quality or consistency of these assessment methods. Finally, the manner in which volunteer or adjunct preceptors are assessed was not studied.


The colleges and schools of pharmacy that responded to the survey instrument were assessing full-time practice-based faculty members. The most common assessment strategy was summative student evaluations, but a wide variety of other assessment strategies was used. Evaluations by site-based supervisors and/or college personnel are 2 assessment methods that could be used to a greater extent. In addition, colleges and schools should consider whether the abilities and attributes desired for practice-based faculty members are being assessed adequately. Colleges and schools of pharmacy should review their assessment strategies for practice-based faculty-member preceptors, keeping in mind the methodologies used by other institutions.


The authors wish to acknowledge Garrett Dancik, MS, PhD, Department of Surgery, School of Medicine, University of Colorado (Denver), for conducting the statistical analysis for the manuscript.


1. Accreditation Council for Pharmacy Education. Accreditation standards and guidelines for the professional program in pharmacy leading to the doctor of pharmacy degree. Version 2.0. Adopted January 23, 2011. Accessed July 21, 2012.
2. Lubawy WC. Evaluating teaching using the best practices model. Am J Pharm Educ. 2003;67(3):Article 87.
3. Piascik P, Pittenger A, Soltis R, et al. An evidence basis for assessing excellence in pharmacy teaching. Currents Pharm Teach Learn. 2011;3(4):238–248.
4. Peterson SL, Wittstrom KM, Smith MJ. A course assessment process for curricular quality improvement. Am J Pharm Educ. 2011;75(8):Article 157. [PMC free article] [PubMed]
5. Schultz KK, Latif D. The planning and implementation of a faculty peer review teaching project. Am J Pharm Educ. 2006;70(2):Article 32. [PMC free article] [PubMed]
6. Hansen LB, McCollum M, Paulsen SM, et al. Evaluation of an evidence-based peer teaching assessment program. Am J Pharm Educ. 2007;71(3):Article 45. [PMC free article] [PubMed]
7. Trujillo JM, DiVall MV, Barr J, et al. Development of a peer teaching-assessment program and a peer observation and evaluation tool. Am J Pharm Educ. 2009;72(6):Article 147. [PMC free article] [PubMed]
8. Wellein MG, Ragucci KR, Lapointe M. A peer review process for classroom teaching. Am J Pharm Educ. 2009;73(5):Article 79. [PMC free article] [PubMed]
9. Anderson HM, Cain J, Bird E. Online student course evaluations: review of literature and a pilot study. Am J Pharm Educ. 2005;69(1):Article 5.
10. Kidd RS, Latif DA. Student evaluations: Are they valid measures of course effectiveness? Am J Pharm Educ. 2004;68(3):Article 61.
11. Barnett CW, Matthews HW, Jackson RA. A comparison between student ratings and faculty self-ratings of instructional effectiveness. Am J Pharm Educ. 2003;67(4):Article 117.
12. Surratt CK, Desselle SP. Pharmacy students’ perceptions of a teaching evaluation process. Am J Pharm Educ. 2007;71(1):Article 06. [PMC free article] [PubMed]
13. McCollum M, Cyr T, Criner TM, et al. Implementation of a web-based system for obtaining curricular assessment data. Am J Pharm Educ. 2003;67(3):Article 80.
14. Barnett CW, Matthews HW. Teaching evaluation practices in colleges and schools of pharmacy. Am J Pharm Educ. 2009;73(6):Article 103. [PMC free article] [PubMed]
15. R Development Core Team. R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria; 2009. Accessed July 21, 2012.
16. Krejcie RV, Morgan DW. Determining sample size for research activities. Educ Psychol Measure. 1970;30(3):607–610.
17. University of Texas. Instructional Assessment Research. Accessed July 21, 2012.
18. Fleming VM, Schindler N, Martin GJ, DaRosa DA. Separate and equitable promotion tracks for clinician-educators. JAMA. 2005;294(9):1101–1104. [PubMed]
19. Hauer KE, Papadakis MA. Assessment of the contributions of clinician educators. J Gen Intern Med. 2010;25(1):5–6. [PMC free article] [PubMed]
20. Beasley BW, Wright SM, Cofrancesco J, Babbott SF, Thomas PA, Bass EB. Promotion criteria for clinician-educators in the United States and Canada: a survey of promotion committee chairpersons. JAMA. 1997;278(9):723–728. [PubMed]
21. Atasoylu AA, Wright SM, Beasley BW, et al. Promotion criteria for clinician-educators. J Gen Intern Med. 2003;18(9):711–716. [PMC free article] [PubMed]
