A draft questionnaire was designed and reviewed by persons at 4 different colleges and schools of pharmacy with knowledge of the subject area and/or expertise in study design or curricular assessment. These individuals were asked to assess the proposed survey instrument for ease of completion, clarity, comprehensiveness, and overall suitability. Following feedback, the questionnaire was modified and sent to 3 different individuals with similar expertise for comments. After the second revision, the instrument and a cover letter were submitted to the institutional review board at Long Island University, and the research project was granted exempt status.
The final questionnaire contained 102 questions; however, because the instrument used branching logic in which a given response led to specific follow-up questions and omitted others, no respondent needed to answer all items. We estimated that the average respondent would need about 15 minutes to complete the survey instrument. The survey instrument included questions on the demographics of the respondent’s college or school, the abilities and attributes assessed for practice-based full-time faculty members, the types of assessments used, and the frequency of conducting each assessment. Respondents were asked to focus on full-time faculty members (including preceptors who were paid partially by the college/school and partially by the practice site but were treated as full-time faculty members) who were assigned to experiential sites to precept students and, in some cases, provide service to the site. (A copy of the survey instrument is available from the authors upon request.)
In December 2010, a list of 124 directors of pharmacy practice (or equivalent title) at colleges and schools of pharmacy was obtained from the American Association of Colleges of Pharmacy. The investigators checked each college and school Web site to verify the names of directors. If the Web site did not provide this information, the institution was contacted by telephone. Four institutions outside the United States and Puerto Rico were eliminated from the contact list, and 1 institution was added to the list for a total of 121 colleges and schools.
The survey instrument was made available online through StudentVoice (Fundamentals, Version 4, StudentVoice, Buffalo, NY), a commercial vendor. Invitations to participate, along with a link to the survey instrument, were sent to directors of pharmacy practice at the 121 colleges and schools in February 2011. Duplicate e-mail messages were sent to nonresponders approximately 3 and 6 weeks later, and telephone calls were made to persons who did not respond to the second e-mail reminder. In some instances, an alternate person was identified at a college or school (such as an assistant/associate dean for assessment or the director of experiential education), and that person was contacted via telephone and/or e-mail.
StudentVoice was used to collect and collate data, as well as to generate descriptive statistics. R software (R Foundation for Statistical Computing, Vienna, Austria) was used for all inferential statistical analyses and to confirm descriptive statistics, including frequencies, percentages, and means ± SD.15
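As a minimal sketch (not the study's code), descriptive statistics of this kind can be confirmed with base R functions; all data below are invented for illustration:

    ## Hypothetical survey records (invented values).
    inst_type <- factor(c("public", "private", "public", "public", "private"))
    table(inst_type)                               # frequencies
    round(100 * prop.table(table(inst_type)), 1)   # percentages
    minutes <- c(14, 16, 15, 13, 17)               # eg, completion times
    c(mean = mean(minutes), sd = sd(minutes))      # mean and SD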
Each question was analyzed for significant differences between public and private institutions. The Fisher exact test was used to compare percentages, and a 2-sample t test was used to compare means. Sets of related questions with binary yes/no responses (eg, for abilities/attributes assessed by the college or school) were analyzed for differences in marginal probabilities using the Cochran Q test. For significant results (p < 0.05), post hoc analysis for differences between pairs of responses was carried out using the McNemar test with Bonferroni correction for all pairwise comparisons, excluding “other” responses; an adjusted p value was obtained for each response.
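The following R sketch illustrates these tests on invented data; the counts, the simulated responses, and the hand-rolled cochran_q helper are assumptions for illustration (packaged alternatives such as CochranQTest in the DescTools package exist) and are not the study's code:

    ## Fisher exact test: percentage of public vs private institutions
    ## reporting a given practice (hypothetical 2 x 2 counts).
    counts <- matrix(c(20, 5,     # public: yes, no
                       12, 10),   # private: yes, no
                     nrow = 2, byrow = TRUE,
                     dimnames = list(c("public", "private"), c("yes", "no")))
    fisher.test(counts)

    ## Two-sample t test: compare means between institution types
    ## (hypothetical continuous responses).
    t.test(c(14, 12, 15, 13, 16), c(11, 10, 13, 12, 9))

    ## Cochran Q test for related binary items: rows are schools, columns
    ## are abilities/attributes (1 = assessed, 0 = not), via the standard
    ## formula Q = k(k-1) * sum((Cj - N/k)^2) / (k*sum(Ri) - sum(Ri^2)).
    cochran_q <- function(x) {
      k <- ncol(x); Cj <- colSums(x); Ri <- rowSums(x); N <- sum(x)
      Q <- k * (k - 1) * sum((Cj - N / k)^2) / (k * sum(Ri) - sum(Ri^2))
      list(Q = Q, df = k - 1, p.value = pchisq(Q, k - 1, lower.tail = FALSE))
    }
    set.seed(1)
    items <- matrix(rbinom(60 * 4, 1, c(0.9, 0.7, 0.4, 0.2)),
                    ncol = 4, byrow = TRUE,
                    dimnames = list(NULL, paste0("item", 1:4)))
    cochran_q(items)

    ## Post hoc: McNemar test for every pair of items, with Bonferroni
    ## adjustment of the resulting p values.
    pairs <- combn(ncol(items), 2)
    raw_p <- apply(pairs, 2, function(ij)
      mcnemar.test(table(items[, ij[1]], items[, ij[2]]))$p.value)
    p.adjust(raw_p, method = "bonferroni")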
Pairwise comparisons of overall responses revealed 3 distinct categories of reported frequencies for most questions, classified as most commonly reported, intermediate, and least commonly reported. Findings were assigned to these categories in such a way as to maximize the combined number of responses in the most and least commonly reported categories, with ties broken in favor of the most balanced assignment; responses assigned to neither extreme category were placed in the intermediate category. For all items, the overall responses (the total number of colleges and schools) in the most commonly reported category were significantly (p < 0.05) more common than those in the least commonly reported category. There were no significant differences between responses within any category, nor when comparing the most commonly reported category with the intermediate category or the intermediate category with the least commonly reported category.
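One plausible reading of this classification scheme, sketched in R below: sort items by reported frequency, then choose the largest top and bottom tiers whose between-tier comparisons are all significant and whose within-tier comparisons are not, breaking ties in favor of the most balanced split. The function name and all inputs are hypothetical, not the authors' procedure verbatim:

    ## freq: counts of schools reporting each item; p_adj: symmetric matrix
    ## of Bonferroni-adjusted McNemar p values between items (hypothetical).
    classify_tiers <- function(freq, p_adj, alpha = 0.05) {
      ord <- order(freq, decreasing = TRUE)
      n <- length(freq); best <- NULL
      for (a in 1:(n - 1)) {        # candidate size of the top tier
        for (b in 1:(n - a)) {      # candidate size of the bottom tier
          top <- ord[1:a]; bot <- ord[(n - b + 1):n]
          sig_between <- all(p_adj[top, bot] < alpha)
          ns_within <- all(p_adj[top, top][upper.tri(matrix(0, a, a))] >= alpha) &&
                       all(p_adj[bot, bot][upper.tri(matrix(0, b, b))] >= alpha)
          if (sig_between && ns_within &&
              (is.null(best) || a + b > best$size ||
               (a + b == best$size && abs(a - b) < best$imbalance))) {
            best <- list(top = top, intermediate = setdiff(ord, c(top, bot)),
                         bottom = bot, size = a + b, imbalance = abs(a - b))
          }
        }
      }
      best
    }

    ## Toy example: 5 items, with only top-vs-bottom comparisons significant.
    freq  <- c(item1 = 55, item2 = 52, item3 = 35, item4 = 14, item5 = 10)
    p_adj <- matrix(1, 5, 5)
    p_adj[1:2, 4:5] <- p_adj[4:5, 1:2] <- 0.001
    classify_tiers(freq, p_adj)   # top: items 1-2; bottom: items 4-5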