Patient Educ Couns. Author manuscript; available in PMC 2010 December 1.
PMCID: PMC2787991; NIHMSID: NIHMS148324

A Structured Implicit Abstraction Method to Evaluate whether Content of Counseling before Prostate Cancer Screening is Consistent with Recommendations by Experts

Abstract

Objective

To assess the content of counseling about prostate-specific antigen (PSA) screening. Guidelines recommend informed consent before screening because of concerns about benefits versus risks. As part of the professional practice standard for informed consent, clinicians should include content customarily provided by experts.

Methods

40 transcripts of conversations between medicine residents and standardized patients were abstracted using an instrument derived from an expert Delphi panel that ranked 10 “facts that experts believe men ought to know.”

Results

Transcripts contained definite criteria for an average of 1.7 facts, and either definite or partial criteria for 5.1 facts. Second- and third-year residents presented more facts than interns (p=0.01). The most common facts were “false positive PSA tests can occur” and “use of the PSA test as a screening test is controversial.” There was an r=0.88 correlation between inclusion by residents and the experts’ ranking.

Conclusion

Counseling varied but most transcripts included some expert-recommended facts. The absence of other facts could be a quality deficit or an effort to prioritize messages and lessen cognitive demands on the patient.

Practice implications

Clinicians should adapt counseling for each patient, but our abstraction approach may help to assess the quality of informed consent over larger populations.

Keywords: Communication, Doctor-patient relationship, Prostate-specific antigen, Risk counseling, Informed consent

1. INTRODUCTION

Communication is said to be the “main ingredient” of medical care [1]; effective communication is associated with improvements in patients’ and clinicians’ mutual understanding as well as in relationships, satisfaction, patient adherence, and medical outcomes [1–7]. Problems with communication have been found in a wide variety of health care settings [5–13], but most efforts to improve communication have been limited to educational programs [13–16]. Since most methods developed for training programs require more resources than would be feasible across the entire population of health care providers and patients, we have been adapting methods from Quality Improvement for use with communication [17–26]. Quality Improvement has a successful track record for affordably improving physician behavior across entire states [11, 12], or what some call a “population scale” [27]. To improve access to communication services across entire populations, new communication assessment methods will need to function on a lean budget and be simple enough for use by personnel without much experience in communication skills training [28].

This analysis of medicine residents’ counseling about prostate cancer screening is part of a larger effort to develop communication assessment tools that are suitable for use on a population scale [17–26]. We use a quality indicator strategy to operationalize individual communication behaviors. Quality indicators are quantitatively reliable variables, each representing a small part of total quality, that can be tracked and reassessed after an intervention [28, 29]. We have previously reported communication quality indicators operationalizing how clinicians conduct their communication, ranging from jargon usage and assessment of understanding to discussion about potential emotions [19–25]. In this paper we introduce a communication quality indicator group that operationalizes whether the content messages included in counseling are consistent with recommendations from experts. Making comparisons between clinicians’ counseling and experts’ recommendations is consistent with a “professional practice” legal standard for informed consent, which holds that patients should be given the same information customarily disclosed by expert physicians for similar patients’ best interests [30].

We chose to study communication about prostate cancer screening because professional organizations recommend routine counseling about its potential risks [31–34]. Counseling is recommended for screening because there is no high-level evidence to suggest that screening with the prostate-specific antigen (PSA) test decreases morbidity or mortality [34–39]. Informing men does not always influence decision-making [40–45], but may have legal or intrinsic ethical value and is also important for helping men to understand the danger of prostate cancer and to be mentally prepared for an abnormal screening result [32, 33]. As with our other quality indicators [17–25], we chose to study counseling in resident physicians for feasibility reasons and to obtain a comparison sample for later studies of clinicians’ counseling after formal education is complete [25].

In addition to investigating communication by resident physicians, this study has two other purposes. First, the paper presents a methodological advancement, in that we derived the indicator criteria from the results of a previously published Delphi panel by Chan and Sulmasy [35]. Our previous studies of content of counseling after newborn screening used individual consultations with experts [17, 18]. Delphi-derived data have the advantage of being anonymously collected, less prone to bias, and ranked by priority [46].

Second, we designed this study to pilot the use of the structured implicit review method for communication transcripts, instead of the explicit criteria abstraction approach we have used for other communication quality indicators [19–24]. These approaches are adapted from quality improvement techniques for review of medical records [47]. In explicit criteria abstraction, chart reviewers search for objective features of medical care, following a data dictionary containing explicitly detailed definitions and examples. In structured implicit review, clinically knowledgeable abstractors are asked to make judgments about specific aspects of quality, using survey-like questions as a guide. Structured implicit review enables abstractors to make inferences about causation and clinicians’ motivations, and to identify nuances that might be missed by explicit criteria abstraction [48]. Explicit criteria abstraction generally has better quantitative reliability [19–24] than structured implicit review, but for this project we chose structured implicit abstraction because its greater flexibility allowed us to incorporate the entire Chan and Sulmasy list without having to infer beyond the Delphi panel’s wording. Use of structured implicit methods also allowed us to investigate whether structured implicit review performs similarly for communication as it does in traditional Quality Improvement.

2. METHODS

2.1. Data source

For this study we abstracted transcripts of conversations between internal medicine residents and standardized patients portraying a man with a question about prostate cancer screening. Transcripts were made from tapes collected during four workshops in a Primary Care Internal Medicine residency program. The workshops were part of the educational curriculum, but residents were asked to give informed consent and were offered a chance to decline use of their tapes for research. Methods were approved by institutional review boards at Yale and the Medical College of Wisconsin.

Before the didactic portion of each workshop, residents were taped in a standardized patient encounter, in which a 50-year-old man asked about prostate cancer screening. Encounters took place in the residents’ actual continuity clinic environment, and were not observed “live” by peers or attending physicians. A handout stated that the patient had no family history of cancer and had had an unremarkable physical exam the week before, so the resident would not feel obliged to do a physical or take an extended history. The handout did not contain any suggestions about how to discuss the screening test, and none of the residents had previously been taught about or trained in the facts recommended by the Chan and Sulmasy Delphi group.

Following the techniques of our Brief Standardized Communication Assessment (BSCA) tool for focusing data collection [25], patients began with a short speech patterned after the following example:

I’m sorry I’m back so soon after my physical, but I had to leave so quickly that I didn’t get a chance to ask a question. I recently saw an advertisement about prostate cancer screening, but I wasn’t sure if it was for me. What do you think?

To standardize the counseling task, patients were coached to avoid asking leading questions and to minimize the appearance of anxiety or confusion. All standardized patients were men chosen to plausibly depict the 50-year-old man in the script; to further standardize the task, all were Caucasian.

The tapes were transcribed verbatim and proofread for accuracy by a board-certified internist (MF or JS). To lessen abstractor bias, all names and other personally identifying information were removed from the transcription during the proofreading process. There was a final sample of 40 transcripts for this analysis.

2.2. Structured implicit abstraction

Our abstraction methods used a content message identification instrument derived from a study by Chan and Sulmasy [35]. As part of that study, a Delphi panel of national experts on prostate cancer (6 urologists and 6 non-urologists) was convened to identify and vote on priorities for a list of ten “key facts experts believe men ought to know” about PSA screening before giving consent. As a result of the Delphi method, the list is ranked so that the top facts received more of the experts’ high-priority ratings than the lower-ranked facts (Table 1). Because 6 of the facts contained two related but distinct concepts and another fact contained 3 concepts, for abstraction purposes we parsed each fact into individual messages, yielding a final total of 18. For a separate analysis we added another 16 messages derived from additional lists described by Chan and Sulmasy. The final abstraction instrument therefore consisted of 34 separate messages that could be consolidated back into the original expert-recommended facts. The abstraction instrument was designed to facilitate the structured implicit review method as described in the Introduction.
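
As a minimal sketch of how such an instrument might be represented for abstraction, consider the following Python structure. The fact-to-message mapping shown here is hypothetical: only the first- and fourth-ranked fact texts are quoted verbatim in this paper, and the message identifiers are ours rather than the Delphi panel’s wording.

    # Hypothetical representation of the abstraction instrument: each expert-
    # ranked fact maps to the component messages it was parsed into. Only the
    # two fact texts quoted in this paper are shown; the rest are elided.
    INSTRUMENT = {
        1: {"fact": "The use of the PSA test as a screening test is controversial",
            "messages": ["psa_screening_controversial"]},
        4: {"fact": "False positive PSA tests can occur",
            "messages": ["false_positive_possible"]},
        # ... facts 2-3 and 5-10, parsed into 34 messages in total
    }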

Table 1
Expert-recommended “facts” about prostate cancer screening*

To focus the abstraction procedure, abstractors were instructed to read and abstract the transcript one sentence at a time. Individual instances of each message were marked in the transcript, but in contrast to our previous analyses of individual statements [17–24], the final unit for this analysis was the entire transcript, i.e. whether each of the 34 messages was present somewhere in the transcript.

It is important to recognize that transcript abstraction is a much more targeted technique than the open-ended coding method referred to as “qualitative” analysis. As with their counterparts in Quality Improvement, communication abstractors read quickly through the transcript looking only for very specific content- or conduct-related communication behaviors. In comparison, qualitative methods take much longer, are less reliable, and require more expertise than would be feasible for population-scale use, even though they might provide richer descriptions of the interaction between clinician and patient.

To allow for partially ambiguous statements and to help calculate inter-abstractor reliability, the instrument allowed the abstractors to assign either “definite” or “partial” designations to the message variables. After abstraction was complete, the definite and partial codings were consolidated back into the 10 facts that had been recommended by the Delphi panel of experts. For our analysis to accept a fact as “definite,” 2 abstractors had to have designated all component messages as definite, or 1 abstractor as definite and the other abstractor as partial. Each transcript was reviewed by at least 2 abstractors; 23% of transcripts were reviewed by 3 abstractors. Abstraction data from every third transcript were discussed by the abstractors for quality control purposes, following the suggestion by Feinstein [49].
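
The consolidation rule can be expressed compactly in code. The following Python sketch assumes per-abstractor codes of “definite,” “partial,” or absent for each component message; the “definite” rule follows the text above, while the “partial” fallback is our reading of the consolidation (the paper does not spell it out), and all function and variable names are ours.

    def consolidate_fact(codes_by_abstractor, component_messages):
        # codes_by_abstractor: one dict per abstractor mapping
        # message id -> "definite" | "partial" | "absent".
        def message_definite(msg):
            codes = sorted(a.get(msg, "absent") for a in codes_by_abstractor[:2])
            # Definite requires two definite codes, or one definite plus one partial.
            return codes in (["definite", "definite"], ["definite", "partial"])

        def message_present(msg):
            return any(a.get(msg, "absent") != "absent" for a in codes_by_abstractor)

        if all(message_definite(m) for m in component_messages):
            return "definite"
        if all(message_present(m) for m in component_messages):
            return "partial"  # assumption: any non-absent coding counts as partial
        return "absent"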

2.3. Analysis and statistics

Data were analyzed using 1-way ANOVA for continuous responses to categorical variables and the Chi-squared test for grouped categorical responses. Analyses were done using JMP software (SAS Institute, Cary, NC, USA).
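
The analyses were run in JMP; for readers who prefer an open-source equivalent, the following sketch shows the same two tests in Python with scipy. The arrays below are illustrative stand-ins, not the study’s data.

    from scipy import stats

    # Illustrative data only: number of facts per transcript, grouped by
    # postgraduate year.
    facts_pgy1 = [2, 3, 4, 3, 2]
    facts_pgy2 = [5, 6, 4, 6, 5]
    facts_pgy3 = [7, 6, 5, 7, 6]

    # 1-way ANOVA: continuous response across categorical groups.
    f_stat, p_anova = stats.f_oneway(facts_pgy1, facts_pgy2, facts_pgy3)

    # Chi-squared test: grouped categorical responses, e.g. fact present
    # versus absent by training year.
    table = [[10, 5], [12, 4], [11, 3]]  # rows: PGY year; columns: present, absent
    chi2, p_chi2, dof, expected = stats.chi2_contingency(table)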

Inter-abstractor reliability was calculated from the individual 34 concept variables using a weighted adaptation of Cohen’s method [50] for the definite/partial/absent coding schema. For perfect agreement in this method, both abstractors had to code the transcript in exactly the same way (definite or partial or absent). If abstractors split on definite versus partial ratings, half of an agreement was included in the calculation, although a full potential agreement was still used in the denominator for the Cohen correction for chance.
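
A minimal sketch of this weighted agreement scheme, as we read it: identical codes earn a full agreement, a definite/partial split earns half, and chance agreement is estimated from each abstractor’s marginal code frequencies under the same weights. The marginal-based chance estimate is our assumption about how the Cohen correction was applied, and the names are ours.

    from collections import Counter

    CODES = ("definite", "partial", "absent")

    def credit(a, b):
        # Agreement credit for one message coded by two abstractors.
        if a == b:
            return 1.0
        if {a, b} == {"definite", "partial"}:
            return 0.5  # a definite/partial split counts as half an agreement
        return 0.0

    def weighted_kappa(pairs):
        # pairs: list of (abstractor1_code, abstractor2_code) tuples.
        n = len(pairs)
        observed = sum(credit(a, b) for a, b in pairs) / n
        m1, m2 = Counter(a for a, _ in pairs), Counter(b for _, b in pairs)
        expected = sum(credit(c1, c2) * (m1[c1] / n) * (m2[c2] / n)
                       for c1 in CODES for c2 in CODES)
        return (observed - expected) / (1 - expected)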

3. RESULTS

Descriptive data on the participants (Table 2) were similar to those of the population of the residency program at the time of the study. The interviews lasted an average of 10.4 minutes (SD 4.5, skew 0.65).

Table 2
Participant Characteristics

3.1. Performance of the abstraction method

For abstractors’ coding of the 34 content messages over the whole project, there were 1654 out of a possible 1972 agreements (κ=0.64). The median and maximum κ coefficients for the 34 individual messages were 0.51 and 0.82, respectively, consistent with our expectations for the structured implicit abstraction method. To assess the feasibility of our methods, we tracked expenses. The entire project was done for less than $50 per transcript, most of which was for transcription. Use of the abstraction instrument took less than 5 minutes of abstractor time per transcript.

3.2. Number of definite-criteria facts included

Using a strict approach of only counting facts that met “definite” criteria for the abstractors, the average total number of facts included was 1.7 per transcript (SD 1.2). As shown in the histogram in Figure 1, most transcripts included 3 or fewer facts. Eight transcripts failed to meet definite criteria for any of the recommended facts. No transcripts contained more than 5 facts.

Figure 1
Number of expert-recommended facts included in residents’ counseling

Transcripts contained an average of 3.4 content messages that met partial criteria because of partial overlap with the Delphi panel’s recommendations. When these partial-criteria facts were included in the calculation, the average number of facts identified increased to 5.1 facts per transcript (SD 2.1). One transcript included either definite or partial criteria for all 10 facts. A histogram of the number of definite plus partial facts per transcript has a roughly normal distribution (Figure 2).

Figure 2
Total number of expert-recommended facts included when partial criteria were considered

3.3. Facts included in counseling

Table 3 shows the number and percentage of transcripts including each fact when the analysis included either definite criteria or definite plus partial criteria. There was a positive correlation between the number of transcripts with definite-criteria facts and the facts’ ranking by the experts (r=0.88, p<0.001). The most common fact included was the fourth-ranked (“False positive PSA tests can occur”), seen with definite criteria in 58% of transcripts and with partial criteria in another 35% of transcripts. The experts’ top-ranked fact (“The use of the PSA test as a screening test is controversial”) was the second most common in the transcripts, with definite criteria in 33% of transcripts and partial criteria in another 50%. Definite criteria for the experts’ second-ranked fact were seen in only 3 transcripts (8%).
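
For illustration, a correlation of this kind can be checked with scipy. The counts below are hypothetical stand-ins (only the 33% and 58% figures come from the text, and Table 3 is not reproduced here), and the sign of r depends on whether ranking is coded as a rank number or as a priority score.

    from scipy import stats

    # Priority score: larger value = higher expert priority (11 minus rank).
    priority = [11 - rank for rank in range(1, 11)]
    # Percentage of transcripts with definite criteria for each fact; the
    # values for facts 1 and 4 come from the text, the rest are hypothetical.
    pct_definite = [33, 8, 20, 58, 15, 12, 10, 9, 7, 5]

    r, p = stats.pearsonr(priority, pct_definite)  # the paper reports r = 0.88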

Table 3
Number and percentage of transcripts containing criteria for each expert-recommended fact

3.4. Factors associated with fact inclusion

When both definite and partial criteria facts were considered, we found that second- and third-year residents tended to include more facts in counseling (averages 5.3 and 6.3) than first-year residents (average 3.2, p=0.005). No significant difference was apparent between male and female residents (average 5.4 versus 5.0).

When only definite-criteria facts were included, there was a mildly positive correlation between duration of the transcript and the number of facts (r=0.32, p=0.04). The correlation increased to 0.54 (p=0.003) with the addition of the partial-criteria facts. This moderate correlation suggested to us that longer conversations might devote much of their additional duration to longer discussions of the same content messages, or else to conduct-related behaviors such as assessment of understanding.

4. DISCUSSION AND CONCLUSION

4.1. Discussion

Effective communication before PSA screening is consistent with guidelines and will help men to be mentally prepared for an abnormal result [3134]. In this paper we introduce a communication quality indicator for assessing the content of counseling likely to be provided. Previously we have developed communication quality indicators pertaining to several important communication behaviors, including other types of content messages [17, 18], jargon usage and explanation [21, 22], assessment of understanding [23, 24], speech complexity [25], and discussion about potential emotions [19, 20]. This paper demonstrates two new methodological options for communication quality indicators: derivation of abstraction criteria from a Delphi panel of experts [35] rather than from consultants’ expert opinions [17], and use of a structured implicit review technique for those analyses where abstractor flexibility is needed for the analysis aims.

In our demonstration sample of residents we found that the content of counseling varied widely but often lacked facts that were highly recommended by the expert Delphi panel. The addition of partial criteria to the analysis suggested that many residents may touch briefly on some of the expert-recommended facts, but without enough specificity to convey the entire concept to our abstractors. These findings could amount to a problem with quality of communication, in which case feedback about being more specific may help clinicians like our residents to be more effective communicators. Alternatively, the residents could have been intentionally holding to the time limits that they normally face in clinic. The residents may also have prioritized their content messages because of an understanding that only a limited number of messages can be successfully learned in one visit [51]. Prioritization would be consistent with “first visit bias,” a possible limitation of standardized patient assessment methods [52].

Generalizability from this demonstration analysis is limited by the use of a small sample of residents from a single program, so further research at other programs and with physicians working outside of academic settings should be conducted before more specific conclusions can be drawn. Some questions about communication quality may be answered by the implementation of communication assessment methods over an entire population of clinicians and patients, such as our ongoing statewide analysis of processes and outcomes of communication after routine newborn screening [26]. To make communication assessment possible on this population scale, we designed the method to be affordable enough to work on a lean budget and simple enough to be implemented by Quality Improvement personnel without communication research training. To increase acceptability, data collection is brief enough to reduce annoyance for busy clinicians, standardized enough to make comparisons fair, and transparent enough for clinicians to understand how we arrived at their assessment result. Feasibility for standardized patients on a population scale is advanced by use over the telephone [25]. Detailed descriptions of Communication Quality Assurance methods will be addressed in a forthcoming series of papers.

Generalizability from our demonstration analysis may also be reduced by the use of standardized patient encounters instead of recording encounters with actual patients. We chose to use simulation methods because they avoid logistical, privacy, and consent issues that would pose challenges for population-scale use. A limitation of standardized patient methods is that they may prompt clinicians to act on their best behavior due to a Hawthorne-like sense of observation. Future communication quality indicator analyses may be able to use recordings of encounters with actual patients, or with unannounced standardized patients.

A significant advantage of standardized patients, and our BSCA approach in particular, is that variation can be minimized so as to enable fairer comparisons and ranking of clinicians. The Hawthorne effect may reduce cross-clinician variation due to clinician effort, and the instructions provided to BSCA patients may reduce variation due to patient differences or the attitudes of traditional standardized patients. The resulting uniformity across clinicians allows an equal-footing comparison of communication competence that would be impossible in analyses of actual encounters or unannounced simulated patients [25]. We believe that assessment of competence, which is a necessary prerequisite for performance [53], will provide a cost-effective strategy for improvement until research from us and others suggests that a global improvement in communication competence is being achieved. In colloquial terms, we see communication competence as the current version of the “low hanging fruit” that is often sought in traditional Quality Improvement. In the meantime, our newborn screening research will compare competence data with patients’ communication outcomes [26]. Further research may also help to clarify whether such strict data collection and abstraction procedures will be valuable as a supplement to the plausibility and flexibility needed for medical education.

Our experience in this project with structured implicit review methods for communication transcripts was about what we had expected. We chose structured implicit methods for our Delphi-derived quality indicator because they allowed us to be faithful to the experts’ original wording [46], and to evaluate the effect on reliability of reducing the explicitness of the abstraction criteria used in our previous projects [17–24]. As expected, we found that reliability was lower than we have experienced with our explicit criteria abstraction indicators. On the other hand, reliability was consistent with that often seen in quality improvement projects that incorporate structured implicit review of medical records [25, 47, 54]. The ideal quality indicator uses explicit criteria abstraction [29], but in both traditional Quality Improvement and Communication Quality Assurance, the use of structured implicit review in abstraction may be an acceptable tradeoff when flexibility is needed for a given research topic [47, 54].

Further research will be needed to develop more concrete guidelines for communication content, and guidance for how clinicians may plan out their counseling about specific issues. There is no consensus on how many messages need to be delivered for a patient to be “fully” informed about PSA screening, but conversations including few or none of the Chan and Sulmasy facts may be inconsistent with the professional practice standard for informed consent prior to PSA screening. For individual patients, the question of how many concepts to present depends on factors such as the patient’s levels of interest, attention, and health literacy. Regardless of the number of content messages included in counseling, clinicians are advised to use assessment of understanding questions to determine whether the messages were presented effectively [23, 24]. In addition, such methods will be accompanied by our other communication quality indicators, so that those methods’ greater reliability will enhance the fairness of comparisons across clinicians.

4.2. Conclusion

The data from this project are preliminary, but they suggest that problems may exist with the content domain of communication quality prior to PSA screening. The new method for comparing an aspect of counseling against a Delphi-derived practice standard appears to be affordable, feasible for use on a population scale, and comparable in reliability to similar analyses in traditional Quality Improvement. Further research and development needs to be done, but these methodological innovations hold promise for the nascent field of population-scale Communication Quality Assurance.

4.3. Practice implications

We believe that if informed consent is worth doing, it is worth doing well. We hope that our observations of the content messages included in pre-PSA counseling will expand awareness among clinicians and medical educators about the difficulties of practical counseling in office settings. Even more so, we hope that implementation of population-scale assessment methods will enable improvement of communication on the same scale as health care itself. Improvements in the content of counseling may improve the experience of health care and promote the type of informed decision-making that guidelines recommend.

Acknowledgments

The authors are grateful to Dr. Stephen Huot and to the faculty and residents of the Yale University Primary Care Internal Medicine Residency Program. Dr. Farrell is supported in part by grants K01HL072530 and R01HL086691 from the National Heart, Lung, and Blood Institute. The authors do not have any actual or potential conflicts of interest to declare; there are no personal or other relationships with other people or organizations that could inappropriately influence, or be perceived to influence, this research.


REFERENCES

1. Roter DL, Hall JA, editors. Doctors Talking with Patients, Patients Talking with Doctors: Improving Communication in Medical Visits. Westport, Connecticut: Auburn House; 1992.
2. Kaplan SH, Greenfield S, Ware JE Jr. Assessing the effects of physician-patient interactions on the outcomes of chronic disease. Med Care. 1989;27:S110–S127.
3. Cegala DJ, Marinelli T, Post D. The effects of patient communication skills training on compliance. Arch Fam Med. 2000;9:57–64.
4. Thompson SC, Nanni C, Schwankovsky L. Patient-oriented interventions to improve communication in a medical office visit. Health Psychol. 1990;9:390–404.
5. Stewart MA. Effective physician-patient communication and health outcomes: a review. Can Med Assoc J. 1995;152:1423–1433.
6. Roter DL, Hall JA, Kern DE, Barker LR, Cole KA, Roca RP. Improving physicians' interviewing skills and reducing patients' emotional distress. A randomized clinical trial. Arch Intern Med. 1995;155:1877–1884.
7. Beckman HB, Frankel RM. The effect of physician behavior on the collection of data. Ann Intern Med. 1984;101:692–696.
8. Hadlow J, Pitts M. The understanding of common health terms by doctors, nurses and patients. Soc Sci Med. 1991;32:193–196.
9. Beckman H, Markakis K, Suchman A, Frankel R. The doctor-patient relationship and malpractice. Lessons from plaintiff depositions. Arch Intern Med. 1994;154:1365–1370.
10. Baile WF, Lenzi R, Kudelka AP, Maguire P, Novack D, Goldstein M, Myers EG, Bast RCJ. Improving physician-patient communication in cancer care: outcome of a workshop for oncologists. J Cancer Educ. 1997;12:166–173.
11. Jencks SF, Cuerdon T, Burwen DR, Fleming B, Houck PM, Kussmaul AE, Nilasena DS, Ordin DL, Arday DR. Quality of medical care delivered to Medicare beneficiaries: a profile at state and national levels. JAMA. 2000;284:1670–1676.
12. Jencks SF, Huff ED, Cuerdon T. Change in the quality of care delivered to Medicare beneficiaries, 1998–1999 to 2000–2001. JAMA. 2003;289:305–312.
13. Smith R, Lyles J, Mettler J, Stoffelmayr B, Van Egeren L, Marshall A, Gardiner J, Maduschke K, Stanley J, Osborn G, Shebroe V, Greenbaum R. The effectiveness of intensive training for residents in interviewing. A randomized, controlled study. Ann Intern Med. 1998;128:118–126.
14. Coulehan JL, Block MR. The medical interview: mastering skills for clinical practice. 5th ed. Philadelphia: F.A. Davis Co.; 2006.
15. Bylund CL, Brown RF, di Ciccone BL, Levin TT, Gueguen JA, Hill C, Kissane DW. Training faculty to facilitate communication skills training: development and evaluation of a workshop. Patient Educ Couns. 2008;70:430–436.
16. Lang F, Everett K, McGowen R, Bennard B. Faculty development in communication skills instruction: insights from a longitudinal program with "real-time feedback". Acad Med. 2000;75:1222–1228.
17. Farrell MH, La Pean A, Ladouceur L. Content of communication by pediatric residents after newborn genetic screening. Pediatrics. 2005;116:1492–1498.
18. La Pean A, Farrell MH. Initially misleading communication of carrier results after newborn genetic screening. Pediatrics. 2005;116:1499–1505.
19. Donovan J, Deuster L, Christopher SA, Farrell MH. Residents' precautionary discussion of emotions during communication about cancer screening. Poster at the annual meeting of the Society for General Internal Medicine; 2007.
20. Donovan JJ, Farrell MH, Deuster L, Christopher SA. "Precautionary empathy" by child health providers after newborn screening. Poster at the International Conference on Communication and Health Care; Charleston, SC; 2007.
21. Deuster L, Christopher S, Donovan J, Farrell M. A method to quantify residents' jargon use during counseling of standardized patients about cancer screening. J Gen Intern Med. 2008;23:1947–1952. PMCID: PMC2596518.
22. Farrell M, Deuster L, Donovan J, Christopher S. Pediatric residents' use of jargon during counseling about newborn genetic screening results. Pediatrics. 2008;122:243–249. PMCID: pending.
23. Farrell MH, Kuruvilla P. Assessment of parental understanding by pediatric residents during counseling after newborn genetic screening. Arch Pediatr Adolesc Med. 2008;162:199–204.
24. Farrell MH, Kuruvilla P, Eskra KL, Christopher SA, Brienza RS. A method to quantify and compare clinicians' assessments of patient understanding during counseling of standardized patients. Patient Educ Couns. 2009;77:128–135. PMCID: PMC2737092.
25. Farrell MH, Christopher SA, La Pean A, Ladouceur LK. The Brief Standardized Communication Assessment: A Patient Simulation Method Feasible for Population-Scale Use in Communication Quality Assurance. Medical Encounter. 2009;23:64.
26. Farrell MH. R01 HL086691, Improvement of communication process and outcomes after newborn genetic screening. National Heart, Lung, and Blood Institute: Medical College of Wisconsin; 2008.
27. Kindig D, Stoddart G. What is population health? Am J Public Health. 2003;93:380–383.
28. Donabedian A. The quality of care. How can it be assessed? JAMA. 1988;260:1743–1748.
29. Campbell SM, Braspenning J, Hutchinson A, Marshall MN. Research methods used in developing and applying quality indicators in primary care. BMJ. 2003;326:816–819. PMCID: PMC1125721.
30. Faden RR, Beauchamp TL. A History and Theory of Informed Consent. New York, NY: Oxford University Press, Inc; 1986.
31. Lim LS, Sherin K. Screening for prostate cancer in U.S. men: ACPM position statement on preventive practice. Am J Prev Med. 2008;34:164–170.
32. Board of Directors of the American Urological Association. AUA Policy Statement on Early Detection of Prostate Cancer. 2006 [accessed 9/13/2009]. Available from: http://www.auanet.org/content/guidelines-and-quality-care/policy-statements/e/early-detection-of-prostate-cancer.cfm.
33. Smith RA, Cokkinides V, Eyre HJ. Cancer screening in the United States, 2007: a review of current guidelines, practices, and prospects. CA Cancer J Clin. 2007;57:90–104.
34. Screening for prostate cancer: U.S. Preventive Services Task Force recommendation statement. Ann Intern Med. 2008;149:185–191.
35. Chan EC, Sulmasy DP. What should men know about prostate-specific antigen screening before giving informed consent? Am J Med. 1998;105:266–274.
36. Chan EC, Vernon SW, Haynes MC, O'Donnell FT, Ahn C. Physician perspectives on the importance of facts men ought to know about prostate-specific antigen testing. J Gen Intern Med. 2003;18:350–356.
37. Woolf SH, Rothemich SF. Screening for prostate cancer: the roles of science, policy, and opinion in determining what is best for patients. Annu Rev Med. 1999;50:207–221.
38. Andriole GL, Grubb RL III, Buys SS, Chia D, Church TR, Fouad MN, Gelmann EP, Kvale PA, Reding DJ, Weissfeld JL, Yokochi LA, Crawford ED, O'Brien B, Clapp JD, Rathmell JM, Riley TL, Hayes RB, Kramer BS, Izmirlian G, Miller AB, Pinsky PF, Prorok PC, Gohagan JK, Berg CD, for the PLCO Project Team. Mortality results from a randomized prostate-cancer screening trial. N Engl J Med. 2009;360:1310–1319.
39. Schroder FH, Hugosson J, Roobol MJ, Tammela TLJ, Ciatto S, Nelen V, Kwiatkowski M, Lujan M, Lilja H, Zappa M, Denis LJ, Recker F, Berenguer A, Maattanen L, Bangma CH, Aus G, Villers A, Rebillard X, van der Kwast T, Blijenberg BG, Moss SM, de Koning HJ, Auvinen A, for the ERSPC Investigators. Screening and prostate-cancer mortality in a randomized European study. N Engl J Med. 2009;360:1320–1328.
40. Farrell MH, Murphy MA, Schneider CE. How underlying patient beliefs can affect physician-patient communication about prostate-specific antigen testing. Eff Clin Pract. 2002;5:120–129.
41. Frosch DL, Kaplan RM, Felitti V. The evaluation of two methods to facilitate shared decision making for men considering the prostate-specific antigen test. J Gen Intern Med. 2001;16:391–398.
42. Frosch DL, Kaplan RM, Felitti VJ. A randomized controlled trial comparing internet and video to facilitate patient education for men considering the prostate specific antigen test. J Gen Intern Med. 2003;18:781–787.
43. O'Brien MA, Whelan TJ, Villasis-Keever M, Gafni A, Charles C, Roberts R, Schiff S, Cai W. Are cancer-related decision aids effective? A systematic review and meta-analysis. J Clin Oncol. 2009;27:974–985.
44. Barry MJ. Health decision aids to facilitate shared decision making in office practice. Ann Intern Med. 2002;136:127–135.
45. Rai T, Clements A, Bukach C, Shine B, Austoker J, Watson E. What influences men's decision to have a prostate-specific antigen test? A qualitative study. Fam Pract. 2007;24:365–371.
46. Delbecq AL, Van de Ven AH, Gustafson DH. Group techniques for program planning: a guide to nominal group and Delphi processes. Middleton, Wisconsin: Green Briar Press; 1986.
47. Rubenstein LV, Kahn KL, Harrison ER, Sherwood WJ, Rogers WH, Brook RH. Structured implicit review of the medical record: a method for measuring the quality of in-hospital medical care and a summary of quality changes following implementation of the Medicare Prospective Payment System. RAND Note N-3033-HCFA; 1991.
48. Kahn KL, Rubenstein LV, Sherwood MJ, Brook RH. Structured implicit review for physician implicit measurement of quality of care: development of the form and guidelines for its use. 1989.
49. Feinstein A. Clinical Epidemiology: The Architecture of Clinical Research. Philadelphia: WB Saunders; 1985.
50. Cohen J. Weighted kappa: nominal scale agreement with provision for scaled disagreement or partial credit. Psychol Bull. 1968;70:213–220.
51. Ley P. Communicating with Patients. Improving communication, satisfaction and compliance. In: Marcer D, editor. Psychology and Medicine Series. London: Croom Helm; 1988.
52. Tamblyn RM, Abrahamowicz M, Berkson L, Dauphinee WD, Gayton DC, Grad RM, Isaac LM, Marrache M, McLeod PJ, Snell LS. First-visit bias in the measurement of clinical competence with standardized patients. Acad Med. 1992;67:S22–S24.
53. Miller GE. The assessment of clinical skills/competence/performance. Acad Med. 1990;65:S63–S67.
54. Hofer TP, Bernstein SJ, DeMonner S, Hayward RA. Discussion between reviewers does not improve reliability of peer review of hospital quality. Med Care. 2000;38:152–161.