J Gen Intern Med. 2006 May; 21(5): 460–465.
PMCID: PMC1484801

Competency in Chest Radiography

A Comparison of Medical Students, Residents, and Fellows



Background

Accurate interpretation of chest radiographs (CXR) is essential, as clinical decisions depend on the readings.


Objective

We sought to evaluate CXR interpretation ability at different levels of training and to determine the factors associated with successful interpretation.


Design

Ten CXR were selected from the teaching file of the internal medicine (IM) department. Participants were asked to record the most important diagnosis, their certainty in that diagnosis, their interest in a pulmonary career, and the adequacy of their CXR training. Two investigators independently scored each CXR on a scale of 0 to 2.


Participants

Participants (n = 145) from a single teaching hospital were third-year medical students (MS) (n = 25), IM interns (n = 44), IM residents (n = 45), fellows from the divisions of cardiology and pulmonary/critical care (n = 16), and radiology residents (n = 15).


Results

The median overall score was 11 of 20. An increased level of training was associated with a higher overall score (MS 8, intern 10, IM resident 13, fellow 15, radiology resident 18; P<.001). Overall certainty was significantly correlated with overall score (r = .613, P<.001). Internal medicine interns and residents interested in a pulmonary career scored 14 of 20, while those not interested scored 11 (P = .027). Pneumothorax, misplaced central line, and pneumoperitoneum were diagnosed correctly 9%, 26%, and 46% of the time, respectively. Only 20 of 131 (15%) participants felt their CXR training was sufficient.


Conclusions

We identified factors associated with successful CXR interpretation, including level of training, field of training, interest in a pulmonary career, and overall certainty. Although interpretation improved with training, important diagnoses were missed.

Keywords: education, medical, radiography, thoracic, clinical competence, educational measurement

In academic medical centers, accurate interpretation of chest radiographs (CXR) is essential as house officers make clinical decisions based on their interpretations. Situations arise where action must be taken expeditiously before readings can be verified by an attending radiologist. Clinical decisions based on improper interpretations have potential implications for patient care.

Although competency in CXR interpretation is important, formal training varies widely. This may be partly because of the lack of importance national organizations place on CXR interpretation.1–7 Without national standards, there is wide variability among medical school and residency training programs.

Prior studies have shown inaccurate CXR interpretation by emergency medicine physicians,8,9 medical staff,10 primary care physicians,11 and anesthesiologists.12 Faulty interpretations change management in up to 11% of cases.10 Most studies, but not all,9,13,14 have demonstrated improved CXR interpretation scores with level of training.15–19 One study evaluating CXR interpretation by medical students (MS) showed that they performed poorly on common conditions and lacked confidence.20

We conducted a study to evaluate the competency of MS and house officers from different specialties in CXR interpretation. We also sought to identify factors that affect competence.


Methods

Study Design

The study took place at Beth Israel Medical Center, University Hospital and Manhattan Campus for The Albert Einstein College of Medicine, a teaching hospital with approximately 700 medical/surgical beds. The Beth Israel Medical Center Institutional Review Board approved this study. The requirement for written informed consent was waived, but participation was voluntary and all subjects gave oral consent.

Three authors selected a convenience sample of 10 CXR from the teaching file of an internal medicine (IM) training program. The CXR were chosen to represent common conditions that subjects should be expected to diagnose, and the images were printed on photography-grade paper. Each CXR was viewed as having only 1 correct diagnosis for the purpose of this study. We specifically included 1 normal CXR and 3 examples of radiographic emergencies—pneumothorax, misplaced central venous catheter (subclavian line with distal segment in internal jugular vein), and pneumoperitoneum (Figs. 1–4).

Figures 1–4
Misplaced central venous catheter.

All CXR were reviewed independently by 2 experts in CXR interpretation (a radiology attending and a pulmonary/critical care attending). They were not involved in the selection of the CXR and were viewing the CXR for the first time. The experts were blinded to the study's purpose as well as demographic or clinical information and did not have any other role in the conduct of the study. They independently recorded the most important finding on each CXR. There was uniform agreement on all diagnoses. The CXR included in the study are shown in Table 1.

Table 1
Chest Radiograph Diagnoses

The study was designed to enroll all third-year MS rotating through their IM rotation, IM interns and residents, radiology residents, pulmonary/critical care fellows, and cardiology fellows in the hospital. Most subjects were given the CXR survey at a noon conference, which replaced a daily lecture (1-hour period). Two authors enrolled a few additional subjects who did not attend the conference, administering the test to them individually over 1 week. Of the total number of eligible subjects, 90% enrolled: 25/25 MS (100%), 44/47 IM interns (94%), 45/54 IM residents (83%), 16/19 fellows (84%), and 15/16 radiology residents (94%). The most common reason for failure to enroll was vacation time.

All participants had undergone standard CXR training. Medical students at Albert Einstein College of Medicine undergo a mandatory 2-week clerkship in diagnostic radiology, although not all had completed this clerkship before the survey. During the IM clerkship, they receive 2 introductory lectures by a chest radiologist and participate in the faculty-moderated, monthly interactive sessions held for IM interns and residents. The MS were in the final (12th) week of their IM rotation.

IM interns and residents receive a monthly interactive CXR conference, with mandatory attendance, at which house staff attempt to establish the diagnoses; it is led by a pulmonary/critical care attending who selects CXR from a teaching file. In addition, there is daily formal CXR review during the 5 critical care rotation months over the course of residency. These rounds are led by a pulmonary/critical care attending in the medical intensive care unit (MICU) and by a cardiology attending in the coronary care unit. Internal medicine house staff had completed at least 6 months of training at the time of the study. The amount of prior CXR training was not exactly equal for subjects at a given training level; for example, only 50% of IM interns had completed MICU training at the time of the survey.

Pulmonary/critical care fellows receive a weekly conference in CXR interpretation led by an attending radiologist and review cases daily with pulmonary attendings. The cardiology fellowship does not include formal CXR training, but CXR are frequently reviewed in clinical practice.

Radiology residents have weekly formal conferences in chest radiography in addition to spending 3 months on the chest radiography service over the 4 years of residency. All had completed at least 1 chest radiography month at the time of the survey.

In daily practice, all radiographs are archived on a digital system and are easily retrievable. High-quality monitors are available on medical wards for CXR review. Internal medicine house officers are responsible for initial CXR interpretations at night. Films identified as difficult may be reviewed with the radiology resident on call. An attending radiologist is also available to read films off-site for cases deemed particularly challenging.

Each participant was given a survey that included questions about their sex, postgraduate year, field of training, career interest in pulmonary medicine, and perceived ability to interpret CXR independently. Subjects recorded “Yes” if they felt able to read CXR independently and “No” if they did not. Each subject was then shown 10 CXR and asked to write the most important finding. They were specifically told that there might be one or more normal CXR. After recording their diagnosis on a particular CXR, participants marked their “certainty” on a scale of 0 to 4 (0 = 0%, 1 = 25%, 2 = 50%, 3 = 75%, 4 = 100%).21 A cumulative certainty total was compiled for a maximum “overall certainty” of 40 points. Subjects were not allowed to consult outside sources. Although unlimited time was offered, all subjects finished in less than an hour.

Two blinded, independent graders gave each CXR a “score” on a scale of 0 to 2 (0 = incorrect, 1 = partially correct, 2 = correct).20 A third party adjudicated any disagreements between the two graders. Partial credit was given if a less specific term was written; for example, on the CXR demonstrating consolidation, 1 point was awarded if a subject recorded opacity. An “overall score” was compiled by summing the scores for all 10 CXR, for a possible total of 20 points.
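The scoring scheme above can be sketched as follows. This is an illustration only, not the authors' code; all function names and example data are hypothetical.

```python
# Illustrative sketch of the scoring scheme described above -- not the
# authors' actual code; names and example data are hypothetical.

def score_cxr(grade_a, grade_b, adjudicate=None):
    """Score one CXR reading: each blinded grader assigns 0 (incorrect),
    1 (partially correct), or 2 (correct); a third party settles disagreements."""
    for g in (grade_a, grade_b):
        if g not in (0, 1, 2):
            raise ValueError("grades must be 0, 1, or 2")
    if grade_a == grade_b:
        return grade_a
    return adjudicate(grade_a, grade_b)  # third-party adjudication

def overall_score(per_cxr_scores):
    """Overall score: sum over the 10 CXR, maximum 20 points."""
    assert len(per_cxr_scores) == 10
    return sum(per_cxr_scores)

def overall_certainty(per_cxr_certainty):
    """Certainty is marked 0-4 per CXR (0%, 25%, 50%, 75%, 100%); maximum 40."""
    assert all(c in (0, 1, 2, 3, 4) for c in per_cxr_certainty)
    return sum(per_cxr_certainty)

# Example: graders agree on most films; one disagreement is adjudicated.
scores = [score_cxr(2, 2), score_cxr(1, 2, adjudicate=lambda a, b: 1),
          score_cxr(0, 0), score_cxr(2, 2), score_cxr(1, 1),
          score_cxr(0, 0), score_cxr(2, 2), score_cxr(1, 1),
          score_cxr(0, 0), score_cxr(2, 2)]
print(overall_score(scores))  # 11
```

The example's overall score of 11 matches the cohort median reported below; the partial-credit case (one grader writes "opacity" for a consolidation) is the kind of disagreement the third party would settle.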

Statistical Analysis

Descriptive data are presented as medians, with the 25th and 75th percentiles following in parentheses. The analysis of variance F test was used to investigate differences in score, confidence, and certainty by level of training. When 2 groups were compared, the t test for independent samples was used. Pearson's correlation was used to investigate univariate associations between continuous variables.

Factors independently related to overall CXR score were established through multiple logistic regression analysis. The group was divided into “high scorers” (overall score in the top 25th percentile) and “low scorers” (all other scores). Odds ratios were calculated to determine the strength of the associations between factors and overall score in the top 25th percentile.
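The dichotomization and the odds-ratio arithmetic can be sketched as follows. This is a hypothetical illustration with invented numbers, not the study data; a full multiple logistic regression would be fit with a statistics package (the authors used SPSS), so only the quartile split and the crude 2×2 odds ratio are shown.

```python
import math

# Hypothetical sketch of the "high scorer" split and a crude 2x2 odds
# ratio; the numbers below are invented, not the study data.

def high_score_cutoff(overall_scores):
    """Nearest-rank 75th percentile: scores at or above this cutoff
    fall in the top quartile ("high scorers")."""
    s = sorted(overall_scores)
    k = math.ceil(0.75 * len(s)) - 1
    return s[k]

def odds_ratio(factor_high, factor_low, other_high, other_low):
    """Odds ratio for a binary factor vs. high/low scorer status,
    computed from the four cells of a 2x2 table."""
    return (factor_high * other_low) / (factor_low * other_high)

scores = [8, 9, 10, 11, 11, 12, 13, 14, 15, 16, 17, 18]
print(high_score_cutoff(scores))   # 15
print(odds_ratio(8, 2, 30, 60))    # 8.0
```

In the sketch, an odds ratio above 1 indicates the factor is associated with landing in the top quartile; the regression in Table 4 adjusts each such association for the other variables.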

A P-value of less than .05 was considered statistically significant. Analyses were performed using the SPSS 11.0 statistical analysis program (SPSS, Inc., Chicago, IL).


Results

The participants included 25 third-year MS and 120 house officers (44 interns, 60 residents, and 16 fellows) at 1 teaching hospital. Sixty (41%) participants were women. Of the 104 interns and residents, 89 were from the department of IM (44 PGY-1, 20 PGY-2, 24 PGY-3, 1 PGY-4) and 15 were from the department of radiology (5 PGY-2, 4 PGY-3, 3 PGY-4, 3 PGY-5). Of the 16 fellows, 11 were from the division of cardiology (4 PGY-5, 5 PGY-4, 2 PGY-7) and 5 were from the division of pulmonary/critical care (1 PGY-4, 3 PGY-5, 1 PGY-6).

The median overall score achieved by the entire cohort was 11/20 (8–15, 25th–75th percentiles). The median overall score increased with level of training (Table 2). Among all house officers, postgraduate year was significantly correlated with overall score (r = .537, P<.001). There was no significant difference in overall score between men and women.

Table 2
Overall Score and Overall Certainty by Level of Training

The median overall certainty among the entire cohort was 27/40 (20–31). Overall certainty increased with level of training (Table 2). Among all house officers, postgraduate year was significantly correlated with increasing overall certainty (r = .427, P<.001). Among the entire cohort, overall score was significantly correlated with overall certainty (r = .613, P<.001).

Table 3 lists the scores and certainty obtained for the normal and critical CXR. For these CXR, few subjects were absolutely certain of their diagnoses. Subjects recorded absolute certainty on 26/145 (18%) of the normal CXR, 12/145 (9%) of the pneumothorax, 34/145 (24%) of the line misplacement, and 36/145 (25%) of the pneumoperitoneum.

Table 3
Score and Certainty for the Normal and Critical Chest Radiographs by Level of Training

We investigated the particular CXR on which subjects claimed to be absolutely certain of their diagnoses. Many of these subjects were nonetheless wrong: normal, 6/26 (23%); pneumothorax, 9/12 (75%); line misplacement, 10/34 (29%); and pneumoperitoneum, 5/36 (14%).

Among the 89 IM interns and residents, 10 (11%) were interested in pulmonary medicine as a career. Internal medicine interns and residents interested in a career in pulmonary medicine scored 14 (11–16) while their peers scored 11 (9–13.5) (P = .027). Overall certainty did not differ between groups.

Radiology residents scored higher than IM residents. Median overall score was 11 (8–14) for IM residents and 18 (15–18) for radiology residents (P<.001). Overall certainty was significantly higher among radiology residents (P<.001).

Only 21/145 (14%) of the participants felt capable of interpreting CXR independently. Specifically, 1/25 (4%) MS, 1/44 (2%) interns, 11/45 (25%) IM residents, 4/16 fellows (25%), and 4/15 (27%) radiology residents felt capable of interpreting CXR independently. Participants who felt able to interpret CXR independently scored 16 (12–18) while other participants scored 11 (8–14) (P<.001). Overall certainty was significantly higher in the participants who felt able to interpret CXR independently (P<.001).

In a logistic regression analysis, factors independently associated with overall score were field of training, interest in a pulmonary career, level of training, and overall certainty. After controlling for other variables, perceived ability in interpreting CXR independently was not significant (Table 4).

Table 4
Multiple Regression Analysis of Factors Associated with Overall Score in the Top 25th Percentile


Discussion

Despite its importance, organizations have not stressed CXR interpretation. Only 29% of medical schools have a required clerkship in diagnostic radiology.22 The Liaison Committee on Medical Education (LCME) states that “Educational opportunities must be available in … diagnostic imaging.”1 Thus, specific CXR interpretation training is not mandated for undergraduate medical education.

While the American Board of Internal Medicine has a requirement for competency in electrocardiogram interpretation, there is no similar requirement for CXR interpretation.2 Surprisingly, it is also not a required procedure for board certification in cardiology, pulmonary medicine, or critical care medicine.2

The Accreditation Council for Graduate Medical Education (ACGME) is responsible for accreditation of most residency programs in the United States. The ACGME IM Residency Review Committee's program requirements state that “All residents should develop competency in interpretation of chest roentgenograms.”3 The program requirements for pulmonary medicine, cardiovascular medicine, and diagnostic radiology have similar statements.4–6 There is no mention of how competency should be achieved or assessed. The ACGME critical care medicine program requirements make no mention of CXR interpretation.7

At many institutions, house officers are expected to interpret CXR and make clinical decisions before a formal reading by a radiologist. This is particularly important for radiographic emergencies such as pneumothorax, pneumomediastinum, pneumoperitoneum, and misplacement of central venous catheters, pulmonary artery catheters, intra-aortic balloon pumps, chest tubes, gastric tubes, and endotracheal tubes.23 Our study included three emergencies—pneumothorax (misdiagnosed by 91%), misplaced central venous catheter (misdiagnosed by 74%), and pneumoperitoneum (misdiagnosed by 54%). In addition, a significant percentage of participants absolutely certain of their diagnoses on these emergencies were wrong. This is especially worrisome because house staff absolutely certain of their CXR interpretation may not ask a senior colleague for a second opinion.

Subjects also had difficulty interpreting the normal CXR. This occurred even though they were instructed that 1 or more of the CXR in the survey might be normal. Other researchers have noted difficulty in interpreting a study as normal.15,16,18,24 Potentially, interpreting a normal CXR as abnormal could lead to inappropriate decisions.

While the overall score was low, we identified several factors significantly correlated with successful interpretation. Overall certainty was correlated with overall score. Radiology residents performed significantly better than IM residents. Internal medicine interns and residents interested in a pulmonary career performed significantly better than their peers. Although we do not have data to confirm this, it is possible that residents interested in a pulmonary career may have done more self-study. Alternatively, they may have enrolled in more rotations where they were exposed to CXR teaching. Also, their attendance may have been better at CXR teaching conferences. Finally, overall score increased with level of training. This was found even though the amount of CXR instruction was not the same for each individual participant. For example, while all of the IM interns had had the opportunity to attend 6 formal CXR lectures, only 50% of the interns had had a MICU month at the time of the CXR survey.

One prior study identified certainty on a particular CXR as being associated with successful interpretation of that CXR.25 There is also evidence that a clinician who is certain about an interpretation is less likely to be wrong.26 However, as our study demonstrates, verification may be required even when a house officer is 100% confident.

As identified by Pfeifer, 2 possible solutions exist to the problems in CXR interpretation identified by this and prior studies.27 The first approach would be to have all CXR immediately interpreted by a qualified radiologist, either by increasing the number of on-site radiologists or by teleradiology. The second approach would be to improve the interpretation skills of clinicians at the point of care. Theoretically, there is great value in integrating the radiographic interpretation with the history, physical exam, and laboratory findings. For example, a radiologist may interpret a CXR with bilateral infiltrates as pulmonary edema, whereas a clinician may integrate the patient's history (cough and sputum production), physical exam (hyperthermia), and laboratory findings (increased white blood cell count) and diagnose multilobar pneumonia.

There are several possible approaches to improving CXR interpretation skills. Computer-aided diagnosis of CXR can improve interpretation.23,28,29 One study showed that using a picture archiving and communication system (PACS) rather than standard films improves interpretation.30 However, our study was done in an institution with PACS and important diagnoses were still missed. A program of formal training significantly improves CXR readings,31,32 and computer-based training may be more effective than traditional methods.32 Quality improvement initiatives reduce error rates.33,34 In a prospective study by Espinosa, a program stressing an interdisciplinary approach and review of all misinterpreted films led to significantly fewer errors.34 Other potential methods to improve CXR interpretation would be web-based modules or encouraging IM house staff to enroll in formal radiology electives. Perhaps the simplest method would be ensuring that MS and house staff read all CXR of patients under their care and review the results with a physician with proven competence in CXR interpretation. The utility of these methods warrants further investigation.

There are several strengths to our study compared with prior studies in this area. First, the study was one of the largest studies of CXR interpretation in terms of number of participants. Second, subjects from multiple fields of medicine and multiple training levels were compared. Third, this is only the second study to directly confirm that confidence on a particular CXR reading is associated with successful interpretation. Fourth, we have shown that interest in a pulmonary career is associated with successful interpretation. Fifth, we have identified particular CXR emergencies where interpretation skills are lacking. Finally, we have shown that even subjects who are 100% sure of their interpretations are wrong a high percentage of the time.

There are several limitations to our study. First, a small and somewhat arbitrary sample of CXR was chosen for the survey. While these were representative of common conditions, results may have been different with other CXR. Second, we did not provide house staff with clinical context for the CXR. Schreiber demonstrated in 1963 that clinical history improves CXR interpretation.35 A systematic review of 16 articles also demonstrated that test interpretation improves if clinical information is provided.36 We chose not to provide clinical information because this was a study of how well trainees interpret important, common, unambiguous radiographic findings. Adding clinical information would have made the results less clear, as responses could reflect understanding of the clinical scenario more than the ability to recognize radiographic abnormalities. Additionally, the current training system in the United States requires frequent hand-offs of clinical information that is variably transmitted to the persons required to check CXR. Third, the gold standard in our study can also be questioned. Studies have shown that even experienced radiologists may have differing interpretations of a CXR.19,37–39 In our study, the blinded experts were in 100% agreement on our series of CXR, probably because the CXR were classic examples of common conditions. Fourth, the CXR in the study were depicted on paper. Although the quality of the reproductions was high, subjects were used to interpreting CXR on digital monitors and this may have affected the results. Finally, this study took place at 1 teaching institution; other institutions, with different teaching methods, may obtain different results.

In conclusion, we have identified deficiencies in CXR interpretation with potential implications for MS education, house staff education, and patient care. If house officers are expected to make clinical decisions based on CXR readings, more effective training is needed, particularly in radiographic emergencies. Further research is needed to determine the best methods of achieving and assessing competency in CXR interpretation.


The principal investigator had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.


References

1. Liaison Committee on Medical Education. LCME accreditation standards. Liaison Committee on Medical Education web site. [April 2005]. Available at
2. American Board of Internal Medicine. Internal medicine policies: requirements for certification in internal medicine. American Board of Internal Medicine website. [April 2005]. Available at
3. Accreditation Council for Graduate Medical Education. Internal medicine program requirements: internal medicine. Accreditation Council for Graduate Medical Education website. July 2003. [April 2005]. Available at
4. Accreditation Council for Graduate Medical Education. Internal medicine program requirements: pulmonary medicine. Accreditation Council for Graduate Medical Education website. July 1999. [April 2005]. Available at
5. Accreditation Council for Graduate Medical Education. Internal medicine program requirements: cardiovascular medicine. Accreditation Council for Graduate Medical Education website. July 1999. [April 2005]. Available at
6. Accreditation Council for Graduate Medical Education. Diagnostic radiology program requirements. Accreditation Council for Graduate Medical Education website. December 2003. [April 2005]. Available at
7. Accreditation Council for Graduate Medical Education. Internal medicine program requirements: critical care medicine. Accreditation Council for Graduate Medical Education website. July 1999. [April 2005]. Available at
8. Benger JR, Lyburn ID. What is the effect of reporting all emergency department radiographs? Emerg Med J. 2003;20:40–3. [PMC free article] [PubMed]
9. Gatt ME, Spectre G, Paltiel O, et al. Chest radiographs in the emergency department: is the radiologist really necessary? Postgrad Med J. 2003;79:214–7. [PMC free article] [PubMed]
10. Grosvenor LJ, Verma R, O'Brien R, et al. Does reporting of plain chest radiographs affect the immediate management of patients admitted to a medical assessment unit? Clin Radiol. 2003;58:719–22. [PubMed]
11. Kuritzky L, Hardy RI, Curry RW. Interpretation of chest roentgenograms by primary care physicians. South Med J. 1987;80:1347–51. [PubMed]
12. Kaufman B, Dhar P, O'Neill D, et al. Chest radiograph interpretation skills of anesthesiologists. J Cardiothorac Vasc Anesth. 2001;15:680–3. [PubMed]
13. Young M, Marrie TJ. Interobserver variability in the interpretation of chest roentgenograms of patients with possible pneumonia. Arch Intern Med. 1994;154:2729–32. [PubMed]
14. Herman PG, Hessel SJ. Accuracy and its relationship to experience in the interpretation of chest radiographs. Invest Radiol. 1975;10:63–7. [PubMed]
15. Eng J, Mysko WK, Weller GE, et al. Interpretation of emergency department radiographs; a comparison of emergency medicine physicians with radiologists, residents with faculty, and film with digital display. AJR. 2000;175:1233–99. [PubMed]
16. Monnier-Cholley L, Carrat F, Cholley BP, et al. Detection of lung cancer on radiographs; receiver operating characteristic analyses of radiologists', pulmonologists', and anesthesiologists' performance. Radiology. 2004;233:799–805. [PubMed]
17. Walsh-Kelly CM, Melzer-Lange MD, Hennes HM, et al. Clinical impact of radiograph misinterpretation in pediatric ED and the effect of physician training level. Am J Emerg Med. 1995;13:262–4. [PubMed]
18. Potchen EJ, Cooper TG, Sierra AE, et al. Measuring performance in chest radiography. Radiology. 2000;217:456–9. [PubMed]
19. Quekel LG, Kessels AG, Goei R, et al. Detection of lung cancer on the chest radiograph; a study on observer performance. Eur J Radiol. 2001;39:111–6. [PubMed]
20. Jeffrey DR, Goddard PR, Callaway MP, et al. Chest radiograph interpretation by medical students. Clin Radiol. 2003;58:478–81. [PubMed]
21. Lave JR, Bankowitz RA, Hughes-Cromwick P, et al. Diagnostic certainty and hospital resource use. Cost Qual Q J. 1997;3:26–32. [PubMed]
22. Samuel S, Shaffer K. Profile of medical student teaching in radiology; teaching methods, staff participation and rewards. Acad Radiol. 2000;7:868–74. [PubMed]
23. Oldham SA. ICU chest radiographs—ICU calamities; evaluation of the portable chest radiograph. Emerg Radiol. 2002;9:43–54. [PubMed]
24. Shiraishi J, Hiroyuki A, Engelmann R. Computer-aided diagnosis to distinguish benign from malignant solitary pulmonary nodules on radiographs; ROC analysis of radiologists' performance—initial experience. Radiology. 2003;227:469–74. [PubMed]
25. Mayhue FE, Rust DD, Aldag JC, et al. Accuracy of interpretation of emergency department radiographs; effect of confidence levels. Ann Emerg Med. 1989;18:826–30. [PubMed]
26. Lufkin KC, Smith SW, Matticks CA, et al. Radiologists' review of radiographs interpreted confidently by emergency physicians infrequently lead to changes in patient management. Ann Emerg Med. 1998;31:202–7. [PubMed]
27. Pfeifer M. Nonradiologists reading radiographs; good medicine or stretching the scope of practice? J Cardiothorac Vasc Anesth. 2001;15:675–6. [PubMed]
28. Monnier-Cholley L, MacMahon H, Katsuragawa S. Computer-aided diagnosis for detection of interstitial opacities on chest radiographs. AJR. 1998;171:1651–6. [PubMed]
29. Kobayashi T, Xu XW, MacMahon H, et al. Effect of a computer-aided diagnosis scheme on radiologists' performance in detection of lung nodules on radiographs. Radiology. 1996;199:843–8. [PubMed]
30. Weatherburn G, Bryan S, Nicholas A, et al. The effect of a picture archiving and communication system (PACS) on diagnostic performance in the accident and emergency department. Emerg Med J. 2000;17:180–4. [PMC free article] [PubMed]
31. Dawes TJ, Vowler SL, Allen CM, et al. Training improves medical student performance in image interpretation. Br J Radiol. 2004;77:775–6. [PubMed]
32. Maleck M, Fischer MR, Kammer B, et al. Do computers teach better? A media comparison study for case-based teaching in radiology. Radiographics. 2001;21:1025–32. [PubMed]
33. Preston CA, Marr JJ, Amaraneni KK, et al. Reduction of “call-backs” to the emergency department due to discrepancies in the plain radiograph interpretation. Ann Emerg Med. 1988;17:1019–23. [PubMed]
34. Espinosa JA, Nolan TW. Reducing errors made by emergency physicians interpreting radiographs; longitudinal study. BMJ. 2000;320:737–40. [PMC free article] [PubMed]
35. Schreiber MH. The clinical history as a factor in roentgenogram interpretation. JAMA. 1963;185:399–401. [PubMed]
36. Loy CT, Irwig L. Accuracy of diagnostic tests read with and without clinical information; a systematic review. JAMA. 2004;292:1602–9. [PubMed]
37. Herman PG, Gerson DE, Hessel SJ, et al. Disagreements in chest roentgen interpretation. Chest. 1975;68:278–82. [PubMed]
38. Robinson PJ, Wilson D, Coral A, et al. Variation between experienced observers in the interpretation of accident and emergency radiographs. Br J Radiol. 1999;72:323–30. [PubMed]
39. Albaum MN, Hill LC, Murphy M, et al. Interobserver reliability of the chest radiograph in community acquired pneumonia. Chest. 1996;110:343–50. [PubMed]
