Arch Intern Med. Author manuscript; available in PMC 2011 January 9.
Published in final edited form as:
PMCID: PMC3017788

Associations between Physician Characteristics and Quality of Care

Rachel L. Orler, B.A.,1 Mark W. Friedberg, M.D., M.P.P,2 John L. Adams, Ph.D.,2 Elizabeth A. McGlynn, Ph.D.,2 and Ateev Mehrotra, M.D., M.P.H.1,2



Physicians’ performance on measures of clinical quality is rarely available to patients. Instead, patients are encouraged to select physicians on the basis of characteristics such as education, board certification, and malpractice history. In a large sample of Massachusetts physicians, we examined the relationship between physician characteristics and performance on a broad range of quality measures.


We calculated overall performance scores on 124 quality measures from RAND’s Quality Assessment Tools for each of 10,408 Massachusetts physicians using claims generated by 1.13 million adult patients. The patients were continuously enrolled in 1 of 4 Massachusetts commercial health plans during 2004–2005. Physician characteristics were obtained from the Massachusetts Board of Registration in Medicine. Associations between physician characteristics and overall performance scores were assessed using multivariate linear regression.


The mean overall performance score was 62.5% (5th to 95th percentile range, 48.2% to 74.9%). Three physician characteristics were independently associated with significantly higher overall performance: female gender (1.6 percentage points higher than male, p<0.001), board certification (3.3 percentage points higher than non-certified, p<0.001), and graduation from a domestic medical school (1.0 percentage points higher than international, p<0.001). There was no association between performance and malpractice claims or disciplinary action.


Few characteristics of individual physicians were associated with higher performance on measures of quality, and observed associations were small in magnitude. Publicly available characteristics of individual physicians are poor proxies for performance on clinical quality measures.


To improve the quality of care received by their beneficiaries, some health plans use physician report cards and tiered physician networks to steer their members towards physicians who provide high-quality care. However, most patients do not have access to physician quality measures. Further, the quality metrics available to some patients are limited in scope and reflect only a few aspects of overall quality of care. Patients are therefore encouraged to use publicly available proxies for clinical performance when choosing a physician. Independent organizations such as the AARP advise patients to choose physicians based on characteristics such as education, disciplinary action, and board certification status.1 The consumer website HealthGrades limits its “recognized doctor” and “five-star doctor” labels to physicians who are board certified, who have never had their license revoked, and who are free of disciplinary actions or malpractice claims.2 Malpractice claims and board certification status, along with procedure-specific experience, are judged by consumers to be much more indicative of the quality of care delivered by a physician than ratings by government agencies or independent medical institutions.3

There appears to be a tacit belief that these physician characteristics are a signal of clinical quality. However, the value of publicly available individual physician characteristics as predictors of clinical quality is unclear. Previous studies examining the relationship between individual physician characteristics and quality of care have yielded few definitive or broadly applicable conclusions. To our knowledge, the relationship between performance on quality measures and physicians’ history of malpractice claims or disciplinary actions has not been studied.4–6 In general, studies have found an inverse relationship between years of experience and performance on quality measures.7–9 Findings have been mixed on the relationship between quality and other characteristics such as gender,8–14 board certification status,8,15,16 and medical school site (i.e., international vs. domestic).8,17,18 Previous investigations of relationships between individual physician characteristics and performance on quality measures have been limited by the number of physicians assessed, the available physician characteristics, and the scope and validity of the quality metrics used. Much of the previous literature relating physician characteristics to clinical quality has had a narrow clinical focus, with each study examining only a limited range of processes, conditions, or specialties.

In this study we examined, in a large sample of Massachusetts physicians, the relationship between a number of physician characteristics and performance on a broad range of quality measures.



Physician performance scores were created using a de-identified aggregated claims dataset of 1.13 million patients between the ages of 18 and 65 who were enrolled continuously in 1 of 4 Massachusetts commercial health plans in 2004–5. Taken together, the 4 plans constituted over 85% of the commercial market in the state. The dataset included all professional, inpatient, facility, and pharmacy claims. Physicians were linked across the 4 health plans using a crosswalk developed by the Massachusetts Health Quality Partners (MHQP) that connects a unique physician identifier to the provider numbers used by each health plan.19 Children younger than 18 years of age were excluded because no pediatric quality measures were used. Elders (>65) were also excluded because co-insurance with Medicare was inconsistently recorded, and the plans could not reliably identify those for whom Medicare was the primary payer.


The MHQP maintains a database of all providers who have a contract with any of the major commercial health plans in the state. From this sample of providers, we eliminated those who practiced outside Massachusetts and those who did not bill at least one claim to any of the 4 health plans in 2004–5. We then eliminated non-physicians (i.e. podiatrists, chiropractors, acupuncturists), physicians with no assigned specialty, pediatricians, and specialties with no applicable quality measures or direct patient care (e.g., pathology, radiology). After these exclusions, physicians in 23 specialties contributed data to the analysis.


Publicly available data on individual physician characteristics were obtained from the Massachusetts Board of Registration in Medicine. The Board publicly releases, for each physician, information on birth date, medical school graduation date, medical school attended, board certification status, gender, payments on malpractice claims, and disciplinary actions. These data are entered and updated by physicians at the time of licensure and re-licensure; however, malpractice and disciplinary information are maintained by the Board and are not self-entered by physicians. From this database we eliminated physicians with a limited license (i.e., residents). Experience was measured as years since medical school graduation. Medical schools in the United States were matched to their rankings in research and primary care in the 2008 U.S. News and World Report.20 Malpractice claims included those on which a payment was made between March 30, 1998 and February 28, 2008. The Board’s disciplinary archives listed all disciplinary and public actions by the Board from June 9, 1999 through June 18, 2008.21 We did not include 5 publicly available variables in the analysis. Two of these variables (criminal convictions, hospital disciplinary actions) were very rare among physicians, two (publications, awards) were inconsistently entered by physicians, and one (work site) had unclear definitions. For example, it was unclear how a physician might choose between “educational institution”, “hospital”, or “clinic”.


We used the RAND claims-based Quality Assessment (QA) Tools to assess performance on measures of clinical quality. The development of the QA Tools measures has been described in previous publications.22,23 Briefly, RAND staff selected conditions identified as leading causes of death, illness, and utilization of healthcare; staff physicians then reviewed established national guidelines and the medical literature to identify key processes of care subject to potential overuse and underuse throughout the continuum of care for each condition. Four nine-member multispecialty expert panels, each diverse in geography, practice setting, and sex, were convened to assess the validity of the indicators using the RAND–UCLA modified Delphi method. The QA Tools measures were initially developed to be abstracted from medical records and included 439 measures; they have since been adapted to be scored using claims records. The claims-based QA Tools measures used in our analyses include 124 indicators of quality of care for 22 acute and chronic conditions as well as preventive care; all measures are listed in the appendix.

Instances when recommended care was indicated or provided were attributed to the individual physicians who triggered the indicator. Each physician’s composite performance score was created by dividing the number of instances in which recommended care was delivered by the number of instances patients were eligible for such care and that were assigned to that physician. This composite method has been described as the “overall score” method in previous literature.24 In order to prevent differences in the ease of delivering needed care (e.g., the mean rate of mammography for the state is much higher than the mean rate of cervical cancer screening) from affecting physicians’ overall performance scores, we standardized the expected performance on each indicator by subtracting its statewide mean from each physician’s score on that indicator. This process created a “measurement difficulty-adjusted” performance composite score whose mean was 0 across all physicians.25
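The scoring described above can be sketched in Python (a minimal illustration; the data layout, names, and event tuples are hypothetical, and the actual analysis was done in SAS):

```python
# Sketch of the "overall score" composite with difficulty adjustment.
# Each event records one instance in which a patient attributed to a
# physician was eligible for recommended care, and whether it was delivered.
from collections import defaultdict

def difficulty_adjusted_scores(events):
    """events: iterable of (physician_id, indicator_id, delivered) tuples."""
    # Statewide pass rate for each indicator (the "difficulty" baseline)
    totals = defaultdict(lambda: [0, 0])  # indicator -> [delivered, eligible]
    for _, ind, delivered in events:
        totals[ind][0] += int(delivered)
        totals[ind][1] += 1
    statewide_mean = {ind: d / n for ind, (d, n) in totals.items()}

    # Physician composite: mean of (delivered - statewide mean) across
    # that physician's opportunities; the mean over all events is 0.
    phys = defaultdict(lambda: [0.0, 0])  # physician -> [centered sum, n]
    for doc, ind, delivered in events:
        phys[doc][0] += int(delivered) - statewide_mean[ind]
        phys[doc][1] += 1
    return {doc: s / n for doc, (s, n) in phys.items()}
```

Subtracting each indicator's statewide mean prevents physicians whose case mix is dominated by easy-to-deliver care (e.g., mammography) from appearing better than those scored mostly on harder indicators.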


We created multivariate linear regression models to examine the associations between physician characteristics and performance scores. The unit of analysis was the individual physician. The dependent variable was the composite difficulty-adjusted performance score. The independent variables were physician gender, board certification status, experience (years since graduation from medical school), medical school location (domestic or international), medical school ranking (within or below the top 10 in the 2008 U.S. News rankings), malpractice claims (none vs. one or more in the last 10 years), and disciplinary actions by the board (none vs. one or more in the last 10 years). The regression was weighted by the number of quality measure opportunities attributed to each physician.

We ran several different versions of the regression model using different subsets of physicians and performance data: (1) all physicians and all indicators; (2) all physicians, but with separate regressions for acute, chronic, and preventive care indicators; (3) all physicians, but with separate regressions for female-patient-specific and male-patient-specific indicators (e.g., recommended prenatal or mammography care for women, and recommended benign prostatic hypertrophy or sexually transmitted infection care for men); and (4) all indicators, but with separate regression models for the 5 specialties that averaged greater than 150 quality measure opportunities per physician (internal medicine, family/general practice, cardiology, obstetrics and gynecology, and endocrinology).

Performance scores are presented as the mean score for the group of physicians possessing each characteristic. We created these scores by solving the regression model created for each care-type or physician specialty to find the percentage-point difference in difficulty-adjusted performance score attributable to that characteristic. We then added that quantity to the unadjusted mean performance score to arrive at a quantity representing the percentage of recommended care that physicians with that characteristic provide, adjusted for the degree of difficulty of each measure. To address the testing of multiple comparisons, we calculated the critical P value that limited the false discovery rate (the expected rate of type 1 error among all significant statistical tests) to 5%.26 P values below this threshold were considered statistically significant. All statistical analyses were performed using SAS version 9.2 (SAS Institute, Inc., Cary, North Carolina).
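The false discovery rate control described above follows the Benjamini–Hochberg step-up procedure, which can be sketched as (a generic implementation, not the authors' SAS code):

```python
def bh_critical_p(pvals, q=0.05):
    """Benjamini-Hochberg: sort the m p-values ascending and find the
    largest rank k with p_(k) <= (k/m) * q. Tests with p-values at or
    below that critical value are declared significant, which controls
    the expected proportion of false discoveries at q."""
    m = len(pvals)
    crit = 0.0
    for k, p in enumerate(sorted(pvals), start=1):
        if p <= k * q / m:
            crit = p
    return crit
```

Unlike a Bonferroni correction, the critical value adapts to how many small p-values are observed, so it is less conservative when many of the tested associations are genuinely nonzero.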


Of the 30,122 physicians in the MHQP database, 12,959 were in the 23 selected specialties, held a full license, practiced in Massachusetts, and submitted one or more claims in 2004–5. We then excluded the 2,239 physicians with no attributed quality measures and the 302 physicians who could not be linked to the physician characteristics dataset. The remaining 10,408 (80.3%) physicians were the basis of our analysis. There were 1,704,686 quality measure opportunities included in the analysis, a mean of 163.8 events per physician (range, 1–3,329).

The majority of physicians were male (70.1%), board certified (92.8%), domestically trained (83.0%), and in possession of allopathic medical degrees (97.7%) [Table 1]. They spanned a wide breadth of experience in practice; 15.2% had less than 10 years and 24.7% had 30 or more years of experience. Few made payments on malpractice claims in the last decade (10.2%), and fewer had disciplinary actions against them in that time (1.0%). Approximately 1 in 10 attended schools ranked in the top 10 by US News and World Report for research (12.6%) or primary care (9.8%) [Table 1]. The physicians were distributed across the 23 specialties, but 34.5% of the physicians in the sample practiced internal medicine [Table 2].

Table 1
Characteristics of Physician Sample
Table 2
Specialty Distribution in Physician Sample


Among all physicians, the mean unadjusted overall performance score was 62.5%, with a 5th to 95th percentile range of 48.2% to 74.9%. Performance scores varied by condition, ranging from 30.8% for cataract care to 68.0% for congestive heart failure care. Unadjusted performance scores for all physicians on the 20 most frequently occurring indicators are shown in Table 3.

Table 3
Unadjusted Performance Scores for 20 Most Frequently Triggered Quality Measures


In a multivariate model including all physicians and all types of care, female physicians scored higher than male physicians (1.6 percentage points, p<0.001), board certified physicians scored higher than those without board certification (3.3 percentage points, p<0.001), and domestically trained physicians scored higher than internationally trained physicians (1.0 percentage point, p<0.001) [Table 4]. There were no statistically significant associations between performance and allopathic vs. osteopathic degree, medical school rankings, disciplinary actions, malpractice claims, or years of experience.

Table 4
Associations between Physician Characteristics and Performance on Quality Measures by Type of Care Provided

The available physician characteristics explained only 2.8% of the overall variation in physician performance. Separate regression models for acute, chronic, and preventive care demonstrated that board certification was associated with higher quality on 2 of the 3 types of care (1.8 percentage points for acute care, p=0.001; 5.9 percentage points for preventive care, p<0.001) [Table 4]. Across the physician characteristics, the greatest differences in quality were generally seen among the preventive care measures (female 5.3 percentage points higher than male, p<0.001; board certified 5.9 percentage points higher than non-certified, p<0.001; domestically trained 2.7 percentage points higher than internationally trained, p<0.001; physicians with a paid malpractice claim 3.7 percentage points higher than those without, p<0.001).

Using separate regression models for male- and female-specific measures, we found that female physicians had significantly higher performance scores than male physicians on female-specific measures (4.4 percentage points higher, p<0.001); they also scored higher on male-specific measures (5.2 percentage points higher, p=0.22), but this difference was not statistically significant [Table 4].

Using separate regression models for each of the 5 common specialties in our physician population, we found no physician characteristics that were consistently associated with higher clinical quality across all specialties [Table 5]. However, the associations seen in the overall model for all physicians and all types of care paralleled those seen in internal medicine.

Table 5
Associations between Physician Characteristics and Performance on Quality Measures by Physician Specialty


Consumers are encouraged to use physician characteristics such as board certification and the absence of paid malpractice claims as signals of quality.1,2 Yet in our study few individual physician characteristics were consistently associated with higher quality, and where associations were present, they were small in magnitude and generally not significant in a practical sense. Considering only the 3 physician characteristics that were associated with quality, the difference in overall composite performance between the average physician with the best combination of these characteristics (female, board certified, domestically trained) and the average physician with the worst combination (male, non-certified, internationally trained) is only 5.9 percentage points. Moreover, this is the average difference. Among physicians with the best combination there is a wide range of performance (48.8% to 75.3%, 5th to 95th percentile), quite similar to the range for all physicians (48.2% to 74.9%). Thus, there is little evidence to suggest that a patient will consistently receive higher quality care by switching to a physician with these characteristics. Overall, the results highlight the need for externally available quality information for consumer use.

Despite the finding that physician characteristics are imprecise proxies for consumers to use in assessing quality, we did find some characteristics that were associated with higher performance. Board certification was associated with higher performance scores at the overall level and on both acute and preventive care. We recognize that this is an association and does not imply that board certification itself drives the difference between higher- and lower-quality physicians. However, this association does provide preliminary evidence suggesting that there may be some quality-of-care benefit to be derived from maintenance of certification programs or the inclusion of board certification activities as a requirement for maintenance of licensure.27 Further, while past studies have examined the relationship between board certification and quality in an assortment of specific clinical areas,15,16 this is the first to demonstrate a robust relationship between certification and clinical quality across a broad range of clinical conditions and types of care.

It is striking that we found no consistent association between the number of malpractice claims or disciplinary actions and quality. Though malpractice claims have strong associations with measures of physician communication,28 physician communication style (and other physician attributes associated with malpractice claims) may have an inconsistent relationship to the process measures of quality that we investigated. Our results in this regard are similar to previous research showing little association between malpractice claims and physician quality as measured by health outcomes.[needs cite] In addition, the very small number of physicians in our sample with disciplinary actions against them by the Board makes it difficult to detect any association.

In contrast to the previous literature, we did not find any associations between physicians’ years of experience and quality. There are several potential explanations for this difference. The previous systematic review by Choudhry and colleagues used a much broader definition to measure quality, including performance on theoretical evaluations such as written examinations or hypothetical clinical scenarios, guideline adherence for therapy or prevention, or health outcomes such as mortality; and included individual studies with narrow areas of clinical quality assessment.7 Our study utilized only process-based measures of quality of care across a broad range of clinical areas. Further, while the studies included in the systematic review assessing academic knowledge as a marker of quality all showed consistently negative associations between age and quality, results were somewhat more mixed when quality was measured by adherence to guidelines, a method more analogous to our own work. Lastly, while the majority of studies in the systematic review found a negative association between experience and quality, 21% of the studies in the review reported no effect, similar to the findings of our work.

Our study has limitations. The investigated physician characteristics are the major publicly available data on individual physicians that are easily accessible to consumers. However, we recognize that in the future, patients may have access to physician-level performance on some quality metrics; when available, these metrics may differ from (and be narrower in scope than) those utilized in this study. Further, though we utilized a broader range of clinical quality measures than any other study to our knowledge, the scope of the quality metrics is inherently limited. The RAND Quality Assessment Tools covered 22 conditions and included solely process-based measures. It is possible that there are stronger associations between physician characteristics and performance on quality measures that we did not investigate (e.g., measures of patient experience or mortality). Due to inherent limitations of medical claims, quality measurement using claims is less robust than quality measurement based on medical records review. However, one key advantage of using claims is that it allowed us to assess quality of care for a large number of physicians.

Others have noted relationships between practice characteristics and quality measure performance, {Friedberg, 2009 #66, Pham, 2005 #39} but these practice characteristics were not available for the current analysis. Few physician practice characteristics are publicly reported by the Massachusetts Board of Registration in Medicine, and their availability to patients who are choosing a physician is relatively limited. The question of whether generalists or specialists provide better care for specific conditions is not well addressed by our study, as we assessed quality of care across an aggregated group of conditions rather than on a condition-by-condition basis. This question has been investigated in other settings. {Smetana, 2007 #67}

Our study was limited to Massachusetts, a state with a high density of academic medical centers and higher overall quality of care than the national average.29 It is possible that in this setting of higher clinical quality, the effect of physician characteristics may be less important than it would be in a setting where the overall quality of care is lower.

In conclusion, we find that individual physician characteristics are poor proxies for performance on clinical quality measures and are not well suited for use as such by patients. Public reporting of individual physician quality data may provide the consumer with more valuable guidance when seeking providers of high-quality care.


The authors would like to thank Julie Lai and Scott Ashwood for their excellent programming on this work. We are grateful to Barbra Rabson and Jan Singer from the Massachusetts Health Quality Partners who facilitated obtaining access to the data sets used in this project.

Ms. Orler had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. The research was supported by a contract from the U.S. Department of Labor and a grant from the Commonwealth Fund. Dr Mehrotra’s salary was supported by a career development award (KL2 RR024154-03) from the National Center for Research Resources, a component of the National Institutes of Health. Ms. Orler was supported by a stipend from the University of Pittsburgh School of Medicine Dean’s Summer Research Program.


*The authors have no potential conflicts of interest to disclose.


1. AARP. Choosing Your Managed Care Doctors. November 7, 2006. [Accessed October 2, 2009].
2. HealthGrades, Inc. Find Out if Your Physician is a HealthGrades Recognized or Five-Star Doctor: Why is this Information Important to Me. 2009. [Accessed September 3, 2009].
3. Boscarino JA, Adams RE. Public perceptions of quality care and provider profiling in New York: implications for improving quality care and public health. J Public Health Manag Pract. 2004 May–Jun;10(3):241–250. [PubMed]
4. Ely JW, Dawson JD, Young PR, et al. Malpractice claims against family physicians: are the best doctors sued more? J Fam Pract. 1999 Jan;48(1):23–30. [PubMed]
5. Khaliq AA, Dimassi H, Huang C-Y, Narine L, Smego RA Jr. Disciplinary action against physicians: who is likely to get disciplined? Am J Med. 2005;118(7):773–777. [PubMed]
6. Kohatsu ND, Gould D, Ross LK, Fox PJ. Characteristics associated with physician discipline: a case-control study. Arch Intern Med. 2004 Mar 22;164(6):653–658. [PubMed]
7. Choudhry NK, Fletcher RH, Soumerai SB. Systematic Review: The Relationship between Clinical Experience and Quality of Health Care. Ann Intern Med. 2005 February 15;142(4):260–273. [PubMed]
8. Pham HH, Schrag D, Hargraves JL, Bach PB. Delivery of preventive services to older adults by primary care physicians. JAMA. 2005 Jul 27;294(4):473–481. [PubMed]
9. Streja DA, Rabkin SW. Factors associated with implementation of preventive care measures in patients with diabetes mellitus. Arch Intern Med. 1999 Feb 8;159(3):294–302. [PubMed]
10. Kerfoot BP, Holmberg EF, Lawler EV, Krupat E, Conlin PR. Practitioner-Level Determinants of Inappropriate Prostate-Specific Antigen Screening. Arch Intern Med. 2007 July 9;167(13):1367–1372. [PubMed]
11. Kim C, McEwen LN, Gerzoff RB, et al. Is physician gender associated with the quality of diabetes care? Diabetes Care. 2005 Jul;28(7):1594–1598. [PubMed]
12. Berthold HK, Gouni-Berthold I, Bestehorn KP, Bohm M, Krone W. Physician gender is associated with the quality of type 2 diabetes care. J Intern Med. 2008 Oct;264(4):340–350. [PubMed]
13. Henderson JT, Weisman CS. Physician gender effects on preventive screening and counseling: an analysis of male and female patients’ health care experiences. Med Care. 2001 Dec;39(12):1281–1292. [PubMed]
14. Flocke SA, Gilchrist V. Physician and patient gender concordance and the delivery of comprehensive clinical preventive services. Med Care. 2005 May;43(5):486–492. [PubMed]
15. Sharp LK, Bashook PG, Lipsky MS, Horowitz SD, Miller SH. Specialty board certification and clinical outcomes: the missing link. Acad Med. 2002 Jun;77(6):534–542. [PubMed]
16. Brennan TA, Horwitz RI, Duffy FD, Cassel CK, Goode LD, Lipner RS. The role of physician specialty board certification status in the quality movement. JAMA. 2004 Sep;292(9):1038–1043. [PubMed]
17. Ko DT, Austin PC, Chan BTB, Tu JV. Quality of Care of International and Canadian Medical Graduates in Acute Myocardial Infarction. Arch Intern Med. 2005 February 28;165(4):458–463. [PubMed]
18. Rhee S-O, Lyons TF, Payne BC, Moskowitz SE. USMGs versus FMGs: Are There Performance Differences in the Ambulatory Care Setting? Medical Care. 1986;24(3):248–258. [PubMed]
19. Friedberg MW, Coltin KL, Pearson SD, et al. Does affiliation of physician groups with one another produce higher quality primary care? J Gen Intern Med. 2007 Oct;22(10):1385–1392. [PMC free article] [PubMed]
20. America’s Best Graduate Schools: Schools of Medicine. US News and World Report. April 4–11, 2008. p. 70. [Accessed June 22, 2008].
21. Massachusetts Board of Registration in Medicine. Disciplinary & Other Public Board Actions: a listing of disciplinary actions of the Massachusetts Board of Registration in Medicine from June 9, 1999 to June 18, 2008. [Accessed June 20, 2008].
22. McGlynn EA, Asch SM, Adams J, et al. The quality of health care delivered to adults in the United States. N Engl J Med. 2003 Jun 26;348(26):2635–2645. [PubMed]
23. Asch SM, Kerr EA, Keesey J, et al. Who is at greatest risk for receiving poor-quality health care? N Engl J Med. 2006 Mar 16;354(11):1147–1156. [PubMed]
24. Reeves D, Campbell SM, Adams J, Shekelle PG, Kontopantelis E, Roland MO. Combining multiple indicators of clinical quality: an evaluation of different analytic approaches. Med Care. 2007 Jun;45(6):489–496. [PubMed]
25. Min LC, Wenger NS, Fung C, et al. Multimorbidity is associated with better quality of care among vulnerable elders. Med Care. 2007 Jun;45(6):480–488. [PubMed]
26. Benjamini Y, Hochberg Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J R Stat Soc Series B. 1995;57(1):289–300.
27. Federation of State Medical Boards. An Analysis of the Impact of Implementation of Maintenance of Licensure Requirements. March 2009. [Accessed October 3, 2009].
28. Levinson W, Roter DL, Mullooly JP, Dull VT, Frankel RM. Physician-patient communication. The relationship with malpractice claims among primary care physicians and surgeons. JAMA. 1997 Feb 19;277(7):553–559. [PubMed]
29. Agency for Healthcare Research and Quality. NHQR 2008 State Snapshots. June 5, 2009. [Accessed October 1, 2009].