Health Serv Res. 2005 February; 40(1): 213–226.
PMCID: PMC1361134

Response Rates and Response Bias for 50 Surveys of Pediatricians


Research Objective

To track response rates across time for surveys of pediatricians, to explore whether response bias is present for these surveys, and to examine whether response bias increases with lower response rates.

Data Source/Study Setting

A total of 63,473 cases were gathered from 50 different surveys of pediatricians conducted by the American Academy of Pediatrics (AAP) since 1994. Thirty-one surveys targeted active U.S. members of the AAP, six targeted pediatric residents, and the remaining 13 targeted AAP-member and nonmember pediatric subspecialists. Information for the full target samples, including nonrespondents, was collected using administrative databases of the AAP and the American Board of Pediatrics.

Study Design

To assess bias for each survey, age, gender, location, and AAP membership type were compared for respondents and the full target sample. Correlational analyses were conducted to examine whether surveys with lower response rates had increasing levels of response bias.

Principal Findings

Response rates to the 50 surveys examined declined significantly across survey years (1994–2002). Response rates ranged from 52 to 81 percent, with an average of 68 percent. Comparisons between respondents and the full target samples showed the respondent group to be younger, to include more females, and to include fewer specialty-fellow members. Response bias was not apparent for pediatricians' geographical location. The average response bias, however, was fairly small for all factors: age (0.45 years younger), gender (1.4 percentage points more females), and membership type (1.1 percentage points fewer specialty-fellow members). Gender response bias was found to be inversely associated with survey response rates (r=−0.38). Even for the surveys with the lowest response rates, the amount of response bias never exceeded 5 percentage points for gender, 3 years for age, or 3 percentage points for membership type.

Conclusions

While response biases favoring women, young physicians, and nonspecialty-fellow members were found across the 52–81 percent response rates examined in this study, the amount of bias was minimal for the factors that could be tested. At least for surveys of pediatricians, investigators should devote more attention to directly assessing response bias rather than relying on response rates as a proxy for it.

Keywords: Response rate, response bias, physician survey

A survey's response rate is a conventional proxy for the amount of response bias contained in that study. While there are more theoretical opportunities for bias when response rates are low rather than high, there is no necessary relationship between response rates and bias (O'Neill, Marsden, and Silman 1995; Asch, Jedrziewski, and Christakis 1997). A review of physician surveys published between 1985 and 1995 found an average response rate of 61 percent for all surveys of physicians and an average response rate of 52 percent for surveys with more than 1,000 observations (Cummings, Savitz, and Konrad 2001). There was no significant decline in response rates to published physician surveys across that time, but it is unclear whether that pattern has continued.

By definition, not much information is available about nonrespondents. It has been estimated that only 18 percent of physician survey articles perform any type of comparison between responders and nonresponders, leaving response rates as the only indicator of bias for the remainder (Cummings, Savitz, and Konrad 2001). Direct examinations of response bias for health care professional groups have generally found only minimal amounts of response bias in surveys (Barton et al. 1980; Hovland, Romberg, and Moreland 1980; Locker and Grushka 1988; McCarthy, Koval, and MacDonald 1997; Thomsen 2000; Kellerman and Herold 2001). Surveys of patients and of the general population have consistently shown substantially more response bias (Sheikh and Mattingly 1981; Benfante et al. 1989; Brennan and Hoek 1992; Diehr et al. 1992; Vestbo and Rasmussen 1992; Prendergast, Beal, and Williams 1993; Walsh 1994; Blanker et al. 2000; Kotaniemi et al. 2001; Barchielli and Balzi 2002; Fowler et al. 2002; Mazor et al. 2002; Solberg et al. 2002; Partin et al. 2003; Van Loon et al. 2003). Substantial response bias greatly limits the generalizability of survey findings (Asch, Jedrziewski, and Christakis 1997; Cummings, Savitz, and Konrad 2001). Studies focused specifically on assessing response bias for physician surveys are lacking. It is important to determine whether response rates for physicians are falling, and whether decreasing response rates are associated with increasing response bias.

The American Academy of Pediatrics (AAP) has conducted many surveys of pediatricians through its Periodic Survey of Fellows and other programs. The AAP also maintains an administrative database that contains demographic information about its members. Using this database along with data from the American Board of Pediatrics administrative database, age, gender, membership type, and location data were merged with respondent/nonrespondent information for 50 surveys of pediatricians. The objectives in doing this were (1) to monitor response rates across time, (2) to examine systematic response bias for several different pediatrician characteristics, and (3) to explore response bias as a function of survey response rates.

Methods

A total of 63,473 cases were gathered from 50 different surveys of pediatricians conducted by the AAP since 1994. Thirty-one surveys targeted random samples of active U.S. members of the AAP (AAP Periodic Survey and AAP Medicaid/SCHIP Participation Survey), six targeted random samples of pediatric residents (AAP Third Year Resident Survey), and the remaining 13 targeted AAP-member and nonmember pediatric subspecialists (Future of Pediatric Education II Survey).

Between four and six mailings were conducted for each of the 50 surveys and response rates were tracked. A simple definition of response rate was utilized in this study. The number of surveys returned with valid responses was divided by the total number of pediatricians on the mailing list. This provides a conservative estimate of response rate that does not try to adjust the denominator post hoc to exclude doctors whose addresses were bad or whose survey responses indicated that they should not have been included in the target sample.
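As a minimal illustrative sketch (not part of the study's materials), this conservative definition can be computed directly; the function and variable names below are hypothetical.

```python
# Minimal sketch of the conservative response-rate definition described above.
# Names and numbers are hypothetical; they do not come from the AAP surveys.
def response_rate(valid_returns: int, mailing_list_size: int) -> float:
    """Surveys returned with valid responses divided by the full mailing list.

    The denominator is not adjusted post hoc for bad addresses or
    ineligible pediatricians, so the estimate is conservative.
    """
    return valid_returns / mailing_list_size

# Example: 1,100 valid returns from a mailing list of 1,600 -> about 69 percent.
print(f"{response_rate(1100, 1600):.1%}")
```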

Amount of response bias was computed for four different characteristics of physicians: age, gender, geographic location, and AAP membership type. These four characteristics were chosen based on their availability in the administrative databases and their usefulness in published analyses. Many articles based on these surveys reported descriptive results for the four characteristics, and the majority of these articles also reported significant relationships between the characteristics and other survey topic questions (Campbell et al. 1996; Brotherton, Tang, and O'Connor 1997; Olson, Christoffel, and O'Connor 1997; Schaffer et al. 1998; Barnett, Duncan, and O'Connor 1999; Brotherton, Mulvey, and O'Connor 1999; Cheng et al. 1999; Schanler, O'Connor, and Lawrence 1999; Redding et al. 2000; Stoddard et al. 2000; Cull et al. 2002; 2003a; 2003b; Pan, Cull, and Brotherton 2002; Pletcher et al. 2002; Wiley et al. 2002; Tunkel et al. 2002; Anderson et al. 2003; Kelly et al. 2003; Kline and O'Connor 2003; Tang, Yudkowsky, and Davis 2003). Of course, there are many other practice characteristics that we did not have access to that also could be associated with survey participation.

For each survey, mean age, percent female, percent living in the Northeast census region, and percent classified as AAP specialty fellows were used as operational definitions of these factors. AAP specialty fellows are full members of the AAP who were board-certified by an accrediting body other than the American Board of Pediatrics or the Royal College of Physicians and Surgeons of Canada. Specialty fellows must belong to one of the AAP specialty sections, and a majority of their professional time must be devoted to pediatric practice, teaching, or research activities. The specialty-fellow category does not correspond to the total number of pediatric subspecialists who are AAP members, because most pediatric subspecialists are certified by the American Board of Pediatrics. The specialty-fellow group consists largely of surgical specialists who chose to focus on pediatric medicine. The analysis of specialty membership type was limited to the 31 surveys targeting the full membership, because specialty fellows would not have been eligible for the other surveys or there would have been no variability in this characteristic.

To assess bias for each survey, characteristics of respondents were compared with the characteristics of the full target samples, including nonrespondents. This was accomplished using the administrative database of the AAP for all surveys and the administrative database of the American Board of Pediatrics for several of the Future of Pediatric Education II surveys. Bias was then defined as the full target sample's value for a given characteristic subtracted from the respondents' value. Bias values of 0 reflect perfect agreement between respondents and the target sample. Positive bias values indicate that respondents have more of the characteristic of interest, such as a higher percentage of females than the target sample. Negative values indicate that respondents have less of the characteristic, such as a younger mean age. To determine whether bias was statistically significant, one-sample t-tests were conducted comparing the average bias with 0 (no bias).
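A brief sketch of these two steps, assuming one bias value per survey (e.g., the percentage-point difference in percent female), is shown below; the data and names are hypothetical, not values from the 50 surveys.

```python
import numpy as np
from scipy import stats

def bias(respondent_value: float, target_value: float) -> float:
    """Respondents' value minus the full target sample's value.

    Positive values mean respondents have more of the characteristic
    (e.g., more female); negative values mean less (e.g., younger).
    """
    return respondent_value - target_value

# Hypothetical per-survey gender bias values (percentage points more female).
gender_bias = np.array([1.2, 0.8, 2.1, 1.5, 0.4, 1.9, 1.1, 0.7])

# One-sample t-test of the average bias against 0 (no bias).
t_stat, p_value = stats.ttest_1samp(gender_bias, popmean=0)
print(f"mean bias = {gender_bias.mean():.2f} points, t = {t_stat:.2f}, p = {p_value:.3f}")
```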

Bias distributions were created using data points for the factors of interest from each of the 50 surveys plotted as a function of response rate. Linear regression analyses were conducted to examine whether there was increased bias as response rates decreased for each of the characteristics examined. The surveys included in this study were approved by the AAP Institutional Review Board.
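The regression step can be sketched as follows; with a single predictor, the fitted slope carries the same test as the Pearson correlation between response rate and bias. The values below are hypothetical placeholders, not data from the surveys.

```python
import numpy as np
from scipy import stats

# Hypothetical per-survey response rates and gender bias values (points more female).
response_rates = np.array([0.81, 0.76, 0.72, 0.68, 0.64, 0.60, 0.55, 0.52])
gender_bias = np.array([0.6, 0.9, 1.0, 1.3, 1.6, 1.8, 2.2, 2.5])

result = stats.linregress(response_rates, gender_bias)
print(f"slope = {result.slope:.2f}, r = {result.rvalue:.2f}, p = {result.pvalue:.3f}")
# A negative r (bias increasing as response rates fall) would mirror the
# r = -0.38 reported for gender in the results.
```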

Results

Response rates varied from 52 to 81 percent with a mean response rate of 68 percent. A significant decline in response rates was evident across survey years, r=−0.547, p<.001 (Figure 1). The average response rate for surveys conducted in 1998 or earlier was 70 percent compared with an average of 63 percent for surveys conducted after 1998, t=4.1, p<.001. This represents a 7 percentage point absolute drop and a 10 percent relative drop-off in response rates.

Figure 1
Response Rate by Survey Year

Four different practice characteristics were examined individually to test for response bias. Gender, which ranged from 15 percent female for a survey of pediatric otolaryngologists to 67 percent for the 2002 survey of pediatric residents, was the first factor examined. For 46 of the 50 surveys, bias was a positive number indicating that female pediatricians were more likely to respond to our surveys (Figure 2). If no bias was present, then there should have been roughly an equal number of positive and negative values. The mean response bias across studies was 1.4 percentage points more females. This fairly small difference from 0 was statistically significant, t=7.4, p<.001. As response rates decreased, response bias for gender significantly increased, r=−0.38, p=.007.

Figure 2
Gender Response Bias by Response Rate

Across surveys, there was also considerable variability in the mean ages of the target samples, ranging from a low of 31 years for the 1999 survey of pediatric residents to a high of 55 years for a survey of developmental or behavioral subspecialists. For 41 of the 50 surveys, age bias was a negative number, indicating that younger pediatricians were more likely to respond (Figure 3). The mean response bias across studies was −0.45 years. Although this bias is less than 1 year, the small difference was consistent, t=−4.2, p<.001. As response rates increased, the amount of negative bias was reduced, but this association was not statistically significant, r=0.125, p=.387.

Figure 3
Age Response Bias by Response Rate

The census region where pediatricians were located was fairly consistent across surveys. The percentage of pediatricians in each target sample from the Northeast region ranged from 18 percent for the Medicaid/SCHIP Participation Survey, which oversampled smaller states, to 33 percent for the 2000 pediatric resident survey, reflecting the disproportionate number of pediatric training programs located in the Northeast. For this factor, no response bias was apparent. For exactly 25 surveys, the bias value was negative, and for the remaining 25, the bias value was 0 or positive. The mean region bias value, −0.01 percentage points, was very far from statistical significance, p=.946.

The final factor that we examined in this study was AAP membership type, focusing on the percentage of pediatricians who were AAP specialty fellows. The percentage of specialty fellows in the target sample was fairly low, ranging from 2.5 to 4 percent, for all 31 surveys of the AAP full membership. Response bias was apparent for this factor with 30 of 31 surveys showing less participation from this membership group (Figure 4). The mean membership type bias was −1.1 percentage points, t=−13.4, p<.001. Unlike gender, no correlation was apparent between the survey response rate and the amount of response bias, p=.21. The same small amount of bias was apparent for surveys with high response rates and surveys with low response rates.

Figure 4
American Academy of Pediatrics (AAP) Membership Type Bias by Response Rate

For the three characteristics where bias was apparent (gender, age, and membership type), Pearson correlations were computed to examine possible overlap. The levels of bias for these characteristics were found to be unrelated to one another across the 50 surveys. None of the correlations approached significance: gender and age (r=0.01, p=.93), gender and membership type (r=−0.06, p=.74), and age and membership type (r=0.07, p=.70).
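This overlap check amounts to pairwise Pearson correlations among the per-survey bias values; a hypothetical sketch (not the study's data) follows.

```python
from itertools import combinations
import numpy as np
from scipy import stats

# Hypothetical per-survey bias values for each characteristic.
bias_by_factor = {
    "gender": np.array([1.2, 0.8, 2.1, 1.5, 0.4, 1.9]),
    "age": np.array([-0.3, -0.6, -0.2, -0.8, -0.4, -0.5]),
    "membership": np.array([-1.0, -1.3, -0.9, -1.2, -1.1, -1.0]),
}

# Correlate every pair of bias measures across surveys.
for (name_a, a), (name_b, b) in combinations(bias_by_factor.items(), 2):
    r, p = stats.pearsonr(a, b)
    print(f"{name_a} vs {name_b}: r = {r:.2f}, p = {p:.2f}")
```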

Discussion

By aggregating the results of 50 fairly recent surveys of pediatricians (1994–2002), we were able to identify important trends regarding surveys of pediatricians. Paradoxically, the results of our investigation are both troubling and reassuring.

Most troubling is the pronounced decline in response rates since 1994, and especially since 1998. A previous study examining physician response rates for the preceding decade (1985–1995) showed fairly low response rates to physician surveys but no decline (Cummings, Savitz, and Konrad 2001). AAP response rates consistently exceeded those averages, but unlike in that study, a consistent decline in response rates was apparent. A potentially important difference between the two studies, however, is that our study included all surveys fielded during the time period, regardless of whether the results were published. It is possible that a publication bias limited the likelihood of observing a decline in the Cummings, Savitz, and Konrad (2001) article.

In the present study, response rates declined from an average of 70 percent for surveys conducted in 1998 or earlier to an average of 63 percent for surveys conducted after 1998. This is a 7 percentage point absolute drop and a 10 percent relative drop in response rates. When correlations were conducted to examine whether response bias was associated with response rates, a significant relationship was found for gender. As would be expected from the generally assumed relationship between bias and response rates, greater gender response bias was found at the lower response rates. A similar pattern was apparent for age, but this correlation was not significant.

These data suggest a need to continue monitoring trends in response rates for surveys of pediatricians. Now, however, is the time to consider what can be done to reverse this downward trend. The survey methods employed by the AAP remained stable across these surveys: multiple mailings, typically five for each survey, each including the survey instrument and a postage-paid return envelope. For some surveys, postcard reminders were also mailed. The surveys did vary somewhat in length, ranging from four to eight pages. Possible methods to increase response rates include combined e-mail and traditional mail request procedures, more regular use of incentives for participation (Deehan et al. 1997; Halpern et al. 2002), more parsimonious surveys, better allocation of resources through the use of sampling, and surveys better targeted at members' interests (Groves, Cialdini, and Couper 1992).

The results also may point to a potentially larger problem for the field of health services research. Just as increased use of antibiotics can result in diminishing effectiveness, increasing numbers of survey requests to physicians may be limiting the effectiveness of the survey research paradigm. It is difficult to imagine how the field could systematically respond to this problem. The AAP has internal systems designed, for example, to prevent fellows from being surveyed by the Periodic Survey of Fellows more than once every 3 years, but it would be nearly impossible to coordinate and restrict the number of surveys sent to pediatricians by researchers outside the AAP.

On the reassuring side, the surveys were very consistent in showing only small amounts of response bias regardless of the response rate. This consistency is striking given the wide variation in each pediatrician characteristic examined. Mean age, for example, ranged from 31 to 55 years, and percent female ranged from 15 to 67 percent across surveys. Yet age was found to be within a few years, and gender within a few percentage points, of the target samples' distributions even for the lowest response rates. These results are consistent with the pattern in the research literature showing response bias to be more of a problem for surveys of the general population than for surveys of fairly homogeneous professional groups such as physicians.

The direction of bias showed that female pediatricians, younger pediatricians, and nonspecialty members were more likely to respond to the surveys. Perhaps these groups were more likely to respond as a way of expressing their opinions to the AAP leadership, because they may be less likely to be involved in AAP activities. It is also possible that the survey topics were more interesting to these groups or that better mailing address information was available for them.

Minimal amounts of bias are even more reassuring for the age and gender factors, given that these factors are often correlated with survey topic questions and are commonly used as covariates in analyses of topic questions. The lack of substantial response bias for the factors we investigated, however, does not rule out greater bias for some survey topic questions. Still, article reviewers should not be too quick to dismiss results based on a survey's response rate alone. Rather, investigators should attempt, whenever possible, to conduct some assessment of response bias and include these analyses in their results. This can be done by using administrative databases, as in this study, or by systematically recontacting a subset of nonrespondents.

Several limitations of these results should be recognized. Most importantly, we could not directly measure the amount of bias for the various survey topic questions. While the demographic characteristics examined in this study have been significantly correlated with many different topics, it is still possible that nonrespondents held different attitudes and would have answered certain topic questions differently than respondents did. Also, study results are based only on surveys of pediatricians. It is not clear whether similar levels of response rates and response bias would be found for surveys of other physicians. Finally, the response rates all fell within a 30 percentage point range (52 to 81 percent). We do not know whether the relationship between bias and response rates, or the amount of bias itself, would be similar for response rates below 50 percent.

In conclusion, this study systematically examined trends in response rates and response bias for surveys of pediatricians. While response rates have declined over the past 10 years, the amount of response bias for these surveys was minimal. These results suggest that for homogeneous professional groups like physicians, less than optimal response rates may not necessarily mean excessive levels of response bias. Still, we must begin considering ways to reverse the downward trend in response rates to prevent future response rates for physician surveys from falling to a level where response bias is problematic.

Acknowledgments

We thank Lynn Olson, Ph.D., Benard Dreyer, M.D., and Beth Yudkowsky, M.P.H. for reviewing previous versions of this manuscript.

References

  • Anderson MR, Jewett EA, Cull WL, Jardine DS, Outwater KM, Mulvey HJ. “The Practice of Pediatric Critical Care Medicine: Results of the FOPE II Survey of Sections Project.” Pediatric Critical Care Medicine. 2003;4(4):412–17.
  • Asch DA, Jedrziewski MK, Christakis NA. “Response Rates to Mail Surveys Published in Medical Journals.” Journal of Clinical Epidemiology. 1997;50(10):1129–36.
  • Barchielli A, Balzi D. “Nine-Year Follow-Up of a Survey on Smoking Habits in Florence (Italy): Higher Mortality among Non-Responders.” International Journal of Epidemiology. 2002;31(5):1038–42.
  • Barnett S, Duncan P, O'Connor KG. “Pediatricians' Responses to the Demand for School Health Programming.” Pediatrics. 1999;103(4):e45.
  • Barton J, Bain C, Hennekens CH, Rosner B, Belanger C, Roth A, Speizer FE. “Characteristics of Respondents and Non-Respondents to a Mailed Questionnaire.” American Journal of Public Health. 1980;70(8):823–5.
  • Benfante R, Reed D, MacLean C, Kagan A. “Response Bias in the Honolulu Heart Program.” American Journal of Epidemiology. 1989;130(6):1088–100.
  • Blanker MH, Groeneveld FP, Prins A, Bernsen RM, Bohnen AM, Bosch JL. “Strong Effects of Definition and Nonresponse Bias on Prevalence Rates of Clinical Benign Prostatic Hyperplasia: The Krimpen Study of Male Urogenital Tract Problems and General Health Status.” BJU International. 2000;85(6):665–71.
  • Brennan M, Hoek J. “The Behavior of Respondents, Nonrespondents, and Refusers across Mail Surveys.” Public Opinion Quarterly. 1992;56:530–35.
  • Brotherton SE, Mulvey HJ, O'Connor KG. “Women in Pediatric Practice: Trends and Implications.” Pediatric Annals. 1999;28(3):177–83.
  • Brotherton SE, Tang SF, O'Connor KG. “Trends in Practice Characteristics: Analyses of 19 Periodic Surveys (1987–1992) of Fellows of the American Academy of Pediatrics.” Pediatrics. 1997;100(1):8–18.
  • Campbell JR, Schaffer SJ, Szilagyi PG, O'Connor KG, Briss P, Weitzman M. “Blood Lead Screening Practices among US Pediatricians.” Pediatrics. 1996;98(3, part 1):372–7.
  • Cheng TL, DeWitt TG, Savageau JA, O'Connor KG. “Determinants of Counseling in Primary Care Pediatric Practice.” Archives of Pediatrics and Adolescent Medicine. 1999;153(6):629–35.
  • Cull WL, Mulvey HJ, O'Connor KG, Sowell DR, Berkowitz CD, Britton CV. “Pediatricians Working Part Time: Past, Present, and Future.” Pediatrics. 2002;109(6):1015–20.
  • Cull WL, Yudkowsky BK, Schonfeld DJ, Berkowitz CD, Pan RJ. “Research Exposure during Pediatric Residency: Influence on Career Expectations.” The Journal of Pediatrics. 2003a;143:564–9.
  • Cull WL, Yudkowsky BK, Shipman SA, Pan RJ. “Pediatric Training and Job Market Trends: Results from the AAP Third Year Resident Survey (1997–2002).” Pediatrics. 2003b;112(4):787–92.
  • Cummings SM, Savitz LA, Konrad TR. “Reported Response Rates to Mailed Physician Questionnaires.” Health Services Research. 2001;35(6):1347–55.
  • Deehan A, Templeton L, Taylor C, Drummond C, Strang J. “The Effect of Cash and Other Financial Inducements on the Response Rate of General Practitioners in a National Postal Study.” The British Journal of General Practice. 1997;47(415):87–90.
  • Diehr P, Koepsell TD, Cheadle A, Psaty BM. “Assessing Response Bias in Random-Digit Dialing Surveys: The Telephone-Prefix Method.” Statistics in Medicine. 1992;11(8):1009–21.
  • Fowler FJ, Gallagher PM, Stringfellow VL, Zaslavsky AM, Thompson JW, Cleary PD. “Using Telephone Interviews to Reduce Nonresponse Bias to Mail Surveys of Health Plan Members.” Medical Care. 2002;40(3):190–200.
  • Groves RM, Cialdini RB, Couper MP. “Understanding the Decision to Participate in a Survey.” Public Opinion Quarterly. 1992;56:475–95.
  • Halpern SD, Ubel PA, Berlin JA, Asch DA. “Randomized Trial of 5 Dollars versus 10 Dollars Monetary Incentives, Envelope Size, and Candy to Increase Physician Response Rates to Mailed Questionnaires.” Medical Care. 2002;40(9):834–9.
  • Hovland EJ, Romberg E, Moreland EF. “Nonresponse Bias to Mail Survey Questionnaires within a Professional Population.” Journal of Dental Education. 1980;44(5):270–4.
  • Kellerman SE, Herold J. “Physician Response to Surveys: A Review of the Literature.” American Journal of Preventive Medicine. 2001;20(1):61–7.
  • Kelly DP, Cull WL, Jewett EA, Brotherton SE, Roizen NJ, Berkowitz CD, Coleman WL, Mulvey HJ. “Developmental and Behavioral Pediatric Practice Patterns and Implications for the Workforce: Results of the FOPE II Survey of Sections Project.” The Journal of Developmental and Behavioral Pediatrics. 2003;24(3):180–88.
  • Kline MW, O'Connor KG. “Disparity between Pediatricians' Knowledge and Practices Regarding Perinatal Human Immunodeficiency Virus Counseling and Testing.” Pediatrics. 2003;112(5):e367.
  • Kotaniemi JT, Hassi J, Kataja M, Jonsson E, Laitinen LA, Sovijarvi AR, Lundback B. “Does Non-Responder Bias Have a Significant Effect on the Results in a Postal Questionnaire Study?” European Journal of Epidemiology. 2001;17(9):809–17.
  • Locker D, Grushka M. “Response Trends and Nonresponse Bias in a Mail Survey of Oral and Facial Pain.” Journal of Public Health Dentistry. 1988;48(1):20–5.
  • Mazor KM, Clauser BE, Field T, Yood RA, Gurwitz JH. “A Demonstration of the Impact of Response Bias on the Results of Patient Satisfaction Surveys.” Health Services Research. 2002;37(5):1403–17.
  • McCarthy GM, Koval JJ, MacDonald JK. “Nonresponse Bias in a Survey of Ontario Dentists' Infection Control and Attitudes Concerning HIV.” Journal of Public Health Dentistry. 1997;57(1):59–62.
  • Olson LM, Christoffel KK, O'Connor KG. “Pediatricians' Experience with and Attitudes toward Firearms.” Archives of Pediatrics and Adolescent Medicine. 1997;151(4):352–9.
  • O'Neill TW, Marsden D, Silman AJ. “Differences in the Characteristics of Responders and Non-Responders in a Prevalence Survey of Vertebral Osteoporosis.” Osteoporosis International. 1995;5(5):327–34.
  • Pan RJ, Cull WL, Brotherton SE. “Pediatric Resident's Career Intentions: Data from the Leading Edge of the Pediatrician Workforce.” Pediatrics. 2002;109(2):182–88.
  • Partin MR, Malone M, Winnett M, Slater J, Bar-Cohen A, Caplan L. “The Impact of Survey Nonresponse Bias on Conclusions Drawn from a Mammography Intervention Trial.” Journal of Clinical Epidemiology. 2003;56(9):867–73.
  • Pletcher BA, Jewett EAB, Cull WL, Brotherton SE, Hoyme HE, Pan RJ, Mulvey HJ. “The Practice of Clinical Genetics: A Survey of Practitioners.” Genetics in Medicine. 2002;4:142–9.
  • Prendergast MJ, Beal JF, Williams SA. “An Investigation of Non-response Bias by Comparison of Dental Health in 5-Year-Old Children According to Parental Response to a Questionnaire.” Community Dental Health. 1993;10(3):225–34.
  • Redding GJ, Cloutier MM, Dorkin HL, Brotherton SE, Mulvey HJ. “Practice of Pediatric Pulmonology: Results of the Future of Pediatric Education Project.” Pediatric Pulmonology. 2000;30(3):190–7.
  • Schaffer SJ, Campbell JR, Szilagyi PG, Weitzman M. “Lead Screening Practices of Pediatric Residents.” Archives of Pediatrics and Adolescent Medicine. 1998;152(2):185–9.
  • Schanler RJ, O'Connor KG, Lawrence RA. “Pediatricians' Practices and Attitudes Regarding Breastfeeding Promotion.” Pediatrics. 1999;103(4):e35.
  • Sheikh K, Mattingly S. “Investigating Non-Response Bias in Mail Surveys.” Journal of Epidemiology and Community Health. 1981;35(4):293–6.
  • Solberg LI, Beth Plane M, Brown RL, Underbakke G, McBride PE. “Nonresponse Bias: Does It Affect Measurement of Clinician Behavior?” Medical Care. 2002;40(4):347–52.
  • Stoddard JJ, Cull WL, Jewett EA, Brotherton SE, Mulvey HJ, Alden ER. “Providing Pediatric Subspecialty Care: A Workforce Analysis.” Pediatrics. 2000;106(6):1325–33.
  • Tang SF, Yudkowsky BK, Davis J. “Medicaid Participation by Private and Safety Net Pediatricians, 1993 and 2000.” Pediatrics. 2003;112(2):368–72.
  • Thomsen S. “An Examination of Nonresponse in a Work Environment Questionnaire Mailed to Psychiatric Health Care Personnel.” Journal of Occupational Health Psychology. 2000;5(1):204–10.
  • Tunkel DE, Cull WL, Jewett EAB, Brotherton SE, Britton CV, Mulvey HJ. “Practice of Pediatric Otolaryngology: Results of the Future of Pediatric Education II Project.” Archives of Otolaryngology Head and Neck Surgery. 2002;128:759–64.
  • Van Loon AJ, Tijhuis M, Picavet HS, Surtees PG, Ormel J. “Survey Non-Response in the Netherlands: Effects on Prevalence Estimates and Associations.” Annals of Epidemiology. 2003;13(2):105–10.
  • Vestbo J, Rasmussen FV. “Baseline Characteristics Are Not Sufficient Indicators of Non-Response Bias in Follow-up Studies.” Journal of Epidemiology and Community Health. 1992;46(6):617–9.
  • Walsh K. “Evaluation of the Use of General Practice Age-Sex Registers in Epidemiological Research.” The British Journal of General Practice. 1994;44(380):118–22.
  • Wiley JF, Fuchs S, Brotherton SE, Burke G, Cull WL, Friday J, Simon H, Jewett EA, Mulvey H. “A Comparison of Pediatric Emergency Medicine and General Emergency Medicine Physicians' Practice Patterns: Results from the Future of Pediatric Education II (FOPE II) Survey of Sections Project.” Pediatric Emergency Care. 2002;18:153–58.
