Health Serv Res. Dec 2005; 40(6 Pt 1): 1803–1817.
PMCID: PMC1361226
Patient and Provider Assessments of Adherence and the Sources of Disparities: Evidence from Diabetes Care
Karen E Lutfey and Jonathan D Ketcham
Address correspondence to Karen E. Lutfey, Ph.D., New England Research Institutes, 9 Galen Street, Watertown, MA 02472. Jonathan D. Ketcham, Ph.D., is with the School of Health Management and Policy, W. P. Carey School of Business, Arizona State University, Tempe, AZ.
Objective
To (1) compare diabetes patients' self-assessments of adherence with their providers' assessments; (2) determine whether there are systematic differences between the two for certain types of patients; and (3) consider how the cognitive processing that providers use to assess adherence might explain these differences.
Data Sources/Study Setting
Primary survey data were collected in 1998 from 156 patient–provider pairs in two subspecialty endocrinology clinics in a large Midwestern city.
Study Design
Data were collected in a cross-sectional survey study design. Providers were surveyed immediately after seeing each diabetes patient, and patients were surveyed via telephone within 1 week of clinic visits.
Data Collection/Extraction Methods
Bivariate descriptive results and multivariate regression analyses are used to examine how patient characteristics relate to four measures of overall adherence assessments: (1) patients' self-assessments; (2) providers' assessments of patient adherence; (3) differences between those assessments; and (4) absolute values of those differences.
Principal Findings
Patient self-assessments are almost entirely independent of observable characteristics such as sex, race, and age. Provider assessments vary with observable characteristics such as patient race and age but not with less readily observable factors such as education and income. For black patients, we observe that relative to white patients, providers' assessments are significantly farther away from—although not systematically farther above or below—patients' self-assessments.
Conclusions
Providers appear to rely on observable cues, particularly age and race, to make inferences about an individual patient's adherence. These findings point to a need for further research of various types of provider cognitive processing, particularly in terms of distinguishing between prejudice and uncertainty. If disparities in assessment stem more from information and communication problems than from provider prejudice, policy interventions should facilitate providers' systematic acquisition and processing of information, particularly for some types of patients.
Keywords: Adherence, clinical encounter, racial/ethnic differences in health and health care, chronic disease, statistical discrimination
Inequalities in health care can be attributed to a wide range of sources, including differences in access to care, the source of care, insurance coverage, education, and socioeconomic status. However, many studies find that significant disparities persist even after these factors are taken into account (Williams 1999; Nazroo 2003; Williams, Neighbors, and Jackson 2003). For example, racial differences are observed in cardiovascular treatments that are not attributable to either disease severity or overuse of services by whites (Institute of Medicine 2003, p. 5). Similar findings hold across a range of health problems, including cancer (McMahon et al. 1999), HIV (Moore et al. 1994), and diabetes care (Chin, Zhang, and Merrell 1998).
To determine the source of these disparities, significant attention has been given to the role of providers, particularly how prejudice, stereotyping, and uncertainty can affect providers' assessments of patients and decisions about their treatment (Balsa and McGuire 2001, 2003; Miller et al. 2002; van Ryn 2002; Institute of Medicine 2003; Snowden 2003; van Ryn and Fu 2003). Authors of this research have found varying amounts of evidence supporting the notion that “provider beliefs about patients, and provider behavior during encounters are independently influenced by patient race/ethnicity” (van Ryn 2002, p. 140). This influence could stem from providers evaluating black patients more negatively than whites as a result of negative stereotyping (van Ryn and Burke 2000). Alternatively, such differences in provider beliefs and behavior could result from “statistical discrimination” if providers have more difficulty making sense of minority patients' symptom reports (Balsa and McGuire 2001). Others have suggested that interaction within race-concordant doctor–patient dyads might differ from that within race-discordant pairs, possibly reflecting underlying differences in attitudes or communication (Cooper et al. 2003). In the midst of this proliferating literature on race and health are also some dissenting, or at least qualifying, voices. For example, Adler and Ostrove suggest that health disparities attributed to racial differences may in fact reflect socioeconomic differences (1999, p. 10). The Institute of Medicine is also somewhat conservative in embracing some of these race findings, calling for additional research in its report, Unequal Treatment: Confronting Racial and Ethnic Disparities in Healthcare (2003, p. 178).
We aim to build on the success of this previous research while addressing some of its limitations. One way in which we attempt to accomplish this is by orienting to the notion of patient adherence as a shared patient–provider undertaking, rather than as the provider-dominated ideology commonly espoused in health research. Recognizing that most people will at some point not follow their doctor's recommendations, Wright (2000, p. 704) suggests that more attention be directed to understanding why patients adhere rather than simply asking who adheres. By refocusing research efforts in this way, the perspective of the patient, rather than that of the provider, becomes central.
We use a unique dataset to examine systematic differences between patients' and providers' assessments of patient adherence to diabetes treatments. By giving equal weight to patients' and providers' perspectives, we (1) build on Wright's (2000) observations about the importance of considering the social meanings of patient adherence and (2) provide explicit analysis of patients' views of their own health behavior, which is normally left implicit if considered at all. This approach also resonates with the work of Cooper et al. (2003) by addressing potential attitude differences that might underlie discordant doctor–patient relationships.
We use these data to compare how individual patients' ratings of their overall adherence compare with their providers' ratings, and whether systematic differences exist between them. In the subsequent discussion, we consider what might drive these differences in patient and provider assessments. In particular, we discuss the roles of prejudice and uncertainty in providers' cognitive processing. We conclude by suggesting future topics for research and policy in these areas.
Data Collection
The data come from a telephone survey of patients and a post-visit survey of their providers. The data were collected from two weekly subspecialty endocrinology clinics housed in the same university-based medical center in a large Midwestern city (see Lutfey 2003 for a detailed explanation of the data collection). One clinic served a predominantly white, college-educated, insured population, while the other served a predominantly minority, high school–educated, underinsured population. These surveys represent a nearly complete census of all diabetes patients seen at both clinics over a 3-month period (with a response rate of 94 percent). After 3 months, patients in our sample began to return for their next visit (as recommended by the American Diabetes Association), limiting the sample size to 173 patients. We further restricted the analysis to the 156 of these with data for all of the variables we consider.
Questions about adherence were framed identically for patients and providers to allow for direct comparison. Providers completed a 1-page, 5-item questionnaire for each patient, while patients completed a longer, 20-minute telephone survey containing additional questions. Both were asked to rate, on a scale of 0–10, the patient's overall adherence (e.g., “On a scale of 0–10, where 0 is ‘not at all’ and 10 is ‘very closely,’ how closely would you say [you adhere/this patient adheres] to [your treatment regimen/the treatment regimen you have created for him/her]?”). Patients participated in an 8-item test of cognitive functioning from the Wechsler Adult Intelligence Scale (WAIS) Similarities subtest, which is also used in several large-scale social surveys. The WAIS Similarities subtest asks patients to describe similarities between sets of items, such as, “In what way are an orange and a banana alike?” and scores each response from 0 to 2 points. In addition, patients reported race, gender, age, occupation, education, health insurance coverage, income, time with diabetes, and type of diabetes. While some of the patient characteristics in the data are readily observable to the provider, such as race, sex, and age, others are less observable to the provider, including education, income, insurance, and cognitive ability.
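For concreteness, the sketch below shows how item-level Similarities scores aggregate into the 0–16 cognitive ability scale used later in the regressions. The item responses shown are hypothetical placeholders, not the WAIS scoring protocol itself.

```python
# Minimal sketch with hypothetical item scores: each of the 8 Similarities
# items is scored 0, 1, or 2, so the total ranges from 0 to 16.
item_scores = [2, 1, 2, 0, 2, 1, 1, 2]  # one hypothetical patient

assert len(item_scores) == 8 and all(s in (0, 1, 2) for s in item_scores)
wais_total = sum(item_scores)  # entered (with its square) in the Table 2 models
print(wais_total)              # -> 11
```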
Although these data include both the patients' and providers' subjective assessments, they do not provide any objective measure of adherence. While it is possible to hypothesize various reasons why either patients' or providers' responses might be closer to objective measures of adherence, such conjecture is beyond the scope of this paper. Thus our goal is not to treat either of these assessments as correct and the other as incorrect. Rather, we are interested in examining whether there are systematic differences between patients' and providers' assessments for certain types of patients relative to others, and if so, considering which cognitive processes that providers use might lead to these differences.
Empirical Methodology
The main dependent variables of interest are derived from the assessments of adherence described above. From these, we create four distinct variables: (1) the patients' self-ratings of adherence; (2) the providers' ratings of patients' adherence; (3) the difference between the two, calculated as (patient rating)−(provider rating); and (4) the absolute value of that difference, calculated as |(patient rating)−(provider rating)|. The difference indicates whether providers' assessments are systematically above or below patients' assessments (i.e., direction of differences), while the absolute value of the difference captures the degree to which providers' ratings deviate from patients' ratings (i.e., distance between assessments).1
We use both bivariate descriptive statistics and multivariate regressions to examine how patient and provider adherence assessments, their differences, and the absolute value of those differences vary with observable and less observable patient characteristics. Although the ratings are reported on 0–10 scales, we estimate the regressions using ordinary least squares. The independent variables are treated as categorical rather than continuous, with the categories listed in Table 1.
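To illustrate the construction of the four dependent variables and the estimation approach, here is a minimal sketch in Python. The file name, column names, and category labels are hypothetical, and the full set of controls reported with Table 2 is abbreviated.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis file: one row per patient-provider pair,
# with each party's 0-10 overall adherence rating.
df = pd.read_csv("adherence_survey.csv")

# Measures (1) and (2) are the raw patient and provider ratings;
# (3) and (4) are their difference and its absolute value.
df["diff"] = df["patient_rating"] - df["provider_rating"]  # (3) direction
df["abs_diff"] = df["diff"].abs()                          # (4) distance

# OLS with categorical regressors and robust (Huber-White) standard errors.
# C(...) expands each characteristic into category indicators; additional
# controls (diabetes type and duration, clinic, physician) are omitted here.
model = smf.ols(
    "abs_diff ~ C(race) + C(sex) + C(age_group) + C(education) + C(income)"
    " + C(insurance) + wais_score + I(wais_score ** 2)",
    data=df,
).fit(cov_type="HC1")
print(model.summary())
```

The same specification is re-estimated with each of the four measures as the dependent variable.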
Table 1. Ratings of Patient Adherence to Diabetes Treatments, by Patient Characteristics
Table 1 reports means for all of the patient characteristics in our analyses and the means of the four measures of overall adherence by those characteristics. The data in the first column show that patient self-assessments do not vary much with the characteristics we measure. However, the data in the second column show that provider assessments differ significantly across categories for a number of characteristics, including race, education, and income. Both providers' assessments and the patients' self-assessments of overall adherence were lower for black patients than white patients.
The data in the third column of Table 1 indicate that the amount by which patients' self-assessments exceed providers' assessments is greater for black patients than white patients. Moreover, the data in the final column show that the absolute difference between patients' and providers' assessments is significantly greater for black than white patients, indicating that the patient and provider assessments are especially divergent for black patients. Furthermore, patient assessments and physician assessments tend to be closer for patients who are 45–54 years old than for those ages 18–44.
In general, the results in Table 1 suggest that providers' perceptions of adherence concur more closely with self-reported adherence of certain types of patients. These results largely match the regression results we report below, although somewhat different patterns appear because the regressions are able to isolate the differences for each characteristic while holding the others constant.
Table 2 reports selected coefficients for regression analyses of the four measures of overall adherence. Each regression includes all of the variables reported in Table 1, except that in place of the two cognitive ability categories, we use the WAIS score (scaled 0–16) and the score squared because, a priori, nonlinear relationships seem plausible. The regressions also control for: type of diabetes (1, 2, or unknown); time with diabetes (less than 1 year, 1–5 years, 6–10 years, or more than 10 years); which clinic provided care (1 or 2); and which physician the patient saw (1, 2, or other). The table presents coefficients for only the observable characteristics (race, sex, and age) and the F-statistic from the test of joint significance of the characteristics that are less observable to providers. The table also presents the absolute value of the t-statistics from robust (Huber–White) standard errors, which correct for potential heteroskedasticity.
Table 2. Selected Coefficients from OLS Results for Ratings of Adherence to Diabetes Treatments
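As a sketch of the joint-significance test reported in Table 2, the coefficients on the less observable characteristics can be tested against zero using the robust covariance matrix from the fitted model above; the parameter-name prefixes below follow the hypothetical formula in the earlier sketch.

```python
import numpy as np

# Null hypothesis: the characteristics that are less observable to providers
# (education, income, insurance, and the WAIS terms) contribute nothing.
less_observable = [
    name for name in model.params.index
    if name.startswith(("C(education)", "C(income)", "C(insurance)"))
    or "wais_score" in name
]

# One restriction row per coefficient set to zero under the null.
R = np.zeros((len(less_observable), len(model.params)))
for i, name in enumerate(less_observable):
    R[i, model.params.index.get_loc(name)] = 1.0

joint_test = model.f_test(R)  # uses the HC1 covariance from the fit above
print(joint_test)             # F-statistic and p-value for joint significance
```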
The results in Table 2 indicate that patient assessments are largely independent of both observable and less observable patient characteristics, which is consistent with the descriptive results presented in Table 1. Provider assessments of black patients are nearly 1.2 points below their average assessment of white patients. The results in the third column indicate that the extent to which patients' assessments exceed providers' assessments did not vary significantly by race. The results in the last column, which reports regression results for the absolute difference between patient and provider assessments, indicate that the amount of discrepancy between patient and provider assessments is significantly greater for black patients than white patients.
Figure 1 presents regression-adjusted means by race, as calculated from the regression analyses presented in Table 2. Providers assess black patients about 17 percent lower than white patients. The absolute difference results indicate that the amount of discrepancy between patients' and providers' assessments is 1.0 point greater (or 67 percent higher) for black patients than for white patients.
Figure 1. Regression-Adjusted Means of Ratings of Adherence to Diabetes Treatments, by Race
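One common way to compute regression-adjusted means like those in Figure 1 is to predict each patient's outcome under each race category while holding their other covariates at observed values, then average the predictions. The sketch below assumes the fitted model and hypothetical data frame from the earlier example, applied to whichever of the four measures is the dependent variable.

```python
# Regression-adjusted means by race (predictive margins): predict every
# patient's outcome as if they belonged to each race category, holding
# their other covariates at observed values, then average the predictions.
adjusted_means = {}
for race in df["race"].unique():
    counterfactual = df.copy()
    counterfactual["race"] = race
    adjusted_means[race] = model.predict(counterfactual).mean()

print(adjusted_means)
# A 17 percent gap in adjusted provider ratings, for example, would appear as
# adjusted_means["black"] / adjusted_means["white"] of roughly 0.83.
```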
Table 2 also shows that the amount by which patient assessments exceed provider assessments is significantly smaller for patients aged 45–54 and 65 years and older than for patients younger than 45 years. Also, the absolute differences are significantly smaller on average for patients age 45–54 years than for patients under 45 years. There were no significant differences by sex. Likewise, the F-statistics for the contribution of the less observable characteristics are consistently insignificant at p <.10.
Discussion of Empirical Findings
Overall, the results suggest that patients' self-evaluations of diabetes adherence vary little with the patient characteristics that we measure. By contrast, providers' assessments of patient adherence vary according to observable characteristics, such as race and age, but not as much by characteristics that are more difficult to observe, such as education (as supported by the F-statistics being insignificant at even the 10 percent level). But we find significantly larger absolute values of differences for black patients, indicating a systematically greater gap between patient and provider ratings. By contrast, for age we find significant differences between provider and patient assessments yet insignificant differences in the absolute values.
In the next section, we discuss what these results might imply about the cognitive processes providers use when assessing patient adherence to treatment for diabetes. Before drawing these conclusions, however, we conducted a series of secondary analyses to rule out a number of plausible alternative explanations.2 These results do not support the hypotheses that the observed pattern of results can be explained by provider beliefs about a patient's cognitive ability, overall control of diabetes, or the existence of comorbidities that we are not able to observe directly. For example, self-reported health status does not significantly vary with the explanatory variables, limiting the likelihood that unobserved comorbidities are playing a role in the patterns we observe.
One remaining possibility that we were not able to directly address is that interactions with providers affect patient self-assessments of adherence, that is, that patient ratings are endogenous. If providers are more likely to tell certain types of patients that they are nonadherent, perhaps those patients subsequently view themselves as less adherent. Alternatively, providers might lower their expectations for certain types of patients and convey those expectations to patients. This could in turn lead to those patients rating themselves more highly than providers because the patients are less aware of the levels of adherence that could be achieved. Our survey method might reduce some concerns over such endogeneity because patients were surveyed several days after the clinic visit. This should limit any immediate but short-lived effects from interactions with providers. A better way of addressing this issue of endogeneity would be to use an external measure of adherence, such as a medication possession ratio, but we did not have access to such data for this study.
What do these findings imply about providers' cognitive processing when assessing patient adherence? Recognizing that there is some variation across disciplines in the definitions of such terms, we use the following social psychological definitions, drawn from the IOM Unequal Treatment report. Stereotyping refers to the process by which people use social categories (e.g., race, sex) as they acquire, process, and recall information about others (Institute of Medicine 2003, p. 169). Faced with the task of making sense of a world filled with infinite detail, stereotypes operate as heuristic devices allowing individuals to organize information into familiar categories, albeit often in overly rigid or exaggerated ways (see also Byrd and Clayton 2003, p. 525). Prejudice refers to a specific type of stereotype—one carrying a negative attitude or affect. In the words of Byrd and Clayton, prejudice refers to “an antipathy, felt or expressed, based upon a faulty generalization and directed toward a group as a whole or toward individual members of a group” (2003, p. 524). Finally, uncertainty refers to the problems people sometimes encounter in the process of cognitively processing their social worlds; uncertainty occurs when social actors have difficulty cueing an appropriate or accurate stereotype, or when they use stereotypes unreliably (Balsa and McGuire 2003; Institute of Medicine 2003, p. 172).
Our results suggest that providers appear to use observable cues, race and age in particular, to make inferences about an individual patient's adherence, even though patient self-ratings do not vary with these factors. This is further supported by the finding that physician assessments do not vary with patient characteristics that are more difficult for them to observe. These findings are not consistent with the idea that patients simply convey their preferences and behaviors to providers and providers apply such information equally for all types of patients. In addition to reinforcing the view that providers' cognitive processing about adherence is important for clinical decision making, we offer some empirical evidence about which types of processing might play a role. If, for example, we had found that providers give systematically lower assessments of black patients as compared with white patients (assuming other relevant factors were adequately taken into account), we might have evidence of providers behaving with prejudice toward black patients (this would also require us to assume that patient assessments were reflections of true adherence behavior). Instead, we find significant variation in the absolute value of the difference between patients' and providers' assessments for black patients as compared with white patients. This result suggests a difference in distance rather than one of direction: providers are neither systematically above nor below black patients' self-ratings, but they are systematically farther away, compared to their ratings of nonblacks. This finding suggests that providers have greater uncertainty about the adherence of black patients. In practical terms, this uncertainty might suggest that providers have more difficulty communicating with certain types of patients.3
Beyond this, however, there is little we can conclude in terms of clearly supporting more specific cognitive processing theories that are sometimes used to understand health disparities. Our findings evoke, for example, Balsa and McGuire's (2001, 2003) and Balsa, McGuire, and Meredith's (2005) work on statistical discrimination—a type of stereotyping wherein providers, uncertain how to characterize individuals belonging to specific groups, tend to use the group's average characteristics in evaluating any given individual. This theoretical perspective predicts that providers would be less extreme in their evaluations of black patients compared with whites, because they would tend toward their perception of the group's average. At the same time, our findings also evoke the complexity–extremity effect (Linville 1982; Linville and Jones 1980), which suggests that people tend to have more extreme evaluations (positive or negative) of others belonging to groups with whom they have had little exposure, and of whom they therefore have less complex cognitive understandings.4 This theoretical perspective implies findings that would be quite divergent from a statistical discrimination finding: rather than tending toward the mean of a group, assessments would be polarized. While either of these perspectives would be consistent with the greater distance between providers' and black patients' assessments that we observe, our findings as a whole leave us without distinct and mutually exclusive evidence for either theory.5
The patterns in our age results imply that providers do not appear to face the same uncertainty with young patients as they seem to for black patients. Because of various limitations of our study, we do not suggest these findings constitute clear evidence of provider prejudice against younger patients. We do, however, assert that future research examining cognitive processing and health outcomes would benefit from further exploration of various dimensions of bias, including the underlying types of cognitive processing, particularly with respect to age. Together, these findings imply a need for additional systematically coordinated, cross-disciplinary, creative work to build and execute a research agenda that would consider factors such as the following: What constitutes empirical evidence of provider prejudice, uncertainty, and stereotyping? What can we learn from the multiple disciplinary perspectives represented in this literature, and how can we better coordinate those efforts? What is the relationship of not only provider assessments, but also of patient self-assessments, to objective measures of health behavior? How will an understanding of relative differences contribute to a broader agenda of learning about cognitive processing?
This research has additional implications for understanding racial/ethnic health disparities. For example, the data permit an explicit comparison between an individual patient's self-rating of adherence and the physician's rating of that same patient. This improvement in data offers a first step in considering some underlying features of concordant versus discordant doctor–patient interaction.
Second, the results suggest that differences in beliefs about patient adherence might underlie some of the disparities in treatment decisions, at least in the context of diabetes patients. According to van Ryn and Fu (2003, p. 251), relatively little research has tested how providers' beliefs about patients' social behaviors influence their professional decision making. More fundamentally, however, we view this work as important because it functions as a theoretical antecedent to questions of how medical treatment decisions are made (see also Institute of Medicine 2003, p. 173–4). What is lacking in many of these studies, and others asking similar questions focused on provider bias, are the sorts of systematic measures of provider assessments that we offer here.
Clearly, the present study has several limitations, including a small sample size from only two clinics, its focus on a single medical indication, and its lack of an objective measure of patient adherence. To build on this work, future research might consider multiple dimensions of adherence; compare patient and provider assessments of adherence or other aspects of health behavior; integrate more “objective” data such as lab measures or automated pill counts; study mechanisms other than adherence that might lead to disparities in treatment; and examine illnesses other than diabetes.
Despite these limitations, our findings have important policy implications. The effectiveness of policies aimed at reducing disparities depends on correctly identifying and targeting their sources (Balsa and McGuire 2003). Our work here contributes by offering empirical evidence that greater provider uncertainty about black patients' adherence, rather than prejudice or negative stereotypes about black patients' ability to adhere, might be responsible for racial disparities in diabetes treatment, to the extent that beliefs about adherence affect subsequent treatment decisions. Patients and providers may be doing the best they can given the information they have (Balsa and McGuire 2003), and as a result, improving providers' abilities to assess blacks' adherence might improve outcomes more than trying to minimize or eliminate prejudice. This goal could be accomplished through a multilevel, comprehensive approach (Miller et al. 1997; Roter et al. 1998), part of which may involve giving providers incentives to spend more time with black patients, educating them about how to better elicit information from patients of different races, or encouraging them to implement more objective measures of patient adherence where possible. For age-based differences, which are characterized more by differences in direction rather than distance, policy implications might be more effectively aimed at understanding and changing negative attitudes about younger patients' adherence behaviors.
Acknowledgments
During the preparation of this manuscript, both authors were generously supported by the Robert Wood Johnson Foundation Scholars in Health Policy Research Program at the University of California, Berkeley (2002–2004). The first author has also received support for this research from the University of Minnesota. We are grateful to seminar participants at the Center for Health and Public Policy Studies at the University of California, Berkeley, where an early version of this paper was presented.
Notes
1. As an example to distinguish the “difference” from the “absolute value of the difference,” suppose that one patient's self-rating is 1.5 points above the physician's (4.5 versus 3) while another's is 1.5 below the physician's (1.5 versus 3). The average difference for these two patients is equal to 0 ([1.5+(−1.5)]/2), while the average absolute value of the difference equals 1.5 ([|1.5|+|−1.5|]/2).
2. Specifically, we repeat the regressions with alternative dependent variables: the patient's cognitive ability battery score (and omitting that variable from the regressors), physician assessment of patient cognitive ability, their differences, and their absolute differences; patient and provider assessments of overall control of diabetes, their differences and their absolute differences; and patient self-reported health status. These results are available from the authors upon request.
3. A plausible alternative is that providers communicate equally well with all types, but the variance of the adherence distribution is greater for some types of patients. Because we use a subjective measure of adherence, this could result if black patients' adherence varies more or if they have greater uncertainty about their own adherence. However, we did not find evidence of this. For example, we find greater absolute differences between black patients and providers, but in unreported regressions we found that, conditional on other characteristics, black patients did not have greater variation in their self-reported adherence, as measured by the absolute deviation from the race-specific mean.
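A minimal sketch of the dispersion check described in this note, assuming the hypothetical data frame and column names from the earlier sketches: compute each patient's absolute deviation from the race-specific mean self-rating and regress it on the same characteristics.

```python
import statsmodels.formula.api as smf

# Within-race dispersion of self-reported adherence: absolute deviation of
# each patient's self-rating from the mean self-rating of their race group.
race_means = df.groupby("race")["patient_rating"].transform("mean")
df["abs_dev"] = (df["patient_rating"] - race_means).abs()

# If black patients' self-ratings were simply more variable, the coefficient
# on C(race) here should be large; note 3 reports no such evidence.
dispersion_model = smf.ols(
    "abs_dev ~ C(race) + C(sex) + C(age_group) + C(education) + C(income)",
    data=df,
).fit(cov_type="HC1")
print(dispersion_model.summary())
```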
4. In a classic study of white college students evaluating fictional black and white law school applicants (Linville and Jones 1980), there was an interaction between race and quality of applicants, such that “good” black applicants were rated more positively than “good” white applicants and “bad” black applicants were rated more negatively than “bad” white applicants.
5. Statistical discrimination implies that providers' assessments of black patients should have less variation than their assessments of white patients, while the complexity–extremity effect implies the variation should be larger. In our data, both an F-test and Levene's test of equality of variances show no significant differences between the variances of providers' assessments of blacks and whites.
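The variance comparison in this note can be reproduced with a standard variance-ratio F-test and Levene's test; the sketch below again assumes the hypothetical data frame and column labels used earlier.

```python
from scipy import stats

black = df.loc[df["race"] == "black", "provider_rating"]
white = df.loc[df["race"] == "white", "provider_rating"]

# Two-sided variance-ratio F-test for equality of variances.
f_stat = black.var(ddof=1) / white.var(ddof=1)
d1, d2 = len(black) - 1, len(white) - 1
p_f = 2 * min(stats.f.cdf(f_stat, d1, d2), stats.f.sf(f_stat, d1, d2))

# Levene's test is less sensitive to non-normality than the F-test.
levene_stat, p_levene = stats.levene(black, white)

print(f"F = {f_stat:.2f} (p = {p_f:.3f}); Levene = {levene_stat:.2f} (p = {p_levene:.3f})")
```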
References
  • Adler NE, Ostrove JM. Socioeconomic Status and Health: What We Know and What We Don't. Annals of the New York Academy of Sciences. 1999;896:3–15.
  • Balsa AI, McGuire TG. Statistical Discrimination in Health Care. Journal of Health Economics. 2001;20:881–907.
  • Balsa AI, McGuire TG. Prejudice, Clinical Uncertainty and Stereotyping as Sources of Health Disparities. Journal of Health Economics. 2003;22:89–116.
  • Balsa AI, McGuire TG, Meredith LS. Testing for Statistical Discrimination in Health Care. Health Services Research. 2005;40(1):227–52.
  • Byrd WM, Clayton LA. Racial and Ethnic Disparities in Healthcare: A Background and History. In: Institute of Medicine. Unequal Treatment: Confronting Racial and Ethnic Disparities in Healthcare. Washington, DC: National Academy of Sciences; 2003. pp. 455–527.
  • Chin MH, Zhang JX, Merrell K. Diabetes in the African-American Medicare Population: Morbidity, Quality of Care, and Resource Utilization. Diabetes Care. 1998;21(7):1090–5.
  • Cooper LA, Roter DL, Johnson RL, Ford DE, Steinwachs DM, Powe NR. Patient-Centered Communication, Ratings of Care, and Concordance of Patient and Physician Race. Annals of Internal Medicine. 2003;139(11):907–15.
  • Institute of Medicine. Unequal Treatment: Confronting Racial and Ethnic Disparities in Healthcare. Washington, DC: National Academy of Sciences; 2003.
  • Linville PW. The Complexity–Extremity Effect and Age-Based Stereotyping. Journal of Personality and Social Psychology. 1982;42:193–211.
  • Linville PW, Jones EE. Polarized Appraisals of Out-Group Members. Journal of Personality and Social Psychology. 1980;38:689–703.
  • Lutfey K. The Influence of Clinic Organizational Features on Providers' Assessments of Patient Compliance with Medical Treatment Regimens. Research in the Sociology of Health Care. 2003;21:63–83.
  • McMahon LF, Wolfe RA, Huang S, Tedeschi P, Manning W, Edlund MJ. Racial and Gender Variation in Use of Diagnostic Colonic Procedures in the Michigan Medicare Population. Medical Care. 1999;37(7):712–7.
  • Miller LG, Liu H, Hays RD, Golin CE, Beck CK, Asch SM, Ma Y, Kaplan AH, Wenger NS. How Well Do Clinicians Estimate Patients' Adherence to Combination Antiretroviral Therapy? Journal of General Internal Medicine. 2002;17:1–11.
  • Miller NH, Hill M, Kottke T, Ockene IS. The Multilevel Compliance Challenge: Recommendations for a Call to Action. Circulation. 1997;95:1085–90.
  • Moore RD, Stanton D, Gopolan R, Chaisson RE. Racial Differences in the Use of Drug Therapy for HIV Disease in an Urban Community. New England Journal of Medicine. 1994;330(11):763–8.
  • Nazroo JY. The Structuring of Ethnic Inequalities in Health: Economic Position, Racial Discrimination, and Racism. American Journal of Public Health. 2003;93(2):277–84.
  • Roter DL, Hall JA, Merisca R, Nordstrom B, Cretin D, Svarstad B. Effectiveness of Interventions to Improve Patient Compliance: A Meta-Analysis. Medical Care. 1998;36(8):1138–61.
  • Snowden LR. Bias in Mental Health Assessment and Intervention: Theory and Evidence. American Journal of Public Health. 2003;93(2):239–42.
  • van Ryn M. Research on the Provider Contribution to Race/Ethnicity Disparities in Medical Care. Medical Care. 2002;40(1):140–51.
  • van Ryn M, Burke J. The Effect of Patient Race and Socio-Economic Status on Physicians' Perceptions of Patients. Social Science and Medicine. 2000;50:813–28.
  • van Ryn M, Fu SS. Paved with Good Intentions: Do Public Health and Human Service Providers Contribute to Racial/Ethnic Disparities in Health? American Journal of Public Health. 2003;93(2):248–55.
  • Williams DR. Race, Socioeconomic Status, and Health. Annals of the New York Academy of Sciences. 1999;896:173–88.
  • Williams DR, Neighbors HW, Jackson JS. Racial/Ethnic Discrimination and Health: Findings from Community Studies. American Journal of Public Health. 2003;93(2):200–8.
  • Wright MT. The Old Problem of Adherence: Research on Treatment Adherence and Its Relevance for HIV/AIDS. AIDS Care. 2000;12(6):703–10.