The Multidimensional Pain Inventory (MPI) is one of the most widely used instruments to assess patients’ coping with chronic pain. It provides a psychosocial classification system that categorizes patients into three coping styles: Adaptive, Dysfunctional, and Interpersonally Distressed. To date, comprehensive information about the validity of the MPI taxonomy obtained from informants other than the patient has been unavailable. This has limited conclusions about the extent to which the MPI captures patients’ adaptation to chronic pain beyond self-report. The present study is the first to examine whether the distinct multidimensional profiles underlying the patient clusters can be confirmed by proxy report. Ninety-nine chronic pain patients, their partners, and their healthcare providers participated in the study. Patients completed the MPI twice to determine stability of classification. Partners and providers rated the patient on MPI proxy versions developed for this study. Results revealed that partner- and provider-reported MPI ratings corresponded with the self-report patient profiles. The profiles of patients showing classification stability rather than switching of cluster assignment between the two MPI assessments had the highest correspondence with proxy ratings. These results extend prior validity research on the MPI and demonstrate that differential psychological adaptational styles to chronic pain can be reliably recognized by partners and healthcare providers.
Chronic pain affects up to 25% of the adult population and represents a common reason for seeking medical treatment. Numerous studies show that patients display variability in their psychosocial adjustment to chronic pain. In an effort to clarify this heterogeneity, empirical classification systems have been developed. A widely used instrument is the West Haven-Yale Multidimensional Pain Inventory (MPI). The inventory yields three psychosocial coping clusters: Adaptive (AC) patients with low pain impact and high levels of functional activity; Dysfunctional (DYS) patients with high pain impact, affective distress, and severe functional limitations; Interpersonally Distressed (ID) patients with poor social support from their significant others in response to pain. Extensive support exists for the generalizability of the taxonomy across various chronic pain conditions.
To date, there has been limited research on whether the distinct MPI patient profiles are recognized by informants other than the patient. As such, it is possible that the taxonomy captures a purely subjective, transient, and intrapsychic phenomenon. If a family member or healthcare provider perceives the patient’s coping responses as consistent with patient self-report, this would lend increased validity to the taxonomy.
A shared assessment of patients’ adaptation to chronic pain can be essential for effective treatment. It is recognized that treatment should match patients’ physical and psychosocial needs for patients to experience clinically meaningful benefits. Providers’ perceptions of the patient have a substantial impact on decisions regarding healthcare provision [12,34]. Patient-provider discrepancy can complicate treatment. Likewise, patients’ spouses and family members play an integral part in pain management by providing support and assistance. As such, the utility of their involvement in assessment and medical care is increasingly discussed. Some studies suggest, however, that the provision of support depends on the congruence of patients’ and partners’ perceptions of the patient’s health status [25,32].
This study is the first to examine whether patients’ proxies can corroborate the distinct adaptational profiles underlying the MPI. We created a parallel partner version and rewrote a subset of items for a provider version. Retest instability in patient classification has raised concern about the taxonomy’s reliability. We administered the MPI to patients twice to examine whether classification stability and the timing of patient-proxy assessments would influence patient-proxy agreement.
Recent evidence suggests that informant ratings are reasonably accurate; we expected that proxy-reported MPI profiles would correspond with patient-reported profiles. Given that patient-proxy agreement has been shown to be better when respondents are assessed at the same time rather than at different times, we expected greater agreement for MPI assessments that were more proximal in time. Finally, we expected that partners would have a better sense of patients’ coping style when patients presented a coping pattern that was stable rather than fluctuating across assessments.
Recruitment was conducted at five pain rehabilitation facilities: two pain management clinics, one of which was affiliated with the university hospital, and three physical therapy offices. Flyers were posted in the examination rooms and distributed by staff members, and patients were approached in the waiting room. Interested patients were screened for eligibility privately in the clinic. Out of the 180 patients who were approached in-person, 35 (19%) declined to be screened.
One-hundred and twenty patients were eligible and gave written consent to participate in the study. Eligible patients were at least 18 years old, had a diagnosis of low back pain, osteoarthritis, rheumatoid arthritis, and/or fibromyalgia, were seen regularly at the treating facility (i.e., average frequency of visits had to be at least every 4–6 weeks), were able to read, write, and speak English, and had no visual impairments that would interfere with questionnaire completion. Of these, 17% (n = 21) withdrew shortly after enrollment. Reasons for withdrawal were health concerns, such as upcoming surgeries (n = 5), personal matters, including relocation and family issues (n = 8), and discontinuation of treatment at the facility for unknown reasons (n = 8). The remaining 99 patients (83%) completed the study. There were no significant differences between completers and non-completers on demographic (i.e., age, gender, marital status, race, and education) and medical characteristics (i.e., years since diagnosis, and symptom duration). Data were not available for all patients’ partners and healthcare providers. For the proxy sample, 70 partners agreed to participate in the study; providers’ data were collected for 87 patients.
Patients and their partners were asked to provide basic demographic information including their age, gender, race, ethnicity, education, marital status, income, number of children, current occupation and disability status. They also completed several other assessment instruments not reported here.
The 61-item patient version assesses a range of psychosocial variables that are associated with the chronic pain experience. Patients’ responses form 13 subscales across three sections. The first section addresses pain severity, perceptions of pain-related interference, appraisals of the support received from significant others, perceived life control, and affective distress. The second section addresses patients’ perceptions of significant others’ behavioral responses to their pain: punishing/negative responses, distracting responses, and solicitous responses. In the third section, the patient rates how often he or she engages in 18 common activities. These 18 items form a general activity scale. A modified version of the MPI instructional set that clarifies the meaning of “significant other” was used. The MPI subscales have demonstrated good temporal stability (r = .62–.91) and internal consistency (Cronbach’s α = .70–.90).
The computer scoring system (MAP) uses 9 of the 13 scales to classify each patient into one of three adaptational styles (i.e., DYS, ID, and AC) or into a Hybrid or an Anomalous category. The program uses multivariate classification procedures and a goodness-of-fit approach to determine if an individual’s MPI scale scores are sufficiently similar to any of the three prototypic profiles. Hybrid indicates that a patient’s profile represents aspects of two of the three adaptational styles. Anomalous indicates inconsistencies in the patient’s profile that preclude statistically confident classification. Neither of these two categories reflects a systematic coping pattern. Hence, analyses comparing the MPI classifications in the present paper were conducted with patients in the three main adaptational clusters.
A partner version parallel to the patient MPI version was developed for the current study to assess partners’ perceptions of the patient’s coping style. Although a significant-other version of the MPI has previously been developed [11,19], it does not include all nine MPI subscales. In the present study, the complete set of items in the patient version was reworded to fit a partner format (e.g., patient version: “Rate the level of your pain at the present moment” – partner version: “Rate the level of your partner’s pain at the present moment”; patient version: “During the past week, how tense or anxious have you been” – partner version: “During the past week, how tense or anxious has your partner been”; patient version: “How able are you to predict when your pain will start, get better or worse” – partner version: “How able is your partner to predict when his/her pain will start, get better or worse”). The rewording was designed to leave the original item stem of the patient version maximally intact while changing only the respondent perspective. For each item, a “Don’t know” option was added. Given the internal nature of the pain experience and its psychological consequences, this option was intended to minimize partner guessing and to identify potential reporting difficulties for each item. A full description of the psychometric properties of the MPI partner version is presented in the results section.
The patients’ treating pain management specialist or physical therapist completed one item from each of four MPI subscales: pain severity (“How much suffering does your patient experience because of his/her pain”), interference (“In general, how much does your patient’s pain interfere with his/her day-to-day activities”), life control (“How much control do you feel your patient has over his/her pain”), and affective distress (“How tense or anxious is your patient”). These four subscales were selected because they directly assess the patient’s physical and emotional health. Given that the provider’s knowledge about the patient is garnered from healthcare-related interactions, only these subscales were deemed appropriate. Only one item for each of the four MPI subscales was used to minimize provider burden and increase compliance with questionnaire completion. In order to obtain the best possible representation of the constructs underlying each MPI subscale, items with the highest item-total correlations as reported by Kerns and colleagues were chosen and reworded to fit a provider format.
The study was reviewed and approved by the University Institutional Review Board (IRB). Patients gave written consent to participate in the study including collection of ratings from their partner and healthcare provider. During the clinic visit when a patient was enrolled in the study, they completed the first MPI assessment. Patients were also asked to identify a person to whom they felt close (as outlined by the instructional set of the MPI) and who might be willing to participate in the study. These individuals could be spouses, significant others, family members, or close friends. Patients were given a study flyer and invitation letter for their partner. We also asked patients for their verbal permission to call their designated partners and inquire about their willingness to participate in the study. With the patient’s approval, we initiated contact with the partner approximately 1–2 days after patient enrollment. Approximately two weeks after the first assessment, patients received the second MPI via mail to be completed at home. Partners who agreed were mailed the MPI partner version in a separate envelope on the same day that patients were mailed their second MPI assessment. Patients and partners were instructed to complete the questionnaires in privacy and to not share the questions and their answers with one another to avoid non-independence of reporting. On the day of the patient’s routine visit to the clinic 4–6 weeks after study enrollment, providers completed the four MPI provider items. All participating patients, partners, and providers completed their respective MPI version.
All data were double-entered and computer verified. The MPI was scored using the MAP software program. As partner and provider reports were obtained on days that were most proximal to the second MPI administration, analyses were based on the second assessment.
In step one, overall convergence between patient and proxy MPI ratings at the subscale and item level was examined via Pearson correlations and paired-samples t-tests. Paired-samples t-tests take into account the non-independence that may exist within dyads.
In step two, differences between the categorical patient-reported MPI classifications on the individual MPI proxy report scales were examined. This allowed us to determine whether the MPI patient profiles are validated by proxy report. Instead of conducting omnibus univariate ANOVA tests, we examined differences between the patient clusters via planned comparisons. The contrasts were selected a priori and based upon previously established scale score differences among the clusters in patient self-report [39,40]. Thus, our goal was to replicate established differences between patient clusters in the ratings obtained from partners and healthcare providers. The planned comparison strategy was used to maximize statistical power. A more conservative significance level of P ≤ .01 was chosen to minimize Type I error due to the number of planned contrasts. In addition, we present Cohen’s d, a common measure of effect size (d = .20 small, d = .50 medium, d = .80 large effect) defined as the difference between two means divided by their pooled standard deviation. MPI patient cluster at the second assessment served as the independent variable and scale scores on the partner version served as the dependent variables. Analyses for the provider report followed the same analytic pattern as for partners but were conducted on an item level.
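For reference, Cohen’s d as defined above (difference between two group means divided by their pooled standard deviation) can be computed directly from raw group scores. A minimal sketch; the function name and sample data are illustrative, not drawn from the study:

```python
from statistics import mean, variance

def cohens_d(group_a, group_b):
    """Cohen's d: difference between two group means divided by
    the pooled standard deviation of the two groups."""
    n_a, n_b = len(group_a), len(group_b)
    # Pooled variance weights each group's sample variance by its
    # degrees of freedom (n - 1).
    pooled_var = ((n_a - 1) * variance(group_a) +
                  (n_b - 1) * variance(group_b)) / (n_a + n_b - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5
```

By the conventions cited above, a value near .80 returned by this function would be read as a large effect.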
In step three, we examined the accuracy of partner-reported MPI subscales in classifying patients into their respective cluster. For this purpose, discriminant function analyses were conducted. In addition, partner ratings on the MPI subscales were entered into the MAP, the computerized scoring program typically used to classify patient responses; we then compared the classifications resulting from partner report with those obtained from patient report. Given that the provider items did not encompass all MPI dimensions, these analyses were only conducted with the partner scale scores. To determine whether the timing of patient and partner MPI assessments would impact classification accuracy, we examined patient–partner agreement based on the first as well as the second MPI patient assessment. Agreement was determined via Cohen’s Kappa (κ), which corrects for the agreement between two respondents that occurs by chance. In addition, we compared patient–partner agreement between patients with stable and unstable MPI classifications to determine whether stability would impact the results.
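Cohen’s Kappa corrects observed agreement for the agreement expected by chance given each rater’s marginal label frequencies. A minimal illustration; the cluster labels and data below are hypothetical, not the study’s:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: (observed - expected) / (1 - expected), where
    'expected' is the chance agreement implied by each rater's
    marginal frequencies over the label set."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)
```

With perfectly matching classifications the function returns 1.0, while agreement no better than chance yields a value near zero.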
Step one of the analyses was conducted on the individual subscale/item level and utilized information from all available patient–partner dyads (n = 70) and patient–provider dyads (n = 87) including patients classified as Anomalous and Hybrid. In steps two and three of the analyses, dyads including patients whose scores indicated an Anomalous or Hybrid pattern were excluded, because their MPI classification is not clinically meaningful. Of the 70 patients with partner data, 16 were classified as Anomalous or Hybrid, resulting in a sample of n = 54 for partner analyses. Of the 87 patients with provider data, 18 were classified as Anomalous or Hybrid, resulting in a sample of n = 69 for provider analyses.
Forty-nine patients (50%) were female. The majority of patients were married or living with a partner (70%) and were White (87%). Approximately 26% had completed college and an additional 33% had some college education. The mean age was 52 years (SD = 11.9). A full description of demographics is found in Table 1. Patients identified their primary pain complaint as low back pain (84%), osteoarthritis (10%), rheumatoid arthritis (3%), and fibromyalgia (3%). Forty-seven patients (47%) came from pain management clinics, and the other 52 patients (53%) came from physical therapy offices. There were significant differences between patients from the two recruitment sites for years since pain symptoms first began (t(91) = 2.1, P < .05): Patients from pain management facilities had symptoms for a longer period of time (mean = 10.8, SD = 8.7) than patients from physical therapy offices (mean = 7.1, SD = 8.7). In addition, patients from pain management facilities were more likely to be on disability than patients from physical therapy offices (χ2(1) = 8.9, P < .01). Finally, the rates of MPI classification were different between recruitment locations (χ2(2) = 12.1, P < .01): In the pain management facilities 18% were classified as AC (n = 7), 37% as DYS (n = 14), and 45% as ID (n = 17); in contrast, in the physical therapy offices, 55% were classified as AC (n = 21), 13% as DYS (n = 5), and 32% as ID (n = 12). Patients with an Anomalous or Hybrid (n = 21) classification differed significantly from the remaining patients on marital status (P < .05): 38% of patients with an Anomalous or Hybrid pattern were married compared to 69% of patients with one of the three adaptational styles.
Out of the 99 patients, 15 (15%) did not nominate a partner. There were no significant differences on demographic and medical characteristics between patients who had partner data and those who did not. Of the 84 nominated partners, a total of 70 (83%) agreed to participate in the study. Most of the partners (76%) were in a romantic relationship with the patient, and 24% were family or close friends (see Table 1). A total of nine healthcare providers (four medical doctors, four physical therapists, and one nurse practitioner) agreed to participate in the study. Provider ratings were available for 88% of the patients (n = 87). The remaining 12 patients did not have appointments with their provider within the time frame of the study. The gender distribution of providers was 33% female (n = 3) and 67% male (n = 6).
Reliability estimates and scale score distributions for the current study were comparable to those reported previously (α = .70–.90) by Kerns and colleagues. The nine subscales demonstrated adequate internal consistencies at the first (α = .66–.93) and second assessments (α = .77–.96). Test–retest reliability of scale scores ranged from r = .63–.87, with the average retest interval being 25 days (SD = 9.7). Means and standard deviations for each subscale corresponded to those reported previously.
Prior to computation of scale scores, item characteristics for the MPI partner version were inspected. Recent reviews argue that a lack of patient-proxy agreement can result from methodological limitations, including compromised reliability and lack of score variability. Partners’ responses utilized the full range of the scale (range = 0–6) with the exception of two items (range = 1–6). The standard deviations for each item suggested satisfactory variability of responses (all SDs ≥ one full scale-point). Next, the frequency of “Don’t know” for each item was inspected. As expected, for the subscales where respondents were asked to rate their own behavior toward the patient (i.e., social support, punishing, solicitous, and distracting responses), “Don’t know” was utilized only once. Slightly higher rates were evident on subscales where respondents were asked to rate patients’ experiences with the illness; however, the average rate of “Don’t know” across those scales was only 3% (range = 0–13%). We further examined the item-total correlations of the MPI partner version. By convention, an item-total correlation of .30 is considered the acceptable minimum for an item. The average item-total correlation for partner report was .53 (range = .07–.82), somewhat lower than the average item-total correlation of .61 (range = .08–.94) for patient report. Out of the 56 item-total correlations, six were below .30. Four of these came from the General Activity subscale that assesses 18 daily behavioral activities. These four items were: mows the lawn, r = .21; plays cards/games, r = .26; goes to a movie, r = .20; and works on the car, r = .07. In sum, the item characteristics of the MPI partner version allowed computation of scale scores.
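The item-total statistic referenced above is commonly computed as the corrected item-total correlation, i.e., each item correlated with the total of the remaining items of its scale. A sketch under that assumption (the function names and response matrix are illustrative):

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

def item_total_correlation(responses, item):
    """Correlate one item with the total of the remaining scale items
    (the 'corrected' item-total correlation). 'responses' is a list of
    per-respondent item-score rows."""
    item_scores = [row[item] for row in responses]
    rest_totals = [sum(row) - row[item] for row in responses]
    return pearson_r(item_scores, rest_totals)
```

An item returning a value below the conventional .30 floor, as with the four General Activity items above, would be flagged as weakly related to its scale.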
Table 2 presents descriptive statistics and psychometric properties for the individual subscales of the MPI partner version in comparison with those obtained by the patient version. Scale scores were computed by summing the items comprising each subscale and dividing each sum by the number of non-missing items. Internal consistencies for partner-reported punishing (α = .85), solicitous (α = .70), and distracting responses (α = .67) were satisfactory in the present study and comparable to those reported previously by Kerns and Rosenberg and recently by Pence and colleagues. Internal consistencies were generally good, with a median across scales of .83 and a minimum of .61 (for support). In comparison to previous studies, we found similar internal consistencies for partner-reported pain severity (α = .85) and for pain-related interference (α = .91) [30,35]. The standard deviations of the MPI partner version were similar to those obtained for the patient version (Table 2).
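The prorated scoring rule described above (sum the answered items and divide by the number of non-missing items, so that “Don’t know” or skipped responses do not deflate the score) can be sketched as follows; the function name is illustrative:

```python
def mpi_scale_score(item_responses):
    """Prorated scale score: sum of answered items divided by the
    number of non-missing items. None marks a missing or
    "Don't know" response; an all-missing scale yields None."""
    answered = [r for r in item_responses if r is not None]
    return sum(answered) / len(answered) if answered else None
```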
Inspection of the item characteristics for the provider version revealed that providers’ responses utilized the full range of the scale (range = 0–6). The standard deviations for each item suggested satisfactory variability of responses (all SDs ≥ one full scale-point). Inspection of the occurrence of “Don’t know” revealed that providers utilized this option rarely for the items pain severity (n = 3) and affective distress (n = 1).
Patients and partners completed the MPI within a mean of 5.0 days of one another (SD = 9.01; range = 0–40 days). Paired samples t-tests were conducted on each of the nine subscales to determine the absolute levels of agreement between patients and partners (see Table 2). Only one scale showed a trend toward different mean levels (P < .05). Partners reported lower levels of pain-related interference than patients.
In addition, moderate convergent validity between patient and partner ratings was evident with an average coefficient of r = .47 (see Table 2). Pain severity (r = .61) and interference (r = .58) demonstrated the highest coefficients, indicating substantial agreement. Agreement on punishing (r = .56), solicitous (r = .42), and distracting responses (r = .37) was moderate and comparable to the results of Sharp and Nicholas (r = .42, .56, .40, respectively).
The coefficient for the social support subscale was the only one at a level not reaching statistical significance (r = .22, P = .08).
Paired sample t-tests were conducted on the common items. As shown in Table 3, providers rated the severity of patients’ pain significantly lower than patients (P = .01).
Correlational analyses revealed moderate to no agreement between the patient and provider common MPI items: pain severity (r = .31, P < .01), interference (r = .35, P < .01), life control (r = .03, P = .78), and affective distress (r = .18, P = .11) (see Table 3).
To determine if partners’ MPI scales differentiate the MPI taxonomy clusters, planned contrasts via univariate ANOVA were conducted to examine mean differences between MPI patient clusters on partner-reported MPI subscales. Planned contrasts of partner responses were designed to test expected differences reported in the literature for patient responses. Results are presented in Table 4.
Partners of DYS patients reported higher levels of patients’ pain severity than partners of AC and ID patients (P < .01, d = 1.0); in addition, partners of AC patients reported lower levels than partners of DYS and ID patients (P < .01, d = .84).
Partners of DYS patients reported higher levels of patients’ pain-related interference than partners of AC and ID patients; in addition, partners of AC patients reported lower levels than partners of DYS and ID patients. Both contrasts confirmed these predictions with effect sizes of d = .68 and d = .85, respectively (although the first contrast did not reach statistical significance at P ≤.01).
Partners of AC patients reported higher levels of patients’ life control than partners of DYS and ID patients (P < .01, d = .78).
Partners of AC patients reported lower levels of patients’ affective distress than partners of DYS and ID patients (P < .001, d = 1.14).
Although the planned contrast did not reach statistical significance (P = .041), partner-reported means suggested lower levels of social support for ID patients than for AC and DYS patients (d = .59).
Partners of ID patients reported providing higher levels of punishing responses to patients’ pain than partners of AC and DYS patients (P < .01, d = .87).
Partners of ID patients reported providing lower levels of solicitous responses to patients’ pain than partners of AC and DYS patients (P < .01, d = .80).
No significant differences were found for distracting responses.
Although the planned contrast for this subscale was not significant (P = .037, d = .67), partner-reported means suggested that partners of DYS patients reported lower levels of patients’ general activity than partners of AC and ID patients.
Overall, partner-reported MPI subscale differences among clusters demonstrated correspondence with those obtained through patient self-report. In order to display the degree of correspondence, profiles obtained for patient–partner dyads within each MPI cluster were graphed together. As can be seen in Fig. 1, the pattern of results supported both a high degree of convergence between patient- and partner-report and a high degree of discrimination between patient clusters based on partner-report.
To determine if providers’ MPI ratings differentiate the MPI taxonomy clusters, planned contrasts were conducted to examine mean differences between MPI patient clusters on provider-reported MPI items. Results are presented in Table 5.
Providers reported higher levels of pain severity for DYS patients than for AC and ID patients (P < .001, d = 1.31); in addition, providers reported lower levels for AC patients than for DYS and ID patients (P < .01, d = .94).
Providers reported higher levels of pain-related interference for DYS patients than for AC and ID patients (P < .01, d = .84) and lower levels for AC patients than for DYS and ID patients (P < .01, d = 1.02).
No significant differences were found for life control.
Although the planned contrast did not reach statistical significance (P = .025), provider-reported means suggested lower levels of affective distress for AC patients than for DYS and ID patients (d = .58).
To obtain additional information about the degree of correspondence between patient and provider reports, the profiles of patient–provider dyads within each cluster on the four MPI items were plotted for patients and providers (see Fig. 2). As for patients’ partners, the cluster profiles resulting from patient and provider MPI item-scores were highly convergent.
Our next goal was to determine whether partner-reported MPI ratings can be used to discriminate between the three adaptational patient clusters. Specifically, we were interested in examining whether partners could more accurately classify patients into their respective clusters at the second MPI assessment compared to the first. Patients completed their second MPI only a few days apart from when partners completed their MPI. In contrast, the time interval between patient and partner assessments for the first MPI was much longer. Statistically speaking, “a pair of variables measured at the same time should correlate more strongly than the same pair measured at different times” (p. 144). Related literature has shown that patient–proxy agreement is indeed better if assessments are concurrent rather than retrospective. As a result, we expected that classification accuracy would be higher for a shorter assessment interval between respondents.
We performed two discriminant function analyses on partner-reported MPI subscales, one with patient cluster at Time 1 and another with patient cluster at Time 2 as the dependent variable. For MPI Time 1, analyses revealed two discriminant functions with a combined χ2(18, N = 58) = 31.28, P = .03. The two functions accounted for 57% and 43%, respectively, of the explained between-group variability. These percentages refer to the relative importance of each function in explaining the dependent variable. The squared canonical correlation represents the proportion of variance in the dependent variable that is discriminated by the independent variables for each function. The squared canonical correlation coefficients for the two functions were .29 and .24, respectively. For MPI Time 2, analyses revealed two discriminant functions with a combined χ2(18, N = 53) = 33.88, P = .01. The two functions accounted for 54% and 46%, respectively, of the explained between-group variability. The squared canonical correlation coefficients for the two functions were .32 and .28, respectively. Using patient MPI Time 1 clusters as prior probabilities, 62% of AC, 75% of DYS, and 72% of ID patients were correctly classified, yielding an overall success rate of 70%. Using patient MPI Time 2 clusters as prior probabilities, the discriminant functions correctly classified 75% of AC, 64% of DYS, and 68% of ID patients, yielding an identical success rate of 70%. Thus, our hypothesis was not confirmed: agreement between patient and partner classifications as determined by Cohen’s Kappa was moderate and identical for Time 1 (κ = .54, P < .001) and Time 2 (κ = .54, P < .001).
As an alternative strategy to determine the level of agreement between patient- and partner-reported MPI profiles, we entered partner-reported MPI ratings into the MAP scoring program. Using patient MPI Time 1 clusters, 81% of patients were correctly classified as AC, 47% were correctly classified as DYS, and 39% were correctly classified as ID, yielding an overall success rate of 56% (χ2(4, N = 46) = 15.63, P < .01). Using patient MPI Time 2 clusters, again, a similar pattern emerged: 75% of patients were correctly classified as AC, 58% as DYS, and 36% as ID, yielding an overall success rate of 57% (χ2(4, N = 42) = 12.00, P < .05). Similar to the results of the discriminant function analyses, Cohen’s Kappa was virtually identical for Time 1 (κ = .34, P = .001) and Time 2 (κ = .35, P = .001).
Furthermore, we explored if patient-partner agreement varied as a function of patients’ classification stability. Partners might get a better sense of patients’ coping style if they present a stable profile over time (i.e., have the same MPI classification across assessments) than if patients’ coping style fluctuates. Thirty-six patients switched adaptational cluster from the first to the second MPI assessment yielding an instability rate of 38%.
For the discriminant function analyses, Cohen’s kappa for unstable patients was non-significant at Time 1 (κ = .26, P = .14) and Time 2 (κ = .20, P = .35), whereas for stable patients it was significant (κ = .63, P < .001).
For the MAP-based classifications, Cohen’s kappa for unstable patients was likewise non-significant at Time 1 (κ = .02, P = .93) and Time 2 (κ = .06, P = .76), whereas for stable patients it was significant (κ = .46, P < .001).
Thus, our hypothesis that agreement would be higher for patients who maintained a stable classification than for those who switched adaptational cluster was confirmed.
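The classification-stability computation described above amounts to a label-by-label comparison of each patient’s cluster assignment across the two MPI administrations. A sketch with hypothetical assignments:

```python
# Hypothetical Time 1 / Time 2 cluster assignments, one pair per patient
time1 = ["AC", "DYS", "ID", "AC", "DYS", "ID", "AC", "DYS"]
time2 = ["AC", "ID",  "ID", "AC", "DYS", "AC", "AC", "DYS"]

# A patient is "unstable" if the assignment differs between assessments
switched = sum(a != b for a, b in zip(time1, time2))
instability_rate = switched / len(time1)
print(f"{switched} patients switched ({instability_rate:.0%})")  # → 2 patients switched (25%)
```

Splitting the dyads on this stable/unstable flag and computing agreement within each subgroup yields the subgroup kappas reported above.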
This study is the first to examine whether the MPI patient profiles can be validated with information obtained independent of the patient. Two primary goals of the taxonomy have been to characterize the heterogeneity of coping with chronic pain and to accurately discriminate patients with different coping patterns for appropriate treatment tailoring. The literature investigating the MPI taxonomy has presented findings that question the validity of the patient clusters, for example, in terms of their ability to predict treatment response [1,14,41], their temporal stability, and the distinctiveness of the two maladaptive patterns. We designed this study based on the assumption that if patients’ partners and healthcare providers generated the same picture of the patient’s adaptational style, this would provide evidence of the convergent validity of the taxonomy. If there were discrepancies, they could be explained by the inability of others to know the internal experience of pain or by inadequacies of the instruments. Previously, Kerns and Rosenberg developed an MPI partner version but did not include all of the MPI scales, thus precluding the ability to place the patient in the taxonomy using the partner version. To accomplish this, we created measures for patients’ partners and providers that were parallel to the patient version.
We found important preliminary evidence to suggest that the distinct multidimensional profiles underlying the taxonomy are corroborated by proxy report. At the subscale level, no significant differences were found between patients’ and partners’ mean ratings except for pain-related interference, which partners slightly underestimated relative to patients. These results are encouraging given several studies observing that proxies significantly overestimate patients’ pain [4,10,27,31,32] and illness-related distress. The correlations between patient and partner MPI scale scores showed that partners are reasonably consistent with patients’ ratings of the same dimensions (r = .37–.61) except for social support (r = .22). These results correspond with related literature reporting moderate patient–partner agreement on the pre-existing MPI subscales; for example, Kerns and Rosenberg (r = .42–.49), Flor and colleagues (r = .38–.77), and Sharp and Nicholas (r = .40–.63).
The primary goal of the present paper was to examine whether the partner ratings accurately discriminated the patient clusters. We are not aware of any other study that has investigated the validity of the MPI taxonomy in this specific way. Almost all of the a priori planned contrasts yielded the expected differences between the cluster profiles with moderate to large effect sizes.
Our results for provider ratings were less pronounced, particularly in terms of patient–provider agreement at the MPI item level. Of the four items, providers significantly underestimated only their patients’ pain severity. This finding is consistent with previous literature [9,17,36,38]. The more stringent test of within-dyad correlations found weak associations for life control and affective distress and only modest associations for pain intensity and interference. Healthcare providers may not routinely assess the mental and emotional aspects of the pain experience. Previous research has shown that patient–healthcare provider interactions often lack communication about patients’ psychological concerns. As such, providers may have more difficulty ascertaining patients’ feelings and mental appraisals of their condition. In addition, although providers’ ratings of patients’ pain and interference were somewhat accurate, these data add to the literature suggesting that even in an ongoing treatment relationship, providers do not have a full understanding of the patient’s pain experience. Discrepancies may in part arise from the fact that providers judge patients’ pain in relation to the pain experienced by other patients under their care. Finally, it is also possible that the reduced patient–provider agreement resulted in part from measurement unreliability. Recall that providers completed single-item ratings, which are generally less reliable than multi-item ratings.
Results of the planned contrasts, however, were more encouraging. Provider ratings of pain severity and interference sorted patients into their respective clusters. Although they did not reach statistical significance, provider ratings of life control and affective distress were also in the predicted direction.
It appears that the MPI patient clusters are neither the product of purely subjective experiences of the patient nor solely influenced by presentational biases [15,40]. Instead, the results suggest that the coping patterns reflected in the clusters have objective reality, or at least are valid in an interpersonally agreed-upon way. We believe this series of results strongly supports the convergent validity of the MPI taxonomy.
Our final goal was to determine whether patients could be correctly classified into their respective cluster based on partner-reported MPI subscales. We also examined whether partners would show a higher degree of correspondence to the profiles of stable patients than to those whose classification changed from one assessment to the next. Our earlier work identified the inconvenient finding that approximately a third of chronic pain patients generate different cluster assignments with repeated testing. This was confirmed in the present sample, with a classification instability rate of 38%. It is problematic for observational studies examining cluster differences, as well as for clinical trials investigating treatment efficacy, if a third of the sample is unstable. We assumed that this would also be relevant for examining the validity of the clusters in this study. Patient–partner agreement was substantially higher for stable than for unstable patients. Thus, for patients in a steady state of coping with chronic pain, partners can report on patients’ coping with a reasonable degree of accuracy. However, for patients whose pain levels and coping vacillate over relatively short periods of time, partners’ ratings are more discrepant. The fact that patient–partner agreement was identical for the first and second MPI assessments substantiates this conclusion.
Several study limitations should be noted. The patient and partner sample sizes limited statistical power. Because some of the measures were completed at home, there is no guarantee that patient and partner responses were independent; if they were not, this could have inflated correspondence. In addition, our results are preliminary. Subsequent investigations of the MPI partner version, including refinement of item content and psychometric properties, assessments of test–retest reliability, and examinations of its criterion validity, will be necessary to substantiate its use for clinical research. Likewise, the provider assessment is brief and will require psychometric development for further use.
We found classification accuracy for partner report to be unsatisfactory using the MAP scoring program. Classification rates from the discriminant function analysis (DA) and the MAP program may differ because the DA maximizes classification accuracy within the present sample, whereas the MAP scoring algorithm is based on patient normative data and was not designed or validated for partner responses. At the least, these results emphasize that patient and partner ratings should not be considered substitutes for one another. This is further corroborated by the fact that the strength of patient–partner agreement, albeit moderate, was far from perfect. Our findings suggest that partner reports have higher correspondence for patients with a stable coping pattern. As would logically be expected, instability of patient cluster assignment over time is associated with less patient–partner correspondence. This instability could result from the clustering algorithm, or it may identify a clinically interesting patient subset whose approach to coping with chronic pain is more state-like than trait-like. The integration of ratings from both respondents may provide a clinically rich assessment of adjustment.
In terms of strengths, this study was the first to examine the convergent validity of the MPI taxonomy via proxy ratings across all nine dimensions. The MPI is the primary standardized, clinically available method for characterizing patients’ individual adaptation to chronic pain; thus, validation of patient coping classification is both theoretically and clinically important. A particular strength was the use of a complete partner version. Related research using the pre-existing partner subscales has confirmed their satisfactory psychometric properties and utility for examining patient–partner agreement [19,35], and our results converge with these findings. Furthermore, our study is unique in that it reports evidence that partners’ and providers’ ratings act in concert to confirm distinct coping profiles of chronic pain patients.
Pending replication, the results have important clinical implications. The utility of partners’ involvement in clinical care has gained considerable attention, and spouse-assisted psychosocial interventions for chronic pain are increasingly being used [18,24]. An essential step toward maximizing their efficacy is a better understanding of patients’ and partners’ appraisals of the disease experience. The ability to administer the MPI to both patient and partner to learn about their individual perspectives could prove extremely useful. Outside informants may provide a more stable, trait-like depiction of patients’ coping profile and, therefore, uniquely contribute to the ability to characterize patients’ coping with pain. A shared assessment would allow clinicians to tailor treatment by identifying and addressing disparities that might interfere with effective pain management. The limitations in provider perception of patient coping underline the importance of routine assessment encompassing patients’ physical and mental health status, and the potential for more comprehensive evaluation by inviting partner ratings.
This study was supported in part by the Applied Behavioral Medicine Research Institute, Stony Brook University. There are no conflicts of interest. We are grateful to Drs. Carole Agin, Irina Lokshina, Farrokh Manekscha, and Marc Yland; Julie Scheuerman, NP, Andrew Martino, PT, and Bill Devlin, PT for their participation and assistance in establishing patient access. We would like to acknowledge Kelly Creighton and Sharon Martino, PT for their help in recruitment and data collection. We are grateful to the anonymous reviewers whose valuable insights have been incorporated into the paper. Finally, we thank our patients and their partners for making this study possible.
1 The partner version of the MPI and the Healthcare Provider Questionnaire are available from the authors.