The Women's Health Initiative (WHI), initiated in 1993, enrolled 161,808 postmenopausal women aged 50-79 years and followed them with annual questionnaires for 8 years to study the major causes of morbidity and mortality. Our objective was to determine the most effective and efficient means of validating self-reported rheumatoid arthritis (RA) and systemic lupus erythematosus (SLE) in the WHI.
Data from 2 of the 40 WHI clinical centers, comprising 7443 women, were used. Of these women, 643 self-reported RA and 106 self-reported SLE. Research coordinators contacted these women by mail and telephone to obtain medical record releases and a Connective Tissue Screening Questionnaire (CSQ). Medical records were obtained for 286 women with self-reported RA and 34 with self-reported SLE and were reviewed by 3 rheumatologists blinded to the self-reported diagnoses. Sensitivity, specificity, and the kappa statistic were computed to evaluate the level of agreement between self-report and chart review.
Self-reported RA was accurate in only 14.7% (42/286) of cases. Coupling the self-report to medication data improved the positive predictive value (PPV; 62.2%) and kappa (0.53), indicating moderate agreement with chart review. Self-reported SLE was accurate in only 11.8% (4/34) of cases. Coupling the self-report to medication data improved the PPV (40.0%) and kappa (0.44), indicating moderate agreement with chart review. The CSQ was inferior to using medication data but was substantially better than self-report alone.
The performance of disease self-report coupled with medication history in validating RA and SLE was very good and should obviate the need for time-consuming medical record reviews.
The causes of rheumatoid arthritis (RA) and systemic lupus erythematosus (SLE) are still not known despite impressive strides in the treatment of these diseases. Issues of disease causation and personal risk are often addressed with epidemiologic studies. One such study, the Women's Health Initiative (WHI), enrolled 161,808 postmenopausal women at 40 centers in the United States1. While the WHI was not designed to directly study RA and SLE, it did collect information on self-reported rheumatic disease diagnoses, demographic information, an exhaustive array of environmental and dietary exposures, activity indices, medication data, serial measures of general well-being and musculoskeletal symptoms, and health morbidity and mortality information on all participating women. The WHI dataset has the potential to be a resource for investigators to examine many questions concerning the predictors, determinants, and natural history of RA and SLE.
To use the WHI dataset to study epidemiological questions about RA and SLE, it is necessary to understand how accurate the self-reported disease designations are. If the agreement between self-reported disease and actual disease is poor, it could create an ascertainment bias that could affect WHI analyses in unpredictable ways. Historically, the validity of self-reported diagnoses in RA and SLE has varied widely according to differences in geography and validation methodology (Table 1). The validity of self-reported diagnoses has been reported to be between 7% and 96% for RA and between 21% and 84% for SLE, with validations from cohorts of independent-living women similar to the WHI tending to have lower confirmation rates2-12.
Prior studies make it clear that the validity of self-reported diagnosis varies widely from cohort to cohort. Therefore, our primary goal was to validate self-reported RA and SLE diagnostic information collected by the WHI. As prior studies suggest that strict reliance on self-reported diagnoses can be problematic, we also wished to determine if the use of other data collected by the WHI, such as medication data and/or a disease screening questionnaire, could improve diagnostic accuracy. Our study design uses data collected from 2 WHI clinical centers and compares incrementally sophisticated validation methods to the results of medical chart reviews and physician interviews to determine the most effective means to define RA and SLE.
The WHI clinical trials and observational study, initiated in 1993, were designed to study the major causes of morbidity and mortality in postmenopausal women. The WHI observational study enrolled 93,676 postmenopausal women aged 50-79 years from 40 clinical centers across the US and tracked their health for an average of 8 years. Its purpose was to give reliable estimates of the extent to which known risk factors predict heart disease, cancers, and fractures and to identify new risk factors for these health problems. The WHI clinical trials included trials of the effects of postmenopausal hormone therapy on coronary heart disease, hip fracture, and breast cancer, using separate trials of estrogen plus progestin and estrogen alone1. The trials randomized 16,608 women with an intact uterus to 0.625 mg of conjugated equine estrogens plus 2.5 mg of medroxyprogesterone or placebo, prospectively monitored for an average of 5.6 years, and randomized 10,739 women without an intact uterus to 0.625 mg of conjugated equine estrogens or placebo, prospectively monitored for an average of 7.1 years. The dietary modification WHI clinical trial randomized 48,835 postmenopausal women to either a low-fat diet high in fruits, vegetables, and grains or their usual eating habits and measured the effects of diet on breast cancer, colorectal cancer, and heart disease in postmenopausal women over a period of 8 to 12 years1. Further details of the WHI study design, recruitment, screening, randomization, and eligibility criteria are described elsewhere1,13-16.
Our study was a blinded validation of self-reported rheumatic diagnoses at 2 of the 40 clinical sites in the WHI. The 2 clinical sites were the MedStar Research Institute (n = 3682) and George Washington University Hospital (n = 3761), both located in the Washington, DC, metropolitan area. All women in the observational study and clinical trials who self-reported RA, SLE, or osteoarthritis (OA) were eligible to participate. Eligible women were identified from the WHI database and contacted by mail and telephone. Participants provided informed consent, physician information, and medical record releases. Medical records to support the diagnoses of RA and/or SLE were obtained and reviewed by a group of rheumatologists blinded to the self-reported diagnoses of the participants.
As part of the WHI observational and clinical trials, all participants completed a yearly health questionnaire that included the following 3 questions about arthritis: “Did your doctor ever say that you had arthritis?” (Yes/No), “What type of arthritis do you have?” (RA, Other/Osteoarthritis), and “Has a doctor told you that you have systemic lupus erythematosus (“lupus” or SLE)?” (Yes/No). Those who self-reported RA or SLE at baseline (prevalent cases) and during later followup (incident cases) were both included. A total of 643 women with self-reported RA and 106 with self-reported SLE were identified at the 2 sites. In addition, 76 women with self-reported other arthritis/OA were identified at random to serve as a control population in the review process.
Eligible women were mailed a packet of study materials containing a letter of introduction, an Institutional Review Board approved informed consent and Health Insurance Portability and Accountability Act waiver, the Connective Tissue Screening Questionnaire (CSQ)17, an Arthritis Update questionnaire, a medical record release, and a stamped envelope in which to return forms. Eligible women not responding to our initial mailing were contacted by telephone to discuss the study and their potential involvement.
Demographic information was extracted from the WHI database. In general, the participants from our 2 sites averaged 62.4 years of age, were either Caucasian (54.7%) or African American (38.6%), and were well educated (91% had completed high school). The WHI collected demographic information only at the initial screening visit.
The WHI collected a medication inventory at baseline and at study year 3. Additionally, all participants in the controlled trials had medication information collected at study years 1, 6, and 9. Women were instructed to bring all medications they were currently taking, including over-the-counter and herbal medications, to these visits, where research staff recorded them. Medications used at any time during the study, including at baseline, were included in our analyses. Only medication names and pill strengths were collected by the WHI staff.
The following medications were defined as medications used in the treatment of RA: hydroxychloroquine, sulfasalazine, minocycline, methotrexate, leflunomide, azathioprine, cyclosporine, gold, cyclophosphamide, antirheumatic biologic agents (i.e., tumor necrosis factor-α and interleukin 1 antagonists), and oral steroids.
The following medications were defined as medications used in the treatment of SLE: hydroxychloroquine, sulfasalazine, methotrexate, leflunomide, azathioprine, cyclosporine, mycophenolate mofetil, cyclophosphamide, and oral steroids.
Because of the use of oral steroids in many other conditions, the analyses were also performed excluding oral steroids from the medication definitions. No participant reported D-penicillamine or tacrolimus use during the study.
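The medication definitions above amount to a set-membership test against each participant's recorded inventory. A minimal sketch of how such a screen could be applied is shown below; the specific biologic drug names and the substring-matching approach are illustrative assumptions, not the WHI's actual coding:

```python
# Oral steroids are kept in a separate set so that the sensitivity analysis
# described above (repeating the analyses without steroids) can be toggled.
ORAL_STEROIDS = {"prednisone", "prednisolone", "methylprednisolone"}

RA_DMARDS = {
    "hydroxychloroquine", "sulfasalazine", "minocycline", "methotrexate",
    "leflunomide", "azathioprine", "cyclosporine", "gold", "cyclophosphamide",
    # example TNF-alpha and IL-1 antagonist biologics (illustrative names)
    "etanercept", "infliximab", "anakinra",
}

def meets_ra_medication_definition(inventory, include_steroids=True):
    """True if any recorded inventory entry (drug name plus pill strength)
    mentions a defining RA medication.  Substring matching is a crude
    simplification and could false-match (e.g., 'gold' inside another name)."""
    drugs = RA_DMARDS | (ORAL_STEROIDS if include_steroids else set())
    return any(drug in entry.lower() for entry in inventory for drug in drugs)
```

A participant would then count as an RA case only when both the self-report flag and this medication screen were positive.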
The CSQ is based on the American College of Rheumatology (ACR) classification criteria for SLE, RA, and several other connective tissue diseases. The questionnaire consists of 30 items with yes/no responses, and a scoring algorithm places respondents in “probable,” “possible,” or “no” disease categories. The CSQ has been reported to be 85% sensitive and 92% specific for detecting RA and 96% sensitive and 85% specific for detecting SLE17. The CSQ fits entirely on one page and can be optically scanned and automatically scored.
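Note that a screen's reported sensitivity and specificity translate into a predictive value only once a disease prevalence is assumed; at the low prevalences of RA and SLE in a general cohort, even a fairly specific screen yields many false positives. A sketch of the standard Bayes calculation (function name and the example prevalence are illustrative, not from the study):

```python
def ppv_from_operating_point(sensitivity, specificity, prevalence):
    """Post-test probability of disease given a positive screen (Bayes' rule)."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# With the CSQ's reported RA operating point (85% sensitive, 92% specific),
# an assumed 5% prevalence implies a PPV of only about 36%.
ppv = ppv_from_operating_point(0.85, 0.92, 0.05)
```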
Participants were asked for the names and contact information of their physicians. The questionnaire specifically inquired about the doctor(s) treating their arthritis. It also collected information about the primary physicians who had provided them care from 1988 (5 yrs prior to subject involvement with the WHI) through the present.
One of 3 experienced rheumatologists (BW, FC, JK), blinded to the self-reported diagnosis of the participant, contacted the appropriate physician's office to obtain a copy of the participant's medical record, obtain a physician validation questionnaire, or interview the physician over the telephone. All these options were allowed in order to maximize community physician response rates and to fill potential gaps in the medical record. The physician validation questionnaire queried the diagnoses of RA, SLE, and OA using the defined ACR classification criteria as a guideline for each disease18-22. Reviewers used standard questions during the interviews that followed the ACR criteria. Reviewers made requests of one physician at a time, starting with the doctor currently caring for the participant's condition. If an office did not respond to a request or the medical record was incomplete, the physician was contacted by telephone by the reviewer. After completing a review of all the available records from one community physician, the reviewer determined whether, in their judgment, the outpatient chart contained adequate information to finish the review. If the information obtained was inadequate, the next doctor on the list was contacted and the review process repeated until the reviewer felt that adequate information had been obtained or all materials were exhausted. Only then did the reviewer determine whether the participant met any of the study definitions for arthritis. Each reviewer used his or her clinical impression of the case to assign the presence or absence of RA, SLE, and OA; a strict ACR criteria cutoff was not employed. If one of these rheumatic conditions was thought to be present, the reviewer used a 5-level ordinal scale to assign a level of confidence. The type of physician making the diagnosis (i.e., rheumatologist, orthopedist, general internist/family physician) was also recorded by the reviewers.
To measure potential inter-reviewer variability, 10% of the charts were randomly selected to undergo a blinded double-review process. Diagnostic differences between the reviewers were arbitrated by a separate blinded rheumatologist (AW). If diagnostic agreement between the reviewers was in excess of 95% for the RA and SLE diagnoses, it was decided that the remainder of the reviews would be performed using a single reviewer method. Otherwise, the double-reviewer method would be employed throughout the study.
Study data were entered into a specially designed computer program by each individual reviewer. The program was designed with deliberate redundancy to minimize significant data entry errors. Each reviewer had an individual database at a local computer to store the entered data. Each month the reviewers electronically sent a copy of their database to the study statisticians. Locally collected data were then combined with previously collected WHI data requested from the central WHI database.
The positive predictive value (PPV) and negative predictive value (NPV) were determined for self-report alone, self-report coupled with antirheumatic medication use, self-report with a positive CSQ score, and self-report with both antirheumatic use and a positive CSQ score, using the chart validation data as the gold standard. The kappa statistic with exact binomial 95% confidence intervals was used to evaluate the level of agreement between self-reported and chart-reviewed diagnoses. Standard descriptive methods were used to examine disagreements. Tests for equality of kappa coefficients were performed to determine the effect of education, age, and income on the agreement between self-reported diagnosis and chart review findings. Differences in demographics and confirmation rates between the George Washington University and MedStar Research Institute cohorts, as well as between prevalent and incident cases, were determined using chi-square tests and general linear models. Our sampling frame did not attempt to identify false-negative cases, under the assumption that such cases would be exceedingly rare. Sensitivity and specificity were calculated and are reported, but our sampling approach makes their interpretation more difficult.
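The agreement statistics named above all derive from the same 2 × 2 table of epidemiologic definition versus chart review. A minimal sketch of their computation (cell counts in the test example are hypothetical, not the study's):

```python
def agreement_stats(tp, fp, fn, tn):
    """PPV, NPV, sensitivity, specificity, and Cohen's kappa from a 2x2 table.

    Rows: epidemiologic definition (positive/negative).
    Columns: chart-review gold standard (disease/no disease).
    """
    n = tp + fp + fn + tn
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    # Kappa: observed agreement corrected for agreement expected by chance,
    # where chance agreement is computed from the row and column marginals.
    p_obs = (tp + tn) / n
    p_exp = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (p_obs - p_exp) / (1 - p_exp)
    return {"ppv": ppv, "npv": npv, "sensitivity": sens,
            "specificity": spec, "kappa": kappa}
```

The exact binomial confidence intervals and the tests for equal kappa coefficients used in the study require additional machinery not shown here.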
A total of 348 prevalent and 295 incident cases of RA and 26 prevalent and 80 incident cases of SLE were reported over 7.7 years of followup of the 7443 women in the WHI at our 2 centers. Of the 825 eligible women (including OA controls), adequate records and physician interviews were obtained for 367, a 44% response rate (Table 2). Of the 458 women without records, 140 declined to participate, 176 did not respond, 40 could not be contacted, and 11 provided CSQ data but had incomplete or absent medical records. Twenty-five eligible women had died prior to study initiation. Completed CSQs were obtained from 272 women, and 261 women completed both the CSQ and the medical record review. The demographic characteristics of the study participants are shown in Table 3. The study cohort mainly consisted of elderly, well educated, and non-impoverished women. There were no demographic differences between the responder and nonresponder groups, with the exception of education: responders tended to be more educated than nonresponders. Among the responders, there were significant demographic differences between the 2 sites. The George Washington University cohort was predominately Caucasian, while the MedStar cohort was predominately African American (p < 0.0001); the George Washington cohort was also more educated (p < 0.0001) and had higher incomes (p < 0.0001) than the MedStar cohort.
Thirty-two charts (~ 10%) were randomly selected by the study statistician to undergo double-review. There were no disagreements between the rheumatologists in regard to the diagnosis of RA and SLE. Due to the high degree of agreement between reviewers in determining the diagnoses of primary interest (RA and SLE), a single-reviewer format was utilized.
The performance of the different epidemiologic definitions of RA compared with the chart review findings is shown in Table 4. Of the 286 women with self-reported RA, only 42 were confirmed by chart review (14.7%). Coupling the self-reported diagnosis to medication data improved the PPV (62.2%), NPV (93.9%), and kappa (0.53), indicating moderate agreement with the chart review findings and few false-negative cases. The addition of the CSQ to the definition improved the PPV (82.4%) but decreased the NPV (91.4%) and kappa (0.49). Using either medication or CSQ data in conjunction with self-report did not improve the kappa value (0.42). Use of more or less stringent CSQ disease definitions (i.e., probable disease vs probable + possible disease) did not lead to significant alterations in the test's performance. The analyses were repeated after removing oral steroids from the list of defining medications: the RA and medication definition had a greater PPV (67.9%) but a lower kappa (0.49), with similar findings when CSQ results were also included. Overall, retaining oral steroids in the medication definition improved the overall agreement (kappa) of the RA epidemiologic definitions.
Further analyses were performed on the false-positive and false-negative participants when the self-report plus medication definition was employed. The 14 women who self-reported RA and were taking appropriate medications but had negative chart reviews (false positives) had the following conditions: OA (12), Sjögren's syndrome (2), polymyalgia rheumatica (2), undifferentiated connective tissue disease (1), unspecified seronegative spondyloarthropathy (1), eosinophilic fasciitis (1), and autoimmune hepatitis (1). Review of the 19 women found to be false-negative under the self-report-plus-medication definition revealed that all had self-reported RA and that 17 had documentation of medication use in their medical records that was not identified in the WHI database. Ninety-three percent of the cases of RA confirmed by medical records had documentation from a rheumatologist. There was no difference in rates of confirmation between prevalent and incident cases (p = 0.13) or between the 2 study sites (p = 0.23).
The performance of the different epidemiologic definitions of SLE compared with the chart review findings is shown in Table 5. Of the 34 women with self-reported SLE, only 4 were confirmed by chart review (11.8%). Combining the self-reported diagnosis with medication data improved the PPV (50.0%) and kappa (0.49), indicating moderate agreement with the chart review findings and few false-negative cases. The addition of the CSQ to the definition decreased the PPV (33.3%) and kappa (0.39) without much change in the NPV (99.6%). Using either medication or CSQ data in conjunction with self-report did not improve the kappa value (0.24). Use of alternative CSQ disease definitions did not lead to significant alterations in test performance. The analyses were repeated after removing oral steroids from the list of defining medications; the addition or removal of oral steroids had no effect on the performance of the SLE epidemiologic definitions.
Further analyses were performed on the false-positive and false-negative participants when the self-report plus medication definition was employed. Three women had self-reported SLE, were taking appropriate medications, and had negative chart reviews for SLE. All 3 had OA; one also had positive antinuclear antibodies without symptoms and another had Sjögren's syndrome. Review of the 2 women with chart-confirmed SLE who had not self-reported the disease revealed that both had documentation of medication use in their charts. One hundred percent of the confirmed cases of SLE had records from a rheumatologist. There was no difference in rates of confirmation between prevalent and incident cases (p = 1.0) or between the 2 study sites (p = 1.0).
Education affected the self-reported accuracy of RA, with college educated participants more likely to self-report correctly than participants with less education (chi-square 10.44, p = 0.001); this was not seen in SLE or OA. Income and age had no statistically significant effect on self-reported accuracy for any of the 3 conditions, although persons with incomes lower than $35,000 tended to report disease less accurately (chi-square 3.52, p = 0.06).
Prior studies3-6,8,9,11,12 have reported poor concordance between self-reported diagnoses and true cases of RA and SLE, observations reinforced by the ~ 15% concordance of self-reported RA cases and the ~ 12% concordance of self-reported SLE cases in the WHI. This demonstrates a problematic gulf between patients and their physicians in the understanding of these diseases. While some of this misunderstanding is a function of education, education alone does not account for such poor concordance. However, all the participants with confirmed RA did report that they had the disease: no case of RA confirmed by chart review was improperly coded as SLE or OA by the patients. This observation with self-reported RA has been reported in most other validation studies as well2-6. False-negative reporting of RA does not seem to be a problem, which is an advantage in epidemiological studies. Our results provide guidance for developing valid and reproducible means to confirm self-reports of RA and SLE in an epidemiologic setting. We report that it is possible to use self-reported data available in a typical epidemiologic study to provide a reasonably specific diagnosis of RA and SLE.
The use of medication history to confirm the diagnosis of RA has been infrequent in prior validation studies. The majority of studies describe utilizing serology, specialized RA questionnaires, and research algorithms23,24. Only one previous study utilized medication data for diagnostic confirmation, and there it was part of a larger algorithm that included thorough examinations9. Our data suggest that using medication data to validate self-reported RA diagnoses is a reasonable strategy. This strategy may miss about 45% of true RA cases, but 62% of the cases it recognizes are valid, allowing WHI investigators to be more confident that they are studying RA. Indeed, we suspect that medication data would have performed even better with a more directed inquiry about medication use during the initial data gathering. Seventeen of the 19 patients who had RA but were recorded as not taking medication were actually using disease modifying antirheumatic drugs (DMARD) according to chart review. Had these medications been identified by the WHI, the self-report and medication definition would have had a PPV of 74%. The WHI data provided a snapshot of medication use once every 3 years that focused only on medications taken at least twice a week. It is possible that this precluded some participants from reporting weekly methotrexate or periodic biologics use. A more regular review of medication use, or asking the participants directly about DMARD use, might have improved performance. Regardless, combining self-report and medication data to define cases of RA in epidemiological studies appears to be a reasonable approach.
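The 74% figure is consistent with the counts reported above, as a quick check shows (the cell counts are taken from the text; only the reconstruction of the true-positive count is implied rather than stated):

```python
confirmed_ra = 42        # chart-confirmed RA cases among self-reporters
false_negatives = 19     # confirmed cases missed by the self-report + medication definition
false_positives = 14     # definition-positive women with negative chart reviews

true_positives = confirmed_ra - false_negatives              # 23
ppv = true_positives / (true_positives + false_positives)    # 23/37, about 62%

recovered = 17           # DMARD use documented only in the medical record
ppv_recovered = (true_positives + recovered) / (
    true_positives + false_positives + recovered)            # 40/54, about 74%
```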
Our data were similar in regard to validation of self-reported SLE. Self-report coupled with medication yielded the highest kappa value. However, these results should be considered carefully due to the low numbers of actual cases and the unexpected finding of 2 cases of SLE that were not self-reported by the participants.
The use of self-report questionnaires to improve diagnostic validity in epidemiologic studies has been addressed in several studies. Our data suggest that a screening questionnaire for multiple autoimmune disorders does not perform much better than medication data for validation. In our study, questionnaire use decreased the PPV and did not improve agreement with chart review; this small decrease in PPV is more likely a reflection of the decreased sample size than an inherent characteristic of the questionnaire. The combination of both medication data and the questionnaire also did not improve agreement with chart review: requiring all 3 datapoints was too stringent, excluding a large number of participants with RA. Even an “either/or” paradigm for medication data and the screening questionnaire did not improve performance. These results may reflect our use of the CSQ in a role it was not designed for. The CSQ may not be well suited to the aging WHI study population. Our elderly population may have had difficulty with the language and structure of the CSQ, as evidenced by the 87 participants who filled out all the study paperwork properly but refused to complete the CSQ; many others needed to be contacted directly to clarify or complete the questionnaire. Perhaps a questionnaire with larger print and a simpler format might have performed better. While the CSQ did not outperform prospectively collected medication data, it clearly provides a substantial benefit over self-report alone, demonstrating very high specificities.
There are some drawbacks to using our approach to define disease in an epidemiologic study. Requiring medication use as part of the case definition is, in essence, a tautology: it assumes that all cases of the disease have been properly diagnosed and treated, and so excludes undiagnosed, untreated, misdiagnosed, and mistreated cases. This would be problematic for studying the early manifestations of these diseases. Using medications to define disease also requires meticulous medication data collection; our study suggests that nearly 40% of actual cases were missed because of inaccurate medication data collection. Both these drawbacks can be partially controlled for by the use of blinding and randomization, as in the WHI controlled trials, but they should be considered in interpreting epidemiologic studies utilizing these definitions. The investigators also chose a validation methodology that relied on reviewer impression rather than ACR criteria for RA and SLE in determining the validity of individual cases. The ACR criteria have not been validated for use in chart reviews, but they do represent a standard diagnostic tool that can be reproduced by other research groups. A final drawback of this study was its 44% response rate, which limits the ease with which these results can be generalized to larger patient populations.
These results highlight the difficulties of managing ascertainment bias in large epidemiologic studies of rheumatic diseases. Our data suggest that despite being well educated and of fair financial means, women with various forms of rheumatic disease tend to incorrectly identify their type of arthritis. Our study suggests that medication histories are very good for confirming self-reported cases of RA and SLE, which can obviate the need for time-consuming medical chart review. Given the large size of the WHI cohort, this concordance is likely adequate to allow many valuable analyses of the natural history and predictors of RA and SLE in postmenopausal women. We hope that our experiences in defining RA and SLE in the WHI will aid investigators working with large research datasets in performing their own diagnostic validations, and provide insight for community practitioners into how patients with rheumatic diseases truly understand their conditions.
We acknowledge Karen Barr, George Washington University site coordinator; Amy Smith, MedStar Research Institute site coordinator; Kay Mickel, MedStar Research Institute site clinical manager; Donna Embersit, George Washington University site clinical manager; Mary Pettinger, Women's Health Initiative Statistical Support; and Greg Foster, MedStar Research Institute statistical support.
The WHI program is funded by the National Heart, Lung and Blood Institute, US Department of Health and Human Services. Funding for our study was provided by an American College of Rheumatology Clinical Investigator Fellowship Award and a Washington Hospital Center Graduate Medical Education research grant.
WHI investigators. Program Office: (National Heart, Lung, and Blood Institute, Bethesda, MD) Elizabeth Nabel, Jacques Rossouw, Shari Ludlam, Linda Pottern, Joan McGowan, Leslie Ford, and Nancy Geller. Clinical Coordinating Center: (Fred Hutchinson Cancer Research Center, Seattle, WA) Ross Prentice, Garnet Anderson, Andrea LaCroix, Charles L. Kooperberg, Ruth E. Patterson, Anne McTiernan; (Wake Forest University School of Medicine, Winston-Salem, NC) Sally Shumaker; (Medical Research Labs, Highland Heights, KY) Evan Stein; (University of California at San Francisco, San Francisco, CA) Steven Cummings. Clinical Centers: (Albert Einstein College of Medicine, Bronx, NY) Sylvia Wassertheil-Smoller; (Baylor College of Medicine, Houston, TX) Jennifer Hays; (Brigham and Women's Hospital, Harvard Medical School, Boston, MA) JoAnn Manson; (Brown University, Providence, RI) Annlouise R. Assaf; (Emory University, Atlanta, GA) Lawrence Phillips; (Fred Hutchinson Cancer Research Center, Seattle, WA) Shirley Beresford; (George Washington University Medical Center, Washington, DC) Judith Hsia; (Los Angeles Biomedical Research Institute at Harbor-UCLA Medical Center, Torrance, CA) Rowan Chlebowski; (Kaiser Permanente Center for Health Research, Portland, OR) Evelyn Whitlock; (Kaiser Permanente Division of Research, Oakland, CA) Bette Caan; (Medical College of Wisconsin, Milwaukee, WI) Jane Morley Kotchen; (MedStar Research Institute/Howard University, Washington, DC) Barbara V. Howard; (Northwestern University, Chicago/Evanston, IL) Linda Van Horn; (Rush Medical Center, Chicago, IL) Henry Black; (Stanford Prevention Research Center, Stanford, CA) Marcia L. Stefanick; (State University of New York at Stony Brook, Stony Brook, NY) Dorothy Lane; (The Ohio State University, Columbus, OH) Rebecca Jackson; (University of Alabama at Birmingham, Birmingham, AL) Cora E. 
Lewis; (University of Arizona, Tucson/Phoenix, AZ) Tamsen Bassford; (State University of New York at Buffalo, Buffalo, NY) Jean Wactawski-Wende; (University of California at Davis, Sacramento, CA) John Robbins; (University of California at Irvine, CA) F. Allan Hubbell; (University of California at Los Angeles, Los Angeles, CA) Howard Judd; (University of California at San Diego, LaJolla/Chula Vista, CA) Robert D. Langer; (University of Cincinnati, Cincinnati, OH) Margery Gass; (University of Florida, Gainesville/Jacksonville, FL) Marian Limacher; (University of Hawaii, Honolulu, HI) David Curb; (University of Iowa, Iowa City/Davenport, IA) Robert Wallace; (University of Massachusetts/Fallon Clinic, Worcester, MA) Judith Ockene; (University of Medicine and Dentistry of New Jersey, Newark, NJ) Norman Lasser; (University of Miami, Miami, FL) Mary Jo O'Sullivan; (University of Minnesota, Minneapolis, MN) Karen Margolis; (University of Nevada, Reno, NV) Robert Brunner; (University of North Carolina, Chapel Hill, NC) Gerardo Heiss; (University of Pittsburgh, Pittsburgh, PA) Lewis Kuller; (University of Tennessee, Memphis, TN) Karen C. Johnson; (University of Texas Health Science Center, San Antonio, TX) Robert Brzyski; (University of Wisconsin, Madison, WI) Gloria E. Sarto; (Wake Forest University School of Medicine, Winston-Salem, NC) Denise Bonds; (Wayne State University School of Medicine/Hutzel Hospital, Detroit, MI) Susan Hendrix.