Health information exchange is a national priority, but there is limited evidence of its effectiveness.
We sought to determine the effect of health information exchange on ambulatory quality.
We conducted a retrospective cohort study over two years of 138 primary care physicians in small group practices in the Hudson Valley region of New York State. All physicians had access to an electronic portal, through which they could view clinical data (such as laboratory and radiology test results) for their patients over time, regardless of the ordering physician. We considered 15 quality measures that were being used by the community for a pay-for-performance program, as well as the subset of 8 measures expected to be affected by the portal. We adjusted for 11 physician characteristics (including health care quality at baseline).
Forty-three percent of the physicians were portal users. Non-users performed at or above the regional benchmark on 48% of the measures at baseline and 49% of the measures at follow-up (p = 0.58). Users performed at or above the regional benchmark on 57% of the measures at baseline and 64% at follow-up (p<0.001). Use of the portal was independently associated with higher quality of care at follow-up for those measures expected to be affected by the portal (p = 0.01), but not for those not expected to be affected by the portal (p = 0.12).
Use of an electronic portal for viewing clinical data was associated with modest improvements in ambulatory quality.
Health information exchange is a national priority. The Health Information Technology for Economic and Clinical Health Act (HITECH) established up to $29 billion in incentives for providers and hospitals for “meaningful use” of electronic health records, which includes health information exchange [1, 2]. Health information exchange involves the electronic sharing of clinical data across health care providers caring for the same patient. Types of data that can be shared include laboratory results, radiology results, hospital discharge summaries, and operative reports.
Without electronic health information exchange, clinical information is missing in 1 out of every 7 primary care visits, because the data reside elsewhere and are not accessible at the point of care. Health information exchange could improve quality by providing more complete and more timely access to clinical data, which in turn could improve medical decision making [5–8]. For example, using health information exchange, physicians could determine whether tests recommended by clinical guidelines have been done for their patients, including when ordered by other providers. If access to external clinical data reveals that recommended tests have not been done, then those tests could be ordered; if it reveals that recommended tests have been done, then physicians could document those tests and avert duplicate ordering.
Previous evidence demonstrating the effect of health information exchange on quality has been limited [9–11]. Previous work in hospital-based settings has found that electronic laboratory result viewing, which is one aspect of health information exchange, can decrease redundant testing and shorten the time to address abnormal test results [12–15]. In the ambulatory setting, electronic laboratory result viewing has been found to increase patient satisfaction and improve the timeliness of public health reporting [17, 18]. Other studies have considered the effects of health information exchange on emergency department utilization with mixed results [19–21].
We sought to determine whether health information exchange was associated with higher ambulatory care quality, defined primarily as higher rates of recommended testing. We previously found, in a cross-sectional study, that health information exchange was associated with higher ambulatory care quality. However, it was not possible in that study to rule out confounding by physicians' baseline quality of care. This study was designed to address that limitation. Our objective was to determine any association between health information exchange and quality of care, while adjusting for baseline quality of care.
We conducted a retrospective cohort study of primary care physicians in the ambulatory setting. All of the physicians had the opportunity to view clinical data through a free-standing Internet-based portal. We determined associations between actual usage of the portal and health care quality over time, adjusting for physician characteristics, including baseline quality.
This study took place in the Hudson Valley, the region of New York State immediately north of New York City. We included primary care physicians for adults (general internists and family medicine physicians) who were members of the Taconic Independent Practice Association (IPA), a not-for-profit organization that includes approximately 50% of the physicians in the Hudson Valley.
MVP Health Care is a regional health plan with covered lives in New York, Vermont and New Hampshire. The Taconic IPA is the exclusive provider network for MVP Health Care patients in the Hudson Valley of New York, though Taconic IPA providers also accept other types of insurance. We restricted our sample to those primary care physicians in the Taconic IPA who each had at least 150 patients with MVP Health Care. Since 2001, MVP has been issuing Physician Quality Reports to primary care providers, which compare a physician's performance to a regional benchmark, which is typically the mean performance for MVP's HMO product. Since 2002, financial bonuses have been given for performance exceeding these regional benchmarks.
The health information exchange portal is run by MedAllies, a for-profit company, which was created by the Taconic IPA [25, 26]. The portal, launched in 2001, is Internet-based and allows physicians to log in with secure passwords from any computer. The portal displays results over time and allows providers to view their patients' results regardless of whether the underlying tests were ordered by themselves or other providers. The portal is a central repository that uses a master patient index, standardized terminology, and interoperability standards.
The portal allows physicians to access results in 2 counties, with data from 5 hospitals and 2 reference laboratories. All results generated by these hospitals and laboratories flow through the portal, including admission, discharge and transfer information; inpatient and ambulatory laboratory results; radiology and pathology results; and all inpatient transcriptions (including history and physical, consult, operative, and discharge summary reports).
The portal overall receives 10,000 results per week, contains clinical information for more than 700,000 patients, and is accessed by more than 1600 users (including 500 physicians) in 175 practices. Approximately 90% of a typical primary care physician's patient panel is represented in the portal.
We received data from MedAllies on usage of the portal for 3 time periods: January – June 2005 (Time 1), July – December 2005 (Time 2), and January – June 2006 (Time 3). At that time, the portal was the most common health information technology intervention in the community, much more common, for example, than electronic health records, thus enabling measurement of the portal's effect in isolation from other interventions.
We received health care quality data from MVP Health Care, which MVP had collected from administrative claims and patient surveys and had included in its Physician Quality Reports. These data included performance on 13 metrics from the Health Plan Employer Data and Information Set (HEDIS) and 2 patient satisfaction metrics.
Of those measures, we expected 8 to be affected by use of the portal. These 8 measures either could be affected by the availability of external clinical data (e.g., laboratory tests) or were patient satisfaction measures, which could be improved by better provider knowledge of external clinical data.
We received physician-level data for all 15 quality metrics for 2 time periods. We used the first time period as the baseline for the study (July 2004 – June 2005) and the second time period as the follow-up (July 2005 – June 2006).
From MVP, we also obtained data on adoption of electronic health records during the baseline and follow-up time periods, as well as data on 9 other physician characteristics: gender, age, specialty, board certification, degree (MD vs. DO), physician group size, patient panel size, case mix and resource consumption. Case mix was calculated by MVP using the Diagnostic Cost Group (DxCG) All Encounter Explanation model [27, 28]. Resource consumption was also calculated by MVP, using its own algorithm, and reflected the total amount of health care services a physician's panel utilized. Case mix and resource consumption were each standardized, with values of 1.0 representing average, <1.0 lower than average, and >1.0 higher than average.
Portal usage was measured by MedAllies as the average number of days per month a physician logged in during each time period. Usage exceeding 15 days per month was truncated by MedAllies and treated as 15 days per month. We present the number of physicians who had any use during each time period, and we present descriptive statistics on intensity of use. We chose to dichotomize usage in each time period as any use or no use. We considered treating usage as a continuous variable but did not have enough statistical power to do so.
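For illustration, the truncation and dichotomization rules described above can be sketched as follows (a minimal sketch; the function name and example values are ours, not from the study data):

```python
# Sketch of the usage preprocessing described above.
# MedAllies truncated usage at 15 days/month; we dichotomized as any vs. no use.

def usage_category(logins_per_month):
    truncated = min(logins_per_month, 15)          # truncate at 15 days/month
    label = "user" if truncated > 0 else "non-user"
    return label, truncated

print(usage_category(22))  # ('user', 15)
print(usage_category(0))   # ('non-user', 0)
```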
Using Pearson correlation coefficients, we found that usage in one time period was associated with usage in another time period. We selected usage in Time 3 to be the main predictor, because this period had the most users and thus the most statistical power.
We used descriptive statistics to characterize our sample overall and stratified by usage in Time 3. We compared users to non-users, using t-tests for continuous variables and chi-squared or Fisher's exact tests for dichotomous variables.
We compared the performance of each physician to a regional benchmark: the mean performance of MVP's HMO. We assigned each physician a value of 1 if he or she scored equal to or better than the regional benchmark and 0 otherwise. We generated a quality index for each physician, equal to the number of metrics for which he or she performed at or better than the regional benchmark, divided by the total number of metrics for which that physician was eligible. The quality index could range from 0 to 100, with higher scores indicating higher quality. We calculated the average quality index for each study group (users and non-users) in each time period (baseline and follow-up). We compared the average quality index across study groups and across time periods using t-tests.
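The quality index described above can be sketched as follows (a minimal illustration; the metric names, physician scores, and benchmark values are hypothetical, not MVP's actual benchmarks):

```python
# Illustrative sketch of the quality index described above.
# A physician scores 1 on a metric when at or above the regional benchmark;
# the index is the percent of eligible metrics at or above benchmark (0-100).

def quality_index(scores, benchmarks):
    # Metrics with a score of None are ones the physician was not eligible for.
    eligible = [(s, benchmarks[m]) for m, s in scores.items() if s is not None]
    if not eligible:
        return None
    at_or_above = sum(1 for s, b in eligible if s >= b)
    return 100.0 * at_or_above / len(eligible)

benchmarks = {"hba1c_testing": 0.85, "ldl_screening": 0.80, "breast_screening": 0.75}
physician  = {"hba1c_testing": 0.90, "ldl_screening": 0.78, "breast_screening": None}
print(quality_index(physician, benchmarks))  # 50.0: meets 1 of 2 eligible metrics
```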
In order to adjust for potential confounders, we used generalized estimation equations. This method accounts for the clustering that occurs with having multiple quality metrics per physician and repeated measurements over time.
We constructed regression models in which quality at follow-up was the dependent variable. The independent variable was usage of the portal in Time 3, as defined above. We considered the 11 physician characteristics, including adoption of electronic health records and quality at baseline, as potential confounders. We entered those variables with bivariate p-values ≤0.20 into a multivariate model. We used backward stepwise elimination to generate the most parsimonious model, considering p-values ≤0.05 to be significant. We considered the 15 measures overall and stratified by whether or not the measures were expected to be affected by portal use.
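The backward stepwise elimination can be sketched schematically as follows (a simplification: the p-values are illustrative placeholders, and in the actual analysis each elimination step would be followed by refitting the GEE model, which changes the remaining p-values):

```python
# Schematic sketch of backward stepwise elimination as described above.
# p-values here are fixed placeholders; in a real analysis the model would be
# refit after each elimination and the p-values recomputed.

def backward_eliminate(pvalues, alpha=0.05):
    """Repeatedly drop the least significant variable (largest p-value)
    until every remaining variable has p <= alpha."""
    kept = dict(pvalues)
    while kept:
        worst = max(kept, key=kept.get)
        if kept[worst] <= alpha:
            break
        del kept[worst]   # in a real analysis, refit the model here
    return sorted(kept)

candidates = {"portal_use": 0.03, "baseline_quality": 0.001,
              "panel_size": 0.18, "ehr_adoption": 0.09}
print(backward_eliminate(candidates))  # ['baseline_quality', 'portal_use']
```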
All analyses were performed using SAS version 9.1 (SAS Institute, Inc., Cary, NC).
The funding sources had no role in the study's design, conduct or reporting.
Of the 168 primary care physicians in our original, cross-sectional study, 138 (82%) also had data for health care quality at follow-up and were included in this study. Of the 138 physicians in this study, most were male, with an average age of 48 years (►Table 1). Half were general internists, and half were in family practice. Most were board-certified with MD degrees. The average practice size was 4 physicians. Fewer than 1 in 5 had adopted electronic health records.
The number of physicians using the electronic portal increased over time. Of the 138 physicians, 46 (33%) used the portal in the first time period, 58 (42%) used it in the second time period, and 59 (43%) used it in the third time period. Forty (29%) physicians used it in all 3 time periods.
We found that usage in one time period was strongly associated with usage in other time periods (p<0.001). The correlation between usage in Time 1 and usage in Time 2 was 0.81, and the correlation between usage in Time 2 and usage in Time 3 was 0.78. Among users in Time 3, the average physician logged in 8 days per month (SD 6 days, median 7 days).
Users were similar to non-users in terms of gender, age, specialty, board certification, degree, practice size, case mix, and resource consumption (►Table 1). Users were more likely than non-users to adopt electronic health records (p = 0.03) and have larger panels of patients (p<0.001).
In terms of quality, physicians performed well, with ≥50% of the physicians overall meeting or exceeding the regional benchmark for 9 of the 15 measures (►Table 2).
When we considered the results by measure, there were trends (p<0.20) for better performance by users compared to non-users for 7 of the 15 measures (►Table 2).
When we aggregated across measures, non-users performed at or above the regional benchmark for 48% of the quality metrics at baseline and 49% at follow-up (p = 0.58, ►Figure 1). Users performed at or above the regional benchmark for 57% of the quality metrics at baseline and 64% at follow-up (p<0.001), an absolute improvement of 7 percentage points and a relative improvement of 12%. This is consistent with the bivariate association we observed between usage of the portal and higher quality of care at follow-up (►Table 3).
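The absolute and relative improvement figures for users follow directly from the two quality-index means, as a quick check shows:

```python
baseline, followup = 57.0, 64.0            # users' mean quality index (%)
absolute = followup - baseline             # change in percentage points
relative = 100.0 * absolute / baseline     # change relative to baseline
print(absolute)         # 7.0
print(round(relative))  # 12
```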
In the final multivariate model, usage of the electronic portal persisted as an independent predictor of quality of care at follow-up [Odds Ratio (OR) 1.42; 95% confidence interval (CI) 1.04, 1.95; p = 0.03; ►Table 4]. This model included adjustment for quality of care at baseline, which continued to be a strong predictor of quality at follow-up.
This association between usage of the portal and higher quality of care was significant for the subset of measures expected to be affected by the portal (adjusted OR 1.56; 95% CI 1.09, 2.23; p = 0.01) but not for those not expected to be affected (adjusted OR 1.49; 95% CI 0.90, 2.46; p = 0.12).
We found that usage of an electronic portal for viewing clinical data was independently associated with modest improvements in quality of care, with an absolute improvement of 7 percentage points and a relative improvement of 12%. This association persisted after adjusting for physician characteristics including baseline quality of care. This association was significant for the subset of measures that could be affected by access to external clinical data, but not for the other measures. The magnitude of the association we observed was modest and slightly less than the typical 12–20% improvement in quality seen for inpatient EHRs and for some previous ambulatory EHR initiatives.
This study is novel for its measurement of the effectiveness of a community-based portal on ambulatory quality, particularly defined as rates of recommended testing. Previous studies have focused on the related topics of physicians' perceptions [29, 30], patients' perceptions [16, 31–33], public health [17, 18], response time, redundant testing [12, 14, 15], emergency department utilization [19–21], and projected cost savings for the health care system.
This study is also notable for having taken place in small group practices, where the large majority of care is delivered and where adoption of HIT has lagged. The setting was not in an integrated delivery system [36–38] or a hospital-based practice, but rather a community with multiple payers and physician-owned practices. This study's fragmented setting is more typical of the majority of American health care. This study is also strengthened by having actual usage data, which is distinct from other studies that used the potential for access to data as the predictor.
This study has several limitations. It took place in a single community, which may limit generalizability. This community is active in practice-based quality improvement, and many of the physicians who participated in this study were in practices that subsequently went on to achieve recognition by the National Committee for Quality Assurance as Patient-Centered Medical Homes. The data are several years old, and newer iterations of the portal and other forms of health information exchange may yield different results. The study was not a randomized trial, so the findings may be confounded by unmeasured physician characteristics; however, this study adjusted for baseline physician quality, which makes a spurious correlation less likely. We did not have data on which physicians were in which practices, so we were not able to adjust for clustering at that level. We also could not account for potential bias introduced by the fact that 18% of providers did not have data at follow-up. Future studies could examine health information exchange in a more granular manner, attempting to link access to data with specific medical decisions. Larger studies could also consider whether a “dose-response” relationship exists between usage and quality.
The federal government is investing heavily in health information exchange through the Statewide Health Information Exchange Cooperative Agreement Program, the Beacon Community program, and the Direct program [40, 41]. These investments all build on the government's incentives for meaningful use of electronic health records. How these efforts will interact to affect care is still unfolding.
We found modest but significant improvements in ambulatory quality with the use of an electronic portal for viewing clinical data. This study took place in a community with small group practices and multiple commercial payers, typical of many communities across the country. The findings from this study can inform national discussions, as the country embarks on an unprecedented effort to encourage the adoption and meaningful use of HIT.
We studied a web-based portal that allowed providers to view their patients' clinical data, regardless of whether the underlying tests were ordered by themselves or by other providers. We found that users of this portal were more likely to provide higher quality of care than non-users, even after adjusting for provider characteristics.
The authors have no conflicts of interest to disclose.
The study was performed in compliance with the World Medical Association Declaration of Helsinki on Ethical Principles for Medical Research involving Human Subjects, and was reviewed by the Institutional Review Boards of Weill Cornell Medical College in New York City and Kingston Hospital in Kingston, New York.
The authors thank A. John Blair III, MD, President of the Taconic IPA and CEO of MedAllies; Jerry Salkowe, MD, Vice President of Clinical Quality Improvement at MVP Health Care; and Deborah Chambers, Director of Quality Improvement at MVP Health Care for providing access to data. The authors also thank Alison Edwards, MStat for her assistance with data analysis. This work was supported by funding from the Agency for Healthcare Research and Quality (grant #1 UC1 HS01636), the Taconic Independent Practice Association, the New York State Department of Health, and the Commonwealth Fund (grant #20060550). The funding sources had no role in the study's design, conduct or reporting. The authors had full access to all the data in the study and take full responsibility for the integrity of the data and the accuracy of the data analysis. Previous versions of this work were presented at the American Medical Informatics Association Annual Symposium in San Francisco, CA on November 17, 2009 and the National Meeting of the Society of General Internal Medicine in Phoenix, Arizona on May 5, 2011.