If you are of a certain age — maybe 40 or older — there’s a good chance you are missing your tonsils.
Until Robert Bolande, MD, published a seminal review of the evidence regarding “ritualistic” tonsillectomy in the New England Journal of Medicine1 in 1969, the procedure was accepted as a rite of passage for young children. Purported to cure a variety of maladies from recurrent respiratory and ear infections to scarlet fever and colds, tonsillectomies were performed over decades for whatever the medically accepted role of the tonsil was at the moment.
With a surgeon’s precision, Bolande dissected arguments for the procedure. He acknowledged what little controlled evidence existed to support it, lamented the “meager scientific justification,” and observed that a substantial number of children were subjected to the operation because “Physicians’ willingness to comply with parental demand has been institutionalized.”
Bolande’s exhortation to dismiss the “great legacy of misinformation from the past” made him something of a new breed of physician — not the healer who practiced medicine on the basis of ideas and theories but a scientist grounded in knowledge.
Bolande was also a decade or more ahead of the Dartmouth Atlas, clinical practice guidelines, and managed care — all of which coalesced to bury the era of anecdotal medicine.
For years, evidence-based medicine (EBM) has been managed care’s mantra. A well-designed, randomized, controlled trial published in a reputable medical journal was proof enough for third-party payers.
Now, that mantra might be “Don’t believe everything you read.”
Never before has the validity of the evidence base been under such attack. Depending on whom you ask, much of the published evidence is suspect because it was (a) funded by a biopharma manufacturer positioning its product in the best possible light, (b) submitted by an academic researcher desperate to “publish or perish,” or (c) riddled with methodological holes and illogical assumptions. Moreover, a good deal of research is designed to answer a narrow question, and no single study provides all of the evidence needed to assess any intervention accurately.
Some of managed care’s skepticism is justified. But the vitriol — perhaps best exemplified by a 2005 PLoS Medicine article by UnitedHealth executive Richard Smith titled “Medical Journals Are an Extension of the Marketing Arm of Pharmaceutical Companies” — will have to be overcome before our fragmented healthcare system can move on to aligning interests.
“Anytime you make a coverage decision based on the best evidence you have, the criticism from the scientific community is that the evidence is not perfect, which is always the case. Then you also get questions about payment from a variety of employers and health plans who may take exception to the way you interpret the evidence,” says Paul H. Keckley, PhD, executive director of the Deloitte Center for Health Solutions, in Washington. “It’s purgatory.”
There is hope, though, for restoring confidence in the evidence base. While biologic therapies stream out of the pipeline and threaten to send healthcare costs into the stratosphere, converging forces have pushed medical care toward a performance-based treatment model. The ensuing financial and political pressures may force stakeholders to agree on what the evidence should be. To that end, many stakeholders have faith in the new Patient-Centered Outcomes Research Institute and other initiatives to shepherd American healthcare into a golden era of evidence-based medicine. Will these succeed?
In some respects, EBM has gone in and out of fashion since the demise of the tonsillectomy. In the late 1980s, HMOs started to balk at off-label use of cancer medications, giving rise to the drug compendia that Medicare and many commercial insurers use to determine coverage. By 1992, the term “evidence-based medicine” had appeared, with the University of Oxford’s David L. Sackett, MD, defining it as “the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients.” On this side of the pond, the Agency for Health Care Policy and Research (AHCPR) began to develop clinical practice guidelines to help physicians reduce costly, unnecessary variations in care — and many HMOs made adherence to those guidelines a condition of payment.
Then came the managed care backlash of the late ’90s. Congress was besieged by constituents who were livid about HMO payment denials. In one particularly raucous congressional subcommittee hearing, Linda Peeno, MD, testified that “Managed care maims and kills patients” and “confessed” that in her previous life as a managed care medical director, she “used ‘economic credentialing’ to select the best inexpensive physicians and rarely correlated these with quality factors.” Congress took it out on AHCPR, stripping its funding and threatening it with de-authorization unless it got out of the guideline business.
Managed care retrenched but never lost sight of the goals of EBM. In a 2004 Medscape General Medicine article,2 Keckley — who at the time ran the Vanderbilt Center for Evidence-Based Medicine — noted that health plans were moving from managed utilization to evidence-based care management. He discussed the challenges that lay ahead: Medical directors understood EBM but other decision makers in their organizations did not; physicians didn’t routinely practice EBM but might respond to financial incentives to do so; and all would have to agree on what evidence-based practices were.
Since then, there has been a new flowering of EBM in managed care. But the garden can use some weeding.
“We are moving toward some sort of a performance-based treatment model with a comparative effectiveness driver,” Keckley says today. But defining the particulars of evidence-based practice, he adds, has proven very difficult.
“An avalanche of science has hit the system, and the tools we use to evaluate the science — the informatics — have become problematic. Sometimes, the more evidence we have, the more we realize we don’t have any evidence. We have studies that seem to contradict themselves, but when you look at the data, they don’t contradict as much as they show contrasts between methodologies.”
Another obstacle is agreeing on what evidence-based practice should be. This starts with analyzing claims data to see where practice variations exist. “The national plans have done a pretty good job” of that, Keckley says. “But below the top 45 or 50 plans, it has been inconsistent. It costs money to analyze data and build out your platform for evidence-based care.
“So what most [other] plans do is simply look at a medical society’s guidelines and say ‘That’s what we’ll do.’ But those guidelines often are not evidence based. A medical society can be selective in its determination of what’s appropriate evidence and what’s not.”
And therein lies the rub.
David Kloth, MD, medical director for Connecticut Pain Care, in Danbury, and past president of the American Society of Interventional Pain Physicians (ASIPP), maintains that differences among guidelines stem from whether the writing committee has considered the totality of the literature or has discounted or eliminated studies that do not fit its agenda. “Two different physicians or organizations can review the same article and come up with different conclusions and recommendations by applying different weights to the various evidence,” he says. “We have found that the bias of the organization strongly affects how it interprets the literature.”
To Kloth, that explains, for instance, discrepancies in guidelines issued by ASIPP, the American College of Occupational and Environmental Medicine, and the American Pain Society for the same procedures.
The same problem extends to the drug compendia.
In 2008–2009, research into compendia development funded by the Agency for Healthcare Research and Quality (AHRQ) generated two papers, including one published in Annals of Internal Medicine.3 In that paper, Amy Abernethy, MD, reported that for the 14 off-label indications she and her colleagues studied, the recommendations across compendia were inconsistent and that the evidence for them was scant and “often neither the most recent nor derived from the highest level of evidence.”
How does that happen?
One answer may be that a lot of science has a short shelf life. In 2007, Kaveh Shojania, MD, and colleagues at the Ottawa Health Research Institute analyzed 100 systematic reviews to determine how often they should be updated. Looking for subsequent changes in at least one primary outcome, in mortality, or in important safety data, Shojania determined that 15 percent of those reviews went out of date within a year, 23 percent were obsolete within two, and the average review was overturned within five years.
Biopharma-sponsored research has been skewered in the past couple of years by payers and journal editors skeptical of the industry’s motives. Senior contributing writer Michael D. Dalzell discussed this phenomenon with Garrett Bergman, MD, CSL Behring’s North American senior director for medical affairs.
Biotechnology Healthcare: In the past couple of years, much has been written about the evidence base being owned and published by proprietary interests. Increasingly, we’re hearing payer decision makers say they don’t trust the evidence base. Is some very good research being painted with the same brush?
Garrett Bergman, MD: As a rule, the evidence base derives from studies that are conducted following good clinical practice standards. Have there been exceptions? Yes. But they are few and far between. The objective of any study conducted to obtain product approval is to show relative safety and efficacy and the extent and likelihood of adverse effects. It’s in a pharmaceutical company’s best interests to flag problems as early in the generation of clinical data as possible, because each successive step in the study involves more patients, more cost, and therefore more risk to the company. If you project a problem, you don’t continue to invest millions of dollars on a drug that may pose safety issues after it’s on the market or that is not likely to be approved.
This is why companies work with experts in a particular disease state to help them understand the disease better and design a rigorous study that will ensure company resources are used most effectively. No doubt, there is some good research that’s been viewed with skepticism because of perceptions, but I haven’t really heard payers say they don’t trust the evidence base.
CSL Behring is in a space that is different from larger pharmaceutical companies; our products are designed to treat rare diseases. Therefore, the base of patients with whom we work is quite small, comparatively speaking. So, if you are interested in a disease that affects only hundreds or even a few thousand people in the United States and you conduct a study in which up to even 50 percent of those individuals participate, you may not be able to achieve the same level of robustness and statistical significance that could be obtained in studies of therapies for common diseases, such as heart disease or cancer.
The U.S. Food and Drug Administration understands this aspect of our business, and FDA reviewers try to be reasonable. They base their determination of how many people they want included in a study in part on the size of the target population. We view the relationship among the FDA’s scientists and doctors, CSL Behring’s researchers, and independent disease state experts as a collaboration. It’s essential for all drug companies to ensure that every aspect of a study, including the collection of clinical data, is transparent. One of the ways in which the biologics industry responds with drugs where relatively few people are involved is to continue rigorous testing, monitoring, and collecting and analyzing data after the drug has been approved and is being marketed.
BH: With respect to the evidence base, what considerations might you offer to payers — or where should payers look for meaningful data — before discounting a study simply because it is sponsored by industry?
Bergman: This is a classic example of throwing out the baby with the bathwater. We need to view this in a more realistic light. Pharmaceutical companies have the resources and motivation to conduct in-depth, robust studies of their investigational new drugs. Study length depends on what a new drug is supposed to accomplish and can take as little as six months and as long as three years. If a patient in a study can take a product once and see the positive effect, the study could take even less time. If it’s a drug that must be used to treat a chronic condition, the study takes longer, to ensure there are no serious adverse effects over time. Again, the emphasis here is on transparency.
There are also instances in which the National Institutes of Health will do a multicenter study that demonstrates the effect different treatments have on a disease state. This can generate statistically significant scientific data and can be an option for payers to consider. Another option is to consider studies by cooperative study groups, which run trials at multiple centers under a standard protocol. Cooperative studies typically yield the kinds of results that interest clinicians, such as how intervention in a particular disease helps patients. Where payers can go for other data depends on the disease state and the availability of literature. The validity of published studies has been enhanced by the requirement that authors fully disclose their relationships with sponsoring or competing drug companies when publishing papers and studies.
— Michael D. Dalzell
A second answer might be found in the complexity of the evidence base that Keckley alluded to.
“The challenge with evidence-based medicine is that for the practicing physician, the scientific rigor and details have become difficult to manage,” says Nilam Soni, MD, assistant professor of medicine at the University of Chicago. “How does a physician who went to medical school years ago keep up with and understand complicated statistics — the chi-square test or even P values? These are common terms in the research literature, but they may not be familiar to many practicing physicians.”
Soni, whose research interests include EBM, says that although researchers may understand the information, they don’t have contact with patients. “So how does a practicing physician apply the information at the bedside?”
A third answer — and the most cynical — says to follow the dollar.
The second of the two reports to come out of the AHRQ compendia project examined the potential for conflict of interest in compendia development. In an April 2009 white paper, “Potential Conflict of Interest in the Production of Drug Compendia,” Ross McKinney, MD, and colleagues at the Duke Evidence-based Practice Center found that disclosure policies for each of the compendia varied, as did the dollar thresholds for reviewers’ financial ties with industry and the proportion of reviewers with conflicts of interest. One compendium received significant funding from the pharmaceutical industry.
McKinney’s examination stemmed in part from the controversy over the development of guidelines for the use of erythropoiesis-stimulating agents (ESAs) in patients with chronic renal disease who are treated for anemia. National Kidney Foundation guidelines issued in 2006 recommended greater use of ESAs than did previous guidelines. Among the 16 members of the guidelines committee, 14 had financial relationships with manufacturers that would benefit from the more generous guidelines. The committee dismissed evidence from randomized, controlled trials that did not support aggressive ESA use while ignoring two major studies that linked greater use with an increased risk of cardiovascular events.
Within a year, CMS issued a national coverage determination (NCD) that not only dismissed the Kidney Foundation guidelines but also severely limited ESA uses for which it would pay. The NCD, which cited nearly 1,000 references, foreshadowed greater government scrutiny of the evidence base (see “PCORI’s effect on the evidence,” below).
Questions about biopharmas’ influence on the evidence base became increasingly pervasive as EBM regained steam during the second half of the last decade. “Pharmaceutical industry-sponsored clinical trials can have a corrosive impact both on physicians who derive substantial income from their participation and, in turn, on evidence claims themselves,” Howard Kushner, PhD, wrote in Permanente Journal4 earlier this year.
As power shifted from prescribers to third-party payers, pharma and biotech companies developed extremely sophisticated marketing techniques aimed at evidence-demanding MCOs. Biopharma now funds two-thirds of the trials published in New England Journal of Medicine, Journal of the American Medical Association, Lancet, and Annals of Internal Medicine. And right or wrong — but not coincidentally — the drug industry has taken the brunt of the rap on the evidence base.
In his PLoS Medicine article, Smith — the longtime editor of British Medical Journal — offered a critical look at how pharma gets the results it wants from clinical trials. The issue isn’t the technical quality of the studies, he wrote, but how the research is framed: Conduct trials against products known to be inferior; use multiple endpoints and subgroup analyses, then cherry-pick the most favorable results for publication; and present findings in the form most likely to impress, such as relative rather than absolute risk reduction.
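The relative-versus-absolute distinction Smith flags is easy to see with a little arithmetic. The sketch below uses purely hypothetical event rates (not drawn from any study mentioned here): a treatment that lowers an event rate from 2 percent to 1 percent can honestly be described as “cutting risk in half,” even though only one patient in a hundred benefits.

```python
# Hypothetical trial numbers, for illustration only.
control_rate = 0.02    # 2% of control patients have the event
treatment_rate = 0.01  # 1% of treated patients have the event

# Absolute risk reduction: the difference in event rates.
arr = control_rate - treatment_rate          # 0.01 -> 1 percentage point

# Relative risk reduction: the same difference, as a share of baseline risk.
rrr = arr / control_rate                     # 0.50 -> "cuts risk in half"

# Number needed to treat to prevent one event.
nnt = 1 / arr                                # 100 patients per event avoided

print(f"Absolute risk reduction: {arr:.1%}")   # 1.0%
print(f"Relative risk reduction: {rrr:.0%}")   # 50%
print(f"Number needed to treat:  {nnt:.0f}")   # 100
```

The same trial can thus be pitched as a 50 percent risk reduction or as one avoided event per 100 patients treated, which is why the choice of presentation matters to payers reading the abstract.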
Moreover, he wrote, publishing strategies abound. These range from suppressing poor trial results to reporting positive studies in multiple publications. Key findings often are published in supplements, and results from multicenter trials can be parsed out in any number of journals. It amounts to manipulation, Smith said, though he acknowledged the complicity of journal editors eager to publish gold-standard research and of publishers who like the profits that come with the sale of reprints.
None of those things are likely to inspire payers’ confidence in the literature. But Gil Bashe, executive vice president at Makovsky & Co., a New York healthcare public relations firm, says it’s unfair to discount industry-designed research en masse.
“We have to be extremely cautious not to create an apartheid system of science,” he says. “Industry researchers bring the same thoughtfulness, thoroughness, and quality as outside academic researchers. However, they have much more access to funding, and there are many more regulatory eyes gazing down on that research. So where academic research is peer-reviewed by colleagues, industry research has to go through a higher degree of analysis.”
The focus, he says, should be on quality of the research, “but often the question is: ‘Is there an agenda?’ People from academic centers have agendas — it could be grants, recognition, status. All the same, we have to look carefully at the data and accept [them] at face value.”
Nancy Dreyer, PhD, chief of scientific affairs and senior vice president at Outcome, a Cambridge, Mass.-based company that develops patient registries and post-approval studies, takes Bashe’s admonition not to dismiss industry-sponsored research a step further: “Don’t believe it just because it was funded by a popular nonprofit,” she says.
“It’s not about the funding. It’s about the execution, about being a critical reviewer. There are relatively few instances of fraud. What is much more common is that you hear only part of the story or you see analyses that aren’t well thought through.”
Converging events, though, may render this discussion obsolete.
When the dust finally settled on healthcare reform, the United States had itself a first: the Patient-Centered Outcomes Research Institute (PCORI), a public–private agency with a dedicated funding source to oversee comparative effectiveness research. Its 19-member board of governors represents academia, patient advocates, third-party payers, employers, and care delivery systems such as Kaiser Permanente and the Veterans Administration, along with three members from drug and device makers.
Jonothan Tierce, CPhil, general manager and Center of Excellence leader for IMS Health’s Global Health Economics and Outcomes Research unit, believes PCORI has the opportunity to restore a perception of legitimacy to the evidence base. “In the private sector, when companies partner to develop information, they both have a stake in the answers and in shaping the way research is conducted,” he says.
Up to now, proprietary interests have owned and published most information about their products, Tierce notes. The dissemination of PCORI-funded CER will mean that biotech and pharmaceutical companies will have far less control over the post-market research and publications about their wares. Call it a “stick approach” to forcing manufacturers to honor their phase 4 obligations.
“There’s also a carrot approach” to the advent of CER, he says. “There may be competitive reasons they want to do a postmarketing study, or they might want to get out ahead of the curve in terms of things like safety … in case some other research appears that calls into question the safety and efficacy of a product.”
What’s more, stronger postlaunch scrutiny could result in better prelaunch trial design. In the past, Tierce says, a manufacturer might have conducted a phase 3 study, marketed its product, and dealt with any known (or possibly suspected) issues later. “Now they’ll say, ‘Let’s really know this before we launch because we know there will be these concerns about it.’ There may be safety issues they want to put to rest in a phase 3 study, so they may power the study for safety, especially for rarer events, requiring larger study sizes.” Ultimately, he says, this will reduce the number of biologics reaching the market. Those that make it will have narrower indications and stronger labeling.
PCORI’s initial focus, though, will be to establish methodologies for research and data analysis, Tierce says. “This is the impact of the injection of funding into PCORI — we’ll do some things that we wouldn’t have been able to do before, because there was no single interest in someone spending that money. We need to think of that as a social investment.”
That work won’t be easy, says Keckley, at Deloitte. “One of the challenges is that many companies want to monetize their data and don’t want to comingle with any other data. To say that data that sit in a company’s vault should be a part of something bigger in a de-identified way, and if that company puts a substantial value on those data, what have you done to your shareholders? That’s going to be a healthy debate.”
In the past year, Soni and his colleagues at the University of Chicago have conducted CER on biomarkers for sepsis — specifically, their utility in predicting treatment outcomes in acutely ill patients. It’s one of many studies that are part of a larger AHRQ undertaking called the Effective Health Care Program. Like the highly respected Cochrane Collaboration, the program synthesizes both published and unpublished data, commissions original CER, and then summarizes the results for clinicians, policy makers, and consumers. Results are placed online, in plain English, for free.
Soni says the program can help clinicians, medical societies, and decision makers cut through the confusing informatics and the sheer volume of information coming across their desks every day. “It’s a useful way for societies to develop their guidelines,” he says, mentioning guidelines for anticoagulation to prevent deep-vein thrombosis in orthopedic procedures as an example. The American College of Chest Physicians and the American Orthopaedic Association, Soni says, publish guidelines, but “They don’t match. AHRQ is working with both societies to come up with synchronized guidelines.”
Electronic dissemination of credible information, Soni believes, will replace conventional forms of communicating evidence, which, he thinks, will create a user-friendly system that encourages more careful examination of the most relevant evidence.
Engaging practicing physicians in evidence synthesis rings true with Kloth, who sees the potential for a disconnect between the studies PCORI recommends and oversees and what the practicing physician needs to know when treating a patient. “Many of the persons who are appointed to these committees are researchers who look at statistics,” says the former ASIPP president. “You can’t just look at statistics. You have got to understand the specific patient’s problem, what alternative treatments exist, and then judiciously apply the scientific evidence to arrive at an appropriate treatment plan.”
There’s a movement to bring back another form of evidence to address that as well as issues like outcomes differences between controlled trials and real-world use. Though shunned by many decision makers as inferior, observational studies are, in Dreyer’s view, “an essential component of understanding what works.”
Observational studies conjure up a quaint memory of 1950s medical literature. But “Observational research has developed some very rigorous methodology in the last decade,” Dreyer notes. “And the truth is, we have learned a lot of things from that very blunt instrument. I started my career in epidemiology studying diethylstilbestrol. Through observational studies, they figured out a huge risk to children exposed in utero when their mothers used this product — and that was based on eight patients.”
Keckley thinks that when payers dismiss observational studies, they dismiss a valuable body of science. “Observational studies with strong correlation coefficients and predictive models are very plausible. Our ability to build out clear algorithms with high specificity ratios — one of those gold standards — is pretty good. We’re going to find that observational analytics, next to randomized controlled trials (RCTs), is the other gold standard. Our ability to use biomedical informatics to create causal relationships is the future of evidence-based medicine.”
Real-world evidence that documents why a biologic works better in some patients than in others would be gold to medical directors and P&T committees frustrated by a lack of comparative effectiveness data across RCTs. Their frustration can be understood when one considers the promise of a 2009 AHRQ comparative effectiveness review of immunomodulators, one of the fastest-growing expenditure categories among specialty pharmaceuticals. Yet the Evidence-Based Practice Center at Oregon Health and Science University ended its 210-page, 294-reference report by saying that “Insufficient evidence exists for most comparisons about the efficacy and safety” of nine biologics studied for rheumatoid arthritis and six other indications.
Inconclusive, yes, but the conclusion is appropriate, says Tierce, if those 294 studies tell all we know. “We have to rigorously interpret the total weight of the evidence without uncritically accepting every element of any study, rejecting a study out of hand just because of its funding source, and recognizing when the evidence is insufficient to draw a particular conclusion,” he says. “This is why evidence-based guidelines often say ‘insufficient evidence’ rather than ‘not recommended.’”
For payers, trust in the evidence base starts with relying on it to define what works. The hope is that PCORI, AHRQ, or others can generate a transparent, user-friendly evidence base that communicates the totality of the evidence in a credible way. The goal is to change the mantra to “If you read it, you can believe it.”