Comparative effectiveness, the evaluation of multiple treatments for one condition to find the best option, is essential to evidence-based medicine and coverage with evidence development. This article outlines comparative effectiveness and the changes it has undergone in recent decades.
Physicians and payers alike are demonstrating increased interest in the use of evidence-based medicine (EBM), or implementation of treatments that have proven track records of delivering desired results in specific patient populations.1 The cover story of Journal of Oncology Practice in November 2007, “Medicare's Coverage With Evidence Development: A Policy-Making Tool in Evolution,”1 discussed how EBM is related to coverage with evidence development, defined as Medicare coverage for a treatment or technology on the basis of data collection through a clinical trial or registry. Essential to both EBM and coverage with evidence development is the concept of comparative effectiveness (CE), or the evaluation of multiple treatments for a single disease or condition to identify the best option.1
CE has undergone major changes in recent decades. This article will explore the origin and evolution of CE, how CE may influence national health care, and what the future may hold for CE.
“Comparative effectiveness actually started with a group of researchers probably 20 years ago, when the term was actually coined,” says Mark Boutin, executive vice president and chief operating officer of the National Health Council, an organization that represents approximately 100 million individuals with chronic conditions. Boutin is former vice president of government relations and advocacy for the American Cancer Society for New England. The National Health Council will be publishing a report on CE, which will be accessible at www.nhcouncil.org, in the first quarter of 2009.
“[CE] was originally intended to mean development of research to help patients and family caregivers make more informed decisions with providers,” Boutin says. “It's gone through a lot of iterations, and a lot of different terms have come into the language around this topic, from evidence-based medicine to comparative cost effectiveness, and we're seeing a fairly significant shift in terms of what it means.”
He identifies this shift as a result, in part, of the differing needs of a diverse group of stakeholders. “I think it has a lot to do with the pressures of our current system to look for effective ways to provide care and pay for it,” Boutin says. “When you look at CE, it does give you the opportunity to do research and find out what treatments will work best for the largest group of population, and it has evolved into a space where you start to look at where you get the best results for the least cost for the largest group of people. The challenge, of course, is that while [a treatment] may work best for the majority or even 90%, it can be harmful to or even kill some others.”
Brenda Gleason is president of New York–based M2 Health Care Consulting, LLC, a strategic policy and communications consulting firm. Gleason suggests that regardless of the focus of one's search for information, the amount of data available through CE is now much greater than it was in earlier years. “CE used to conjure up the image of a dossier filled with head-to-head randomized clinical trials showing an intervention's effectiveness. While that is still considered the gold standard of evidence for the most part, CE has evolved in recent years to include more meta-analyses of clinical data,” she says. “Health information technology has enabled the sifting of millions of data points to look for signals and patterns that were undetectable in the past.”
The search for additional and more in-depth information on treatment options has been a consistent trend in the development of CE. “There has clearly been an increased interest in learning more about how different health care interventions work best for different patients,” says Jean Slutsky, PA, MSPH, director of the Center for Outcomes and Evidence of the Agency for Healthcare Research and Quality. “This is a natural evolution in making sure that we provide the most effective health care to the right patient at the right time.”
In a presentation at the 2008 Annual Meeting of the Hematology/Oncology Carrier Advisory Committee Network in September 2008, Slutsky reinforced that the focus of CE should be on improving patient outcomes, which she described as translating evidence into clinical action to provide the right care to the right patients. “Comparative effectiveness should be a public good to give health care decision makers a way to access rigorous, unbiased information about comparative benefits and harms of different therapeutics, closely aligned with daily care decisions,” she said.2 The goal, according to Slutsky, is to “develop and disseminate better evidence about benefits and risks of alternatives, not to identify winners and losers.” Additionally, CE can help decision makers determine which health care interventions add value, offer minimal benefit over current choices, fail to reach their potential, and work for some but not all patients. CE can also “tease out unintended consequences,” Slutsky said.
The biggest shift in the way CE is viewed has been marked by a recent and growing emphasis on CE as the basis for coverage decisions, made all the more critical by economic challenges in the United States. “CE is intended to uncover what is clinically effective. However, it has evolved to include not only an analysis of clinical effectiveness, but also cost effectiveness,” says Gleason. “As more and more treatments become available, payers are interested in determining not only what is the best treatment, but what is the most valuable treatment—that is, the best treatment for the price. The current economic situation is putting a spotlight on costs. In this environment, CE is likely to give more weight to cost-effectiveness data.”
Donald Moran, president of The Moran Company, a health care research and consulting firm based in Washington, DC, feels that CE and its application are largely influenced by what he refers to as underlying agendas. The health services research community is backing CE because it sees it as a funding source, and the political community is backing it because it sounds great. “In essence, there are really two things going on here that at some point the world must appreciate,” he says. There is a technical aspect, and there is a practical aspect.
With regard to the technical aspect, Moran says, “For every 100 studies that you do, you're going to get useful definitive evidence out of maybe 10, or at the outside 20, studies that says A is better than B for population C. The remaining 80% to 90% of the time, you're going to get noise—studies that don't reach conclusions. [This is] the nature of scientific research. What we understand about medicine is that different things work for different people for unknowable reasons. The prospect that all kinds of definitive decisions are going to become apparent from this work strikes me as remote at best. The vast [majority] of these studies is going to be inconclusive.”
Regarding the practical aspect, Moran says that the ultimate influence of CE depends on who carries the burden. “Right now, in effect, the payer community carries the burden in our system of reaching the conclusion that something is not medically necessary and therefore denying coverage on medical grounds. [A treatment is] generally covered unless payers can come up with a reason it should be excluded. If evidence is inconclusive, and the burden is on payers, there is not much traction. If you turn it around and make manufacturers and proponents carry the burden, so [a treatment is] not covered unless they can come up with evidence that it's medically useful, then [it would be] much more effective, and maybe too effective, because the majority of studies are inconclusive.”
“The question is not what the evidence shows, because the evidence is ambiguous,” Moran says. “The question is who bears the burden of what will and will not be covered.” He says that this change in burden “gives the anti-manufacturer bias people an agenda to push that sounds high minded and scientific. … To others, it's just kind of good government because it's the responsible thing to do, so we should do it. For manufacturers, it fills their hearts with fear and loathing.”
Burden aside, inherent in CE is the challenge of determining what “best” actually means, and how that can be evaluated in an uncontrolled, real-world environment. “What is positive about CE research is that it can give good info to a patient, a provider, and family caregivers so they can make better decisions,” Boutin says. But when one looks at CE in the context of a single individual, one sees that not every patient has the same goals, or the same “best.” Boutin points out that some patients will choose longevity, whereas others will choose quality of life, which may not necessarily include the same longevity as that resulting from another treatment.
In addition, what works in a clinical trial may not always work outside the clinical setting. “The reality is, when you do that research and you figure out based on a population model what's going to work well, you then need to take that research and deploy it,” Boutin says. “When you put this into the real world, where people have multiple comorbidities, are taking multiple medications, have their own lifestyle issues, [you] may find that it doesn't work very well when deployed in real-world settings, or maybe [among] women or seniors or children or ethnic and racial populations.”
Boutin explains, “In terms of looking at the whole topic … you have to do the research and then you have to deploy it and do the research in the real-world setting and see where it actually works and where it doesn't work. Are you actually achieving individual patient goals, helping them achieve what they are actually trying to accomplish? When you evaluate those two components, then you have good information that can help you make good coverage decisions.”
In 2007, the Enhanced Health Care Value for All Act (HR 2184),3 which proposed new funding for CE research, additional power for the advisory board of the Agency for Healthcare Research and Quality to make CE a priority, and a closer link between CE research and medical practice,1 was introduced, followed by the Comparative Effectiveness Research Act of 2008 (S 3408),4 commonly referred to as the Baucus Bill after cosponsor Senator Max Baucus (D-MT), chairman of the US Senate Committee on Finance. In brief, the Baucus Bill would have established a dedicated CE research institute and a health care CE research trust fund. The bill was unsuccessful, but its precepts play a key role in Baucus' “Call to Action: Health Reform 2009,”5 in which he proposes a national entity to provide “systematic, unbiased information about what treatments, technologies, and procedures work best.”
“In theory, this core concept of CE sounds like a great idea,” says Gleason. “In practice, it is difficult to understand how it would work, especially in a complex field of medicine such as oncology. If you were to gather 20 oncologists in the same subset of expertise and ask them, ‘What works best?’ they would in turn ask a litany of questions in order to begin to describe best treatment. This simple exchange of information is at the crux of the difficulty of achieving final answers via CE reviews. It is impossible to know what works best for an individual patient by looking at a list of guidelines. Instead, the physician, or team of physicians, makes recommendations based on multiple data points, including qualitative determinations such as a patient's mental state, ability to adhere to treatment regimens, and family support systems.”
When it comes to controlling spending, the stakes are high in a landscape in which the capacity to spend is limited by nationwide economic hardship. With CE as a tool to evaluate not only bottom-line effectiveness for patient outcomes, but cost effectiveness as well, it is likely that this pressure will continue to grow. Although all current proposed legislation regarding CE specifically excludes cost-effectiveness comparisons, the basis for such assessments is clearly inherent in the legislative analyses.
“Oncologists need to remain vigilant in using the guidelines in place when they exist, and continue to advocate for patients, and their own professional integrity as experts in their field, when the guidelines don't exist,” says Gleason.
ASCO provides a variety of tools to support its members in providing high-quality care for their patients, including the comprehensive Quality Care and Guidelines section on http://www.asco.org.
The Agency for Healthcare Research and Quality also offers an effective health care program at http://effectivehealthcare.ahrq.gov, which includes research reviews, summary guides, and reports of new research to help oncologists and others stay up to date on the latest information. For additional recommended reading, download the Congressional Research Service report for October 2007, “Comparative Clinical Effectiveness and Cost-Effectiveness Research: Background, History, and Overview,” at http://aging.senate.gov/crs/medicare6.pdf.
“Oncologists should participate in the discussion and work of comparative effectiveness. Their unique perspectives will help inform the process,” says Slutsky.
“I think that CE has been offered as a potential magic bullet,” says Boutin, “and that's going to create a lot of pressure and momentum to do this. I think we're going to find, though, that it doesn't serve as that magic bullet. I think it can have impact in quality, both positive and negative, but CE research alone is not going to solve the cost issue from our point of view. The cost issue is better achieved in bringing a number of systemic changes together that [focus on] delivery of care. No one item, whether health information technology (HIT) or CE, is going to solve the problem.”
Gleason echoes Boutin's sentiments; CE is not the be-all and end-all solution, but rather a tool to achieve overall better results for patients. “When CE is discussed at the national level, the concept is a simple one. Shouldn't we be practicing evidence-based medicine? Patients assume physicians are recommending treatments based on evidence,” she says. “The first step in the process should be to get the evidence that already exists into the hands of practicing physicians. Then we have a clear sense of what works—that is, what doctors should be recommending to their patients. This requires getting the scientific information into the clinical practice—something HIT can do much faster than the creation of a national CE body.” The creation of a national body to update CE recommendations annually is, says Gleason, at best an inadequate solution for changing the status quo.
What health care reformers are trying to achieve with CE is a solution similar to the National Institute for Clinical Excellence of the United Kingdom, says Moran. “Some kind of government-sponsored institution that makes determinations that are essentially binding in terms of what technologies and products can go into the UK market and under what pricing terms. Me, personally, I'm very dubious about it from a technical standpoint, because the majority of results are inconclusive. If the result is keeping treatments out of the market until they are definitively proven, that will have an immediate and chilling effect.”