Comparative Effectiveness Research Stirs Excitement as Well as Debate
It doesn’t take a randomized clinical trial, the joke goes, to prove that it’s unwise to leap from an airplane without a parachute. However, randomized clinical trials have been vital in helping clinicians make informed choices about patient care, whether in treating advanced colorectal cancer or controlling high blood pressure.
Unfortunately, much of the treatment and medical management provided in the United States—some estimates suggest nearly half of all the care delivered—lacks a solid body of evidence to support its effectiveness, many health services researchers stress. The result, they argue, is uneven, sometimes poor-quality care that varies greatly from region to region and that often comes with a hefty price tag.
One potential remedy for this dilemma that has been touted by the Institute of Medicine (IOM), among others, is comparative effectiveness research (CER). Interest in, and debate about, CER has exploded recently thanks to $1.1 billion in funding provided under the newly enacted American Recovery and Reinvestment Act (ARRA), approximately one-third of which went to NIH. (See sidebar.)
A Flurry of Activity on CER
Under ARRA, the $1.1 billion set aside for CER is being split three ways:
- $300 million for the Agency for Healthcare Research and Quality
- $400 million for NIH
- $400 million for other HHS agencies
The NCI challenge grants for CER should be announced in the coming months, according to Dr. Brown. ARRA also requires the IOM to develop a report with recommendations on the priority areas that HHS should address with its CER funding. The IOM report is due by June 30.
Just last week, in fact, a committee of leaders in cancer research convened by the Friends of Cancer Research released a report calling for a “new paradigm” of CER in the United States, offering cancer care as a case study in how and why it should be done.
In the meantime, NIH institutes report being flooded with applications for funding to support the CER opportunities created by ARRA. And according to Dr. Martin Brown, chief of the Health Services and Economics Branch in NCI’s Division of Cancer Control and Population Sciences, NCI has also issued funding opportunities intended to help create a sturdier infrastructure for CER “that will really help to push the field forward.”
But What Is CER?
CER is typically defined as an evaluation of the impact of different options for treating or managing a given condition in a particular patient group. While clinical trials and other studies can help define appropriate care, many researchers concede that all such studies have limitations.
“We do a lot of things in medicine for which there are different approaches, and often we don’t know which one is the best,” Dr. Brown said. He cites, for example, managing patients with advanced cancers. “All kinds of treatments and strategies are being undertaken” in these patients, he said, “and we don’t have a good handle on what’s being done.”
Ideally, says Dr. Diana Buist from the Group Health Center for Health Studies in Seattle, comparative effectiveness studies will incorporate data from “real world” settings. Because most clinical trials are tightly controlled and enroll homogeneous patient populations, they are often not representative of the majority of patients being treated in the community.
Under the auspices of the NCI-supported Breast Cancer Surveillance Consortium, Dr. Buist and her colleagues have been conducting comparative effectiveness studies, including clinical trials in community clinics, of imaging for breast cancer screening and diagnosis. “There’s a tremendous amount of variability in clinical practice—at the patient, provider, and facility/clinic levels—that influences outcomes in imaging,” she said. “We’re trying to study how to provide the most effective breast cancer screening in community practices that reduces variability and optimizes screening delivery.”
Unlike most clinical trials, comparative effectiveness studies aren’t necessarily aimed at drawing a clear line of superiority between two interventions, said Dr. Steven Pearson of the Institute for Clinical and Economic Review (ICER) at Massachusetts General Hospital. Instead, these studies often look at a broader picture of effectiveness.
Last year, for example, ICER published a comparative effectiveness study of standard optical colonoscopy and computed tomography colonography (CTC), or “virtual” colonoscopy. The assessment did not focus solely on the detection of polyps of certain sizes, the “efficacy” endpoints used in colonoscopy clinical trials. Rather, it looked at data on factors such as mortality, patient preference, procedure costs, and procedure risks, in addition to polyp detection. The final report graded CTC according to different scenarios: compared against no screening at all and against standard colonoscopy based on different possible reimbursement rates for a CTC procedure.
These more expansive approaches for assessing effectiveness are desperately needed, Dr. Pearson said. “There are 18,000 randomized trials conducted every year and yet we always say we don’t have enough good evidence.”
Not Everybody Is Convinced
Although CER has its proponents, it has also raised concerns in different quarters of the health care system. During a White House-sponsored conference on health care reform last month, for example, Pfizer CEO Jeffrey Kindler questioned whether the results from comparative effectiveness studies would be “automatically linked” to insurance coverage decisions.
To a limited extent, insurers and other payers are using CER to guide coverage decisions, explained Dr. Barbara McNeil, who heads the Department of Health Care Policy at Harvard Medical School and sits on medical coverage advisory boards for Medicare and Blue Cross and Blue Shield. Last year, for example, the Washington State Health Care Authority, relying in part on the ICER study, cited lack of efficacy data and cost among the deciding factors against adding CTC as a covered benefit for state Medicaid patients.
Some oncologists also have misgivings. “We would certainly welcome more data and are always looking for better ways to treat our patients,” said Dr. Patrick Cobb, president of the Community Oncology Alliance and a managing partner of an oncology practice in Montana. But he’s concerned that results from CER could be used to limit treatment choice. “There could be instances where one treatment would be better for a certain patient than another.”
But, countered Dr. Brown, it’s a misconception that CER will lead to “one-size-fits-all” medicine. “There’s nothing inherent in CER that rules out looking at the issue of heterogeneity in many ways,” he said. “And that’s a good reason for NIH to be involved in CER, because the research managers and grantees have informed expertise on those sorts of issues.”