Here's one head-to-head drug test that should have been a slam-dunk and wasn't.
A few years back the government funded a clinical study of two drugs -- one very expensive and the other not so much -- to treat a leading cause of blindness. But the problems along the way should give everyone pause about how hard it will really be to figure out which medicines and treatments are better values, the idea behind so-called comparative effectiveness research.
At issue are two treatments for age-related macular degeneration. In 2005, the FDA approved Genentech's Lucentis, a modified cancer drug, as the first-ever treatment for so-called wet AMD. The big downside? It costs $2,000 for a monthly dose.
Almost immediately, ophthalmologists instead began using Avastin, the original cancer drug, also from Genentech, even though it wasn't approved for macular degeneration. It cost only $50 per dose, and doctors who used it said it appeared to work just as well.
With the obvious need for a head-to-head comparison, particularly since 95 percent of wet AMD patients are on Medicare and thus treated at taxpayer expense, the National Eye Institute approved funding of a clinical study, called CATT for short, in 2006.
But that was the last part of the process that was simple, some of the study's lead researchers write in a piece in this week's New England Journal of Medicine. "Our experience with CATT highlights important roadblocks and dramatic changes needed in federal infrastructure for (comparative effectiveness research) to be conducted efficiently," wrote the authors from the Cleveland Clinic and University of Pennsylvania.
The first obstacle came in figuring out who would pay for the drugs when there was no drug company to sponsor the research. Existing Medicare policy didn't allow payment for the drugs; it took a specific policy change, which didn't come until 2007.
Then there was the problem of patient co-payments. The gap between a drug that costs $2,000 a dose and one that costs $50 a dose was obviously enormous, presenting a challenge not just for recruiting patients, who would not want to land in the more expensive group, but for keeping the research "blind." Even patients with supplemental insurance would be getting "explanation of benefits" statements that would make it clear which drug they were getting.
Eventually, Congress passed a bill creating the payment mechanisms needed to carry out such trials, but not until July 2008. That came too late for the CATT study, which finally got underway in early 2008 and reached its patient capacity in late 2009.
But the delays weren't without cost, the authors noted. "The roadblocks delayed study initiation by more than a year, while another 200,000 patients and their doctors had to make decisions without important information about relative efficacy and safety," they wrote.
And without a more comprehensive policy to cover both federal and private insurers about who will pay for drugs being tested in such trials, they note, "it is difficult to imagine that the $1.1 billion" included in the 2009 stimulus bill for comparative effectiveness research "will be used effectively."