Trial By Error, Continued: A Follow-Up Post on FITNET-NHS

By David Tuller, DrPH

David Tuller is academic coordinator of the concurrent master's degree program in public health and journalism at the University of California, Berkeley.

Last week’s post on FITNET-NHS and Esther Crawley stirred up a lot of interest. I guess people get upset when researchers cite shoddy evidence from poorly designed trials to justify foisting psychological treatments on kids with a physiological disease. I wanted to post some additional bits and pieces related to the issue.

*****

I sent Dr. Crawley a link to last week’s post, offering her an opportunity to send her response to Dr. Racaniello for posting on Virology Blog, along with my response to her response. So far, Dr. Racaniello and I haven’t heard back, and I doubt we will. Maybe she feels more comfortable misrepresenting facts in trial protocols and radio interviews than addressing the legitimate concerns raised by patients and confronting the methodological flaws in her research. I hope Dr. Crawley knows she will always have a place on Virology Blog to present her perspective, should she choose to exercise that option. (Esther, are you reading this?)

*****

From reading the research of the CBT/GET/PACE crowd, I get the impression they are all in the habit of peer-reviewing and supporting each other’s work. I make that assumption because it is hard to imagine that independent scientists not affiliated with this group would overlook all the obvious problems that mar their studies, like outcome measures that represent worse health than the entry criteria, as in the PACE trial itself. So it’s not surprising to learn that one of the three principal PACE investigators, psychiatrist Michael Sharpe, was on the committee that reviewed, and approved, Dr. Crawley’s one-million-pound FITNET-NHS study.

FITNET-NHS is being funded by the U.K.’s National Institute for Health Research. I have no idea what role, if any, Dr. Sharpe played in pushing through Dr. Crawley’s grant, but it likely didn’t hurt that the FITNET-NHS protocol cited PACE favorably while failing to point out that it has been rejected as fatally flawed by dozens of distinguished scientists and clinicians. Of course, the protocol also failed to point out that reanalyses of the trial data have shown that the findings published by the PACE authors were much better than the results obtained using the methods the authors had promised in their protocol. (More on the reanalyses below.) And as I noted in my previous post, the FITNET-NHS protocol also misstated the NICE guidelines for chronic fatigue syndrome, making post-exertional malaise an optional symptom rather than a required component, thus conflating chronic fatigue and chronic fatigue syndrome, just as the PACE authors did by using the overly broad Oxford criteria.

The FITNET-NHS proposal also didn’t note some similarities between PACE and the Dutch FITNET trial on which it is based. Like the PACE authors, the Dutch investigators relied on a post-hoc definition of recovery. The thresholds the FITNET investigators selected after they saw the results were pretty lax, which certainly made it easier to find that participants had attained recovery. And as in the PACE trial, the participants in the Dutch comparison group ended up in the same place as the intervention group at long-term follow-up. Just as CBT and GET offered no extended advantages in PACE, the online CBT provided in FITNET offered none either.

And again like the PACE authors, the FITNET investigators downplayed these null findings in their follow-up paper. In a clinical trial, the primary results are supposed to be comparisons between the groups. Yet in the follow-up PACE and FITNET articles, both teams highlighted the within-group comparisons. That is, they treated the fact that there were no long-term differences between the groups as an afterthought and boasted instead that the intervention groups sustained the progress they initially made. That might be an interesting sub-finding, but to present within-group results as a clinical trial’s main outcome is highly disingenuous.

*****

As part of her media blitz for the FITNET-NHS launch, Dr. Crawley was interviewed on a BBC radio program by a colleague, Dr. Phil Hammond. In this interview, she made some statements that demonstrate one of two things: Either she doesn’t know what she’s talking about and her misrepresentations are genuine mistakes, or she’s lying. So either she’s incompetent, or she lacks integrity. Not a great choice.

Let’s parse what she said about the fact that, at long-term follow-up, there were no apparent differences between the intervention and the comparison groups in the Dutch FITNET study. Here’s her comment:

“Oh, people have really made a mistake on this,” said Dr. Crawley. “So, in the FITNET Trial, they were offered FITNET or usual care for six months, and then if they didn’t make a recovery in the usual care, they were offered FITNET again, and they were then followed up at 2 to 3 years, so of course what happened is that a lot of the children who were in the original control arm then got FITNET as well, so it’s not surprising that at 2 or 3 years, the results were similar.”

This is simply not an accurate description. As Dr. Crawley must know, some of the Dutch FITNET participants in the usual care comparison group went on to receive FITNET, and others didn’t. Both sets of usual care participants, not just those who received FITNET, caught up to the original FITNET group. For Dr. Crawley to suggest that the reason the others caught up was that they received FITNET is, perhaps, an unfortunate mistake. Or else it’s a deliberate untruth.

*****

Another example from the BBC radio interview: Dr. Crawley’s inaccurate description of the two reanalyses of the raw trial data from the PACE study. Here’s what she said:

“First of all they did a reanalysis of recovery based on what the authors originally said they were going to do, and that reanalysis done by the authors is entirely consistent with their original results. [Actually, Dr. Crawley is mistaken here; the PACE authors did a reanalysis of improvement, not of recovery.] … Then the people that did the reanalysis did it again, using a different definition of recovery that was much, much harder to reach, and the trial just wasn’t big enough to show a difference, and they didn’t show a difference. [Here, Dr. Crawley is talking about the reanalysis done by patients and academic statisticians.] Now, you know, you can pick and choose how you redefine recovery, and that’s all very important research, but the message from the PACE Trial is not contested; the message is, if you want to get better, you’re much more likely to get better if you get specialist treatment.”

This statement is at serious odds with the facts. Let’s recap: In reporting their findings in The Lancet in 2011, the PACE authors presented improvement results for the two primary outcomes of fatigue and physical function. They reported that about 60 percent of participants in the CBT and GET arms reached the selected thresholds for improvement on both measures. In a 2013 paper in the journal Psychological Medicine, they presented recovery results based on a composite definition of recovery that included the two primary outcomes and two additional measures. In that paper, they reported recovery rates of 22 percent for the favored intervention groups.

Using the raw trial data that the court ordered them to release earlier this year, the PACE authors themselves reanalyzed the Lancet improvement findings according to the more stringent definition of improvement they had outlined in their own protocol. In this reanalysis, only about 20 percent of participants improved on both measures, a third as many as the 60 percent they reported in The Lancet. Moreover, about ten percent of the comparison group also improved, so the net benefit of CBT and GET amounted to improvement in roughly one additional participant in ten, a pretty sad result for a five-million-pound trial.
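For readers who want to check that arithmetic, here is a minimal back-of-the-envelope sketch in Python, using the rounded percentages cited above; it is purely illustrative and is not the trial’s actual analysis code or exact figures.

# Back-of-the-envelope check of the arithmetic above, using the rounded
# figures cited in the text (~60% in The Lancet, ~20% in the reanalysis,
# ~10% in the comparison group). Illustrative only.
lancet_rate = 0.60        # improvement rate reported in The Lancet (2011)
reanalysis_rate = 0.20    # protocol-defined improvement rate, CBT/GET arms
comparison_rate = 0.10    # protocol-defined improvement rate, comparison group

net_benefit = reanalysis_rate - comparison_rate
print(f"Reanalysed rate as a share of the Lancet figure: {reanalysis_rate / lancet_rate:.0%}")  # ~33%, a third
print(f"Net improvement attributable to CBT/GET: {net_benefit:.0%}")  # 10%, i.e. about one in ten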

However, because these meager findings were statistically significant, the PACE authors and their followers have, amazingly, trumpeted them as supporting their initial claims. In reality, the new improvement findings demonstrate that any benefits offered by CBT and GET are marginal. It is preposterous and insulting to proclaim, as the PACE authors and Dr. Crawley have, that this represents confirmation of the results reported in The Lancet. Dr. Crawley’s statement that the message from the PACE trial is not contested is of course nonsense. The PACE message has been exposed as bullshit, and everyone knows it.

The PACE authors did not present their own reanalysis of the recovery findings, probably because those turned out to be null, as was shown in a reanalysis of that data by patients and academic statisticians, published on Virology Blog. That reanalysis found single-digit recovery rates for all the study arms, and no statistically significant differences between the groups. Dr. Crawley declared in the radio interview that this reanalysis used a different definition of recovery, one that was much harder to reach. And she acknowledged that the reanalysis didn’t show a difference, but she blamed this on the fact that the PACE trial wasn’t big enough, even though it was the largest trial ever conducted of treatments for ME/CFS.

This reasoning is specious. Dr. Crawley is ignoring the central point: The recovery reanalysis was based on the authors’ own protocol definition of recovery, not some arbitrarily harsh criteria created by outside agitators opposed to the trial. The PACE authors themselves had an obligation to provide the findings they promised in their protocol; after all, that’s the basis on which they received funding and ethical permission to proceed with the trial.

It is certainly understandable why they, and Dr. Crawley, prefer the manipulated and false recovery data published in Psychological Medicine. But deciding post-hoc to use weaker outcome measures and then refusing to provide your original results is not science. That’s data manipulation. And if this outcome-switching is done with the intent to hide poor results in favor of better ones, it is considered scientific misconduct.

*****

I also want to say a few words about the leaflet promoting FITNET-NHS. The leaflet states that most patients recover with specialist treatment while fewer than ten percent recover with standard care. Then it announces that this specialist treatment is available through the trial, implicitly promising that most of those who get the therapy will be cured.

This is problematic for a host of reasons. As I pointed out in my previous post, any claims that the Dutch FITNET trial, the basis for Dr. Crawley’s study, led to recovery must be presented with great caution and caveats. Instead, the leaflet presents such recovery as an uncontested fact. Also, the whole point of a clinical trial is to find out whether a treatment works; in this case, whether the online CBT approach is effective, as well as cost-effective. But the leaflet essentially announces the result, recovery, before the trial even starts. If Dr. Crawley is so sure that this treatment is effective in leading to recovery, why is she doing the trial in the first place? And if she’s not sure what the results will be, why is she promising recovery?

Finally, as has been pointed out many times, the PACE investigators, Dr. Crawley and their Dutch colleagues all appear to believe that they can claim recovery based solely on subjective measures. Certainly any definition of recovery should require that participants can perform physically at their pre-sickness level. However, the Dutch researchers refused to release the one set of data (how much participants moved, as assessed by ankle monitors called actometers) that would have shown whether the kids in FITNET had recovered on an objective measure of physical performance. The refusal to publish this data is telling, and leaves room for only one interpretation: The Dutch data showed that participants did no better than before the trial, or perhaps even worse, on this measure of physical movement.

This FITNET-NHS leaflet should be withdrawn because of its deceptive approach to promoting the chances of recovery in Dr. Crawley’s study. I hope the advertising regulators in the U.K. take a look at this leaflet and assess whether it accurately represents the facts.

*****

As long as we’re talking about the Dutch members of the CBT/GET ideological movement, let’s also look briefly at another piece of flawed research from that group. Like the PACE authors and Dr. Crawley, these investigators have found ways to mix up those with chronic fatigue and those with chronic fatigue syndrome. A case in point is a 2001 study that has been cited in systematic reviews as evidence for the effectiveness of CBT in this patient population. (Dr. Bleijenberg, a co-investigator on the FITNET-NHS trial, was also a co-author of this study.)

In this 2001 study, published in The Lancet (of course!), the Dutch researchers described their case definition for identifying participants like this: “Patients were eligible for the study if they met the US Centers for Disease Control and Prevention criteria for CFS, with the exception of the criterion requiring four of eight additional symptoms to be present.”

This statement is incoherent. (Why do I need to keep using words like incoherent and preposterous when describing this body of research?) The CDC definition has two main components: 1) six months of unexplained fatigue, and 2) four of eight other symptoms. If you abandon the second component, you can no longer refer to this as meeting the CDC definition. All you’re left with is the requirement that participants have suffered from six months of fatigue.

And that, of course, is the case definition known as the Oxford criteria, developed by PACE investigator Michael Sharpe in the 1990s. And as last year’s seminal report from the U.S. National Institutes of Health suggested, this case definition is so broad that it scoops up many people with fatiguing illnesses who do not have the disease known as ME/CFS. According to the NIH report, the Oxford criteria can impair progress and cause harm, and should therefore be retired from use. The reason is that any results could not accurately be extrapolated to people with ME/CFS specifically. This is especially so for treatments, such as CBT and GET, that are likely to be effective for many people suffering from other fatiguing illnesses.

In short, to cite any findings from such studies as evidence for treatments for ME/CFS is unscientific and completely unjustified. The 2001 Dutch study might be an excellent look at the use of CBT for chronic fatigue*. But like FITNET-NHS, it is not a legitimate study of people with chronic fatigue syndrome, and the Dutch Health Council should acknowledge this fact in its current deliberations about the illness.

*In the original phrasing, I referred to the intervention mistakenly as ‘online CBT.’
