By David Tuller, DrPH
David Tuller is academic coordinator of the concurrent master's degree program in public health and journalism at the University of California, Berkeley.
Wow, the research from the CBT/GET crowd in The Netherlands never ceases to amaze. Like the work of their friends in the U.K., each study comes up with new ways to be bad. It’s almost too easy to poke holes in these things. And yet the investigators appear unable to restrain themselves from making extremely generous over-interpretations of their findings–interpretations that cannot withstand serious scrutiny. The investigators always conclude, no matter what, that cognitive and/or behavioral therapies are effective for treating the disease they usually call chronic fatigue syndrome.
That this so-called science manages to get through peer review is astonishing. That is, unless we assume the studies are all peer-reviewed by other investigators who share the authors’ “unhelpful beliefs” and “dysfunctional cognitions” about ME/CFS and the curative powers of cognitive behavior therapy and graded exercise therapy.
Let’s take a quick look at yet another Dutch study of CBT for adolescents, a 2004 trial published in the BMJ. This one offers a superb example of over-interpretation. The small trial, with 71 participants, had two arms. One group received ten sessions of CBT over five months. The other received…a place on a waiting list for treatment. That’s right–they got nothing. Guess what? Those who got something did better on subjective measures at five months than those who got nothing. The investigators’ definitive conclusion: CBT is an effective treatment for sick teens.
I mean, WTF? It’s not hard to figure out that, you know, offering people some treatment is more likely to produce positive responses to subjective questions than offering them a place on a waiting list. That banal insight must be right in the first chapter of Psychological Research for Dummies. Aren’t these investigators presenting themselves as authorities on human behavior? Have they heard of something called the placebo effect?
Here’s what this BMJ study proved: Ten sessions of something lead to more reports of short-term benefits than no sessions of anything. But ten sessions of what? Maybe ten sessions of poker-playing or ten sessions of watching Seinfeld reruns while holding hands with the therapist and singing “The Girl from Ipanema” in falsetto would have produced the same results. Who knows? To flatly declare that their findings prove that CBT is an effective treatment—without caveats or an iota of caution—is a huge and unacceptable interpretive leap. The paper should never have been published in this form. It’s ridiculous to take this study as some kind of solid “evidence” for CBT.
But from the perspective of the Dutch research group, this waiting-list strategy apparently worked so well that they used it again for a 2015 study of group CBT for chronic fatigue syndrome. In this study, providing CBT in groups of four or eight patients worked significantly better than placing patients on a waiting list and providing them with absolutely nothing. Of course, no one could possibly take these findings to mean that group CBT specifically is an effective treatment—except they did.
When I’m reading this stuff I sometimes feel like I’m going out of my mind. Do I really have to pick through every one of these papers to point out flaws that a first-year epidemiology student could spot?
One big issue here is how these folks piggy-back one bad study on top of another to build what appears to be a robust body of research but is, in fact, a house of cards. When you expose the cracks in the foundational studies, the whole edifice comes tumbling down. A case in point: a 2007 Dutch study that explored the effect of CBT on “self-reported cognitive impairments and neuropsychological test performance.” Using data from two earlier studies, the investigators concluded that CBT reduced self-reported cognitive impairment but did not improve neuropsychological test performance.
Which studies was this 2007 study based on? Well, one of them was the very problematic 2004 study I have just discussed–the one that found CBT effective when compared to nothing. The other was the 2001 study in The Lancet that I wrote about in my last post. As I noted, this Lancet study claimed to be using the CDC criteria for chronic fatigue syndrome, but then waived the requirement that patients have four other symptoms besides fatigue. So it was, in effect, a study of a heterogeneous group of people suffering from at least six months of fatigue.
This case definition—six months of fatigue, with no other symptoms necessary—was used in the PACE trial and is known as the Oxford criteria. It has been discredited because it generates heterogeneous populations of people suffering from a variety of fatiguing illnesses. The results of Oxford criteria studies cannot be extrapolated to those with ME/CFS.
The 2007 study relies on the accuracy and validity of the two studies whose data it incorporates. Since those earlier studies violated basic understandings of scientific analysis, the new study is also bogus and cannot be taken seriously.
The PACE authors themselves have perfected this strategy of generating new bad papers by stacking up earlier bad ones. In November, Trudie Chalder demonstrated her personal flair for this technique as co-author of a systematic review of “attentional and interpretive bias towards illness-related information in chronic fatigue syndrome.” The authors’ conclusion: “Cognitive processing biases may maintain illness beliefs and symptoms in people with CFS.” The proposed solution to that would obviously be some sessions of CBT to correct those pesky cognitive processing biases.
Among other problems, Dr. Chalder and her co-authors included data from Oxford criteria studies. By including in the mix these heterogeneous samples of people suffering from chronic fatigue, Dr. Chalder and her colleagues have invalidated their claim that it is a study of the illness known as chronic fatigue syndrome. Of course, Psychological Medicine, which published this new research gem, is the journal that published—and has consistently refused to correct–the PACE “recovery” paper in which participants could get worse but still meet “recovery” thresholds.
The Dutch branch of the CBT/GET ideological brigade has been centered at Radboud University Nijmegen, home base for many years of two of the movement’s leading lights: Dr. Gijs Bleijenberg and Dr. Hans Knoop. Dr. Knoop recently moved to the University of Amsterdam and is currently a co-investigator of FITNET-NHS with Esther Crawley. Dr. Bleijenberg, on the occasion of his own retirement a few years ago, had this to say about his longtime friend and colleague, PACE investigator Michael Sharpe: “Dear Mike, we know each other nearly 20 years. You have inspired me very much in the way you treated CFS. Thanks a lot!”
Indeed. Dr. Bleijenberg and his Dutch colleagues appear to have learned a great deal from their PACE besties. Dr. Bleijenberg and Dr. Knoop demonstrated their own nimble use of language in the 2011 commentary in The Lancet that accompanied the publication of the first PACE results. I discussed this deceptive commentary at length in a post last year, so I won’t regurgitate the whole sorry argument here. But the Dutch investigators themselves are well aware that their claim that thirty percent of PACE participants met a “strict criterion” for recovery is preposterous.
How do I know that Dr. Bleijenberg and Dr. Knoop know this? Because as I documented in last year’s post, claims in the 2011 commentary contradict and ignore statements they themselves made in a 2007 paper that posed this question: “Is a full recovery possible after cognitive behavioural therapy for chronic fatigue syndrome?” (The answer, of course, was yes. Peter White, the lead PACE investigator, was a co-author of the 2007 paper.) Moreover, Dr. Bleijenberg and Dr. Knoop certainly know that the “strict criterion” they touted included thresholds that some participants had already met at baseline—yet they have still refused to correct this statement.
Given that all of these studies present serious methodological concerns, the Dutch Health Council panel considering the science of ME/CFS should be very, very wary of using them to formulate recommendations. The panel should understand that, within the next few months, peer-reviewed analyses of the original PACE data are likely to be published. (Two such analyses—one by the PACE authors themselves, one by an independent group of patients and academic statisticians–have already been published online, without peer review.) The upcoming papers will demonstrate conclusively that the “benefits” reported by the PACE team were mostly or completely illusory—and were obtained only by methodological anomalies like dramatic and unacceptable changes in outcome measures.
In an open letter to The Lancet posted on Virology Blog last February, dozens of prominent scientists and clinicians condemned the PACE study and its conclusions in harsh terms. In the U.K., the First-tier Tribunal cited this worldwide dismay about the trial's egregious lapses while demolishing the PACE authors' excuses for withholding their data. The studies from the Radboud University crowd and their compatriots all rest on the same silly, unproven hypotheses of dysfunctional thinking, fear of activity, and deconditioning, and are just as intellectually incoherent and dishonest.
Should the Health Council produce a report recommending cognitive and behavioral treatments based on this laughable body of "research," the organization could become an international joke and suffer enormous long-term reputational damage. The entire PACE paradigm is undergoing a very public unraveling. Everyone can now see what patients have seen for years. Meanwhile, biomedical researchers in the U.S., Norway, and elsewhere are zeroing in on the actual pathophysiology underlying ME/CFS.
It would be a shame to see the Dutch marching backwards to embrace scientific illiteracy and adopt an “Earth-is-flat” approach to reality.
And for a special bonus, let's now take another quick peek at Dr. Crawley's work. Someone recently e-mailed me a photo of a poster presentation by Dr. Crawley and three colleagues. This poster was shown at the inaugural conference of the U.K. CFS/ME Research Collaborative, or CMRC, held in 2014. The poster was based on information from the same dataset used for Dr. Crawley's recent Pediatrics study. As I pointed out two posts ago, that flawed study claimed a surprisingly high prevalence of 2% among adolescents—a figure that drew widespread attention in media reports.
Dr. Crawley has cited high prevalence estimates to argue for more research into and treatment with CBT and GET. And if these prevalence rates were real, that might make sense. However, as I noted, her method of identifying the illness was specious—she decided, without justification or explanation, that she could diagnose chronic fatigue syndrome through parental and child reports of chronic fatigue, and without information from clinical examinations. In fact, after those who appeared to have high levels of depression were removed, the prevalence fell to 0.6%–although this lower figure is not the one Dr. Crawley has emphasized.
Despite the high prevalence, however, the same dataset showed that adolescents suffering from the illness generally got better without any treatment at all, according to the 2014 poster presentation. Here's the poster's conclusion: "Persistent CFS/ME is rare in teenagers and most teenagers not seen in a clinical service will recovery [sic] spontaneously."
Isn’t that great? Why haven’t I seen these hopeful data before? Although the poster predated this year’s Pediatrics paper, the data about very high rates of spontaneous recovery did not make it into that prevalence study. Moreover, the FITNET-NHS protocol and the recruitment leaflet highlight the claim that few adolescents will recover at six months without “specialist treatment” but most will recover if they receive it. Unmentioned is the highly salient fact that this “specialist treatment” apparently makes no long-term difference.
In reality, the adolescents who recovered spontaneously most likely were not suffering from ME/CFS in the first place. Dr. Crawley certainly hasn't provided sufficient evidence that any of the children in the database she used actually had it, despite her insistence on using the term. Presumably, some unknown number of those identified as having chronic fatigue syndrome in the Pediatrics paper and in the poster presentation did have ME/CFS. But many or most were likely experiencing what could only be called a bout of chronic fatigue, for unknown reasons.
It is disappointing that Dr. Crawley did not include the spontaneous recovery rate in the Pediatrics paper or in the FITNET-NHS protocol. In fact, as far as I can tell, these optimistic findings have not been published anywhere. I don’t know the rationale for this decision to withhold rather than publish substantive information. Perhaps the calculation is that public reports of high rates of spontaneous recovery would undermine the arguments for ever-more funding to study CBT and GET? Just a guess, of course.
(Esther–Forgive me if I’m mistaken about whether these data have been published somewhere. I have only seen this information in your poster for the inaugural CMRC conference in 2014. If the data have been peer-reviewed and published, I stand corrected on that point and applaud your integrity.)