By David Tuller, DrPH
It’s another month, and here’s another worthless paper from Trudie Chalder, King’s College London’s factually and statistically challenged professor of cognitive behavior therapy (CBT). In her desperate effort to prove that the treatment paradigm for ME/CFS combining CBT and graded exercise therapy (GET) is evidence-based, she has now published a paper called “A systematic review of randomized controlled trials evaluating prognosis following treatment for adults with chronic fatigue syndrome.”
The article was published in the journal Psychological Medicine, which seems to function as an in-house public relations organ for the GET/CBT ideological brigades. Professor Michael Sharpe and Professor Sir Simon Wessely are on the editorial board; that says it all. In 2013, this august journal published the PACE trial “recovery” paper, in which participants could get worse on key measures and still be counted as “recovered.” Editors at Psychological Medicine refused to acknowledge the violations of standard scientific practice on full display in that paper. Articles in the journal, at least on this issue, cannot be taken at face value.
In this latest piece, Professor Chalder and colleagues trot out data on post-treatment, short-term, medium-term, and long-term outcomes from 15 papers based on clinical trial data. The authors claim that the collective results show benefits after treatment with CBT and GET. As with so much of what Professor Chalder touches, it is crap. She does not understand that her work is essentially meaningless—except in that it occasionally documents the opposite of what she claims. That was the case with the now-discredited PACE trial, which is one of the studies included in the review. (Papers rebutting the PACE findings are not included.)
To start off, it is jarring to read a 2022 article that cites the 2007 guidelines from the UK’s National Institute for Health and Care Excellence (NICE) for what the agency then called CFS/ME–but that completely ignores the 2021 version for ME/CFS. In fact, the article was submitted to Psychological Medicine last November, a month after publication of the new ME/CFS guidelines. At that point, the 2007 guidelines were no longer operative, and referencing them as if they were renders the paper immediately out-of-date–as if the authors are still living in a past era. This decision also raises questions about the competence and integrity of the authors. (On the other hand, this is Professor Chalder, so in her case such questions of competence and integrity have already been definitively answered.)
After decades of research into and promotion of the CBT/GET approach, these are the lukewarm conclusions of the review: “Results suggest some support for the positive effects of CBT and GET at short-term to medium-term follow-up although this requires further investigation given the inconsistent findings of previous reviews. Findings may not be generalizable to severe CFS.” And that unimpressive description is from those seeking to put the best face on things!
What about long-term follow-up?
The evidence base presented in the review hardly seems like a solid basis for assuring disability insurance companies that these interventions can get patients back to work–even though that is what members of the CBT/GET cabal have routinely maintained. Moreover, the review authors make no mention in their conclusions of benefits at long-term follow-up—that is, after more than a year. It is well-known that multiple studies in this field, like the PACE trial, have reported no significant differences between groups on key outcomes at long-term follow-up. The interventions, in other words, have been shown to have no long-term benefits. The review does not highlight or discuss the implications of this key point.
Then there are statements like this: “Current literature shows that cognitive behavioral therapy (CBT) and graded exercise therapy (GET) are the most promising treatments, both of which yield improvements in fatigue and functioning.” This assertion is untenable after the most recent authoritative assessment, the 2021 NICE guidelines, found otherwise in reversing the 2007 recommendations for these interventions. In doing so, NICE assessed the bulk of the evidence for CBT and GET as of “very low” quality, with the rest merely of “low” quality. It is perplexing that neither editors at Psychological Medicine nor peer reviewers apparently noticed or cared about this salient and unacceptable omission of the new guidelines.
Not surprisingly, the studies were all open-label trials relying on subjective outcomes—a combination of factors that can maximize reporting bias, for a range of reasons. Combining multiple such studies into a systematic review increases the overall numbers, but does not fix a key problem—how to interpret findings fraught with an unknown level of bias.
Chalder and her colleagues acknowledge the reliance on subjective outcomes as a limitation that “may have increased the risk of observer or detection bias.” In response, they recommend that “future trials should obtain a range of outcome measures and investigate potential discrepancies between them” and “should report objective outcomes, in addition to self-report measures.”
It is ironic and hypocritical–laughable, really–that Professor Chalder should raise these points. The PACE trial included four different objective outcomes—a 6-minute walking test, a step-test for fitness, employment status, and whether or not the participant was receiving social benefits. All four failed to match the positive reports of success. Yet rather than “investigate potential discrepancies” between their own objective and subjective outcomes, the PACE team dismissed the objectivity of their failed objective measures.
This new systematic review does not discuss the range of poor objective results from the PACE trial and other research. The decision to exclude these data marks this paper as a public relations document, not a serious examination of the overall impact of CBT and GET. As usual, Professor Chalder has produced more self-serving garbage.
(The review seems to include at least one flat untruth about PACE. In referencing conflicting reports on work status after treatment, the authors suggest that McCrone et al, a PACE trial paper, reported “modest increases in employment.” Here’s the actual phrasing from McCrone et al: “There was no clear difference between treatments in terms of lost employment.”)