By David Tuller, DrPH
*A clarification has been added to this post–see below.
It’s Thursday morning in Australia, and I’ve just arrived in Brisbane after a red-eye from Perth, with a week left to go on my tour Down Under. Of course I’m backed up on things I need to write about, and hope to have some down time soon to pull stuff together. (I only post on here from Monday to Wednesday, because Professor Racaniello posts from Thursday to Sunday. I’m doing this post now from the Brisbane airport so I can get it up before Wednesday ends in New York.)
First, for those keeping track, here is yet another clip of me talking. This is a conversation with young Australians with ME/CFS. They “meet” regularly for video chats, and invited me to join them a couple of weeks ago. I enjoyed hanging out with them online and suggested that I didn’t have to be in Australia for us to do this again:
Since criticizing my crowdfunding efforts, Professor Michael Sharpe has continued his quixotic and silly Twitter campaign to rebut his many critics. I have chosen for now not to engage with him further, since he is apparently impervious to reasoned argument. Of course, his self-portrayal as the aggrieved party in the PACE saga is unattractive and untrue.
At some point I might review his recent claims. Right now, however, I want to briefly mention some comments he made in 2011, shortly after The Lancet published the first PACE results, in a radio interview on ABC, the Australian Broadcasting Corporation:
“We have a number needed to treat; I think it’s about seven to get a clinically important treatment benefit with CBT and GET. What this trial isn’t able to answer is how much better are these treatments than really not having very much treatment at all.”
In epidemiology, the “number needed to treat” (NNT) is the number of people who must receive an intervention in order for one person to achieve the desired outcome. For the reported PACE results, as Professor Sharpe indicates, the NNT was seven–that is, for every seven patients treated, only one obtained what he called a “clinically important benefit” beyond what the comparison condition produced. He presumably calculated that by taking the percentage the investigators declared had “improved” with CBT (59%) and GET (61%) and subtracting the percentage who had “improved” in the comparison group of “specialist medical care” (45%). The difference is about 15 percentage points, which works out to roughly one in seven.
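The arithmetic behind that figure can be sketched in a few lines. This is just an illustration of the standard NNT calculation applied to the improvement rates quoted above (59% for CBT, 61% for GET, 45% for specialist medical care alone); the function name is my own, not anything from the trial.

```python
def nnt(treated_rate, comparison_rate):
    """Number needed to treat: the reciprocal of the absolute
    difference in improvement rates between the treatment arm
    and the comparison arm."""
    arr = treated_rate - comparison_rate  # absolute risk reduction
    return 1.0 / arr

# Improvement rates reported for PACE, as discussed above:
# CBT 59%, GET 61%, specialist medical care alone 45%.
print(round(nnt(0.59, 0.45)))  # CBT: 1/0.14, about 7
print(round(nnt(0.61, 0.45)))  # GET: 1/0.16, about 6
```

Both figures land in the neighborhood of Sharpe’s “about seven”: a roughly 15-point difference in improvement rates corresponds to one additional “improved” patient for every seven treated.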
Of course, this post-hoc definition of “improvement” was significantly less stringent than the definition outlined in the original protocol, as has been pointed out many times, most recently in the paper published last month in BMC Psychology that rebutted the PACE findings. Although the purported “improvement” rate of around 60% for the two active intervention groups has been cited in the past to tout the success of PACE, that figure looks much less robust when set against the fact that 45% “improved” without receiving either CBT or GET.
It has to be said that an NNT of seven in PACE is not a cause for cheering. It is an unimpressive result, especially when compared to the optimistic predictions made by the investigators in designing the trial in the first place. This impression is certainly compounded by the second sentence in the above quote–about whether the treatments are much better “than really not having very much treatment at all.”
The trial was designed to document whether or not these treatments worked. If, after spending five million pounds, the authors could not definitively show “how much better” these treatments were than not having them, why have the therapies become the de facto standard of care all around the world? Even if we take the reported results at face value–and of course I don’t, given the rampant outcome-switching and data manipulation–why have the PACE investigators promoted their study as proof of the effectiveness of CBT and GET rather than proof that they have almost no effect at all?
Of course, the PACE authors have somewhat protected themselves by arguing that the treatment effects are “modest.” But such exceedingly “modest” evidence should never have been used to try to impose these treatments on patients seeking care or hoping to access disability insurance benefits, whether in the U.K., Australia, or anywhere else. It certainly should not be used to threaten parents who refuse to have their children undergo the treatments–especially given the many reports of harm arising from GET and, to a lesser extent, the form of CBT promoted by the PACE authors.
For reasons that I don’t understand, no one has held Professor Sharpe to account for his statement that it was not clear from PACE “how much better” CBT and GET were than not having treatment at all. It seems to me that this statement meets the brilliant definition of “gaffe” proposed by U.S. political journalist Michael Kinsley: when someone inadvertently reveals an embarrassing truth. Kinsley was referring to politicians, but the same definition can certainly apply to scientists who seem to have made strenuous efforts to hide the pathetic results of their research.
The ABC interview with Sharpe contained other interesting moments, as did a companion interview with Lancet editor Richard Horton. I hope at some point to have time to deconstruct their statements in more depth. For now, I’ll just mention this: In his own interview, Dr. Horton claimed that PACE had gone through “endless rounds of peer review”–even though we know that the paper was fast-tracked to publication. Of course, Dr. Horton has never answered the question I have posed multiple times: How many “endless rounds of peer review” was it possible to conduct during a fast-tracking process?
*Clarification (April 20, 2018): It has been pointed out to me that the “number needed to treat” (NNT) for many if not most medical interventions is much higher than seven–often on the order of many dozens or hundreds. This is indisputable. In this post, I was referring specifically to the NNT for PACE, which is unimpressive given the circumstances. The investigators had made optimistic predictions for the effectiveness of CBT and GET. And the therapies were based on the presumption that people did not have an ongoing organic illness but were only experiencing symptoms caused by deconditioning–which itself was attributed to sedentary behavior arising from false illness beliefs.
If this hypothesis were true, then behavioral and psychological interventions designed to correct the “unhelpful” ideations and to get people to increase their activity levels should have produced more encouraging findings. Professor Sharpe’s subsequent statement–that the results were not robust enough to prove that undergoing these treatments was much better than not having much treatment at all–reinforces the point.