By David Tuller, DrPH
The Journal of Psychosomatic Research (JSR), an influential publication. recently published an article that made a crucial point—in clinical trials, subjective outcomes are at “a greater risk of bias due to any unblinding.” The article, which I wrote about here, was authored by the journal’s current editor and two previous editors, both of whom are still on the journal’s advisory board.
The article addressed whether a well-blinded clinical trial of homeopathy, which the journal had published years earlier, should be retracted. The details were complex and of little consequence here, beyond this: The journal’s decision not to retract the study rested on the assessment that the blinding remained robust despite one investigator’s efforts to undermine it. Given the article’s editorial provenance, it would likely be fair to assume the following passage represents the journal’s position:
“Reporting on the integrity of the blind has merit and is especially valuable when dealing with subjective outcomes for which there is a greater risk of bias due to any unblinding…Subjective outcomes are frequently used in studies that fall within this journal’s scope, at the interface of psychology and medicine. We recommend assessing the integrity of the blind for any clinical trial, particularly those utilizing subjective outcomes.”
One obvious corollary of this point is that extra care must be taken in interpreting subjective outcomes when intervention assignment is not blinded. Another is that, when blinding is not secure, objective outcomes do not present the same risk of bias as subjective ones.
Unfortunately, at least two members of JSR’s advisory board do not share this cautious view, at least judging by their research history: Professors Michael Sharpe and Peter White, two of the three lead PACE investigators. Now JSR has provided Professor White and several colleagues with yet another opportunity to misinterpret the findings of their research.
The new study is called “Guided graded exercise self-help for chronic fatigue syndrome: Long term follow up and cost-effectiveness following the GETSET trial.” This was essentially a home-based version of the GET intervention tested in PACE. The primary outcomes were self-reported fatigue and physical function. In 2017, the investigators reported in The Lancet that those in the graded exercise self-help (GES) arm reported modest but positive results at 12 weeks post-randomization, compared to those who received so-called standard medical care (SMC) alone. (The GES arm also received SMC.) Professor White was the senior author.
The study was hyped by the UK’s Science Media Centre, a beehive of support for the biopsychosocial ideological brigades. For the Centre’s round-up of “expert reaction,” some of the usual suspects offered cheery comments. “This study contributes to a body of evidence that graded exercise can help to improve functioning and reduce fatigue in people with chronic fatigue syndrome,” declared Professor Trudie Chalder, the other lead PACE investigator along with Professors Sharpe and White.
The Science Media Centre also solicited a comment from Professor Chris Ponting, a geneticist at the University of Edinburgh. Here’s what he said: “The beneficial effect [of GES] was for fewer than 1 in 5 individuals, for an unblinded trial, and there was no consideration of long-term benefit or otherwise. The study could also have exploited actometers that would have more accurately measured participant activity.”
As Ponting noted, the reported benefits were not impressive. The fact that the study was unblinded, he appeared to imply, raised questions about the credibility of even those meager results. In mentioning the decision to forego actometers, he was also highlighting the risk of bias inherent in relying on subjective outcomes in the context of unblinded research. In effect, he was drawing attention four years ago to a significant problem that the journal’s current and former editors addressed last month.
GETSET Follow-Up Fails Upwards
According to the new study, posted on April 2 with Professor White again as senior author, the GES intervention provided no benefits at one-year follow-up over SMC. Yet here’s how the findings were described in the “highlights” section describing the paper on the ScienceDirect site: “Guided graded exercise self-help (GES) can lead to sustained improvement in patients with chronic fatigue syndrome.”
Given the null results, this was deceptive. In the abstract, the conclusion was marginally less dishonest but still unacceptable: “The short-term improvements after GES were maintained at long-term follow-up, with further improvement in the SMC group such that the groups no longer differed at long-term follow-up.” (I have not yet been able to access the full study through the Berkeley library; I’m not sure why. Also, it is not clear to me if the investigators or others involved in the publication process wrote the “highlights” section.)
In sum, the findings were presented as if the study had proven the intervention to be effective over the long term, even though the findings documented the exact opposite. The fact that the non-intervention group caught up is treated as a secondary matter: ignored completely in the “highlights” section and downplayed in the abstract’s conclusion. (Another of the four “highlights” is significant but for some reason is not mentioned anywhere in the abstract: “Most patients remained unwell at follow up; more effective treatments are required.”)
As usual with these people, this is not the proper or transparent way to report the results of a clinical trial. The main outcome of a clinical trial—and the first that should be reported, even in a follow-up study—is the comparison between the intervention and non-intervention groups. So let’s be clear: At one year, the GETSET study produced null results for the only important comparison—between the group that received GES and the group that received SMC alone. The only conclusion possible from this study is that GES had no documented long-term benefits.
Any other way of framing the findings—such as the way the investigators have framed them—is spin. And egregious spin at that. Only investigators aware and perhaps scared that their findings undermine the foundational theories of their entire approach to intervention would try to disguise the bad news in this clumsy and anti-scientific manner.
Incidentally, Professor White and his PACE colleagues used this same silly parlor trick when they faced a similar dilemma a few years ago. The long-term PACE results, published in Lancet Psychiatry in 2015, showed no benefits for the CBT and GET interventions over the two comparison groups. As with GETSET and other follow-up studies of these psycho-behavioral treatments, the non-intervention study participants had caught up. And what did the PACE investigators do? As with GETSET, they declared success based on within-group findings. Then they tried to explain away the disastrous fact that they themselves had uncovered: Their prized intervention had no long-term benefits.
In this case, the GETSET follow-up’s hyping of its null results appeared to diss the journal’s own recent admonition that subjective findings carry “a greater risk of bias due to any unblinding.” That common-sense wisdom has apparently not yet tempered the continuing passion of the most devoted biopsychosocial brigadiers (a group that includes Professor White) to make unfounded claims of success.
In addition to its fake-news presentation of the GETSET results, the study also suggests that the intervention might be “cost-effective.” Huh? What does it mean for an intervention that produces null results to be cost-effective? Cost-effective at what? I’ll let others dissect that particular claim.