Trial By Error: My Letters to Psychosomatics Journal About Prof White’s Misleading GETSET Paper

By David Tuller, DrPH

In early April, I wrote about a study published in the Journal of Psychosomatic Research: a one-year follow-up of the GETSET trial of self-help graded exercise therapy for ME/CFS. The investigators had previously reported short-term benefits for the intervention. In this new paper, despite finding no benefits of the intervention over regular care at one year, the team reported success because the intervention arm did not score worse than it had at 12 weeks. Needless to say, this is not the proper way to report clinical trial results.

On April 24th, I sent a letter to Professor Jess Fiedorowicz, editor-in-chief of the Journal of Psychosomatic Research. He responded quickly and promised to review the matter with journal colleagues. Given the August deadline for the National Institute for Health and Care Excellence to publish the final version of its revised ME/CFS guidelines, I sent a follow-up letter today to try to nudge the journal to respond sooner rather than later.

Below I have posted the exchange.

**********

Dear Dr Fiedorowicz–

The Journal of Psychosomatic Research recently published a study called “Guided graded exercise self-help for chronic fatigue syndrome: Long term follow up and cost-effectiveness following the GETSET trial.” Professor Peter White, a member of the journal’s advisory board, is the senior author.

In this clinical trial, the investigators were testing a self-help graded exercise program, which they reported had shown some short-term benefits. According to the new study, Clark et al, the intervention provided no benefits at one-year follow-up over specialist medical care (SMC). Yet here’s how the findings were described in the “highlights” section: “Guided graded exercise self-help (GES) can lead to sustained improvement in patients with chronic fatigue syndrome.”

Given the null results, this description is troubling. In the study abstract, the conclusion is marginally better but still unacceptable: “The short-term improvements after GES were maintained at long-term follow-up, with further improvement in the SMC group such that the groups no longer differed at long-term follow-up.”

In sum, the results were presented as if the trial had shown the intervention to be effective–even though the one-year findings should have reasonably led to the opposite assessment. The fact that the non-intervention group scored the same at the end is framed as a matter of lesser significance–ignored completely in the “highlights” section and given second billing in the abstract’s conclusion.

(I wrote about the problems with Clark et al in a post on Virology Blog, a science site hosted by Vincent Racaniello, Higgins Professor of Microbiology at Columbia University. I have cc’d him on this letter.)

The main outcomes of a clinical trial are the comparisons between the intervention and non-intervention groups, not within-group comparisons. The main point that should have been made in both the highlights and the conclusion is this: At one year, there were no benefits in the group that received the self-help graded exercise plus SMC over the group that received SMC alone. In short, the study had null results.

Any other way of framing the findings is inappropriate. The framing of Clark et al is a clumsy effort to gloss over the fact that the intervention failed to show benefits at one year by standard clinical trial metrics. 

This is the same problematic strategy Professor White and colleagues used to handle the long-term follow-up to the discredited PACE trial, which at that point also showed no benefits for the interventions over SMC. In Lancet Psychiatry, the authors claimed success because of the “within-group” comparisons and presented the null results for the between-group comparison as an afterthought. That is spin. It is not proper science.

Beyond this issue, Clark et al’s presentation of its results appears to undermine the spirit of an admirable recent commentary in the journal, in which you and two of your predecessors as editor noted that subjective outcomes have “a greater risk of bias due to any unblinding.” (I wrote about this commentary on Virology Blog.)

GETSET was an unblinded trial relying on subjective outcomes, exactly the kind that would be most “at risk of bias,” per your cogent commentary. It is therefore perplexing that the Journal of Psychosomatic Research did not apply a more rigorous approach to evaluating Clark et al than appears to have been the case.

The presentation of what are clearly null results as a success suggests that the peer review was less than rigorous. The trial design itself also stands in apparent disregard of the journal’s own professed position on subjective outcomes and the importance of blinding. Perhaps peer review procedures are different for papers authored by the journal’s advisory board members. 

As you might know, the UK’s National Institute for Health and Care Excellence is currently developing new clinical guidance for ME/CFS. The final version is scheduled to be published in August. That means Clark et al still has the unfortunate potential to influence the deliberations. I am therefore cc-ing several clinician and patient members of the NICE ME/CFS guidance committee to alert them to the concerns about the study.

At this point, the Journal of Psychosomatic Research should correct any statements in Clark et al implying that the most important findings are the intervention arm’s within-group comparisons rather than the null results for the comparisons between the groups. Thank you for your attention to this matter.

Best–David

David Tuller, DrPH
Senior Fellow in Public Health and Journalism
Center for Global Public Health
School of Public Health
University of California, Berkeley

**********

Thank you for bringing this concern to our attention. I have forwarded this and will discuss with our publisher, Associate Editors, and immediate past Editors and respond following discussion with the broader group.

Best wishes in life, work, and advocacy,

Jess—

Jess G. Fiedorowicz, M.D., Ph.D.
Editor-in-Chief, Journal of Psychosomatic Research
Adjunct Faculty, Departments of Psychiatry, Epidemiology, and Internal Medicine
The University of Iowa

**********

Dear Jess–

It has been almost two weeks since I alerted the journal to the problems with the reporting of the GETSET one-year follow-up results. By promoting the within-group comparison for the intervention arm rather than the null results of the between-group comparison–a form of outcome-swapping–Professor White and his colleagues engaged in a deceptive presentation of their data. As I have pointed out, Professor White was already familiar with this strategy, since the PACE trial follow-up paper similarly downplayed null results for the between-group comparisons by highlighting the within-group comparisons first. Perhaps Professor White and colleagues are unaware that this is not an acceptable way to report clinical trial results, even at follow-up.

As you know, there is a hard deadline here–the National Institute for Health and Care Excellence is planning to release the final version of its revised ME/CFS clinical guidance in August, and deliberations are ongoing. It would be fair to assume this paper is being raised by some committee members to push for GET to be re-endorsed, since the draft version released in November recommended against it. The GETSET follow-up reads like a paper designed to influence the NICE debate, although I have no way of knowing whether that was in fact the goal.

The urgency of the matter means the standard academic tendency to examine and debate issues for weeks and months is not appropriate in this instance. Cc-ing a few NICE committee members, as I did with my initial letter to you and am doing again here, is also insufficient to avert the potentially disastrous outcome of having this study taken at face value. The Journal of Psychosomatic Research has an obligation to make it clear sooner rather than later that this follow-up report documented null benefits for GET at one year and should not have been framed as evidence for the effectiveness of the intervention.

Beyond that, the journal should investigate–and explain to the public–how and why a paper with such an elementary flaw passed peer review in the first place. My Berkeley epidemiology colleagues would be dismayed if their students reported study results in this dishonest fashion.

Going forward, the journal’s recent affirmation of the bias inherent in subjective outcomes when blinding is not rigorous should certainly be taken into account during the peer review process. If your laudable editorial on this important issue is to amount to more than pretty phrases, perhaps the journal should refrain in future from accepting any unblinded studies that rely solely on subjective outcomes–even though this is the sort of study design favored by Professor White and other members of your editorial advisory board.

By your own standards, the reported results of such research are inevitably fraught with bias, rendering them of questionable value. Given that these studies can nonetheless impact both health policy and clinical decision-making, routinely peer-reviewing and publishing them would appear to be unwarranted as a scientific matter as well as antithetical to the interests of patients.

I look forward to hearing how the journal plans to address these critical matters involving both the GETSET follow-up and the larger issue of unblinded studies relying on subjective outcomes. Thanks!

Best–David
