By David Tuller, DrPH
The Journal of Psychosomatic Research, a high-profile publication from Elsevier, has recently published an article relevant to long-standing arguments about trials that are both unblinded and reliant on subjective outcomes–like, say, the PACE study and related research into psycho-behavioral treatments for ME/CFS. This specific question–how to assess research quality when subjective outcomes are involved–is at the core of ongoing debates over both draft guidelines for ME/CFS from the UK’s National Institute for Health and Care Excellence and Cochrane’s contested review of exercise therapies for chronic fatigue syndrome.
The authors of the article in the Journal of Psychosomatic Research are the current editor (Jess Fiedorowicz, of the University of Ottawa) and two former editors (James L. Levenson of Virginia Commonwealth University, and Albert Leentjens of Maastricht University Medical Centre). The two former editors currently sit on the journal’s advisory board. I assume that the article thus reflects the journal’s own perspective. And that perspective would seem to contradict claims made by some of the journal’s prominent advisory board members, including two of the lead PACE investigators.
The article, first available online in early March, is called “When is lack of scientific integrity a reason for retracting a paper? A case study.” It discusses a 2004 paper published by the journal–a study investigating homeopathic treatment for what the investigators called chronic fatigue syndrome. Both participants and their providers were blinded to treatment allocation–that is, to whether a participant received a homeopathic preparation or a placebo. Not surprisingly, the findings did not support the notion that homeopathy was an effective treatment for chronic fatigue syndrome.
In 2019, the Journal of Psychosomatic Research received a request to retract the 2004 study. The request was based on the public admission by one of the homeopaths involved that she had engaged in actions meant to undermine the blinding of the treatment allocation. In the recent article, the three editors explain why they do not believe retraction is warranted in this case, despite what is a clear lapse in scientific integrity. The editors report that, after investigating the matter, they are convinced that the study findings remain robust and that any efforts to influence the outcomes did not work. They attribute this success to the rigor of the process of blinding everyone who was involved in the study.
So…the point of all this is not to report that homeopathy is not an effective approach for chronic fatigue syndrome (and presumably for the other clinical states grouped under that term, the name myalgic encephalomyelitis, and ME/CFS). It is to note that this team of top editors at a major psychosomatics journal places a great deal of value on the role of blinding in ensuring the integrity of research results. Their “case study” includes this significant passage:
“Reporting on the integrity of the blind has merit and is especially valuable when dealing with subjective outcomes for which there is a greater risk of bias due to any unblinding…. Un-blinded assessors of subjective binary outcomes may exaggerate odds ratios by an average of 36%. Subjective outcomes are frequently used in studies that fall within this journal’s scope, at the interface of psychology and medicine. We recommend assessing the integrity of the blind for any clinical trial, particularly those utilizing subjective outcomes akin to the primary outcomes of the…study in question.”
This passage does not directly address the problem of unblinded trials and subjective outcomes. But the editors certainly make no bones about the fact that subjective outcomes, absent blinding, are at “a greater risk of bias.” And they recommend checking the “integrity of the blind” as particularly important for trials “utilizing subjective outcomes”–like the various questionnaires chosen for the homeopathy study in question. It is understandable that these issues matter to these editors, given that, as they note, “subjective outcomes are frequently used in studies that fall within this journal’s scope.”
Who is on the journal’s advisory board?
A corollary to the message of the cited paragraph is that trials “utilizing subjective outcomes” that are not blinded at all are at even greater risk of bias. It is therefore interesting that the Journal of Psychosomatic Research’s roster of advisory board members includes Professors Michael Sharpe and Peter White—two leading proponents of the view that subjective outcomes in unblinded trials are not a problem for interpretation, at least when it comes to their own research.
Colleagues of Professors Sharpe and White who hold similar views are also members of this journal’s advisory board: Professors Per Fink from Denmark, Judith Rosmalen from the Netherlands, and Jon Stone from the UK. These professionals are all experts in the field of so-called “medically unexplained symptoms,” or MUS–in which they place ME/CFS as well as irritable bowel syndrome and other entities of unknown etiology. Yet the body of MUS research is riddled with studies that are unblinded and rely on self-reported outcomes.
Despite the lack of objective support for many of their findings, investigators in this field routinely overlook the role that bias plays in research relying on subjective outcomes in unblinded studies. That role has now been highlighted in a very prominent manner by a journal on whose advisory board these same investigators serve. Hm.
A leading example of this combination of being unblinded and relying on subjective outcomes is the PACE trial, with Professors Sharpe and White as two of the three lead investigators. PACE included four objective measures—a six-minute walking test, a step-test for fitness, whether people were employed, and whether they were on social benefits. All failed to match the subjective reports of improvement on questionnaires about physical function and fatigue. Rather than acknowledging this fact forthrightly and discussing the implications of these poor results for the validity and reliability of their subjective measures, the PACE authors scattered the findings across multiple papers–a decision that helped minimize attention to their collective failure.
The PACE investigators also questioned the objectivity of their own self-selected objective outcomes. They claimed they implemented the six-minute walking test differently than other investigators, so the disastrous results could not be compared to those from studies of people with a variety of conditions. They claimed that independent economic factors and changes during the period of the study meant that whether people got back to work or got off benefits would not be an objective measurement of the success or failure of treatment.
They had also dropped a key objective measure—wearing a movement monitor known as an actimeter for a week at the end of the trial. However, they retained it as a baseline measure, meaning participants wore it for a week at the start. Publicly, the investigators claimed they dropped it as an outcome because they believed that wearing it for a week would be too much of a burden for participants at the end of the trial. They did not explain why it would be more of a burden at the end of the trial than at the beginning.
When trial minutes were released after a freedom of information request, they revealed that the PACE team dropped the actimeters as an outcome after learning that Dutch colleagues had found their own results from this measure did not match their positive subjective reports. In other words, retaining the measure did not appear to be a useful way to do what they seemed determined to do: “prove” their therapies worked. So they disappeared it.
Now the current and former editors of the Journal of Psychosomatic Research have made declarative statements about blinding and subjective outcomes. So I’m curious: What is their opinion of the PACE trial? Do Fiedorowicz, Levenson and Leentjens know that this debacle was unblinded and relied on subjective outcomes, and that the investigators dismissed their own objective findings as irrelevant? Do they care about the contradiction between their own pronouncements on research integrity and the apparent belief among members of their advisory board that combining unblinded designs with subjective findings presents no problem for proclaiming treatment success?