By David Tuller, DrPH
This week, the Journal of Health Psychology published a special issue containing a raft of commentaries on the PACE trial. Most of them slammed the study for its many, many unacceptable flaws. Not surprisingly, Sir Simon Wessely’s lackeys at the Science Media Centre immediately posted three comments from “experts” lauding the trial and criticizing the JHP commentaries. I thought it might be helpful to deconstruct these rather pathetic efforts at defending the indefensible.
I’ve posted all three statements below, followed by my comments. I decided to keep them relatively brief, although I could have gone on much longer.
Prof. Malcolm Macleod, Professor of Neurology and Translational Neuroscience, University of Edinburgh, said:
“The PACE trial, while not perfect, provides far and away the best evidence for the effectiveness of any intervention for chronic fatigue; and certainly is more robust than any of the other research cited. Reading the criticisms, I was struck by how little actual meat there is in them; and wondered where some of the authors came from. In fact, one of them lists as an institution a research centre (Soerabaja Research Center) which only seems to exist as an affiliation on papers he wrote criticising the PACE trial.
“Their main criticisms seem to revolve around the primary outcome was changed halfway through the trial: there are lots of reasons this can happen, some justifiable and others not; the main think is whether it was done without knowledge of the outcomes already accumulated in the trial and before data lock – which is what was done here.
“So I don’t think there is really a story here, apart from a group of authors, some of doubtful provenance, kicking up dust about a study which has a few minor wrinkles (as all do) but still provides information reliable enough to shape practice. If you substitute ‘CFS’ for ‘autism’ and ‘PACE trial’ for ‘vaccination’ you see a familiar pattern…”
Professor Macleod’s comments reflect a lack of understanding of both the illness itself and the fatal flaws of the PACE study. In his first sentence, he refers to “chronic fatigue.” As I and others have noted about 7 million times, “chronic fatigue” is a symptom of a great many illnesses; “chronic fatigue syndrome,” or “myalgic encephalomyelitis,” or ME/CFS or CFS/ME, or “systemic exertion intolerance disease,” is a specific disease entity.
Although there is still no universally accepted case definition, calling it “chronic fatigue” is a clear indication that Professor Macleod does not have a firm grasp on what he is talking about. The same could be said for the Science Media Centre’s failure to correct this phrasing. If they can’t even properly cite the illness in question, how can anything they claim about it be viewed as authoritative?
Professor Macleod does not engage in any substantive discussion of the criticisms outlined in the JHP commentaries. Instead, like other PACE defenders, he chooses to insult the authors. He notes that he “wondered where some of the authors came from” and suggests that some are of “doubtful provenance,” whatever that means. If he is still wondering who the commentary authors are, I can clue him in: They include experts from University College London, Northwestern University, DePaul University, the University of Hertfordshire, Victoria University of Wellington in New Zealand, UC Berkeley (that’s me), and the ME Association. Other authors are patients and independent scholars who have proven themselves time and again to be expert researchers with more integrity and honesty than the entire cabal of PACE defenders.
Professor Macleod also states this about the rampant outcome-switching in PACE: “The main think (sic) is whether it was done without knowledge of the outcomes already accumulated in the trial and before data lock – which is what was done here.” This further demonstrates that he does not understand what happened in PACE. There are, in some cases, legitimate reasons to change outcome assessment methods in clinical trials. However, simply deciding mid-trial that you like other outcome methods better is not a legitimate reason—especially when every single change allows the researchers to report more impressive results.
Moreover, in an open-label trial with subjective outcomes like PACE, investigators should have a pretty good idea which way things are trending before seeing the actual results. It is specious to claim that the PACE investigators were “without knowledge of the outcomes already accumulated”—they could easily have sensed that things were not going well and relaxed all their outcome measures as a result.
Furthermore, while they obtained oversight committee approval for changing the primary outcome in the 2011 Lancet paper, they apparently received no approval for their overhaul of the definition of “recovery”—at least, no such approval is cited in the 2013 Psychological Medicine paper. And two of the four “recovery” criteria—the physical function and fatigue outcomes—were from post-hoc analyses, so they were obviously not generated before “data lock.” Professor Macleod does not mention this issue; like the PACE authors themselves, he would prefer to ignore the embarrassing fact that 13% of participants were already “recovered” for physical function at baseline.
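The threshold overlap at the heart of this criticism is easy to make concrete. Per the published papers, trial entry on the SF-36 physical function scale (scored 0–100 in 5-point steps) required a score of 65 or less, while the revised “recovery” threshold was a score of 60 or more. A minimal sketch, using those published cutoffs (the enumeration of scores is illustrative):

```python
# SF-36 physical function subscale: 0-100 in steps of 5.
ENTRY_MAX = 65      # PACE entry criterion: score <= 65 (disabled enough to enroll)
RECOVERY_MIN = 60   # revised "recovery" threshold in the 2013 paper: score >= 60

def meets_entry(score):
    return score <= ENTRY_MAX

def meets_recovery(score):
    return score >= RECOVERY_MIN

# Which scores satisfy BOTH criteria at once?
overlap = [s for s in range(0, 101, 5) if meets_entry(s) and meets_recovery(s)]
print(overlap)  # -> [60, 65]: a participant scoring 60 or 65 at baseline is
                # simultaneously sick enough to enter the trial and already
                # "recovered" on this measure.
```

This is the arithmetic behind the 13% figure: any participant entering with a score of 60 or 65 met the revised recovery threshold on day one.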
Finally, it is rich that he brings up the analogy of anti-vaccination campaigners. Given that it was The Lancet that dramatically spurred that movement with its publication of the now-discredited 1998 Andrew Wakefield paper linking autism to vaccines, Professor Macleod’s statement just makes him appear clueless about The Lancet’s egregious behavior in both cases. I hope someone lets him know, sooner rather than later, that he has made a fool of himself.
Lancet editor Richard Horton vigorously defended the Wakefield study for years, just as he has defended PACE. And just as The Lancet finally retracted that paper, it will ultimately have to retract PACE as well.
Dr Neha Issar-Brown, Programme Manager, Population and Systems Medicine at the Medical Research Council (co-funders, along with the National Institute for Health Research, of the PACE trial), said:
“The Medical Research Council funded and supported the PACE trial after subjecting the research proposal to a robust peer-review process involving experts in the field, as is the case with all our funding decisions. This included ensuring adherence to standardised trials methodology and design principles. The researchers’ findings were then peer-reviewed before publication in journals. All research evolves by continually re-evaluating existing evidence and looking for new knowledge and we would always welcome high-quality research applications to better understand the underlying disease mechanisms, causes, prevention and treatments for this extremely debilitating condition.”
This statement from the Medical Research Council is not in fact a defense of PACE or a response to any of the criticisms. It is simply a description of the MRC’s role and of the publication process. Yet it is simply false that the PACE trial adhered to “standardised trials methodology and design principles,” as the commentaries make abundantly clear. Repeating this claim without engaging the critics does not alter the facts.
Moreover, the published studies are so full of flaws that it is absurd to cite the fact that they were peer-reviewed as evidence of their validity and reliability. Any study in which participants could meet outcome thresholds at baseline—and that includes both the 2011 and 2013 papers—should obviously never have passed peer review. What we know about The Lancet publication, in particular, is that the paper went through “endless rounds of peer review,” per editor Horton’s words, yet was simultaneously fast-tracked to publication. Despite my many efforts to extract an explanation from Dr. Horton, he has never bothered to explain how many “endless rounds of peer review” it is possible to complete during a fast-track peer review process.
A spokesperson for University of Oxford said:
“The PACE trial of Chronic Fatigue Syndrome treatments was conducted to the highest scientific standards and scrutiny. This included extensive peer review from the Medical Research Council, ethical approval from a Research Ethics Committee, independent oversight by a Trial Steering Committee and further peer review before publication in high-impact journals such as The Lancet.
“The allegation that criteria for patient improvement and recovery were changed to increase the reported benefit of some treatments is completely unfounded. As the study authors have repeatedly made clear, the criteria were changed on expert advice and with oversight committee approvals before any of the outcome data was analysed.
“Oxford University considers Professor Sharpe and his colleagues to be highly reputable scientists whose sole aim has been to improve quality of life for patients with ME/CFS. While scientific research should always be open to challenge and debate, this does not justify the unwarranted attacks on professionalism and personal integrity which the PACE trial team have been subjected to.”
Finally, this statement from Oxford’s unnamed “spokesperson” is just a jumble of public relations nonsense. It is always suspicious when an institution declines to put a name to a statement—it often means no individual is willing to take responsibility for what is being said. In the case of this utterly vacuous statement, it makes sense that smart communications professionals would not want it attributed to them.
Let’s take this paragraph in particular: “The allegation that criteria for patient improvement and recovery were changed to increase the reported benefit of some treatments is completely unfounded. As the study authors have repeatedly made clear, the criteria were changed on expert advice and with oversight committee approvals before any of the outcome data was analysed.”
The claim that the PACE investigators obtained “oversight committee approvals” for the wholesale redefinition of “recovery” is a lie. They did not obtain any committee approvals for this; at least, that’s the only conclusion that can be drawn from the fact that no such approvals were mentioned in the Psychological Medicine paper. It is perplexing to see an official statement from Oxford—presumably vetted by Professor Sharpe and the SMC (or perhaps not)—contain such a blatantly false claim.
Moreover, it is silly to argue that boosting outcomes was not the aim of the PACE investigators. Obviously, it was. The PACE investigators themselves have argued repeatedly that they relaxed outcome measures, particularly for “recovery,” because they decided mid-trial that the original measures were too stringent. So they clearly knew that the changes they made would improve the reported findings. In that sense, it doesn’t really matter whether they examined the data beforehand; if you make it easier to meet outcome measures by lowering your standards, then you obviously know you will achieve better results.
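The point that lowering a threshold mechanically inflates results, with or without any peek at the data, can be shown with a toy simulation. The numbers below are entirely invented, not PACE data; the only claim is the logical one: for the same scores, a relaxed cutoff can never report fewer “responders” than a stringent one.

```python
import random

random.seed(1)
# Simulated follow-up scores for a group with NO real treatment effect:
# scores scatter randomly around a mean of 40 on a 0-100 scale.
scores = [min(100, max(0, 40 + random.gauss(0, 15))) for _ in range(1000)]

def responder_rate(scores, cutoff):
    """Fraction of participants at or above the 'success' cutoff."""
    return sum(s >= cutoff for s in scores) / len(scores)

strict = responder_rate(scores, 85)   # a stringent threshold
relaxed = responder_rate(scores, 60)  # a relaxed threshold
print(f"strict: {strict:.0%}, relaxed: {relaxed:.0%}")
# The relaxed threshold reports at least as many "responders" purely by
# construction: every score >= 85 is also >= 60. No data peeking required.
```

That is the whole of Tuller’s point here: knowing the changes would improve the reported findings required no access to accumulating data, only an understanding of how thresholds work.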
So this Oxford statement is laughable. If these are the best defenses the Science Media Centre can concoct at this stage of the controversy, then PACE is really in big, big trouble.