An open letter to PLoS One

Update: July 30, 2018

This letter inaccurately explained the issues with the PLoS One study. Specifically, when I wrote it I misunderstood which analyses had and hadn’t been done. The letter was e-mailed to PLoS One long ago, so I can’t correct what has already been sent. But the following three paragraphs should have replaced the middle section of the letter:

“The PACE statistical analysis plan included three separate assumptions for how to measure the costs of what they called “informal care”—the care provided by family and friends—in assessing cost-effectiveness from the societal perspective. The investigators promised to analyze the data based on valuing this informal care at: 1) the cost of a home-care worker; 2) the minimum wage; and 3) zero cost. The latter, of course, is what happens in the real world—families care for loved ones without getting paid anything by anyone.

In PLoS One, the main analysis for assessing informal care presented only the results under a fourth assumption not mentioned in the statistical analysis plan—valuing this care at the mean national wage. The paper did not explain the reasons for this switch. Under this new assumption, the authors reported, CBT and GET proved more cost-effective than the two other PACE treatment arms. The paper did not include the results based on any of the three ways of measuring informal care promised in the statistical analysis plan. But the authors noted that sensitivity analyses using alternative approaches “did not make a substantial difference to the results” and that the findings were “robust” under other assumptions for informal care.

Sensitivity analyses are statistical tests used to determine whether, and to what extent, different assumptions lead to changes in results. The “alternative approaches” mentioned in the study as being included in the sensitivity analyses were the first two approaches cited in the statistical analysis plan—valuing informal care at the cost of a home-care worker and at minimum wage. The paper did not explain why it had dropped any mention of the third promised method of valuing informal care—the zero-cost assumption.”
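To make the stakes of those assumptions concrete, here is a minimal sketch of the kind of calculation involved. All figures below are hypothetical illustrations, not PACE data, and the function name is mine:

```python
# Illustrative sensitivity analysis: how the assumed hourly value of
# informal (unpaid) care changes the estimated societal cost of care.
# All numbers are hypothetical and are NOT taken from the PACE dataset.

HOME_CARE_WORKER_RATE = 15.0  # hypothetical hourly replacement cost (GBP)
MINIMUM_WAGE_RATE = 6.0       # hypothetical hourly minimum wage (GBP)
ZERO_RATE = 0.0               # informal care valued at nothing

def societal_cost(direct_cost, informal_care_hours, hourly_rate):
    """Direct health-service cost plus informal care valued at hourly_rate."""
    return direct_cost + informal_care_hours * hourly_rate

# One hypothetical patient: £1,000 of direct costs, 200 hours of unpaid care.
for label, rate in [("home-care worker", HOME_CARE_WORKER_RATE),
                    ("minimum wage", MINIMUM_WAGE_RATE),
                    ("zero cost", ZERO_RATE)]:
    print(f"{label:>16}: £{societal_cost(1000.0, 200.0, rate):,.2f}")
```

Under these toy numbers the societal cost of the same patient ranges from £1,000 to £4,000 depending solely on the valuation assumption, which is why reporting all three promised analyses matters.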


PLoS One
1160 Battery Street
Koshland Building East, Suite 100
San Francisco, CA 94111

Dear PLoS One Editors:

In 2012, PLoS One published “Adaptive Pacing, Cognitive Behaviour Therapy, Graded Exercise, and Specialist Medical Care for Chronic Fatigue Syndrome: A Cost-Effectiveness Analysis.” This was one in a series of papers highlighting results from the PACE study—the largest trial of treatments for the illness, also known as ME/CFS. Psychologist James Coyne has been seeking data from the study based on PLoS’ open-access policies, an effort we support.

However, as David Tuller from the University of California, Berkeley, documented in an investigation of PACE published last October on Virology Blog, the trial suffered from many indefensible flaws, as patients and advocates have argued for years. Among Dr. Tuller’s findings: the main claim of the PLoS One paper–that cognitive behavior therapy and graded exercise therapy are cost-effective treatments–is wrong, since it is based on an erroneous characterization of the study’s sensitivity analyses. The PACE authors have repeatedly cited this inaccurate claim of cost-effectiveness to justify their continued promotion of these interventions.

Yet the claim is not supported by the evidence, and it is not necessary to obtain the study data to draw this conclusion. The claim is based solely on the decision to value the free care provided by family and friends as if it were compensated at the level of a well-paid health care worker. Here is what Dr. Tuller wrote last October about the PLoS One paper and its findings:

        The PLoS One paper argued that the graded exercise and cognitive behavior therapies were the most cost-effective treatments from a societal perspective. In reaching this conclusion, the investigators valued so-called “informal” care—unpaid care provided by family and friends—at the replacement cost of a homecare worker. The PACE statistical analysis plan (approved in 2010 but not published until 2013) had included two additional, lower-cost assumptions. The first valued informal care at minimum wage, the second at zero compensation.

        The PLoS One paper itself did not provide these additional findings, noting only that “sensitivity analyses revealed that the results were robust for alternative assumptions.”

        Commenters on the PLoS One website, including [patient] Tom Kindlon, challenged the claim that the findings would be “robust” under the alternative assumptions for informal care. In fact, they pointed out, the lower-cost conditions would reduce or fully eliminate the reported societal cost-benefit advantages of the cognitive behavior and graded exercise therapies.

        In a posted response, the paper’s lead author, Paul McCrone, conceded that the commenters were right about the impact that the lower-cost, alternative assumptions would have on the findings. However, McCrone did not explain or even mention the apparently erroneous sensitivity analyses he had cited in the paper, which had found the societal cost-benefit advantages for graded exercise therapy and cognitive behavior therapy to be “robust” under all assumptions. Instead, he argued that the two lower-cost approaches were unfair to caregivers because families deserved more economic consideration for their labor.

        “In our opinion, the time spent by families caring for people with CFS/ME has a real value and so to give it a zero cost is controversial,” McCrone wrote. “Likewise, to assume it only has the value of the minimum wage is also very restrictive.”

        In a subsequent comment, Kindlon chided McCrone, pointing out that he had still not explained the paper’s claim that the sensitivity analyses showed the findings were “robust” for all assumptions. Kindlon also noted that the alternative, lower-cost assumptions were included in PACE’s own statistical plan.

        “Remember it was the investigators themselves that chose the alternative assumptions,” wrote Kindlon. “If it’s ‘controversial’ now to value informal care at zero value, it was similarly ‘controversial’ when they decided before the data was looked at, to analyse the data in this way. There is not much point in publishing a statistical plan if inconvenient results are not reported on and/or findings for them misrepresented.”

Given that Dr. McCrone, the lead author, directly contradicted in his comments the paper’s claim that sensitivity analyses had confirmed the “robustness” of the findings under other assumptions, it is not necessary to scrutinize the study data to confirm that this central finding cannot be supported. Dr. McCrone has not responded to e-mail requests from Dr. Tuller to explain the discrepancy. And although PLoS One was alerted to this problem by Dr. Tuller last fall, it has apparently not yet taken steps to rectify the misinformation about the sensitivity analyses contained in the paper.

PLoS One has an obligation to question Dr. McCrone about the contradiction between the text of the paper and his subsequent comments, so that he can either provide a reasonable explanation, produce the actual sensitivity analyses demonstrating “robustness” under all three assumptions outlined in the statistical analysis plan, or correct the paper’s core finding that CBT and GET are “cost-effective” no matter how informal care is valued. Should he fail to do so, PLoS One itself has an obligation to correct the paper, independent of the disposition of the issue of access to trial data.

We appreciate your quick response to these concerns.


Ronald W. Davis, PhD
Professor of Biochemistry and Genetics
Stanford University

Rebecca Goldin, Ph.D.
Professor of Mathematical Sciences
George Mason University

Bruce Levin, PhD
Professor of Biostatistics
Columbia University

Vincent R. Racaniello, PhD
Professor of Microbiology and Immunology
Columbia University

Arthur L. Reingold, MD
Professor of Epidemiology
University of California, Berkeley


  • BURNA 23 May 2016, 4:17 pm

    Thanks to all the signatories for highlighting yet another flawed paper touting benefits related to CBT and GET where none exist. What has happened to the peer review process on PACE papers?

  • Sasha 23 May 2016, 5:44 pm

    Thank you for pursuing this. I think a lot of us patients had been hoping that PLOS One would be a journal that would set a standard for dealing with legitimate concerns about PACE. It’s been disappointing, first, that the PACE authors wouldn’t release the data from this cost-benefit paper in PLOS One, and second, that the FINE trial authors have now been permitted to remove their raw data from being available alongside their PLOS One paper.

    I hope PLOS One will take your criticism seriously and take the appropriate action. They’d be the first journal in the world to do so in relation to any PACE paper. The Lancet and Psychological Medicine have both made a very poor showing.

  • Paul_Watton 24 May 2016, 4:29 am

    Thank you again to the authors of the above letter and to Tom Kindlon for his invaluable work.
    To a non-scientist like me, who previously trusted and respected the integrity of peer-reviewed scientific journals such as PLOS One, it is very disappointing to now witness their failure to uphold their own publication standards.
    It seems to me that the Emperor has no clothes.

  • Freda 24 May 2016, 12:43 pm

    Why are medical journals not responding to critiques of the PACE papers? It’s not just the Emperor who has no clothes. Who? What? Why?

  • jmtc 24 May 2016, 2:25 pm

    I think there are misunderstandings in this letter. In the actual PLoS One study, the authors mention opportunity cost and replacement cost as two alternative methods for valuing informal care, and they explain that they chose to use opportunity cost, not replacement cost as Tuller writes. As far as I understand, when the authors say in the limitations section of the PLoS One article that sensitivity analyses revealed the results were robust for alternative assumptions, they are most likely referring to the alternative assumption using the replacement cost, not to the minimum-wage or zero-cost assumptions; those are what Dr. McCrone addresses in his comment, not the actual study. So there is no contradiction, other than that the statistical plan differs from the actual study.

  • Steve Hawkins 27 May 2016, 2:08 pm

    Thank you for continuing to expose the shortcomings of evidence in this particular brand of snake-oil salesmanship.

    I haven’t read into the detail of this paper, but from the story so far, I think it is safe to assume that only people who could attend study centres, and are thus only mildly affected, are the subject of this paper. It is therefore duplicitous to weigh the costs of home care against the cost of CBT, because CBT will not have been used on the population that needs home care, who will not have been included in this study.

    Throughout this whole sorry saga the study population has been carefully selected to be favourable to the desired outcome of the salesmen. This is, unfortunately, looking to be the norm in conducting medical RCTs. Writing for a forthcoming conference on evidence-based medicine (Evidence Live 22 June) Carl Heneghan, Professor of Evidence-Based Medicine at Oxford University, writes:

    “An analysis of 20,000 Medicare patients with a principal diagnosis of heart failure reported that only 13–25% met the criteria for 3 of the pivotal RCTs. A further review of 52 studies, which compared baseline characteristics of RCT patients with real world patients, found that many trials are highly selective and have lower risk profile patients than those seen in the real world: 37 (71%) of the studies concluded the populations were not representative. The patients we are often most interested in applying evidence to – the elderly and those with comorbidities – were most often excluded. In only 15 (29%) studies were the RCT samples generally representative of real-world populations.”

    As we have seen, all the papers dealing with CBT/GET have followed this common pattern that Heneghan has found. An ‘easy’ group of patients has been selected so as not to provide too much of a challenge to the researchers’ hypothesis; results have required a great deal of statistical sleight of hand to show any effect at all; but once a particular permutation has managed to come up on the desired ‘positive’ side, a ‘robust’ success is trumpeted from the rooftops, and the world is told that ME/CFS can be cured in a very simplistic manner, even though the majority of patients are not of the kind that were admitted to the study.

    Enormous savings are then proclaimed, as if the ‘proven’ treatment had been applied to the whole population.

    This is a massive confidence trick that Heneghan seems to have discovered is typical of an uncomfortably large percentage of all medical research trials. Heneghan reminds us that the mere fact of one treatment appearing statistically slightly more effective than another does not count for much in the real world, and calls for alternative measures such as the ‘minimum clinically important difference’ (MCID) to be applied. Such an approach would put all these PACE-related trials in the ‘effect too small to bother’ bracket.

    Heneghan also notes:

    “… A COPD symposium assessing MCID stated, ‘clinical opinion and patient subjective response should trump statistical theory,’ which fits with the definition and ethos of EBM, …”

    I think that all patients who have been following the sorry saga of CBT/GET and its prophets will thoroughly agree.

    I recommend Heneghan’s series of blogs at the link above, to all who’ve been following the PACE saga. I think that most of his observations will sound very familiar.

  • quayman 28 May 2016, 1:25 pm

    From Paul McCrone himself:
    “I have looked at the data and valuing informal care at the minimum wage rate for those with cost and QALY data results in higher societal costs for GET of £3. As such GET does not dominate SMC at this level but rather has a cost per QALY of £87.”
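For context, the “cost per QALY” in McCrone’s figures is an incremental cost-effectiveness ratio (ICER): the extra cost of one treatment over another, divided by the extra quality-adjusted life years gained. A minimal sketch using only the £3 and £87 figures quoted above; the implied QALY difference is back-calculated here and was not reported in the comment:

```python
# Incremental cost-effectiveness ratio (ICER): extra cost per extra QALY.
def icer(delta_cost, delta_qaly):
    return delta_cost / delta_qaly

# McCrone's comment gives an incremental societal cost of £3 for GET and a
# cost per QALY of £87; the implied QALY difference follows from those two.
implied_delta_qaly = 3.0 / 87.0
print(f"implied QALY gain: {implied_delta_qaly:.3f}")
print(f"ICER: £{icer(3.0, implied_delta_qaly):.0f} per QALY")
```

Note that “dominate” in the quote has its standard health-economics sense: a treatment dominates a comparator only if it is both cheaper and more effective, which is why a positive £3 incremental cost means GET no longer dominates SMC under the minimum-wage assumption.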