I cannot let September pass without noting that 34 years ago this month, I arrived at Columbia University to start my laboratory to do research on poliovirus (pictured). That virus is no longer the sole object of our attention – we are wrapping up some work on poliovirus and our attention has shifted elsewhere. But this is a good month to think about the status of the poliovirus eradication effort.

So far this year 26 cases of poliomyelitis have been recorded – 23 caused by wild type virus, and three caused by vaccine-derived virus. By the same time in 2015, 44 cases of polio had been reported – small progress, but, in the words of Bill Gates, the last one percent is the hardest.

One of the disappointments this year is Nigeria. It was on the verge of being polio-free for two years – the last case of type 1 poliovirus in Nigeria had been recorded in July of 2014. In August the government reported that two children had developed polio in Borno State. The genome sequence of the virus revealed that it had been circulating undetected in this region since 2011. Due to threats from militant extremists, it has not been possible for vaccination teams to properly cover this area, and surveillance for polioviruses has also been inefficient. The virus can circulate freely in a poorly immunized population, and as only 1% of infections lead to paralysis, cases of polio might have been missed.

The conclusion from this incident is that the declaration that poliovirus is no longer present in any region is only as good as the surveillance for the virus, which can never be perfect as all sources of infection cannot be covered.

Of the 26 cases of polio recorded so far in 2016, most have been in Afghanistan and Pakistan (9 and 14, respectively). It is quite clear that conflict has prevented vaccination teams from immunizing the population: in Pakistan, militants have attacked polio teams during vaccination campaigns.

Recently 5 of 27 sewage samples taken from different parts of the province of Balochistan in Pakistan have tested positive for poliovirus. Nucleotide sequence analysis revealed that the viruses originated in Afghanistan. The fact that such viruses are present in sewage means that there are still individuals without intestinal immunity to poliovirus in these regions. In response to this finding, a massive polio immunization campaign was planned for the end of September in Pakistan. This effort would involve 6000 teams to reach 2.4 million children. Apparently police will be deployed to protect immunization teams (source: ProMedMail).

The success of the polio eradication program so far has made it clear that if vaccines can be deployed, circulation of the virus can be curtailed. If immunization could proceed unfettered, I suspect the virus would be gone in five years. But can anyone predict whether it will be possible to curtail the violence in Pakistan, Afghanistan, and Nigeria that has limited polio vaccination efforts?

TWiV 408: Boston Quammens

Four years after filming ‘Threading the NEIDL’, Vincent and Alan return to the National Emerging Infectious Diseases Laboratory BSL4 facility at Boston University where they speak with science writer David Quammen.

You can find TWiV #408 at microbe.tv/twiv, or watch/listen here.

Download TWiV 408 (42 MB .mp3, 69 min)
Subscribe (free): iTunes, RSS, email

Become a patron of TWiV!

by David Tuller, DrPH

David Tuller is academic coordinator of the concurrent master’s degree program in public health and journalism at the University of California, Berkeley.

‘The PACE trial is a fraud.’ Ever since Virology Blog posted my 14,000-word investigation of the PACE trial last October, I’ve wanted to write that sentence. (I should point out that Dr. Racaniello has already called the PACE trial a “sham,” and I’ve already referred to it as “doggie-poo.” I’m not sure that “fraud” is any worse. Whatever word you use, the trial stinks.)

Let me be clear: I don’t mean “fraud” in the legal sense—I’m not a lawyer–but in the sense that it’s a deceptive and morally bankrupt piece of research. The investigators made dramatic changes from the methodology they outlined in their protocol, which allowed them to report purported “results” that were much, much better than those they would have been able to claim under their originally planned methods. Then they reported only the better-looking “results,” with no sensitivity analyses to analyze the impact of the changes—the standard statistical approach in such circumstances.

This is simply not allowed in science. It means the reported benefits for cognitive behavior therapy and graded exercise therapy were largely illusory–an artifact of the huge shifts in outcome assessments the authors introduced mid-trial. (That’s putting aside all the other flaws, like juicing up responses with a mid-trial newsletter promoting the interventions under investigation, failing to obtain legitimate informed consent from the participants, etc.)

That PACE suffered from serious methodological deficiencies should have been obvious to anyone who read the studies. That includes the reviewers for The Lancet, which published the PACE results for “improvement” in 2011 after what editor Richard Horton has called “endless rounds of peer-review,” and the journal Psychological Medicine, which published results for “recovery” in 2013. Certainly the deficiencies should have been obvious to anyone who read the trenchant letters and commentaries that patients routinely published in response to the egregious errors committed by the PACE team. Even so, the entire U.K. medical, academic and public health establishments refused to acknowledge what was right before their eyes, finding it easier instead to brand patients as unstable, anti-science, and possibly dangerous.

Thanks to the efforts of the incredible Alem Matthees, a patient in Perth, Australia, the U.K.’s First-Tier Tribunal last month ordered the liberation of the PACE trial data he’d requested under a freedom-of-information request. (The brief he wrote for the April hearing, outlining the case against PACE in great detail, was a masterpiece.) Instead of appealing, Queen Mary University of London, the home institution of lead PACE investigator Peter White, made the right decision. On Friday, September 9, the university announced its intention to comply with the tribunal ruling, and sent the data file to Mr. Matthees. The university has a short window of time before it has to release the data publicly.

I’m guessing that QMUL forced the PACE team’s hand by refusing to allow an appeal of the tribunal decision. I doubt that Dr. White and his colleagues would ever have given up their data willingly, especially now that I’ve seen the actual results. Perhaps administrators had finally tired of the PACE shenanigans, recognized that the study was not worth defending, and understood that continuing to fight would further harm QMUL’s reputation. It must be clear to the university now that its own reputational interests diverge sharply from those of Dr. White and the PACE team. I predict that the split will become more apparent as the trial’s reputation and credibility crumble; I don’t expect QMUL spokespeople to be out there vigorously defending the unacceptable conduct of the PACE investigators.

Last weekend, several smart, savvy patients helped Mr. Matthees analyze the newly available data, in collaboration with two well-known academic statisticians, Bruce Levin from Columbia and Philip Stark from Berkeley.  Yesterday, Virology Blog published the group’s findings of the single-digit, non-statistically significant “recovery” rates the trial would have been able to report had the investigators adhered to the methods they outlined in the protocol. That’s a remarkable drop from the original Psychological Medicine paper, which claimed that 22 percent of those in the favored intervention groups achieved “recovery,” compared to seven percent for the non-therapy group.

Now it’s clear: The PACE authors themselves are the anti-science faction. They tortured their data and ended up producing sexier results. Then they claimed they couldn’t share their data because of alleged worries about patient confidentiality and sociopathic anti-PACE vigilantes. The court dismissed these arguments as baseless, in scathing terms. (It should be noted that their ethical concerns for patients did not extend to complying with a critical promise they made in their protocol—to tell prospective participants about “any possible conflicts of interest” in obtaining informed consent. Given this omission, they have no legitimate informed consent for any of their 641 participants and therefore should not be allowed to publish any of their data at all.)

The day before QMUL released the imprisoned data to Mr. Matthees, the PACE authors themselves posted a pre-emptive re-analysis of results for the two primary outcomes of physical function and fatigue, according to the protocol methods. In the Lancet paper, they had revised and weakened their own definition of what constituted “improvement.” With this revised definition, they could report in The Lancet that approximately 60 percent in the cognitive behavior and graded exercise therapy arms “improved” to a clinically significant degree on both fatigue and physical function.

The re-analysis the PACE authors posted last week sought to put the best possible face on the very poor data they were required to release. Yet patients examining the new numbers quickly noted that, under the more stringent definition of “improvement” outlined in the protocol, only about 20 percent in the two groups could be called “overall improvers.” Solely by introducing a more relaxed definition of “improvement,” the PACE team—enabled by The Lancet’s negligence and an apparently inadequate “endless” review process–was able to triple the trial’s reported success rate.

So now it’s time to ask what happens to the papers already published. The editors have made their feelings clear. I have written multiple e-mails to Lancet editor Richard Horton since I first contacted him about my PACE investigation, almost a year before it ran. He never responded until September 9, the day QMUL liberated the PACE data. Given that the PACE authors’ own analysis showed that the new data showed significantly less impressive results than those published in The Lancet, I sent Dr. Horton a short e-mail asking when we could expect some sort of addendum or correction to the 2011 paper. He responded curtly: “Mr. Tuller–We have no such plans.”

The editors of Psychological Medicine are Kenneth Kendler of Virginia Commonwealth University and Robin Murray of King’s College London. After I wrote to the journal last December, pointing out the problems, I received the following from Dr. Murray, whose home base is KCL’s Department of Psychosis Studies: “Obviously the best way of addressing the truth or otherwise of the findings is to attempt to replicate them. I would therefore like to encourage you to initiate an attempted replication of the study. This would be the best way for you to contribute to the debate…Should you do this, then Psychological Medicine will be most interested in the findings either positive or negative.”

This was not an appropriate response. I told Dr. Murray it was “disgraceful,” given that the paper was so obviously flawed. This week, I wrote again to Dr. Murray and Dr. Kendler, asking if they now planned to deal with the paper’s problems, given the re-analysis by Matthees et al. In response, Dr. Murray suggested that I submit a re-analysis, based on the released data, and Psychological Medicine would be happy to consider it. “We would, of course, send it out to referees for scientific scrutiny in the same manner as we did for the original paper,” he wrote.

I explained that it was his and the journal’s responsibility to address the problems, whether or not anyone submitted a re-analysis. I also noted that I could not improve on the Matthees re-analysis, which completely rebutted the results reported in Psychological Medicine’s paper. I urged Dr. Murray to contact either Dr. Racaniello or Mr. Matthees to discuss republishing it, if he truly wished to contribute to the debate. Finally, I noted that the peer-reviewers for the original paper had okayed a study in which participants could be disabled and recovered simultaneously, so I wasn’t sure if the journal’s assessment process could be trusted.

(By the way, King’s College London, where Dr. Murray is based, is also the home institution of PACE investigator Trudie Chalder as well as Simon Wessely, a close colleague of the PACE authors and president of the Royal College of Psychiatrists*. That could explain Dr. Murray’s inability or reluctance to acknowledge that the “recovery” paper his journal peer-reviewed and published is meaningless.)

Earlier today, the PACE authors posted a blog on The BMJ site, their latest effort to salvage their damaged reputations. They make no mention of their massive research errors and focus only on their supposed fears that releasing even anonymous data will frighten away future research participants. They have provided no evidence to back up this unfounded claim, and the tribunal flatly rejected it. They also state that only researchers who present “pre-specified” analysis plans should be able to obtain trial data. This is laughable, since Dr. White and his colleagues abandoned their own pre-specified analyses in favor of analyses they decided they preferred much later on, long after the trial started.

They have continued to refer to their reported analyses, deceptively, as “pre-specified,” even though these methods were revised mid-trial. The following point has been stated many times before, but bears repeating: In an open label trial like PACE, researchers are likely to know very well what the outcome trends are before they review any actual data. So the PACE team’s claim that the changes they made were “pre-specified” because they were made before reviewing outcome data is specious. I have tried to ask them about this issue multiple times, and have never received an answer.

Dr. White, his colleagues, and their defenders don’t yet seem to grasp that the intellectual construct they invented and came to believe in—the PACE paradigm or the PACE enterprise or the PACE cult, take your pick—is in a state of collapse. They are used to saying whatever they want about patients—Internet Abuse! Knife-wielding! Death threats!!–and having it be believed. In responding to legitimate concerns and questions, they have covered up their abuse of the scientific process by providing non-answers, evasions and misrepresentations—the academic publishing equivalent of “the dog ate my homework.” Amazingly, journal editors, health officials, reporters and others have accepted these non-responsive responses as reasonable and sufficient. I do not.

Now their work is finally being scrutinized the way it should have been by peer reviewers before this damaging research was ever published in the first place. The fallout is not going to be pretty. If nothing else, they have provided a great gift to academia with their $8 million** disaster—for years to come, graduate students in the U.S., the U.K. and elsewhere will be dissecting PACE as a classic case study of bad research and mass delusion.

*Correction: The original version of the post mistakenly called the organization the Royal Society of Psychiatrists.

**Correction: The original version of the post stated that PACE cost $8 million. In fact, PACE cost five million pounds, so the cost in dollars depends on the exchange rate used. The $8 million figure is based on the exchange rate from last October, when Virology Blog published my PACE investigation. But the pound has fallen since the Brexit vote in June, so the cost in dollars at the current exchange rate is closer to $6.4 million.

Last October, Virology Blog posted David Tuller’s 14,000-word investigation of the many flaws of the PACE trial (link to article), which had reported that cognitive behavior therapy and graded exercise therapy could lead to “improvement” and “recovery” from ME/CFS. The first results, on “improvement,” were published in The Lancet in 2011; a follow-up study, on “recovery,” was published in the journal Psychological Medicine in 2013.

The investigation by Dr. Tuller, a lecturer in public health and journalism at UC Berkeley, built on the impressive analyses already done by ME/CFS patients; his work helped demolish the credibility of the PACE trial as a piece of scientific research. In February, Virology Blog posted an open letter (link) to The Lancet and its editor, Richard Horton, stating that the trial’s flaws “have no place in published research.” Surprisingly, the PACE authors, The Lancet, and others in the U.K. medical and academic establishment have continued their vigorous defense of the study, despite its glaring methodological and ethical deficiencies.

Today, I’m delighted to publish an important new analysis of PACE trial data—an analysis that the authors never wanted you to see.  The results should put to rest once and for all any question about whether the PACE trial’s enormous mid-trial changes in assessment methods allowed the investigators to report better results than they otherwise would have had. While the answer was obvious from Dr. Tuller’s reporting, the new analysis makes the argument incontrovertible.

ME/CFS patients developed and wrote this groundbreaking analysis, advised by two academic co-authors. It was compiled from data obtained through a freedom-of-information request, pursued with heroic persistence by an Australian patient, Alem Matthees. Since the authors dramatically weakened all of their “recovery” criteria long after the trial started, with no committee approval for the redefinition of “recovery,” it was entirely predictable that the protocol-specified results would be worse. Now we know just how much worse they are.

According to the new analysis, “recovery” rates for the graded exercise and cognitive behavior therapy arms were in the mid-single-digits and were not statistically significant. In contrast, the PACE authors managed to report statistically significant “recovery” rates of 22 percent for their favored interventions. Given the results based on the pre-selected protocol metrics for which they received study approval and funding, it is now up to the PACE authors to explain why anyone should accept their published outcomes as accurate, reliable or legitimate.

The complete text of the analysis is below. A pdf is also available (link to pdf).


A preliminary analysis of ‘recovery’ from chronic fatigue syndrome in the PACE trial using individual participant data


Wednesday 21 September 2016

Alem Matthees (1), Tom Kindlon (2), Carly Maryhew (3), Philip Stark (4), Bruce Levin (5).

1. Perth, Australia. alem.matthees@gmail.com
2. Information Officer, Irish ME/CFS Association, Dublin, Ireland.
3. Amersfoort, Netherlands.
4. Associate Dean, Mathematical and Physical Sciences; Professor, Department of Statistics; University of California, Berkeley, California, USA.
5. Professor of Biostatistics and Past Chair, Department of Biostatistics, Mailman School of Public Health, Columbia University, New York, USA.


The PACE trial tested interventions for chronic fatigue syndrome, but the published ‘recovery’ rates were based on thresholds that deviated substantially from the published trial protocol. Individual participant data on a selection of measures has recently been released under the Freedom of Information Act, enabling the re-analysis of recovery rates in accordance with the thresholds specified in the published trial protocol. The recovery rate using these thresholds is 3.1% for specialist medical care alone; for the adjunctive therapies it is 6.8% for cognitive behavioural therapy, 4.4% for graded exercise therapy, and 1.9% for adaptive pacing therapy. This re-analysis demonstrates that the previously reported recovery rates were inflated by an average of four-fold. Furthermore, in contrast with the published paper by the trial investigators, the recovery rates in the cognitive behavioural therapy and graded exercise therapy groups are not significantly higher than with specialist medical care alone. The implications of these findings are discussed.


The PACE trial was a large multi-centre study of therapeutic interventions for chronic fatigue syndrome (CFS) in the United Kingdom (UK). The trial compared three therapies which were each added to specialist medical care (SMC): cognitive behavioural therapy (CBT), graded exercise therapy (GET), and adaptive pacing therapy (APT). [1] Henceforth SMC alone will be ‘SMC’, SMC plus CBT will be ‘CBT’, SMC plus GET will be ‘GET’, and SMC plus APT will be ‘APT’. Outcomes consisted of two self-report primary measures (fatigue and physical function), and a mixture of self-report and objective secondary measures. The trial’s co-principal investigators are longstanding practitioners and proponents of the CBT and GET approach, whereas APT was a highly formalised and modified version of an alternative energy management approach.

After making major changes to the protocol-specified “recovery” criteria, White et al. (2013) reported that when using “a comprehensive and conservative definition of recovery”, CBT and GET were associated with significantly increased recovery rates of 22% at 52-week follow-up, compared to only 8% for APT and 7% for SMC [2]. However, those figures were not derived using the published trial protocol (White et al., 2007 [3]), but instead using a substantially revised version that has been widely criticised for being overly lax and poorly justified (e.g. [4]). For example, the changes created an overlap between trial eligibility criteria for severe disabling fatigue, and the new “normal range”. Trial participants could consequently be classified as recovered without clinically significant improvements to self-reported physical function or fatigue, and in some cases without any improvement whatsoever on these outcome measures. Approximately 13% of participants at baseline simultaneously met the trial eligibility criteria for ‘significant disability’ and the revised recovery criteria for normal self-reported physical function. The justification given for changing the physical function threshold of recovery was apparently based on a misinterpretation of basic summary statistics [5,6], and the authors also incorrectly described their revised threshold as more stringent than previous research [2]. These errors have not been corrected, despite the publishing journal’s policy that such errors should be amended, resulting in growing calls for a fully independent re-analysis of the PACE trial results [7,8].

More than six years after data collection was completed for the 52-week follow-up, the PACE trial investigators have still not published the recovery rates as defined in the trial protocol. Queen Mary University of London (QMUL), holder of the trial data and home of the chief principal investigator, have also not allowed access to the data for others to analyse these outcomes. Following a Freedom of Information Act (FOIA) request for a selection of trial data, an Information Tribunal upheld an earlier decision from the Information Commissioner ordering the release of that data (see case EA/2015/0269). On 9 September 2016, QMUL released the requested data [9]. Given the public nature of the data release, and the strong public interest in addressing the issue of “recovery” from CFS in the PACE trial, we are releasing a preliminary analysis using the main thresholds set in the published trial protocol. The underlying data is also being made available [10], while more detailed and complete analyses on the available outcome measures will be published at a later date.


Measures and criteria

Using the variables available in the FOIA dataset, ‘recovery’ from CFS in the PACE trial is analysed here based on the main outcome measures described by White et al. (2013) in the “cumulative criteria for trial recovery” [2]. These measures are: (i) the Chalder Fatigue Questionnaire (CFQ); (ii) the Short-Form-36 (SF-36) physical function subscale; (iii) the Clinical Global Impression (CGI) change scale; and (iv) the Oxford CFS criteria. However, instead of the weakened thresholds used in their analysis, we will use the thresholds specified in the published trial protocol by White et al. (2007) [3]. A comparison between the different thresholds for each outcome measure is presented in Table 1.

Table 1

Where follow-up data for self-rated CGI scores were missing, we did not impute doctor-rated scores, in contrast to the approach of White et al., because the trial protocol stated that all primary and secondary outcomes are “either self-rated or objective in order to minimise observer bias” from non-blinded assessors. We discuss the minimal impact of this imputation below. Participants missing any recovery criteria data at 52-week follow-up were classified as non-recovered.

Statistical analysis

White et al. (2013) conducted an available-case analysis which excluded from the denominators of each group the participants who dropped out [2]. This is not the recommended practice in clinical trials, where intention-to-treat analysis (which includes all randomised participants) is commonly preferred. An available-case analysis may overestimate real-world treatment effects because it does not include participants who were lost to follow-up. Attrition from trials can occur for various reasons, including an inability to tolerate the prescribed treatment, a perceived lack of benefit, and adverse reactions. Thus, an available-case analysis only takes into account the patients who were willing and able to tolerate the prescribed treatments. Nonetheless, both types of analyses are presented here for comparison. We present a preliminary exploratory analysis of the frequency and percentage of participants meeting all the recovery criteria in each group, based on the intention-to-treat principle, as well as the available-case subgroup.
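The difference between the two denominators can be made concrete with a small sketch. The counts below are hypothetical, chosen only for illustration; they are not the trial’s actual figures:

```python
# Illustrative sketch of intention-to-treat (ITT) vs. available-case rates.
# Hypothetical counts for one trial arm, for illustration only.
randomised = 160       # everyone allocated to the arm
followed_up = 150      # participants with complete 52-week data
recovered = 6          # participants meeting all recovery criteria

# ITT: missing data counted as non-recovered; denominator is all randomised
itt_rate = recovered / randomised
# Available-case: drop-outs excluded from the denominator
available_case_rate = recovered / followed_up

print(f"ITT: {itt_rate:.1%}, available-case: {available_case_rate:.1%}")
```

Because drop-outs are removed only from the denominator, the available-case rate is always at least as high as the ITT rate, which is one reason it can overestimate real-world effects.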

Neither the published trial protocol [3] nor the published statistical analysis plan [11] specified a method for determining the statistical significance of the differences in recovery rates between treatment groups. In their published paper on recovery, White et al. (2013) presented logistic regression analyses for trial arm pairwise comparisons, adjusting for the baseline stratification variables of treatment centre, meeting CDC CFS criteria, meeting London ME criteria, and having a depressive illness [2]. However, it has been shown that logistic regression may be an inappropriate method of analysis in the context of randomised trials [12]. While Fisher’s exact test would be preferable, a more rigorous approach would also take into account the stratification variables, which unfortunately were not part of the available FOIA dataset. Nonetheless, there is reason to believe that the effect of including these stratification variables on our analyses would be minimal: the stratification variables were approximately evenly distributed between groups [1], and attempting to replicate the previously published [2] odds ratios and 95% confidence intervals using logistic regression, but without stratification variables, yielded very similar results to those previously published (see Table 3).

We therefore present recovery rates for each group and compare the observed rates for each active treatment arm with those of the SMC arm using Fisher’s exact tests. The confidence intervals for recovery rates in each group and comparative odds ratios are exact 95% confidence intervals using the point probability method [13]. For sake of direct comparison with results published by White et al. (2013), we also present results of logistic regression analysis which included only the treatment arm as a predictor variable, with conventional approximate 95% confidence intervals.
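As a sketch of one pairwise comparison, a two-sided Fisher’s exact test can be computed from first principles using only the standard library. The 2×2 counts below are approximations reconstructed from the reported intention-to-treat rates (6.8% vs. 3.1% of roughly 160 participants per arm); the trial’s exact cell counts may differ slightly:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of all tables whose point
    probability is no greater than that of the observed table."""
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def p_table(x):
        # Point probability that cell (1,1) equals x, given fixed margins
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = p_table(a)
    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs + 1e-12)

# Approximate counts (recovered, not recovered), CBT vs. SMC arms
p = fisher_exact_two_sided(11, 150, 5, 155)
print(round(p, 3))  # comfortably above the 0.05 threshold
```

With recovery events this rare, even an observed rate more than double the comparator’s does not approach statistical significance, which is the pattern the analysis above reports.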


For our analysis of ‘recovery’ in the PACE trial, full data were available for 89% to 94% of participants, depending on the treatment group and outcome measure. Percentages are calculated for both intention-to-treat, and on an available-case basis. Imputing the missing self-rated CGI scores with doctor-rated CGI scores made no difference to the intention-to-treat analysis, as there were no participants with missing self-rated CGI scores with an assessor rating of 1, required for recovery; in the available-case analysis, the only effect this had was to decrease the CBT denominator by 1, and the assessor score for that participant was 3, “a little better”, therefore non-recovered. Table 2 provides the results and Figure 1 compares our recovery rates with those of White et al. (2013):

Table 2

Figure 1

The CBT, GET, and APT groups did not demonstrate a statistically significant advantage over the SMC group in any of the above analyses, nor an empirical recovery rate that would generally be considered adequate (the highest observed rate was 7.7%). In the intention-to-treat analysis, the exact p value for the three-degree-of-freedom chi-squared test of no overall differences amongst the four groups was 0.14. In the available-case analysis, the p value was 0.10. Given the number of comparisons, a correction for multiple testing might be appropriate, but as none of the uncorrected p values were significant at the p<0.05 level, this more conservative approach would not alter the conclusion. Our findings therefore contradict the conclusion of White et al. (2013), that CBT and GET were significantly more likely than the SMC group to be associated with ‘recovery’ at 52 weeks [2]. However, the very low recovery rates substantially decrease the ability to detect statistically significant differences between groups (see the Limitations section). The multiple changes to the recovery criteria inflated the estimates of recovery by approximately 2.3- to 5.1-fold, depending on the group, with an average inflation of 3.8-fold.


Lack of statistical power

When designing the PACE trial and determining the number of participants needed, the investigators’ power analyses were based not on recovery estimates but on the prediction of relatively high rates of clinical improvement in the additional therapy groups compared to SMC alone [3]. However, the very low recovery rates introduce a complication for tests of significance, due to insufficient statistical power to detect modest but clinically important differences between groups. For example, with the CBT vs. SMC comparison by intention-to-treat, a true odds ratio of 4.2 would have been required to give Fisher’s exact test 80% power to declare significance, given the observed margins. If we assume SMC has a probability of 3.1%, an odds ratio of 4.2 would have conferred a recovery probability of 11.8%, which was not achieved in the trial.
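The odds-ratio-to-probability conversion in that power argument is straightforward to reproduce; this sketch simply re-derives the 11.8% figure from the quoted inputs:

```python
# Converting the SMC recovery rate and a target odds ratio into the
# implied recovery probability for the comparison arm, using the figures
# quoted in the power discussion above.
p_smc = 0.031                      # observed SMC recovery rate, 3.1%
target_or = 4.2                    # odds ratio needed for 80% power

odds_smc = p_smc / (1 - p_smc)     # probability -> odds
odds_cbt = target_or * odds_smc    # apply the odds ratio
p_cbt = odds_cbt / (1 + odds_cbt)  # odds -> probability

print(f"{p_cbt:.1%}")  # ≈ 11.8%, matching the figure quoted in the text
```

In other words, to have a good chance of declaring significance, CBT would have needed a recovery rate nearly four times that of SMC, far beyond anything observed.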

We believe that for our preliminary analysis it was important to follow the protocol-specified recovery criteria, which make more sense than the revised thresholds. For example, the former required level of physical function would suggest a ‘recovered’ individual could at least do most normal activities, but may have limitations with a few of the items on the SF-36 health survey, such as vigorous exercise, walking up flights of stairs, or bending down. The revised threshold that White et al. (2013) used meant that a ‘recovered’ individual could have remained limited on four to eight out of ten items depending on severity. We found that when using the revised recovery criteria, 8% (7/87) of the ‘recovered’ participants still met trial eligibility criteria for ‘significant disability’.

Weakening the recovery thresholds increases statistical power to detect group differences because it makes the event (i.e. ‘recovery’) rates more frequent (i.e. less close to zero) but it also leads to the inclusion of patients who still, for example, have significant illness-related restrictions in physical capacity as per SF-36 physical function score. We argue that if significant differences between groups cannot be detected in sample sizes of approximately n=160 per group, then this may indicate that CBT and GET simply do not substantially increase recovery rates.

Lack of data on stratification variables

In order to increase the chance of the FOIA request being granted or enforced, it asked for a ‘bare minimum’ set of variables: requesting too many variables, or variables that might be judged to significantly increase the risk of re-identifying participants, would have decreased the chance that the request would be granted. This was a reasonable compromise given that QMUL had previously blocked all requests for the protocol-specified recovery rates and the underlying data needed to calculate them. Some non-crucial variables are therefore missing from the dataset acquired under the FOIA, but there is reason to believe that this would have little effect on the results.

Allocation of participants in the PACE trial was stratified [1]: “The first three participants at each of the six clinics were allocated with straightforward randomisation. Thereafter allocation was stratified by centre, alternative criteria for chronic fatigue syndrome and myalgic encephalomyelitis, and depressive disorder (major or minor depressive episode or dysthymia), with computer-generated probabilistic minimisation.”

This means that testing for statistical significance assuming simple randomisation results in p-values that are approximate and effect-size estimates that might be biased. The FOIA dataset does not contain the stratification variables. While the lack of these variables may somewhat alter the estimated treatment effects, p-values, and confidence intervals, we expect the differences to be minor, a conclusion supported by Table 3 below. Table 1 of the publication of the main trial results (White et al., 2011) shows that the stratification variables were approximately evenly distributed between groups [1]. We have replicated the rates of “trial recovery” as previously published by White et al. (2013) [2]. We also attempted to replicate their previously reported logistic regression without the stratification variables, and the results were essentially the same (see Table 3), suggesting that the adjustments would not have a significant impact on the outcome of our own analysis of recovery.
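The expectation that omitting roughly balanced stratification variables changes odds-ratio estimates only slightly can be illustrated numerically. The sketch below computes the marginal (collapsed) treatment odds ratio implied by a logistic model when a balanced binary covariate is dropped; all parameter values are hypothetical, not trial estimates:

```python
import math

def expit(x):
    """Inverse logit."""
    return 1 / (1 + math.exp(-x))

def marginal_odds_ratio(beta0, beta_treat, beta_strat, p_strat=0.5):
    """Marginal (collapsed) treatment odds ratio when a binary
    stratification variable, balanced across arms, is omitted from
    a logistic model."""
    def p_event(treat):
        # average over the stratification variable
        return (p_strat * expit(beta0 + beta_treat * treat + beta_strat)
                + (1 - p_strat) * expit(beta0 + beta_treat * treat))
    p1, p0 = p_event(1), p_event(0)
    return (p1 / (1 - p1)) / (p0 / (1 - p0))

# Hypothetical values: rare outcome (~5% baseline), conditional
# treatment odds ratio of 2, moderate covariate effect.
beta0 = math.log(0.05 / 0.95)
beta_treat = math.log(2.0)
beta_strat = math.log(1.5)

conditional_or = math.exp(beta_treat)  # 2.0 by construction
collapsed_or = marginal_odds_ratio(beta0, beta_treat, beta_strat)
print(round(conditional_or, 3), round(collapsed_or, 3))
```

Because the odds ratio is non-collapsible, dropping the covariate attenuates the estimate slightly toward 1, but for a rare outcome like ‘recovery’ here the collapsed odds ratio differs from the conditional one by well under one percent, consistent with the expectation of minor differences.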

Table 3

If QMUL or the PACE trial investigators believe that further adjustment is necessary here to have confidence in the results, then we invite them to present analyses that include stratification variables or release the raw data for those variables without unnecessary restrictions.

Lack of data on alternative ME/CFS criteria

For the same reasons described in the previous subsection, the FOIA dataset does not contain the variables for meeting CDC CFS criteria or London ME (myalgic encephalomyelitis) criteria. These were part of the original definition of recovery, but we argue that these are superfluous because:

(a) While our definition of recovery is less stringent without the alternative ME/CFS criteria, these additional criteria had no significant effect on the results reported by White et al. (2013) [2].

(b) The alternative ME/CFS criteria used in the trial had some questionable modifications [14] that have not been used in any other trial, seriously limiting cross-trial comparability and validation of their results.

(c) The Oxford CFS criteria are the most sensitive and least specific (most inclusive) of the criteria, so participants who fulfil all other aspects of the recovery criteria would most likely also fail to meet (i.e. no longer satisfy) the alternative ME/CFS criteria.

(d) All participants were first screened using the Oxford CFS criteria, as this was the primary case definition, whereas the additional case criteria were not entry requirements [1].


It is important that patients, health care professionals, and researchers have accurate information about the chances of recovery from CFS. In the absence of definitive outcome measures, recovery criteria should set reasonable standards that approach restoration of good health, in keeping with commonly understood conceptions of recovery from illness [15]. Accordingly, the changes made by the PACE trial investigators after the trial was well under way left the recovery criteria too lax to allow conclusions about the efficacy of CBT and GET as rehabilitative treatments for CFS. This analysis, based on the published trial protocol, demonstrates that the major changes to the thresholds for recovery inflated the estimates of recovery by approximately four-fold on average. QMUL recently posted the PACE trial primary ‘improvement’ outcomes as specified in the protocol [16], and these also showed a similar difference between the proportion of participants classified as improved and the post-hoc figures previously published in The Lancet in 2011 [1]. It is clear from these results that the changes made to the protocol were not minor or insignificant, as they have produced major differences that warrant further consideration.

The PACE trial protocol was published with the implication that changes would be unlikely [17], and while the trial investigators describe their analysis of recovery as pre-specified, there is no mention of changes to the recovery criteria in the statistical analysis plan that was finalised shortly before the unblinding of trial data [11]. Confusion has predictably ensued regarding the timing and nature of the substantial changes made to the recovery criteria [18]. Changing study endpoints is rarely acceptable; moreover, trial investigators may not be the appropriate decision makers for endpoint revisions [19,20]. Key aspects of pre-registered design and analyses are often ignored in subsequent publications, and positive results are often the product of overly flexible rules of design and data analysis [21,22].

As chief editor Fiona Godlee noted in a recent BMJ editorial (3 March 2016), when there is enough doubt to warrant independent re-analysis [23]: “Such independent reanalysis and public access to anonymised data should anyway be the rule, not the exception, whoever funds the trial.” The PACE trial provides a good example of the problems that can occur when investigators are allowed to deviate substantially from the trial protocol without adequate justification or scrutiny. We therefore propose that a thorough, transparent, and independent re-analysis be conducted to provide greater clarity about the PACE trial results. Pending a comprehensive review or audit of trial data, it seems prudent that the published trial results should be treated as potentially unsound, as should the medical texts, review articles, and public policies based on those results.


Acknowledgements

Writing this article in such a brief period of time would not have been possible without the diverse and invaluable contributions from patients, and others, who chose not to be named as authors.


Competing interests

AM submitted a FOIA request and participated in legal proceedings to acquire the dataset. TK is a committee member of the Irish ME/CFS Association (voluntary position).


1. White PD, Goldsmith KA, Johnson AL, Potts L, Walwyn R, DeCesare JC, Baber HL, Burgess M, Clark LV, Cox DL, Bavinton J, Angus BJ, Murphy G, Murphy M, O’Dowd H, Wilks D, McCrone P, Chalder T, Sharpe M; PACE trial management group. Comparison of adaptive pacing therapy, cognitive behaviour therapy, graded exercise therapy, and specialist medical care for chronic fatigue syndrome (PACE): a randomised trial. Lancet. 2011 Mar 5;377(9768):823-36. doi: 10.1016/S0140-6736(11)60096-2. Epub 2011 Feb 18. PMID: 21334061. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3065633/

2. White PD, Goldsmith K, Johnson AL, Chalder T, Sharpe M. Recovery from chronic fatigue syndrome after treatments given in the PACE trial. Psychol Med. 2013 Oct;43(10):2227-35. doi: 10.1017/S0033291713000020. PMID: 23363640. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3776285/

3. White PD, Sharpe MC, Chalder T, DeCesare JC, Walwyn R; PACE trial group. Protocol for the PACE trial: a randomised controlled trial of adaptive pacing, cognitive behaviour therapy, and graded exercise, as supplements to standardised specialist medical care versus standardised specialist medical care alone for patients with the chronic fatigue syndrome/myalgic encephalomyelitis or encephalopathy. BMC Neurol. 2007 Mar 8;7:6. PMID: 17397525. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2147058/

4. A list of articles by David Tuller on ME/CFS and PACE at Virology Blog. http://www.virology.ws/mecfs/

5. Kindlon T, Baldwin A. Response to: reports of recovery in chronic fatigue syndrome may present less than meets the eye. Evid Based Ment Health. 2015 May;18(2):e5. doi: 10.1136/eb-2014-101961. Epub 2014 Sep 19. PMID: 25239244. http://ebmh.bmj.com/content/18/2/e5.long

6. Matthees A. Assessment of recovery status in chronic fatigue syndrome using normative data. Qual Life Res. 2015 Apr;24(4):905-7. doi: 10.1007/s11136-014-0819-0. Epub 2014 Oct 11. PMID: 25304959. http://link.springer.com/article/10.1007%2Fs11136-014-0819-0

7. Davis RW, Edwards JCW, Jason LA, et al. An open letter to The Lancet, again. Virology Blog. 10 February 2016. http://www.virology.ws/2016/02/10/open-letter-lancet-again/

8. #MEAction. Press release: 12,000 signature PACE petition delivered to the Lancet. http://www.meaction.net/press-release-12000-signature-pace-petition-delivered-to-the-lancet/

9. Queen Mary University of London. Statement: Disclosure of PACE trial data under the Freedom of Information Act. 9 September 2016. http://www.qmul.ac.uk/media/news/items/smd/181216.html

10. FOIA request to QMUL (2014/F73). Dataset file: https://sites.google.com/site/pacefoir/pace-ipd_foia-qmul-2014-f73.xlsx Readme file: https://sites.google.com/site/pacefoir/pace-ipd-readme.txt

11. Walwyn R, Potts L, McCrone P, Johnson AL, DeCesare JC, Baber H, Goldsmith K, Sharpe M, Chalder T, White PD. A randomised trial of adaptive pacing therapy, cognitive behaviour therapy, graded exercise, and specialist medical care for chronic fatigue syndrome (PACE): statistical analysis plan. Trials. 2013 Nov 13;14:386. doi: 10.1186/1745-6215-14-386. PMID: 24225069. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4226009/

12. Freedman DA. Randomization Does Not Justify Logistic Regression. Statistical Science. 2008;23(2):237–249. doi:10.1214/08-STS262. https://arxiv.org/pdf/0808.3914.pdf

13. Fleiss JL, Levin B, Paik MC. Statistical methods for rates and proportions. 3rd ed. Hoboken, N.J: J. Wiley; 2003. 760 p. ISBN: 978-0-471-52629-2. (Wiley series in probability and statistics). http://au.wiley.com/WileyCDA/WileyTitle/productCd-0471526290.html

14. Matthees A. Treatment of Myalgic Encephalomyelitis/Chronic Fatigue Syndrome. Ann Intern Med. 2015 Dec 1;163(11):886-7. doi: 10.7326/L15-5173. PMID: 26618293.

15. Adamowicz JL, Caikauskaite I, Friedberg F. Defining recovery in chronic fatigue syndrome: a critical review. Qual Life Res. 2014 Nov;23(9):2407-16. doi: 10.1007/s11136-014-0705-9. Epub 2014 May 3. PMID: 24791749. http://link.springer.com/article/10.1007%2Fs11136-014-0705-9

16. Goldsmith KA, White PD, Chalder T, Johnson AL, Sharpe M. The PACE trial: analysis of primary outcomes using composite measures of improvement. 8 September 2016. http://www.wolfson.qmul.ac.uk/images/pdfs/pace/PACE_published_protocol_based_analysis_final_8th_Sept_2016.pdf

17. BMC editor’s comment on [Protocol for the PACE trial] (Version: 2. Date: 31 January 2007) http://www.biomedcentral.com/imedia/2095594212130588_comment.pdf

18. UK House of Lords. PACE Trial: Chronic Fatigue Syndrome/Myalgic Encephalomyelitis. 6 February 2013. http://www.publications.parliament.uk/pa/ld201213/ldhansrd/text/130206-gc0001.htm

19. Evans S. When and how can endpoints be changed after initiation of a randomized clinical trial? PLoS Clin Trials. 2007 Apr 13;2(4):e18. PMID 17443237. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1852589/

20. Moher D, Hopewell S, Schulz KF, Montori V, Gøtzsche PC, Devereaux PJ, Elbourne D, Egger M, Altman DG. CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials. BMJ. 2010 Mar 23;340:c869. doi: 10.1136/bmj.c869. PMID: 20332511. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2844943

21. Simmons JP, Nelson LD, Simonsohn U. False-positive psychology: undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychol Sci. 2011 Nov;22(11):1359-66. doi: 10.1177/0956797611417632. Epub 2011 Oct 17. PMID: 22006061. http://pss.sagepub.com/content/22/11/1359.long

22. Wagenmakers EJ, Wetzels R, Borsboom D, van der Maas HL, Kievit RA. An Agenda for Purely Confirmatory Research. Perspect Psychol Sci. 2012 Nov;7(6):632-8. doi: 10.1177/1745691612463078. PMID: 26168122. http://pps.sagepub.com/content/7/6/632.full

23. Godlee F. Data transparency is the only way. BMJ 2016;352:i1261. (Published 03 March 2016) doi: http://dx.doi.org/10.1136/bmj.i1261 http://www.bmj.com/content/352/bmj.i1261

In the first of two shows recorded at the University of North Carolina in Chapel Hill, Vincent meets up with faculty members to talk about how they got into science, their research on DNA viruses, and what they would be doing if they were not scientists.

You can find TWiV #407 part one at microbe.tv/twiv. Or watch the video above, or listen below.

Download TWiV 407a (43 MB .mp3, 71 min)
Subscribe (free): iTunes, RSS, email

Become a patron of TWiV!

Prions are not viruses – they are infectious proteins that lack nucleic acids. Nevertheless, virologists have always been fascinated by prions – they appear in virology textbooks (where else would you put them?) and are taught in virology classes. I’ve written about prions on this blog (five articles, to be exact – look under P in the Table of Contents) and I’m fascinated by their biology and transmission. That’s why the newly solved structure of an infectious prion protein is the topic of the sixth prion article at virology blog.

Spongiform encephalopathies are neurodegenerative diseases caused by misfolding of the normal cellular prion protein. Human spongiform encephalopathies are placed into three groups: infectious, familial (genetic), and sporadic, distinguished by how the disease is initially acquired. In all cases, the pathogenic protein is the host-encoded PrPC protein with an altered conformation, called PrPSc. In the simplest case, PrPSc converts normal PrPC protein into more copies of the pathogenic form (illustrated).

The structure of the normal PrPC protein, solved some time ago, revealed that it is largely alpha-helical with little beta-strand content. The structure of the PrPSc protein has been elusive, because it forms aggregates and amyloid fibrils. It has been suggested that PrPSc has more beta-strand content than the normal protein, but how this property would lead to prion replication was unknown. Clearly, solving the structure of the PrPSc protein was needed to fully understand the biology of this unusual pathogen.

The structure of PrPSc protein has now been solved by cryo-electron microscopy and image reconstruction (link to paper). The protein was purified from transgenic mice programmed to produce a form of  PrPSc protein that is not anchored to the cell membrane, and which is also underglycosylated. The protein causes disease in mice but is more homogeneous and forms fibrillar plaques, allowing gentler purification methods.

The structure of this form of the PrPSc protein reveals that it consists of two intertwined fibrils (red in the image), each of which most likely comprises a series of repeated beta-strand rungs called a beta-solenoid. The structure provides clues about how a pathogenic prion protein converts normal PrPC into PrPSc. The upper and lower rungs of the beta-solenoid are likely the initiation points for hydrogen bonding with new PrPC molecules; in many other proteins with beta-solenoids, these ends are capped to prevent propagation of beta-sheets. Once new molecules are added to the fibril, its ends serve to recruit additional proteins, and the chain lengthens.

The authors note that the molecular interactions that control prion templating, including hydrogen-bonding, charge and hydrophobic interactions, aromatic stacking, and steric constraints, also play roles in DNA replication.

The structure of the PrPSc protein suggests a mechanism for prion replication by incorporation of additional molecules into a growing beta-solenoid. I wonder whether incorporation into fibrils is the sole driving force for converting PrPC protein into PrPSc, or whether PrPC is conformationally altered before it ever encounters a growing fibril.


The TWiV team discusses eye infections caused by Zika virus, failure of Culex mosquitoes to transmit the virus, and replication of norovirus in stem cell derived enteroids.

You can find TWiV #406 at microbe.tv/twiv, or listen below.

Download TWiV 406 (59 MB .mp3, 98 min)
Subscribe (free): iTunes, RSS, email

Become a patron of TWiV!

Professor Simon Gaskell
President and Principal
Queen Mary University of London
Mile End Road
London E1 4NS

Dear Professor Gaskell:

Last month, the First-Tier Tribunal ordered Queen Mary University of London to release critical data from the PACE trial of treatments for chronic fatigue syndrome, also known as myalgic encephalomyelitis, or ME/CFS. In its decision, which rejected the university’s appeal of last fall’s ruling from the Information Commissioner’s Office, the tribunal dismantled all of QMUL’s rationalizations for keeping the data secret.

In particular, the tribunal dismissed the fears of QMUL’s security expert that hostile patients would be able to de-anonymize the data in order to identify and harass trial participants. Such concerns, noted the tribunal, were “grossly exaggerated” and based on “a considerable amount of supposition and speculation with no actual evidence.” The tribunal also noted pointedly that the “seeming reluctance” of the PACE investigators “to engage with other academics they thought were seeking to challenge their findings” had strengthened the case for releasing the data publicly.

Significantly, the tribunal emphasized that criticism of the PACE trial was not limited to a small group of patients, citing the “impressive” roster of 42 scientists and clinicians who supported Virology Blog’s open letter of concern to The Lancet last February. The open letter was based on an investigative report about PACE by David Tuller, a lecturer in public health and journalism at the University of California, Berkeley; Virology Blog posted Dr. Tuller’s investigation last October. The letter outlined some of the study’s major deficiencies, declared that “such flaws have no place in published research,” and requested that The Lancet seek a fully independent analysis of the trial data. All of us signed or later endorsed that open letter.

The current case involves a freedom-of-information request filed by an Australian patient, Alem Matthees. The data he requested would allow for an independent analysis based on the primary outcome thresholds and the definition of “recovery” outlined in the published trial protocol. These promised results remain unknown, because the investigators dramatically changed their protocol methods of assessing the two primary measures of fatigue and physical function. They also relaxed all four criteria for determining “recovery”—so much so that participants could already be “recovered” at baseline on the two primary measures, before any treatment at all.

The investigators have refused to provide the results per the original methods established in the protocol. They have also not provided any sensitivity analyses to assess the impact of the mid-trial changes on the reported findings. This is unacceptable. It is also antithetical to good science and honest debate, as are the other flaws cited in the Virology Blog open letter and in Dr. Tuller’s investigation. (In fact, patients and advocates began pointing out the study’s problems years ago, but their legitimate concerns were consistently ignored, ridiculed or misrepresented.)

The PACE trial has greatly impacted policies and attitudes toward ME/CFS, popularizing the notion that psychotherapy and exercise are effective treatments. Yet patients routinely experience serious relapses after even minimal activity. A report last year from the Institute of Medicine called this phenomenon “exertion intolerance” and identified it as the defining symptom of the disease. This key IOM finding strongly suggests that to increase activity levels, as the PACE interventions recommend, is contraindicated and potentially harmful.

The PACE trial remains under an enormous cloud, and the requested data will provide answers to some of the questions. Given the tribunal’s powerful and persuasive rejection of QMUL’s arguments, prolonging the legal process will only further tarnish the university’s reputation, waste more public funds, and discourage others from participating in future QMUL-sponsored research—all for an indefensible and ultimately losing cause.

We strongly urge QMUL not to appeal the decision of the First-Tier Tribunal and to release the PACE trial data as soon as possible.


Ronald W. Davis, PhD
Professor of Biochemistry and Genetics
Stanford University
Stanford, California

Jonathan C.W. Edwards, MD
Emeritus Professor of Medicine
University College London
London, England

Rebecca Goldin, PhD
Professor of Mathematics
George Mason University
Fairfax, Virginia

Bruce Levin, PhD
Professor of Biostatistics
Columbia University
New York, New York

Zaher Nahle, PhD, MPA
Vice President for Research and Scientific Programs
Solve ME/CFS Initiative
Los Angeles, California

Vincent R. Racaniello, PhD
Professor of Microbiology and Immunology
Columbia University
New York, New York

Charles Shepherd, MB BS
Honorary Medical Adviser
ME Association
London, England

John Swartzberg, MD
Clinical Professor Emeritus
School of Public Health
University of California, Berkeley
Berkeley, California

The TWiXers discuss a study on vertical transmission of Zika virus by Aedes mosquitoes, and uncovering Earth’s virome by mining existing metagenomic sequence data.

You can find TWiV #405 at microbe.tv/twiv, or listen below.

Download TWiV 405 (70 MB .mp3, 117 min)
Subscribe (free): iTunes, RSS, email

Become a patron of TWiV!

By David Tuller, DrPH

In January, I posted a list of the questions I still wanted to ask the PACE authors, who have repeatedly refused my requests to interview them about their ethically and methodologically challenged study. Richard Horton, editor of The Lancet, has similarly declined to talk with me, ignoring my e-mails seeking comment for the initial investigation, posted on Virology Blog last October, as well as for several follow-up articles. Now Dr. Horton has doubled down on his efforts to keep a lid on the controversy by rejecting a letter that he personally solicited—a major breach of professional courtesy to the 43 well-regarded researchers and clinicians who signed it.

As Dr. Racaniello explained this week at Virology Blog, he submitted the letter on behalf of the group in March, in response to an express invitation from Dr. Horton. The invitation came right after Virology Blog posted an open letter, based on my investigation, that outlined the trial’s major missteps. Dr. Racaniello presumed from the wording of Dr. Horton’s invitation that the letter would, in fact, be published, as did the other signatories. On Monday, having been dissed by The Lancet, Dr. Racaniello finally posted the letter on PubMed Commons. He also called the PACE trial “a sham.” (I’ve called it “a piece of crap.” I might also have referred to it somewhere as “doggie-poo,” but I’m not sure.)

In rejecting the letter that he himself solicited, Dr. Horton certainly appeared to be trying to squelch the growing public controversy over PACE and its recommendations that graded exercise therapy and cognitive behavior therapy are effective treatments for chronic fatigue syndrome (or myalgic encephalomyelitis, CFS, ME, CFS/ME, or ME/CFS, or some other name). But The Lancet’s effort to shield PACE is doomed, not only because the study is so bad but because the emerging science presents a completely different portrait of the illness. On Monday, a paper in Proceedings of the National Academy of Sciences reported distinct patterns of metabolites in the plasma of ME/CFS patients—an important finding that, if confirmed, could finally lead to diagnostic tests. The PNAS paper and other recent research support the conclusion of reports last year from both the Institute of Medicine and National Institutes of Health: ME/CFS is a devastating physiological disease.

Back in January, Columbia statistics professor Andrew Gelman blogged about the harm Dr. Horton was already inflicting on his journal by not addressing the serious questions that serious critics were raising about PACE. The longstanding claim of the PACE authors, The Lancet and the trial’s other defenders—that the opponents were a small cabal of irrational, dangerous, and anti-psychiatry patients—has been exposed as false. The PACE authors, The Lancet and their colleagues wielded this narrative for years to discredit those challenging the trial. To their dismay, this tactic is no longer working.

The Lancet’s decision to reject the Virology Blog letter will only compound the journal’s growing reputational damage over the issue. It also seems deeply short-sighted, in light of last month’s powerful court decision ordering Queen Mary University of London, the professional home of principal PACE investigator Peter White, to release the raw trial data. That would allow others to determine whether the PACE investigators altered their outcome assessment strategies to produce results more likely to get published in The Lancet and other journals. (The answer should not surprise anyone except those in extreme stages of denial.)

The decision involved a freedom-of-information request filed two years ago by Alem Matthees, an Australian patient. Since the published results did not include the results per the assessment methods outlined in the PACE trial protocol, Matthees wanted the data necessary to calculate those results for the two primary outcomes of fatigue and physical function, as well as for the original definition of “recovery.” Last October, the Information Commissioner’s Office, an independent agency, found that QMUL had no grounds for refusing to provide the data. QMUL appealed that ruling to the First-Tier Tribunal, which issued the recent decision.

The U.K. medical-academic-media establishment has wholly endorsed the PACE trial’s unreliable findings and accepted the authors’ unsubstantiated claims that they have been subjected to a concerted campaign of threats and harassment. In contrast, the tribunal demonstrated a refreshing unwillingness to play along. In robust language, the tribunal smacked down the specious arguments raised by the university in its attempt to shield the data from public disclosure.

The chance that any participant could or would be identified from the anonymized data was “remote,” the tribunal found. The scenarios envisioned by QMUL’s data security expert, who sketched out far-fetched strategies that “activist” patients might pursue to re-identify and then harass trial participants, were “grossly exaggerated” and based on “a considerable amount of supposition and speculation,” wrote the tribunal. In fact, noted the tribunal, the only incident of “harassment” proven by QMUL’s experienced legal team was that someone somewhere once heckled Trudie Chalder, a principal PACE investigator who also testified at the tribunal hearing. (I also have some thoughts on Dr. Chalder’s testimony, but will hold those for another time.)

In contrast to the QMUL portrait of PACE opponents, the tribunal cited Virology Blog’s open letter to The Lancet as evidence of a robust scientific debate, noting that “the identity of those questioning the research…was impressive.” The tribunal also noted that QMUL’s decision to share data with friendly researchers but not with others had created the impression that it was acting out of self-interest, not principle. “There is a strong public interest in releasing the data given the continued academic interest so long after the research was published and the seeming reluctance for Queen Mary University to engage with other academics they thought were seeking to challenge their findings,” declared the tribunal in the decision.

The PACE authors, QMUL, Dr. Horton, and The Lancet are stonewalling the obvious, at the expense of millions of sick patients. Although Dr. Horton will never grant me an interview, I want to highlight some of the questions I have about his actions, claims and thoughts, in case someone else gets the chance to talk with him. This list of questions is certainly not exhaustive, but it’s a decent start.

So, Dr. Horton–Here’s what I’d like to ask you:

1) Do you agree that the invitation you sent to Dr. Racaniello certainly implied, even if it didn’t explicitly promise, that The Lancet would publish the letter? Since the letter submitted by Dr. Racaniello, on behalf of himself and 42 other experts, reflected the points made in the Virology Blog open letter that triggered your invitation, what changed your mind about whether it added something to the debate? Since you personally solicited the letter from Dr. Racaniello and his colleagues, do you feel you should have sent him a personal apology, rather than leaving your correspondence editor, Audrey Ceschia, to answer for your behavior?

2) In your invitation to Dr. Racaniello, you noted that the PACE authors would have a chance to respond, alongside the published letter. That was a fair plan. When did that plan of offering them a response morph into the plan of offering them a role in discussions about whether to publish the critical letter in the first place? What impact did their views have on your decision? Did the PACE authors argue, as they have in the past, that they have already answered all these criticisms?

This repeated claim that they have answered all questions is simply untrue. They have never explained, for example, how it is possible to be simultaneously disabled and “within normal range” on an indicator, or why 13% of their participants were already “within normal range” on one or both primary outcomes at baseline. When anyone asks legitimate questions, they evade, ignore or misstate the issues—including in the correspondence following The Lancet’s 2011 paper. (This pattern of non-response is clear from their non-responsive responses to the charges raised in my Virology Blog investigation, and my rebuttal of their non-responses.)

3) What’s your reaction to the First-Tier Tribunal’s decision ordering the release of the PACE trial data? Do you agree with the tribunal’s observation, referring to Virology Blog’s February open letter to you and The Lancet, that the roster of scientists and researchers now publicly questioning the methodology and findings of PACE is “impressive”?

4) Do you think QMUL should spend more public money to appeal the decision?

If QMUL decides to appeal, do you think this will fuel the already-widespread assumption that PACE had null findings per the original protocol methods of assessment?

5) The PACE interventions, as described in The Lancet, are based on the premise that deconditioning rather than any pathological process perpetuates the illness, and that increased activity and a new psychological mind-set will fix the problem. The descriptions of the interventions categorically exclude the possibility of a continuing organic disease as the cause. Do you think this portrait of the illness squares with the view emerging from this week’s study in PNAS and other recent research, including last year’s reports from the Institute of Medicine and the National Institutes of Health?

6) The IOM report identified “exertion intolerance”—the prolonged relapses patients often suffer after minimal activity—as the core symptom of the illness. Yet a key aspect of the PACE rehabilitative interventions, GET and CBT, is urging patients to increase their activity and to interpret a resurgence of symptoms as a transient event, not a sign of deterioration. Given the IOM’s focus on “exertion intolerance” as the central phenomenon, isn’t the PACE approach contraindicated?

7) Does it bother you that you published a paper in which 13% of the sample had already, at baseline, met the outcome thresholds for one or both primary measures? These outcome thresholds, which represented worse health than the entry criteria, were variously defined as being “within normal range” (the Lancet paper), “back to normal” (Dr. Chalder’s statement at the press conference for the Lancet paper), and “a strict criterion for recovery” (the Lancet commentary by colleagues of the PACE authors). Can you point me to any other studies published in The Lancet, or anywhere, in which positive outcome scores represented worse health than entry criteria?

8) Does it bother you personally that the PACE authors did not inform you or your editorial staff that a significant minority of patients were already “within normal range” on at least one primary outcome at baseline? (I presume they didn’t mention it to you because, well, it’s hard to imagine you would have published the paper if you or anyone there had been told about or noticed the inexplicable overlap in the entry criteria and the post-hoc “normal range” thresholds.)

9) During a 2011 Australian radio interview not long after The Lancet published the first PACE results, you said the following about the trial’s critics: “One sees a fairly small, but highly organised, very vocal and very damaging group of individuals who have I would say actually hijacked this agenda and distorted the debate so that it actually harms the overwhelming majority of patients.” Given that the First-Tier Tribunal expressed a different perspective on the stature and credibility of those criticizing PACE, do you still agree with your 2011 characterization of the trial’s opponents?

10) During the same interview, you stated that the PACE trial had undergone “endless rounds of peer review.” Yet the trial was also “fast-tracked” to publication, as indicated on the version of the article in the ScienceDirect database. Can you explain the mechanics of “fast-tracking” a paper to publication while simultaneously subjecting it to “endless rounds of peer review”? How long was the fast-track process for the PACE paper, and how many actual rounds of review did the paper undergo during that endless period?

11) Can you explain why The Lancet’s endless peer review process did not catch the most criticized aspect of the paper—the very obvious fact that participants could be simultaneously disabled enough for entry yet already “within normal range”/“back to normal”/“recovered” on the primary outcomes? Can you explain why the reviewers did not request the authors to provide either the original results promised in the protocol or else sensitivity analyses to assess the impact of the mid-trial changes they introduced?

12) Do you think it was appropriate for the PACE investigators to publish a mid-trial newsletter that promoted the therapies under study and included glowing testimonials from earlier participants about their excellent outcomes? Can you point to other published studies that featured such mid-trial dissemination of personal testimonials and explicit descriptions of outcomes? The PACE authors have stated that the newsletter testimonials did not identify participants’ trial arms and therefore could not have created any bias. Do you agree with this novel and creative argument that influencing all remaining participants in a trial in a positive direction is not a form of bias?

13) Did The Lancet’s peer review process include an evaluation of the PACE trial’s consent forms, given the authors’ explicit promise in the protocol to abide by the Declaration of Helsinki? The Declaration of Helsinki requires investigators to disclose “any possible conflicts of interest” not just to journals but to prospective participants. Yet the PACE consent forms did not disclose the authors’ close financial and consulting ties with the insurance industry. Do you agree this omission violates their protocol promise, and that given this violation the PACE authors failed to obtain legitimate informed consent from their participants? Without legitimate informed consent, did the PACE authors have the right to publish their findings in The Lancet and other journals? What should happen to the PACE papers already published, since the authors do not appear to have legitimate informed consent from participants?

14) Who do you think should be held responsible for the $8,000,000 in U.K. government funds wasted on the PACE trial? Who should be held responsible for the harm it has caused? What responsibility, if any, does The Lancet bear for the debacle?