Trial By Error, Continued: The Real Data

by David Tuller, DrPH

David Tuller is academic coordinator of the concurrent masters degree program in public health and journalism at the University of California, Berkeley.

‘The PACE trial is a fraud.’ Ever since Virology Blog posted my 14,000-word investigation of the PACE trial last October, I’ve wanted to write that sentence. (I should point out that Dr. Racaniello has already called the PACE trial a “sham,” and I’ve already referred to it as “doggie-poo.” I’m not sure that “fraud” is any worse. Whatever word you use, the trial stinks.)

Let me be clear: I don’t mean “fraud” in the legal sense—I’m not a lawyer—but in the sense that it’s a deceptive and morally bankrupt piece of research. The investigators made dramatic changes from the methodology they outlined in their protocol, which allowed them to report purported “results” that were much, much better than those they would have been able to claim under their originally planned methods. Then they reported only the better-looking “results,” with no sensitivity analyses to assess the impact of the changes—the standard statistical approach in such circumstances.

This is simply not allowed in science. It means the reported benefits for cognitive behavior therapy and graded exercise therapy were largely illusory–an artifact of the huge shifts in outcome assessments the authors introduced mid-trial. (That’s putting aside all the other flaws, like juicing up responses with a mid-trial newsletter promoting the interventions under investigation, failing to obtain legitimate informed consent from the participants, etc.)

That PACE suffered from serious methodological deficiencies should have been obvious to anyone who read the studies. That includes the reviewers for The Lancet, which published the PACE results for “improvement” in 2011 after what editor Richard Horton has called “endless rounds of peer-review,” and the journal Psychological Medicine, which published results for “recovery” in 2013. Certainly the deficiencies should have been obvious to anyone who read the trenchant letters and commentaries that patients routinely published in response to the egregious errors committed by the PACE team. Even so, the entire U.K. medical, academic and public health establishments refused to acknowledge what was right before their eyes, finding it easier instead to brand patients as unstable, anti-science, and possibly dangerous.

Thanks to the efforts of the incredible Alem Matthees, a patient in Perth, Australia, the U.K.’s First-Tier Tribunal last month ordered the liberation of the PACE trial data he’d requested under a freedom-of-information request. (The brief he wrote for the April hearing, outlining the case against PACE in great detail, was a masterpiece.) Instead of appealing, Queen Mary University of London, the home institution of lead PACE investigator Peter White, made the right decision. On Friday, September 9, the university announced its intention to comply with the tribunal ruling, and sent the data file to Mr. Matthees. The university has a short window of time before it has to release the data publicly.

I’m guessing that QMUL forced the PACE team’s hand by refusing to allow an appeal of the tribunal decision. I doubt that Dr. White and his colleagues would ever have given up their data willingly, especially now that I’ve seen the actual results. Perhaps administrators had finally tired of the PACE shenanigans, recognized that the study was not worth defending, and understood that continuing to fight would further harm QMUL’s reputation. It must be clear to the university now that its own reputational interests diverge sharply from those of Dr. White and the PACE team. I predict that the split will become more apparent as the trial’s reputation and credibility crumble; I don’t expect QMUL spokespeople to be out there vigorously defending the unacceptable conduct of the PACE investigators.

Last weekend, several smart, savvy patients helped Mr. Matthees analyze the newly available data, in collaboration with two well-known academic statisticians, Bruce Levin from Columbia and Philip Stark from Berkeley. Yesterday, Virology Blog published the group’s findings of the single-digit, non-statistically significant “recovery” rates the trial would have been able to report had the investigators adhered to the methods they outlined in the protocol. That’s a remarkable drop from the original Psychological Medicine paper, which claimed that 22 percent of those in the favored intervention groups achieved “recovery,” compared to seven percent for the non-therapy group.

Now it’s clear: The PACE authors themselves are the anti-science faction. They tortured their data and ended up producing sexier results. Then they claimed they couldn’t share their data because of alleged worries about patient confidentiality and sociopathic anti-PACE vigilantes. The court dismissed these arguments as baseless, in scathing terms. (It should be noted that their ethical concerns for patients did not extend to complying with a critical promise they made in their protocol—to tell prospective participants about “any possible conflicts of interest” in obtaining informed consent. Given this omission, they have no legitimate informed consent for any of their 641 participants and therefore should not be allowed to publish any of their data at all.)

The day before QMUL released the imprisoned data to Mr. Matthees, the PACE authors themselves posted a pre-emptive re-analysis of results for the two primary outcomes of physical function and fatigue, according to the protocol methods. In the Lancet paper, they had revised and weakened their own definition of what constituted “improvement.” With this revised definition, they could report in The Lancet that approximately 60 percent of participants in the cognitive behavior and graded exercise therapy arms “improved” to a clinically significant degree on both fatigue and physical function.

The re-analysis the PACE authors posted last week sought to put the best possible face on the very poor data they were required to release. Yet patients examining the new numbers quickly noted that, under the more stringent definition of “improvement” outlined in the protocol, only about 20 percent in the two groups could be called “overall improvers.” Solely by introducing a more relaxed definition of “improvement,” the PACE team—enabled by The Lancet’s negligence and an apparently inadequate “endless” review process—was able to triple the trial’s reported success rate.

So now it’s time to ask what happens to the papers already published. The editors have made their feelings clear. I have written multiple e-mails to Lancet editor Richard Horton since I first contacted him about my PACE investigation, almost a year before it ran. He never responded until September 9, the day QMUL liberated the PACE data. Given that the PACE authors’ own analysis showed the new data to be significantly less impressive than the results published in The Lancet, I sent Dr. Horton a short e-mail asking when we could expect some sort of addendum or correction to the 2011 paper. He responded curtly: “Mr. Tuller–We have no such plans.”

The editors of Psychological Medicine are Kenneth Kendler of Virginia Commonwealth University and Robin Murray of King’s College London. After I wrote to the journal last December, pointing out the problems, I received the following from Dr. Murray, whose home base is KCL’s Department of Psychosis Studies: “Obviously the best way of addressing the truth or otherwise of the findings is to attempt to replicate them. I would therefore like to encourage you to initiate an attempted replication of the study. This would be the best way for you to contribute to the debate…Should you do this, then Psychological Medicine will be most interested in the findings either positive or negative.”

This was not an appropriate response. I told Dr. Murray it was “disgraceful,” given that the paper was so obviously flawed. This week, I wrote again to Dr. Murray and Dr. Kendler, asking if they now planned to deal with the paper’s problems, given the re-analysis by Matthees et al. In response, Dr. Murray suggested that I submit a re-analysis, based on the released data, and Psychological Medicine would be happy to consider it. “We would, of course, send it out to referees for scientific scrutiny in the same manner as we did for the original paper,” he wrote.

I explained that it was his and the journal’s responsibility to address the problems, whether or not anyone submitted a re-analysis. I also noted that I could not improve on the Matthees re-analysis, which completely rebutted the results reported in Psychological Medicine’s paper. I urged Dr. Murray to contact either Dr. Racaniello or Mr. Matthees to discuss republishing it, if he truly wished to contribute to the debate. Finally, I noted that the peer-reviewers for the original paper had okayed a study in which participants could be disabled and recovered simultaneously, so I wasn’t sure if the journal’s assessment process could be trusted.

(By the way, King’s College London, where Dr. Murray is based, is also the home institution of PACE investigator Trudie Chalder as well as Simon Wessely, a close colleague of the PACE authors and president of the Royal College of Psychiatrists*. That could explain Dr. Murray’s inability or reluctance to acknowledge that the “recovery” paper his journal peer-reviewed and published is meaningless.)

Earlier today, the PACE authors posted a blog post on The BMJ site, their latest effort to salvage their damaged reputations. They make no mention of their massive research errors and focus only on their supposed fears that releasing even anonymous data will frighten away future research participants. They have provided no evidence to back up this unfounded claim, and the tribunal flatly rejected it. They also state that only researchers who present “pre-specified” analysis plans should be able to obtain trial data. This is laughable, since Dr. White and his colleagues abandoned their own pre-specified analyses in favor of analyses they decided they preferred much later on, long after the trial started.

They have continued to refer to their reported analyses, deceptively, as “pre-specified,” even though these methods were revised mid-trial. The following point has been stated many times before, but bears repeating: In an open label trial like PACE, researchers are likely to know very well what the outcome trends are before they review any actual data. So the PACE team’s claim that the changes they made were “pre-specified” because they were made before reviewing outcome data is specious. I have tried to ask them about this issue multiple times, and have never received an answer.

Dr. White, his colleagues, and their defenders don’t yet seem to grasp that the intellectual construct they invented and came to believe in—the PACE paradigm or the PACE enterprise or the PACE cult, take your pick—is in a state of collapse. They are used to saying whatever they want about patients—Internet Abuse! Knife-wielding! Death threats!!—and having it be believed. In responding to legitimate concerns and questions, they have covered up their abuse of the scientific process with non-answers, evasions and misrepresentations—the academic publishing equivalent of “the dog ate my homework.” Amazingly, journal editors, health officials, reporters and others have accepted these non-responsive responses as reasonable and sufficient. I do not.

Now their work is finally being scrutinized the way it should have been by peer reviewers before this damaging research was ever published in the first place. The fallout is not going to be pretty. If nothing else, they have provided a great gift to academia with their $8 million disaster—for years to come, graduate students in the U.S., the U.K. and elsewhere will be dissecting PACE as a classic case study of bad research and mass delusion.

*Correction: The original version of the post mistakenly called the organization the Royal Society of Psychiatrists.

Zika Virus in the USA

On this episode of Virus Watch we cover three Zika virus stories: the first human trial of a Zika virus vaccine, the first local transmission of infection in the United States, and whether the virus is a threat to participants in the 2016 Summer Olympic and Paralympic Games.

TWiV 397: Trial by error

Journalism professor David Tuller returns to TWiV for a discussion of the PACE trial for ME/CFS: the many flaws in the trial, why its conclusions are useless, and why the data must be released and re-examined.

You can find TWiV #397 at microbe.tv/twiv, or listen below.

Download TWiV 397 (67 MB .mp3, 93 min)
Subscribe (free): iTunes, RSS, email, Google Play Music

Become a patron of TWiV!

A request for data from the PACE trial

Mr. Paul Smallcombe
Records & Information Compliance Manager
Queen Mary University of London
Mile End Road
London E1 4NS

Dear Mr Smallcombe:

The PACE study of treatments for ME/CFS has been the source of much controversy since the first results were published in The Lancet in 2011. Patients have repeatedly raised objections to the study’s methodology and results. (Full title: “Comparison of adaptive pacing therapy, cognitive behaviour therapy, graded exercise therapy, and specialist medical care for chronic fatigue syndrome: a randomized trial.”)

Recently, journalist and public health expert David Tuller documented that the trial suffered from many serious flaws that raise concerns about the validity and accuracy of the reported results. We cited some of these flaws in an open letter to The Lancet that urged the journal to conduct a fully independent review of the trial. (Dr. Tuller did not sign the open letter, but he is joining us in requesting the trial data.)

These flaws include, but are not limited to: major mid-trial changes in the primary outcomes that were not accompanied by the necessary sensitivity analyses; thresholds for “recovery” on the primary outcomes that indicated worse health than the study’s own entry criteria; publication of positive testimonials about trial outcomes and promotion of the therapies being investigated in a newsletter for participants; rejection of the study’s objective outcomes as irrelevant after they failed to support the claims of recovery; and the failure to inform participants about investigators’ significant conflicts of interest, and in particular financial ties to the insurance industry, contrary to the trial protocol’s promise to adhere to the Declaration of Helsinki, which mandates such disclosures.

Although the open letter was sent to The Lancet in mid-November, editor Richard Horton has not yet responded to our request for an independent review. We are therefore requesting that Queen Mary University of London provide some of the raw trial data, fully anonymized, under the provisions of the U.K.’s Freedom of Information law.

In particular, we would like the raw data for all four arms of the trial for the following measures: the two primary outcomes of physical function and fatigue (both bimodal and Likert-style scoring), and the multiple criteria for “recovery” as defined in the protocol published in 2007 in BMC Neurology, not as defined in the 2013 paper published in Psychological Medicine. The anonymized, individual-level data for “recovery” should be linked across the four criteria so it is possible to determine how many people achieved “recovery” according to the protocol definition.

We are aware that previous requests for PACE-related data have been rejected as “vexatious.” This includes a recent request from psychologist James Coyne, a well-regarded researcher, for data related to a subsequent study about economic aspects of the illness published in PLoS One—a decision that represents a violation of the PLoS policies on data-sharing.

Our request clearly serves the public interest, given the methodological issues outlined above, and we do not believe any exemptions apply. We can assure Queen Mary University of London that the request is not “vexatious,” as defined in the Freedom of Information law, nor is it meant to harass. Our motive is easy to explain: We are extremely concerned that the PACE studies have made claims of success and “recovery” that appear to go beyond the evidence produced in the trial. We are seeking the trial data based solely on our desire to get at the truth of the matter.

We appreciate your prompt attention to this request.

Sincerely,

Ronald W. Davis, PhD
Professor of Biochemistry and Genetics
Stanford University

Bruce Levin, PhD
Professor of Biostatistics
Columbia University

Vincent R. Racaniello, PhD
Professor of Microbiology and Immunology
Columbia University

David Tuller, DrPH
Lecturer in Public Health and Journalism
University of California, Berkeley

Trial By Error, Continued: PACE Team’s Work for Insurance Companies Is “Not Related” to PACE. Really?

By David Tuller, DrPH

David Tuller is academic coordinator of the concurrent masters degree program in public health and journalism at the University of California, Berkeley.

In my initial story on Virology Blog, I charged the PACE investigators with violating the Declaration of Helsinki, developed in the 1950s by the World Medical Association to protect human research subjects. The declaration mandates that scientists disclose “institutional affiliations” and “any possible conflicts of interest” to prospective trial participants as part of the process of obtaining informed consent.

The investigators promised in their protocol to adhere to this foundational human rights document, among other ethical codes. Despite this promise, they did not tell prospective participants about their financial and consulting links with insurance companies, including those in the disability sector. That ethical breach raises serious concerns about whether the “informed consent” they obtained from all 641 of their trial participants was truly “informed,” and therefore legitimate.

The PACE investigators do not agree that the lack of disclosure is an ethical breach. In their response to my Virology Blog story, they did not even mention the Declaration of Helsinki or explain why they violated it in seeking informed consent. Instead, they defended their actions by noting that they had disclosed their financial and consulting links in the published articles, and had informed participants about who funded the research–responses that did not address the central concern.

“I find their statement that they disclosed to The Lancet but not to potential subjects bemusing,” said Jon Merz, a professor of medical ethics at the University of Pennsylvania. “The issue is coming clean to all who would rely on their objectivity and fairness in conducting their science. Disclosure is the least we require of scientists, as it puts those who should be able to trust them on notice that they may be serving two masters.”

In their Virology Blog response, the PACE team also stated that no insurance companies were involved in the research, that only three of the 19 investigators “have done consultancy work at various times for insurance companies,” and that this work “was not related to the research.” The first statement was true, but direct involvement in a study is of course only one possible form of conflict of interest. The second statement was false. According to the PACE team’s conflict of interest disclosures in The Lancet, the actual number of researchers with insurance industry ties was four—along with the three principal investigators, physiotherapist Jessica Bavington acknowledged such links.

But here, I’ll focus on the third claim–that their consulting work “was not related to the research.” In particular, I’ll examine an online article posted by Swiss Re, a large reinsurance company. The article describes a “web-based discussion group” held with Peter White, the lead PACE investigator, and reveals some of the claims-assessing recommendations arising from that presentation. White included consulting work with Swiss Re in his Lancet disclosure.

The Lancet published the PACE results in February, 2011; the undated Swiss Re article was published sometime within the following year or so. The headline: “Managing claims for chronic fatigue the active way.” (Note that this headline uses “chronic fatigue” rather than “chronic fatigue syndrome,” although chronic fatigue is a symptom common to many illnesses and is quite distinct from the disease known as chronic fatigue syndrome. Understanding the difference between the two would likely be helpful in making decisions about insurance claims.)

The Swiss Re article noted that the illness “can be an emotive subject” and then focused on the implications of the PACE study for assessing insurance claims. It started with a summary account of the findings from the study, reporting that the “active rehabilitation” arms of cognitive behavioral therapy and graded exercise therapy “resulted in greater reduction of patients’ fatigue and larger improvement in physical functioning” than either adaptive pacing therapy or specialist medical care, the baseline condition. (The three intervention arms also received specialist medical care.)

The trial’s “key message,” declared the article, was that “pushing the limits in a therapeutic setting using well described treatment modalities is more effective in alleviating fatigue and dysfunction than staying within the limits imposed by the illness traditionally advocated by ‘pacing.’”

Added the article: “If a CFS patient does not gradually increase their activity, supported by an appropriate therapist, then their recovery will be slower. This seems a simple message but it is an important one as many believe that ‘pacing’ is the most beneficial treatment.”

This understanding of the PACE research—presumably based on information from Peter White’s web-based discussion—was wrong. Pacing is not and has never been a “treatment.” It is also not one of the “four most commonly used therapies,” as the Swiss Re article declared, since it has never been a “therapy” either. It is a self-help method practiced by many patients seeking the best way to manage their limited energy reserves.

The PACE investigators did not test pacing. Instead, the intervention they dubbed “adaptive pacing therapy” was an operationalized version of “pacing” developed specifically for the study. Many patients objected to the trial’s form of pacing as overly prescriptive, demanding and unlike the version they practiced on their own. Transforming an intuitive, self-directed approach into a “treatment” administered by a “therapist” was not a true test of whether the self-help approach is effective, they argued–with significant justification. Yet the Swiss Re article presented “adaptive pacing therapy” as if it were identical to “pacing.”

The Swiss Re article did not mention that the reported improvements from “active rehabilitation” were based on subjective outcomes and were not supported by the study’s objective data. Nor did it report any of the major flaws of the PACE study or offer any reasons to doubt the integrity of the findings.

The article next asked, “What can insurers and reinsurers do to assist the recovery and return to work of CFS claimants?” It then described the conclusions to be drawn from the discussion with White about the PACE trial—the “key takeaways for claims management.”

First, Swiss Re advised its employees, question the diagnosis, because “misdiagnosis is not uncommon.”

The second point was this: “It is likely that input will be required to change a claimant’s beliefs about his or her condition and the effectiveness of active rehabilitation…Funding for these CFS treatments is not expensive (in the UK, around £2,000) so insurers may well want to consider funding this for the right claimants.”

Translation: Patients who believe they have a medical disease are wrong, and they need to be persuaded that they are wrong and that they can get better with therapy. Insurers can avoid large payouts by covering the minimal costs of these treatments for patients vulnerable to such persuasion, given the right “input.”

Finally, the article warned that private therapists might not provide the kinds of “input” required to convince patients they were wrong. Instead of appropriately “active” approaches like cognitive behavior therapy and graded exercise therapy, these therapists might instead pursue treatments that could reinforce claimants’ misguided beliefs about being seriously ill, the article suggested.

“Check that private practitioners are delivering active rehabilitation therapies, such as those described in this article, as opposed to sick role adaptation,” the Swiss Re article advised. (The PACE investigators, drawing on the concept known as “the sick role” in medical sociology, have long expressed concern that advocacy groups enabled patients’ condition by bolstering their conviction that they suffered from a “medical disease,” as Michael Sharpe, another key PACE investigator, noted in a 2002 UNUMProvident report. This conviction encouraged patients to demand social benefits and health care resources rather than focus on improving through therapy, Sharpe wrote.)

Lastly, the Swiss Re article addressed “a final point specific to claims assessment.” A diagnosis of chronic fatigue syndrome, stated the article, provided an opportunity in some cases to apply a mental health exclusion, depending upon the wording of the policy. In contrast, a diagnosis of myalgic encephalomyelitis did not.

The World Health Organization’s International Classification of Diseases, or ICD, which clinicians and insurance companies use for coding purposes, categorizes myalgic encephalomyelitis as a neurological disorder that is synonymous with the terms “post-viral fatigue syndrome” and “chronic fatigue syndrome.” But the Swiss Re article stated that, according to the ICD, “chronic fatigue syndrome” can also “alternatively be defined as neurasthenia which is in the mental health chapter.”

The PACE investigators have repeatedly advanced this questionable idea. In the ICD’s mental health section, neurasthenia is defined as “a mental disorder characterized by chronic fatigue and concomitant physiologic symptoms,” but there is no mention of “chronic fatigue syndrome” as a discrete entity. The PACE investigators (and the Swiss Re article’s writers) believe that the neurasthenia entry encompasses the illness known as “chronic fatigue syndrome,” not just the common symptom of “chronic fatigue.”

This interpretation, however, appears to be at odds with an ICD rule that illnesses cannot be listed in two separate places—a rule confirmed in an e-mail from a WHO official to an advocate who had questioned the PACE investigators’ argument. “It is not permitted for the same condition to be classified to more than one rubric as this would mean that the individual categories and subcategories were no longer mutually exclusive,” wrote the official to Margaret Weston, the pseudonym for a longtime clinical manager in the U.K. National Health Service.

Presumably, after White disseminated the good news about the PACE results at the web-based discussion, Swiss Re’s claims managers felt better equipped to help ME/CFS claimants. And presumably that help included coverage for cognitive behavior therapy and graded exercise therapy so that claimants could receive the critical “input” they needed in order to recognize and accept that they didn’t have a medical disease after all.

In sum, contrary to the investigators’ argument in their response to Virology Blog, the PACE research and findings appear to be very much “related to” insurance industry consulting work. The claim that these relationships did not represent “possible conflicts of interest” and “institutional affiliations” requiring disclosure under the Declaration of Helsinki cannot be taken seriously.

Update 11/17/15 12:22 PM: I should have mentioned in the story that, in the PACE trial, participants in the cognitive behavior therapy and graded exercise therapy arms were no more likely to have increased their hours of employment than those in the other arms. In other words, there was no evidence for the claims presented in the Swiss Re article, based on Peter White’s presentation, that these treatments were any more effective in getting people back to work.

The PACE investigators published this employment data in a 2012 paper in PLoS One. It is unclear whether Peter White already knew these results at the time of his Swiss Re presentation on the PACE results.

Update 11/18/15 6:54 AM: I also forgot to mention in the story that the three principal PACE investigators did not respond to an e-mail seeking comment about their insurance industry work. Lancet editor Richard Horton also did not respond to an e-mail seeking comment.

An open letter to Dr. Richard Horton and The Lancet

Dr. Richard Horton
The Lancet
125 London Wall
London, EC2Y 5AS, UK

Dear Dr. Horton:

In February, 2011, The Lancet published an article called “Comparison of adaptive pacing therapy, cognitive behaviour therapy, graded exercise therapy, and specialist medical care for chronic fatigue syndrome (PACE): a randomized trial.” The article reported that two “rehabilitative” approaches, cognitive behavior therapy and graded exercise therapy, were effective in treating chronic fatigue syndrome, also known as myalgic encephalomyelitis, ME/CFS and CFS/ME. The study received international attention and has had widespread influence on research, treatment options and public attitudes.

The PACE study was an unblinded clinical trial with subjective primary outcomes, a design that requires strict vigilance in order to prevent the possibility of bias. Yet the study suffered from major flaws that have raised serious concerns about the validity, reliability and integrity of the findings. The patient and advocacy communities have known this for years, but a recent in-depth report on this site, which included statements from five of us, has brought the extent of the problems to the attention of a broader public. The PACE investigators have replied to many of the criticisms, but their responses have not addressed or answered key concerns.

The major flaws documented at length in the recent report include, but are not limited to, the following:

*The Lancet paper included an analysis in which the outcome thresholds for being “within the normal range” on the two primary measures of fatigue and physical function demonstrated worse health than the criteria for entry, which already indicated serious disability. In fact, 13 percent of the study participants were already “within the normal range” on one or both outcome measures at baseline, but the investigators did not disclose this salient fact in the Lancet paper. In an accompanying Lancet commentary, colleagues of the PACE team defined participants who met these expansive “normal ranges” as having achieved a “strict criterion for recovery.” The PACE authors reviewed this commentary before publication.

*During the trial, the authors published a newsletter for participants that included positive testimonials from earlier participants about the benefits of the “therapy” and “treatment.” The same newsletter included an article that cited the two rehabilitative interventions pioneered by the researchers and being tested in the PACE trial as having been recommended by a U.K. clinical guidelines committee “based on the best available evidence.” The newsletter did not mention that a key PACE investigator also served on the clinical guidelines committee. At the time of the newsletter, two hundred or more participants—about a third of the total sample–were still undergoing assessments.

*Mid-trial, the PACE investigators changed their protocol methods of assessing their primary outcome measures of fatigue and physical function. This is of particular concern in an unblinded trial like PACE, in which outcome trends are often apparent long before outcome data are seen. The investigators provided no sensitivity analyses to assess the impact of the changes and have refused requests to provide the results per the methods outlined in their protocol.

*The PACE investigators based their claims of treatment success solely on their subjective outcomes. In the Lancet paper, the results of a six-minute walking test—described in the protocol as “an objective measure of physical capacity”–did not support such claims, notwithstanding the minimal gains in one arm. In subsequent comments in another journal, the investigators dismissed the walking-test results as irrelevant, non-objective and fraught with limitations. All the other objective measures in PACE, presented in other journals, also failed to support the claims of treatment success. The results of one objective measure, the fitness step-test, were provided in a 2015 paper in The Lancet Psychiatry, but only in the form of a tiny graph. A request for the step-test data used to create the graph was rejected as “vexatious.”

*The investigators violated their promise in the PACE protocol to adhere to the Declaration of Helsinki, which mandates that prospective participants be “adequately informed” about researchers’ “possible conflicts of interest.” The main investigators have had financial and consulting relationships with disability insurance companies, advising them that rehabilitative therapies like those tested in PACE could help ME/CFS claimants get off benefits and back to work. They disclosed these insurance industry links in The Lancet but did not inform trial participants, contrary to their protocol commitment. This serious ethical breach raises concerns about whether the consent obtained from the 641 trial participants is legitimate.

Such flaws have no place in published research. This is of particular concern in the case of the PACE trial because of its significant impact on government policy, public health practice, clinical care, and decisions about disability insurance and other social benefits. Under the circumstances, it is incumbent upon The Lancet to address this matter as soon as possible.

We therefore urge The Lancet to seek an independent re-analysis of the individual-level PACE trial data, with appropriate sensitivity analyses, from highly respected reviewers with extensive expertise in statistics and study design. The reviewers should be from outside the U.K. and outside the domains of psychiatry and psychological medicine. They should also be completely independent of, and have no conflicts of interests involving, the PACE investigators and the funders of the trial.

Thank you very much for your quick attention to this matter.

Sincerely,

Ronald W. Davis, PhD
Professor of Biochemistry and Genetics
Stanford University

Jonathan C.W. Edwards, MD
Emeritus Professor of Medicine
University College London

Leonard A. Jason, PhD
Professor of Psychology
DePaul University

Bruce Levin, PhD
Professor of Biostatistics
Columbia University

Vincent R. Racaniello, PhD
Professor of Microbiology and Immunology
Columbia University

Arthur L. Reingold, MD
Professor of Epidemiology
University of California, Berkeley

Trial By Error, Continued: Why has the PACE Study’s “Sister Trial” been “Disappeared” and Forgotten?

By David Tuller, DrPH

David Tuller is academic coordinator of the concurrent masters degree program in public health and journalism at the University of California, Berkeley.

In 2010, the BMJ published the results of the Fatigue Intervention by Nurses Evaluation, or FINE. The investigators for this companion trial to PACE, also funded by the Medical Research Council, reported no benefits to ME/CFS patients from the interventions tested.

In medical research, null findings often get ignored in favor of more exciting “positive” results. In this vein, the FINE trial seems to have vanished from the public discussion over the controversial findings from the PACE study. I thought it was important to re-focus some attention on this related effort to prove that “deconditioning” is the cause of the devastating symptoms of ME/CFS. (This piece is also too long but hopefully not quite as dense.)

An update on something else: I want to thank the public relations manager from Queen Mary University of London for clarifying his previous assertion that I did not seek comment from the PACE investigators before Virology Blog posted my story. In an e-mail, he explained that he did not mean to suggest that I hadn’t contacted them for interviews. He only meant, he wrote, that I hadn’t sent them my draft posts for comment before publication. He apologized for the misunderstanding.

I accept his apology, so that’s the end of the matter. In my return e-mail, however, I did let him know I was surprised at the expectation that I might have shared the draft with the PACE investigators before publication. I would not have done that whether or not they had granted me interviews. This is journalism, not peer-review. Different rules.

************************************************************************

In 2003, with much fanfare, the U.K. Medical Research Council announced that it would fund two major studies of non-pharmacological treatments for chronic fatigue syndrome. In addition to PACE, the agency decided to back a second, smaller study called “Fatigue Intervention by Nurses Evaluation,” or FINE. Because the PACE trial was targeting patients well enough to attend sessions at a medical clinic, the complementary FINE study was designed to test treatments for more severely ill patients.

(Chronic fatigue syndrome is also known as myalgic encephalomyelitis, CFS/ME, and ME/CFS, which has now been adopted by U.S. government agencies. The British investigators of FINE and PACE prefer to call it chronic fatigue syndrome, or sometimes CFS/ME.)

Alison Wearden, a psychologist at the University of Manchester, was the lead FINE investigator. She also sat on the PACE Trial Steering Committee and wrote an article about FINE for one of the PACE trial’s participant newsletters. The Medical Research Council and the PACE team referred to FINE as PACE’s “sister” trial. The two studies included the same two primary outcome measures, self-reported fatigue and physical function, and used the same scales to assess them.

The FINE results were published in BMJ in April, 2010. Yet when the first PACE results were published in The Lancet the following year, the investigators did not mention the FINE trial in the text. The trial has also been virtually ignored in the subsequent public debate over the results of the PACE trial and the effectiveness, or lack thereof, of the PACE approach.

What happened? Why has the FINE trial been “disappeared”?

*****

The main goal of the FINE trial was to test a treatment for homebound patients that adapted and combined elements of cognitive behavior therapy and graded exercise therapy, the two rehabilitative therapies being tested in PACE. The approach, called “pragmatic rehabilitation,” had been successfully tested in a small previous study. In FINE, the investigators planned to compare “pragmatic rehabilitation” with another intervention and with standard care from a general practitioner.

Here’s what the Medical Research Council wrote about the main intervention in an article in its newsletter, MRC Network, in the summer of 2003: “Pragmatic rehabilitation…is delivered by specially trained nurses, who give patients a detailed physiological explanation of symptom patterns. This is followed by a treatment programme focussing on graded exercise, sleep and relaxation.”

The second intervention arm featured a treatment called “supportive listening,” a patient-centered and non-directive counseling approach. This treatment presumed that patients might improve if they felt that the therapist empathized with them, took their concerns seriously, and allowed them to find their own approach to addressing the illness.

The Medical Research Council committed 1.3 million pounds to the FINE trial. The study was conducted in northwest England, with 296 patients recruited from primary care. Each intervention took place over 18 weeks and consisted of ten sessions–five home visits lasting up to 90 minutes alternating with five telephone conversations of up to 30 minutes.

As in the PACE trial, patients were selected using the Oxford criteria for chronic fatigue syndrome, defined as the presence of six months of medically unexplained fatigue, with no other symptoms required. The Oxford criteria have been widely criticized for yielding heterogeneous samples, and a report commissioned by the National Institutes of Health this year recommended that the case definition be “retired” for that reason.

More specific case definitions for the illness require the presence of core symptoms like post-exertional malaise, cognitive problems and sleep disorders, rather than just fatigue per se. Because the symptom called post-exertional malaise means that patients can suffer severe relapses after minimal exertion, many patients and advocacy organizations consider increases in activity to be potentially dangerous.

To be eligible for the FINE trial, participants needed to score 70 or less out of 100 on the physical function scale, the Medical Outcomes Study 36-Item Short Form Health Survey, known as the SF-36. They also needed to score a 4 or more out of 11 on the 11-item Chalder Fatigue Scale, with each item scored as either 0 or 1. On the fatigue scale, a higher score indicated greater fatigue.
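For readers keeping track of the two thresholds, here is a minimal sketch of the entry criteria as described above. The function name and structure are my own illustration, not the investigators’ code; note that the two scales run in opposite directions (lower SF-36 means worse function, higher Chalder means worse fatigue):

```python
def eligible_for_fine(sf36_physical: int, chalder_bimodal: int) -> bool:
    """Hypothetical check of the FINE entry thresholds.

    sf36_physical: SF-36 physical function score, 0-100
                   (higher = better functioning).
    chalder_bimodal: Chalder Fatigue Scale, bimodal scoring, 0-11
                     (higher = greater fatigue).
    """
    return sf36_physical <= 70 and chalder_bimodal >= 4

# Substantial disability plus fatigue qualifies...
print(eligible_for_fine(60, 6))   # True
# ...while good physical function rules a person out, however fatigued.
print(eligible_for_fine(85, 10))  # False
```

Both cutoffs are inclusive, so a participant scoring exactly 70 on the SF-36 and exactly 4 on the fatigue scale would have qualified.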

Among other measures, the trial also included a key objective outcome–the “time to take 20 steps, (or number of steps taken, if this is not achieved) and maximum heart rate reached on a step-test.”

Participants were to be assessed on these measures at 20 weeks, which was right after the end of the treatment period, and again at 70 weeks, which was one year after the end of treatment. The FINE trial protocol, published in the journal BMC Medicine in 2006, warned that “short-term assessments of outcome in a chronic health condition such as CFS/ME can be misleading” and declared the 70-week assessment to be the “primary outcome point.”

*****

The theoretical model behind the FINE trial and pragmatic rehabilitation paralleled the PACE concept. The physical symptoms were presumed to be the result not of a pathological disease process but of “deconditioning” or “dysregulation” caused by sedentary behavior, accompanied by disrupted sleep cycles and stress. The sedentary behavior was itself presumed to be triggered by patients’ “unhelpful” conviction that they suffered from a progressive medical illness. Counteracting the deconditioning involved re-establishing normal sleep cycles, reducing anxiety levels and gently increasing physical exertion, even if patients remained homebound.

“The treatment [pragmatic rehabilitation] is based on a model proposing that CFS/ME is best understood as a consequence of physiological dysregulation associated with inactivity and disturbance of sleep and circadian rhythms,” stated the FINE trial protocol. “We have argued that these conditions…are often maintained by illness beliefs that lead to exercise-avoidance. The essential feature of the treatment is the provision of a detailed explanation for patients’ symptoms, couched in terms of the physiological dysregulation model, from which flows the rationale for a graded return to activity.”

On the FINE trial website, a 2004 presentation about pragmatic rehabilitation explained the illness in somewhat simpler terms, comparing it to “very severe jetlag.” After explaining how and why pragmatic rehabilitation led to physical improvement, the presentation offered this hopeful message, in boldface: “There is no disease–you have a right to full health. This is a good news diagnosis. Carefully built up exercise can reverse the condition. Go for 100% recovery.”

In contrast, patients, advocates and many leading scientists have completely rejected the PACE and FINE approach. They believe the evidence overwhelmingly points to an immunological and neurological disorder triggered by an initial infection or some other physiological insult. Last month, the National Institutes of Health ratified this perspective when it announced a major new push to seek biomedical answers to the disease, which it refers to as ME/CFS.

As in PACE, patients in the FINE trial were issued different treatment manuals depending upon their assigned study arm. The treatment manual for pragmatic rehabilitation repeatedly informed participants that the therapy could help them get better—even though the trial itself was designed to test the effectiveness of the therapy. (In the PACE trial, the manuals for the cognitive behavior and graded therapy arms also included many statements promoting the idea that the therapies could successfully treat the illness.)

“This booklet has been written with the help of patients who have made a full recovery from Chronic Fatigue Syndrome,” stated the FINE pragmatic rehabilitation manual on its second page. “Facts and information which were important to them in making this recovery have been included.” The manual noted that the patients who helped write it had been treated at the Royal Liverpool University Hospital but did not include more specific details about their “full recovery” from the illness.

Among the “facts and information” included in the manual were assertions that the trial participants, contrary to what they might themselves believe, had no persistent viral infection and “no underlying serious disease.” The manual promised them that pragmatic rehabilitation could help them overcome the illness and the deconditioning perpetuating it. “Instead of CFS controlling you, you can start to regain control of your body and your life,” stated the manual.

Finally, as in PACE, participants were encouraged to change their beliefs about their condition by “building the right thoughts for your recovery.” Participants were warned that “unhelpful thoughts”—such as the idea that continued symptoms indicated the presence of an organic disease and could not be attributed to deconditioning—“can put you off parts of the treatment programme and so delay or prevent recovery.”

The supportive listening manual did not similarly promote the idea that “recovery” from the illness was possible. During the sessions, the manual explained, “The listener, your therapist, will provide support and encourage you to find ways to cope by using your own resources to change, manage or adapt to difficulties…She will not tell you what to do, advise, coach or direct you.”

*****

A qualitative study about the challenges of the FINE research process, published by the investigators in the journal Implementation Science in 2011, shed light on how much the theoretical framework and the treatment approaches frustrated and angered trial participants. According to the interviews with some of the nurses, nurse supervisors, and participants involved in FINE, the home visits often bristled with tension over the different perceptions of what caused the illness and which interventions could help.

“At times, this lack of agreement over the nature of the condition and lack of acceptance as to the rationale behind the treatment led to conflict,” noted the FINE investigators in the qualitative paper. “A particularly difficult challenge of interacting with patients for the nurses and their supervisors was managing patients’ resistance to the treatment.”

One participant in the pragmatic rehabilitation arm, who apparently found it difficult to do what was expected, attributed this resistance to the insistence that deconditioning caused the symptoms and that activity would reverse them. “If all that was standing between me and recovery was the reconditioning I could work it out and do it, but what I have got is not just a reconditioning problem,” the participant said. “I have got something where there is damage and a complete lack of strength actually getting into the muscles and you can’t work with what you haven’t got in terms of energy.”

Another participant in the pragmatic rehabilitation arm was more blunt. “I kept arguing with her [the nurse administering the treatment] all the time because I didn’t agree with what she said,” said the participant, who ended up dropping out of the trial.

Some participants in the supportive listening arm also questioned the value of the treatment they were receiving, according to the study. “I mostly believe it was more physical than anything else, and I didn’t see how talking could truthfully, you know, if it was physical, do anything,” said one.

In fact, the theoretical orientation alienated some prospective participants as well, according to interviews the investigators conducted with some patients who declined to enter the trial. “It [the PR intervention] insisted that physiologically there was nothing wrong,” said one such patient. “There was nothing wrong with my glands, there was nothing wrong, that it was just deconditioned muscles. And I didn’t believe that…I can’t get well with treatment you don’t believe in.”

When patients challenged or criticized the therapeutic interventions, the study found, nurses sometimes felt their authority and expertise to be under threat. “They are testing you all the time,” said one nurse. Another reported: “That anger…it’s very wearing and demoralizing.”

One nurse remembered the difficulties she faced with a particular participant. “I used to go there and she would totally block me, she would sit with her arms folded, total silence in the house,” said the nurse. “It was tortuous for both of us.”

At times, nurses themselves responded to these difficult interactions with bouts of anger directed at the participants, according to a supervisor.

“Their frustration has reached the point where they sort of boiled over,” said the supervisor. “There is sort of feeling that the patient should be grateful and follow your advice, and in actual fact, what happens is the patient is quite resistant and there is this thing like you know, ‘The bastards don’t want to get better.’”

*****

BMJ published the FINE results in 2010. The FINE investigators found no statistically significant benefits to either pragmatic rehabilitation or supportive listening at 70 weeks. Despite these null findings one year after the end of the 18-week course of treatment, the mean scores of those in the pragmatic rehabilitation arm did show at 20 weeks a “clinically modest” but statistically significant reduction in fatigue—a drop of just over one point on the 11-point fatigue scale. Even with this slight improvement, participants remained much more fatigued than the initial entry threshold for disability, and the benefits were no longer statistically significant by the final assessment.

Despite the null findings at 70 weeks, the authors put a positive gloss on the results, reporting first in the abstract that fatigue was “significantly improved” at 20 weeks. Given the very modest one-point change in average fatigue scores, perhaps the FINE investigators intended to report instead that there was a “statistically significant improvement” at 20 weeks—an accurate phrase with a somewhat different meaning.

The abstract included another interesting linguistic element. While the trial protocol had designated the 70-week assessment as “the primary outcome point,” the abstract of the paper itself now stated that “the primary clinical outcomes were fatigue and physical functioning at the end of treatment (20 weeks) and 70 weeks from recruitment.”

After redefining their primary outcome points to include the 20-week as well as the 70-week assessment, the abstract promoted the positive effects found at the earlier point as the study’s main finding. Only after communicating the initial benefits did they note that these advantages for pragmatic rehabilitation later wore off. The FINE paper cited no oversight committee approval for this expanded interpretation of the trial’s primary outcome points to include the 20-week assessment, nor did it mention the protocol’s caveat about the “misleading” nature of short-term assessments in chronic health conditions.

In fact, within the text of the paper, the investigators noted that the “pre-designated outcome point” was 70 weeks. But they did not explain why they then decided to highlight most in the abstract what was not the pre-designated but instead a post-hoc “primary” outcome point—the 20-week assessment.

A BMJ editorial that accompanied the FINE trial also accentuated the positive results at 20 weeks rather than the bad news at 70 weeks. According to the editorial’s subhead, pragmatic rehabilitation “has a short term benefit, but supportive listening does not.” The editorial did not note that this was not the pre-designated primary outcome point. The null results for that outcome point—the 70-week assessment—were not mentioned until later in the editorial.

*****

Patients and advocates soon began criticizing the study in the “rapid response” section of the BMJ website, citing its theoretical framework, the use of the broad Oxford criteria as a case definition, and the failure to provide the step-test outcomes, among other issues.

“The data provide strong evidence that the anxiety and deconditioning model of CFS/ME on which the trial is predicated is either wrong or, at best, incomplete,” wrote one patient. “These results are immensely important because they demonstrate that if a cure for CFS/ME is to be found, one must look beyond the psycho-behavioural paradigm.”

Another patient wrote that the study was “a wake-up call to the whole of the medical establishment” to take the illness seriously. One predicted “that there will those who say that the this trial failed because the patients were not trying hard enough.”

A physician from Australia sought to defend the interests not of patients but of the English language, decrying the lack of hyphens in the paper’s full title: “Nurse led, home based self help treatment for patients in primary care with chronic fatigue syndrome: randomised controlled trial.”

“The hyphen is a coupling between carriages of words to ensure unambiguous transmission of thought,” wrote the doctor. “Surely this should read ‘Nurse-led, home-based, self-help…’

“Lest English sink further into the Great Despond of ambiguity and non-sense [hyphen included in the original comment], may I implore the co-editors of the BMJ to be the vigilant watchdogs of our mother tongue which at the hands of a younger ‘texting’ generation is heading towards anarchy.” [The original comment did not include the expected comma between ‘tongue’ and ‘which.’]

*****

In a response on the BMJ website a month after publishing the study, the FINE investigators reported that they had conducted a post-hoc analysis with a different kind of scoring for the Chalder Fatigue Scale.

Instead of scoring the answers as 0 or 1 using what was called a bimodal scale, they rescored them using what was called a continuous scale, with values ranging from 0 to 3. The full range of possible scores now ran from 0 to 33, rather than 0 to 11. (As collected, the data for the Chalder Fatigue Scale allowed for either scoring system; however, the original entry criterion of 4 on the bimodal scale would translate into a range from 4 to as high as 19 on the revised scale.)
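The difference between the two scoring systems is easy to see in a short sketch. The responses here are hypothetical, and both functions assume the standard 11-item scale with each answer coded 0 through 3; bimodal scoring collapses each answer to 0 (the two milder options) or 1 (the two more severe options):

```python
def bimodal_score(responses):
    """Collapse each 0-3 answer to 0 (options 0-1) or 1 (options 2-3),
    then sum: range 0-11 over 11 items."""
    return sum(1 if r >= 2 else 0 for r in responses)

def likert_score(responses):
    """Sum the raw 0-3 answers: range 0-33 over 11 items."""
    return sum(responses)

# Hypothetical participant reporting moderate fatigue on every item:
responses = [2] * 11
print(bimodal_score(responses))  # 11 -- already at the bimodal ceiling
print(likert_score(responses))   # 22 -- still 11 points below the ceiling
```

Because many different response patterns map onto the same bimodal total, the continuous scale can detect shifts (and reach statistical significance) that the bimodal scale cannot—which is precisely why switching methods after the data are in, rather than pre-specifying one, is so problematic.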

With the revised scoring, they now reported a “clinically modest, but statistically significant effect” of pragmatic rehabilitation at 70 weeks—a reduction from baseline of about 2.5 points on the 0 to 33 scale. This final score represented some increase in fatigue from the 20-week interim assessment point.

In their comment on the website, the FINE investigators now reaffirmed that the 70-week assessment was “our primary outcome point.” This statement conformed to the protocol but differed from the suggestion in the BMJ paper that the 20-week results also represented “primary” outcomes. Given that the post-hoc rescoring allowed the investigators to report statistically significant results at the 70-week endpoint, this zig-zag back to the protocol language was perhaps not surprising.

In their comment, the FINE investigators also explained that they did not report their step-test results—their one objective measure of physical capacity–“due to a significant amount of missing data.” They did not provide an explanation for the missing data. (One obvious possible reason for missing data on an objective fitness test is that participants were too disabled to perform it at all.)

The FINE investigators did not address the question of whether the title of their paper should have included hyphens.

In the rapid comments, Tom Kindlon, a patient and advocate from a Dublin suburb, responded to the FINE investigators’ decision to report their new post-hoc analysis of the fatigue scale. He noted that the investigators themselves had chosen the bimodal scoring system for their study rather than the continuous method.

“I’m sure many pharmacological and non-pharmacological studies could look different if investigators decided to use a different scoring method or scale at the end, if the results weren’t as impressive as they’d hoped,” he wrote. “But that is not normally how medicine works. So, while it is interesting that the researchers have shared this data, I think the data in the main paper should be seen as the main data.”

*****

The FINE investigators have published a number of other papers arising from their study. In a 2013 paper on mediators of the effects of pragmatic rehabilitation, they reported that there were no differences between the three groups on the objective measure of physical capacity, the step test, despite their earlier decision not to publish the data in the BMJ paper.

Wearden herself presented the trial as a high point of her professional career in a 2013 interview for the website of the University of Manchester’s School of Psychological Sciences. “I suppose the thing I did that I’m most proud of is I ran a large treatment trial of pragmatic rehabilitation treatment for patients with chronic fatigue syndrome,” she said in the interview. “We successfully carried that trial out and found a treatment that improved patients’ fatigue, so that’s probably the thing that I’m most proud of.”

The interview did not mention that the improvement at 20 weeks proved transient, or that a statistically significant benefit at 70 weeks emerged only after the investigators performed a post-hoc analysis and rescored the fatigue scale.

*****

The Science Media Centre, a self-styled “independent” purveyor of information about science and scientific research to journalists, has consistently shown an interest in research on what it calls CFS/ME. It held a press briefing for the first PACE results published in The Lancet in 2011, and has helped publicize the release of subsequent studies from the PACE team.

However, the Science Media Centre does not appear to have done anything to publicize the 2010 release of the FINE trial, despite its interest in the topic. A search of the center’s website for the lead FINE investigator, Alison Wearden, yielded no results. And a search for CFS/ME indicated that the first study embraced by the center’s publicity machine was the 2011 Lancet paper.

That might help explain why the FINE trial was virtually ignored by the media. A search on the LexisNexis database for “PACE trial” and “chronic fatigue syndrome” yielded 21 “newspaper” articles (I use the quotation marks here because I don’t know if that number includes articles on newspaper websites that did not appear in the print product; the accuracy of the number is also in question because the list did not include two PACE-related articles that I wrote for The New York Times).

Searches on the database combining “chronic fatigue syndrome” with either “FINE trial” or “pragmatic rehabilitation” yielded no results. (I used the version of LexisNexis Academic available to me through the University of California library system.)

Other researchers have also paid scant attention to the FINE trial, especially when compared to the PACE study. According to Google Scholar, the 2011 PACE paper in The Lancet has been cited 355 times. In contrast, the 2010 FINE paper in BMJ has only been cited 39 times.

*****

The PACE investigators likely exacerbated this virtual disappearance of the FINE trial by their decision not to mention it in their Lancet paper, despite its longstanding status as a “sister trial” and the relevance of the findings to their own study of cognitive behavior therapy and graded exercise therapy. The PACE investigators have not explained their reasons for ignoring the FINE trial. (I wrote about this lapse in my Virology Blog story, but in their response the PACE investigators did not mention it.)

This absence is particularly striking in light of the decision made by the PACE investigators to drop their protocol method of assessing the Chalder Fatigue Scale. In the protocol, their primary fatigue outcome was based on bimodal scoring on the 11-item fatigue scale. The protocol included continuous scoring on the fatigue scale, with the 0 to 33 scale, as a secondary outcome.

In the PACE paper itself, the investigators announced that they had dropped the bimodal scoring in favor of the continuous scoring “to more sensitively test our hypotheses of effectiveness.” They did not explain why they simply didn’t provide the findings under both scoring methods, since the data as collected allowed for both analyses. They also did not cite any references to support this mid-trial decision, nor did they explain what prompted it.

They certainly did not mention that PACE’s “sister” study, the FINE trial, had reported null results at the 70-week endpoint—that is, until the investigators rescored the data using a continuous scale rather than the bimodal scale used in the original paper.

The three main PACE investigators—psychiatrists Peter White and Michael Sharpe, and behavioral psychologist Trudie Chalder—did not respond to an e-mail request for comment on why their Lancet paper did not mention the FINE study, especially in reference to their post-hoc decision to change the method of scoring the fatigue scale. Lancet editor Richard Horton also did not respond to an e-mail request for an interview on whether he believed the Lancet paper should have included information about the FINE trial and its results.

*****

Update 11/9/15 10:46 PM: According to a list of published and in-process papers on the FINE trial website, the main FINE study was rejected by The Lancet before being accepted by BMJ, suggesting that the journal was at least aware of the trial well before it published the PACE study. That raises further questions about the absence of any mention of FINE and its null findings in the text of the PACE paper.

Trial By Error, Continued: Did the PACE Study Really Adopt a ‘Strict Criterion’ for Recovery?

By David Tuller, DrPH

David Tuller is academic coordinator of the concurrent masters degree program in public health and journalism at the University of California, Berkeley.

First, some comments: When Virology Blog posted my very, very, very long investigation of the PACE trial two weeks ago, I hoped that the information would gradually leak out beyond the ME/CFS world. So I’ve been overwhelmed by the response, to say the least, and technologically unprepared for my viral moment. I didn’t even have a photo on my Twitter profile until yesterday.

Given the speed at which events are unfolding, I thought it made sense to share a few thoughts, prompted by some of the reactions and comments and subsequent developments.

I approached this story as a journalist, not an academic. I read as much as I could and talked to a lot of people. I did not set out to write the definitive story about the PACE trial, document every single one of its many oddities, or credit everyone involved in bringing these problems to light. My goal was to explain what I recognized as some truly indefensible flaws in a clear, readable way that would resonate with scientists, public health and medical professionals, and others not necessarily immersed in the complicated history of this terrible disease.

To do that most effectively and maximize the impact, I had to find a story arc, some sort of narrative, to carry readers through 14,000 words and many dense explanations of statistical and epidemiologic concepts. After a couple of false starts, I settled on a patient and advocate, Tom Kindlon, as my “protagonist”—someone readers could understand and empathize with. Tom is smart, articulate, and passionate about good science–and he knows the PACE saga inside out. He was a terrific choice whose presence in the story, I think, made reading it a lot more bearable.

That decision in no way implied that Tom was the only possible choice or even the best possible choice. I built my work on the work of others, including many that James Coyne recently referred to as “citizen-scientists.” Tom’s dedication to tracking and critiquing the research has been heroic, given his health struggles. But the same could be said, and should be said, of many others who have fought to raise awareness about the problems with PACE since the trial was announced in 2003.

The PACE study has generated many peer-reviewed publications and a healthy paper trail. My account of the story, notwithstanding its length, has significant gaps. I haven’t finished writing about PACE, so I hope to fill in some of them myself—as with today’s story on the 2011 Lancet commentary written by colleagues of Peter White, the lead PACE investigator. But I have no monopoly on this story, nor would I want one—the stakes are too high and too many years have already been wasted. Given the trial’s wealth of problems and its enormous influence and ramifications, there are plenty of PACE-related stories left for everyone to tackle.

I am, obviously, indebted to Tom—for his good humor, his willingness to trust me given so many unfair media portrayals of ME/CFS, and his patience when I peppered him with question after question via Facebook, Twitter, and e-mail.

I am also indebted to my friend Valerie Eliot Smith. We met when I began research on this project in July 2014; since then, she has become an indispensable resource, offering transatlantic support across multiple domains. Valerie has given me invaluable legal counsel, making sure that what I was writing was verifiable and, just as important, defensible—especially in the U.K. (I don’t want to know how many billable hours she has invested!) She has provided keen strategic advice. She has been a terrific editor, whose input greatly improved the story’s flow and readability. She has done all this, I realize, at some risk to her own health. I am lucky she decided to join me on this unexpected journey.

I would like to thank, as well, Dr. Malcolm Hooper, Margaret Williams, Dr. Nigel Speight, Dr. William Weir, Natalie Boulton, Lois Addy, and the Countess of Mar for their help and hospitality while I was in England researching the story last year. I will always cherish the House of Lords plastic bag that I received from the Countess. (The bag was stuffed with PACE-related reports and documents.)

So far, Richard Horton, the editor of The Lancet, has not responded to the criticisms documented in my story. As for the PACE investigators, they provided their own response last Friday on Virology Blog, followed by my rebuttal.

In seeking that opportunity for the PACE investigators to respond, a public relations representative from Queen Mary University of London, or QMUL, had approached Virology Blog. In e-mails to Dr. Racaniello, the public relations representative had suggested that “misinformation” and “inaccuracies” in my article had triggered social media “abuse” and could cause “reputational damage.”

These are serious charges, not to be taken lightly. Last Friday’s exchange has hopefully put an end to such claims. It seems unlikely that calling rituximab an “anti-inflammatory” rather than an “immunomodulatory” drug would trigger social media abuse or cause reputational damage.

Last week, in an effort to expedite Virology Blog’s publication of the PACE investigators’ response, the QMUL public relations representative further charged that I had not sought their input before the article was posted. This accusation goes to the heart of my professional integrity as a journalist. It is also untrue—as the public relations representative would have known had he read my piece or talked to the PACE investigators themselves. (Whether earlier publication of their response would have helped their case is another question.)

Disseminating false information to achieve goals is not usually an effective PR strategy. I have asked the QMUL public relations representative for an explanation as to why he conveyed false information to Dr. Racaniello in his attempt to advance the interests of the PACE investigators. I have also asked for an apology.

Since 2011, the PACE investigators have released several papers, repeatedly generating enthusiastic news coverage about the possibility of “recovery”–coverage that has often drawn conclusions beyond what the publications themselves have reported.

The PACE researchers can’t control the media and don’t write headlines. But in at least one case, their actions appeared to stimulate inaccurate media accounts–and they made no apparent effort immediately afterwards to correct the resulting international coverage. The misinformation spread to medical and public health journals as well.

(I mentioned this episode, regarding the Lancet “comment” that accompanied the first PACE results in 2011, in my excruciatingly long series two weeks ago on Virology Blog. However, that series focused on the PACE study, and the comment itself raised additional issues that I did not have the chance to explore. Because the Lancet comment had such an impact on media coverage, and ultimately most likely on patient care, I felt it was important to return to it.)

The Lancet comment, written by Gijs Bleijenberg and Hans Knoop from the Expert Centre for Chronic Fatigue at Radboud University Nijmegen in the Netherlands, was called “Chronic fatigue syndrome: where to PACE from here?” It reported that 30 percent of those receiving the two rehabilitative interventions favored by the PACE investigators–cognitive behavior therapy and graded exercise therapy–had “recovered.” Moreover, these participants had “recovered” according to what the comment stated was the “strict criterion” used by the PACE study itself.

Yet the PACE investigators themselves did not make this claim in their paper. Rather, they reported that participants in the two rehabilitative arms were more likely to improve and to be within what they referred to as “the normal range” for physical function and fatigue, the study’s two primary outcome measures. (“Normal range” is a statistical concept that has no inherent connection to “normal functioning” or “recovery.” More on that below.)

In addition, the comment did not mention that 15 percent of those receiving only the baseline condition of “specialist medical care” also “recovered” according to the same criterion. Thus, only half of this 30 percent “recovery” rate could actually be attributed to the interventions.

The PACE investigators themselves reviewed the comment before publication.

Thanks to this inaccurate account of the PACE study’s reported findings, the claim of a 30 percent “recovery” rate dominated much of the news coverage. Trudie Chalder, one of the key PACE investigators, reinforced the message of the Lancet comment when she declared at the press conference announcing the PACE results that participants in the two rehabilitative interventions got “back to normal.”

Just as the PACE paper did not report that anyone had “recovered,” it also did not report that anyone got “back to normal.”

Three months later, the PACE authors acknowledged in correspondence in The Lancet that the paper did not discuss “recovery” at all and that they would be presenting “recovery” data in a subsequent paper. They did not explain, however, why they had not taken earlier steps to correct the apparently inaccurate news coverage about how patients in the trial had “recovered” and gotten “back to normal.”

*****

It is not unusual for journals, when they publish studies of significance, to also commission commentaries or editorials that discuss the implications of the findings. It is also not unusual for colleagues of a study’s authors to be asked to write such commentaries. In this case, Bleijenberg and Knoop were colleagues of Peter White, the lead PACE investigator.  In 2007, the three had published, along with two other colleagues, a paper called “Is a full recovery possible after cognitive behavior therapy for chronic fatigue syndrome?” in the journal Psychotherapy and Psychosomatics.

(In their response last Friday to my Virology Blog story, the PACE investigators noted that they had published a “correction” to clarify that the 2011 Lancet paper was not about “recovery”; presumably, they were referring to the Lancet correspondence three months later. In their response to Virology Blog, they blamed the misconception on an “editorial…written by others.” But they did not mention that those “others” were White’s colleagues. In their response, they also did not explain why they did not “correct” this “recovery” claim during their pre-publication review of the comment, nor why Chalder spoke at the press conference of participants getting “back to normal.”)

In the Lancet comment, Bleijenberg and Knoop hailed the PACE team for its work. And here’s what they wrote about the trial’s primary outcome measures for physical function and fatigue: “PACE used a strict criterion for recovery: a score on both fatigue and physical function within the range of the mean plus (or minus) one standard deviation of a healthy person’s score.”

This statement was problematic for a number of reasons. Given that the PACE paper itself made no claims for “recovery,” Bleijenberg and Knoop’s assertion that it “used” any criterion for “recovery” at all was false. The PACE study protocol had outlined four specific criteria that constituted what the investigators referred to as “recovery.” Two of them were thresholds on the physical function and fatigue measures, but the Lancet paper did not present data for the other criteria and so could not report “recovery” rates.

Instead, the Lancet paper reported the rates of participants in all the groups who finished the study within what the researchers referred to as “the normal ranges” for physical function and fatigue. But as noted immediately by some in the patient community, these “normal ranges” featured a bizarre paradox: the thresholds for being “within the normal range” on both the physical function and fatigue scales indicated worse health than the entry thresholds required to demonstrate enough disability to qualify for the trial in the first place.

*****

To many patients and other readers, for the Lancet comment to refer to “normal range” scales in which entry and outcome criteria overlapped as a “strict criterion for recovery” defied logic and common sense. (According to data not included in the Lancet paper but obtained later by a patient through a freedom-of-information request, 13 percent of the total sample was already “within normal range” for physical function, fatigue or both at baseline, before any treatment began.)

In the Lancet comment, Bleijenberg and Knoop also noted that these “normal ranges” were based on “a healthy person’s score.” In other words, the “normal ranges” were purportedly derived from responses to the physical function and fatigue questionnaires by population-based samples of healthy people.

But this statement was also at odds with the facts. The source for the fatigue scale was a population of attendees at a medical practice—a population that could easily have had more health issues than a sample from the general population. And as the PACE authors themselves acknowledged in the Lancet correspondence several months after the initial publication, the SF-36 population-based scores they used to determine the physical function “normal range” were from an “adult” population, not the healthier, working-age population they had inaccurately referred to in The Lancet. (An “adult” population includes the elderly.)

The Lancet has never corrected this factual mistake in the PACE paper itself. The authors had described—inaccurately–how they derived a key outcome for one of their two primary measures. This error indisputably made the results appear better than they were, but only those who scrutinized the correspondence were aware of this discrepancy.

The Lancet comment, like the Lancet paper itself, has also never been corrected to indicate that the source population for the SF-36 responses was not a “healthy” population after all, but an “adult” one that included many elderly. The comment’s parallel claim that the source population for the fatigue scale “normal range” was “healthy” as well has also not been corrected.

Richard Horton, the editor of The Lancet, did not respond to a request for an interview to discuss whether he agreed that the “normal range” thresholds represented “a strict criterion for recovery.” Peter White, Trudie Chalder and Michael Sharpe, the lead PACE investigators, and Gijs Bleijenberg, the lead author of the Lancet comment, also did not respond to requests for interviews for this story.

*****

How did the PACE study end up with “normal ranges” in which participants could get worse and still be counted as having achieved the designated thresholds?

Here’s how: The investigators committed a major statistical error in determining the PACE “normal ranges.” They used a standard statistical formula designed for normally distributed populations — that is, populations in which most people score somewhere in the middle, with the rest falling off evenly on each side. When normally distributed populations are graphed, they form the classic bell curve. In PACE, however, the data they were analyzing was far from normally distributed. The population-based responses to the physical function and fatigue questionnaires were skewed—that is, clustered toward the healthy end rather than symmetrically spread around a mean value.

With a normally distributed set of data, a “normal range” using the standard formula used in PACE—taking the mean, plus/minus one standard deviation–contains 68 percent of the values. But when the values are clustered toward one end, as in the source populations for physical function and fatigue, a larger percentage ends up being included in a “normal range” calculated using this same formula. Other statistical methods can be used to calculate 68 percent of the values when a dataset does not form a normal distribution.
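The distortion is easy to demonstrate. The sketch below uses made-up scores clustered toward the healthy end of a 0–100 scale (hypothetical data for illustration, not the actual SF-36 survey results) and applies the same mean-minus-one-standard-deviation formula:

```python
import statistics

# Hypothetical scores on a 0-100 physical function scale, clustered
# toward the healthy end -- illustrative only, not the real SF-36 data.
scores = [100] * 40 + [95] * 25 + [90] * 15 + [80] * 8 + [60] * 6 + [40] * 4 + [20] * 2

mean = statistics.mean(scores)
sd = statistics.pstdev(scores)   # population standard deviation
lower = mean - sd                # "normal range" floor: mean minus 1 SD

share_within = sum(s >= lower for s in scores) / len(scores)
print(f"mean = {mean:.1f}, SD = {sd:.1f}, lower threshold = {lower:.1f}")
print(f"share of sample at or above the threshold: {share_within:.0%}")
# In a normal distribution, roughly 84% of values sit at or above mean - 1 SD;
# here the skew pushes the floor down to about 72 and captures 88% of the sample.
```

With a symmetric distribution the same formula would place the floor near the bulk of the scores; with data skewed toward the healthy end, the large standard deviation drags the floor far below where most of the sample sits, which is exactly the kind of anomaly the patient community identified.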

If the standard formula is used on a population-based survey with scores clustered toward the healthier end, the result is an expanded “normal range” that pushes the lower threshold even lower, as happened with the PACE physical function scale. And in PACE, the threshold wasn’t just low–it was lower than the score required for entry into the trial. This score, of course, already represented severe disability, not “recovery” or being “back to normal”—and certainly not a “strict criterion” for anything.

Bleijenberg and Knoop, the comment authors, were themselves aware of the challenges faced in calculating accurate “normal ranges,” since the issue was addressed in the 2007 paper they co-wrote with Peter White. In this paper, White, Bleijenberg, and Knoop discussed the concerns related to determining a “normal range” from population data that was heavily clustered toward the healthy end of the scale. The paper noted that using the standard formula “assumed a normal distribution of scores” and generated different results under the “violation of the assumptions of normality.”

*****

Despite the caveats the three scientists included in this 2007 paper, Bleijenberg and Knoop’s 2011 Lancet comment did not mention these concerns about distortion arising from applying the standard statistical formula to values that were not normally distributed. (White and his colleagues also did not mention this problem in the PACE study itself.)

Moreover, the 2007 paper from White, Bleijenberg, and Knoop had identified a score of 80 on the SF-36 as representing “recovery”—a much higher “recovery” threshold than the SF-36 score of 60 that Bleijenberg and Knoop now declared to be a “strict criterion.” In the Lancet comment, the authors did not mention this major discrepancy, nor did they explain how and when they had changed their minds about whether an SF-36 score of 60 or 80 best represented “recovery.” (In 2011, White and his colleagues also did not mention this discrepancy between the score for “recovery” in the 2007 paper and the much lower “normal range” threshold in the PACE paper.)

Along with the PACE paper, The Lancet comment caused an uproar in the patient and advocacy communities–especially since the claim that 30 percent of participants in the rehabilitative arms “recovered” per a “strict criterion” was widely disseminated.

The comment apparently caused some internal consternation at The Lancet as well. In an e-mail to Margaret Williams, the pseudonym for a longtime clinical manager in the National Health Service who had complained about the Lancet comment, an editor at the journal, Zoe Mullan, agreed that the reference to “recovery” was problematic.

“Yes I do think we should correct the Bleijenberg and Knoop Comment, since White et al explicitly state that recovery will be reported in a separate report,” wrote Mullan in the e-mail. “I will let you know when we have done this.”

No correction was made, however.

*****

In 2012, to press the issue, the Countess of Mar pursued a complaint about the comment’s claim of “recovery” with the (now-defunct) Press Complaints Commission, a regulatory body established by the media industry that was authorized to investigate the conduct of news organizations. The countess, who frequently championed the cause of the ME/CFS patient community in Parliament’s House of Lords, had long questioned the scientific basis for cognitive behavior therapy and graded exercise therapy, and she believed the Lancet comment’s claims of “recovery” contradicted the study itself.

In defending itself to the Press Complaints Commission, The Lancet acknowledged the earlier suggestion by a journal editor that the comment should be corrected.

“I can confirm that our editor of our Correspondence section, Zoe Mullan, did offer her personal opinion at the time, in which she said that she thought that we should correct the Comment,” wrote Lancet deputy editor Astrid James to the Press Complaints Commission, in an e-mail.

“Zoe made a mistake in not discussing this approach with a more senior member of our editorial team,” continued James in the e-mail. “Now, however, we have discussed this case at length with all members of The Lancet’s senior editorial team, and with Zoe, and we do not agree that there is a need to publish a correction.”

The Lancet now rejected the notion that the comment was inaccurate. Despite the explicit language in the comment identifying the “normal range” thresholds as the PACE trial’s own “strict criterion for recovery,” The Lancet argued in its response to the Press Complaints Commission that the authors were only expressing their personal opinion about what constituted “recovery.”

In other words, according to The Lancet, Bleijenberg and Knoop were not describing—wrongly–the conclusions of the PACE paper itself. They were describing their own interpretation of the findings. Therefore, the comment was not inaccurate and did not need to be corrected.

(In its response to the Press Complaints Commission, The Lancet did not explain why thresholds that purportedly represented a “strict criterion for recovery” overlapped with the entry criteria for disability.)

*****

The Press Complaints Commission issued its findings in early 2013. The commission agreed with the Countess of Mar that the statement about “recovery” in the Lancet comment was inaccurate. But the commission gave a slightly different reason. The commission accepted the Lancet’s argument that Bleijenberg and Knoop were trying to express their own opinion. The problem, the commission ruled, was that the comment itself didn’t make that point clear.

“The authors of the comment piece were clearly entitled to take a view on how ‘recovery’ should be defined among the patients in the trial,” wrote the commission. However, continued the decision: “The authors of the comment had failed to make clear that the 30 per cent figure for ‘recovery’ reflected their view that function within ‘normal range’ was an appropriate way of ‘operationalising’ recovery–rather than statistical analysis by the researchers based on the definition for recovery provided. This was a distinction of significance, particularly in the context of a comment on a clinical trial published in a medical journal. The comment was misleading on this point and raised a breach of Clause 1 (Accuracy) of the Code.”

However, this determination seemed based on a misreading of what Bleijenberg and Knoop had actually written: “PACE used a strict criterion for recovery.” That phrasing did not suggest that the authors were expressing their own opinion about “recovery.” Rather, it was a statement about how the PACE study itself purportedly defined “recovery.” And the statement was demonstrably untrue.

Compounding the confusion, the Press Complaints Commission decision noted that the Lancet comment had been discussed with the PACE investigators prior to publication. Since the phrase “strict criterion for recovery” had thus apparently been vetted by the PACE team itself, it remained unclear why the commission determined that Bleijenberg and Knoop were only expressing their own opinion.

The commission’s response left other questions unanswered. The commission noted that the Countess had pointed out that the “recovery” score for physical function cited by the commenters was lower than the score required for entry. Despite this obvious anomaly, the commission did not indicate whether it had asked The Lancet or Bleijenberg and Knoop to explain how such a nonsensical scale could be used to assess “recovery.”

*****

Notwithstanding the inaccuracy of the Lancet comment’s “recovery” claim, the commission also found that the journal had already taken “sufficient remedial action” to rectify the problem. The commission noted that the correspondence published after the trial had provided a prominent forum to debate concerns over the definition of “recovery.” The decision also noted that the PACE authors themselves had clarified in the correspondence that the actual “recovery” findings would be published in a subsequent paper.

In ruling that “sufficient remedial action” had already been taken, however, the commission did not mention the potential damage that already might have been caused by this inaccurate “recovery” claim. Given the comment’s declaration that 30 percent of participants in the cognitive behavior and graded exercise therapy arms had “recovered” according to a “strict criterion,” the message received worldwide dissemination—even though the PACE paper itself made no such claim.

Medical and public health journals, conflating the Lancet comment and the PACE study itself, also transmitted the 30 percent “recovery” rate directly to clinicians and others who treat or otherwise deal with ME/CFS patients.

The BMJ referred to the approximately 30 percent of patients who met the “normal range” thresholds as “cured.” A study in BMC Health Services Research cited PACE as having demonstrated “a recovery rate of 30-40%”—months after the PACE authors had issued their “correction” that their paper did not report on “recovery” at all. (Another mystery about the BMC Health Services Research report is the source of the 40 percent figure for “recovery.”) A 2013 paper in PLoS One similarly cited the PACE study—not the Lancet comment—and noted that 30 percent achieved a “full recovery.”

Given that relapsing after too much exertion is a core symptom of the illness, it is impossible to calculate the possible harms that could have arisen from this widespread dissemination of misinformation to health care professionals—all based on the flawed claim from the comment that 30 percent of participants had recovered according to the PACE study’s “strict criterion for recovery.”

And that “strict criterion,” it should be remembered, allowed participants to get worse and still be counted as better.

David Tuller responds to the PACE investigators

David Tuller’s three-installment investigation of the PACE trial for chronic fatigue syndrome, “Trial By Error,” has received enormous attention. Although the PACE investigators declined David’s efforts to interview them, they have now requested the right to reply. Today, virology blog posts their response to David’s story, and below, his response to their response. 

According to the communications department of Queen Mary University, the PACE investigators have been receiving abuse on social media as a result of David Tuller’s posts. When I published Mr. Tuller’s articles, my intent was to provide a forum for discussion of the controversial PACE results. Abuse of any kind should not have been, and must not be, part of that discourse. -vrr


Last December, I offered to fly to London to meet with the main PACE investigators to discuss my many concerns. They declined the offer. Dr. White cited my previous coverage of the issue as the reason and noted that “we think our work speaks for itself.” Efforts to reach out to them for interviews two weeks ago also proved unsuccessful.

After my story ran on virology blog last week, a public relations manager for medicine and dentistry in the marketing and communications department of Queen Mary University e-mailed Dr. Racaniello. He requested, on behalf of the PACE authors, the right to respond. (Queen Mary University is Dr. White’s home base.)

That response arrived Wednesday. My first inclination, when I read it, was that I had already rebutted most of their criticisms in my 14,000-word piece, so it seemed like a waste of time to engage in further extended debate.

Later in the day, however, the public relations manager for medicine and dentistry from the marketing and communications department of Queen Mary University e-mailed Dr. Racaniello again, with an urgent request to publish the response as soon as possible. The PACE investigators, he said, were receiving “a lot of abuse” on social media as a result of my posts, so they wanted to correct the “misinformation” as soon as possible.

Because I needed a day or two to prepare a careful response to the PACE team’s rebuttal, Dr. Racaniello agreed to post them together on Friday morning.

On Thursday, Dr. Racaniello received yet another appeal from the public relations manager for medicine and dentistry from the marketing and communications department of Queen Mary University. Dissatisfied with the Friday publishing timeline, he again urged expedited publication because “David’s blog posts contain a number of inaccuracies, may cause a considerable amount of reputational damage, and he did not seek comment from any of the study authors before the virology blog was published.”

The charge that I did not seek comment from the authors was at odds with the facts, as Dr. Racaniello knew. (It is always possible to argue about accuracy and reputational damage.) Given that much of the argument for expedited posting rested on the public relations manager’s obviously “dysfunctional cognition” that I had unfairly neglected to provide the PACE authors with an opportunity to respond, Dr. Racaniello decided to stick with his pre-planned posting schedule.

Before addressing the PACE investigators’ specific criticisms, I want to apologize sincerely to Dr. White, Dr. Chalder, Dr. Sharpe and their colleagues on behalf of anyone who might have interpreted my account of what went wrong with the PACE trial as license to target the investigators for “abuse.” That was obviously not my intention in examining their work, and I urge anyone engaging in such behavior to stop immediately. No one should have to suffer abuse, whether online or in the analog world, and all victims of abuse deserve enormous sympathy and compassion.

However, in this case, it seems I myself am being accused of having incited a campaign of social media “abuse” and potentially causing “reputational damage” through purportedly inaccurate and misinformed reporting. Because of the seriousness of these accusations, and because such accusations have a way of surfacing in news reports, I feel it is prudent to rebut the PACE authors’ criticisms in far more detail than I otherwise would. (I apologize in advance to the obsessives and others who feel they need to slog through this rebuttal; I urge you to take care not to over-exert yourself!)

In their effort to correct the “misinformation” and “inaccuracies” in my story about the PACE trial, the authors make claims and offer accounts similar to those they have previously presented in published comments and papers. In the past, astonishingly, journal editors, peer reviewers, reporters, public health officials, and the British medical and academic establishments have accepted these sorts of non-responsive responses as adequate explanations for some of the study’s fundamental flaws. I do not.

None of what they have written in their response actually addresses or resolves the core issues that I wrote about last week. They have ignored many of the questions raised in the article. In their response, they have also not mentioned the devastating criticisms of the trial from top researchers from Columbia, Stanford, University College London, and elsewhere. They have not addressed why major reports this year from the Institute of Medicine and the National Institutes of Health have presented portraits of the disease starkly at odds with the PACE framework and approach.

I will ignore their overview of the findings and will focus on the specific criticisms of my work. (I will, however, mention here that my piece discussed why their claims of cost-effectiveness for cognitive behavior therapy and graded exercise therapy are based on inaccurate statements in a paper published in PLoS One in 2012).

13% of patients had already “recovered” on entry into the trial

I did not write that 13% of the participants were “recovered” at baseline, as the PACE authors state. I wrote that they were “recovered” or already at the “recovery” thresholds for two specific indicators, physical function and fatigue, at baseline—a different statement, and an accurate one.

The authors acknowledge, in any event, that 13% of the sample was “within normal range” at baseline. For the 2013 paper in Psychological Medicine, these “normal range” thresholds were re-purposed as two of the four required “recovery” criteria.

And that raises the question: Why, at baseline, was 13% of the sample “within normal range” or “recovered” on any indicator in the first place? Why did entry criteria for disability overlap with outcome scores for being “within the normal range” or “recovered”? The PACE authors have never provided an explanation of this anomaly.

In their response, the authors state that they outlined other criteria that needed to be met for someone to be called “recovered.” This is true; as I wrote last week, participants needed to meet “recovery” criteria on four different indicators to be considered “recovered.” The PACE authors did not provide data for two of the indicators in the 2011 Lancet paper, so in that paper they could not report results for “recovery.”

However, at the press conference presenting the 2011 Lancet paper, Trudie Chalder referred to people who met the overlapping disability/”normal range” thresholds as having gotten “back to normal”—an explicit “recovery” claim. In a Lancet comment published along with the PACE study itself, colleagues of the PACE team referred to these bizarre “normal range” thresholds for physical function and fatigue as a “strict criterion for recovery.” As I documented, the Lancet comment was discussed with the PACE authors before publication; the phrase “strict criterion for recovery” obviously survived that discussion.

Much of the coverage of the 2011 paper reported that patients got “back to normal” or “recovered,” based on Dr. Chalder’s statement and the Lancet comment. In the months after this inaccurate news coverage appeared, the PACE authors made no public attempt to correct the record; only later did they publish a letter in the Lancet. In the response to Virology Blog, they say that they were discussing “normal ranges” in the Lancet paper, and not “recovery.” Yet they have not explained why Dr. Chalder spoke about participants getting “back to normal” and why their colleagues wrote that the nonsensical “normal range” thresholds represented a “strict criterion for recovery.”

Moreover, they still have not responded to the essential questions: How does this analysis make sense? What are the implications for the findings if 13% are already “within normal range” or “recovered” on one of the two primary outcome measures? How can they be “disabled” enough on the two primary measures to qualify for the study if they’re already “within normal range” or “recovered”? And why did the PACE team use the wrong statistical methods for calculating their “normal ranges” when they knew that method was wrong for the data sources they had?
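To make the arithmetic of this anomaly concrete, here is a minimal sketch using the approximate figures reported for the trial (a reference-population mean and standard deviation for the SF-36 physical function scale, and the trial entry cutoff); the variable names are mine, and the numbers are rounded illustrations rather than a re-analysis:

```python
# Sketch of the "normal range" arithmetic at issue, using approximate
# published figures: SF-36 physical function in the reference population
# had a mean of roughly 84 and an SD of roughly 24, and trial entry
# required a score of 65 or below.

POP_MEAN, POP_SD = 84, 24   # approximate reference-population statistics
ENTRY_CUTOFF = 65           # disability threshold to enter the trial

# "Within normal range" was defined as within one SD of the population mean:
normal_range_floor = POP_MEAN - POP_SD   # 84 - 24 = 60

# The anomaly: the "normal range" floor sits BELOW the entry cutoff, so a
# participant scoring between 60 and 65 is simultaneously disabled enough
# to enroll and already "within normal range" on this primary outcome.
overlap = normal_range_floor <= ENTRY_CUTOFF
print(normal_range_floor, overlap)   # 60 True
```

The deeper statistical problem is that a mean-minus-one-SD “normal range” presumes a roughly symmetric distribution; population physical function scores are heavily skewed toward the ceiling, so this formula pushes the floor far lower than a genuinely “normal” score.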

Bias was caused by a newsletter for patients giving quotes from patients and mentioning UK government guidance on management. A key investigator was on the guideline committee.

The PACE authors apparently believe it is appropriate to disseminate positive testimonials during a trial as long as the therapies or interventions are not mentioned. (James Coyne dissected this unusual position yesterday.)

This is their argument: “It seems very unlikely that this newsletter could have biased participants as any influence on their ratings would affect all treatment arms equally.” Apparently, the PACE investigators believe that if you bias all the arms of your study in a positive direction, you are not introducing bias into your study. It is hard to know what to say about this argument.

Furthermore, the PACE authors argue that the U.K. government’s new treatment guidelines had been widely reported. Therefore, they contend, it didn’t matter that–in the middle of a trial to test the efficacy of cognitive behavior therapy and graded exercise therapy–they had informed participants that the government had already approved cognitive behavior therapy and graded exercise therapy “based on the best available evidence.”

They are wrong. They introduced an uncontrolled, unpredictable co-intervention into their study, and they have no idea what the impact might have been on any of the four arms.

In their response, the PACE authors note that the participants’ newsletter article, in addition to cognitive behavior therapy and graded exercise therapy, included a third intervention, Activity Management. As they correctly note, I did not mention this third intervention in my Virology Blog story. The PACE authors now write: “These three (not two as David Tuller states) therapies were the ones being tested in the trial, so it is hard to see how this might lead to bias in the direction of one or other of these therapies.”

This statement is nonsense. Their third intervention was called “Adaptive Pacing Therapy,” and they developed it specifically for testing in the PACE trial. It is unclear why they now state that their third intervention was Activity Management, or why they think participants would know that Activity Management was synonymous with Adaptive Pacing Therapy. After all, cognitive behavior therapy and graded exercise therapy also involve some form of “activity management.” Precision in language matters in science.

Finally, the investigators say that Jessica Bavington, a co-author of the 2011 paper, had already left the PACE team before she served on the government committee that endorsed the PACE therapies. That might be, but it is irrelevant to the question that I raised in my piece: whether her dual role presented a conflict of interest that should have been disclosed to participants in the newsletter article about the U.K. treatment guidelines. The PACE newsletter article presented the U.K. guideline committee’s work as if it were independent of the PACE trial itself, when it was not.

Bias was caused by changing the two primary outcomes and how they were analyzed

 The PACE authors seem to think it is acceptable to change methods of assessing primary outcome measures during a trial as long as they get committee approval, announce it in the paper, and provide some sort of reasonable-sounding explanation as to why they made the change. They are wrong.

They need as well to justify the changes with references or citations that support their new interpretations of their indicators, and they need to conduct sensitivity analyses to assess the impact of the changes on their findings. Then they need to explain why their preferred findings are more robust than the initial, per-protocol findings. They did not take these steps for any of the many changes they made from their protocol.

The PACE authors mention the change from bimodal to Likert-style scoring on the Chalder Fatigue Scale. They repeat their previous explanation of why they made this change. But they have ignored what I wrote in my story—that the year before PACE was published, its “sister” study, called the FINE trial, had no significant findings on the physical function and fatigue scales at the end of the trial and only found modest benefits in a post-hoc analysis after making the same change in scoring that PACE later made. The FINE study was not mentioned in PACE. The PACE authors have not explained why they left out this significant information about their “sister” study.
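For readers unfamiliar with the two scoring schemes, here is a minimal sketch of the change on the 11-item Chalder Fatigue Questionnaire; the response patterns are hypothetical illustrations, not trial data:

```python
# Each of the 11 items has four response options (0-3). Bimodal scoring
# collapses them to 0,0,1,1 (total range 0-11); Likert scoring keeps
# 0,1,2,3 (total range 0-33).

def bimodal(responses):
    # The 0,0,1,1 mapping: responses of 2 or 3 score 1, else 0.
    return sum(1 if r >= 2 else 0 for r in responses)

def likert(responses):
    # The 0,1,2,3 mapping: responses are summed as given.
    return sum(responses)

# A hypothetical participant whose answers shift only WITHIN the
# collapsed categories (3 -> 2, 1 -> 0) between two assessments:
before = [3, 3, 3, 2, 2, 2, 1, 1, 1, 1, 1]
after  = [2, 2, 2, 2, 2, 2, 0, 0, 0, 0, 0]

print(bimodal(before), bimodal(after))  # 6 6   -> no change under bimodal scoring
print(likert(before), likert(after))    # 20 12 -> apparent improvement under Likert
```

This is why the choice of scheme matters: answers can register as improvement under Likert scoring while the bimodal score specified in the protocol does not move at all; a sensitivity analysis would simply report results under both schemes.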

Regarding the abandonment of the original method of assessing the physical function scores, this is what they say in their response: “We decided this composite method [their protocol method] would be hard to interpret clinically, and would not answer our main question of comparing effectiveness between treatment arms. We therefore chose to compare mean scores of each outcome measure between treatment arms instead.” They mention that they received committee approval, and that the changes were made before examining the outcome data.

The authors have presented these arguments previously. However, they have not responded to the questions I raised in my story. Why did they not report any sensitivity analyses for the changes in methods of assessing the primary outcome measures? (Sensitivity analyses can assess how changes in assumptions or variables impact outcomes.) What prompted them to reconsider their assessment methods in the middle of the trial? Were they concerned that a mean-based measure, unlike their original protocol measure, did not provide any information about proportions of participants who improved or got worse? Any information about proportions of participants who got better or worse came from post-hoc analyses—one of which was the perplexing “normal range” analysis.

Moreover, this was an unblinded trial, and researchers generally have an idea of outcome trends before examining outcome data. When the PACE authors made the changes, did they already have an idea of outcome trends? They have not answered that question.

Our interpretation was misleading after changing the criteria for determining recovery

The PACE authors relaxed all four of their criteria for “recovery” in their 2013 paper and cited no committee approval for this overall redefinition of this critical concept. Three of these relaxations involved expanded thresholds; the fourth involved splitting one category into two sub-categories—one less restrictive and one more restrictive. The authors gave the full results for the less restrictive category of “recovery.”

The PACE authors now say that they changed the “recovery” thresholds on three of the variables “since we believed that the revised thresholds better reflected recovery.” Again, they apparently think that simply stating their belief that the revisions were better justifies making the changes.

Let’s review for a second. The physical function threshold for “recovery” fell from 85 out of 100 in the protocol, to a score of 60 in the 2013 paper. And that “recovery” score of 60 was lower than the entry score of 65 to qualify for the study. The PACE authors have not explained how the lower score of 60 “better reflected recovery”—especially since the entry score of 65 already represented serious disability. Similar problems afflicted the fatigue scale “recovery” threshold.

The PACE authors also report that “we included those who felt ‘much’ (and ‘very much’) better in their overall health” as one of the criteria for “recovery.” This is true. They are referring to the Clinical Global Impression scale. In the protocol, participants needed to score a 1 (“very much better”) on this scale to be considered “recovered” on that indicator. In the 2013 paper, participants could score a 1 (“very much better”) or a 2 (“much better”). The PACE authors provided no citations to support this expanded interpretation of the scale. They simply explained in the paper that they now thought “much better” reflected the process of recovery and so those who gave a score of 2 should also be considered to have achieved the scale’s “recovery” threshold.

With the fourth criterion—not meeting any of the three case definitions used to define the illness in the study—the PACE authors gave themselves another option. Those who did not meet the study’s main case definition but still met one or both of the other two were now eligible for a new category called “trial recovery.” They did not explain why or when they made this change.

The PACE authors provided no sensitivity analyses to measure the impact of the significant changes in the four separate criteria for “recovery,” as well as in the overall re-definition. And remember, participants at baseline could already have achieved the “recovery” requirements for one or two of the four criteria—the physical function and fatigue scales. And 13% of them already had.

Requests for data under the freedom of information act were rejected as vexatious

The PACE authors have rejected requests for the results per the protocol and many other requests for documents and data as well—at least two for being “vexatious,” as they now report. In my story, I incorrectly stated that requests for per-protocol data were rejected as “vexatious” [see clarification below]. In fact, earlier requests for per-protocol data were rejected for other reasons.

One recent request rejected as “vexatious” involved the PACE investigators’ 2015 paper in The Lancet Psychiatry. In this paper, they published their last “objective” outcome measure (except for wages, which they still have not published)—a measure of fitness called a “step-test.” But they only published a tiny graph on a page with many other tiny graphs, not the actual numbers from which the graph was drawn.

The graph was too small to extract any data, but it appeared that the cognitive behavior therapy and graded exercise therapy groups did worse than the other two. A request for the step-test data from which they created the graph was rejected as “vexatious.”

However, I apologize to the PACE authors that I made it appear they were using the term “vexatious” more extensively in rejecting requests for information than they actually have been. I also apologize for stating incorrectly that requests for per protocol data specifically had been rejected as “vexatious” [see clarification below].

This is probably a good time to address the PACE authors’ repeated refrain that concerns about patient confidentiality prevent them from releasing raw data and other information from the trial. They state: “The safe-guarding of personal medical data was an undertaking enshrined in the consent procedure and therefore is ethically binding; so we cannot publicly release these data. It is important to remember that simple methods of anonymization does [sic] not always protect the identity of a person, as they may be recognized from personal and medical information.”

This argument against the release of data doesn’t really hold up, given that researchers share data all the time without compromising confidentiality. Really, it’s not that difficult to do!

(It also bears noting that the PACE authors’ dedication to participant protection did not extend to fulfilling their protocol promise to inform participants of their “possible conflicts of interest”—see below.)

Subjective and objective outcomes

The PACE authors included multiple objective measures in their protocol. All of them failed to demonstrate real treatment success or “recovery.” The extremely modest improvement on the walking test in the graded exercise therapy arm still left participants more disabled than people with pacemakers, patients with cystic fibrosis, and relatively healthy women in their 70s.

The authors now write: “We interpreted these data in the light of their context and validity.”

What the PACE team actually did was to dismiss their own objective data as irrelevant or not actually objective after all. In doing so, they cited various reasons they should have considered before including these measures in the study as “objective” outcomes. They provide one example in their response. They selected employment data as an objective measure of function, and then—as they explain in their response, and have explained previously–they decided afterwards that it wasn’t an objective measure of function after all, for this and that reason.

The PACE authors consider this interpreting data “in light of their context and validity.” To me, it looks like tossing data they don’t like.

What they should do, but have not, is to ask whether the failure of all their objective measures might mean they should start questioning the meaning, reliability and validity of their reported subjective results.

There was a bias caused by many investigators’ involvement with insurance companies and a failure to declare links with insurance companies in information regarding consent

The PACE authors here seriously misstate the concerns I raised in my piece. I did not assert that bias was caused by their involvement with insurance companies. I asserted that they violated an international research ethics document and broke a commitment they made in their protocol to inform participants of “any possible conflicts of interest.” Whether bias actually occurred is not the point.

In their approved protocol, the authors promised to adhere to the Declaration of Helsinki, a foundational human rights document that is explicit on what constitutes legitimate informed consent: Prospective participants must be “adequately informed” of “any possible conflicts of interest.” The PACE authors now suggest this disclosure was unnecessary because 1) the conflicts weren’t really conflicts after all; 2) they disclosed these “non-conflicts” as potential conflicts of interest in the Lancet and other publications, 3) they had a lot of investigators but only three had links with insurers, and 4) they informed participants about who funded the research.

These responses are not serious. They do nothing to explain why the PACE authors broke their own commitment to inform participants about “any possible conflicts of interest.” It is not acceptable to promise to follow a human rights declaration, receive approvals for a study, and then ignore inconvenient provisions. No one is much concerned about PACE investigator #19; people are concerned because the three main PACE investigators have advised disability insurers that cognitive behavior therapy and graded exercise therapy can get claimants off benefits and back to work.

That the PACE authors made the appropriate disclosures to journal editors is irrelevant; it is unclear why they are raising this as a defense. The Declaration of Helsinki is about protecting human research subjects, not about protecting journal editors and journal readers. And providing information to participants about funding sources, however ethical that might be, is not the same as disclosing information about “any possible conflicts of interest.” The PACE authors know this.

Moreover, the PACE authors appear to define “conflict of interest” quite narrowly. Just because the insurers were not involved in the study itself does not mean there is no conflict of interest and does not alleviate the PACE authors of the promise they made to inform trial participants of these affiliations. No one required them to cite the Declaration of Helsinki in their protocol as part of the process of gaining approvals for their trial.

As it stands, the PACE study appears to have no legitimate informed consent for any of the 641 participants, per the commitments the investigators themselves made in their protocol. This is a serious ethical breach.

I raised other concerns in my story that the authors have not addressed. I will save everyone much grief and not go over them again here.

I want to acknowledge two additional minor errors. In the last section of the piece, I referred to the drug rituximab as an “anti-inflammatory.” While it does have anti-inflammatory effects, rituximab should more properly be referred to as an “immunomodulatory” drug.

Also, in the first section of the story, I wrote that Dr. Chalder and Dr. Sharpe did not return e-mails I sent them last December, seeking interviews. However, during a recent review of e-mails from last December, I found a return e-mail from Dr. Sharpe that I had forgotten about. In the e-mail, Dr. Sharpe declined my request for an interview.

I apologize to Dr. Sharpe for suggesting he hadn’t responded to my e-mail last December.

Clarification: In a decision on a data request, the UK Information Commissioner’s Office noted last year that Queen Mary University of London “has advised that the effect of these requests [for PACE-related material] has been that the team involved in the PACE trial, and in particular the professor involved, now feel harassed and believe that the requests are vexatious in nature.” In other words, whatever the stated reason for denying requests, White and his colleagues regarded them all as “vexatious” by definition. Therefore, the statement that the investigators rejected the requests for data as being “vexatious” is accurate, and I retract my previous apology.

PACE trial investigators respond to David Tuller

Professors Peter White, Trudie Chalder and Michael Sharpe (co-principal investigators of the PACE trial) respond to the three blog posts by David Tuller, published here on 21st, 22nd and 23rd October 2015, about the PACE trial.

Overview

The PACE trial was a randomized controlled trial of four non-pharmacological treatments for 641 patients with chronic fatigue syndrome (CFS) attending secondary care clinics in the United Kingdom (UK) (http://www.wolfson.qmul.ac.uk/current-projects/pace-trial). The trial found that individually delivered cognitive behaviour therapy (CBT) and graded exercise therapy (GET) were more effective than both adaptive pacing therapy (APT), when added to specialist medical care (SMC), and SMC alone. The trial also found that CBT and GET were cost-effective, safe, and were about three times more likely to result in a patient recovering than the other two treatments.

There are a number of published systematic reviews and meta-analyses that support these findings from both before and after the PACE trial results were published (Whiting et al, 2001, Edmonds et al, 2004, Chambers et al, 2006, Malouff et al, 2008, Price et al, 2008, Castell et al, 2011, Larun et al, 2015, Marques et al, 2015, Smith et al, 2015). We have published all the therapist and patient manuals used in the trial, which can be down-loaded from the trial website (http://www.wolfson.qmul.ac.uk/current-projects/pace-trial).

We will only address David Tuller’s main criticisms. Most of these are often repeated criticisms that we have responded to before, and we will argue that they are unjustified.

Main criticisms:

13% of patients had already “recovered” on entry into the trial

Some 13% of patients entering the trial did have scores within normal range (i.e. within one standard deviation of the population means) for either one or both of the primary outcomes of fatigue and physical function – but this is clearly not the same as being recovered; we have published a correction after an editorial, written by others, implied that it was (White et al, 2011a). In order to be considered recovered, patients also had to:

  • Not meet case criteria for CFS
  • Not meet eligibility criteria for either of the primary outcome measures for entry into the trial
  • Rate their overall health (not just CFS) as “much” or “very much” better.

It would therefore be impossible to be recovered and eligible for trial entry (White et al, 2013). 

Bias was caused by a newsletter for patients giving quotes from patients and mentioning UK government guidance on management. A key investigator was on the guideline committee

It is considered good practice to publish newsletters for participants in trials, so that they are kept fully informed both about the trial’s progress and topical news about their illness. We published four such newsletters during the trial, which can all be found at http://www.wolfson.qmul.ac.uk/current-projects/pace-trial. The newsletter referred to is the one found at this link: http://www.wolfson.qmul.ac.uk/images/pdfs/participantsnewsletter3.pdf.

As can be seen no specific treatment or therapy is named in this newsletter and we were careful to print feedback from participants from all four treatment arms. All newsletters were approved by the independent research ethics committee before publication. It seems very unlikely that this newsletter could have biased participants as any influence on their ratings would affect all treatment arms equally.

The same newsletter also mentioned the release of the UK National Institute for Health and Care Excellence guideline for the management of this illness (this institute is independent of the UK government). This came out in 2007 and received much media interest, so most patients would already have been aware of it. Apart from describing its content in summary form we also said “The guidelines emphasize the importance of joint decision making and informed choice and recommended therapies include Cognitive Behavioural Therapy, Graded Exercise Therapy and Activity Management.” These three (not two as David Tuller states) therapies were the ones being tested in the trial, so it is hard to see how this might lead to bias in the direction of one or other of these therapies.

The “key investigator” on the guidelines committee, who was mentioned by David Tuller, helped to write the GET manuals, and provided training and supervision for one of the therapies; however, they had left the trial team two years before the newsletter’s publication.

Bias was caused by changing the two primary outcomes and how they were analyzed

These criticisms were first made four years ago, and have been repeatedly addressed and explained by us (White et al, 2013a, White et al, 2015), including explicit descriptions and justification within the main paper itself (White et al, 2011), the statistical analysis plan (Walwyn et al, 2013), and the trial website section of frequently asked questions, published in 2011 (http://www.wolfson.qmul.ac.uk/images/pdfs/pace/faq2.pdf).

The two primary outcomes for the trial were the SF36 physical function sub-scale and the Chalder fatigue questionnaire, as in the published trial protocol; so there was no change in the outcomes themselves. The only change to the primary outcomes from the original protocol was the use of the Likert scoring method (0, 1, 2, 3) of the fatigue questionnaire. This was used in preference to the binary method of scoring (0, 0, 1, 1). This was done in order to improve the variance of the measure (and thus provide better evidence of any change).

The other change was to drop the originally chosen composite measures (the number of patients who either exceeded a threshold score or who changed by more than 50 per cent). After careful consideration, we decided this composite method would be hard to interpret clinically, and would not answer our main question of comparing effectiveness between treatment arms. We therefore chose to compare mean scores of each outcome measure between treatment arms instead.

All these changes were made before any outcome data were analyzed (i.e. they were pre-specified), and were all approved by the independent Trial Steering Committee and Data Monitoring and Ethics committee.

Our interpretation was misleading after changing the criteria for determining recovery

We addressed this criticism two years ago in correspondence that followed the paper (White et al, 2013b), and the changes were fully described and explained in the paper itself (White et al, 2013). We changed the thresholds for recovery from the original protocol for our secondary analysis paper on recovery for three, not four, of the variables, since we believed that the revised thresholds better reflected recovery. For instance, we included those who felt “much” (and “very much”) better in their overall health as one of the five criteria that defined recovery. This was done before the analysis occurred (i.e. it was pre-specified). In the discussion section of the paper we discussed the limitations and difficulties in measuring recovery, and stated that other ways of defining recovery could produce different results. We also provided the results of different criteria for defining recovery in the paper. The bottom line was that, however we defined recovery, significantly more patients had recovered after receiving CBT and GET than after other treatments (White et al, 2013).

Requests for data under the freedom of information act were rejected as vexatious

 We have received numerous Freedom of Information Act requests over the course of many years. These even included a request to know how many Freedom of Information requests we had received. We have provided these data when we were able to (e.g. the 13% figure mentioned above came from our releasing these data). However, the safe-guarding of personal medical data was an undertaking enshrined in the consent procedure and therefore is ethically binding; so we cannot publicly release these data. It is important to remember that simple methods of anonymization does not always protect the identity of a person, as they may be recognized from personal and medical information. We have only considered two of these many Freedom of Information requests as vexatious, although an Information Tribunal judge considered an earlier request was also vexatious (General Regulation Chamber, 2013).

Subjective and objective outcomes

These issues were first raised seven years ago and have all been addressed before (White et al, 2008, White et al, 2011, White et al, 2013a, White et al, 2013b, Chalder et al, 2015a). We chose (subjective) self-ratings as the primary outcomes, since we considered that the patients themselves were the best people to determine their own state of health. We have also reported the results of a number of objective outcomes, including a walking test, a stepping test, employment status and financial benefits (White et al, 2011a, McCrone et al, 2012, Chalder et al, 2015). The distance participants could walk in six minutes was significantly improved following GET, compared to other treatments. There were no significant differences in fitness, employment or benefits between treatments. We interpreted these data in the light of their context and validity. For instance, we did not use employment status as a measure of recovery or improvement, because patients may not have been in employment before falling ill, or they may have lost their job as a consequence of being ill (White et al, 2013b). Getting better and getting a job are not the same things, and being in employment depends on the prevailing state of the local economy as much as being fit for work.

There was a bias caused by many investigators’ involvement with insurance companies and a failure to declare links with insurance companies in information regarding consent

No insurance company was involved in any aspect of the trial. There were some 19 investigators, three of whom have done consultancy work at various times for insurance companies. This was not related to the research and was listed as a potential conflict of interest in the relevant papers. The patient information sheet informed all potential participants as to which organizations had funded the research, which is consistent with ethical guidelines.

References

Castell BD et al, 2011. Cognitive Behavioral Therapy and Graded Exercise for Chronic Fatigue Syndrome: A Meta‐Analysis. Clin Psychol Sci Pract 18; 311-324.

doi: http://dx.doi.org/10.1111/j.1468-2850.2011.01262.x

Chalder T et al, 2015. Rehabilitative therapies for chronic fatigue syndrome: a secondary mediation analysis of the PACE trial. Lancet Psychiatry 2; 141-152.

doi: http://dx.doi.org/10.1016/S2215-0366(14)00069-8

Chalder T et al, 2015a. Methods and outcome reporting in the PACE trial–Author’s reply. Lancet Psychiatry 2; e10–e11. doi: http://dx.doi.org/10.1016/S2215-0366(15)00114-5.

Chambers D et al, 2006. Interventions for the treatment, management and rehabilitation of patients with chronic fatigue syndrome/myalgic encephalomyelitis: an updated systematic review. J R Soc Med 99: 506-520.

Edmonds M et al, 2004. Exercise therapy for chronic fatigue syndrome. Cochrane Database Syst Rev 3: CD003200. doi: http://dx.doi.org/10.1002/14651858.CD003200.pub2

General Regulation Chamber (Information Rights) First Tier Tribunal. Mitchell versus Information commissioner. EA 2013/0019.

www.informationtribunal.gov.uk/DBFiles/Decision/i1069/20130822%20Decision%20EA20130019.pdf

Larun L et al, 2015. Exercise therapy for chronic fatigue syndrome. Cochrane Database of Systematic Reviews Issue 2. Art. No.: CD003200.

doi: http://dx.doi.org/10.1002/14651858.CD003200.pub3

Malouff JM et al, 2008. Efficacy of cognitive behavioral therapy for chronic fatigue syndrome: a meta-analysis. Clin Psychol Rev 28: 736–45.

doi: http://dx.doi.org/10.1016/j.cpr.2007.10.004

Marques MM et al, 2015. Differential effects of behavioral interventions with a graded physical activity component in patients suffering from Chronic Fatigue (Syndrome): An updated systematic review and meta-analysis. Clin Psychol Rev 40; 123–137. doi: http://dx.doi.org/10.1016/j.cpr.2015.05.009

McCrone P et al, 2012. Adaptive pacing, cognitive behaviour therapy, graded exercise, and specialist medical care for chronic fatigue syndrome: a cost effectiveness analysis. PLoS ONE 7: e40808. doi: http://dx.doi.org/10.1371/journal.pone.0040808

Price JR et al, 2008. Cognitive behaviour therapy for chronic fatigue syndrome in adults. Cochrane Database Syst Rev 3: CD001027.

doi: http://dx.doi.org/10.1002/14651858.CD001027.pub2

Smith MB et al, 2015. Treatment of Myalgic Encephalomyelitis/Chronic Fatigue Syndrome: A Systematic Review for a National Institutes of Health Pathways to Prevention Workshop. Ann Intern Med. 162: 841-850. doi: http://dx.doi.org/10.7326/M15-0114

Walwyn R et al, 2013. A randomised trial of adaptive pacing therapy, cognitive behaviour therapy, graded exercise, and specialist medical care for chronic fatigue syndrome (PACE): statistical analysis plan. Trials 14: 386. http://www.trialsjournal.com/content/14/1/386

White PD et al, 2007. Protocol for the PACE trial: a randomised controlled trial of adaptive pacing, cognitive behaviour therapy, and graded exercise, as supplements to standardised specialist medical care versus standardised specialist medical care alone for patients with the chronic fatigue syndrome/myalgic encephalomyelitis or encephalopathy. BMC Neurol 7:6. doi: http://dx.doi.org/10.1186/1471-2377-7-6

White PD et al, 2008. Response to comments on “Protocol for the PACE trial”. http://www.biomedcentral.com/1471-2377/7/6/COMMENTS/prepub#306608

White PD et al, 2011. The PACE trial in chronic fatigue syndrome – Authors’ reply. Lancet 377; 1834-35. DOI: http://dx.doi.org/10.1016/S0140-6736(11)60651-X

White PD et al, 2011a. Comparison of adaptive pacing therapy, cognitive behaviour therapy, graded exercise therapy, and specialist medical care for chronic fatigue syndrome (PACE): a randomised trial. Lancet 377:823-36. doi: http://dx.doi.org/10.1016/S0140-6736(11)60096-2

White PD et al, 2013. Recovery from chronic fatigue syndrome after treatments given in the PACE trial. Psychol Med 43: 227-35. doi: http://dx.doi.org/10.1017/S0033291713000020

White PD et al, 2013a. Chronic fatigue treatment trial: PACE trial authors’ reply to letter by Kindlon. BMJ 347:f5963. doi: http://dx.doi.org/10.1136/bmj.f5963

White PD et al, 2013b. Response to correspondence concerning ‘Recovery from chronic fatigue syndrome after treatments in the PACE trial’. Psychol Med 43; 1791-2. doi: http://dx.doi.org/10.1017/S0033291713001311

White PD et al, 2015. The planning, implementation and publication of a complex intervention trial for chronic fatigue syndrome: the PACE trial. Psychiatric Bulletin 39, 24-27. doi: http://dx.doi.org/10.1192/pb.bp.113.045005

Whiting P et al, 2001. Interventions for the Treatment and Management of Chronic Fatigue Syndrome: A Systematic Review. JAMA. 286:1360-68. doi: http://dx.doi.org/10.1001/jama.286.11.1360