Trial By Error, Continued: Questions for Dr. White and his PACE Colleagues

By David Tuller, DrPH

David Tuller is academic coordinator of the concurrent masters degree program in public health and journalism at the University of California, Berkeley.

I have been seeking answers from the PACE researchers for more than a year. At the end of this post, I have included the list of questions I’d compiled by last September, when my investigation was nearing publication. Most of these questions remain unanswered.

The PACE researchers are currently under intense criticism for having rejected as “vexatious” a request for trial data from psychologist James Coyne—an action called “unforgivable” by Columbia statistician Andrew Gelman and “absurd” by Retraction Watch. Several colleagues and I have filed a subsequent request for the main PACE results, including data for the primary outcomes of fatigue and physical function and for “recovery” as defined in the trial protocol. The PACE team has two more weeks to release this data, or explain why it won’t.

Any data from the PACE trial will likely confirm what my Virology Blog series has already revealed: The results cannot stand up to serious scrutiny. But the numbers will not provide answers to the questions I find most compelling. Only the researchers themselves can explain why they made so many ill-advised choices during the trial.

In December 2014, after months of research, I e-mailed Peter White, Trudie Chalder and Michael Sharpe, the lead PACE researcher and his two main colleagues, and offered to fly to London to meet them. They declined to talk with me. In an email, Dr. White cited my previous coverage of the illness as a reason. (The investigators and I had already engaged in an exchange of letters in The New York Times in 2011, involving a PACE-related story I had written.) “I have concluded that it would not be worthwhile our having a conversation,” Dr. White wrote in his e-mail.

I decided to postpone further attempts to contact them for the story until it was near completion. (Dr. Chalder and I did speak in January 2015 about a new study from the PACE data, and I previously described our differing memories of the conversation.) In the meantime, I wrote and rewrote the piece and tweaked it and trimmed it and then pasted back in stuff that I’d already cut out. Last June, I sent a very long draft to Retraction Watch, which had agreed to review it for possible publication.

I still hoped Dr. White would relent and decide to talk with me. Over the summer, I drew up a list of dozens of questions that covered every single issue addressed in my investigation.

I had noticed the kinds of non-responsive responses Dr. White and his colleagues provided in journal correspondence and other venues whenever patients made cogent and incontrovertible points. They appeared to excel at avoiding hard questions, ignoring inconvenient facts, and misstating key details. I was surprised and perplexed that smart journal editors, public health officials, reporters and others accepted their replies without pointing out glaring methodological problems—such as the bizarre fact that the study’s outcome thresholds for improvement on its primary measures indicated worse health status than the entry criteria required to demonstrate serious disability.

So my list of questions included lots of follow-ups that would help me push past the PACE team’s standard portfolio of evasions. And if, as I suspected, I wouldn’t get the chance to pose the questions myself, I hoped the list would be a useful guide for anyone who wanted to conduct a rigorous interview with Dr. White or his colleagues about the trial’s methodological problems. (Dr. White never agreed to talk with me; I sent my questions to Retraction Watch as part of the fact-checking process.)

In September, Retraction Watch interviewed Dr. White in connection with my piece, as noted in a recent post about Dr. Coyne’s data request. Retraction Watch and I subsequently determined that we differed on the best approach and direction for the story. From October 21st to 23rd, Virology Blog ran my 14,000-word investigation.

But I still don’t have the answers to my questions.


List of Questions, September 1, 2015:

I am posting this list verbatim, although if I were pulling it together today I would add, subtract and rephrase some questions. (I might have misstated a statistical concept or two.) The list is by no means exhaustive. Patients and researchers could easily come up with a host of additional items. The PACE team seems to have a lot to answer for.

1) In June, a report commissioned by the National Institutes of Health declared that the Oxford criteria should be “retired” because the case definition impeded progress and possibly caused harm. As you know, the concern is that it is so non-specific that it leads to heterogeneous study samples that include people with many illnesses besides ME/CFS. How do you respond to that concern?

2) In published remarks after Dr. White’s presentation in Bristol last fall, Dr. Jonathan Edwards wrote: “What Dr White seemed not to understand is that a simple reason for not accepting the conclusion is that an unblinded trial in a situation where endpoints are subjective is valueless.” What is your response to Dr. Edwards’s position?

3) The December 2008 PACE participants’ newsletter included an article about the UK NICE guidelines. The article noted that the recommended treatments, “based on the best available evidence,” included two of the interventions being studied, CBT and GET. (The article didn’t mention that PACE investigator Jessica Bavinton also served on the NICE guidelines committee.) The same newsletter included glowing testimonials from satisfied participants about their positive outcomes from the trial “therapy” and “treatment” but included no statements from participants with negative outcomes. According to the graph illustrating recruitment statistics in the same newsletter, about 200 participants were still slated to undergo one or more of their assessments after publication of the newsletter.

Were you concerned that publishing such statements would bias the remaining study subjects? If not, why not? A biostatistics professor from Columbia told me that for investigators to publish such information during a trial was “the height of clinical trial amateurism,” and that at the very least you should have assessed responses before and after disseminating the newsletter to ensure that there was no bias resulting from the statements. What is your response? Also, should the article about the NICE guidelines have disclosed that Jessica Bavinton was on the committee and therefore playing a dual role?

4) In your protocol, you promised to abide by the Declaration of Helsinki. The declaration mandates that obtaining informed consent requires that prospective participants be “adequately informed” about “any possible conflicts of interest” and “institutional affiliations of the researcher.” In the Lancet and other papers, you disclosed financial and consulting ties with insurance companies as “conflicts of interest.” But trial participants I have interviewed said they did not find out about these “conflicts of interest” until after they completed the trial. They felt this violated their rights as participants to informed consent. One demanded her data be removed from the study after the fact. I have reviewed participant information and consent forms, including those from version 5.0 of the protocol, and none contain the disclosures mandated by the Declaration of Helsinki.

Why did you decide not to inform prospective participants about your “conflicts of interest” and “institutional affiliations” as part of the informed consent process? Do you believe this omission violates the Declaration of Helsinki’s provisions on disclosure to participants? Can you document that any PACE participants were told of your “possible conflicts of interest” and “institutional affiliations” during the informed consent process?

5) For both fatigue and physical function, your thresholds for “normal range” (Lancet) and “recovery” (Psych Med) indicated a greater level of disability than the entry criteria, meaning participants could be fatigued or physically disabled enough for entry but “recovered” at the same time. Thirteen percent of the sample was already “within normal range” on physical function, fatigue or both at baseline, according to information obtained under a freedom-of-information request.

Can you explain the logic of that overlap? Why did the Lancet and Psych Med papers not specifically mention or discuss the implications of these overlaps, or disclose that 13 percent of the study sample was already “within normal range” on an indicator at baseline? Do you believe that such overlaps affect the interpretation of the results? If not, why not? What oversight committee specifically approved this outcome measure? Or was it not approved by any committee, since it was a post-hoc analysis?
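To make the overlap concrete, here is a minimal sketch in Python. It assumes the physical-function thresholds widely reported for the trial (entry required an SF-36 score of 65 or less; the post-hoc “normal range” started at 60); these numbers are stated as assumptions for illustration, not taken from the trial dataset.

```python
# Published thresholds for SF-36 physical function (0-100, higher = better),
# stated here as assumptions for this sketch: a score of 65 or less was
# disabled enough to enter the trial; 60 or more counted as "within normal range".
ENTRY_MAX = 65
NORMAL_RANGE_MIN = 60

def eligible_at_entry(sf36: int) -> bool:
    return sf36 <= ENTRY_MAX

def within_normal_range(sf36: int) -> bool:
    return sf36 >= NORMAL_RANGE_MIN

# Every score from 60 through 65 satisfies both definitions at once:
# disabled enough to enroll, yet already "within normal range".
overlap = [s for s in range(0, 101) if eligible_at_entry(s) and within_normal_range(s)]
print(overlap)  # [60, 61, 62, 63, 64, 65]
```

A participant scoring 62 at baseline and 62 at follow-up would have been ill enough to enter the trial and simultaneously counted as “within normal range” at its end, without changing at all.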

6) You have explained these “normal ranges” as the product of taking the mean value +/- 1 SD of the scores of representative populations–the standard approach to obtaining normal ranges when data are normally distributed. Yet the values in both those referenced source populations (Bowling for physical function, Chalder for fatigue) are clustered toward the healthier ends, as both papers make clear, so the conventional formula does not provide an accurate normal range. In a 2007 paper, Dr. White mentioned this problem of skewed populations and the challenge they posed to calculation of normal ranges.

Why did you not use other methods for determining normal ranges from your clustered data sets from Bowling and Chalder, such as basing them on percentiles? Why did you not mention the concern or limitation about using conventional methods in the PACE papers, as Dr. White did in the 2007 paper? Is this application of conventional statistical methods for non-normally distributed data the reason why you had such broad normal ranges that ended up overlapping with the fatigue and physical function entry criteria?
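A small simulation can illustrate the problem. This is a sketch using synthetic, skewed data invented for the purpose (not the actual Bowling or Chalder datasets): when scores cluster toward the healthy end, the conventional mean minus 1 SD bound falls below a percentile-based bound, widening the “normal range” at the disabled end of the scale.

```python
import random

random.seed(0)

# Synthetic scores on a 0-100 scale, clustered toward the healthy end with a
# long tail of low scorers -- the skewed shape described for the source data.
# These are made-up numbers for illustration, not the actual populations.
population = [max(0, 100 - int(random.expovariate(1 / 15))) for _ in range(10_000)]

mean = sum(population) / len(population)
sd = (sum((x - mean) ** 2 for x in population) / len(population)) ** 0.5

# Conventional lower bound of the "normal range": mean - 1 SD, which assumes
# a roughly normal distribution.
lower_sd = mean - sd

# A percentile-based lower bound (the 16th percentile is the point that the
# mean - 1 SD rule approximates under normality) respects the actual shape.
lower_pct = sorted(population)[int(0.16 * len(population))]

# On skewed data the conventional bound falls below the percentile bound,
# i.e. the "normal range" reaches further into the disabled end of the scale.
print(round(lower_sd, 1), lower_pct)
```

On data shaped like this, the mean - 1 SD cutoff sits several points below the percentile cutoff, so more impaired scores get classified as “normal.”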

7) According to the protocol, the main finding from the primary measures would be rates of “positive outcomes”/”overall improvers,” which would have allowed for individual-level assessments. Instead, the main finding was a comparison of the mean performances of the groups–aggregate results that did not provide important information about how many got better or worse. Who approved this specific change? Were you concerned about losing the individual-level assessments?

8) The other two methods of assessing the primary outcomes were both post-hoc analyses. Do you agree that post-hoc analyses carry significantly less weight than pre-specified results? Did any PACE oversight committees specifically approve the post-hoc analyses?

9) The improvement required to achieve a “clinically useful benefit” was defined as 8 points on the SF-36 scale and 2 points on the continuous scoring for the fatigue scale. In the protocol, categorical thresholds for a “positive outcome” were designated as 75 on the SF-36 and 3 on the Chalder fatigue scale, so achieving that would have required an increase of at least 10 points on the SF-36 and 3 points (bimodal) for fatigue. Do you agree that the protocol measure required participants to demonstrate greater improvements to achieve the “positive outcome” scores than the post-hoc “clinically useful benefit”?

10) When you published your protocol in BMC Neurology in 2007, the journal appended an “editor’s comment” that urged readers to compare the published papers with the protocol “to ensure that no deviations from the protocol occurred during the study.” The comment urged readers to “contact the authors” in the event of such changes. In asking for the results per the protocol, patients and others followed the suggestion in the editor’s comment appended to your protocol. Why have you declined to release the data upon request? Can you explain why Queen Mary has considered requests for results per the original protocol “vexatious”?

11) In cases when protocol changes are absolutely necessary, researchers often conduct sensitivity analyses to assess the impact of the changes, and/or publish the findings from both the original and changed sets of assumptions. Why did you decide not to take either of these standard approaches?

12) You made it clear, in your response to correspondence in the Lancet, that the 2011 paper was not addressing “recovery.” Why, then, did Dr. Chalder refer at the 2011 press conference to the “normal range” data as indicating that patients got “back to normal”–i.e. they “recovered”? And since you had input into the accompanying commentary in the Lancet before publication, according to the Press Complaints Commission, why did you not dissuade the writers from declaring a 30 percent “recovery” rate? Do you agree with the commentary that PACE used “a strict criterion for recovery,” given that in both of the primary outcomes participants could get worse and be counted as “recovered,” or “back to normal” in Dr. Chalder’s words?

13) Much of the press coverage focused on “recovery,” even though the paper was making no such claim. Were you at all concerned that the media was mis-interpreting or over-interpreting the results, and did you feel some responsibility for that, given that Dr. Chalder’s statement of “back to normal” and the commentary claim of a 30 percent “recovery” rate were prime sources of those claims?

14) You changed your fatigue outcome scoring method from bimodal to continuous mid-trial, but cited no references in support of this that might have caused you to change your mind since the protocol. Specifically, you did not explain that the FINE trial reported benefits for its intervention only in a post-hoc re-analysis of its fatigue data using continuous scoring.

Were the FINE findings the impetus for the change in scoring in your paper? If so, why was this reason not mentioned or cited? If not, what specific change prompted your mid-trial decision to alter the protocol in this way? And given that the FINE trial was promoted as the “sister study” to PACE, why were that trial and its negative findings not mentioned in the text of the Lancet paper? Do you believe those findings are irrelevant to PACE? Moreover, since the Likert-style analysis of fatigue was already a secondary outcome in PACE, why did you not simply provide both bimodal and continuous analyses rather than drop the bimodal scoring altogether?

15) The “number needed to treat” (NNT) for CBT and GET was 7, as Dr. Sharpe indicated in an Australian radio interview after the Lancet publication. But based on the “normal range” data, the NNT for SMC was also 7, since those participants achieved a 15% rate of “being within normal range,” accounting for half of the rate experienced under the rehabilitative interventions.

Is that what Dr. Sharpe meant in the radio interview when he said: “What this trial wasn’t able to answer is how much better are these treatments and really not having very much treatment at all”? If not, what did Dr. Sharpe mean? Wasn’t the trial designed to answer the very question Dr. Sharpe cited? Since each of the rehabilitative intervention arms as well as the SMC arm had an NNT of 7, would it be accurate to interpret the “normal range” findings as demonstrating that CBT and GET worked as well as SMC, but not any better?
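For reference, the NNT arithmetic implied by the figures above can be sketched as follows; the 15% and 30% rates are the approximate values quoted in this post, used here purely for illustration.

```python
# "Number needed to treat" = 1 / absolute risk reduction (the difference in
# the rates of the outcome between the two groups being compared).
def nnt(rate_treated: float, rate_control: float) -> float:
    return 1 / (rate_treated - rate_control)

# Rates quoted above: about 15% "within normal range" with SMC alone, and
# roughly double that (about 30%) when CBT or GET was added.
smc_rate, cbt_get_rate = 0.15, 0.30

print(round(nnt(cbt_get_rate, smc_rate)))  # CBT/GET vs. SMC -> 7
print(round(nnt(smc_rate, 0.0)))           # SMC vs. a hypothetical 0% baseline -> 7
```

Under these assumptions, adding CBT or GET to SMC yields the same NNT as SMC itself measured against a zero baseline, which is the point of the question.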

16) The PACE paper was widely interpreted, based on your findings and statements, as demonstrating that “pacing” isn’t effective. Yet patients describe “pacing” as an individual, flexible, self-help method for adapting to the illness. Would packaging and operationalizing it as a “treatment” to be administered by a “therapist” alter its nature and therefore its impact? If not, why not? Why do you think the evidence from APT can be extrapolated to what patients themselves call “pacing”? Also, given your partnership with Action4ME in developing APT, how do you explain the organization’s rejection of the findings in the statement it issued after the study was published?

17) In your response to correspondence in the Lancet, you acknowledged a mistake in describing the Bowling sample as a “working age” rather than “adult” population–a mistake that changes the interpretation of the findings. Comparing the PACE participants to a sicker group but mislabeling it a healthier one makes the PACE results look better than they were; the percentage of participants scoring “within normal range” would clearly have been even lower had they actually been compared to the real “working age” population rather than the larger and more debilitated “adult” population. Yet the Lancet paper itself has not been corrected, so current readers are provided with misinformation about the measurement and interpretation of one of the study’s two primary outcomes.

Why hasn’t the paper been corrected? Do you believe that everyone who reads the paper also reads the correspondence, making it unnecessary to correct the paper itself? Or do you think the mistake is insignificant and so does not warrant a correction in the paper itself? Lancet policy calls for corrections–not mentions in correspondence–for mistakes that affect interpretation or replicability. Do you disagree that this mistake affects interpretation or replicability?

18) In our exchange of letters in the NYTimes four years ago, you argued that PACE provided “robust” evidence for treatment with CBT and GET “no matter how the illness is defined,” based on the two sub-group analyses. Yet Oxford requires that fatigue be the primary complaint–a requirement that is not a part of either of your other two sub-group case definitions. (“Fatigue” per se is not part of the ME definition at all, since post-exertional malaise is the core symptom; the CDC obviously requires “fatigue,” but not that it be the primary symptom, and patients can present with post-exertional malaise or cognitive problems as being their “primary” complaint.)

Given that discrepancy, why do you believe the PACE findings can be extrapolated to others “no matter how the illness is defined,” as you wrote in the NYTimes? Is it your assumption that everyone who met the other two criteria would automatically be screened in by the Oxford criteria, despite the discrepancies in the case definitions?

19) None of the multiple outcomes you cited as “objective” in the protocol supported the subjective outcomes suggesting improvement, with the exception of the extremely modest increase in the six-minute walking test for the GET group. Does this lack of objective support for improvement and recovery concern you? Should the failure of the objective measures raise questions about whether people have achieved any actual benefits or improvements in performance?

20) If wearing the actometer was considered too much of a burden for patients to wear at the end of the trial, when presumably many of them would have been improved, why wasn’t it too much of a burden for patients at the beginning of the trial? In retrospect, given that your other objective findings failed, do you regret having made that decision?

21) In your response to correspondence after publication of the Psych Med paper, you mentioned multiple problems with the “objectivity” of the six-minute walking test that invalidated comparisons with other studies. Yet PACE started assessing people using this test when the trial began recruitment in 2005, and the serious limitations–the short corridors requiring patients to turn around more than was standard, the decision not to encourage patients during the test, etc.–presumably become apparent quickly.

Why then, in the published protocol in 2007, did you describe the walking test as an “objective” measure of function? Given that the study had been assessing patients for two years already, why had you not already recognized the limitations of the test and realized that it was apparently useless as an objective measure? When did you actually recognize these limitations?

22) In the Psych Med paper, you described “recovery” as recovery only from the current episode of illness–a limitation of the term not mentioned in the protocol. Since this definition describes what most people would refer to as “remission,” not “recovery,” why did you choose to use the word “recovery”–in the protocol and in the paper–in the first place? Would the term “remission” have been more accurate and less misleading? Not surprisingly, the media coverage focused on “recovery,” not on “remission.” Were you concerned that this coverage gave readers and viewers an inaccurate impression of the findings, since few readers or viewers would understand that what the Psych Med paper examined was in fact “remission” and not “recovery,” as most people would understand the terms?

23) In the Psychological Medicine definition of “recovery,” you relaxed all four of the criteria. For the first two, you adopted the “normal range” scores for fatigue and physical function from the Lancet paper, with “recovery” thresholds lower than the entry criteria. For the Clinical Global Impression scale, “recovery” in the Psych Med paper required a 1 or 2, rather than just a 1, as in the protocol. For the fourth element, you split the single category of not meeting any of the three case definitions into two separate categories–one less restrictive (‘trial recovery’) than the original proposed in the protocol (now renamed ‘clinical recovery’).

What oversight committee approved the changes in the overall definition of recovery from the protocol, including the relaxation of all four elements of the definition? Can you cite any references for your reconsideration of the CGI scale, and explain what new information prompted this reconsideration after the trial? Can you provide any references for the decision to split the final “recovery” element into two categories, and explain what new information prompted this change after the trial?

24) The Psychological Medicine paper, in dismissing the original “recovery” threshold of 85 on the SF-36, asserted that 50 percent of the population would score below this mean value and that it was therefore not an appropriate cut-off. But that statement conflates the mean and median values; given that this is not a normally distributed sample and that the median value is much higher than the mean in this population, the statement about 50 percent performing below 85 is clearly wrong.

Since the source populations were skewed and not normally distributed, can you explain this claim that 50 percent of the population would perform below the mean? And since this reasoning for dismissing the threshold of 85 is wrong, can you provide another explanation for why that threshold needed to be revised downward so significantly? Why has this erroneous claim not been corrected?
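A quick simulation with synthetic, skewed data (invented for illustration; not the actual population datasets) shows why the “50 percent score below the mean” claim fails when scores cluster toward the healthy end of the scale:

```python
import random

random.seed(1)

# Synthetic left-skewed scores: most values sit near the top of the 0-100
# scale, with a long tail of low scorers -- the shape reported for population
# physical-function data. Made-up numbers for illustration only.
scores = sorted(max(0, 100 - int(random.expovariate(1 / 15))) for _ in range(10_001))

mean = sum(scores) / len(scores)
median = scores[len(scores) // 2]

# The long lower tail drags the mean below the median, so well under half of
# the population scores below the mean; "50 percent score below the mean"
# holds only for symmetric distributions, where mean and median coincide.
share_below_mean = sum(s < mean for s in scores) / len(scores)
print(round(mean, 1), median, round(share_below_mean, 2))
```

In a sample shaped like this, the median sits several points above the mean and only about a third of scores fall below the mean, which is the statistical point the question raises.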

25) What are the results, per the protocol definition of “recovery”?

26) The PLoS One paper reported that a sensitivity analysis found that the findings of the societal cost-effectiveness of CBT and GET would be “robust” even when informal care was measured not by replacement cost of a health-care worker but using alternative assumptions of minimum wage or zero pay. When readers challenged this claim that the findings would be “robust” under these alternative assumptions, the lead author, Paul McCrone, agreed in his responses that changing the value for informal care would, in fact, change the outcomes. He then criticized the alternative assumptions because they did not adequately value the family’s caregiving work, even though they had been included in the PACE statistical plan.

Why did the PLoS One paper include an apparently inaccurate sensitivity analysis that claimed the societal cost-effectiveness findings for CBT and GET were “robust” under the alternative assumptions, even though that wasn’t the case? And if the alternative assumptions were “controversial” and “restrictive,” as the lead author wrote in one of his posted responses, then why did the PACE team include them in the statistical plan in the first place?

Comments on this entry are closed.

  • Sasha

    Thank you for another valuable nail in the coffin of this awful study. I’ve just been reading the long-term follow-up paper in detail and I’d like to add another couple of questions for the authors (and for Lancet Psychiatry):

    Who decided it was OK to destroy the randomisation in a £5 million, publicly funded trial by encouraging participants to have more PACE treatments after the end of the trial, on a non-random basis, so that long-term follow-up is essentially meaningless?

    And why did the PACE authors only repeat the subject self-assessments of fatigue and physical function instead of also asking for objective data such as employment status? After all the criticism of your subjective measures they should have been well aware that objective measures are crucial at this point.

    David, I’d seriously encourage you to edit this into a paper or open letter for submission to a journal. This stuff needs to start getting into the academic literature as well as onto social media.

  • Spamlet

    QMUL researchers discover new element: ‘The-dog-ate-our-homework-but-we-can’t-tell-you-ium.’

  • Sasha

    BTW, there’s a new, US-focused (though all can sign) petition, launched in November, to get the HHS to investigate PACE. I hope all will sign:

  • A.B.

    “Psychobabble” and “weasel words” are two terms often used by patients to describe the PACE authors’ writing style.

    A weasel word is an informal term for words and phrases aimed at creating an impression that a specific and/or meaningful statement has been made, when only a vague or ambiguous claim has been communicated, enabling the specific meaning to be denied if the statement is challenged.

    As used by patients, psychobabble refers to the practice of making up bullshit about psychological factors or personality traits using an impressive and technical vocabulary. Psychobabble is difficult to criticise because it lacks substance (i.e. it makes specific claims about things that cannot be measured). Psychobabble is normally treated as if it were fact when it is merely a belief. Psychobabble is used to win arguments by appeal to authority or by stubbornly ignoring critics and evidence to the contrary and simply repeating the claims until they become accepted as truth.

  • RhymesWithElena

    Thank you so much for putting so much of your energy, time and thought into this. Hugely valuable and warmly appreciated.

  • Jimbobsky

    I was treated one-to-one by Jessica Bavinton for CFS/ME using graded exercise therapy, paid for by health insurance. I was assessed as a moderate sufferer despite being only 40% functional. I have 26+ weeks of activity diaries that show how badly I was suffering, yet I was told to push through the pain. By 20 weeks I was virtually bedbound, yet told by her, ‘I’m not sure why you are like this, so keep pushing through as it won’t hurt you.’ She quickly passed me on to one of her staff, who was amazing but could see I had suffered too much with this.

    Funding stopped after 6 months. I wasn’t back to full health and I was left to sort myself out. I have emails confirming I was doing everything expected of me doing GET. I finished GET 30% functional. 4 months later I was ill-health dismissed from my job and lost my career. So the question is: if I followed GET principles properly, directed by a co-author of the NICE guidelines, and ended up bedridden, struggling to walk and still unable to work full-time, is it my fault or is the process to blame?

    Fast forward 6 months and my NHS ME/CFS consultant has written to my GP stating, ‘I believe that the patients ME was underestimated at the start of GET and a consequence of this is they have been pushed too far too soon.’ In other words, I put my trust in an expert, who made a point of telling me that they had been involved with guidelines etc., who then underdiagnosed me, pushed me too far, left me in the lurch and pocketed fees from the insurance company. Well done her and her flourishing business. Someone tell me how this is acceptable?

  • A.B.

    I’m sorry to hear this. This isn’t acceptable. We hear stories like this all the time. The safety of CBT and GET should be investigated by public health authorities. If this was a drug with a similar adverse effects profile then health authorities would act rapidly to protect patients.

  • mesupport

    A class action is how a drug with adverse effects is stopped from causing further harm, using much less evidence of negligence/abuse than is available here.

  • tomkindlon

    On question 17, they actually once published a “clarification” in the Lancet on a fairly minor point (see: ). This shows they know the possibility is there to correct the record.

  • davetuller

    Hi, I’d be interested in talking and seeing your e-mails and documentation…David

  • Jimbobsky

    I’ll be in touch David – I haven’t publicly released info for fear of legal reprisals, e.g. defamation, slander etc., so need to be 100% sure that it’s the right thing to do. I will share a little anecdote though: I was asked to keep an activity diary and rate my pain and symptoms. When I got worse I was asked to stop recording these and stop focussing on my symptoms as this was making things worse; I should instead ignore the symptoms and just carry on regardless. Go figure.

  • davetuller

    sure, you can e-mail me or friend/message me on Facebook

  • Victor

    Many thanks, David, for your further attempt to get the PACE Trial investigators to see sense.

    Jimbobsky, testimony like yours, and more importantly the evidence you have to support it, seems to me to be the main reason why David Tuller and others have taken up the PACE case in the first place. Patients do appear to have been harmed and, worse, the harm appears to have been done on the back of some very questionable evidence even more questionably obtained. I think it is important that others who may have been harmed by GET come forward with their own stories.

    The only sensible and decent stance for the PACE Trial investigators to take at this stage is to be open with their data; they must surely see that people’s perfectly reasonable demands that they release the data are not going to stop until they do so.

  • Gwebo

    I’m really sorry you’ve been treated that way, but not surprised.

    Your experience, and that of others, brings into question the monitoring of all forms of adverse reactions in PACE and in all other research carried out by this school of psychiatrists.

    How can patients and clinicians trust these researchers’ assessments of adverse effects and the safety of treatments?

  • Gwebo

    Excellent article, these questions must be answered – for the sake of patient safety and the integrity of the scientific process.

  • RustyJ

    If there is documented admission of poor diagnosis then there should be recourse to sue for loss of earnings etc.

  • Portalpass

    Does anyone remember that at one point, perhaps 2-3 years ago, after being pressed on a question, one of the PACE trial researchers had to admit that the PACE trial was only investigating chronic fatigue and not ME/CFS? I think that now might be a good time for that statement to be found, as nothing has changed and they continue to this day to apply it to ME/CFS.

  • Jimbobsky

    The problem is how does one substantiate a ‘poor diagnosis’? I was originally diagnosed by an Infectious Diseases Consultant who said it was CFS and that I had had Glandular Fever previously; all JB had to go on was me saying I had CFS and after filling out her questionnaires I graded myself at 40% functionality. After that it was up to her to decide on the extent of GET and the programme to follow. I have no understanding of what it all equates to regarding mild/moderate/severe, but my NHS consultant does, so I was surprised by his statement. The Consultant was also very aware of JB as she trained them! They were also surprised I had relapsed so severely considering it was her doing the GET – I showed them my diaries and info collected over the course of the treatment, and they looked a little worried by it.

  • Spamlet

    They just have to spin it out until retirement to sunny Brighton with the rest of the Wessely school pensioners.

  • RustyJ

    Hi Jim. I am mod/sev, so I understand how difficult it is to progress this. Is it possible to get legal advice? Possibly for class action? Approach IiME, not MEA or AfME. A class action would bust this wide open.

    I can reproduce your story on my website, along with any others, i.e., promote bad stories coming out of GET, similar to what MEA is doing, but they are implicated. See if we can generate some heat, just as a starter. Perhaps David Tuller can assist by exploring legal avenues or discussion of legal action? Some previous action has been successful against Lightning Process as a precedent.

  • Jimbobsky

    Out of interest, how are MEA implicated?

  • Jimbobsky

    I’m all for getting PACE and NICE guidelines removed but not interested in any compensation for me (though I do see that it would set a precedent for others) so it’s a bit of a dilemma on moving this forward. Perhaps I should ask JB to answer any questions I have in writing first and then publicise her responses for all to see?

  • RustyJ

    They disseminate information promoting GET and CBT, even as they publicly criticize it. They participate in the CMRC which is a Wessely School front.

  • Rob Wijbenga

    I didn’t follow the entire thread since last summer, David, but did you send these questions to the editors of The Lancet as well? And did you get any reaction from them?

    I think they should be nailed to the front doors of all institutions treating ME patients with CBT and GET, like Martin Luther’s 95 theses on the front door of the chapel at Wittenberg in 1517.

  • Boka

    I think the ME Association has always attempted to be ‘fair’ and ‘scientifically accurate’, and so they publish studies even when those studies appear to contradict the biomedical nature of ME/CFS. However, I don’t think they go as far as ‘promoting’ GET and CBT. My understanding is that they have been campaigning hard to get the NICE Guidelines reviewed. They are also the ones who commissioned the large study showing that 74% of patients report harm from GET:
    And here is Dr. Charles Shepherd’s recent article in the Daily Telegraph. (The Telegraph used a different set of diagnostic criteria btw … not the ones provided by Dr. Shepherd in his original article!)

  • Boka

    It is of extreme importance that this matter is now pursued to the bitter end and that these guys are not let ‘off the hook’. (Thank you David.)
    Why? Professor Sharpe is a member of the DSM-5 study group that is redefining somatoform disorders, with the creation of a new category of “Complex Somatic Symptom Disorder” (CSSD). The existing evidence suggests that the DSM Somatic Symptom Disorder Work Group intends to ensure that ME/CFS will fall within the purview of the new category of CSSD, because Sharpe et al. believe ME/CFS to be an example of a CSSD. Presumably, if the PACE trial is not retracted, it could be used as one of the sources of evidence to support this view!

    So as well as continuing to question the results of the PACE trial, we also need to question and expose the underlying assumptions that these guys are stubbornly continuing to propound. It is clear from David’s points that, if reanalysed, the PACE trial may actually UNDERMINE their hypothesis – and that’s even after all sorts of things were done to ensure that as many people with real ME/CFS as possible were taken out of the study, and that many of those who remained were relatively high functioning.

    We also need to keep citing the growing biomedical evidence which is consistently showing biological abnormalities underpinning every symptom that is experienced by people with ME/CFS.

  • Boka

    The summary paper published by Jacobs Journal of Physiology, ‘Deviant Cellular and Physiological Responses to Exercise in ME/CFS’, by Frank N.M. Twisk and Keith J. Geraghty, is an excellent source for the rebuttal of the behavioural model. (The footnotes are actual links to each of the research papers cited.)

    For some reason, whenever I provide the link to the paper on a blog, it gets blocked as ‘spam’. So please do google it. It really is worth a read!

  • Pingback: Tuller’s questions about the PACE trial | WAMES (Working for ME in Wales)

  • Spamlet

    It was Action for ME that got mixed up with CBT/GET researchers for a while, I think: not MEA.

    MEA joined the Research Collaborative despite the involvement of the Wessely School. The collaborative was set up through the contrarian ‘Science Media Centre’, which has Wessely amongst its team of ‘experts’, and those who joined had to sign that they wouldn’t ‘harass’ each other. But it does not seem to have stopped real research coming out of the initiative: even with the ‘pledge’, the behaviourists are now a minority.

  • Pingback: Weekend reads: A celebrity surgeon's double life; misconduct in sports medicine; researcher loses honor - Retraction Watch at Retraction Watch

  • patrick holland

    Sad to hear that. These poor results were predictable. Have a read of the following to understand why,
    and have a read of the following to see what legal options and court options you have to get justice and destroy these fools.

  • patrick holland

    Thanks for that excellent contribution to the field, David. We need to move from social media to other forums offline to bring these matters to a more practical conclusion. There should be enough evidence now for patients to bring legal, court, and government actions against these “researchers” and challenge them directly. Some people have proposed plans for this.

  • Pingback: Problemen med PACE – ME-patienten

  • Pingback: What I'm Into (January 2016) | Tanya Marlow - Thorns and Gold

  • Pingback: Trial By Error, Continued: My Questions for Lancet Editor Richard Horton

  • Pingback: Dr Tuller’s questions for Lancet Editor Dr Horton on handling of PACE trial issues | WAMES (Working for ME in Wales)