David Tuller’s three-installment investigation of the PACE trial for chronic fatigue syndrome, “Trial By Error,” has received enormous attention. Although the PACE investigators declined David’s efforts to interview them, they have now requested the right to reply. Today, virology blog posts their response to David’s story, and below, his response to their response.
According to the communications department of Queen Mary University, the PACE investigators have been receiving abuse on social media as a result of David Tuller’s posts. When I published Mr. Tuller’s articles, my intent was to provide a forum for discussion of the controversial PACE results. Abuse of any kind should not have been, and must not be, part of that discourse. -vrr
Last December, I offered to fly to London to meet with the main PACE investigators to discuss my many concerns. They declined the offer. Dr. White cited my previous coverage of the issue as the reason and noted that “we think our work speaks for itself.” Efforts to reach out to them for interviews two weeks ago also proved unsuccessful.
After my story ran on virology blog last week, a public relations manager for medicine and dentistry in the marketing and communications department of Queen Mary University e-mailed Dr. Racaniello. He requested, on behalf of the PACE authors, the right to respond. (Queen Mary University is Dr. White’s home base.)
That response arrived Wednesday. My first inclination, when I read it, was that I had already rebutted most of their criticisms in my 14,000-word piece, so it seemed like a waste of time to engage in further extended debate.
Later in the day, however, the public relations manager for medicine and dentistry from the marketing and communications department of Queen Mary University e-mailed Dr. Racaniello again, with an urgent request to publish the response as soon as possible. The PACE investigators, he said, were receiving “a lot of abuse” on social media as a result of my posts, so they wanted to correct the “misinformation” as soon as possible.
Because I needed a day or two to prepare a careful response to the PACE team’s rebuttal, Dr. Racaniello agreed to post them together on Friday morning.
On Thursday, Dr. Racaniello received yet another appeal from the public relations manager for medicine and dentistry from the marketing and communications department of Queen Mary University. Dissatisfied with the Friday publishing timeline, he again urged expedited publication because “David’s blog posts contain a number of inaccuracies, may cause a considerable amount of reputational damage, and he did not seek comment from any of the study authors before the virology blog was published.”
The charge that I did not seek comment from the authors was at odds with the facts, as Dr. Racaniello knew. (It is always possible to argue about accuracy and reputational damage.) Given that much of the argument for expedited posting rested on the public relations manager’s obviously “dysfunctional cognition” that I had unfairly neglected to provide the PACE authors with an opportunity to respond, Dr. Racaniello decided to stick with his pre-planned posting schedule.
Before addressing the PACE investigators’ specific criticisms, I want to apologize sincerely to Dr. White, Dr. Chalder, Dr. Sharpe and their colleagues on behalf of anyone who might have interpreted my account of what went wrong with the PACE trial as license to target the investigators for “abuse.” That was obviously not my intention in examining their work, and I urge anyone engaging in such behavior to stop immediately. No one should have to suffer abuse, whether online or in the analog world, and all victims of abuse deserve enormous sympathy and compassion.
However, in this case, it seems I myself am being accused of having incited a campaign of social media “abuse” and potentially causing “reputational damage” through purportedly inaccurate and misinformed reporting. Because of the seriousness of these accusations, and because such accusations have a way of surfacing in news reports, I feel it is prudent to rebut the PACE authors’ criticisms in far more detail than I otherwise would. (I apologize in advance to the obsessives and others who feel they need to slog through this rebuttal; I urge you to take care not to over-exert yourself!)
In their effort to correct the “misinformation” and “inaccuracies” in my story about the PACE trial, the authors make claims and offer accounts similar to those they have previously presented in published comments and papers. In the past, astonishingly, journal editors, peer reviewers, reporters, public health officials, and the British medical and academic establishments have accepted these sorts of non-responsive responses as adequate explanations for some of the study’s fundamental flaws. I do not.
None of what they have written in their response actually addresses or resolves the core issues that I wrote about last week. They have ignored many of the questions raised in the article. In their response, they have also not mentioned the devastating criticisms of the trial from top researchers from Columbia, Stanford, University College London, and elsewhere. They have not addressed why major reports this year from the Institute of Medicine and the National Institutes of Health have presented portraits of the disease starkly at odds with the PACE framework and approach.
I will ignore their overview of the findings and will focus on the specific criticisms of my work. (I will, however, mention here that my piece discussed why their claims of cost-effectiveness for cognitive behavior therapy and graded exercise therapy are based on inaccurate statements in a paper published in PLoS One in 2012).
13% of patients had already “recovered” on entry into the trial
I did not write that 13% of the participants were “recovered” at baseline, as the PACE authors state. I wrote that they were “recovered” or already at the “recovery” thresholds for two specific indicators, physical function and fatigue, at baseline—a different statement, and an accurate one.
The authors acknowledge, in any event, that 13% of the sample was “within normal range” at baseline. For the 2013 paper in Psychological Medicine, these “normal range” thresholds were re-purposed as two of the four required “recovery” criteria.
And that raises the question: Why, at baseline, was 13% of the sample “within normal range” or “recovered” on any indicator in the first place? Why did entry criteria for disability overlap with outcome scores for being “within the normal range” or “recovered”? The PACE authors have never provided an explanation of this anomaly.
In their response, the authors state that they outlined other criteria that needed to be met for someone to be called “recovered.” This is true; as I wrote last week, participants needed to meet “recovery” criteria on four different indicators to be considered “recovered.” The PACE authors did not provide data for two of the indicators in the 2011 Lancet paper, so in that paper they could not report results for “recovery.”
However, at the press conference presenting the 2011 Lancet paper, Trudie Chalder referred to people who met the overlapping disability/“normal range” thresholds as having gotten “back to normal”—an explicit “recovery” claim. In a Lancet comment published along with the PACE study itself, colleagues of the PACE team referred to these bizarre “normal range” thresholds for physical function and fatigue as a “strict criterion for recovery.” As I documented, the Lancet comment was discussed with the PACE authors before publication; the phrase “strict criterion for recovery” obviously survived that discussion.
Much of the coverage of the 2011 paper reported that patients got “back to normal” or “recovered,” based on Dr. Chalder’s statement and the Lancet comment. The PACE authors made no public attempt to correct the record in the months after this apparently inaccurate news coverage, until they published a letter in the Lancet. In the response to Virology Blog, they say that they were discussing “normal ranges” in the Lancet paper, and not “recovery.” Yet they have not explained why Chalder spoke about participants getting “back to normal” and why their colleagues wrote that the nonsensical “normal range” thresholds represented a “strict criterion of recovery.”
Moreover, they still have not responded to the essential questions: How does this analysis make sense? What are the implications for the findings if 13% are already “within normal range” or “recovered” on one of the two primary outcome measures? How can they be “disabled” enough on the two primary measures to qualify for the study if they’re already “within normal range” or “recovered”? And why did the PACE team use the wrong statistical methods for calculating their “normal ranges” when they knew that method was wrong for the data sources they had?
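For readers who want to see the overlap in concrete terms, here is a minimal sketch, purely for illustration: the cut-offs of 65 and 60 are the published entry and revised “normal range” thresholds for the SF-36 physical function scale discussed in this piece, while the individual scores are invented.

```python
# Illustrative sketch only. The SF-36 physical function cut-offs below are the
# entry (<= 65) and revised "normal range"/recovery (>= 60) thresholds described
# in the article; the participant scores are hypothetical.

ENTRY_MAX = 65      # disabled enough to enter the trial
RECOVERY_MIN = 60   # "within normal range" / "recovered" on this indicator

hypothetical_baseline_scores = [30, 45, 60, 65, 70]

for score in hypothetical_baseline_scores:
    eligible = score <= ENTRY_MAX
    already_at_recovery_threshold = score >= RECOVERY_MIN
    print(f"SF-36 = {score}: eligible={eligible}, "
          f"already at recovery threshold={already_at_recovery_threshold}")

# Any score from 60 to 65 satisfies both conditions at once, which is exactly
# the overlap questioned above.
```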
Bias was caused by a newsletter for patients giving quotes from patients and mentioning UK government guidance on management. A key investigator was on the guideline committee.
The PACE authors apparently believe it is appropriate to disseminate positive testimonials during a trial as long as the therapies or interventions are not mentioned. (James Coyne dissected this unusual position yesterday.)
This is their argument: “It seems very unlikely that this newsletter could have biased participants as any influence on their ratings would affect all treatment arms equally.” Apparently, the PACE investigators believe that if you bias all the arms of your study in a positive direction, you are not introducing bias into your study. It is hard to know what to say about this argument.
Furthermore, the PACE authors argue that the U.K. government’s new treatment guidelines had been widely reported. Therefore, they contend, it didn’t matter that–in the middle of a trial to test the efficacy of cognitive behavior therapy and graded exercise therapy–they had informed participants that the government had already approved cognitive behavior therapy and graded exercise therapy “based on the best available evidence.”
They are wrong. They introduced an uncontrolled, unpredictable co-intervention into their study, and they have no idea what the impact might have been on any of the four arms.
In their response, the PACE authors note that the participants’ newsletter article, in addition to cognitive behavior therapy and graded exercise therapy, included a third intervention, Activity Management. As they correctly note, I did not mention this third intervention in my Virology Blog story. The PACE authors now write: “These three (not two as David Tuller states) therapies were the ones being tested in the trial, so it is hard to see how this might lead to bias in the direction of one or other of these therapies.”
This statement is nonsense. Their third intervention was called “Adaptive Pacing Therapy,” and they developed it specifically for testing in the PACE trial. It is unclear why they now state that their third intervention was Activity Management, or why they think participants would know that Activity Management was synonymous with Adaptive Pacing Therapy. After all, cognitive behavior therapy and graded exercise therapy also involve some form of “activity management.” Precision in language matters in science.
Finally, the investigators say that Jessica Bavington, a co-author of the 2011 paper, had already left the PACE team before she served on the government committee that endorsed the PACE therapies. That might be, but it is irrelevant to the question that I raised in my piece: whether her dual role presented a conflict of interest that should have been disclosed to participants in the newsletter article about the U.K. treatment guidelines. The PACE newsletter article presented the U.K. guideline committee’s work as if it were independent of the PACE trial itself, when it was not.
Bias was caused by changing the two primary outcomes and how they were analyzed
 The PACE authors seem to think it is acceptable to change methods of assessing primary outcome measures during a trial as long as they get committee approval, announce it in the paper, and provide some sort of reasonable-sounding explanation as to why they made the change. They are wrong.
They need as well to justify the changes with references or citations that support their new interpretations of their indicators, and they need to conduct sensitivity analyses to assess the impact of the changes on their findings. Then they need to explain why their preferred findings are more robust than the initial, per-protocol findings. They did not take these steps for any of the many changes they made from their protocol.
The PACE authors mention the change from bimodal to Likert-style scoring on the Chalder Fatigue Scale. They repeat their previous explanation of why they made this change. But they have ignored what I wrote in my story—that the year before PACE was published, its “sister” study, called the FINE trial, had no significant findings on the physical function and fatigue scales at the end of the trial and only found modest benefits in a post-hoc analysis after making the same change in scoring that PACE later made. The FINE study was not mentioned in PACE. The PACE authors have not explained why they left out this significant information about their “sister” study.
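For those unfamiliar with the two scoring schemes, here is a minimal sketch of the difference, assuming the standard 11-item Chalder Fatigue Questionnaire with four graded response options per item (coded 0–3 from least to most fatigued); the responses below are invented for illustration.

```python
# Sketch of bimodal versus Likert scoring for an 11-item fatigue questionnaire.
# The example responses are hypothetical.

def bimodal_score(responses):
    # Each item scores 0 for the first two response options, 1 for the last two
    # (total range 0-11).
    return sum(0 if r <= 1 else 1 for r in responses)

def likert_score(responses):
    # Each item keeps its 0-3 value (total range 0-33).
    return sum(responses)

hypothetical_responses = [2, 1, 2, 3, 1, 2, 2, 1, 3, 2, 1]  # one participant, 11 items

print("bimodal:", bimodal_score(hypothetical_responses))   # coarse 0-11 scale
print("Likert: ", likert_score(hypothetical_responses))    # finer-grained 0-33 scale
```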
Regarding the abandonment of the original method of assessing the physical function scores, this is what they say in their response: “We decided this composite method [their protocol method] would be hard to interpret clinically, and would not answer our main question of comparing effectiveness between treatment arms. We therefore chose to compare mean scores of each outcome measure between treatment arms instead.” They mention that they received committee approval, and that the changes were made before examining the outcome data.
The authors have presented these arguments previously. However, they have not responded to the questions I raised in my story. Why did they not report any sensitivity analyses for the changes in methods of assessing the primary outcome measures? (Sensitivity analyses can assess how changes in assumptions or variables impact outcomes.) What prompted them to reconsider their assessment methods in the middle of the trial? Were they concerned that a mean-based measure, unlike their original protocol measure, did not provide any information about proportions of participants who improved or got worse? Any information about proportions of participants who got better or worse was from post-hoc analyses—one of which was the perplexing “normal range” analysis.
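A sensitivity analysis of the kind referred to here need not be elaborate. The sketch below, using invented follow-up scores, simply re-runs the same calculation under the protocol threshold (85) and the revised threshold (60) so the two results can be compared side by side.

```python
# A minimal sketch of the kind of sensitivity analysis the article says is missing:
# re-run the same outcome calculation under the protocol threshold and the revised
# threshold and compare. The follow-up scores below are invented for illustration.

PROTOCOL_THRESHOLD = 85   # original "recovery" cut-off for SF-36 physical function
REVISED_THRESHOLD = 60    # cut-off used in the 2013 paper

hypothetical_followup_scores = [40, 55, 60, 65, 70, 75, 80, 85, 90, 95]

def proportion_meeting(scores, threshold):
    return sum(s >= threshold for s in scores) / len(scores)

for label, threshold in [("protocol (>= 85)", PROTOCOL_THRESHOLD),
                         ("revised (>= 60)", REVISED_THRESHOLD)]:
    share = proportion_meeting(hypothetical_followup_scores, threshold)
    print(f"{label}: {share:.0%} meet the threshold")

# If the two analyses give very different proportions, readers need to see both
# to judge how much of the reported effect comes from the change itself.
```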
Moreover, this was an unblinded trial, and researchers generally have an idea of outcome trends before examining outcome data. When the PACE authors made the changes, did they already have an idea of outcome trends? They have not answered that question.
Our interpretation was misleading after changing the criteria for determining recovery
The PACE authors relaxed all four of their criteria for “recovery” in their 2013 paper and cited no committees that approved this overall redefinition of a critical concept. Three of these relaxations involved expanded thresholds; the fourth involved splitting one category into two sub-categories—one less restrictive and one more restrictive. The authors gave the full results for the less restrictive category of “recovery.”
The PACE authors now say that they changed the “recovery” thresholds on three of the variables “since we believed that the revised thresholds better reflected recovery.” Again, they apparently think that simply stating their belief that the revisions were better justifies making the changes.
Let’s review for a second. The physical function threshold for “recovery” fell from 85 out of 100 in the protocol, to a score of 60 in the 2013 paper. And that “recovery” score of 60 was lower than the entry score of 65 to qualify for the study. The PACE authors have not explained how the lower score of 60 “better reflected recovery”—especially since the entry score of 65 already represented serious disability. Similar problems afflicted the fatigue scale “recovery” threshold.
The PACE authors also report that “we included those who felt ‘much’ (and ‘very much’) better in their overall health” as one of the criteria for “recovery.” This is true. They are referring to the Clinical Global Impression scale. In the protocol, participants needed to score a 1 (“very much better”) on this scale to be considered “recovered” on that indicator. In the 2013 paper, participants could score a 1 (“very much better”) or a 2 (“much better”). The PACE authors provided no citations to support this expanded interpretation of the scale. They simply explained in the paper that they now thought “much better” reflected the process of recovery and so those who gave a score of 2 should also be considered to have achieved the scale’s “recovery” threshold.
With the fourth criterion—not meeting any of the three case definitions used to define the illness in the study—the PACE authors gave themselves another option. Those who did not meet the study’s main case definition but still met one or both of the other two were now eligible for a new category called “trial recovery.” They did not explain why or when they made this change.
The PACE authors provided no sensitivity analyses to measure the impact of the significant changes in the four separate criteria for “recovery,” as well as in the overall re-definition. And remember, participants at baseline could already have achieved the “recovery” requirements for one or two of the four criteria—the physical function and fatigue scales. And 13% of them already had.
Requests for data under the Freedom of Information Act were rejected as vexatious
The PACE authors have rejected requests for the results per the protocol and many other requests for documents and data as well—at least two for being “vexatious,” as they now report. In my story, I incorrectly stated that requests for per-protocol data were rejected as “vexatious” [see clarification below]. In fact, earlier requests for per-protocol data were rejected for other reasons.
One recent request rejected as “vexatious” involved the PACE investigators’ 2015 paper in The Lancet Psychiatry. In this paper, they published their last “objective” outcome measure (except for wages, which they still have not published)—a measure of fitness called a “step-test.” But they only published a tiny graph on a page with many other tiny graphs, not the actual numbers from which the graph was drawn.
The graph was too small to extract any data, but it appeared that the cognitive behavior therapy and graded exercise therapy groups did worse than the other two. A request for the step-test data from which they created the graph was rejected as “vexatious.”
However, I apologize to the PACE authors that I made it appear they were using the term “vexatious” more extensively in rejecting requests for information than they actually have been. I also apologize for stating incorrectly that requests for per-protocol data specifically had been rejected as “vexatious” [see clarification below].
This is probably a good time to address the PACE authors’ repeated refrain that concerns about patient confidentiality prevent them from releasing raw data and other information from the trial. They state: “The safe-guarding of personal medical data was an undertaking enshrined in the consent procedure and therefore is ethically binding; so we cannot publicly release these data. It is important to remember that simple methods of anonymization does [sic] not always protect the identity of a person, as they may be recognized from personal and medical information.”
This argument against the release of data doesn’t really hold up, given that researchers share data all the time without compromising confidentiality. Really, it’s not that difficult to do!
(It also bears noting that the PACE authors’ dedication to participant protection did not extend to fulfilling their protocol promise to inform participants of their “possible conflicts of interest”—see below.)
Subjective and objective outcomes
The PACE authors included multiple objective measures in their protocol. All of them failed to demonstrate real treatment success or “recovery.” The extremely modest improvements in the exercise therapy arm in the walking test still left them more severely disabled than people with pacemakers, cystic fibrosis patients, and relatively healthy women in their 70s.
The authors now write: “We interpreted these data in the light of their context and validity.”
What the PACE team actually did was to dismiss their own objective data as irrelevant or not actually objective after all. In doing so, they cited various reasons they should have considered before including these measures in the study as “objective” outcomes. They provide one example in their response. They selected employment data as an objective measure of function, and then—as they explain in their response, and have explained previously—they decided afterwards that it wasn’t an objective measure of function after all, for this and that reason.
The PACE authors consider this interpreting data “in light of their context and validity.” To me, it looks like tossing data they don’t like.
What they should do, but have not, is to ask whether the failure of all their objective measures might mean they should start questioning the meaning, reliability and validity of their reported subjective results.
There was a bias caused by many investigators’ involvement with insurance companies and a failure to declare links with insurance companies in information regarding consent
The PACE authors here seriously misstate the concerns I raised in my piece. I did not assert that bias was caused by their involvement with insurance companies. I asserted that they violated an international research ethics document and broke a commitment they made in their protocol to inform participants of “any possible conflicts of interest.” Whether bias actually occurred is not the point.
In their approved protocol, the authors promised to adhere to the Declaration of Helsinki, a foundational human rights document that is explicit on what constitutes legitimate informed consent: Prospective participants must be “adequately informed” of “any possible conflicts of interest.” The PACE authors now suggest this disclosure was unnecessary because 1) the conflicts weren’t really conflicts after all; 2) they disclosed these “non-conflicts” as potential conflicts of interest in the Lancet and other publications; 3) they had a lot of investigators but only three had links with insurers; and 4) they informed participants about who funded the research.
These responses are not serious. They do nothing to explain why the PACE authors broke their own commitment to inform participants about “any possible conflicts of interest.” It is not acceptable to promise to follow a human rights declaration, receive approvals for a study, and then ignore inconvenient provisions. No one is much concerned about PACE investigator #19; people are concerned because the three main PACE investigators have advised disability insurers that cognitive behavior therapy and graded exercise therapy can get claimants off benefits and back to work.
That the PACE authors made the appropriate disclosures to journal editors is irrelevant; it is unclear why they are raising this as a defense. The Declaration of Helsinki is about protecting human research subjects, not about protecting journal editors and journal readers. And providing information to participants about funding sources, however ethical that might be, is not the same as disclosing information about “any possible conflicts of interest.” The PACE authors know this.
Moreover, the PACE authors appear to define “conflict of interest” quite narrowly. Just because the insurers were not involved in the study itself does not mean there is no conflict of interest, and it does not relieve the PACE authors of the promise they made to inform trial participants of these affiliations. No one required them to cite the Declaration of Helsinki in their protocol as part of the process of gaining approvals for their trial.
As it stands, the PACE study appears to have no legitimate informed consent for any of the 641 participants, per the commitments the investigators themselves made in their protocol. This is a serious ethical breach.
I raised other concerns in my story that the authors have not addressed. I will save everyone much grief and not go over them again here.
I want to acknowledge two additional minor errors. In the last section of the piece, I referred to the drug rituximab as an “anti-inflammatory.” While it does have anti-inflammatory effects, rituximab should more properly be referred to as an “immunomodulatory” drug.
Also, in the first section of the story, I wrote that Dr. Chalder and Dr. Sharpe did not return e-mails I sent them last December, seeking interviews. However, during a recent review of e-mails from last December, I found a return e-mail from Dr. Sharpe that I had forgotten about. In the e-mail, Dr. Sharpe declined my request for an interview.
I apologize to Dr. Sharpe for suggesting he hadn’t responded to my e-mail last December.
Clarification: In a decision on a data request, the UK Information Commissioner’s Office noted last year that Queen Mary University of London “has advised that the effect of these requests [for PACE-related material] has been that the team involved in the PACE trial, and in particular the professor involved, now feel harassed and believe that the requests are vexatious in nature.” In other words, whatever the stated reason for denying requests, White and his colleagues regarded them all as “vexatious” by definition. Therefore, the statement that the investigators rejected the requests for data as being “vexatious” is accurate, and I retract my previous apology.
I read the earlier underwhelming response by the PACE authors with dismay, and just a bit of embarrassment at their lack of intellectual application–qualities I note are abundant in this response by David Tuller. This is a thoughtful, considered and honest response. It is a slapdown.
The people concerned have previously described Freedom of Information requests and questions in Parliament as ‘harassment’. Have they provided evidence of this ‘abuse’?
Abuse is always inappropriate, of course. But some of these researchers have been known to label published scientific letters in journals responding to their research as “harassment”, as well as perfectly reasonable Freedom of Information requests.
So please … presume the accused patients are innocent until proven otherwise. This group of researchers has been running exactly the same smear campaign against their own for years now.
I would take any accusation of abuse with a large pinch of salt. As has been pointed out by others, these researchers class any criticism of their work as abuse, however politely put.
“however politely put” would be an exaggeration, but they do seem to demand a high level of respect and deference, particularly considering the quality of their work and its impact upon patients. They have complained about the ‘harassment’ of FOI requests, which seems particularly difficult to justify given that they now claim that they’ve only ever classed two FOI requests as vexatious.
I sense that Vincent and David have the measure of what those involved in this and similar trials regard as abuse, but naturally must caution their readers to protect themselves from accusations of inciting said abuse by these articles. For anyone not familiar with these definitions of harassment, this was obtained and published by Queen’s Award-winning and long established independent charity for children and young people with ME, The Young ME Sufferers Trust – https://www.dropbox.com/s/92m09l9tq55pihh/Behind%20the%20Scenes%20-%20Research%20Collaborative.pdf?dl=0
Re. rituximab – UK involvement in Phase III Norway trials through B-cell research aiming to identify likely responders to rituximab, prerequisite to a UK trial – http://www.ukrituximabtrial.org/
Abuse is to be abhorred in whatever form it comes. I am sorry to hear that the PACE authors have received abuse on social media: such abuse is wrong and helps no one.
Just as anyone who abuses others should be sorry for what they have done, I hope that the PACE authors are sorry that their misleading overclaims of improvement and recovery based on the “normal range” analyses have led to CBT and GET being inappropriately prescribed to patients.
I look forward to reading their apology.
While I’m waiting, may I ask people to sign the already 4,300-strong petition calling for the retraction of these “normal-range”-based claims and for the publication of the per-protocol recovery results.
http://my.meaction.net/petitions/pace-trial-needs-review-now
As has been said, the PACE trial authors’ “normal range” for physical function starts close to the average level of Class II congestive heart failure patients.
Odd that The Lancet and Psychological Medicine didn’t mind.
Thank you, David Tuller, for again addressing PACE and its shortcomings. Reading this was not a slog. You wrote:
‘The PACE authors consider this interpreting data “in light of their context and validity.” To me, it looks like tossing data they don’t like.’
Ostensibly, peer review is designed to protect patients from this type of statistical cherry picking. I’m so relieved journalists such as yourself and scientists are joining the voices of patient advocates in calling out the Investigators and the Lancet for being complicit in passing off the shell game that is PACE as science.
As I begin my third decade with this illness, having not worked and having hardly left my house at all despite attempting exercise therapy several times, it is my greatest hope that I am finally witnessing a paradigm change.
Thank you again for this thorough and precise explanation of why these repeated excuses fail. I confess to being one of the two “vexatious” applicants, asking for the data represented on the tiny fitness graph.
One small point though: if you have already covered it, and I have missed it, I apologize (put it down to age and brainfog!). They operationalised all of the case definitions by choosing to add the entry criteria for the trial, the sf-36 score of 65 or less and the bimodal CFQ score of 6 or more. Is it ever acceptable to change accepted diagnostic criteria in that way?
Just breaking either one of these entry criteria nullified the diagnosis: in reality this fourth criterion added very little indeed to the definition of “recovery”. In their recovery paper, adding the criterion of not meeting the Oxford definition to those of the sf-36 and the CFQ scores removed 2 patients from the CBT group and none at all from the GET group, with the addition of the London definition removing another 1 from each group. This fourth criterion, despite the emphasis placed upon it in the discussion in the House of Lords, surely was trivial.
Brilliant journalism from David Tuller. Interesting to see the games that the psych lobby plays. Thank you to Vincent Racaniello for hosting. Scientific process needs to be followed even when the subject isn’t viewable under a microscope. Thank you so much for taking a stand for science and for patients who have been maligned for so long.
I still fail to understand why self reported scales are allowed to be included as measures of recovery at all, when CBT is purposely being used to alter the thinking that goes behind the reporting!
The working premise of this school of research is that patients’ illness is maintained by erroneous beliefs. The idea of engaging in CBT is to modify these beliefs.
The ‘fatigue scale’ does not measure fatigue: it only measures perceptions of fatigue as conditioned by the patients’ beliefs. Likewise the ‘physical fitness scale’.
It can therefore be anticipated that the form of brainwashing that is CBT, will, if successful, result in patients saying they feel better even if they do not: this is how they have been instructed to think.
Improvements on the patient-reported scales thus only measure the success of the brainwashing, plus any actual improvement. It is impossible to say how much of any change was due to each.
The scales in this particular application are thus a measure of the success of CBT in changing a patient’s belief. They cannot simultaneously be used as a measure of any change in fitness. The researchers have turned the scales into proxy measures of brainwashing only.
Where CBT is being used to modify perceptions, you cannot use those perceptions as a standard for measuring something else. The team might as well have used different patients for the ‘before and after’ questionnaires.
If the (debatable) minor positive results attributed to CBT/GET show anything, they show that the researchers have gone a small way in ‘correcting’ their patients’ ‘faulty beliefs’. Now they must prove to us that, with these ‘corrected beliefs’, the patients actually physically improve.
Why are we even bothering to concern ourselves with anything further?
You’re absolutely right, Steve. Even if they managed to change a few patients’ perceptions, a bit, that’s where the whole evidence stops. After all, they’ve been saying the whole condition consists of beliefs. It’s a wonder they’ve been allowed to get away with extending their conclusions and just ditch the objective data which rebuts their beliefs.
It’s great that David does not allow them to get away with their evasions and contradictions, and calls these out.
Bimodal data is even more limiting than a Likert scale, which is ordinal. Whilst it could be used sensibly for a question such as gender, where only two options are normally provided, a more refined scale would be used if there are more options. The change to a Likert scale provided more options for graded responses than a bimodal scale, but it is still too weak for testing hypotheses, where an objective measurement based on either interval or ratio data would have been more appropriate and added validity. Whilst I have completed an SF-36 in a fibro session, I’d like to see a copy of both this and the Chalder questionnaire, as there could be many concerns raised in the instruments themselves. Given the lack of evidence in a post-positivist paradigm compared to a biophysical one, one would hope that greater efforts at triangulation would be made so as not to reach biased conclusions.
Peer review depends on those academics involved in that journal. Academics are like consultants in that they specialise, and therefore they may have more of a subject specialism, such as psychiatry, rather than an in-depth knowledge base on an application to a specific field. In addition, from a panel viewpoint the authors are respected in their fields. Consequently it is easy to see why their articles might be published in such high-ranking journals, even if we might scrutinise more carefully and take issue with the content due to our own research and empirical understanding. A robust debate between researchers of different philosophical perspectives and patients would help understanding, and no academic should be afraid of justifying and discussing their findings.
One thing which comes to mind is whether there are any longitudinal studies of pwme looking at how their health has changed since they had ME.
I was approached by a journalist, Michael Hanlon, in the autumn of 2012 regarding an interview for a Sunday Times Magazine article about Myalgic Encephalomyelitis. I was told by him that he was “in support of the psychiatrists” but that he would “accurately reflect the view of everyone in this debate”. I maintained a silence, not replying in any way to him.
The eventual article, published in May 2013, was more or less about how Simon Wessely has been, in his view, ‘victimised’. In place of my contribution, Hanlon had managed to find someone willing to speak against Wessely. Hanlon, in his piece, writes about this interviewee: “He was struck off by the British Veterinary Association after inadvertently killing a cat, and was convicted of indecent assault on a 12-year old girl…”
A cynic might imagine that the reason these people get published despite the poor quality of their work, is that the media knows they will generate a lot of readers and publicity.
Thanks Acacia. It’s mindboggling to me that there is any debate about these measures at all. The nature of the questionnaire technique rules it out of being used in anything where perceptions are intended to be altered, except to see if those perceptions are altered.
I think this could be the case for newspapers, as they depend on readership, but articles accepted for publication in journals would not be appraised on that basis. They would be perceived to be of good quality. By and large, the publishing of academic articles attracts very little media coverage, except for what I refer to as unimportant studies, such as whether tea is better with the milk first or second. Engagement is also important on social media, and no doubt Twitter and Facebook have been thriving on lots of articles. The interesting issue for academics is having their work robustly challenged and having to defend it in a non-academic arena, none of which is bad. What I don’t understand is why the authors do not cross-check against different articles where there is clearly a conflict in understanding, as this should be done. No academic should exist in isolation, and I would expect robust discussions where there are different views. I remember picking up a booklet a while ago, I think at an Invest in ME conference, on the published studies from the two main competing perspectives, but I have not seen it updated.
Why did the Lancet fast-track PACE? As Malcolm Hooper noted, it provided no new information about some therapies that were known before 2011 to be “moderately effective” at best.
http://www.meactionuk.org.uk/COMPLAINT-to-Lancet-re-PACE.htm
Prof. Emeritus Malcolm Hooper produced a 442-page report in 2010, detailing the failings of the Medical Research Council and specifically the PACE trial:
“Magical Medicine. How to make a disease disappear. Background to, consideration of, and quotations from the manuals for the Medical Research Council’s PACE Trial of behavioural interventions for Chronic Fatigue Syndrome / Myalgic Encephalomyelitis together with evidence that such interventions are unlikely to be effective and may even be contraindicated.” http://www.investinme.org/Documents/Library/magical-medicine.pdf
Well worth a read!
Of course I disapprove of any abuse towards the investigators of the PACE study.
Let us also take into account that their way of scientific (mis)behaviour has caused many patients to undergo medical, psychological and governmental mistreatment for years and years, not just in the UK but also in other countries worldwide.
I strongly support the demand that they should correct their findings and publish about them, preferably after serious reviews by non-corrupted scientists.
Hooper also made a concise summary of major concerns about the trial in a complaint to the ethics committee, in 2010 pre-Lancet. Tuller’s focus on how the protocol changes to outcomes might have affected analysis is important, but Hooper shows that equally serious protocol changes were made to increase recruitment while the trial was underway. These diluted the intake so patients with somatization disorders who had previously been excluded were then reinvited. The original protocol was based on consecutive referrals to CFS clinics, but recruitment was so poor that White was reduced to asking family doctors to refer anyone with any illness that could be described as “chronic fatigue (or a synonym)”. The trial was invalid from early on.
I’m actually not even remotely surprised that these folks have been tracked down and harassed via social media. After all, the PACE trials have been used as justification for treating ME/CFS/SEID with CBT and GET around the world – and the results have led to increased disability and deaths (primarily suicides). Imagine being told you must either continue a therapy that is making you sicker and more disabled, or your disability benefits will be stripped from you. Just imagine.
ME/CFS/SEID patients are angry. We may be too sick to do much more about it than complain online, but you would be hard-pressed to find a more pissed-off patient body than ours, right now. We’ve been underfunded, subjected to worthless and harmful interventions, denied disability, disbelieved or ridiculed by the public and media, incarcerated in mental institutions (Karina Hansen is still not free)… and now, finally, we’re seeing some small glimmer of hope, some small glimmer of an answer in the Rituximab trials and considerably less official personal accounts of those taking low-dose naltrexone… and we have had enough.
We’re not willing to accept this mistreatment in silence any longer. We have nothing more important to spend our limited energy on than getting well again. That means we must fight for justice and gain attention for our disease in any way we can.
And yes, that means anyone fighting against us can kiss their reputation goodbye. Because by this point, most of US know more about our illness than THEY do, whatever degrees they might hold.
Being quiet has gained us NOTHING. You say abuse is always inappropriate? Who, then, will punish these people for the harm they have done? Answer that question.
No. I don’t feel sorry for them at all. Not even a little.
Wow. That’s…a lot worse than I’d previously believed (and I’d known it was bad to start with).
“Precision in language matters in science.”
Frankly, it seems like this is always at the heart of this group of researchers’ problems. They keep on redefining words to mean something different, and then whenever they’re called on it, it seems like they turn around and say they didn’t mean it that way, they haven’t used that phrasing recently and anyway, they didn’t say it at all! And somehow it’s always our fault for mentioning their imprecise use of language.
It’s exhausting just trying to keep track of the confusion they cause by using words in the same way they use numbers – to mean whatever they want it to mean (at the moment). I end up feeling like Janet trying desperately to hold on to Tam Lin while the faeries change him into all sorts of shapes in order to make her let go!
David, under the heading “Subjective and objective outcomes” there is a sentence that says:
The extremely modest improvements in the exercise therapy arm in the walking test still left them more severely disabled than people with pacemakers, cystic fibrosis patients, and relatively healthy women in their 70s.
Did you mean “unhealthy” women in their 70s?
Quite an excellent article. Thank you so much for all of the very specific information. I hope that soon those involved in the PACE trial have to provide the information requested in some of the FOIAs. It is ridiculous that they are allowed to make the claims that they do, based on such unscientific information. It is like playing a game and the other person keeps changing the rules — you certainly won’t have a correct and fair outcome.
Steve Hawkins:
I love it!!! What you said about failing to “understand why self reported scales are allowed to be included as measures of recovery at all, when CBT is purposely being used to alter the thinking that goes behind the reporting!” The self-reported assessments showed that CBT successfully changed the patients’ beliefs by modifying their perceptions about their illness, to the point that they thought they were better than they were previously. So how can self-reported improvements after the brainwashing be believed? Why didn’t they use/report data from an activity tracker? Well, looking at the six minute walking test, these patients really hadn’t improved much, despite CBT and GET. The patients in the CBT & GET group also received Standard Medical Care. Any minor improvement could have been due to receiving pain meds (allowing them to walk better if they were in pain) or possibly “energy enhancers” (Adderall, Provigil, etc.) from the medical care practitioner.
Thanks J. Rae.
The decision to use these scales only makes sense if the ‘researchers’ are so absolutely certain that they are right that they really were trying to change the answers with CBT, expecting that once they succeeded in changing the perception, the ‘false disease’ would disappear, because it was only a delusion.
It is possible that they really were shocked, when they found the physical results did not improve, as their own false belief system had not allowed for the possibility, right from the outset.
Yes, don’t they use CBT to change what they claim are erroneous (in their unsupported biased view) illness perceptions? And then they claim some objective status for their unscientific, irrational trial based on the perceptions of the participants after the CBT? The whole concept is absurd and irrational and clearly about the researchers’ perceptions of the illness being given an intellectual appearance (to those who don’t actually read it).
In general, people on the internet can be pretty jerky. If the PACE investigators wrote a 3-word blog post just saying “I like Obama” they would receive a ton of abuse. Doesn’t make it OK, but I suspect it’s an internet thing more than an ME/CFS patient thing.
You are right. The researchers were probably shocked that the physical results didn’t improve. They should have then questioned their hypotheses & questioned the ethics of using CBT to change a patient’s beliefs to the point of falsehood; to the point that the patient believed they could do more than they realistically could physically perform.
CBT would set the patient up for great disappointment & possible harm when the patient would set a goal they honestly believed they could accomplish (due to false beliefs created by CBT), but then they would be unable to accomplish the goal. How unethical to do that to someone!
I think it was OK in their world because the study was about ‘fatigue’, not disability, and fatigue in their world is a subjective, woolly concept, more disagreeable than disabling. In M.E. it is something quite different, as we all know, but they have long since stopped bothering with our experience of this disease.